pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0-18.3M) | metadata (stringlengths, 2-1.07B) | id (stringlengths, 5-122) | last_modified (null) | tags (listlengths, 1-1.84k) | sha (null) | created_at (stringlengths, 25) |
---|---|---|---|---|---|---|---|---|
null | null |
{}
|
galvangjx/llama3-8B-data-extraction
| null |
[
"region:us"
] | null |
2024-04-23T15:36:09+00:00
|
|
null | null |
{}
|
barrybadpak/hmpump
| null |
[
"region:us"
] | null |
2024-04-23T15:36:42+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
DinoTheLewis/EVEE-Instruct-Interior-10.8B
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T15:37:07+00:00
|
null | null |
{}
|
mizoru/whisper-small-ru-ORD_0.7_0.3-peft
| null |
[
"region:us"
] | null |
2024-04-23T15:37:17+00:00
|
|
null | null |
{}
|
ke-lly/_
| null |
[
"safetensors",
"region:us"
] | null |
2024-04-23T15:37:59+00:00
|
|
text-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Jozaita/fine_tune_test_2
| null |
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T15:38:58+00:00
|
null |
transformers
|
{"license": "apache-2.0"}
|
the-french-artist/llama-2-7b-bnb-4bit_10k_hash_forward
| null |
[
"transformers",
"gguf",
"llama",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T15:39:03+00:00
|
|
text-generation
|
transformers
|
This is a Mistral-7B model fine-tuned for the AutoTx-CrewAI version.
|
{}
|
Superoisesuki/AutoTx_Mistral_7B_CrewAI
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T15:39:19+00:00
|
null | null |
{"license": "cc-by-nc-4.0", "title": "SWAPON", "emoji": "\ud83c\udfe2", "colorFrom": "red", "colorTo": "pink", "sdk": "gradio", "sdk_version": "4.8.0", "app_file": "app.py", "pinned": false}
|
Harsh-7300/SWAPON
| null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null |
2024-04-23T15:39:28+00:00
|
|
null |
transformers
|
# FinLang/finance-chat-model-investopedia
<!-- Provide a quick summary of what the model is/does. -->
This Large Language Model (LLM) is an instruct fine-tuned version of mistralai/Mistral-7B-v0.1, trained on our open-sourced finance dataset https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset and developed for finance applications by the FinLang team.
This project is for research purposes only. Third-party datasets may be subject to additional terms and conditions under their associated licenses.
# Plans
The research paper will be published soon.
We are working on a v2 of the model, increasing the training corpus of financial data and using improved training techniques.
## How to Get Started with the Model
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = 'FinLang/investopedia_chat_model'
model = AutoPeftModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

example = [{'content': 'You are a financial expert and you can answer any questions related to finance. You will be given a context and a question. Understand the given context and\n try to answer. Users will ask you questions in English and you will generate answer based on the provided CONTEXT.\n CONTEXT:\n D. in Forced Migration from the University of the Witwatersrand (Wits) in Johannesburg, South Africa; A postgraduate diploma in Folklore & Cultural Studies at Indira Gandhi National Open University (IGNOU) in New Delhi, India; A Masters of International Affairs at Columbia University; A BA from Barnard College at Columbia University\n', 'role': 'system'}, {'content': ' In which universities did the individual obtain their academic qualifications?\n', 'role': 'user'}, {'content': ' University of the Witwatersrand (Wits) in Johannesburg, South Africa; Indira Gandhi National Open University (IGNOU) in New Delhi, India; Columbia University; Barnard College at Columbia University.', 'role': 'assistant'}]
prompt = pipe.tokenizer.apply_chat_template(example[:2], tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id)
print(f"Query:\n{example[1]['content']}")
print(f"Context:\n{example[0]['content']}")
print(f"Original Answer:\n{example[2]['content']}")
print(f"Generated Answer:\n{outputs[0]['generated_text'][len(prompt):].strip()}")
```
## Training Details
Peft config:
```python
{
    "technique": "QLORA",
    "rank": 256,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    "lora_alpha": 128,
    "lora_dropout": 0,
    "bias": "none",
}
```
Hyperparameters:
```python
{
    "epochs": 3,
    "evaluation_strategy": "epoch",
    "gradient_checkpointing": True,
    "max_grad_norm": 0.3,
    "optimizer": "adamw_torch_fused",
    "learning_rate": 2e-4,
    "lr_scheduler_type": "constant",
    "warmup_ratio": 0.03,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "gradient_accumulation_steps": 4
}
```
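These settings map directly onto `peft.LoraConfig` and `transformers.TrainingArguments`. The sketch below shows that correspondence; it is a minimal illustration assuming a standard transformers/TRL training loop (the `output_dir` is a placeholder), not the team's actual training script.
```python
from peft import LoraConfig
from transformers import TrainingArguments

# QLoRA adapter settings, mirroring the Peft config above
peft_config = LoraConfig(
    r=256,
    lora_alpha=128,
    lora_dropout=0.0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

# Trainer settings, mirroring the hyperparameters above
training_args = TrainingArguments(
    output_dir="finetune-out",  # placeholder
    num_train_epochs=3,
    evaluation_strategy="epoch",
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    optim="adamw_torch_fused",
    learning_rate=2e-4,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
)
```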
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
We evaluated the model on the test set (22.9k records) of https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset. Evaluation was done using a proprietary LLM as judge on four criteria (Correctness, Faithfulness, Clarity, Completeness), each on a scale of 1 to 5 (1 being worst, 5 being best). The model got an average score of 4.58 out of 5.
Human evaluation was performed on a random sample of 10k records, and we found approximately 80% alignment between the human and proprietary-LLM judgments.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We look forward to engaging with the community on ways to make the model reliably respect guardrails, allowing for deployment in environments requiring moderated outputs.
## License
Since non-commercial datasets are used for fine-tuning, we release this model as cc-by-nc-4.0.
## Citation [coming soon]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
|
{"license": "cc-by-nc-4.0", "library_name": "transformers"}
|
FinLang/finance-chat-model-investopedia
| null |
[
"transformers",
"safetensors",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T15:39:31+00:00
|
object-detection
|
ultralytics
|
<div align="center">
<img width="640" alt="chanelcolgate/chamdiemgianhang-vsk-v4" src="https://huggingface.co/chanelcolgate/chamdiemgianhang-vsk-v4/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['BOM_GEN', 'BOM_JUN', 'BOM_KID', 'BOM_SAC', 'BOM_VTG', 'BOM_YTV', 'HOP_FEJ', 'HOP_FRE', 'HOP_JUN', 'HOP_POC', 'HOP_VTG', 'HOP_YTV', 'LOC_JUN', 'LOC_KID', 'LOC_YTV', 'LOO_DAU', 'LOO_KID', 'LOO_MAM', 'LOO_YTV', 'POS_LON', 'POS_NHO', 'POS_THA', 'TUI_GEN', 'TUI_JUN', 'TUI_KID', 'TUI_SAC', 'TUI_THV', 'TUI_THX', 'TUI_VTG', 'TUI_YTV']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.1.0 ultralytics==8.0.239
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('chanelcolgate/chamdiemgianhang-vsk-v4')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
|
{"library_name": "ultralytics", "tags": ["ultralyticsplus", "yolov8", "ultralytics", "yolo", "vision", "object-detection", "pytorch"], "datasets": ["chanelcolgate/yenthienviet"], "library_version": "8.0.239", "inference": false, "model-index": [{"name": "chanelcolgate/chamdiemgianhang-vsk-v4", "results": [{"task": {"type": "object-detection"}, "dataset": {"name": "yenthienviet", "type": "chanelcolgate/yenthienviet", "split": "validation"}, "metrics": [{"type": "precision", "value": 0.99425, "name": "[email protected](box)"}]}]}]}
|
chanelcolgate/chamdiemgianhang-vsk-v4
| null |
[
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:chanelcolgate/yenthienviet",
"model-index",
"has_space",
"region:us"
] | null |
2024-04-23T15:39:37+00:00
|
text-classification
|
transformers
|
{}
|
qcma22/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T15:40:39+00:00
|
|
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mymt5-small-test
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 16.9555
- Rouge1: 5.3063
- Rouge2: 0.3834
- Rougel: 4.7129
- Rougelsum: 4.769
## Model description
More information needed
## Intended uses & limitations
More information needed
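In the meantime, here is a minimal usage sketch (untested; the repo id `thabat/Mymt5-small-test` is taken from this card's metadata):
```python
from transformers import pipeline

# Summarization pipeline over the fine-tuned mT5 checkpoint
summarizer = pipeline("summarization", model="thabat/Mymt5-small-test")

text = "Your long input document goes here."
print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```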
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 32.0529 | 1.0 | 10 | 22.9068 | 4.148 | 0.2063 | 4.0534 | 4.0281 |
| 26.8483 | 2.0 | 20 | 20.5579 | 4.2091 | 0.2815 | 4.2287 | 4.2414 |
| 26.3936 | 3.0 | 30 | 19.4139 | 4.199 | 0.2051 | 4.1823 | 4.1637 |
| 24.8239 | 4.0 | 40 | 18.3165 | 4.2308 | 0.2812 | 4.2404 | 4.2749 |
| 24.0505 | 5.0 | 50 | 17.3909 | 4.9556 | 0.486 | 4.6229 | 4.6138 |
| 23.8294 | 6.0 | 60 | 17.0988 | 5.4206 | 0.5003 | 4.7981 | 4.7944 |
| 22.7513 | 7.0 | 70 | 16.9862 | 5.3119 | 0.3966 | 4.814 | 4.7785 |
| 22.836 | 8.0 | 80 | 16.9555 | 5.393 | 0.3829 | 4.7334 | 4.8031 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/mt5-small", "model-index": [{"name": "Mymt5-small-test", "results": []}]}
|
thabat/Mymt5-small-test
| null |
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T15:40:43+00:00
|
text2text-generation
|
transformers
|
{}
|
DDDbnn/output
| null |
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T15:41:03+00:00
|
|
null | null |
{}
|
Harthiya/my_awesome_qa_model
| null |
[
"region:us"
] | null |
2024-04-23T15:41:42+00:00
|
|
null | null |
{}
|
NassimB/llama-7b-hf-platypus-lamini-vxxiii-chat-real
| null |
[
"safetensors",
"region:us"
] | null |
2024-04-23T15:42:40+00:00
|
|
text-generation
|
transformers
|
# KnutJaegersberg/Llama3-Deita-8b AWQ
- Model creator: [KnutJaegersberg](https://huggingface.co/KnutJaegersberg)
- Original model: [Llama3-Deita-8b](https://huggingface.co/KnutJaegersberg/Llama3-Deita-8b)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Llama3-Deita-8b-AWQ"
system_message = "You are Llama3-Deita-8b, incarnated as a powerful AI. You were created by KnutJaegersberg."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
### System:
{system_message}
### User:
{prompt}
### Assistant:
"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
|
{"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
|
solidrust/Llama3-Deita-8b-AWQ
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"conversational",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T15:42:44+00:00
|
null | null |
{}
|
martineden/Phi-3-mini-4k-instruct-GGUF
| null |
[
"gguf",
"region:us"
] | null |
2024-04-23T15:43:03+00:00
|
|
text-classification
|
transformers
|
{}
|
EllipticCurve/DistilBERT-sentiment-analysis
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T15:43:19+00:00
|
|
null | null |
{}
|
HugoRdeT/donut-base-sroie
| null |
[
"region:us"
] | null |
2024-04-23T15:43:39+00:00
|
|
null |
transformers
|
# Uploaded model
- **Developed by:** K00B404
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
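A minimal loading sketch (untested; it assumes, from the repo name, that this repo hosts a PEFT LoRA adapter on top of the 4-bit base model):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Adapter repo (assumed to contain a PEFT adapter_config.json)
adapter_id = "K00B404/llama3_8B_python_tuned_90steps_lora"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

inputs = tokenizer("def fizz_buzz(n):", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```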
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
|
K00B404/llama3_8B_python_tuned_90steps_lora
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T15:44:00+00:00
|
text-to-image
|
diffusers
|
# Daubrez_Painterly
<Gallery />
## Model description
Daubrez Painterly Style
Trained on 32 recent images from renowned "AI thief" (his words, not mine) Henry Daubrez, with no permission asked. This LoRA produces excellent painterly images that trend toward surreal and abstract with beautiful textures and expressive swirls. Images were captioned via GPTV and edited for best practices. Training was done using the prodigy optimizer for 40 epochs with a batch size of 4 and a gradient accumulation of 4. Seems to work well with a variety of models and schedulers. Make sure to follow @henrydaubrez on X to see more of his excellent original work.
## Trigger words
You should use `painterly style`, `surreal`, or `abstract` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/BlaireSilver13/Daubrez_Painterly/tree/main) them in the Files & versions tab.
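A minimal diffusers usage sketch (untested; assumes `load_lora_weights` can auto-discover the single LoRA safetensors file in this repo, with the SDXL base model taken from the card metadata):
```python
import torch
from diffusers import DiffusionPipeline

# SDXL base model named in this card's metadata
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("BlaireSilver13/Daubrez_Painterly")

prompt = ("painterly style, surreal, a lighthouse in a storm, rich colors, "
          "detailed textures, micro detailed brush strokes, enchanting")
image = pipe(prompt, negative_prompt="low quality, noise, dithering, ugly, disfigured").images[0]
image.save("painterly.png")
```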
|
{"license": "artistic-2.0", "tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-_00071_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-_00068_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-_00062_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-_00040_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-_00024_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-_00013_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-2-_00039_.png"}}, {"text": "painterly style, surreal, SUBJECT HERE, rich colors, detailed textures, micro detailed brush strokes, enchanting", "parameters": {"negative_prompt": "low quality, noise, dithering, ugly, disfigured"}, "output": {"url": "images/painterly-2-_00044_.png"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "painterly style, surreal, abstract"}
|
BlaireSilver13/Daubrez_Painterly
| null |
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:artistic-2.0",
"region:us"
] | null |
2024-04-23T15:44:47+00:00
|
null | null |
{"license": "agpl-3.0"}
|
nihil117/Grimoire
| null |
[
"license:agpl-3.0",
"region:us"
] | null |
2024-04-23T15:47:13+00:00
|
|
null | null |
{}
|
E27085921/git-base-ans
| null |
[
"region:us"
] | null |
2024-04-23T15:47:16+00:00
|
|
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** alquimista888
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
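A minimal inference sketch (untested; assumes the checkpoint loads as a standard causal LM with the chat template inherited from the TinyLlama chat base):
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="alquimista888/unsloth_modelTrue")

messages = [{"role": "user", "content": "Summarize what LoRA fine-tuning does."}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt, max_new_tokens=64)[0]["generated_text"])
```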
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/tinyllama-chat-bnb-4bit"}
|
alquimista888/unsloth_modelTrue
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T15:47:25+00:00
|
null | null |
{}
|
Babareys/BibaV1
| null |
[
"region:us"
] | null |
2024-04-23T15:47:34+00:00
|
|
null | null |
{}
|
hermes42/Meta-Llama-3-8B-Instruct-GGUF
| null |
[
"gguf",
"region:us"
] | null |
2024-04-23T15:49:48+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
nem012/gemma2b-r32
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T15:49:48+00:00
|
null | null |
{}
|
rock1996/fytext
| null |
[
"region:us"
] | null |
2024-04-23T15:49:50+00:00
|
|
text-to-image
|
diffusers
|
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4
<Gallery />
## Model description
These are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of
generating images for the [Critical Dream](https://github.com/cosmicBboy/critical-dream)
project.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: stabilityai/sdxl-vae.
## Trigger words
You should use `a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4/tree/main) them in the Files & versions tab.
## Tracker run link
https://wandb.ai/nielsbantilan/dreambooth-lora-sd-xl/runs/tp1b5xxm
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
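Until the official snippet lands, here is a minimal, untested sketch of the usual diffusers LoRA flow for these weights (base model and VAE follow the training description above; assumes `load_lora_weights` can locate the LoRA file in this repo):
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Base model and VAE named in the training description above
vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4"
)

prompt = ("a picture of [dm-matt-mercer], a dungeon master. background is a forest. "
          "fantasy art style, high quality, highly detailed, sharp focus")
pipe(prompt).images[0].save("dm.png")
```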
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "prompt": "a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\"", "widget": [{"text": "a picture of [dm-matt-mercer]", "output": {"url": "image_0.png"}}, {"text": "a picture of [dm-matt-mercer]", "output": {"url": "image_1.png"}}, {"text": "a picture of a dungeon master.", "output": {"url": "image_2.png"}}, {"text": "a picture of a dungeon master.", "output": {"url": "image_3.png"}}, {"text": "a picture of [critrole-fjord], a male half-orc warlock. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_4.png"}}, {"text": "a picture of [critrole-fjord], a male half-orc warlock. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_5.png"}}, {"text": "a picture of a male half-orc warlock", "output": {"url": "image_6.png"}}, {"text": "a picture of a male half-orc warlock", "output": {"url": "image_7.png"}}, {"text": "a picture of [critrole-beau], a female human monk. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_8.png"}}, {"text": "a picture of [critrole-beau], a female human monk. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_9.png"}}, {"text": "a picture of a female human monk", "output": {"url": "image_10.png"}}, {"text": "a picture of a female human monk", "output": {"url": "image_11.png"}}, {"text": "a picture of [critrole-caduceus], a male firbolg cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_12.png"}}, {"text": "a picture of [critrole-caduceus], a male firbolg cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_13.png"}}, {"text": "a picture of a male firbolg cleric", "output": {"url": "image_14.png"}}, {"text": "a picture of a male firbolg cleric", "output": {"url": "image_15.png"}}, {"text": "a picture of [critrole-caleb], a male human wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_16.png"}}, {"text": "a picture of [critrole-caleb], a male human wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_17.png"}}, {"text": "a picture of a male human wizard", "output": {"url": "image_18.png"}}, {"text": "a picture of a male human wizard", "output": {"url": "image_19.png"}}, {"text": "a picture of [critrole-jester], a female tiefling cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_20.png"}}, {"text": "a picture of [critrole-jester], a female tiefling cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_21.png"}}, {"text": "a picture of a female tiefling cleric", "output": {"url": "image_22.png"}}, {"text": "a picture of a female tiefling cleric", "output": {"url": "image_23.png"}}, {"text": "a picture of [critrole-nott], a female goblin rogue. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_24.png"}}, {"text": "a picture of [critrole-nott], a female goblin rogue. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_25.png"}}, {"text": "a picture of a female goblin rogue", "output": {"url": "image_26.png"}}, {"text": "a picture of a female goblin rogue", "output": {"url": "image_27.png"}}, {"text": "a picture of [critrole-veth], a female halfling rogue/wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_28.png"}}, {"text": "a picture of [critrole-veth], a female halfling rogue/wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_29.png"}}, {"text": "a picture of a female halfling rogue/wizard", "output": {"url": "image_30.png"}}, {"text": "a picture of a female halfling rogue/wizard", "output": {"url": "image_31.png"}}, {"text": "a picture of [critrole-yasha], a female aasimar barbarian. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_32.png"}}, {"text": "a picture of [critrole-yasha], a female aasimar barbarian. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_33.png"}}, {"text": "a picture of a female aasimar barbarian", "output": {"url": "image_34.png"}}, {"text": "a picture of a female aasimar barbarian", "output": {"url": "image_35.png"}}, {"text": "a picture of [critrole-mollymauk], a male tiefling blood hunter. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_36.png"}}, {"text": "a picture of [critrole-mollymauk], a male tiefling blood hunter. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_37.png"}}, {"text": "a picture of a male tiefling blood hunter", "output": {"url": "image_38.png"}}, {"text": "a picture of a male tiefling blood hunter", "output": {"url": "image_39.png"}}, {"text": "a picture of [critrole-essek], a male drow wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_40.png"}}, {"text": "a picture of [critrole-essek], a male drow wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_41.png"}}, {"text": "a picture of a male drow wizard", "output": {"url": "image_42.png"}}, {"text": "a picture of a male drow wizard", "output": {"url": "image_43.png"}}]}
|
cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.4
| null |
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null |
2024-04-23T15:50:01+00:00
|
null | null |
# Llama-3-8B-16K-GGUF
- Original model: [Llama-3-8B-16K](https://huggingface.co/mattshumer/Llama-3-8B-16K)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Llama-3-8B-16K](https://huggingface.co/mattshumer/Llama-3-8B-16K).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui). The most widely used web UI, with numerous features, powerful extensions, and GPU acceleration support.
* [Ollama](https://github.com/jmorganca/ollama). A lightweight and extensible framework for building and running language models locally, featuring a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp). A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io). A free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/). An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/). An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle). A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers). A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT). An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw (see the arithmetic sketch after this list).
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
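To make these bpw figures concrete, here is the back-of-the-envelope arithmetic for GGML_TYPE_Q4_K, a sketch assuming the standard llama.cpp layout (one fp16 scale and one fp16 min per super-block, on top of the 6-bit per-block scales and mins):
```python
# Q4_K super-block: 8 blocks x 32 weights = 256 weights
weights = 8 * 32
quant_bits = weights * 4         # 1024 bits of 4-bit quantized weights
block_meta = 8 * (6 + 6)         # 96 bits of 6-bit per-block scales and mins
superblock_meta = 2 * 16         # 32 bits for the fp16 super-block scale and min
print((quant_bits + block_meta + superblock_meta) / weights)  # 4.5 bpw
```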
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Llama-3-8B-16K-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Llama-3-8B-16K-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Llama-3-8B-16K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Llama-3-8B-16K-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Llama-3-8B-16K
This is an extended (16K) context version of LLaMA 3 8B (base, not instruct). Trained for five hours on 8x A6000 GPUs, using the `Yukang/LongAlpaca-16k-length` dataset.
`rope_theta` was set to `1000000.0`. Trained with Axolotl.
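To confirm the context extension programmatically, the RoPE settings can be read from the checkpoint's config (a sketch; note it targets the original safetensors repo, not this GGUF repo):
```python
from transformers import AutoConfig

# Inspect the source checkpoint's RoPE base and context window
cfg = AutoConfig.from_pretrained("mattshumer/Llama-3-8B-16K")
print(cfg.rope_theta, cfg.max_position_embeddings)
```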
<!-- original-model-card end -->
|
{"tags": ["GGUF"], "datasets": ["Yukang/LongAlpaca-16k-length"], "quantized_by": "andrijdavid"}
|
LiteLLMs/Llama-3-8B-16K-GGUF
| null |
[
"gguf",
"GGUF",
"dataset:Yukang/LongAlpaca-16k-length",
"region:us"
] | null |
2024-04-23T15:50:17+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
DinoTheLewis/Llama-2-koen-Interior-SFT-13B
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T15:51:04+00:00
|
null | null |
{}
|
Phuree/git-base-scamper4
| null |
[
"region:us"
] | null |
2024-04-23T15:51:35+00:00
|
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9455
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
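These settings correspond roughly to the following 🤗 `TrainingArguments`; this is a reconstruction from the list above, not the original training script.
```python
# Sketch reconstructing the hyperparameters listed above; the actual training
# script is not part of this card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,  # Adam betas/epsilon above are the optimizer defaults
)
```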
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 9 | 1.2550 | 0.0 | 0.0 | 0.0 | 0.8424 |
| No log | 2.0 | 18 | 0.9704 | 0.0 | 0.0 | 0.0 | 0.8426 |
| No log | 3.0 | 27 | 0.9455 | 0.0 | 0.0 | 0.0 | 0.8426 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["generator"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "generator", "type": "generator", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "precision", "value": 0.0, "name": "Precision"}, {"type": "recall", "value": 0.0, "name": "Recall"}, {"type": "f1", "value": 0.0, "name": "F1"}, {"type": "accuracy", "value": 0.8426458239131839, "name": "Accuracy"}]}]}]}
|
Shresht-Venkat/bert-finetuned-ner
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:generator",
"base_model:bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T15:51:50+00:00
|
text-generation
|
transformers
|
{}
|
andrealexroom/LexLLMv0.0.0.x.10.25
| null |
[
"transformers",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T15:52:21+00:00
|
|
text-generation
|
transformers
|
{}
|
Sidsky08/Llama-2-7b-chat-finetune
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T15:53:41+00:00
|
|
null | null |
{}
|
TPeng8/minimalism
| null |
[
"region:us"
] | null |
2024-04-23T15:54:28+00:00
|
|
null | null |
{}
|
zhuqiang/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
| null |
[
"region:us"
] | null |
2024-04-23T15:54:37+00:00
|
|
null | null |
# Dataset Card for [Needs More Information]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Web interface of the Pangloss Collection, which hosts the data sets](https://pangloss.cnrs.fr/)
- **Repository:** [GitHub repository of the Pangloss Collection, which hosts the data sets](https://github.com/CNRS-LACITO/Pangloss/)
- **Paper:** [A paper about the Pangloss Collection, including a presentation of the Document Type Definition](https://halshs.archives-ouvertes.fr/halshs-01003734)
[A paper in French about the deposit in Zenodo](https://halshs.archives-ouvertes.fr/halshs-03475436)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Benjamin Galliot](mailto:[email protected])
### Dataset Summary
Two audio corpora of minority languages of China (Japhug and Na), with transcriptions, proposed as reference data sets for experiments in Natural Language Processing. The data, collected and transcribed in the course of immersion fieldwork, amount to a total of about 1,900 minutes in Japhug and 200 minutes in Na. By making them available in an easily accessible and usable form, we hope to facilitate the development and deployment of state-of-the-art NLP tools for the full range of human languages. There is an associated tool for assembling datasets from the Pangloss Collection (an open archive) in a way that ensures full reproducibility of experiments conducted on these data.
The Document Type Definition for the XML files is available here:
http://cocoon.huma-num.fr/schemas/Archive.dtd
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Japhug (ISO 639-3 code: jya, Glottolog language code: japh1234) and Yongning Na (ISO 639-3 code: nru, Glottolog language code: yong1288) are two minority languages of China. The documents in the dataset have a transcription in the endangered language. Some of the documents have translations into French, English, and Chinese.
## Dataset Structure
### Data Instances
A typical data row includes the path, audio, sentence, document type and several translations (depending on the sub-corpus).
```
{
    "path": "cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav",
    "audio": "{'path': 'na/cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav', 'array': array([0.00018311, 0.00015259, 0.00021362, ..., 0.00030518, 0.00030518, 0.00054932], dtype=float32), 'sampling_rate': 16000}",
    "sentence": "ʈʂʰɯ˧ | ɖɤ˧mi˧-ɬi˧pi˩ ɲi˩",
    "doctype": "WORDLIST",
    "translation:zh": "狐狸的耳朵",
    "translation:fr": "oreilles de renard",
    "translation:en": "fox's ears",
}
```
### Data Fields
- `path`: the path to the audio file;
- `audio`: a dictionary containing the path to the audio file, the audio array and the sampling rate;
- `sentence`: the sentence the native speaker pronounced;
- `doctype`: the document type (a text or a word list);
- `translation:XX`: the translation of the sentence into language XX.
### Data Splits
The train, test and validation splits have all been reviewed and were split randomly (ratio 8:1:1) at sentence level (after extraction from the various files).
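For example, a config can be loaded directly with 🤗 Datasets; the config names are taken from this card's metadata, and the field handling is a sketch.
```python
# Sketch: loading the Yongning Na config and reading one row. The audio column
# decodes to a dict with 'array' and 'sampling_rate', as in the example above.
from datasets import load_dataset

ds = load_dataset("Lacito/pangloss", "yong1288")  # or "japh1234" for Japhug
row = ds["train"][0]
print(row["sentence"], row["doctype"])
audio = row["audio"]  # {'path': ..., 'array': ..., 'sampling_rate': 16000}
```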
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The dataset was collected during immersion fieldwork for language documentation. It contributes to the documentation and study of the world's languages by providing documents of connected, spontaneous speech recorded in their cultural context and transcribed in consultation with native speakers. The impacts concern research and society at large: a guiding principle of the Pangloss Collection, which hosts the data sets, is that a close association between documentation and research is highly profitable to both. A range of possible uses exists for the scientific and speaker communities and for the general public.
### Discussion of Biases
The corpora are single-speaker and hence clearly do not reflect the sociolinguistic and dialectal diversity of the languages. No claim is made that the language variety described constitutes a 'standard'.
### Other Known Limitations
The translations are entirely hand-made by experts working on these languages; the amount and type of translations available varies from document to document, as not all documents have translations and not all translated documents have the same translation languages (Chinese, French, English...).
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
|
{"language": ["jya", "nru"], "license": "cc-by-nc-sa-4.0", "pretty_name": "Pangloss", "annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language_bcp47": ["x-japh1234", "x-yong1288"], "language_details": "jya consists of japh1234 (Glottolog code); nru consists of yong1288 (Glottolog code)", "multilinguality": ["multilingual", "translation"], "size_categories": {"yong1288": ["10K<n<100K"], "japh1234": ["10K<n<100K"]}, "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": ["speech-recognition"], "configs": [{"config_name": "yong1288", "data_files": [{"split": "train", "path": "yong1288/train.csv"}, {"split": "test", "path": "yong1288/test.csv"}, {"split": "validation", "path": "yong1288/validation.csv"}]}, {"config_name": "japh1234", "data_files": [{"split": "train", "path": "japh1234/train.csv"}, {"split": "test", "path": "japh1234/test.csv"}, {"split": "validation", "path": "japh1234/validation.csv"}]}]}
|
Lacito/pangloss
| null |
[
"jya",
"nru",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null |
2024-04-23T15:55:14+00:00
|
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
{"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B"}
|
AlienKevin/Meta-Llama-3-8B-tagllm-lang-10
| null |
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null |
2024-04-23T15:56:24+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
nem012/gemma2b-r16
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T15:56:49+00:00
|
null | null |
Help with writing books
(summarize / style / rhythm / stats / redundancy / sentence autocompletion / etc.)
version 0.p
|
{"license": "apache-2.0"}
|
blaackjack/Coach_Scrib
| null |
[
"license:apache-2.0",
"region:us"
] | null |
2024-04-23T15:58:04+00:00
|
token-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
xilpam/v_1_test_3_layoutlm-funsd-tf
| null |
[
"transformers",
"safetensors",
"layoutlm",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T15:58:10+00:00
|
text-to-image
|
diffusers
|
This is a 1-step inference Hyper-SD SDXL model for use with [FastSD CPU](https://github.com/rupeshs/fastsdcpu).
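A sketch of single-step inference with diffusers follows; the pipeline class is taken from this repo's tags, while the scheduler defaults, guidance setting, and prompt are illustrative assumptions typical for Hyper-SD 1-step checkpoints.
```python
# Sketch: 1-step SDXL inference with diffusers. Guidance is usually disabled
# (CFG off) for 1-step distilled models; adjust if results look off.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "rupeshs/hyper-sd-sdxl-1-step", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of a cat wearing a spacesuit",
    num_inference_steps=1,  # single-step inference is the point of this checkpoint
    guidance_scale=0.0,     # CFG typically disabled for 1-step models
).images[0]
image.save("out.png")
```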
|
{"license": "openrail++"}
|
rupeshs/hyper-sd-sdxl-1-step
| null |
[
"diffusers",
"safetensors",
"license:openrail++",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null |
2024-04-23T15:58:40+00:00
|
null | null |
{}
|
NikolayKozloff/Phi-3-mini-4k-instruct-Q8_0-GGUF
| null |
[
"gguf",
"region:us"
] | null |
2024-04-23T15:59:18+00:00
|
|
text-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Reem333/LongFormer-Paper-Citaion-Classifier
| null |
[
"transformers",
"safetensors",
"longformer",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2024-04-23T16:01:20+00:00
|
null |
transformers
|
# Uploaded model
- **Developed by:** donlinglok
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
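One possible way to load it for inference with Unsloth is sketched below; the API names match recent Unsloth releases, and the sequence length and prompt are illustrative.
```python
# Sketch: loading this 4-bit model with Unsloth for inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="donlinglok/llama-3-8b-jy-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Hello, ", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```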
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
|
donlinglok/llama-3-8b-jy-bnb-4bit
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:01:41+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
kawagoshi-llm-team/llama2_multinode_test
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T16:02:08+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-base-uncased-finetuned-ner", "results": []}]}
|
Khetnhio/bert-base-uncased-finetuned-ner
| null |
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:02:13+00:00
|
null | null |
{}
|
karmat314/Llama-2-7b-story-finetune
| null |
[
"region:us"
] | null |
2024-04-23T16:03:38+00:00
|
|
null | null |

> [!IMPORTANT]
> Outdated GGUFs; check [here](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF) for quants made with a newer version of llama.cpp
Some GGUF quants of [xxx777xxxASD/ChaoticSoliloquy-4x8B](https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B)
|
{"language": ["en"], "license": "llama3", "tags": ["moe"]}
|
xxx777xxxASD/ChaoticSoliloquy-4x8B-GGUF
| null |
[
"gguf",
"moe",
"en",
"license:llama3",
"region:us"
] | null |
2024-04-23T16:03:50+00:00
|
text-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
liserman/parlbert_climate_change_blame_v02
| null |
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:04:30+00:00
|
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Markhit/CodeLlama3-8B-Python
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
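As a hedged example, one of the quants listed below can be fetched and run directly with llama-cpp-python; the filename matches the Q4_K_M row in the table, while the context size and prompt are illustrative.
```python
# Sketch: download one quant from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/CodeLlama3-8B-Python-GGUF",
    filename="CodeLlama3-8B-Python.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=8192)
print(llm("def fibonacci(n):", max_tokens=128)["choices"][0]["text"])
```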
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["code"], "datasets": ["ajibawa-2023/Python-Code-23k-ShareGPT"], "base_model": "Markhit/CodeLlama3-8B-Python", "license_link": "LICENSE", "quantized_by": "mradermacher"}
|
mradermacher/CodeLlama3-8B-Python-GGUF
| null |
[
"transformers",
"gguf",
"code",
"en",
"dataset:ajibawa-2023/Python-Code-23k-ShareGPT",
"base_model:Markhit/CodeLlama3-8B-Python",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:05:12+00:00
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Duakovui/viT5_instruct_uit_ate1
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:06:26+00:00
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# animal_guessing
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the animal_train dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1.0
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.0
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "other", "library_name": "peft", "tags": ["llama-factory", "lora", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "animal_guessing", "results": []}]}
|
thunha/llama2-7b-hf-train
| null |
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"license:other",
"region:us"
] | null |
2024-04-23T16:06:35+00:00
|
null | null |
{}
|
irisftmm/my_awesome_kpu_model
| null |
[
"region:us"
] | null |
2024-04-23T16:06:59+00:00
|
|
null | null |
{}
|
mizoru/whisper-large-ru-ORD_0.7_0.1-peft
| null |
[
"region:us"
] | null |
2024-04-23T16:09:05+00:00
|
|
text-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
peace4ever/roberta-large-finetuned-mongolian_v2
| null |
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:09:10+00:00
|
text-generation
| null |
EXL2 quants of https://huggingface.co/microsoft/Phi-3-mini-128k-instruct
|
{"language": ["en"], "license": "mit", "tags": ["nlp", "code"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation"}
|
MarsupialAI/Phi-3-mini-128k-instruct_exl2
| null |
[
"safetensors",
"nlp",
"code",
"text-generation",
"en",
"license:mit",
"region:us"
] | null |
2024-04-23T16:09:23+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": ["unsloth"]}
|
Srimouli04/gemma-7b-finetuned-m16bit
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T16:09:56+00:00
|
text-generation
|
transformers
|
# Alsebay/Kilo-2x8B AWQ
- Model creator: [Alsebay](https://huggingface.co/Alsebay)
- Original model: [Kilo-2x8B](https://huggingface.co/Alsebay/Kilo-2x8B)
## Model Summary
A mixture-of-experts (MoE) merge of two Llama-3 models:
- vicgalle/Roleplay-Llama-3-8B
- Sao10K/L3-Solana-8B-v1
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Kilo-2x8B-AWQ"
system_message = "You are Kilo-2x8B, incarnated as a powerful AI. You were created by Alsebay."

# Load model
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
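As a quick sketch of the vLLM route listed above (model name from this repo; sampling parameters are illustrative, not tuned):
```python
from vllm import LLM, SamplingParams

# quantization="awq" tells vLLM to load the 4-bit AWQ weights.
llm = LLM(model="solidrust/Kilo-2x8B-AWQ", quantization="awq")
params = SamplingParams(temperature=0.8, max_tokens=256)
print(llm.generate(["Write a haiku about rain."], params)[0].outputs[0].text)
```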
|
{"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "Roleplay", "roleplay", "moe", "merge"], "base_model": ["vicgalle/Roleplay-Llama-3-8B", "Sao10K/L3-Solana-8B-v1"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
|
solidrust/Kilo-2x8B-AWQ
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"Roleplay",
"roleplay",
"moe",
"merge",
"base_model:vicgalle/Roleplay-Llama-3-8B",
"base_model:Sao10K/L3-Solana-8B-v1",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T16:10:07+00:00
|
text-generation
|
transformers
|
{}
|
itay-nakash/model_15231c74f7
| null |
[
"transformers",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T16:10:08+00:00
|
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-summary
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
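A minimal loading sketch for this adapter (it assumes access to the gated Llama-3 base model; `AutoPeftModelForCausalLM` resolves the base from the adapter config):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads meta-llama/Meta-Llama-3-8B-Instruct and attaches the LoRA adapter.
model = AutoPeftModelForCausalLM.from_pretrained("Yaxin1992/llama3-8b-summary")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
```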
|
{"license": "other", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "llama3-8b-summary", "results": []}]}
|
Yaxin1992/llama3-8b-summary
| null |
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null |
2024-04-23T16:10:29+00:00
|
null | null |
{"license": "apache-2.0"}
|
AlekseyScorpi/saiga_mistral_7b_vacancies_lora
| null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null |
2024-04-23T16:11:19+00:00
|
|
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the trained agent from the Hub (a minimal sketch; the checkpoint filename is assumed to follow the usual huggingface_sb3 naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed: huggingface_sb3 checkpoints are typically "<name>.zip".
checkpoint = load_from_hub(repo_id="atakepanda/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
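Once loaded, the agent can be rolled out in Gymnasium (a minimal sketch; assumes `gymnasium` with the Box2D extra is installed):
```python
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    # Greedy action from the trained policy.
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```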
|
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "270.30 +/- 17.89", "name": "mean_reward", "verified": false}]}]}]}
|
atakepanda/ppo-LunarLander-v2
| null |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-23T16:11:43+00:00
|
text-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
goperigon/nli-MiniLM2-L6-H768_iptc
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:12:41+00:00
|
text-classification
|
fasttext
|
{"license": "mit", "library_name": "fasttext", "datasets": ["jacquelinehe/enron-emails"], "pipeline_tag": "text-classification"}
|
sisaacson/Action-Item
| null |
[
"fasttext",
"text-classification",
"dataset:jacquelinehe/enron-emails",
"license:mit",
"region:us"
] | null |
2024-04-23T16:14:09+00:00
|
|
image-feature-extraction
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
ankit-katewa/detr-Personal
| null |
[
"transformers",
"safetensors",
"detr",
"image-feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:14:10+00:00
|
null | null |
{}
|
samzirbo/mT5.scratch.tedtalks.simple
| null |
[
"region:us"
] | null |
2024-04-23T16:15:24+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": ["unsloth"]}
|
Srimouli04/gemma-7b-finetuned-Amb-m16bit
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T16:15:52+00:00
|
fill-mask
|
transformers
|
{}
|
ltuzova/dapt_plus_tapt_helpfulness_base_pretraining_model
| null |
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:16:16+00:00
|
|
text-generation
|
mlx
|
# mlx-community/Phi-3-mini-4k-instruct-4bit-no-q-embed
This model was converted to MLX format from [`microsoft/Phi-3-mini-4k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using mlx-lm version **0.12.0**.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Phi-3-mini-4k-instruct-4bit-no-q-embed")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
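The same model can also be run from the command line (standard mlx-lm flags; adjust the prompt and token budget as needed):
```bash
python -m mlx_lm.generate \
  --model mlx-community/Phi-3-mini-4k-instruct-4bit-no-q-embed \
  --prompt "hello" --max-tokens 100
```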
|
{"language": ["en"], "license": "mit", "tags": ["nlp", "code", "mlx"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "widget": [{"messages": [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}]}]}
|
mlx-community/Phi-3-mini-4k-instruct-4bit-no-q-embed
| null |
[
"mlx",
"safetensors",
"phi3",
"nlp",
"code",
"text-generation",
"conversational",
"custom_code",
"en",
"license:mit",
"region:us"
] | null |
2024-04-23T16:16:45+00:00
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-sentiment-latest-biden-stance-1
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4037
- Accuracy: 0.5688073394495413
- Precision: 0.5540838852097131
- Recall: 0.6640211640211641
- F1 Score: 0.6040914560770156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------------------:|:-------:|:------------------:|
| 0.4339 | 1.0 | 3600 | 0.4173 | 0.8925 | 0.857630979498861 | 0.94125 | 0.8974970202622169 |
| 0.3848 | 2.0 | 7200 | 0.5757 | 0.854375 | 0.9341500765696784 | 0.7625 | 0.8396421197522368 |
| 0.4094 | 3.0 | 10800 | 0.3543 | 0.904375 | 0.8655367231638418 | 0.9575 | 0.9091988130563798 |
| 0.3937 | 4.0 | 14400 | 0.2576 | 0.91125 | 0.9092039800995025 | 0.91375 | 0.9114713216957606 |
| 0.3401 | 5.0 | 18000 | 0.2671 | 0.91625 | 0.9291237113402062 | 0.90125 | 0.9149746192893401 |
| 0.352 | 6.0 | 21600 | 0.2429 | 0.91875 | 0.9294871794871795 | 0.90625 | 0.9177215189873418 |
| 0.2883 | 7.0 | 25200 | 0.2857 | 0.915625 | 0.917189460476788 | 0.91375 | 0.915466499686913 |
| 0.2894 | 8.0 | 28800 | 0.2270 | 0.92375 | 0.9302030456852792 | 0.91625 | 0.9231738035264484 |
| 0.282 | 9.0 | 32400 | 0.2518 | 0.92 | 0.9189526184538653 | 0.92125 | 0.920099875156055 |
| 0.2485 | 10.0 | 36000 | 0.2351 | 0.92375 | 0.9269521410579346 | 0.92 | 0.9234629861982434 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
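To run inference with this adapter, attach it to its base model with PEFT (a minimal sketch; it assumes the fine-tuned classification head was saved together with the adapter):
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "cardiffnlp/twitter-roberta-base-sentiment-latest"
adapter_id = "saideep-arikontham/twitter-roberta-base-sentiment-latest-biden-stance-1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)
```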
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall"], "base_model": "cardiffnlp/twitter-roberta-base-sentiment-latest", "model-index": [{"name": "twitter-roberta-base-sentiment-latest-biden-stance-1", "results": []}]}
|
saideep-arikontham/twitter-roberta-base-sentiment-latest-biden-stance-1
| null |
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-roberta-base-sentiment-latest",
"has_space",
"region:us"
] | null |
2024-04-23T16:17:01+00:00
|
text-classification
|
transformers
|
{}
|
jeffyelson03/deberta_sentencelevel_nofeatures
| null |
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:17:45+00:00
|
|
image-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
conjunct/rps_vit
| null |
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:17:49+00:00
|
null | null |
{}
|
Lars2000/mnist
| null |
[
"region:us"
] | null |
2024-04-23T16:17:55+00:00
|
|
text-classification
|
transformers
|
{}
|
jeffyelson03/deberta_sentencelevel_ner_claim
| null |
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:18:56+00:00
|
|
null | null |
{}
|
mo-hf/whisper-tiny-khmer
| null |
[
"region:us"
] | null |
2024-04-23T16:19:40+00:00
|
|
null |
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-cf-difficulty-clf
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0082 | 0.1287 | 400 | 0.0085 |
| 0.0091 | 0.2575 | 800 | 0.0086 |
| 0.0088 | 0.3862 | 1200 | 0.0087 |
| 0.0078 | 0.5150 | 1600 | 0.0085 |
| 0.0079 | 0.6437 | 2000 | 0.0088 |
| 0.0092 | 0.7724 | 2400 | 0.0085 |
| 0.0093 | 0.9012 | 2800 | 0.0085 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "FacebookAI/roberta-large", "model-index": [{"name": "roberta-base-cf-difficulty-clf", "results": []}]}
|
eyeonyou/roberta-base-cf-difficulty-clf
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:19:41+00:00
|
null | null |
{"license": "mit"}
|
bbhandari/chapter_1_dnn
| null |
[
"license:mit",
"region:us"
] | null |
2024-04-23T16:20:08+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
CroissantCrusader/FrenchBaguette
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T16:20:26+00:00
|
null |
keras
|
{"license": "mit"}
|
bbhandari/chapter_1__models
| null |
[
"keras",
"license:mit",
"region:us"
] | null |
2024-04-23T16:21:07+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_classification
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2214
- Accuracy: 0.9435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2146 | 1.0 | 1563 | 0.1740 | 0.9346 |
| 0.1474 | 2.0 | 3126 | 0.2214 | 0.9435 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
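As a usage sketch (not part of the original card), the fine-tuned checkpoint can be served through the Transformers `pipeline` API; the repo id below is taken from this entry's metadata:

```python
from transformers import pipeline

# Loads the fine-tuned ALBERT classifier; label names depend on the (undocumented) training data.
classifier = pipeline("text-classification", model="mkim-MASI/my_awesome_model_classification")
print(classifier("This movie was surprisingly good."))
```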
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "albert-base-v2", "model-index": [{"name": "my_awesome_model_classification", "results": []}]}
|
mkim-MASI/my_awesome_model_classification
| null |
[
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:21:14+00:00
|
text-classification
|
transformers
|
{}
|
jeffyelson03/deberta_sentencelevel_ner_evidence
| null |
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:22:32+00:00
|
|
null | null |
The GGUF files of [RDson/Dolphin-less-Llama-3-Instruct-8B](https://huggingface.co/RDson/Dolphin-less-Llama-3-Instruct-8B).
Use the ChatML prompt template:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Or use it as an Ollama Modelfile:
```
FROM Dolphin-less-Llama-3-Instruct-8B-GGUF-Q<PICK A FILE HERE>.gguf
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
{{ .Response }}<|im_end|>"""
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
SYSTEM "You are Dolphin, a helpful AI assistant."
```
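For programmatic use, a minimal llama-cpp-python sketch; the filename below is a placeholder for whichever quant you download, and `chat_format="chatml"` matches the template above:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Dolphin-less-Llama-3-Instruct-8B-GGUF-Q4_K_M.gguf",  # hypothetical filename; use your downloaded quant
    chat_format="chatml",  # matches the prompt template above
    n_ctx=8192,
)
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Hello!"},
])
print(out["choices"][0]["message"]["content"])
```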
Whichever works for you...
|
{"license": "other", "tags": ["llama-3", "dolphin", "gguf"], "license_name": "llama-3", "license_link": "https://llama.meta.com/llama3/license/"}
|
RDson/Dolphin-less-Llama-3-Instruct-8B-GGUF
| null |
[
"gguf",
"llama-3",
"dolphin",
"license:other",
"region:us"
] | null |
2024-04-23T16:23:22+00:00
|
text-classification
|
transformers
|
{}
|
jeffyelson03/deberta_sentencelevel_ner_all
| null |
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:23:57+00:00
|
|
null | null |
{}
|
Khan67468/Najaf
| null |
[
"region:us"
] | null |
2024-04-23T16:24:41+00:00
|
|
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-squadv2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
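As a usage sketch (not part of the original card), the checkpoint can be queried with the Transformers `pipeline` API; the repo id below is taken from this entry's metadata:

```python
from transformers import pipeline

# Extractive QA: the model returns a span from the supplied context.
qa = pipeline("question-answering", model="DangNhaNguyen/distilbert-finetuned-squadv2")
result = qa(
    question="What is the capital of France?",
    context="Paris is the capital of France.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': 'Paris'}
```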
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-finetuned-squadv2", "results": []}]}
|
DangNhaNguyen/distilbert-finetuned-squadv2
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:24:55+00:00
|
null | null |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
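No snippet or library is listed for this entry; as a minimal, hedged sketch, the repository files can at least be fetched with `huggingface_hub` (the repo id comes from this entry's metadata, and what the files contain is not documented):

```python
from huggingface_hub import snapshot_download

# Downloads the full repo to the local HF cache and returns the local path.
local_dir = snapshot_download("Phenrique2011/sofa")
print(local_dir)
```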
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{}
|
Phenrique2011/sofa
| null |
[
"arxiv:1910.09700",
"region:us"
] | null |
2024-04-23T16:26:31+00:00
|
null |
transformers
|
# Uploaded model
- **Developed by:** saint324
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
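The card does not include loading code; a minimal sketch using Unsloth's documented `FastLanguageModel` API (the sequence length and 4-bit flag below are illustrative assumptions, not values from this card):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="saint324/lora_model_alpaca_llama3_8b",  # this repo
    max_seq_length=2048,   # assumed; not stated in the card
    load_in_4bit=True,     # matches the 4-bit base model listed above
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference mode
```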
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
|
saint324/lora_model_alpaca_llama3_8b
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:26:36+00:00
|
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-rps
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0067
- eval_accuracy: 1.0
- eval_runtime: 7.6656
- eval_samples_per_second: 59.878
- eval_steps_per_second: 15.002
- epoch: 3.0
- step: 2616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
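As a usage sketch (not part of the original card): the fine-tuned ViT can be called through the `pipeline` API. The label set is undocumented; reading the name `test-rps` as rock-paper-scissors is an assumption.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="conjunct/test-rps")
# Path or URL to an input image; "hand.jpg" is a hypothetical file name.
print(classifier("hand.jpg"))
```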
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "test-rps", "results": []}]}
|
conjunct/test-rps
| null |
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:27:03+00:00
|
null | null |
{}
|
Prajith04/tinyllama
| null |
[
"region:us"
] | null |
2024-04-23T16:27:09+00:00
|
|
text-generation
| null |
## Exllama v2 Quantizations of Einstein-v6.1-Llama3-8B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization.
<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>
Each branch contains a quantization at a different bits per weight; the main branch contains only the measurement.json used for further conversions.
Original model: https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Available sizes
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2 Einstein-v6.1-Llama3-8B-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:
Linux:
```shell
huggingface-cli download bartowski/Einstein-v6.1-Llama3-8B-exl2 --revision 6_5 --local-dir Einstein-v6.1-Llama3-8B-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
huggingface-cli download bartowski/Einstein-v6.1-Llama3-8B-exl2 --revision 6_5 --local-dir Einstein-v6.1-Llama3-8B-exl2-6.5 --local-dir-use-symlinks False
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
{"language": ["en"], "license": "other", "tags": ["axolotl", "generated_from_trainer", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama", "llama3"], "datasets": ["allenai/ai2_arc", "camel-ai/physics", "camel-ai/chemistry", "camel-ai/biology", "camel-ai/math", "metaeval/reclor", "openbookqa", "mandyyyyii/scibench", "derek-thomas/ScienceQA", "TIGER-Lab/ScienceEval", "jondurbin/airoboros-3.2", "LDJnr/Capybara", "Cot-Alpaca-GPT4-From-OpenHermes-2.5", "STEM-AI-mtl/Electrical-engineering", "knowrohit07/saraswati-stem", "sablo/oasst2_curated", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "bigbio/med_qa", "meta-math/MetaMathQA-40K", "openbookqa", "piqa", "metaeval/reclor", "derek-thomas/ScienceQA", "scibench", "sciq", "Open-Orca/SlimOrca", "migtissera/Synthia-v1.3", "TIGER-Lab/ScienceEval", "allenai/WildChat", "microsoft/orca-math-word-problems-200k", "openchat/openchat_sharegpt4_dataset", "teknium/GPTeacher-General-Instruct", "m-a-p/CodeFeedback-Filtered-Instruction", "totally-not-an-llm/EverythingLM-data-V3", "HuggingFaceH4/no_robots", "OpenAssistant/oasst_top1_2023-08-25", "WizardLM/WizardLM_evol_instruct_70k"], "base_model": "meta-llama/Meta-Llama-3-8B", "quantized_by": "bartowski", "pipeline_tag": "text-generation"}
|
bartowski/Einstein-v6.1-Llama3-8B-exl2
| null |
[
"axolotl",
"generated_from_trainer",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"llama",
"llama3",
"text-generation",
"en",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"dataset:allenai/WildChat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"dataset:HuggingFaceH4/no_robots",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null |
2024-04-23T16:27:12+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-invoice
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2568
- Precision: 0.7955
- Recall: 0.6931
- F1: 0.7407
- Accuracy: 0.9524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:--------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 9.0909 | 100 | 0.8724 | 0.0270 | 0.0099 | 0.0145 | 0.7931 |
| No log | 18.1818 | 200 | 0.3880 | 0.4299 | 0.4554 | 0.4423 | 0.9126 |
| No log | 27.2727 | 300 | 0.2870 | 0.6 | 0.4158 | 0.4912 | 0.9229 |
| No log | 36.3636 | 400 | 0.3227 | 0.6389 | 0.4554 | 0.5318 | 0.9242 |
| 0.6024 | 45.4545 | 500 | 0.3251 | 0.6092 | 0.5248 | 0.5638 | 0.9280 |
| 0.6024 | 54.5455 | 600 | 0.2188 | 0.6842 | 0.6436 | 0.6633 | 0.9422 |
| 0.6024 | 63.6364 | 700 | 0.2146 | 0.7159 | 0.6238 | 0.6667 | 0.9447 |
| 0.6024 | 72.7273 | 800 | 0.2138 | 0.8202 | 0.7228 | 0.7684 | 0.9563 |
| 0.6024 | 81.8182 | 900 | 0.2128 | 0.7927 | 0.6436 | 0.7104 | 0.9499 |
| 0.0428 | 90.9091 | 1000 | 0.2400 | 0.7753 | 0.6832 | 0.7263 | 0.9512 |
| 0.0428 | 100.0 | 1100 | 0.2498 | 0.7821 | 0.6040 | 0.6816 | 0.9434 |
| 0.0428 | 109.0909 | 1200 | 0.2614 | 0.7805 | 0.6337 | 0.6995 | 0.9447 |
| 0.0428 | 118.1818 | 1300 | 0.2742 | 0.7821 | 0.6040 | 0.6816 | 0.9447 |
| 0.0428 | 127.2727 | 1400 | 0.2744 | 0.7471 | 0.6436 | 0.6915 | 0.9473 |
| 0.0091 | 136.3636 | 1500 | 0.2568 | 0.7955 | 0.6931 | 0.7407 | 0.9524 |
| 0.0091 | 145.4545 | 1600 | 0.2711 | 0.7701 | 0.6634 | 0.7128 | 0.9486 |
| 0.0091 | 154.5455 | 1700 | 0.3043 | 0.7778 | 0.6238 | 0.6923 | 0.9434 |
| 0.0091 | 163.6364 | 1800 | 0.2746 | 0.7683 | 0.6238 | 0.6885 | 0.9434 |
| 0.0091 | 172.7273 | 1900 | 0.2646 | 0.7955 | 0.6931 | 0.7407 | 0.9524 |
| 0.0056 | 181.8182 | 2000 | 0.2681 | 0.7955 | 0.6931 | 0.7407 | 0.9524 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.2+cpu
- Datasets 2.19.0
- Tokenizers 0.19.1
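As a loading sketch (not part of the original card): by default the LayoutLMv3 processor runs OCR via pytesseract, and `invoice.png` below is a hypothetical input file:

```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

repo_id = "Sunilkt/layoutlmv3-finetuned-invoice"
processor = AutoProcessor.from_pretrained(repo_id)
model = LayoutLMv3ForTokenClassification.from_pretrained(repo_id)

image = Image.open("invoice.png").convert("RGB")  # hypothetical input
encoding = processor(image, return_tensors="pt")  # default processor applies OCR (requires pytesseract)
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1)  # per-token label ids
```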
|
{"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "microsoft/layoutlmv3-base", "model-index": [{"name": "layoutlmv3-finetuned-invoice", "results": []}]}
|
Sunilkt/layoutlmv3-finetuned-invoice
| null |
[
"transformers",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:27:28+00:00
|
null |
transformers
|
# Uploaded model
- **Developed by:** saint324
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
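As with the companion repo above, no loading code is given; a minimal sketch using Unsloth's documented `FastLanguageModel` API (parameters below are illustrative assumptions):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="saint324/alpaca_llama3_8b_unslothed",  # this repo
    max_seq_length=2048,   # assumed; not stated in the card
    load_in_4bit=True,     # matches the 4-bit base model listed above
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference mode
```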
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
|
saint324/alpaca_llama3_8b_unslothed
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:29:16+00:00
|
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
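No snippet is provided; as a minimal sketch, assuming this repo holds PEFT adapters for the base model listed in the metadata (`TinyLlama/TinyLlama-1.1B-Chat-v1.0`):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # base model from this entry's metadata
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    base_model,
    "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.2_Seed104",
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```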
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
{"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
|
bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.2_Seed104
| null |
[
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null |
2024-04-23T16:30:23+00:00
|
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
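No snippet is provided; as a minimal sketch, assuming this PEFT repo loads on top of the base model listed in the metadata (`TinyLlama/TinyLlama-1.1B-Chat-v1.0`):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # base model from this entry's metadata
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    base_model,
    "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Aleatoric_tiny_0.2_Seed104",
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```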
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
{"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
|
bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Aleatoric_tiny_0.2_Seed104
| null |
[
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null |
2024-04-23T16:30:27+00:00
|
null | null |
{}
|
lineCode/sd1_5-fp16-vae_ft_mse-autoslicing-cn_canny
| null |
[
"region:us"
] | null |
2024-04-23T16:30:40+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
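No snippet is provided; a minimal sketch assuming a standard Transformers causal LM (this entry's tags list `gemma` and `text-generation`; the repo id comes from this entry's metadata):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "nem012/gemma2b-r8"  # repo id from this entry's metadata
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```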
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
nem012/gemma2b-r8
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-23T16:30:50+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v2-WtP-FT-6L-256BS-UD
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2493
- Precision: 0.4540
- Recall: 0.715
- F1: 0.5553
- Threshold: 0.054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Threshold |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:---------:|
| No log | 4.07 | 500 | 0.1002 | 0.8 | 0.94 | 0.8644 | 0.091 |
| No log | 4.07 | 500 | 0.1145 | 0.4678 | 0.835 | 0.5996 | 0.5 |
| No log | 4.07 | 500 | 0.0962 | 0.7673 | 0.775 | 0.7711 | 0.0430 |
| No log | 4.07 | 500 | 0.0845 | 0.7397 | 0.895 | 0.8100 | 0.4 |
| No log | 4.07 | 500 | 0.1072 | 0.7919 | 0.875 | 0.8314 | 0.4 |
| No log | 4.07 | 500 | 0.0266 | 0.9474 | 0.99 | 0.9682 | 0.6 |
| No log | 4.07 | 500 | 0.0472 | 0.8170 | 0.9196 | 0.8652 | 0.2 |
| No log | 4.07 | 500 | 0.0307 | 0.9343 | 0.995 | 0.9637 | 0.2 |
| No log | 4.07 | 500 | 0.0362 | 0.9171 | 0.995 | 0.9544 | 0.3000 |
| No log | 4.07 | 500 | 0.1361 | 0.7166 | 0.885 | 0.7919 | 0.075 |
| No log | 4.07 | 500 | 0.0326 | 0.9336 | 0.985 | 0.9586 | 0.2 |
| No log | 4.07 | 500 | 0.0522 | 0.8670 | 0.945 | 0.9043 | 0.8 |
| No log | 4.07 | 500 | 0.0263 | 0.9476 | 0.995 | 0.9707 | 0.2 |
| No log | 4.07 | 500 | 0.0546 | 0.9171 | 0.995 | 0.9544 | 0.7000 |
| No log | 4.07 | 500 | 0.0432 | 0.9128 | 0.995 | 0.9522 | 0.078 |
| No log | 4.07 | 500 | 0.0310 | 0.8839 | 0.99 | 0.9340 | 0.034 |
| No log | 4.07 | 500 | 0.0369 | 0.8930 | 0.9746 | 0.9320 | 0.7000 |
| No log | 4.07 | 500 | 0.0445 | 0.8905 | 0.935 | 0.9122 | 0.3000 |
| No log | 4.07 | 500 | 0.1721 | 0.7957 | 0.7437 | 0.7688 | 0.035 |
| No log | 4.07 | 500 | 0.0407 | 0.9091 | 1.0 | 0.9524 | 0.2 |
| No log | 4.07 | 500 | 0.0317 | 0.9381 | 0.91 | 0.9239 | 0.8 |
| No log | 4.07 | 500 | 0.1193 | 0.8806 | 0.885 | 0.8828 | 0.2 |
| No log | 4.07 | 500 | 0.0224 | 0.9192 | 0.91 | 0.9146 | 0.041 |
| No log | 4.07 | 500 | 0.0561 | 0.8371 | 0.9391 | 0.8852 | 0.092 |
| No log | 4.07 | 500 | 0.0623 | 0.9155 | 0.975 | 0.9443 | 0.4 |
| No log | 4.07 | 500 | 0.1334 | 0.7229 | 0.835 | 0.7749 | 0.2 |
| No log | 4.07 | 500 | 0.0202 | 0.8864 | 0.9799 | 0.9308 | 0.7000 |
| No log | 4.07 | 500 | 0.0463 | 0.9275 | 0.96 | 0.9435 | 0.9 |
| No log | 4.07 | 500 | 0.0846 | 0.6888 | 0.83 | 0.7528 | 0.2 |
| No log | 4.07 | 500 | 0.0340 | 0.9336 | 0.985 | 0.9586 | 0.4 |
| No log | 4.07 | 500 | 0.0693 | 0.9104 | 0.915 | 0.9127 | 0.6 |
| No log | 4.07 | 500 | 0.0481 | 0.9330 | 0.975 | 0.9535 | 0.7000 |
| No log | 4.07 | 500 | 0.0959 | 0.8 | 0.86 | 0.8289 | 0.0180 |
| No log | 4.07 | 500 | 0.0321 | 0.9417 | 0.97 | 0.9557 | 0.2 |
| No log | 4.07 | 500 | 0.0251 | 0.9415 | 0.965 | 0.9531 | 0.7000 |
| No log | 4.07 | 500 | 0.2579 | 0.7473 | 0.68 | 0.7120 | 0.023 |
| No log | 4.07 | 500 | 0.0213 | 0.9065 | 0.97 | 0.9372 | 0.5 |
| No log | 4.07 | 500 | 0.1055 | 0.8960 | 0.905 | 0.9005 | 0.2 |
| No log | 4.07 | 500 | 0.1241 | 0.6141 | 0.7437 | 0.6727 | 0.084 |
| No log | 4.07 | 500 | 0.1314 | 0.8245 | 0.775 | 0.7990 | 0.4 |
| No log | 4.07 | 500 | 0.1550 | 0.7877 | 0.835 | 0.8107 | 0.092 |
| No log | 4.07 | 500 | 0.0601 | 0.8204 | 0.845 | 0.8325 | 0.057 |
| No log | 4.07 | 500 | 0.0929 | 0.8578 | 0.965 | 0.9082 | 0.024 |
| No log | 4.07 | 500 | 0.0182 | 0.9303 | 0.9397 | 0.9350 | 0.066 |
| No log | 4.07 | 500 | 0.0223 | 0.8369 | 0.975 | 0.9007 | 0.089 |
| No log | 4.07 | 500 | 0.0092 | 0.9249 | 0.985 | 0.9540 | 0.6 |
| No log | 4.07 | 500 | 0.0206 | 0.9387 | 0.995 | 0.9660 | 0.2 |
| No log | 4.07 | 500 | 0.1204 | 0.7870 | 0.905 | 0.8419 | 0.4 |
| No log | 4.07 | 500 | 0.0729 | 0.9608 | 0.98 | 0.9703 | 0.017 |
| No log | 4.07 | 500 | 0.0620 | 0.9147 | 0.965 | 0.9392 | 0.035 |
| No log | 4.07 | 500 | 0.0397 | 0.9415 | 0.965 | 0.9531 | 0.6 |
| No log | 4.07 | 500 | 0.0129 | 0.8517 | 0.9036 | 0.8768 | 0.7000 |
| No log | 4.07 | 500 | 0.1209 | 0.8118 | 0.69 | 0.7459 | 0.099 |
| No log | 4.07 | 500 | 0.1203 | 0.7902 | 0.81 | 0.8000 | 0.3000 |
| No log | 4.07 | 500 | 0.0425 | 0.9213 | 0.995 | 0.9567 | 0.7000 |
| No log | 4.07 | 500 | 0.0364 | 0.9479 | 1.0 | 0.9732 | 0.6 |
| No log | 4.07 | 500 | 0.1842 | 0.6696 | 0.77 | 0.7163 | 0.2 |
| No log | 4.07 | 500 | 0.0274 | 0.9507 | 0.965 | 0.9578 | 0.9 |
| No log | 4.07 | 500 | 0.2837 | 0.6397 | 0.87 | 0.7373 | 0.032 |
| No log | 4.07 | 500 | 0.0237 | 0.9431 | 0.995 | 0.9684 | 0.6 |
| No log | 4.07 | 500 | 0.0224 | 0.9794 | 0.95 | 0.9645 | 0.9 |
| No log | 4.07 | 500 | 0.0118 | 0.9343 | 0.925 | 0.9296 | 0.8 |
| No log | 4.07 | 500 | 0.1182 | 0.8364 | 0.895 | 0.8647 | 0.0430 |
| No log | 4.07 | 500 | 0.0181 | 0.9517 | 0.985 | 0.9681 | 0.8 |
| No log | 4.07 | 500 | 0.0448 | 0.9087 | 0.995 | 0.9499 | 0.058 |
| No log | 4.07 | 500 | 0.0378 | 0.8884 | 0.955 | 0.9205 | 0.9 |
| No log | 4.07 | 500 | 0.0280 | 0.9561 | 0.98 | 0.9679 | 0.9 |
| No log | 4.07 | 500 | 0.0143 | 0.9567 | 0.995 | 0.9755 | 0.4 |
| No log | 4.07 | 500 | 0.0805 | 0.6746 | 0.85 | 0.7522 | 0.064 |
| No log | 4.07 | 500 | 0.1277 | 0.8621 | 0.75 | 0.8021 | 0.3000 |
| No log | 4.07 | 500 | 0.0401 | 0.8860 | 0.855 | 0.8702 | 0.7000 |
| No log | 4.07 | 500 | 0.1072 | 0.6414 | 0.93 | 0.7592 | 0.062 |
| No log | 4.07 | 500 | 0.0396 | 0.9381 | 0.985 | 0.9610 | 0.6 |
| No log | 4.07 | 500 | 0.0588 | 0.8904 | 0.975 | 0.9308 | 0.6 |
| No log | 4.07 | 500 | 0.0821 | 0.6372 | 0.72 | 0.6761 | 0.3000 |
| No log | 4.07 | 500 | 0.0718 | 0.7393 | 0.95 | 0.8315 | 0.084 |
| No log | 4.07 | 500 | 0.0500 | 0.9286 | 0.975 | 0.9512 | 0.021 |
| No log | 4.07 | 500 | 0.0332 | 0.9389 | 0.845 | 0.8895 | 0.5 |
| No log | 4.07 | 500 | 0.1660 | 0.6223 | 0.865 | 0.7238 | 0.09 |
| No log | 4.07 | 500 | 0.0972 | 0.7678 | 0.81 | 0.7883 | 0.023 |
| No log | 4.07 | 500 | 0.0549 | 0.8173 | 0.8131 | 0.8152 | 0.4 |
| No log | 4.07 | 500 | 0.1175 | 0.8161 | 0.91 | 0.8605 | 0.092 |
| No log | 4.07 | 500 | 0.2597 | 0.5894 | 0.725 | 0.6502 | 0.2 |
| No log | 4.07 | 500 | 0.0783 | 0.5257 | 0.715 | 0.6059 | 0.7000 |
| No log | 4.07 | 500 | 0.1270 | 0.5837 | 0.75 | 0.6565 | 0.0730 |
| No log | 4.07 | 500 | 0.0562 | 0.6549 | 0.835 | 0.7341 | 0.3000 |
| No log | 4.07 | 500 | 0.1949 | 0.5229 | 0.685 | 0.5931 | 0.5 |
| No log | 4.07 | 500 | 0.1777 | 0.6485 | 0.775 | 0.7062 | 0.4 |
| No log | 4.07 | 500 | 0.1128 | 0.6027 | 0.2211 | 0.3235 | 0.8 |
| No log | 4.07 | 500 | 0.1114 | 0.6329 | 0.75 | 0.6865 | 0.2 |
| No log | 4.07 | 500 | 0.1264 | 0.7396 | 0.625 | 0.6775 | 0.8 |
| No log | 4.07 | 500 | 0.2318 | 0.5662 | 0.62 | 0.5919 | 0.2 |
| No log | 4.07 | 500 | 0.0974 | 0.6837 | 0.735 | 0.7084 | 0.4 |
| No log | 4.07 | 500 | 0.0850 | 0.6394 | 0.665 | 0.6520 | 0.6 |
| No log | 4.07 | 500 | 0.1156 | 0.5657 | 0.84 | 0.6761 | 0.098 |
| No log | 4.07 | 500 | 0.1355 | 0.7446 | 0.86 | 0.7981 | 0.3000 |
| No log | 4.07 | 500 | 0.1131 | 0.7489 | 0.82 | 0.7828 | 0.4 |
| No log | 4.07 | 500 | 0.1119 | 0.5468 | 0.76 | 0.6360 | 0.085 |
| No log | 4.07 | 500 | 0.1207 | 0.5220 | 0.7739 | 0.6235 | 0.6 |
| No log | 4.07 | 500 | 0.1101 | 0.4622 | 0.765 | 0.5763 | 0.095 |
| No log | 4.07 | 500 | 0.1868 | 0.4870 | 0.84 | 0.6165 | 0.007 |
| No log | 4.07 | 500 | 0.1367 | 0.7177 | 0.75 | 0.7335 | 0.7000 |
| No log | 4.07 | 500 | 0.0903 | 0.6415 | 0.68 | 0.6602 | 0.4 |
| No log | 4.07 | 500 | 0.2684 | 0.6171 | 0.83 | 0.7079 | 0.061 |
| No log | 4.07 | 500 | 0.0666 | 0.6106 | 0.69 | 0.6479 | 0.082 |
| No log | 4.07 | 500 | 0.1162 | 0.5796 | 0.6650 | 0.6194 | 0.2 |
| No log | 4.07 | 500 | 0.1590 | 0.6062 | 0.885 | 0.7195 | 0.064 |
| No log | 4.07 | 500 | 0.1676 | 0.6266 | 0.495 | 0.5531 | 0.4 |
| No log | 4.07 | 500 | 0.1129 | 0.4820 | 0.535 | 0.5071 | 0.007 |
| No log | 4.07 | 500 | 0.1639 | 0.5185 | 0.91 | 0.6606 | 0.1 |
| No log | 4.07 | 500 | 0.1002 | 0.6 | 0.48 | 0.5333 | 0.3000 |
| No log | 4.07 | 500 | 0.1273 | 0.6218 | 0.74 | 0.6758 | 0.2 |
| No log | 4.07 | 500 | 0.1430 | 0.7486 | 0.685 | 0.7154 | 0.6 |
| No log | 4.07 | 500 | 0.2288 | 0.5323 | 0.825 | 0.6471 | 0.065 |
| No log | 4.07 | 500 | 0.1861 | 0.4377 | 0.72 | 0.5444 | 0.028 |
| No log | 4.07 | 500 | 0.2578 | 0.6818 | 0.525 | 0.5932 | 0.033 |
| No log | 4.07 | 500 | 0.1330 | 0.5426 | 0.765 | 0.6349 | 0.2 |
| No log | 4.07 | 500 | 0.3809 | 0.5310 | 0.77 | 0.6286 | 0.001 |
| No log | 4.07 | 500 | 0.1268 | 0.2136 | 0.69 | 0.3262 | 0.063 |
| No log | 4.07 | 500 | 0.2217 | 0.6692 | 0.89 | 0.7639 | 0.077 |
| No log | 4.07 | 500 | 0.1048 | 0.6603 | 0.5176 | 0.5803 | 0.3000 |
| No log | 4.07 | 500 | 0.2124 | 0.7179 | 0.56 | 0.6292 | 0.5 |
| No log | 4.07 | 500 | 0.1585 | 0.6722 | 0.81 | 0.7347 | 0.074 |
| No log | 4.07 | 500 | 0.0957 | 0.5943 | 0.63 | 0.6117 | 0.2 |
| No log | 4.07 | 500 | 0.2199 | 0.6263 | 0.88 | 0.7318 | 0.095 |
| No log | 4.07 | 500 | 0.0858 | 0.5270 | 0.6382 | 0.5773 | 0.6 |
| No log | 4.07 | 500 | 0.0911 | 0.5327 | 0.57 | 0.5507 | 0.7000 |
| No log | 4.07 | 500 | 0.0624 | 0.4711 | 0.57 | 0.5158 | 0.3000 |
| No log | 4.07 | 500 | 0.1240 | 0.6059 | 0.815 | 0.6951 | 0.3000 |
| No log | 4.07 | 500 | 0.1171 | 0.5317 | 0.67 | 0.5929 | 0.2 |
| No log | 4.07 | 500 | 0.1534 | 0.7915 | 0.93 | 0.8552 | 0.0720 |
| No log | 4.07 | 500 | 0.1666 | 0.6579 | 0.5 | 0.5682 | 0.2 |
| No log | 4.07 | 500 | 0.2212 | 0.5781 | 0.74 | 0.6491 | 0.099 |
| No log | 4.07 | 500 | 0.0524 | 0.4664 | 0.5578 | 0.5080 | 0.0880 |
| No log | 4.07 | 500 | 0.1668 | 0.45 | 0.405 | 0.4263 | 0.094 |
| No log | 4.07 | 500 | 0.3188 | 0.3032 | 0.72 | 0.4267 | 0.021 |
| No log | 4.07 | 500 | 0.1337 | 0.7243 | 0.775 | 0.7488 | 0.8 |
| No log | 4.07 | 500 | 0.1321 | 0.7039 | 0.82 | 0.7575 | 0.2 |
| No log | 4.07 | 500 | 0.2232 | 0.5413 | 0.59 | 0.5646 | 0.2 |
| No log | 4.07 | 500 | 0.1252 | 0.6300 | 0.715 | 0.6698 | 0.3000 |
| No log | 4.07 | 500 | 0.2714 | 0.6546 | 0.815 | 0.7261 | 0.083 |
| No log | 4.07 | 500 | 0.1052 | 0.6082 | 0.745 | 0.6697 | 0.5 |
| No log | 4.07 | 500 | 0.1422 | 0.6371 | 0.79 | 0.7054 | 0.2 |
| No log | 4.07 | 500 | 0.0520 | 0.5911 | 0.73 | 0.6532 | 0.6 |
| No log | 4.07 | 500 | 0.2465 | 0.4896 | 0.705 | 0.5779 | 0.0190 |
| No log | 4.07 | 500 | 0.1057 | 0.5571 | 0.78 | 0.65 | 0.4 |
| No log | 4.07 | 500 | 0.1355 | 0.5738 | 0.7 | 0.6306 | 0.2 |
| No log | 4.07 | 500 | 0.0961 | 0.5878 | 0.72 | 0.6472 | 0.4 |
| No log | 4.07 | 500 | 0.1681 | 0.5305 | 0.825 | 0.6458 | 0.092 |
| No log | 4.07 | 500 | 0.1136 | 0.6756 | 0.76 | 0.7153 | 0.2 |
| No log | 4.07 | 500 | 0.1382 | 0.5474 | 0.375 | 0.4451 | 0.3000 |
| No log | 4.07 | 500 | 0.2398 | 0.5110 | 0.58 | 0.5433 | 0.2 |
| No log | 4.07 | 500 | 0.0790 | 0.5648 | 0.61 | 0.5865 | 0.3000 |
| No log | 4.07 | 500 | 0.1124 | 0.6386 | 0.91 | 0.7505 | 0.095 |
| No log | 4.07 | 500 | 0.2083 | 0.6781 | 0.79 | 0.7298 | 0.042 |
| No log | 4.07 | 500 | 0.1189 | 0.6008 | 0.745 | 0.6652 | 0.4 |
| No log | 4.07 | 500 | 0.0677 | 0.6280 | 0.65 | 0.6388 | 0.5 |
| No log | 4.07 | 500 | 0.0517 | 0.6133 | 0.785 | 0.6886 | 0.5 |
| No log | 4.07 | 500 | 0.2658 | 0.5534 | 0.725 | 0.6277 | 0.029 |
| No log | 4.07 | 500 | 0.0985 | 0.4481 | 0.54 | 0.4898 | 0.4 |
| No log | 4.07 | 500 | 0.2546 | 0.5793 | 0.785 | 0.6667 | 0.2 |
| No log | 4.07 | 500 | 0.1756 | 0.2905 | 0.7 | 0.4106 | 0.005 |
| No log | 4.07 | 500 | 0.1191 | 0.3289 | 0.8687 | 0.4771 | 0.033 |
| No log | 4.07 | 500 | 0.1853 | 0.5169 | 0.84 | 0.64 | 0.083 |
| No log | 4.07 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.006 |
| No log | 4.07 | 500 | 0.0105 | 0.7479 | 0.9188 | 0.8246 | 0.3000 |
| No log | 4.07 | 500 | 0.0048 | 0.9412 | 0.96 | 0.9505 | 0.6 |
| No log | 4.07 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.049 |
| No log | 4.07 | 500 | 0.0021 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 4.07 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.7000 |
| No log | 4.07 | 500 | 0.0039 | 0.9947 | 1.0 | 0.9973 | 0.001 |
| No log | 4.07 | 500 | 0.0029 | 0.9803 | 0.995 | 0.9876 | 0.4 |
| No log | 4.07 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.004 |
| No log | 4.07 | 500 | 0.0013 | 1.0 | 0.99 | 0.9950 | 0.5 |
| No log | 4.07 | 500 | 0.0009 | 0.9950 | 1.0 | 0.9975 | 0.3000 |
| No log | 4.07 | 500 | 0.0050 | 0.9849 | 0.98 | 0.9825 | 0.078 |
| No log | 4.07 | 500 | 0.0163 | 1.0 | 0.92 | 0.9583 | 0.6 |
| No log | 4.07 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.035 |
| No log | 4.07 | 500 | 0.0089 | 1.0 | 0.92 | 0.9583 | 0.7000 |
| No log | 4.07 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.005 |
| No log | 4.07 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.028 |
| No log | 4.07 | 500 | 0.0033 | 0.9899 | 0.985 | 0.9875 | 0.4 |
| No log | 4.07 | 500 | 0.0024 | 0.9755 | 0.995 | 0.9851 | 0.007 |
| No log | 4.07 | 500 | 0.0017 | 0.9852 | 1.0 | 0.9926 | 0.2 |
| No log | 4.07 | 500 | 0.0414 | 0.8830 | 0.83 | 0.8557 | 0.5 |
| No log | 4.07 | 500 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.0130 |
| No log | 4.07 | 500 | 0.0024 | 0.9899 | 0.98 | 0.9849 | 0.7000 |
| No log | 4.07 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.02 |
| No log | 4.07 | 500 | 0.0003 | 0.9950 | 1.0 | 0.9975 | 0.2 |
| No log | 4.07 | 500 | 0.0024 | 0.9900 | 0.995 | 0.9925 | 0.3000 |
| No log | 4.07 | 500 | 0.0041 | 0.9900 | 0.995 | 0.9925 | 0.035 |
| No log | 4.07 | 500 | 0.0078 | 0.9502 | 0.955 | 0.9526 | 0.8 |
| No log | 4.07 | 500 | 0.0021 | 0.9901 | 1.0 | 0.9950 | 0.056 |
| No log | 4.07 | 500 | 0.0233 | 1.0 | 0.94 | 0.9691 | 0.2 |
| No log | 4.07 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.032 |
| No log | 4.07 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 4.07 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.006 |
| No log | 4.07 | 500 | 0.0054 | 0.9900 | 0.995 | 0.9925 | 0.7000 |
| No log | 4.07 | 500 | 0.0068 | 0.9567 | 0.995 | 0.9755 | 0.007 |
| No log | 4.07 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 4.07 | 500 | 0.0024 | 1.0 | 1.0 | 1.0 | 0.8 |
| No log | 4.07 | 500 | 0.0048 | 0.9336 | 0.985 | 0.9586 | 0.2 |
| No log | 4.07 | 500 | 0.0090 | 0.9431 | 0.995 | 0.9684 | 0.033 |
| No log | 4.07 | 500 | 0.0025 | 0.99 | 0.99 | 0.99 | 0.9 |
| No log | 4.07 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 4.07 | 500 | 0.0007 | 1.0 | 0.995 | 0.9975 | 0.6 |
| No log | 4.07 | 500 | 0.0021 | 0.9949 | 0.985 | 0.9899 | 0.2 |
| No log | 4.07 | 500 | 0.0188 | 0.9130 | 0.945 | 0.9287 | 0.6 |
| No log | 4.07 | 500 | 0.0004 | 0.9950 | 1.0 | 0.9975 | 0.3000 |
| No log | 4.07 | 500 | 0.0020 | 0.99 | 0.99 | 0.99 | 0.6 |
| No log | 4.07 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 4.07 | 500 | 0.0003 | 0.9950 | 1.0 | 0.9975 | 0.058 |
| No log | 4.07 | 500 | 0.0085 | 0.9659 | 0.99 | 0.9778 | 0.6 |
| No log | 4.07 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 4.07 | 500 | 0.0271 | 0.8249 | 0.895 | 0.8585 | 0.3000 |
| No log | 4.07 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.006 |
| No log | 4.07 | 500 | 0.0012 | 0.9900 | 0.995 | 0.9925 | 0.7000 |
| No log | 4.07 | 500 | 0.0009 | 0.9901 | 1.0 | 0.9950 | 0.068 |
| No log | 4.07 | 500 | 0.0012 | 0.995 | 0.995 | 0.995 | 0.5 |
| No log | 4.07 | 500 | 0.0250 | 0.7944 | 0.985 | 0.8795 | 0.3000 |
| No log | 4.07 | 500 | 0.0035 | 1.0 | 0.985 | 0.9924 | 0.3000 |
| No log | 4.07 | 500 | 0.0265 | 0.8985 | 0.885 | 0.8917 | 0.7000 |
| No log | 4.07 | 500 | 0.0249 | 0.6753 | 0.6650 | 0.6701 | 0.3000 |
| No log | 4.07 | 500 | 0.0439 | 0.6355 | 0.68 | 0.6570 | 0.8 |
| No log | 4.07 | 500 | 0.1305 | 0.6961 | 0.63 | 0.6614 | 0.8 |
| No log | 4.07 | 500 | 0.1844 | 0.3733 | 0.5 | 0.4275 | 0.2 |
| No log | 4.07 | 500 | 0.0302 | 0.6833 | 0.755 | 0.7173 | 0.4 |
| No log | 4.07 | 500 | 0.1324 | 0.7801 | 0.7926 | 0.7863 | 0.3000 |
| No log | 4.07 | 500 | 0.1011 | 0.5802 | 0.76 | 0.6580 | 0.5 |
| No log | 4.07 | 500 | 0.0582 | 0.7424 | 0.735 | 0.7387 | 0.3000 |
| No log | 4.07 | 500 | 0.0702 | 0.6986 | 0.73 | 0.7139 | 0.5 |
| No log | 4.07 | 500 | 0.0682 | 0.8333 | 0.75 | 0.7895 | 0.8 |
| No log | 4.07 | 500 | 0.0450 | 0.6371 | 0.79 | 0.7054 | 0.2 |
| No log | 4.07 | 500 | 0.1157 | 0.5598 | 0.655 | 0.6037 | 0.7000 |
| No log | 4.07 | 500 | 0.0507 | 0.5348 | 0.73 | 0.6173 | 0.1 |
| No log | 4.07 | 500 | 0.1466 | 0.5662 | 0.62 | 0.5919 | 0.9 |
| No log | 4.07 | 500 | 0.1030 | 0.5578 | 0.7 | 0.6208 | 0.2 |
| No log | 4.07 | 500 | 0.0205 | 0.9317 | 0.955 | 0.9432 | 0.2 |
| No log | 4.07 | 500 | 0.0875 | 0.6561 | 0.725 | 0.6888 | 0.7000 |
| No log | 4.07 | 500 | 0.0686 | 0.5130 | 0.69 | 0.5885 | 0.3000 |
| No log | 4.07 | 500 | 0.0762 | 0.7151 | 0.6212 | 0.6649 | 0.2 |
| No log | 4.07 | 500 | 0.0849 | 0.7163 | 0.7487 | 0.7322 | 0.3000 |
| No log | 4.07 | 500 | 0.0572 | 0.6150 | 0.695 | 0.6526 | 0.3000 |
| No log | 4.07 | 500 | 0.0556 | 0.6085 | 0.785 | 0.6856 | 0.6 |
| No log | 4.07 | 500 | 0.0462 | 0.7546 | 0.815 | 0.7837 | 0.0600 |
| No log | 4.07 | 500 | 0.0755 | 0.4848 | 0.56 | 0.5197 | 0.5 |
| No log | 4.07 | 500 | 0.0809 | 0.5990 | 0.62 | 0.6093 | 0.7000 |
| No log | 4.07 | 500 | 0.0716 | 0.5887 | 0.73 | 0.6518 | 0.3000 |
| No log | 4.07 | 500 | 0.1119 | 0.5580 | 0.385 | 0.4556 | 0.016 |
| No log | 4.07 | 500 | 0.0681 | 0.5620 | 0.68 | 0.6154 | 0.3000 |
| No log | 4.07 | 500 | 0.0982 | 0.8182 | 0.72 | 0.7660 | 0.046 |
| No log | 4.07 | 500 | 0.1035 | 0.5845 | 0.64 | 0.6110 | 0.2 |
| No log | 4.07 | 500 | 0.0419 | 0.9330 | 0.905 | 0.9188 | 0.8 |
| No log | 4.07 | 500 | 0.0024 | 0.9950 | 0.99 | 0.9925 | 0.3000 |
| No log | 4.07 | 500 | 0.1196 | 0.7588 | 0.755 | 0.7569 | 0.047 |
| No log | 4.07 | 500 | 0.0880 | 0.5 | 0.66 | 0.5690 | 0.6 |
| No log | 4.07 | 500 | 0.1023 | 0.5098 | 0.65 | 0.5714 | 0.6 |
| No log | 4.07 | 500 | 0.2601 | 0.4118 | 0.4468 | 0.4286 | 0.0300 |
| No log | 4.07 | 500 | 0.0788 | 0.4733 | 0.575 | 0.5192 | 0.011 |
| No log | 4.07 | 500 | 0.0764 | 0.6898 | 0.745 | 0.7163 | 0.8 |
| No log | 4.07 | 500 | 0.0796 | 0.7053 | 0.73 | 0.7174 | 0.5 |
| No log | 4.07 | 500 | 0.0659 | 0.8654 | 0.9 | 0.8824 | 0.9 |
| No log | 4.07 | 500 | 0.0910 | 0.6376 | 0.73 | 0.6807 | 0.7000 |
| No log | 4.07 | 500 | 0.0909 | 0.4541 | 0.42 | 0.4364 | 0.0720 |
| No log | 4.07 | 500 | 0.1257 | 0.4618 | 0.695 | 0.5549 | 0.3000 |
| No log | 4.07 | 500 | 0.0688 | 0.5559 | 0.845 | 0.6706 | 0.3000 |
| No log | 4.07 | 500 | 0.0527 | 0.6806 | 0.65 | 0.6650 | 0.6 |
| No log | 4.07 | 500 | 0.0319 | 0.8305 | 0.8167 | 0.8235 | 0.6 |
| No log | 4.07 | 500 | 0.0537 | 0.5604 | 0.765 | 0.6469 | 0.3000 |
| No log | 4.07 | 500 | 0.0648 | 0.7103 | 0.76 | 0.7343 | 0.4 |
| No log | 4.07 | 500 | 0.0220 | 0.8036 | 0.75 | 0.7759 | 0.3000 |
| No log | 4.07 | 500 | 0.0295 | 0.7870 | 0.905 | 0.8419 | 0.4 |
| No log | 4.07 | 500 | 0.0886 | 0.7962 | 0.84 | 0.8175 | 0.099 |
| No log | 4.07 | 500 | 0.0974 | 0.4364 | 0.6 | 0.5053 | 0.6 |
| No log | 4.07 | 500 | 0.0061 | 0.9604 | 0.97 | 0.9652 | 0.5 |
| No log | 4.07 | 500 | 0.1781 | 0.5242 | 0.595 | 0.5574 | 0.048 |
| No log | 4.07 | 500 | 0.0518 | 0.8906 | 0.285 | 0.4318 | 0.8 |
| No log | 4.07 | 500 | 0.0857 | 0.4294 | 0.745 | 0.5448 | 0.3000 |
| No log | 4.07 | 500 | 0.1777 | 0.5632 | 0.78 | 0.6541 | 0.2 |
| No log | 4.07 | 500 | 0.1314 | 0.5248 | 0.795 | 0.6322 | 0.5 |
| No log | 4.07 | 500 | 0.1295 | 0.5 | 0.695 | 0.5816 | 0.029 |
| No log | 4.07 | 500 | 0.1552 | 0.7609 | 0.7 | 0.7292 | 0.2 |
| No log | 4.07 | 500 | 0.1124 | 0.6020 | 0.59 | 0.5960 | 0.8 |
| No log | 4.07 | 500 | 0.1049 | 0.5247 | 0.69 | 0.5961 | 0.4 |
| No log | 4.07 | 500 | 0.0873 | 0.7097 | 0.2211 | 0.3372 | 0.9 |
| No log | 4.07 | 500 | 0.1037 | 0.5785 | 0.645 | 0.6099 | 0.2 |
| No log | 4.07 | 500 | 0.0830 | 0.5938 | 0.6909 | 0.6387 | 0.3000 |
| No log | 4.07 | 500 | 0.0831 | 0.695 | 0.695 | 0.695 | 0.6 |
| No log | 4.07 | 500 | 0.0831 | 0.695 | 0.695 | 0.695 | 0.6 |
| No log | 4.07 | 500 | 0.0832 | 0.5397 | 0.85 | 0.6602 | 0.063 |
| No log | 4.07 | 500 | 0.1144 | 0.6931 | 0.7 | 0.6965 | 0.8 |
| No log | 4.07 | 500 | 0.0944 | 0.4861 | 0.785 | 0.6004 | 0.024 |
| No log | 4.07 | 500 | 0.1116 | 0.5728 | 0.59 | 0.5813 | 0.4 |
| No log | 4.07 | 500 | 0.1278 | 0.5519 | 0.585 | 0.5680 | 0.2 |
| No log | 4.07 | 500 | 0.0969 | 0.5290 | 0.775 | 0.6288 | 0.079 |
| No log | 4.07 | 500 | 0.1218 | 0.6316 | 0.78 | 0.6980 | 0.7000 |
| No log | 4.07 | 500 | 0.1890 | 0.3972 | 0.705 | 0.5081 | 0.0590 |
| No log | 4.07 | 500 | 0.1163 | 0.7044 | 0.715 | 0.7097 | 0.089 |
| No log | 4.07 | 500 | 0.1474 | 0.6632 | 0.63 | 0.6462 | 0.4 |
| No log | 4.07 | 500 | 0.0864 | 0.5356 | 0.79 | 0.6384 | 0.093 |
| No log | 4.07 | 500 | 0.0864 | 0.5356 | 0.79 | 0.6384 | 0.093 |
| No log | 4.07 | 500 | 0.0695 | 0.6897 | 0.4348 | 0.5333 | 0.4 |
| No log | 4.07 | 500 | 0.0695 | 0.6897 | 0.4348 | 0.5333 | 0.4 |
| No log | 4.07 | 500 | 0.0961 | 0.5309 | 0.73 | 0.6147 | 0.068 |
| No log | 4.07 | 500 | 0.0538 | 0.4601 | 0.49 | 0.4746 | 0.5 |
| No log | 4.07 | 500 | 0.0875 | 0.3636 | 0.6154 | 0.4571 | 0.098 |
| No log | 4.07 | 500 | 0.0664 | 0.5170 | 0.685 | 0.5892 | 0.5 |
| No log | 4.07 | 500 | 0.0756 | 0.4249 | 0.58 | 0.4905 | 0.2 |
| No log | 4.07 | 500 | 0.0874 | 0.5963 | 0.65 | 0.6220 | 0.4 |
| No log | 4.07 | 500 | 0.0833 | 0.5276 | 0.67 | 0.5903 | 0.6 |
| No log | 4.07 | 500 | 0.1175 | 0.5240 | 0.71 | 0.6030 | 0.0870 |
| No log | 4.07 | 500 | 0.0999 | 0.4444 | 0.4231 | 0.4335 | 0.3000 |
| No log | 4.07 | 500 | 0.3042 | 0.5592 | 0.685 | 0.6157 | 0.004 |
| No log | 4.07 | 500 | 0.1114 | 0.5226 | 0.695 | 0.5966 | 0.2 |
| No log | 4.07 | 500 | 0.1088 | 0.7861 | 0.735 | 0.7597 | 0.8 |
| No log | 4.07 | 500 | 0.1135 | 0.6880 | 0.805 | 0.7419 | 0.2 |
| No log | 4.07 | 500 | 0.1154 | 0.5495 | 0.75 | 0.6342 | 0.4 |
| No log | 4.07 | 500 | 0.1626 | 0.7293 | 0.835 | 0.7786 | 0.3000 |
| No log | 4.07 | 500 | 0.0901 | 0.4522 | 0.355 | 0.3978 | 0.0730 |
| No log | 4.07 | 500 | 0.0891 | 0.4257 | 0.53 | 0.4722 | 0.4 |
| No log | 4.07 | 500 | 0.0609 | 0.7984 | 0.97 | 0.8758 | 0.5 |
| No log | 4.07 | 500 | 0.0538 | 0.5774 | 0.485 | 0.5272 | 0.6 |
| No log | 4.07 | 500 | 0.0873 | 0.6802 | 0.84 | 0.7517 | 0.3000 |
| No log | 4.07 | 500 | 0.1416 | 0.5 | 0.6667 | 0.5714 | 0.067 |
| No log | 4.07 | 500 | 0.1175 | 0.5868 | 0.71 | 0.6425 | 0.6 |
| No log | 4.07 | 500 | 0.1015 | 0.5802 | 0.705 | 0.6366 | 0.5 |
| No log | 4.07 | 500 | 0.1013 | 0.5089 | 0.57 | 0.5377 | 0.2 |
| No log | 4.07 | 500 | 0.0937 | 0.5491 | 0.755 | 0.6358 | 0.2 |
| No log | 4.07 | 500 | 0.0702 | 0.5546 | 0.635 | 0.5921 | 0.5 |
| No log | 4.07 | 500 | 0.0397 | 0.8462 | 0.825 | 0.8354 | 0.4 |
| No log | 4.07 | 500 | 0.1319 | 0.4044 | 0.37 | 0.3864 | 0.2 |
| No log | 4.07 | 500 | 0.1101 | 0.5232 | 0.7940 | 0.6307 | 0.075 |
| No log | 4.07 | 500 | 0.1722 | 0.5698 | 0.4757 | 0.5185 | 0.033 |
| No log | 4.07 | 500 | 0.0745 | 0.5644 | 0.46 | 0.5069 | 0.6 |
| No log | 4.07 | 500 | 0.0698 | 0.6224 | 0.75 | 0.6803 | 0.2 |
| No log | 4.07 | 500 | 0.1313 | 0.6491 | 0.74 | 0.6916 | 0.3000 |
| No log | 4.07 | 500 | 0.1313 | 0.6491 | 0.74 | 0.6916 | 0.3000 |
| No log | 4.07 | 500 | 0.0622 | 0.5592 | 0.685 | 0.6157 | 0.4 |
| No log | 4.07 | 500 | 0.1194 | 0.6588 | 0.7020 | 0.6797 | 0.4 |
| No log | 4.07 | 500 | 0.0880 | 0.6130 | 0.7085 | 0.6573 | 0.7000 |
| No log | 4.07 | 500 | 0.1036 | 0.5714 | 0.76 | 0.6524 | 0.4 |
| No log | 4.07 | 500 | 0.0939 | 0.5326 | 0.775 | 0.6314 | 0.098 |
| No log | 4.07 | 500 | 0.0717 | 0.5446 | 0.825 | 0.6561 | 0.2 |
| No log | 4.07 | 500 | 0.1002 | 0.3767 | 0.71 | 0.4922 | 0.0730 |
| No log | 4.07 | 500 | 0.1195 | 0.5644 | 0.635 | 0.5976 | 0.6 |
| No log | 4.07 | 500 | 0.0954 | 0.6507 | 0.68 | 0.6650 | 0.4 |
| No log | 4.07 | 500 | 0.0748 | 0.6702 | 0.64 | 0.6547 | 0.5 |
| No log | 4.07 | 500 | 0.0718 | 0.7127 | 0.645 | 0.6772 | 0.5 |
| No log | 4.07 | 500 | 0.1672 | 0.4731 | 0.66 | 0.5511 | 0.021 |
| No log | 4.07 | 500 | 0.0675 | 0.4029 | 0.415 | 0.4089 | 0.2 |
| No log | 4.07 | 500 | 0.0796 | 0.4565 | 0.63 | 0.5294 | 0.4 |
| No log | 4.07 | 500 | 0.0672 | 0.7588 | 0.645 | 0.6973 | 0.5 |
| No log | 4.07 | 500 | 0.0755 | 0.5633 | 0.645 | 0.6014 | 0.5 |
| No log | 4.07 | 500 | 0.1065 | 0.6513 | 0.775 | 0.7078 | 0.0730 |
| No log | 4.07 | 500 | 0.0997 | 0.4548 | 0.755 | 0.5677 | 0.4 |
| No log | 4.07 | 500 | 0.1404 | 0.4123 | 0.835 | 0.5521 | 0.0300 |
| No log | 4.07 | 500 | 0.0913 | 0.6805 | 0.82 | 0.7438 | 0.5 |
| No log | 4.07 | 500 | 0.1067 | 0.4078 | 0.785 | 0.5368 | 0.012 |
| No log | 4.07 | 500 | 0.1067 | 0.4078 | 0.785 | 0.5368 | 0.012 |
| No log | 4.07 | 500 | 0.1067 | 0.4078 | 0.785 | 0.5368 | 0.012 |
| No log | 4.07 | 500 | 0.1067 | 0.4078 | 0.785 | 0.5368 | 0.012 |
| No log | 4.07 | 500 | 0.2054 | 0.2622 | 0.7839 | 0.3929 | 0.005 |
| No log | 4.07 | 500 | 0.1219 | 0.4638 | 0.8040 | 0.5882 | 0.4 |
| No log | 4.07 | 500 | 0.0246 | 0.9502 | 0.955 | 0.9526 | 0.3000 |
| No log | 4.07 | 500 | 0.0022 | 0.9852 | 1.0 | 0.9926 | 0.2 |
| No log | 4.07 | 500 | 0.0031 | 0.9900 | 0.995 | 0.9925 | 0.049 |
| No log | 4.07 | 500 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 4.07 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 4.07 | 500 | 0.0007 | 0.9950 | 1.0 | 0.9975 | 0.076 |
| No log | 4.07 | 500 | 0.0019 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 4.07 | 500 | 0.0017 | 0.9950 | 0.99 | 0.9925 | 0.7000 |
| No log | 4.07 | 500 | 0.0015 | 0.995 | 0.995 | 0.995 | 0.6 |
| No log | 4.07 | 500 | 0.0006 | 0.9950 | 1.0 | 0.9975 | 0.4 |
| No log | 4.07 | 500 | 0.0212 | 0.9839 | 0.915 | 0.9482 | 0.2 |
| No log | 4.07 | 500 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.067 |
| No log | 4.07 | 500 | 0.0401 | 0.9390 | 0.77 | 0.8462 | 0.2 |
| No log | 4.07 | 500 | 0.0021 | 0.9900 | 0.995 | 0.9925 | 0.6 |
| No log | 4.07 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 4.07 | 500 | 0.0047 | 1.0 | 0.985 | 0.9924 | 0.9 |
| No log | 4.07 | 500 | 0.0073 | 0.9559 | 0.975 | 0.9653 | 0.6 |
| No log | 4.07 | 500 | 0.0003 | 0.9950 | 1.0 | 0.9975 | 0.047 |
| No log | 4.07 | 500 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.023 |
| No log | 4.07 | 500 | 0.0022 | 1.0 | 0.995 | 0.9975 | 0.6 |
| No log | 4.07 | 500 | 0.0020 | 1.0 | 0.99 | 0.9950 | 0.8 |
| No log | 4.07 | 500 | 0.0122 | 0.9894 | 0.93 | 0.9588 | 0.9 |
| No log | 4.07 | 500 | 0.1244 | 0.3188 | 0.475 | 0.3815 | 0.3000 |
| No log | 4.07 | 500 | 0.1057 | 0.2921 | 0.3586 | 0.3220 | 0.2 |
| No log | 4.07 | 500 | 0.1839 | 0.5019 | 0.655 | 0.5683 | 0.4 |
| No log | 4.07 | 500 | 0.1800 | 0.4082 | 0.8 | 0.5405 | 0.05 |
| No log | 8.13 | 1000 | 0.1548 | 0.8080 | 0.905 | 0.8538 | 0.015 |
| No log | 8.13 | 1000 | 0.1774 | 0.4670 | 0.815 | 0.5938 | 0.9 |
| No log | 8.13 | 1000 | 0.1356 | 0.8471 | 0.72 | 0.7784 | 0.0300 |
| No log | 8.13 | 1000 | 0.1034 | 0.7407 | 0.9 | 0.8126 | 0.2 |
| No log | 8.13 | 1000 | 0.1269 | 0.7841 | 0.89 | 0.8337 | 0.2 |
| No log | 8.13 | 1000 | 0.0308 | 0.9474 | 0.99 | 0.9682 | 0.8 |
| No log | 8.13 | 1000 | 0.0566 | 0.8356 | 0.9196 | 0.8756 | 0.3000 |
| No log | 8.13 | 1000 | 0.0355 | 0.9343 | 0.995 | 0.9637 | 0.063 |
| No log | 8.13 | 1000 | 0.0468 | 0.9163 | 0.985 | 0.9494 | 0.5 |
| No log | 8.13 | 1000 | 0.2282 | 0.7257 | 0.82 | 0.7700 | 0.0090 |
| No log | 8.13 | 1000 | 0.0389 | 0.9336 | 0.985 | 0.9586 | 0.0710 |
| No log | 8.13 | 1000 | 0.0635 | 0.8407 | 0.95 | 0.8920 | 0.8 |
| No log | 8.13 | 1000 | 0.0319 | 0.9476 | 0.995 | 0.9707 | 0.3000 |
| No log | 8.13 | 1000 | 0.0624 | 0.9213 | 0.995 | 0.9567 | 0.9 |
| No log | 8.13 | 1000 | 0.0485 | 0.9132 | 1.0 | 0.9547 | 0.007 |
| No log | 8.13 | 1000 | 0.0394 | 0.9139 | 0.955 | 0.9340 | 0.5 |
| No log | 8.13 | 1000 | 0.0444 | 0.8967 | 0.9695 | 0.9317 | 0.9 |
| No log | 8.13 | 1000 | 0.0610 | 0.8832 | 0.945 | 0.9130 | 0.015 |
| No log | 8.13 | 1000 | 0.2421 | 0.7656 | 0.7387 | 0.7519 | 0.001 |
| No log | 8.13 | 1000 | 0.0433 | 0.9256 | 0.995 | 0.9590 | 0.6 |
| No log | 8.13 | 1000 | 0.0371 | 0.9333 | 0.91 | 0.9215 | 0.9 |
| No log | 8.13 | 1000 | 0.1793 | 0.8505 | 0.91 | 0.8792 | 0.021 |
| No log | 8.13 | 1000 | 0.0460 | 0.9247 | 0.86 | 0.8912 | 0.002 |
| No log | 8.13 | 1000 | 0.0946 | 0.8535 | 0.8579 | 0.8557 | 0.069 |
| No log | 8.13 | 1000 | 0.0719 | 0.9116 | 0.98 | 0.9446 | 0.3000 |
| No log | 8.13 | 1000 | 0.1733 | 0.7311 | 0.87 | 0.7945 | 0.0880 |
| No log | 8.13 | 1000 | 0.0227 | 0.8789 | 0.9849 | 0.9289 | 0.3000 |
| No log | 8.13 | 1000 | 0.0600 | 0.9061 | 0.965 | 0.9346 | 0.9 |
| No log | 8.13 | 1000 | 0.1077 | 0.7155 | 0.83 | 0.7685 | 0.5 |
| No log | 8.13 | 1000 | 0.0392 | 0.9471 | 0.985 | 0.9657 | 0.8 |
| No log | 8.13 | 1000 | 0.0872 | 0.9078 | 0.935 | 0.9212 | 0.3000 |
| No log | 8.13 | 1000 | 0.0591 | 0.9330 | 0.975 | 0.9535 | 0.9 |
| No log | 8.13 | 1000 | 0.1589 | 0.7794 | 0.795 | 0.7871 | 0.001 |
| No log | 8.13 | 1000 | 0.0399 | 0.9420 | 0.975 | 0.9582 | 0.011 |
| No log | 8.13 | 1000 | 0.0322 | 0.9412 | 0.96 | 0.9505 | 0.8 |
| No log | 8.13 | 1000 | 0.3311 | 0.7627 | 0.675 | 0.7162 | 0.002 |
| No log | 8.13 | 1000 | 0.0239 | 0.9231 | 0.96 | 0.9412 | 0.9 |
| No log | 8.13 | 1000 | 0.1539 | 0.9 | 0.9 | 0.9 | 0.021 |
| No log | 8.13 | 1000 | 0.1544 | 0.6564 | 0.7487 | 0.6995 | 0.034 |
| No log | 8.13 | 1000 | 0.1890 | 0.8105 | 0.77 | 0.7897 | 0.4 |
| No log | 8.13 | 1000 | 0.2044 | 0.7804 | 0.835 | 0.8068 | 0.007 |
| No log | 8.13 | 1000 | 0.0949 | 0.8652 | 0.77 | 0.8148 | 0.0180 |
| No log | 8.13 | 1000 | 0.1534 | 0.875 | 0.91 | 0.8922 | 0.0190 |
| No log | 8.13 | 1000 | 0.0224 | 0.9444 | 0.9397 | 0.9421 | 0.016 |
| No log | 8.13 | 1000 | 0.0289 | 0.8515 | 0.975 | 0.9091 | 0.077 |
| No log | 8.13 | 1000 | 0.0124 | 0.9245 | 0.98 | 0.9515 | 0.8 |
| No log | 8.13 | 1000 | 0.0262 | 0.9343 | 0.995 | 0.9637 | 0.094 |
| No log | 8.13 | 1000 | 0.1492 | 0.8194 | 0.885 | 0.8510 | 0.9 |
| No log | 8.13 | 1000 | 0.1898 | 0.9497 | 0.945 | 0.9474 | 0.001 |
| No log | 8.13 | 1000 | 0.0738 | 0.945 | 0.945 | 0.945 | 0.077 |
| No log | 8.13 | 1000 | 0.0538 | 0.9324 | 0.965 | 0.9484 | 0.9 |
| No log | 8.13 | 1000 | 0.0181 | 0.8341 | 0.9188 | 0.8744 | 0.7000 |
| No log | 8.13 | 1000 | 0.1633 | 0.8434 | 0.7 | 0.7650 | 0.039 |
| No log | 8.13 | 1000 | 0.1673 | 0.8306 | 0.76 | 0.7937 | 0.5 |
| No log | 8.13 | 1000 | 0.0493 | 0.9171 | 0.995 | 0.9544 | 0.4 |
| No log | 8.13 | 1000 | 0.0420 | 0.9479 | 1.0 | 0.9732 | 0.4 |
| No log | 8.13 | 1000 | 0.2667 | 0.6736 | 0.815 | 0.7376 | 0.095 |
| No log | 8.13 | 1000 | 0.0308 | 0.9426 | 0.985 | 0.9633 | 0.034 |
| No log | 8.13 | 1000 | 0.4276 | 0.6482 | 0.82 | 0.7241 | 0.006 |
| No log | 8.13 | 1000 | 0.0274 | 0.9387 | 0.995 | 0.9660 | 0.9 |
| No log | 8.13 | 1000 | 0.0261 | 0.9695 | 0.955 | 0.9622 | 0.9 |
| No log | 8.13 | 1000 | 0.0142 | 0.9032 | 0.98 | 0.9400 | 0.4 |
| No log | 8.13 | 1000 | 0.1448 | 0.8161 | 0.91 | 0.8605 | 0.008 |
| No log | 8.13 | 1000 | 0.0228 | 0.9519 | 0.99 | 0.9706 | 0.7000 |
| No log | 8.13 | 1000 | 0.0481 | 0.9289 | 0.98 | 0.9538 | 0.6 |
| No log | 8.13 | 1000 | 0.0457 | 0.8711 | 0.98 | 0.9224 | 0.7000 |
| No log | 8.13 | 1000 | 0.0321 | 0.9431 | 0.995 | 0.9684 | 0.015 |
| No log | 8.13 | 1000 | 0.0129 | 0.9706 | 0.99 | 0.9802 | 0.5 |
| No log | 8.13 | 1000 | 0.1091 | 0.7406 | 0.785 | 0.7621 | 0.064 |
| No log | 8.13 | 1000 | 0.1629 | 0.8317 | 0.84 | 0.8358 | 0.069 |
| No log | 8.13 | 1000 | 0.0475 | 0.8458 | 0.905 | 0.8744 | 0.2 |
| No log | 8.13 | 1000 | 0.1341 | 0.6503 | 0.93 | 0.7654 | 0.035 |
| No log | 8.13 | 1000 | 0.0486 | 0.9292 | 0.985 | 0.9563 | 0.2 |
| No log | 8.13 | 1000 | 0.0671 | 0.8945 | 0.975 | 0.9330 | 0.8 |
| No log | 8.13 | 1000 | 0.1011 | 0.6157 | 0.745 | 0.6742 | 0.3000 |
| No log | 8.13 | 1000 | 0.0854 | 0.7421 | 0.935 | 0.8274 | 0.033 |
| No log | 8.13 | 1000 | 0.0617 | 0.9324 | 0.965 | 0.9484 | 0.2 |
| No log | 8.13 | 1000 | 0.0399 | 0.8856 | 0.89 | 0.8878 | 0.049 |
| No log | 8.13 | 1000 | 0.2517 | 0.6496 | 0.76 | 0.7005 | 0.097 |
| No log | 8.13 | 1000 | 0.1427 | 0.7559 | 0.805 | 0.7797 | 0.002 |
| No log | 8.13 | 1000 | 0.0672 | 0.7934 | 0.8535 | 0.8224 | 0.078 |
| No log | 8.13 | 1000 | 0.1596 | 0.8044 | 0.905 | 0.8518 | 0.012 |
| No log | 8.13 | 1000 | 0.3664 | 0.5331 | 0.845 | 0.6538 | 0.015 |
| No log | 8.13 | 1000 | 0.1324 | 0.4453 | 0.835 | 0.5809 | 0.3000 |
| No log | 8.13 | 1000 | 0.1797 | 0.6025 | 0.735 | 0.6622 | 0.0090 |
| No log | 8.13 | 1000 | 0.0732 | 0.6548 | 0.825 | 0.7301 | 0.3000 |
| No log | 8.13 | 1000 | 0.2859 | 0.4904 | 0.77 | 0.5992 | 0.2 |
| No log | 8.13 | 1000 | 0.2414 | 0.6861 | 0.765 | 0.7234 | 0.8 |
| No log | 8.13 | 1000 | 0.1526 | 0.3119 | 0.3417 | 0.3261 | 0.6 |
| No log | 8.13 | 1000 | 0.1492 | 0.628 | 0.785 | 0.6978 | 0.097 |
| No log | 8.13 | 1000 | 0.1700 | 0.7 | 0.7 | 0.7 | 0.9 |
| No log | 8.13 | 1000 | 0.3515 | 0.5339 | 0.63 | 0.5780 | 0.04 |
| No log | 8.13 | 1000 | 0.1357 | 0.6157 | 0.785 | 0.6901 | 0.079 |
| No log | 8.13 | 1000 | 0.1198 | 0.6398 | 0.675 | 0.6569 | 0.3000 |
| No log | 8.13 | 1000 | 0.1559 | 0.6260 | 0.77 | 0.6906 | 0.2 |
| No log | 8.13 | 1000 | 0.1954 | 0.7178 | 0.865 | 0.7846 | 0.081 |
| No log | 8.13 | 1000 | 0.1536 | 0.7828 | 0.775 | 0.7789 | 0.8 |
| No log | 8.13 | 1000 | 0.1572 | 0.5850 | 0.74 | 0.6534 | 0.033 |
| No log | 8.13 | 1000 | 0.1675 | 0.5219 | 0.7789 | 0.6250 | 0.8 |
| No log | 8.13 | 1000 | 0.1550 | 0.4929 | 0.69 | 0.5750 | 0.049 |
| No log | 8.13 | 1000 | 0.3223 | 0.5607 | 0.6 | 0.5797 | 0.002 |
| No log | 8.13 | 1000 | 0.1781 | 0.6654 | 0.845 | 0.7445 | 0.4 |
| No log | 8.13 | 1000 | 0.1274 | 0.6566 | 0.65 | 0.6533 | 0.3000 |
| No log | 8.13 | 1000 | 0.3878 | 0.6450 | 0.745 | 0.6914 | 0.041 |
| No log | 8.13 | 1000 | 0.0958 | 0.6411 | 0.67 | 0.6553 | 0.0190 |
| No log | 8.13 | 1000 | 0.1584 | 0.6731 | 0.5330 | 0.5949 | 0.6 |
| No log | 8.13 | 1000 | 0.1982 | 0.6812 | 0.78 | 0.7273 | 0.3000 |
| No log | 8.13 | 1000 | 0.2229 | 0.5848 | 0.5 | 0.5391 | 0.5 |
| No log | 8.13 | 1000 | 0.1202 | 0.5112 | 0.57 | 0.5390 | 0.004 |
| No log | 8.13 | 1000 | 0.2236 | 0.5933 | 0.795 | 0.6795 | 0.3000 |
| No log | 8.13 | 1000 | 0.1281 | 0.5396 | 0.545 | 0.5423 | 0.3000 |
| No log | 8.13 | 1000 | 0.1821 | 0.6667 | 0.69 | 0.6781 | 0.3000 |
| No log | 8.13 | 1000 | 0.2032 | 0.7075 | 0.75 | 0.7282 | 0.7000 |
| No log | 8.13 | 1000 | 0.3147 | 0.5424 | 0.8 | 0.6465 | 0.025 |
| No log | 8.13 | 1000 | 0.2931 | 0.4277 | 0.665 | 0.5205 | 0.003 |
| No log | 8.13 | 1000 | 0.3339 | 0.5846 | 0.57 | 0.5772 | 0.003 |
| No log | 8.13 | 1000 | 0.1879 | 0.5547 | 0.71 | 0.6228 | 0.2 |
| No log | 8.13 | 1000 | 0.5092 | 0.6556 | 0.59 | 0.6211 | 0.001 |
| No log | 8.13 | 1000 | 0.1693 | 0.2893 | 0.35 | 0.3167 | 0.098 |
| No log | 8.13 | 1000 | 0.3279 | 0.6590 | 0.86 | 0.7462 | 0.0220 |
| No log | 8.13 | 1000 | 0.1374 | 0.6709 | 0.5327 | 0.5938 | 0.2 |
| No log | 8.13 | 1000 | 0.3388 | 0.6308 | 0.615 | 0.6228 | 0.3000 |
| No log | 8.13 | 1000 | 0.2354 | 0.6482 | 0.82 | 0.7241 | 0.001 |
| No log | 8.13 | 1000 | 0.1444 | 0.5490 | 0.7 | 0.6154 | 0.039 |
| No log | 8.13 | 1000 | 0.3582 | 0.6349 | 0.8 | 0.7080 | 0.023 |
| No log | 8.13 | 1000 | 0.1188 | 0.5683 | 0.6482 | 0.6056 | 0.8 |
| No log | 8.13 | 1000 | 0.1348 | 0.4908 | 0.665 | 0.5648 | 0.7000 |
| No log | 8.13 | 1000 | 0.0897 | 0.5901 | 0.475 | 0.5263 | 0.7000 |
| No log | 8.13 | 1000 | 0.1604 | 0.6378 | 0.81 | 0.7137 | 0.5 |
| No log | 8.13 | 1000 | 0.1659 | 0.5420 | 0.645 | 0.5890 | 0.099 |
| No log | 8.13 | 1000 | 0.2830 | 0.7417 | 0.89 | 0.8091 | 0.005 |
| No log | 8.13 | 1000 | 0.2385 | 0.6049 | 0.49 | 0.5414 | 0.1 |
| No log | 8.13 | 1000 | 0.2927 | 0.5927 | 0.735 | 0.6562 | 0.0600 |
| No log | 8.13 | 1000 | 0.0629 | 0.4956 | 0.5628 | 0.5271 | 0.0440 |
| No log | 8.13 | 1000 | 0.2110 | 0.5887 | 0.365 | 0.4506 | 0.094 |
| No log | 8.13 | 1000 | 0.4528 | 0.4101 | 0.445 | 0.4269 | 0.042 |
| No log | 8.13 | 1000 | 0.1790 | 0.6842 | 0.78 | 0.7290 | 0.9 |
| No log | 8.13 | 1000 | 0.1736 | 0.7277 | 0.815 | 0.7689 | 0.2 |
| No log | 8.13 | 1000 | 0.3480 | 0.4944 | 0.66 | 0.5653 | 0.024 |
| No log | 8.13 | 1000 | 0.1678 | 0.6667 | 0.71 | 0.6877 | 0.5 |
| No log | 8.13 | 1000 | 0.4181 | 0.6109 | 0.84 | 0.7074 | 0.005 |
| No log | 8.13 | 1000 | 0.1603 | 0.6063 | 0.77 | 0.6784 | 0.7000 |
| No log | 8.13 | 1000 | 0.1947 | 0.6985 | 0.695 | 0.6967 | 0.4 |
| No log | 8.13 | 1000 | 0.0681 | 0.5766 | 0.715 | 0.6384 | 0.7000 |
| No log | 8.13 | 1000 | 0.3464 | 0.52 | 0.65 | 0.5778 | 0.006 |
| No log | 8.13 | 1000 | 0.1498 | 0.5852 | 0.79 | 0.6723 | 0.6 |
| No log | 8.13 | 1000 | 0.1870 | 0.5540 | 0.795 | 0.6530 | 0.074 |
| No log | 8.13 | 1000 | 0.1372 | 0.5583 | 0.79 | 0.6542 | 0.4 |
| No log | 8.13 | 1000 | 0.2336 | 0.5603 | 0.79 | 0.6556 | 0.099 |
| No log | 8.13 | 1000 | 0.1644 | 0.7225 | 0.69 | 0.7059 | 0.3000 |
| No log | 8.13 | 1000 | 0.1924 | 0.5556 | 0.375 | 0.4478 | 0.2 |
| No log | 8.13 | 1000 | 0.3863 | 0.4689 | 0.64 | 0.5412 | 0.012 |
| No log | 8.13 | 1000 | 0.0992 | 0.5541 | 0.64 | 0.5940 | 0.2 |
| No log | 8.13 | 1000 | 0.1407 | 0.6339 | 0.935 | 0.7556 | 0.024 |
| No log | 8.13 | 1000 | 0.2950 | 0.6955 | 0.765 | 0.7286 | 0.006 |
| No log | 8.13 | 1000 | 0.1846 | 0.5811 | 0.77 | 0.6624 | 0.5 |
| No log | 8.13 | 1000 | 0.0902 | 0.5531 | 0.755 | 0.6385 | 0.4 |
| No log | 8.13 | 1000 | 0.0797 | 0.6620 | 0.715 | 0.6875 | 0.9 |
| No log | 8.13 | 1000 | 0.3335 | 0.5530 | 0.73 | 0.6293 | 0.0090 |
| No log | 8.13 | 1000 | 0.1312 | 0.4272 | 0.645 | 0.5139 | 0.3000 |
| No log | 8.13 | 1000 | 0.3613 | 0.5228 | 0.86 | 0.6503 | 0.0130 |
| No log | 8.13 | 1000 | 0.2635 | 0.3037 | 0.495 | 0.3764 | 0.001 |
| No log | 8.13 | 1000 | 0.1681 | 0.3397 | 0.8030 | 0.4775 | 0.007 |
| No log | 8.13 | 1000 | 0.2462 | 0.5667 | 0.765 | 0.6511 | 0.07 |
| No log | 8.13 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 8.13 | 1000 | 0.0142 | 0.7749 | 0.9086 | 0.8364 | 0.4 |
| No log | 8.13 | 1000 | 0.0051 | 0.9608 | 0.98 | 0.9703 | 0.3000 |
| No log | 8.13 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.028 |
| No log | 8.13 | 1000 | 0.0070 | 0.9825 | 1.0 | 0.9912 | 0.0220 |
| No log | 8.13 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 8.13 | 1000 | 0.0043 | 0.9947 | 1.0 | 0.9973 | 0.001 |
| No log | 8.13 | 1000 | 0.0056 | 0.9803 | 0.995 | 0.9876 | 0.2 |
| No log | 8.13 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 8.13 | 1000 | 0.0008 | 0.9901 | 1.0 | 0.9950 | 0.032 |
| No log | 8.13 | 1000 | 0.0005 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 8.13 | 1000 | 0.0066 | 0.9849 | 0.98 | 0.9825 | 0.021 |
| No log | 8.13 | 1000 | 0.0210 | 1.0 | 0.91 | 0.9529 | 0.6 |
| No log | 8.13 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.0140 |
| No log | 8.13 | 1000 | 0.0115 | 0.9895 | 0.94 | 0.9641 | 0.2 |
| No log | 8.13 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 8.13 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 8.13 | 1000 | 0.0030 | 0.99 | 0.99 | 0.99 | 0.3000 |
| No log | 8.13 | 1000 | 0.0026 | 0.9803 | 0.995 | 0.9876 | 0.048 |
| No log | 8.13 | 1000 | 0.0010 | 0.9901 | 1.0 | 0.9950 | 0.3000 |
| No log | 8.13 | 1000 | 0.0480 | 0.86 | 0.86 | 0.8600 | 0.5 |
| No log | 8.13 | 1000 | 0.0006 | 0.9950 | 1.0 | 0.9975 | 0.011 |
| No log | 8.13 | 1000 | 0.0036 | 0.9949 | 0.975 | 0.9848 | 0.9 |
| No log | 8.13 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.012 |
| No log | 8.13 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 8.13 | 1000 | 0.0027 | 0.9852 | 1.0 | 0.9926 | 0.04 |
| No log | 8.13 | 1000 | 0.0062 | 0.9851 | 0.995 | 0.9900 | 0.0180 |
| No log | 8.13 | 1000 | 0.0080 | 0.9455 | 0.955 | 0.9502 | 0.7000 |
| No log | 8.13 | 1000 | 0.0025 | 0.9901 | 1.0 | 0.9950 | 0.007 |
| No log | 8.13 | 1000 | 0.0255 | 1.0 | 0.94 | 0.9691 | 0.3000 |
| No log | 8.13 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.0180 |
| No log | 8.13 | 1000 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.7000 |
| No log | 8.13 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 8.13 | 1000 | 0.0029 | 0.9900 | 0.995 | 0.9925 | 0.2 |
| No log | 8.13 | 1000 | 0.0101 | 1.0 | 0.96 | 0.9796 | 0.6 |
| No log | 8.13 | 1000 | 0.0005 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 8.13 | 1000 | 0.0053 | 0.9792 | 1.0 | 0.9895 | 0.045 |
| No log | 8.13 | 1000 | 0.0088 | 0.9128 | 0.995 | 0.9522 | 0.011 |
| No log | 8.13 | 1000 | 0.0086 | 0.9615 | 1.0 | 0.9804 | 0.6 |
| No log | 8.13 | 1000 | 0.0044 | 0.9756 | 1.0 | 0.9877 | 0.007 |
| No log | 8.13 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 8.13 | 1000 | 0.0010 | 0.9950 | 1.0 | 0.9975 | 0.02 |
| No log | 8.13 | 1000 | 0.0018 | 0.9803 | 0.995 | 0.9876 | 0.061 |
| No log | 8.13 | 1000 | 0.0275 | 0.8904 | 0.975 | 0.9308 | 0.057 |
| No log | 8.13 | 1000 | 0.0009 | 1.0 | 0.995 | 0.9975 | 0.7000 |
| No log | 8.13 | 1000 | 0.0022 | 0.9900 | 0.995 | 0.9925 | 0.7000 |
| No log | 8.13 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 8.13 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.5 |
| No log | 8.13 | 1000 | 0.0076 | 0.9614 | 0.995 | 0.9779 | 0.7000 |
| No log | 8.13 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.4 |
| No log | 8.13 | 1000 | 0.0334 | 0.8488 | 0.87 | 0.8593 | 0.4 |
| No log | 8.13 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.001 |
| No log | 8.13 | 1000 | 0.0024 | 0.9851 | 0.995 | 0.9900 | 0.7000 |
| No log | 8.13 | 1000 | 0.0017 | 0.9900 | 0.995 | 0.9925 | 0.8 |
| No log | 8.13 | 1000 | 0.0019 | 0.995 | 0.995 | 0.995 | 0.2 |
| No log | 8.13 | 1000 | 0.0276 | 0.7944 | 0.985 | 0.8795 | 0.3000 |
| No log | 8.13 | 1000 | 0.0037 | 1.0 | 0.985 | 0.9924 | 0.7000 |
| No log | 8.13 | 1000 | 0.0339 | 0.9040 | 0.895 | 0.8995 | 0.9 |
| No log | 8.13 | 1000 | 0.0307 | 0.7471 | 0.6447 | 0.6921 | 0.4 |
| No log | 8.13 | 1000 | 0.0547 | 0.6495 | 0.695 | 0.6715 | 0.9 |
| No log | 8.13 | 1000 | 0.1962 | 0.6411 | 0.67 | 0.6553 | 0.9 |
| No log | 8.13 | 1000 | 0.2030 | 0.3906 | 0.4464 | 0.4167 | 0.3000 |
| No log | 8.13 | 1000 | 0.0383 | 0.7059 | 0.78 | 0.7411 | 0.2 |
| No log | 8.13 | 1000 | 0.1732 | 0.8045 | 0.7660 | 0.7847 | 0.7000 |
| No log | 8.13 | 1000 | 0.1441 | 0.6213 | 0.73 | 0.6713 | 0.8 |
| No log | 8.13 | 1000 | 0.0720 | 0.7156 | 0.78 | 0.7464 | 0.2 |
| No log | 8.13 | 1000 | 0.0892 | 0.72 | 0.72 | 0.72 | 0.6 |
| No log | 8.13 | 1000 | 0.0898 | 0.8122 | 0.8 | 0.8060 | 0.6 |
| No log | 8.13 | 1000 | 0.0620 | 0.6804 | 0.745 | 0.7112 | 0.3000 |
| No log | 8.13 | 1000 | 0.1775 | 0.5477 | 0.66 | 0.5986 | 0.9 |
| No log | 8.13 | 1000 | 0.0692 | 0.6456 | 0.665 | 0.6552 | 0.095 |
| No log | 8.13 | 1000 | 0.2204 | 0.4360 | 0.715 | 0.5417 | 0.9 |
| No log | 8.13 | 1000 | 0.1399 | 0.5387 | 0.73 | 0.6200 | 0.1 |
| No log | 8.13 | 1000 | 0.0465 | 0.9187 | 0.96 | 0.9389 | 0.005 |
| No log | 8.13 | 1000 | 0.1315 | 0.6309 | 0.735 | 0.6790 | 0.9 |
| No log | 8.13 | 1000 | 0.0962 | 0.4937 | 0.78 | 0.6047 | 0.0370 |
| No log | 8.13 | 1000 | 0.0968 | 0.6862 | 0.6515 | 0.6684 | 0.1 |
| No log | 8.13 | 1000 | 0.1026 | 0.7071 | 0.7035 | 0.7053 | 0.5 |
| No log | 8.13 | 1000 | 0.0795 | 0.6298 | 0.655 | 0.6422 | 0.5 |
| No log | 8.13 | 1000 | 0.0695 | 0.7264 | 0.73 | 0.7282 | 0.9 |
| No log | 8.13 | 1000 | 0.0647 | 0.7871 | 0.795 | 0.7910 | 0.025 |
| No log | 8.13 | 1000 | 0.1074 | 0.4828 | 0.63 | 0.5466 | 0.5 |
| No log | 8.13 | 1000 | 0.1075 | 0.5830 | 0.685 | 0.6299 | 0.8 |
| No log | 8.13 | 1000 | 0.1001 | 0.5814 | 0.75 | 0.6550 | 0.3000 |
| No log | 8.13 | 1000 | 0.1211 | 0.6190 | 0.39 | 0.4785 | 0.0100 |
| No log | 8.13 | 1000 | 0.0932 | 0.6327 | 0.62 | 0.6263 | 0.7000 |
| No log | 8.13 | 1000 | 0.1373 | 0.8868 | 0.705 | 0.7855 | 0.0090 |
| No log | 8.13 | 1000 | 0.1235 | 0.6133 | 0.69 | 0.6494 | 0.2 |
| No log | 8.13 | 1000 | 0.0589 | 0.9286 | 0.91 | 0.9192 | 0.9 |
| No log | 8.13 | 1000 | 0.0035 | 0.9950 | 0.99 | 0.9925 | 0.0190 |
| No log | 8.13 | 1000 | 0.1534 | 0.7385 | 0.72 | 0.7291 | 0.015 |
| No log | 8.13 | 1000 | 0.1298 | 0.4576 | 0.81 | 0.5848 | 0.5 |
| No log | 8.13 | 1000 | 0.1531 | 0.4201 | 0.855 | 0.5634 | 0.4 |
| No log | 8.13 | 1000 | 0.3574 | 0.3208 | 0.3617 | 0.34 | 0.007 |
| No log | 8.13 | 1000 | 0.0930 | 0.5215 | 0.545 | 0.5330 | 0.006 |
| No log | 8.13 | 1000 | 0.1228 | 0.6142 | 0.82 | 0.7024 | 0.9 |
| No log | 8.13 | 1000 | 0.1122 | 0.6386 | 0.795 | 0.7082 | 0.4 |
| No log | 8.13 | 1000 | 0.0883 | 0.7778 | 0.91 | 0.8387 | 0.9 |
| No log | 8.13 | 1000 | 0.1380 | 0.6255 | 0.76 | 0.6862 | 0.9 |
| No log | 8.13 | 1000 | 0.1089 | 0.4579 | 0.435 | 0.4462 | 0.016 |
| No log | 8.13 | 1000 | 0.1859 | 0.4978 | 0.575 | 0.5336 | 0.7000 |
| No log | 8.13 | 1000 | 0.0871 | 0.6314 | 0.805 | 0.7077 | 0.6 |
| No log | 8.13 | 1000 | 0.0770 | 0.6300 | 0.715 | 0.6698 | 0.8 |
| No log | 8.13 | 1000 | 0.0402 | 0.8868 | 0.7833 | 0.8319 | 0.9 |
| No log | 8.13 | 1000 | 0.0804 | 0.6199 | 0.685 | 0.6508 | 0.7000 |
| No log | 8.13 | 1000 | 0.0906 | 0.7116 | 0.765 | 0.7373 | 0.6 |
| No log | 8.13 | 1000 | 0.0264 | 0.7724 | 0.7917 | 0.7819 | 0.095 |
| No log | 8.13 | 1000 | 0.0377 | 0.8462 | 0.825 | 0.8354 | 0.8 |
| No log | 8.13 | 1000 | 0.1265 | 0.8308 | 0.81 | 0.8203 | 0.084 |
| No log | 8.13 | 1000 | 0.1408 | 0.5085 | 0.595 | 0.5484 | 0.9 |
| No log | 8.13 | 1000 | 0.0107 | 0.94 | 0.94 | 0.94 | 0.6 |
| No log | 8.13 | 1000 | 0.2398 | 0.5084 | 0.605 | 0.5525 | 0.0090 |
| No log | 8.13 | 1000 | 0.0746 | 0.4685 | 0.335 | 0.3907 | 0.7000 |
| No log | 8.13 | 1000 | 0.1090 | 0.4982 | 0.68 | 0.5751 | 0.4 |
| No log | 8.13 | 1000 | 0.2486 | 0.5930 | 0.765 | 0.6681 | 0.2 |
| No log | 8.13 | 1000 | 0.1815 | 0.5392 | 0.79 | 0.6410 | 0.6 |
| No log | 8.13 | 1000 | 0.1946 | 0.4645 | 0.72 | 0.5647 | 0.001 |
| No log | 8.13 | 1000 | 0.1989 | 0.7170 | 0.76 | 0.7379 | 0.0220 |
| No log | 8.13 | 1000 | 0.1928 | 0.5216 | 0.725 | 0.6067 | 0.9 |
| No log | 8.13 | 1000 | 0.1280 | 0.5597 | 0.68 | 0.6140 | 0.6 |
| No log | 8.13 | 1000 | 0.1143 | 0.3944 | 0.2814 | 0.3284 | 0.9 |
| No log | 8.13 | 1000 | 0.1220 | 0.5704 | 0.77 | 0.6553 | 0.0860 |
| No log | 8.13 | 1000 | 0.1155 | 0.5797 | 0.7273 | 0.6452 | 0.5 |
| No log | 8.13 | 1000 | 0.1092 | 0.6776 | 0.725 | 0.7005 | 0.7000 |
| No log | 8.13 | 1000 | 0.1092 | 0.6776 | 0.725 | 0.7005 | 0.7000 |
| No log | 8.13 | 1000 | 0.1137 | 0.5526 | 0.84 | 0.6667 | 0.011 |
| No log | 8.13 | 1000 | 0.1462 | 0.7351 | 0.68 | 0.7065 | 0.9 |
| No log | 8.13 | 1000 | 0.1190 | 0.5569 | 0.685 | 0.6143 | 0.021 |
| No log | 8.13 | 1000 | 0.1544 | 0.4936 | 0.775 | 0.6031 | 0.042 |
| No log | 8.13 | 1000 | 0.1545 | 0.56 | 0.7 | 0.6222 | 0.085 |
| No log | 8.13 | 1000 | 0.1283 | 0.5309 | 0.73 | 0.6147 | 0.033 |
| No log | 8.13 | 1000 | 0.1698 | 0.6290 | 0.78 | 0.6964 | 0.9 |
| No log | 8.13 | 1000 | 0.2498 | 0.4087 | 0.75 | 0.5291 | 0.012 |
| No log | 8.13 | 1000 | 0.1671 | 0.7067 | 0.735 | 0.7206 | 0.007 |
| No log | 8.13 | 1000 | 0.1986 | 0.6138 | 0.755 | 0.6771 | 0.097 |
| No log | 8.13 | 1000 | 0.1255 | 0.5709 | 0.765 | 0.6538 | 0.039 |
| No log | 8.13 | 1000 | 0.1255 | 0.5709 | 0.765 | 0.6538 | 0.039 |
| No log | 8.13 | 1000 | 0.0940 | 0.4219 | 0.5870 | 0.4909 | 0.064 |
| No log | 8.13 | 1000 | 0.0940 | 0.4219 | 0.5870 | 0.4909 | 0.064 |
| No log | 8.13 | 1000 | 0.1217 | 0.5462 | 0.71 | 0.6174 | 0.035 |
| No log | 8.13 | 1000 | 0.0755 | 0.4712 | 0.49 | 0.4804 | 0.8 |
| No log | 8.13 | 1000 | 0.1154 | 0.3030 | 0.7692 | 0.4348 | 0.0100 |
| No log | 8.13 | 1000 | 0.0904 | 0.5206 | 0.695 | 0.5953 | 0.6 |
| No log | 8.13 | 1000 | 0.0955 | 0.4631 | 0.565 | 0.5090 | 0.3000 |
| No log | 8.13 | 1000 | 0.1155 | 0.5670 | 0.74 | 0.6421 | 0.2 |
| No log | 8.13 | 1000 | 0.1179 | 0.6038 | 0.64 | 0.6214 | 0.9 |
| No log | 8.13 | 1000 | 0.1521 | 0.5525 | 0.71 | 0.6214 | 0.0440 |
| No log | 8.13 | 1000 | 0.1287 | 0.5125 | 0.3942 | 0.4457 | 0.6 |
| No log | 8.13 | 1000 | 0.3788 | 0.6047 | 0.65 | 0.6265 | 0.001 |
| No log | 8.13 | 1000 | 0.1500 | 0.5439 | 0.65 | 0.5923 | 0.3000 |
| No log | 8.13 | 1000 | 0.1191 | 0.8848 | 0.73 | 0.8 | 0.9 |
| No log | 8.13 | 1000 | 0.1370 | 0.6749 | 0.82 | 0.7404 | 0.005 |
| No log | 8.13 | 1000 | 0.1427 | 0.5568 | 0.76 | 0.6427 | 0.4 |
| No log | 8.13 | 1000 | 0.2239 | 0.7512 | 0.8 | 0.7748 | 0.5 |
| No log | 8.13 | 1000 | 0.1158 | 0.4457 | 0.39 | 0.4160 | 0.011 |
| No log | 8.13 | 1000 | 0.1229 | 0.3904 | 0.57 | 0.4634 | 0.2 |
| No log | 8.13 | 1000 | 0.0686 | 0.7984 | 0.97 | 0.8758 | 0.3000 |
| No log | 8.13 | 1000 | 0.0765 | 0.5848 | 0.5 | 0.5391 | 0.2 |
| No log | 8.13 | 1000 | 0.1206 | 0.6949 | 0.82 | 0.7523 | 0.4 |
| No log | 8.13 | 1000 | 0.2121 | 0.3846 | 0.8333 | 0.5263 | 0.003 |
| No log | 8.13 | 1000 | 0.1497 | 0.5736 | 0.76 | 0.6538 | 0.6 |
| No log | 8.13 | 1000 | 0.1455 | 0.5878 | 0.72 | 0.6472 | 0.7000 |
| No log | 8.13 | 1000 | 0.1469 | 0.5330 | 0.525 | 0.5290 | 0.2 |
| No log | 8.13 | 1000 | 0.1132 | 0.5662 | 0.77 | 0.6525 | 0.2 |
| No log | 8.13 | 1000 | 0.0976 | 0.5743 | 0.58 | 0.5771 | 0.7000 |
| No log | 8.13 | 1000 | 0.0598 | 0.8807 | 0.775 | 0.8245 | 0.5 |
| No log | 8.13 | 1000 | 0.1741 | 0.3696 | 0.425 | 0.3953 | 0.0730 |
| No log | 8.13 | 1000 | 0.1468 | 0.5743 | 0.7186 | 0.6384 | 0.085 |
| No log | 8.13 | 1000 | 0.2008 | 0.5814 | 0.4854 | 0.5291 | 0.012 |
| No log | 8.13 | 1000 | 0.0989 | 0.5152 | 0.51 | 0.5126 | 0.5 |
| No log | 8.13 | 1000 | 0.0899 | 0.6584 | 0.665 | 0.6617 | 0.4 |
| No log | 8.13 | 1000 | 0.1637 | 0.6300 | 0.86 | 0.7273 | 0.069 |
| No log | 8.13 | 1000 | 0.1637 | 0.6300 | 0.86 | 0.7273 | 0.069 |
| No log | 8.13 | 1000 | 0.0828 | 0.5321 | 0.745 | 0.6208 | 0.4 |
| No log | 8.13 | 1000 | 0.1696 | 0.6226 | 0.8081 | 0.7033 | 0.3000 |
| No log | 8.13 | 1000 | 0.0994 | 0.5992 | 0.7889 | 0.6811 | 0.3000 |
| No log | 8.13 | 1000 | 0.1615 | 0.6204 | 0.67 | 0.6442 | 0.9 |
| No log | 8.13 | 1000 | 0.1185 | 0.5272 | 0.775 | 0.6275 | 0.045 |
| No log | 8.13 | 1000 | 0.0886 | 0.6163 | 0.755 | 0.6787 | 0.3000 |
| No log | 8.13 | 1000 | 0.1441 | 0.4245 | 0.59 | 0.4937 | 0.0710 |
| No log | 8.13 | 1000 | 0.1637 | 0.5670 | 0.635 | 0.5991 | 0.8 |
| No log | 8.13 | 1000 | 0.1223 | 0.6157 | 0.785 | 0.6901 | 0.3000 |
| No log | 8.13 | 1000 | 0.0968 | 0.6789 | 0.645 | 0.6615 | 0.6 |
| No log | 8.13 | 1000 | 0.0837 | 0.6488 | 0.785 | 0.7104 | 0.3000 |
| No log | 8.13 | 1000 | 0.2052 | 0.5142 | 0.635 | 0.5682 | 0.0140 |
| No log | 8.13 | 1000 | 0.0885 | 0.4222 | 0.475 | 0.4471 | 0.082 |
| No log | 8.13 | 1000 | 0.1095 | 0.4638 | 0.64 | 0.5378 | 0.099 |
| No log | 8.13 | 1000 | 0.0797 | 0.6651 | 0.715 | 0.6892 | 0.4 |
| No log | 8.13 | 1000 | 0.1026 | 0.4611 | 0.86 | 0.6003 | 0.1 |
| No log | 8.13 | 1000 | 0.1574 | 0.6757 | 0.75 | 0.7109 | 0.0100 |
| No log | 8.13 | 1000 | 0.1376 | 0.552 | 0.69 | 0.6133 | 0.8 |
| No log | 8.13 | 1000 | 0.1749 | 0.4426 | 0.79 | 0.5673 | 0.0600 |
| No log | 8.13 | 1000 | 0.1263 | 0.6829 | 0.84 | 0.7534 | 0.5 |
| No log | 8.13 | 1000 | 0.1464 | 0.4248 | 0.72 | 0.5343 | 0.003 |
| No log | 8.13 | 1000 | 0.1464 | 0.4248 | 0.72 | 0.5343 | 0.003 |
| No log | 8.13 | 1000 | 0.1464 | 0.4248 | 0.72 | 0.5343 | 0.003 |
| No log | 8.13 | 1000 | 0.1464 | 0.4248 | 0.72 | 0.5343 | 0.003 |
| No log | 8.13 | 1000 | 0.2556 | 0.2788 | 0.6935 | 0.3977 | 0.002 |
| No log | 8.13 | 1000 | 0.1472 | 0.4409 | 0.8995 | 0.5917 | 0.097 |
| No log | 8.13 | 1000 | 0.0257 | 0.9543 | 0.94 | 0.9471 | 0.5 |
| No log | 8.13 | 1000 | 0.0020 | 0.9901 | 1.0 | 0.9950 | 0.2 |
| No log | 8.13 | 1000 | 0.0029 | 0.995 | 0.995 | 0.995 | 0.015 |
| No log | 8.13 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.6 |
| No log | 8.13 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.033 |
| No log | 8.13 | 1000 | 0.0010 | 0.9901 | 1.0 | 0.9950 | 0.008 |
| No log | 8.13 | 1000 | 0.0018 | 1.0 | 0.995 | 0.9975 | 0.2 |
| No log | 8.13 | 1000 | 0.0033 | 0.99 | 0.99 | 0.99 | 0.9 |
| No log | 8.13 | 1000 | 0.0023 | 0.9851 | 0.995 | 0.9900 | 0.083 |
| No log | 8.13 | 1000 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.9 |
| No log | 8.13 | 1000 | 0.0267 | 0.9786 | 0.915 | 0.9457 | 0.0440 |
| No log | 8.13 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 0.039 |
| No log | 8.13 | 1000 | 0.0503 | 0.9349 | 0.79 | 0.8564 | 0.1 |
| No log | 8.13 | 1000 | 0.0025 | 0.9852 | 1.0 | 0.9926 | 0.007 |
| No log | 8.13 | 1000 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.0130 |
| No log | 8.13 | 1000 | 0.0068 | 0.9898 | 0.975 | 0.9824 | 0.9 |
| No log | 8.13 | 1000 | 0.0092 | 0.9608 | 0.98 | 0.9703 | 0.5 |
| No log | 8.13 | 1000 | 0.0001 | 1.0 | 1.0 | 1.0 | 0.2 |
| No log | 8.13 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 0.061 |
| No log | 8.13 | 1000 | 0.0022 | 1.0 | 0.995 | 0.9975 | 0.5 |
| No log | 8.13 | 1000 | 0.0036 | 0.9803 | 0.995 | 0.9876 | 0.011 |
| No log | 8.13 | 1000 | 0.0175 | 0.9641 | 0.94 | 0.9519 | 0.9 |
| No log | 8.13 | 1000 | 0.1973 | 0.2459 | 0.675 | 0.3605 | 0.012 |
| No log | 8.13 | 1000 | 0.1486 | 0.3097 | 0.3310 | 0.32 | 0.3000 |
| No log | 8.13 | 1000 | 0.2422 | 0.5806 | 0.63 | 0.6043 | 0.7000 |
| No log | 8.13 | 1000 | 0.2493 | 0.4540 | 0.715 | 0.5553 | 0.054 |
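For reference when reading the table: the metadata for this card lists the metrics as precision, recall, and F1, and the F1 column is the harmonic mean of the precision and recall columns:

$$\mathrm{F1} = \frac{2 \cdot P \cdot R}{P + R}$$

For example, the last row above has $P = 0.4540$ and $R = 0.715$, giving $\mathrm{F1} \approx 0.5553$, which matches the reported value. The final column appears to be the probability threshold at which each row's metrics were computed; this is an inference from its values (all in $(0, 1)$), not something documented in the card.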
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
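The card itself includes no usage snippet. The sketch below shows only the generic Transformers token-classification pattern implied by the card's tags; the model id is taken from this card, while the label layout and the 0.5 threshold are illustrative assumptions (the tuned thresholds in the table above vary per evaluation). The `xlm-token` tag suggests a custom architecture, so loading may additionally require the `wtpsplit` library rather than the plain Auto classes.

```python
# Minimal usage sketch -- an assumption, not a verified recipe.
# Assumed environment (matching the versions listed above):
#   pip install transformers==4.39.1 torch==2.2.1
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "igorsterner/v2-WtP-FT-6L-256BS-UD"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "this is one sentence this is another"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, num_labels)

# Per-token probabilities; a decision threshold (cf. the table's final
# column) turns them into segmentation decisions. We assume here, purely
# for illustration, that the positive (boundary) label is the last class.
probs = torch.softmax(logits, dim=-1)
boundaries = probs[0, :, -1] > 0.5           # 0.5 is an illustrative threshold
print(boundaries)
```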
|
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1"], "model-index": [{"name": "v2-WtP-FT-6L-256BS-UD", "results": []}]}
|
igorsterner/v2-WtP-FT-6L-256BS-UD
| null |
[
"transformers",
"safetensors",
"xlm-token",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-23T16:31:10+00:00
|