pipeline_tag (stringclasses · 48 values) | library_name (stringclasses · 205 values) | text (stringlengths · 0–18.3M) | metadata (stringlengths · 2–1.07B) | id (stringlengths · 5–122) | last_modified (null) | tags (listlengths · 1–1.84k) | sha (null) | created_at (stringlengths · 25–25)
---|---|---|---|---|---|---|---|---|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-70m_mz-132_WordLength_n-its-10
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
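For reference, the values above map onto 🤗 `TrainingArguments` roughly as in the sketch below; the `output_dir` and the surrounding `Trainer` wiring are assumptions, not taken from this card.
```python
from transformers import TrainingArguments

# Minimal sketch of the hyperparameters listed above; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="robust_llm_pythia-70m_mz-132_WordLength_n-its-10",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```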
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_mz-132_WordLength_n-its-10", "results": []}]}
|
AlignmentResearch/robust_llm_pythia-70m_mz-132_WordLength_n-its-10
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T22:22:25+00:00
|
null |
fastai
|
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
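Until this card is filled in, the snippet below is a minimal sketch of how a fastai model hosted on the Hub can be loaded, assuming the repo contains a standard fastai `Learner` export; the prediction call is illustrative only.
```python
from huggingface_hub import from_pretrained_fastai

# Load the Learner directly from the Hub (assumes a standard fastai export).
learner = from_pretrained_fastai("iamacaru/simpsons")
# prediction = learner.predict("path/to/image.jpg")  # hypothetical input
```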
|
{"tags": ["fastai"]}
|
iamacaru/simpsons
| null |
[
"fastai",
"has_space",
"region:us"
] | null |
2024-04-24T22:24:04+00:00
|
text-to-image
|
diffusers
|
# Endless Reality
Version 1.0 of this model with the 840KVAE baked in. Comparison:

Samples and prompts:

Top left: detailed postcard movie perfect face, full body, baby, masterpiece, highest quality, crowd, realistic eyes, Pretty CUTE GIRL, sweater, skirt,
Top right: Hyperrealistic 1990 movie screenshot Santa Claus with wife and daughter enjoying wine with candles. sitting with a pretty cute little girl, Closeup Faces, Gift Birthday Theme by Gil_Elvgren and Haddon_Sundblom
Bottom left: analog style 70s color photograph of young Chuck Norris, muscular, frying eggs on freezer, swirl magic, solo, from side, side view, detailed background, detailed face, Golden Tech, scifi, timeless wanderer, endless landscape, circular patterns, time magic, time, standing still, bloom light aura, desert dunes in background, ethereal atmosphere
Bottom right: retro style 70s color movie still of beautiful face, young pretty Christina Aguilera voluptuous at a neon convenience storefront
Original page:
https://civitai.com/models/25573?modelVersionId=30619
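Since the repo is tagged `diffusers:StableDiffusionPipeline`, a minimal usage sketch could look like the following; the CUDA device and fp16 dtype are assumptions, and the prompt is taken from the samples above.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/endlessReality",
    torch_dtype=torch.float16,  # assumption; use float32 on CPU
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

prompt = ("retro style 70s color movie still of beautiful face, "
          "young pretty Christina Aguilera voluptuous at a neon convenience storefront")
image = pipe(prompt).images[0]
image.save("endless_reality.png")
```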
|
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["Realism", "Scifi", "Portrait", "davcha", "stable-diffusion", "stable-diffusion-diffusers", "diffusers", "text-to-image"], "pipeline_tag": "text-to-image"}
|
Yntec/endlessReality
| null |
[
"diffusers",
"safetensors",
"Realism",
"Scifi",
"Portrait",
"davcha",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null |
2024-04-24T22:25:00+00:00
|
null |
fastai
|
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
{"tags": ["fastai"]}
|
miibanl/CochesCamionesTrenesMotosAutobuses
| null |
[
"fastai",
"has_space",
"region:us"
] | null |
2024-04-24T22:26:34+00:00
|
null | null |
{}
|
SelectOkay/Rise_Kujikawa
| null |
[
"region:us"
] | null |
2024-04-24T22:27:46+00:00
|
|
null | null |
{"license": "openrail"}
|
bellswayer/graga
| null |
[
"license:openrail",
"region:us"
] | null |
2024-04-24T22:28:20+00:00
|
|
null | null |
{}
|
tonymds/tonymds
| null |
[
"region:us"
] | null |
2024-04-24T22:30:00+00:00
|
|
text-to-image
|
diffusers
|
This repo contains a diffusers-format version of the PixArt-Sigma repos
PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers
PixArt-alpha/PixArt-Sigma-XL-2-2K-MS
with the models loaded and saved in fp16 and bf16 formats, roughly halving their sizes.
It can be used where download bandwidth, memory or disk space is relatively limited, on a T4 Colab instance for example.
**NOTE: This model has been converted but not successfully tested. During the memory-efficient attention step it allocates a 16 GB buffer, which appears to break an MPS limitation, but it may also mean it requires more than 16 GB even with the 16-bit model.**
Those with more memory on non-MPS GPUs should have more luck running the diffusers script below.
A diffusers script looks like this; **currently (25th April 2024) you will need to install diffusers from source**.
```py
import random
import sys

import torch
from diffusers import PixArtSigmaPipeline

device = 'mps'
weight_dtype = torch.bfloat16

pipe = PixArtSigmaPipeline.from_pretrained(
    "Vargol/PixArt-Sigma_2k_16bit",
    torch_dtype=weight_dtype,
    variant="fp16",
    use_safetensors=True,
)
# Enable memory optimizations.
# pipe.enable_model_cpu_offload()
pipe.to(device)

prompt = "Cinematic science fiction film still. A cybernetic demon awaits her friend in a bar selling flaming oil drinks. The barman is a huge tree being, towering over the demon"

for i in range(4):
    seed = random.randint(0, sys.maxsize)
    generator = torch.Generator(device).manual_seed(seed)
    image = pipe(prompt, generator=generator, num_inference_steps=40).images[0]
    image.save(f"pas_{seed}.png")
```
|
{"license": "openrail++"}
|
Vargol/PixArt-Sigma_2k_16bit
| null |
[
"diffusers",
"safetensors",
"license:openrail++",
"diffusers:PixArtSigmaPipeline",
"region:us"
] | null |
2024-04-24T22:31:06+00:00
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
rizwan-ai/mistral_7b-instruct-guanaco
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T22:32:00+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
RobertML/sn6-fast-train
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T22:32:54+00:00
|
null | null |
{"license": "openrail"}
|
Alys9047/Draco
| null |
[
"license:openrail",
"region:us"
] | null |
2024-04-24T22:33:02+00:00
|
|
image-classification
| null |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Using the dataset provided, only the spheroids were used for training. Detection accuracy is below 10%, with many duplicate detections.
This version is not useful.
## Model Details
### Model Description
- **Developed by:** Jeroen den Otter
- **Funded by:** Minnesota State University | Physics and Astronomy department
- **Model type:** YoloV8 Extensive
- **Language(s) (NLP):** Python
- **License:** Apache 2.0
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/ultralytics/ultralytics
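For completeness, a rough sketch of running a YOLOv8 checkpoint from this repo with the `ultralytics` package; the weights filename and input image are hypothetical, so check the repo's files for the actual checkpoint.
```python
from ultralytics import YOLO

model = YOLO("best.pt")  # hypothetical filename; use the repo's actual checkpoint
results = model.predict("galaxy_image.jpg")  # hypothetical input image
for r in results:
    print(r.boxes)  # candidate spheroid detections
```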
|
{"language": ["en"], "license": "apache-2.0", "datasets": ["IT-Guy007/Galaxy-detection"], "metrics": ["accuracy"], "pipeline_tag": "image-classification"}
|
IT-Guy007/YoloV8el-v1
| null |
[
"image-classification",
"en",
"dataset:IT-Guy007/Galaxy-detection",
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T22:33:05+00:00
|
sentence-similarity
|
sentence-transformers
|
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
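Beyond printing raw embeddings, the embeddings can drive semantic search directly; a minimal sketch with toy placeholder sentences, using `sentence_transformers.util`:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')

corpus = ["The cat sits outside", "A man is playing guitar", "The new movie is awesome"]
query = "Someone is making music"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the best-matching corpus entry for the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)[0]
print(corpus[hits[0]['corpus_id']], hits[0]['score'])
```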
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15716 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
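Put together, these parameters correspond roughly to the `fit()` call sketched below; `train_dataloader` is assumed to be the 15716-batch MarginMSE DataLoader described above.
```python
from sentence_transformers import losses

train_loss = losses.MarginMSELoss(model=model)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],  # train_dataloader assumed above
    epochs=2,
    scheduler="WarmupLinear",
    warmup_steps=1000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```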
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
|
redscroll/msmarco-mpnet
| null |
[
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T22:34:02+00:00
|
text-generation
|
transformers
|

# T3Q-Llama3-8B-Inst-sft1.0
## This model is a version of meta-llama/Meta-Llama-3-8B-Instruct that has been fine-tuned with SFT.
## Model Developers: Chihoon Lee (chihoonlee10), T3Q
#### Transformers pipeline
```python
import transformers
import torch
model_id = "chlee10/T3Q-Llama3-8B-Inst-sft1.0"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "chlee10/T3Q-Llama3-8B-Inst-sft1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
hf (pretrained=chlee10/T3Q-Llama3-8B-Inst-sft1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
```
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.5114|± |0.0133|
| | |macro_f1|0.3546|± |0.0080|
|kobest_copa | 0|acc |0.6000|± |0.0155|
| | |macro_f1|0.5997|± |0.0155|
|kobest_hellaswag| 0|acc |0.4120|± |0.0220|
| | |acc_norm|0.5380|± |0.0223|
| | |macro_f1|0.4084|± |0.0219|
|kobest_sentineg | 0|acc |0.5063|± |0.0251|
| | |macro_f1|0.3616|± |0.0169|
```
|
{"license": "apache-2.0", "library_name": "transformers", "datasets": ["maywell/ko_Ultrafeedback_binarized"], "pipeline_tag": "text-generation", "base model": ["meta-llama/Meta-Llama-3-8B-Instruct"]}
|
chlee10/T3Q-Llama3-8B-Inst-sft1.0
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:maywell/ko_Ultrafeedback_binarized",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T22:35:21+00:00
|
unconditional-image-generation
|
diffusers
|
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('tuandunghcmut/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
|
tuandunghcmut/sd-class-butterflies-32
| null |
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null |
2024-04-24T22:35:38+00:00
|
question-answering
|
transformers
|
{}
|
jyanimaulik/bert-finetuned-squad-accelerate
| null |
[
"transformers",
"safetensors",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T22:39:02+00:00
|
|
null | null |
# kat33/Mixtral-8x7B-Instruct-v0.1-Q3_K_S-GGUF
This model was converted to GGUF format from [`mistralai/Mixtral-8x7B-Instruct-v0.1`](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo kat33/Mixtral-8x7B-Instruct-v0.1-Q3_K_S-GGUF --model mixtral-8x7b-instruct-v0.1.Q3_K_S.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo kat33/Mixtral-8x7B-Instruct-v0.1-Q3_K_S-GGUF --model mixtral-8x7b-instruct-v0.1.Q3_K_S.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixtral-8x7b-instruct-v0.1.Q3_K_S.gguf -n 128
```
|
{"language": ["fr", "it", "de", "es", "en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "inference": {"parameters": {"temperature": 0.5}}, "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
|
kat33/Mixtral-8x7B-Instruct-v0.1-Q3_K_S-GGUF
| null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T22:39:45+00:00
|
null | null |
{"license": "mit"}
|
cambridge-climb/CAT-CamBabyTokenizer
| null |
[
"license:mit",
"region:us"
] | null |
2024-04-24T22:39:49+00:00
|
|
null | null |
# Defeating Fungal Infection with FungoKiller: The Key to Healthy Feet and Beautiful Nails
Fungal infection of the nails and feet is a common problem that can cause annoyance, discomfort and embarrassment. Fortunately, an effective solution is available: FungoKiller. In this article, we will explore the benefits of this innovative product and how it can help you get rid of fungal infection once and for all.
## What Is FungoKiller?
FungoKiller is a topical treatment designed to fight fungal infection of the nails and the skin of the feet. Its unique formula was developed to provide fast, lasting relief, effectively eliminating the fungus and restoring the health of the nails and skin.
[__Click here to learn more >>__](//mandarv.com/gRtS?sub1=Fungokiller)
## How Does FungoKiller Work?
FungoKiller's powerful formula acts in several ways:
1. **Disinfection and Destruction:** It eliminates the fungus and destroys its structure, preventing its spread and reappearance.
2. **Fast Relief:** It reduces itching, flaking and skin irritation, easing the discomfort caused by the fungal infection.
3. **Restoring Nail Health:** It promotes the regeneration of damaged nails and heals wounds and cracks on the feet.
4. **Odor Neutralization:** It eliminates the unpleasant odors associated with fungal infection, restoring freshness and comfort.
5. **Immune System Support:** It helps strengthen the immune defenses, helping the body fight off future fungal attacks.
## Why Choose FungoKiller?
1. **Proven Effectiveness:** FungoKiller has been clinically tested and shown to be effective in treating fungal infection.
2. **Easy to Use:** Its cream formula is simple to apply and absorbs quickly, without leaving oily residue.
3. **No Side Effects:** FungoKiller is safe to use and does not cause unwanted side effects.
4. **Affordable:** Available online on our official website with special discounts and promotional offers.
## Our Customers' Satisfaction
Our customers are our priority, and their positive experiences with FungoKiller are the best testimony to our effectiveness. Read the reviews of our satisfied customers and find out why FungoKiller is the preferred choice for fighting fungal infection.
In conclusion, if you are looking for an effective and safe solution to defeat fungal infection, FungoKiller is the answer. Order today and bring health and beauty back to your feet with FungoKiller.
[__Click here to order >>__](//mandarv.com/gRtS?sub1=Fungokiller)
---
### Links:
https://colibris-wiki.org/lestribusarcenciel/?Fungokiller-Italia
http://goodpa.regione.marche.it/user/fungokillerit24
https://www.opendata.nhs.scot/user/fungokillerit24
https://rciims.mona.uwi.edu/user/fungokillerit24
https://data.illinois.gov/user/fungokillerit24
https://groups.google.com/g/addio-ai-funghi-con-fungokiller
https://huggingface.co/fafab34728/fungokiller-in-italia
https://euvita.blogspot.com/2024/04/addio-infezioni-fungine-con-fungokiller.html
http://pras.ambiente.gob.ec/en/web/euvita/home/-/asset_publisher/lw675zJt7cqN/blog/fungokiller-in-italia-a-cosa-serve-test-recensioni-prezzo-in-farmacia
https://www.eventbrite.com/e/fungokiller-a-cosa-serve-fungokiller-recensioni-fungokiller-prezzo-fungo-tickets-890697378837
|
{}
|
fafab34728/fungokiller-in-italia
| null |
[
"region:us"
] | null |
2024-04-24T22:41:46+00:00
|
zero-shot-image-classification
|
transformers.js
|
https://github.com/apple/ml-mobileclip with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform zero-shot image classification.
```js
import {
    AutoTokenizer,
    CLIPTextModelWithProjection,
    AutoProcessor,
    CLIPVisionModelWithProjection,
    RawImage,
    dot,
    softmax,
} from '@xenova/transformers';

const model_id = 'Xenova/mobileclip_s0';

// Load tokenizer and text model
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
const text_model = await CLIPTextModelWithProjection.from_pretrained(model_id);

// Load processor and vision model
const processor = await AutoProcessor.from_pretrained(model_id);
const vision_model = await CLIPVisionModelWithProjection.from_pretrained(model_id, {
    quantized: false, // NOTE: vision model is sensitive to quantization.
});

// Run tokenization
const texts = ['cats', 'dogs', 'birds'];
const text_inputs = tokenizer(texts, { padding: 'max_length', truncation: true });

// Compute text embeddings
const { text_embeds } = await text_model(text_inputs);
const normalized_text_embeds = text_embeds.normalize().tolist();

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cats.jpg';
const image = await RawImage.read(url);
const image_inputs = await processor(image);

// Compute vision embeddings
const { image_embeds } = await vision_model(image_inputs);
const normalized_image_embeds = image_embeds.normalize().tolist();

// Compute probabilities
const probabilities = normalized_image_embeds.map(
    x => softmax(normalized_text_embeds.map(y => 100 * dot(x, y)))
);
console.log(probabilities); // [[ 0.9989384093386391, 0.001060433633052551, 0.000001157028308360134 ]]
```
|
{"license": "other", "library_name": "transformers.js", "tags": ["mobileclip", "image-feature-extraction", "feature-extraction"], "pipeline_tag": "zero-shot-image-classification"}
|
Xenova/mobileclip_s0
| null |
[
"transformers.js",
"onnx",
"clip",
"mobileclip",
"image-feature-extraction",
"feature-extraction",
"zero-shot-image-classification",
"license:other",
"region:us"
] | null |
2024-04-24T22:41:51+00:00
|
zero-shot-image-classification
|
transformers.js
|
https://github.com/apple/ml-mobileclip with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform zero-shot image classification.
```js
import {
    AutoTokenizer,
    CLIPTextModelWithProjection,
    AutoProcessor,
    CLIPVisionModelWithProjection,
    RawImage,
    dot,
    softmax,
} from '@xenova/transformers';

const model_id = 'Xenova/mobileclip_s1';

// Load tokenizer and text model
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
const text_model = await CLIPTextModelWithProjection.from_pretrained(model_id);

// Load processor and vision model
const processor = await AutoProcessor.from_pretrained(model_id);
const vision_model = await CLIPVisionModelWithProjection.from_pretrained(model_id, {
    quantized: false, // NOTE: vision model is sensitive to quantization.
});

// Run tokenization
const texts = ['cats', 'dogs', 'birds'];
const text_inputs = tokenizer(texts, { padding: 'max_length', truncation: true });

// Compute text embeddings
const { text_embeds } = await text_model(text_inputs);
const normalized_text_embeds = text_embeds.normalize().tolist();

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cats.jpg';
const image = await RawImage.read(url);
const image_inputs = await processor(image);

// Compute vision embeddings
const { image_embeds } = await vision_model(image_inputs);
const normalized_image_embeds = image_embeds.normalize().tolist();

// Compute probabilities
const probabilities = normalized_image_embeds.map(
    x => softmax(normalized_text_embeds.map(y => 100 * dot(x, y)))
);
console.log(probabilities); // [[ 0.9999744722905349, 0.0000217474276948055, 0.00000378028177032859 ]]
```
|
{"license": "other", "library_name": "transformers.js", "tags": ["mobileclip", "image-feature-extraction", "feature-extraction"], "pipeline_tag": "zero-shot-image-classification"}
|
Xenova/mobileclip_s1
| null |
[
"transformers.js",
"onnx",
"clip",
"mobileclip",
"image-feature-extraction",
"feature-extraction",
"zero-shot-image-classification",
"license:other",
"region:us"
] | null |
2024-04-24T22:42:01+00:00
|
zero-shot-image-classification
|
transformers.js
|
https://github.com/apple/ml-mobileclip with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform zero-shot image classification.
```js
import {
    AutoTokenizer,
    CLIPTextModelWithProjection,
    AutoProcessor,
    CLIPVisionModelWithProjection,
    RawImage,
    dot,
    softmax,
} from '@xenova/transformers';

const model_id = 'Xenova/mobileclip_s2';

// Load tokenizer and text model
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
const text_model = await CLIPTextModelWithProjection.from_pretrained(model_id);

// Load processor and vision model
const processor = await AutoProcessor.from_pretrained(model_id);
const vision_model = await CLIPVisionModelWithProjection.from_pretrained(model_id, {
    quantized: false, // NOTE: vision model is sensitive to quantization.
});

// Run tokenization
const texts = ['cats', 'dogs', 'birds'];
const text_inputs = tokenizer(texts, { padding: 'max_length', truncation: true });

// Compute text embeddings
const { text_embeds } = await text_model(text_inputs);
const normalized_text_embeds = text_embeds.normalize().tolist();

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cats.jpg';
const image = await RawImage.read(url);
const image_inputs = await processor(image);

// Compute vision embeddings
const { image_embeds } = await vision_model(image_inputs);
const normalized_image_embeds = image_embeds.normalize().tolist();

// Compute probabilities
const probabilities = normalized_image_embeds.map(
    x => softmax(normalized_text_embeds.map(y => 100 * dot(x, y)))
);
console.log(probabilities); // [[ 0.9999973851268408, 0.000002399646544186113, 2.1522661499262862e-7 ]]
```
|
{"license": "other", "library_name": "transformers.js", "tags": ["mobileclip", "image-feature-extraction", "feature-extraction"], "pipeline_tag": "zero-shot-image-classification"}
|
Xenova/mobileclip_s2
| null |
[
"transformers.js",
"onnx",
"clip",
"mobileclip",
"image-feature-extraction",
"feature-extraction",
"zero-shot-image-classification",
"license:other",
"region:us"
] | null |
2024-04-24T22:42:10+00:00
|
zero-shot-image-classification
|
transformers.js
|
https://github.com/apple/ml-mobileclip with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform zero-shot image classification.
```js
import {
    AutoTokenizer,
    CLIPTextModelWithProjection,
    AutoProcessor,
    CLIPVisionModelWithProjection,
    RawImage,
    dot,
    softmax,
} from '@xenova/transformers';

const model_id = 'Xenova/mobileclip_b';

// Load tokenizer and text model
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
const text_model = await CLIPTextModelWithProjection.from_pretrained(model_id);

// Load processor and vision model
const processor = await AutoProcessor.from_pretrained(model_id);
const vision_model = await CLIPVisionModelWithProjection.from_pretrained(model_id, {
    quantized: false, // NOTE: vision model is sensitive to quantization.
});

// Run tokenization
const texts = ['cats', 'dogs', 'birds'];
const text_inputs = tokenizer(texts, { padding: 'max_length', truncation: true });

// Compute text embeddings
const { text_embeds } = await text_model(text_inputs);
const normalized_text_embeds = text_embeds.normalize().tolist();

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cats.jpg';
const image = await RawImage.read(url);
const image_inputs = await processor(image);

// Compute vision embeddings
const { image_embeds } = await vision_model(image_inputs);
const normalized_image_embeds = image_embeds.normalize().tolist();

// Compute probabilities
const probabilities = normalized_image_embeds.map(
    x => softmax(normalized_text_embeds.map(y => 100 * dot(x, y)))
);
console.log(probabilities); // [[ 0.999993040175817, 0.000006828091823929405, 1.3173235896278122e-7 ]]
```
|
{"license": "other", "library_name": "transformers.js", "tags": ["mobileclip", "image-feature-extraction", "feature-extraction"], "pipeline_tag": "zero-shot-image-classification"}
|
Xenova/mobileclip_b
| null |
[
"transformers.js",
"onnx",
"clip",
"mobileclip",
"image-feature-extraction",
"feature-extraction",
"zero-shot-image-classification",
"license:other",
"region:us"
] | null |
2024-04-24T22:42:21+00:00
|
zero-shot-image-classification
|
transformers.js
|
https://github.com/apple/ml-mobileclip with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform zero-shot image classification.
```js
import {
    AutoTokenizer,
    CLIPTextModelWithProjection,
    AutoProcessor,
    CLIPVisionModelWithProjection,
    RawImage,
    dot,
    softmax,
} from '@xenova/transformers';

const model_id = 'Xenova/mobileclip_blt';

// Load tokenizer and text model
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
const text_model = await CLIPTextModelWithProjection.from_pretrained(model_id);

// Load processor and vision model
const processor = await AutoProcessor.from_pretrained(model_id);
const vision_model = await CLIPVisionModelWithProjection.from_pretrained(model_id, {
    quantized: false, // NOTE: vision model is sensitive to quantization.
});

// Run tokenization
const texts = ['cats', 'dogs', 'birds'];
const text_inputs = tokenizer(texts, { padding: 'max_length', truncation: true });

// Compute text embeddings
const { text_embeds } = await text_model(text_inputs);
const normalized_text_embeds = text_embeds.normalize().tolist();

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cats.jpg';
const image = await RawImage.read(url);
const image_inputs = await processor(image);

// Compute vision embeddings
const { image_embeds } = await vision_model(image_inputs);
const normalized_image_embeds = image_embeds.normalize().tolist();

// Compute probabilities
const probabilities = normalized_image_embeds.map(
    x => softmax(normalized_text_embeds.map(y => 100 * dot(x, y)))
);
console.log(probabilities); // [[ 0.9999057403656509, 0.00009141888000214805, 0.0000028407543469763894 ]]
```
|
{"license": "other", "library_name": "transformers.js", "tags": ["mobileclip", "image-feature-extraction", "feature-extraction"], "pipeline_tag": "zero-shot-image-classification"}
|
Xenova/mobileclip_blt
| null |
[
"transformers.js",
"onnx",
"clip",
"mobileclip",
"image-feature-extraction",
"feature-extraction",
"zero-shot-image-classification",
"license:other",
"region:us",
"has_space"
] | null |
2024-04-24T22:42:37+00:00
|
audio-classification
|
transformers
|
{}
|
yranawat/results
| null |
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T22:42:52+00:00
|
|
text-generation
|
transformers
|
{}
|
liminerity/local-model
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-24T22:43:47+00:00
|
|
null | null |
{"license": "mit"}
|
cambridge-climb/NLD-CamBabyTokenizer
| null |
[
"license:mit",
"region:us"
] | null |
2024-04-24T22:44:00+00:00
|
|
text-classification
|
transformers
|
{}
|
shaggysus/MovieGenrePrediction
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T22:44:15+00:00
|
|
null | null |
{}
|
larry5/llava-1.5-7b-hf-ft-mix-vsft-lora
| null |
[
"region:us"
] | null |
2024-04-24T22:44:53+00:00
|
|
null | null |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{}
|
Ingvarus/BotAl
| null |
[
"arxiv:1910.09700",
"region:us"
] | null |
2024-04-24T22:45:27+00:00
|
text-classification
|
transformers
|
{}
|
sadiiipc/finetuning-model-testing
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T22:45:49+00:00
|
|
null | null |
{}
|
jq/sunflower-llama3-finetuned-20240424
| null |
[
"safetensors",
"region:us"
] | null |
2024-04-24T22:46:28+00:00
|
|
null | null |
{}
|
WebArabicAI/NewsGpt
| null |
[
"region:us"
] | null |
2024-04-24T22:47:10+00:00
|
|
null | null |
{"license": "mit"}
|
cambridge-climb/RON-CamBabyTokenizer
| null |
[
"license:mit",
"region:us"
] | null |
2024-04-24T22:47:54+00:00
|
|
null | null |
{"license": "llama3"}
|
MinouMinou/First
| null |
[
"license:llama3",
"region:us"
] | null |
2024-04-24T22:48:44+00:00
|
|
null | null |
{"license": "mit"}
|
cambridge-climb/ES-CamBabyTokenizer
| null |
[
"license:mit",
"region:us"
] | null |
2024-04-24T22:49:22+00:00
|
|
null |
transformers
|
# Model Card for Model ID
Not only a text generator but also a chatbot. I just tested it and it works very nicely; try it.
## Model Details
- [Open in Colab](https://colab.research.google.com/drive/1mWRFts7yCErqHeBzsaQ3vNDkyg1rKszX?usp=sharing)
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub.
- **Developed by:** **HuyRemy**
- **Funded by:** **HuyRemy**
- **Shared by:** **HuyRemy**
- **Model type:** **Chatbot Template**
- **Based on:** **Mistral Megatron**
- **License:** [email protected]
### Model Demo:
- **Demo :** https://ai.matilda.vn
## Uses
**USE T4 GPU**
```python
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps xformers trl peft accelerate bitsandbytes
```
### Direct Use
```python
from unsloth import FastLanguageModel
import torch
max_seq_length = 2048
dtype = None
load_in_4bit = True
from unsloth.chat_templates import get_chat_template
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "HuyRemy/chatphil",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)
unsloth_eos_token = "eos_token"
tokenizer = get_chat_template(
tokenizer,
chat_template = "mistral", # zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, unsloth
mapping = {"role" : "from", "content" : "value", "user" : "human", "assistant" : "gpt"},
map_eos_token = True,
)
messages = [
{"from": "human", "value": "Who is Nguyễn Phú Trọng"},
]
inputs = tokenizer.apply_chat_template(
messages,
tokenize = True,
add_generation_prompt = True, # Must add for generation
return_tensors = "pt",
).to("cuda")
outputs = model.generate(input_ids = inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)
```
## Model Card Contact
[email protected]
|
{"license": "apache-2.0", "library_name": "transformers"}
|
HuyRemy/chatphil
| null |
[
"transformers",
"safetensors",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T22:50:15+00:00
|
null | null |
{"license": "mit"}
|
cambridge-climb/PO-CamBabyTokenizer
| null |
[
"license:mit",
"region:us"
] | null |
2024-04-24T22:50:41+00:00
|
|
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-dmae-va-U5-42C
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7073
- Accuracy: 0.7667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 42
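For reference, these values map onto 🤗 `TrainingArguments` roughly as below (the `output_dir` is hypothetical); note the effective batch size arithmetic: 32 per device × 4 accumulation steps = 128.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swinv2-tiny-patch4-window8-256-dmae-va-U5-42C",  # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 total train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=42,
)
```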
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9032 | 7 | 1.3926 | 0.35 |
| 1.4087 | 1.9355 | 15 | 1.3365 | 0.4167 |
| 1.3807 | 2.9677 | 23 | 1.2813 | 0.4167 |
| 1.35 | 4.0 | 31 | 1.2407 | 0.4 |
| 1.35 | 4.9032 | 38 | 1.2116 | 0.4833 |
| 1.2933 | 5.9355 | 46 | 1.1653 | 0.4833 |
| 1.2426 | 6.9677 | 54 | 1.1151 | 0.5167 |
| 1.1771 | 8.0 | 62 | 1.0441 | 0.6 |
| 1.1771 | 8.9032 | 69 | 0.9990 | 0.5667 |
| 1.0983 | 9.9355 | 77 | 0.9456 | 0.6333 |
| 1.0338 | 10.9677 | 85 | 0.9160 | 0.6833 |
| 0.9665 | 12.0 | 93 | 0.8940 | 0.6833 |
| 0.9133 | 12.9032 | 100 | 0.8753 | 0.6 |
| 0.9133 | 13.9355 | 108 | 0.8518 | 0.6667 |
| 0.8521 | 14.9677 | 116 | 0.8515 | 0.65 |
| 0.8461 | 16.0 | 124 | 0.8407 | 0.65 |
| 0.808 | 16.9032 | 131 | 0.8218 | 0.65 |
| 0.808 | 17.9355 | 139 | 0.8170 | 0.6833 |
| 0.7779 | 18.9677 | 147 | 0.7972 | 0.7167 |
| 0.758 | 20.0 | 155 | 0.7817 | 0.7333 |
| 0.7416 | 20.9032 | 162 | 0.7678 | 0.7167 |
| 0.7344 | 21.9355 | 170 | 0.7650 | 0.7167 |
| 0.7344 | 22.9677 | 178 | 0.7428 | 0.7333 |
| 0.7091 | 24.0 | 186 | 0.7280 | 0.75 |
| 0.6876 | 24.9032 | 193 | 0.7235 | 0.75 |
| 0.6887 | 25.9355 | 201 | 0.7278 | 0.75 |
| 0.6887 | 26.9677 | 209 | 0.7264 | 0.75 |
| 0.6897 | 28.0 | 217 | 0.7228 | 0.75 |
| 0.6637 | 28.9032 | 224 | 0.7163 | 0.75 |
| 0.6924 | 29.9355 | 232 | 0.7073 | 0.7667 |
| 0.6234 | 30.9677 | 240 | 0.7057 | 0.7667 |
| 0.6234 | 32.0 | 248 | 0.7090 | 0.7667 |
| 0.6652 | 32.9032 | 255 | 0.7052 | 0.7667 |
| 0.6343 | 33.9355 | 263 | 0.7009 | 0.7667 |
| 0.6327 | 34.9677 | 271 | 0.7017 | 0.7667 |
| 0.6327 | 36.0 | 279 | 0.7023 | 0.7667 |
| 0.6339 | 36.9032 | 286 | 0.7027 | 0.7667 |
| 0.6275 | 37.9355 | 294 | 0.7031 | 0.7667 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/swinv2-tiny-patch4-window8-256", "model-index": [{"name": "swinv2-tiny-patch4-window8-256-dmae-va-U5-42C", "results": []}]}
|
Augusto777/swinv2-tiny-patch4-window8-256-dmae-va-U5-42C
| null |
[
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swinv2-tiny-patch4-window8-256",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T22:53:24+00:00
|
null | null |
{}
|
bhavyachann/ResumeClassifierModel
| null |
[
"region:us"
] | null |
2024-04-24T22:59:13+00:00
|
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
LongQ/Mistral_8x7B_SFT_Lora
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T22:59:16+00:00
|
text-classification
|
setfit
|
# SetFit with FacebookAI/roberta-base
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| true | <ul><li>'How can we apply your findings to optimize our processes?'</li><li>'Your presence at the meeting was greatly appreciated.'</li><li>'Your journey is quite inspiring, can you share more about it?'</li></ul> |
| false | <ul><li>'What book are you currently reading?'</li><li>'It’s important to acknowledge your feelings, what’s been going through your mind?'</li><li>'You’ve been working hard on your mental health; how are you finding the journey?'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.94 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("richie-ghost/setfit-FacebookAI-roberta-base-phatic")
# Run inference
preds = model("Take it easy!")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 9.8722 | 108 |
| Label | Training Sample Count |
|:------|:----------------------|
| false | 191 |
| true | 169 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (4, 4)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
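For reference, the values above map onto `setfit.TrainingArguments` roughly as follows. This is a sketch rather than the original training script; `train_dataset` and `eval_dataset` are placeholders:

```python
from setfit import SetFitModel, Trainer, TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),                # (embedding phase, classifier phase)
    num_epochs=(4, 4),
    body_learning_rate=(2e-5, 1e-5),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=True,
)
model = SetFitModel.from_pretrained("FacebookAI/roberta-base")
trainer = Trainer(model=model, args=args, train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.train()
```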
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0002 | 1 | 0.4475 | - |
| 0.0122 | 50 | 0.4363 | - |
| 0.0245 | 100 | 0.3668 | - |
| 0.0367 | 150 | 0.177 | - |
| 0.0489 | 200 | 0.0999 | - |
| 0.0612 | 250 | 0.1043 | - |
| 0.0734 | 300 | 0.0191 | - |
| 0.0856 | 350 | 0.009 | - |
| 0.0978 | 400 | 0.0028 | - |
| 0.1101 | 450 | 0.0046 | - |
| 0.1223 | 500 | 0.0012 | - |
| 0.1345 | 550 | 0.0016 | - |
| 0.1468 | 600 | 0.0012 | - |
| 0.1590 | 650 | 0.0012 | - |
| 0.1712 | 700 | 0.0164 | - |
| 0.1835 | 750 | 0.025 | - |
| 0.1957 | 800 | 0.0007 | - |
| 0.2079 | 850 | 0.0013 | - |
| 0.2202 | 900 | 0.0008 | - |
| 0.2324 | 950 | 0.0005 | - |
| 0.2446 | 1000 | 0.0004 | - |
| 0.2568 | 1050 | 0.0002 | - |
| 0.2691 | 1100 | 0.0004 | - |
| 0.2813 | 1150 | 0.0003 | - |
| 0.2935 | 1200 | 0.0002 | - |
| 0.3058 | 1250 | 0.0002 | - |
| 0.3180 | 1300 | 0.0003 | - |
| 0.3302 | 1350 | 0.0002 | - |
| 0.3425 | 1400 | 0.0001 | - |
| 0.3547 | 1450 | 0.003 | - |
| 0.3669 | 1500 | 0.0003 | - |
| 0.3792 | 1550 | 0.0003 | - |
| 0.3914 | 1600 | 0.0001 | - |
| 0.4036 | 1650 | 0.0001 | - |
| 0.4159 | 1700 | 0.0001 | - |
| 0.4281 | 1750 | 0.0001 | - |
| 0.4403 | 1800 | 0.0001 | - |
| 0.4525 | 1850 | 0.0001 | - |
| 0.4648 | 1900 | 0.0001 | - |
| 0.4770 | 1950 | 0.0001 | - |
| 0.4892 | 2000 | 0.0001 | - |
| 0.5015 | 2050 | 0.0001 | - |
| 0.5137 | 2100 | 0.0001 | - |
| 0.5259 | 2150 | 0.0001 | - |
| 0.5382 | 2200 | 0.0 | - |
| 0.5504 | 2250 | 0.0 | - |
| 0.5626 | 2300 | 0.0 | - |
| 0.5749 | 2350 | 0.0001 | - |
| 0.5871 | 2400 | 0.0 | - |
| 0.5993 | 2450 | 0.0 | - |
| 0.6115 | 2500 | 0.0001 | - |
| 0.6238 | 2550 | 0.0001 | - |
| 0.6360 | 2600 | 0.0 | - |
| 0.6482 | 2650 | 0.0 | - |
| 0.6605 | 2700 | 0.0 | - |
| 0.6727 | 2750 | 0.0 | - |
| 0.6849 | 2800 | 0.0 | - |
| 0.6972 | 2850 | 0.0 | - |
| 0.7094 | 2900 | 0.0001 | - |
| 0.7216 | 2950 | 0.0001 | - |
| 0.7339 | 3000 | 0.0 | - |
| 0.7461 | 3050 | 0.0 | - |
| 0.7583 | 3100 | 0.0006 | - |
| 0.7705 | 3150 | 0.0606 | - |
| 0.7828 | 3200 | 0.0 | - |
| 0.7950 | 3250 | 0.0002 | - |
| 0.8072 | 3300 | 0.0 | - |
| 0.8195 | 3350 | 0.0001 | - |
| 0.8317 | 3400 | 0.0001 | - |
| 0.8439 | 3450 | 0.0 | - |
| 0.8562 | 3500 | 0.0001 | - |
| 0.8684 | 3550 | 0.0 | - |
| 0.8806 | 3600 | 0.0 | - |
| 0.8929 | 3650 | 0.0 | - |
| 0.9051 | 3700 | 0.0 | - |
| 0.9173 | 3750 | 0.0 | - |
| 0.9295 | 3800 | 0.0 | - |
| 0.9418 | 3850 | 0.0 | - |
| 0.9540 | 3900 | 0.0 | - |
| 0.9662 | 3950 | 0.0 | - |
| 0.9785 | 4000 | 0.0 | - |
| 0.9907 | 4050 | 0.0 | - |
| 1.0 | 4088 | - | 0.1621 |
| 1.0029 | 4100 | 0.0 | - |
| 1.0152 | 4150 | 0.0 | - |
| 1.0274 | 4200 | 0.0 | - |
| 1.0396 | 4250 | 0.0 | - |
| 1.0519 | 4300 | 0.0 | - |
| 1.0641 | 4350 | 0.0 | - |
| 1.0763 | 4400 | 0.0 | - |
| 1.0886 | 4450 | 0.0 | - |
| 1.1008 | 4500 | 0.0 | - |
| 1.1130 | 4550 | 0.0 | - |
| 1.1252 | 4600 | 0.0 | - |
| 1.1375 | 4650 | 0.0 | - |
| 1.1497 | 4700 | 0.0 | - |
| 1.1619 | 4750 | 0.0 | - |
| 1.1742 | 4800 | 0.0 | - |
| 1.1864 | 4850 | 0.0 | - |
| 1.1986 | 4900 | 0.0 | - |
| 1.2109 | 4950 | 0.0 | - |
| 1.2231 | 5000 | 0.0 | - |
| 1.2353 | 5050 | 0.0 | - |
| 1.2476 | 5100 | 0.0 | - |
| 1.2598 | 5150 | 0.0 | - |
| 1.2720 | 5200 | 0.0 | - |
| 1.2842 | 5250 | 0.0 | - |
| 1.2965 | 5300 | 0.0 | - |
| 1.3087 | 5350 | 0.0 | - |
| 1.3209 | 5400 | 0.0 | - |
| 1.3332 | 5450 | 0.0 | - |
| 1.3454 | 5500 | 0.0 | - |
| 1.3576 | 5550 | 0.0 | - |
| 1.3699 | 5600 | 0.0 | - |
| 1.3821 | 5650 | 0.0 | - |
| 1.3943 | 5700 | 0.0 | - |
| 1.4066 | 5750 | 0.0 | - |
| 1.4188 | 5800 | 0.0 | - |
| 1.4310 | 5850 | 0.0 | - |
| 1.4432 | 5900 | 0.0 | - |
| 1.4555 | 5950 | 0.0 | - |
| 1.4677 | 6000 | 0.0 | - |
| 1.4799 | 6050 | 0.0 | - |
| 1.4922 | 6100 | 0.0 | - |
| 1.5044 | 6150 | 0.0 | - |
| 1.5166 | 6200 | 0.0 | - |
| 1.5289 | 6250 | 0.0 | - |
| 1.5411 | 6300 | 0.0 | - |
| 1.5533 | 6350 | 0.0 | - |
| 1.5656 | 6400 | 0.0 | - |
| 1.5778 | 6450 | 0.0 | - |
| 1.5900 | 6500 | 0.0 | - |
| 1.6023 | 6550 | 0.0 | - |
| 1.6145 | 6600 | 0.0 | - |
| 1.6267 | 6650 | 0.0 | - |
| 1.6389 | 6700 | 0.0 | - |
| 1.6512 | 6750 | 0.0 | - |
| 1.6634 | 6800 | 0.0 | - |
| 1.6756 | 6850 | 0.0 | - |
| 1.6879 | 6900 | 0.0 | - |
| 1.7001 | 6950 | 0.0 | - |
| 1.7123 | 7000 | 0.0 | - |
| 1.7246 | 7050 | 0.0 | - |
| 1.7368 | 7100 | 0.0 | - |
| 1.7490 | 7150 | 0.0 | - |
| 1.7613 | 7200 | 0.0 | - |
| 1.7735 | 7250 | 0.0 | - |
| 1.7857 | 7300 | 0.0 | - |
| 1.7979 | 7350 | 0.0 | - |
| 1.8102 | 7400 | 0.0 | - |
| 1.8224 | 7450 | 0.0 | - |
| 1.8346 | 7500 | 0.0 | - |
| 1.8469 | 7550 | 0.0 | - |
| 1.8591 | 7600 | 0.0 | - |
| 1.8713 | 7650 | 0.0 | - |
| 1.8836 | 7700 | 0.0 | - |
| 1.8958 | 7750 | 0.0 | - |
| 1.9080 | 7800 | 0.0 | - |
| 1.9203 | 7850 | 0.0 | - |
| 1.9325 | 7900 | 0.0 | - |
| 1.9447 | 7950 | 0.0 | - |
| 1.9569 | 8000 | 0.0 | - |
| 1.9692 | 8050 | 0.0 | - |
| 1.9814 | 8100 | 0.0 | - |
| 1.9936 | 8150 | 0.0 | - |
| 2.0 | 8176 | - | 0.1131 |
| 2.0059 | 8200 | 0.0 | - |
| 2.0181 | 8250 | 0.0 | - |
| 2.0303 | 8300 | 0.0 | - |
| 2.0426 | 8350 | 0.0 | - |
| 2.0548 | 8400 | 0.0 | - |
| 2.0670 | 8450 | 0.0 | - |
| 2.0793 | 8500 | 0.0 | - |
| 2.0915 | 8550 | 0.0 | - |
| 2.1037 | 8600 | 0.0 | - |
| 2.1159 | 8650 | 0.0 | - |
| 2.1282 | 8700 | 0.0 | - |
| 2.1404 | 8750 | 0.0 | - |
| 2.1526 | 8800 | 0.0 | - |
| 2.1649 | 8850 | 0.0 | - |
| 2.1771 | 8900 | 0.0 | - |
| 2.1893 | 8950 | 0.0 | - |
| 2.2016 | 9000 | 0.0 | - |
| 2.2138 | 9050 | 0.0 | - |
| 2.2260 | 9100 | 0.0 | - |
| 2.2383 | 9150 | 0.0 | - |
| 2.2505 | 9200 | 0.0 | - |
| 2.2627 | 9250 | 0.0 | - |
| 2.2750 | 9300 | 0.0 | - |
| 2.2872 | 9350 | 0.0 | - |
| 2.2994 | 9400 | 0.0 | - |
| 2.3116 | 9450 | 0.0 | - |
| 2.3239 | 9500 | 0.0 | - |
| 2.3361 | 9550 | 0.0 | - |
| 2.3483 | 9600 | 0.0 | - |
| 2.3606 | 9650 | 0.0 | - |
| 2.3728 | 9700 | 0.0 | - |
| 2.3850 | 9750 | 0.0 | - |
| 2.3973 | 9800 | 0.0 | - |
| 2.4095 | 9850 | 0.0 | - |
| 2.4217 | 9900 | 0.0 | - |
| 2.4340 | 9950 | 0.0 | - |
| 2.4462 | 10000 | 0.0 | - |
| 2.4584 | 10050 | 0.0 | - |
| 2.4706 | 10100 | 0.0 | - |
| 2.4829 | 10150 | 0.0 | - |
| 2.4951 | 10200 | 0.0 | - |
| 2.5073 | 10250 | 0.0 | - |
| 2.5196 | 10300 | 0.0 | - |
| 2.5318 | 10350 | 0.0 | - |
| 2.5440 | 10400 | 0.0 | - |
| 2.5563 | 10450 | 0.0 | - |
| 2.5685 | 10500 | 0.0 | - |
| 2.5807 | 10550 | 0.0 | - |
| 2.5930 | 10600 | 0.0 | - |
| 2.6052 | 10650 | 0.0 | - |
| 2.6174 | 10700 | 0.0 | - |
| 2.6296 | 10750 | 0.0 | - |
| 2.6419 | 10800 | 0.0 | - |
| 2.6541 | 10850 | 0.0 | - |
| 2.6663 | 10900 | 0.0 | - |
| 2.6786 | 10950 | 0.0 | - |
| 2.6908 | 11000 | 0.0 | - |
| 2.7030 | 11050 | 0.0 | - |
| 2.7153 | 11100 | 0.0 | - |
| 2.7275 | 11150 | 0.0 | - |
| 2.7397 | 11200 | 0.0 | - |
| 2.7520 | 11250 | 0.0 | - |
| 2.7642 | 11300 | 0.0 | - |
| 2.7764 | 11350 | 0.0 | - |
| 2.7886 | 11400 | 0.0 | - |
| 2.8009 | 11450 | 0.0 | - |
| 2.8131 | 11500 | 0.0 | - |
| 2.8253 | 11550 | 0.0 | - |
| 2.8376 | 11600 | 0.0 | - |
| 2.8498 | 11650 | 0.0 | - |
| 2.8620 | 11700 | 0.0 | - |
| 2.8743 | 11750 | 0.0 | - |
| 2.8865 | 11800 | 0.0 | - |
| 2.8987 | 11850 | 0.0 | - |
| 2.9110 | 11900 | 0.0 | - |
| 2.9232 | 11950 | 0.0 | - |
| 2.9354 | 12000 | 0.0 | - |
| 2.9477 | 12050 | 0.0 | - |
| 2.9599 | 12100 | 0.0 | - |
| 2.9721 | 12150 | 0.0 | - |
| 2.9843 | 12200 | 0.0 | - |
| 2.9966 | 12250 | 0.0 | - |
| 3.0 | 12264 | - | 0.1127 |
| 3.0088 | 12300 | 0.0 | - |
| 3.0210 | 12350 | 0.0 | - |
| 3.0333 | 12400 | 0.0 | - |
| 3.0455 | 12450 | 0.0 | - |
| 3.0577 | 12500 | 0.0 | - |
| 3.0700 | 12550 | 0.0 | - |
| 3.0822 | 12600 | 0.0 | - |
| 3.0944 | 12650 | 0.0 | - |
| 3.1067 | 12700 | 0.0 | - |
| 3.1189 | 12750 | 0.0 | - |
| 3.1311 | 12800 | 0.0 | - |
| 3.1433 | 12850 | 0.0 | - |
| 3.1556 | 12900 | 0.0 | - |
| 3.1678 | 12950 | 0.0 | - |
| 3.1800 | 13000 | 0.0 | - |
| 3.1923 | 13050 | 0.0 | - |
| 3.2045 | 13100 | 0.0 | - |
| 3.2167 | 13150 | 0.0 | - |
| 3.2290 | 13200 | 0.0 | - |
| 3.2412 | 13250 | 0.0 | - |
| 3.2534 | 13300 | 0.0 | - |
| 3.2657 | 13350 | 0.0 | - |
| 3.2779 | 13400 | 0.0 | - |
| 3.2901 | 13450 | 0.0 | - |
| 3.3023 | 13500 | 0.0 | - |
| 3.3146 | 13550 | 0.0 | - |
| 3.3268 | 13600 | 0.0 | - |
| 3.3390 | 13650 | 0.0 | - |
| 3.3513 | 13700 | 0.0 | - |
| 3.3635 | 13750 | 0.0 | - |
| 3.3757 | 13800 | 0.0 | - |
| 3.3880 | 13850 | 0.0 | - |
| 3.4002 | 13900 | 0.0 | - |
| 3.4124 | 13950 | 0.0 | - |
| 3.4247 | 14000 | 0.0 | - |
| 3.4369 | 14050 | 0.0 | - |
| 3.4491 | 14100 | 0.0 | - |
| 3.4614 | 14150 | 0.0 | - |
| 3.4736 | 14200 | 0.0 | - |
| 3.4858 | 14250 | 0.0 | - |
| 3.4980 | 14300 | 0.0 | - |
| 3.5103 | 14350 | 0.0 | - |
| 3.5225 | 14400 | 0.0 | - |
| 3.5347 | 14450 | 0.0 | - |
| 3.5470 | 14500 | 0.0 | - |
| 3.5592 | 14550 | 0.0 | - |
| 3.5714 | 14600 | 0.0 | - |
| 3.5837 | 14650 | 0.0 | - |
| 3.5959 | 14700 | 0.0 | - |
| 3.6081 | 14750 | 0.0 | - |
| 3.6204 | 14800 | 0.0 | - |
| 3.6326 | 14850 | 0.0 | - |
| 3.6448 | 14900 | 0.0 | - |
| 3.6570 | 14950 | 0.0 | - |
| 3.6693 | 15000 | 0.0 | - |
| 3.6815 | 15050 | 0.0 | - |
| 3.6937 | 15100 | 0.0 | - |
| 3.7060 | 15150 | 0.0 | - |
| 3.7182 | 15200 | 0.0 | - |
| 3.7304 | 15250 | 0.0 | - |
| 3.7427 | 15300 | 0.0 | - |
| 3.7549 | 15350 | 0.0 | - |
| 3.7671 | 15400 | 0.0 | - |
| 3.7794 | 15450 | 0.0 | - |
| 3.7916 | 15500 | 0.0 | - |
| 3.8038 | 15550 | 0.0 | - |
| 3.8160 | 15600 | 0.0 | - |
| 3.8283 | 15650 | 0.0 | - |
| 3.8405 | 15700 | 0.0 | - |
| 3.8527 | 15750 | 0.0 | - |
| 3.8650 | 15800 | 0.0 | - |
| 3.8772 | 15850 | 0.0 | - |
| 3.8894 | 15900 | 0.0 | - |
| 3.9017 | 15950 | 0.0 | - |
| 3.9139 | 16000 | 0.0 | - |
| 3.9261 | 16050 | 0.0 | - |
| 3.9384 | 16100 | 0.0 | - |
| 3.9506 | 16150 | 0.0 | - |
| 3.9628 | 16200 | 0.0 | - |
| 3.9750 | 16250 | 0.0 | - |
| 3.9873 | 16300 | 0.0 | - |
| 3.9995 | 16350 | 0.0 | - |
| **4.0** | **16352** | **-** | **0.1019** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.0
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "FacebookAI/roberta-base", "widget": [{"text": "Just checking in, how have you been feeling since our last chat?"}, {"text": "I\u2019m looking forward to learning more from you."}, {"text": "Take it easy!"}, {"text": "It was great seeing you. Let's catch up again soon!"}, {"text": "Let\u2019s make sure you\u2019re not carrying too much; how are you?"}], "pipeline_tag": "text-classification", "inference": true, "model-index": [{"name": "SetFit with FacebookAI/roberta-base", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.94, "name": "Accuracy"}]}]}]}
|
richie-ghost/setfit-FacebookAI-roberta-base-phatic
| null |
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:FacebookAI/roberta-base",
"model-index",
"region:us"
] | null |
2024-04-24T22:59:41+00:00
|
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the sketch after this list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
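These settings correspond to a `BitsAndBytesConfig`. The sketch below loads the quantized base model and attaches this repo's adapter, assuming (as the library tag suggests) that the repo stores a PEFT adapter:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Reconstruction of the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0", quantization_config=bnb_config
)
model = PeftModel.from_pretrained(
    base, "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.6_Seed102"
)
```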
### Framework versions
- PEFT 0.7.0.dev0
|
{"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
|
bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.6_Seed102
| null |
[
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null |
2024-04-24T23:01:13+00:00
|
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
{"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
|
bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Aleatoric_tiny_0.6_Seed102
| null |
[
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null |
2024-04-24T23:01:16+00:00
|
null | null |
{}
|
Kaoeiri/Keiana-L3-Test4.6-8B-2-GGUF
| null |
[
"gguf",
"region:us"
] | null |
2024-04-24T23:03:13+00:00
|
|
null | null |
{}
|
Ponyyyy/my_awesome_model
| null |
[
"region:us"
] | null |
2024-04-24T23:04:48+00:00
|
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": ["trl", "sft"]}
|
Smulemun/RuNNER-v1
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-24T23:05:36+00:00
|
text-to-image
|
diffusers
|
# shiratakimix-xl API Inference

## Get API Key
Get your API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and set **model_id** to "shiratakimix-xl".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/shiratakimix-xl)
Model link: [View model](https://modelslab.com/models/shiratakimix-xl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "shiratakimix-xl",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
{"license": "creativeml-openrail-m", "tags": ["modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic"], "pinned": true}
|
stablediffusionapi/shiratakimix-xl
| null |
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null |
2024-04-24T23:06:49+00:00
|
reinforcement-learning
|
stable-baselines3
|
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed; check the repo's file list if loading fails.
checkpoint = load_from_hub("PabloVD/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
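Once loaded, the policy can be rolled out in the environment; a sketch assuming `gymnasium` and `panda-gym` are installed:

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```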
|
{"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.18 +/- 0.12", "name": "mean_reward", "verified": false}]}]}]}
|
PabloVD/a2c-PandaReachDense-v3
| null |
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-24T23:08:43+00:00
|
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** ale045
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
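A minimal generation sketch with plain 🤗 Transformers (assumes the repo contains a standalone causal-LM checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ale045/llama3_unsloth")
model = AutoModelForCausalLM.from_pretrained("ale045/llama3_unsloth", device_map="auto")

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```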
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
|
ale045/llama3_unsloth
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:08:56+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-14m_mz-132_WordLength_n-its-10
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
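Until the card is filled in, a minimal inference sketch (the meaning of the labels for the WordLength task is not documented here):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-14m_mz-132_WordLength_n-its-10",
)
print(clf("An example input sentence."))
```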
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-132_WordLength_n-its-10", "results": []}]}
|
AlignmentResearch/robust_llm_pythia-14m_mz-132_WordLength_n-its-10
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T23:09:38+00:00
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged training sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
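Given the `trl`/`sft`/PEFT tags, the run plausibly looked something like the sketch below. The LoRA values, dataset, and text field are assumptions, and the argument names follow trl releases from the Transformers 4.40 era:

```python
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

peft_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, lora_dropout=0.05)  # assumed values
args = TrainingArguments(
    output_dir="tmp_trainer",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3.0,
    seed=42,
)
trainer = SFTTrainer(
    model="distilbert/distilgpt2",   # base model named in this card
    args=args,
    train_dataset=train_dataset,     # placeholder: the "generator" dataset is not documented
    peft_config=peft_config,
    dataset_text_field="text",       # assumed field name
)
trainer.train()
```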
### Training results
### Framework versions
- PEFT 0.7.0
- Transformers 4.40.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "distilbert/distilgpt2", "model-index": [{"name": "tmp_trainer", "results": []}]}
|
Kelechie/Bevo-Budv1.0
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:distilbert/distilgpt2",
"license:apache-2.0",
"region:us"
] | null |
2024-04-24T23:10:30+00:00
|
text2text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
IbrahimSalah/Quran_syll_to_word3
| null |
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T23:11:25+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
happylayers/sc14
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:11:47+00:00
|
null | null |
{}
|
satvik-dixit/asr_makerere
| null |
[
"region:us"
] | null |
2024-04-24T23:12:57+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-160m_mz-130_PasswordMatch_n-its-10-seed-3
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
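### Quick inference sketch
Since the card above is auto-generated and sparse, here is a hedged inference sketch; it assumes the checkpoint loads as a standard `transformers` sequence-classification model, and the example input is invented:
```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned checkpoint for sequence classification.
clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-160m_mz-130_PasswordMatch_n-its-10-seed-3",
)
print(clf("System password: hunter2. User input: hunter2."))  # invented example input
```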
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-130_PasswordMatch_n-its-10-seed-3", "results": []}]}
|
AlignmentResearch/robust_llm_pythia-160m_mz-130_PasswordMatch_n-its-10-seed-3
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T23:13:40+00:00
|
null | null |
{}
|
Ponyyyy/sequence_classification
| null |
[
"region:us"
] | null |
2024-04-24T23:13:52+00:00
|
|
null | null |
{}
|
Ecliipse/123
| null |
[
"region:us"
] | null |
2024-04-24T23:14:21+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-160m_mz-130_PasswordMatch_n-its-10-seed-4
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-130_PasswordMatch_n-its-10-seed-4", "results": []}]}
|
AlignmentResearch/robust_llm_pythia-160m_mz-130_PasswordMatch_n-its-10-seed-4
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T23:14:23+00:00
|
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TitanML/LeoLM-hessianai-13b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
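As a concrete (hedged) example, these files can also be loaded from Python with llama-cpp-python; this is one option among many GGUF runtimes, not something this repo prescribes:
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Hedged sketch: load one of the quants from the table below and generate.
llm = Llama(model_path="LeoLM-hessianai-13b.Q4_K_M.gguf", n_ctx=4096)
out = llm("Schreibe einen kurzen Satz über Berlin.", max_tokens=64)
print(out["choices"][0]["text"])
```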
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LeoLM-hessianai-13b-GGUF/resolve/main/LeoLM-hessianai-13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "datasets": ["oscar-corpus/OSCAR-2301", "wikipedia", "bjoernp/tagesschau-2018-2023"], "base_model": "TitanML/LeoLM-hessianai-13b", "quantized_by": "mradermacher"}
|
mradermacher/LeoLM-hessianai-13b-GGUF
| null |
[
"transformers",
"gguf",
"en",
"dataset:oscar-corpus/OSCAR-2301",
"dataset:wikipedia",
"dataset:bjoernp/tagesschau-2018-2023",
"base_model:TitanML/LeoLM-hessianai-13b",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:15:12+00:00
|
null | null |
{"license": "openrail"}
|
1232eee/AmbushBoleet
| null |
[
"license:openrail",
"region:us"
] | null |
2024-04-24T23:15:47+00:00
|
|
null |
transformers
|
{}
|
evanfrick/BigBerta
| null |
[
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:20:05+00:00
|
|
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/NotAiLOL/Knight-Miqu-70B-MoE
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
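For the split Q6_K and Q8_0 files below, here is a minimal Python sketch of the concatenation step (equivalent to `cat part1 part2 > out` on Unix):
```python
import shutil

# Minimal sketch: rejoin a split quant (e.g. Q6_K) into a single GGUF file.
parts = [
    "Knight-Miqu-70B-MoE.Q6_K.gguf.part1of2",
    "Knight-Miqu-70B-MoE.Q6_K.gguf.part2of2",
]
with open("Knight-Miqu-70B-MoE.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # streamed copy; avoids loading ~56 GB into RAM
```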
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.Q2_K.gguf) | Q2_K | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.IQ3_XS.gguf) | IQ3_XS | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.Q3_K_S.gguf) | Q3_K_S | 29.5 | |
| [GGUF](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.IQ3_S.gguf) | IQ3_S | 29.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.IQ3_M.gguf) | IQ3_M | 30.6 | |
| [GGUF](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.Q3_K_M.gguf) | Q3_K_M | 32.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.Q3_K_L.gguf) | Q3_K_L | 35.8 | |
| [GGUF](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.IQ4_XS.gguf) | IQ4_XS | 36.8 | |
| [GGUF](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.Q4_K_S.gguf) | Q4_K_S | 38.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.Q4_K_M.gguf) | Q4_K_M | 40.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.Q5_K_S.gguf) | Q5_K_S | 47.0 | |
| [GGUF](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.Q5_K_M.gguf) | Q5_K_M | 48.2 | |
| [PART 1](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.Q6_K.gguf.part2of2) | Q6_K | 56.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Knight-Miqu-70B-MoE-GGUF/resolve/main/Knight-Miqu-70B-MoE.Q8_0.gguf.part2of2) | Q8_0 | 72.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "NotAiLOL/Knight-Miqu-70B-MoE", "quantized_by": "mradermacher"}
|
mradermacher/Knight-Miqu-70B-MoE-GGUF
| null |
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:NotAiLOL/Knight-Miqu-70B-MoE",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:24:20+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-merged
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9282
- Rouge1: 0.4675
- Rouge2: 0.1579
- Rougel: 0.4313
- Bertscore: 0.8652
- Readability: 13.1666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Bertscore | Readability |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-----------:|
| 2.1647 | 1.0 | 3640 | 2.0145 | 0.4602 | 0.1545 | 0.4236 | 0.8626 | 13.5465 |
| 2.0892 | 2.0 | 7280 | 1.9591 | 0.4620 | 0.1549 | 0.4259 | 0.8636 | 13.3182 |
| 2.0151 | 3.0 | 10920 | 1.9376 | 0.4663 | 0.1571 | 0.4301 | 0.8648 | 13.2234 |
| 1.9793 | 4.0 | 14560 | 1.9282 | 0.4699 | 0.1599 | 0.4337 | 0.8656 | 13.1966 |
| 1.9679 | 5.0 | 18200 | 1.9269 | 0.4683 | 0.1583 | 0.4313 | 0.8653 | 13.2824 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.1
- Datasets 2.19.0
- Tokenizers 0.15.2
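### Quick inference sketch
A hedged inference sketch follows; the `summarize:` task prefix and the input text are assumptions, since the training dataset is unspecified:
```python
from transformers import pipeline

# Hedged sketch: run the fine-tuned checkpoint on a made-up example.
gen = pipeline("text2text-generation", model="tanishq1420/flan-t5-base-merged")
print(gen("summarize: The quick brown fox jumps over the lazy dog.")[0]["generated_text"])
```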
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/flan-t5-base", "model-index": [{"name": "flan-t5-base-merged", "results": []}]}
|
tanishq1420/flan-t5-base-merged
| null |
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T23:24:44+00:00
|
text-generation
|
transformers
|
# WikiChat-v0.2
Work-in-progress model being trained to hold conversations.
The uploaded GGUFs are full FP32 precision.
Training mixes OpenOrca GPT-4 data with Cosmopedia for extra coverage and Dolly-15k for instruction data.
## Model Details:
- 83.59M parameters (83591800)
- 8 attention heads
- 40 layers
- 384 embeddings size
- 4096/8192/16384 context (use 2x/4x RoPE scaling; a 16k fine-tuned version may follow later)
- Batch size 16
- llama.cpp (train-text-from-scratch)
## Prompt Format (Alpaca):
```
Instruction: {system}
Input: {prompt}
Response: {response}
```
Please structure your prompts in an instruct format for maximum performance.
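For example, a small helper (a sketch, not shipped with the model) that assembles this format:
```python
# Minimal sketch: assemble the Alpaca-style prompt shown above.
def build_prompt(system: str, prompt: str) -> str:
    return f"Instruction: {system}\nInput: {prompt}\nResponse:"

print(build_prompt("You are a helpful assistant.", "What is the square root of 4?"))
```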
## Training Details:
- 1x RTX 3070 8GB (inference speed: 80 tok/s, full GPU offload)
- 1x Ryzen 3 3700x
- 96gb RAM
- 10 iterations
- Loss Target = 2.5 to 3.0
- Approx 480 samples/1M train tokens (>0.0001 epochs)
- Training data = Refer to OpenOrca page
## Notes:
The model isn't ready yet; this release tests OpenOrca tokenization and the balance between training speed and model size.
## Example output:
```
User: What is the square root of 4?
```
```
Assistant: The square root of 4 is 2.
```
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["HuggingFaceTB/cosmopedia", "databricks/databricks-dolly-15k", "Open-Orca/OpenOrca"], "metrics": ["accuracy"], "pipeline_tag": "text-generation"}
|
leafspark/wikichat-v2
| null |
[
"transformers",
"gguf",
"text-generation",
"en",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:databricks/databricks-dolly-15k",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:25:11+00:00
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
sophiex/pythia-410m-sft_hh_rlhf
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:25:36+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4837
- Accuracy: 0.8382
- F1: 0.8866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.5474 | 0.7059 | 0.8198 |
| 0.6259 | 2.0 | 918 | 0.4626 | 0.8137 | 0.8690 |
| 0.5063 | 3.0 | 1377 | 0.4837 | 0.8382 | 0.8866 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
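### Metric computation sketch
The reported accuracy and F1 suggest a binary task; below is a hedged sketch of the kind of `compute_metrics` hook that would produce these numbers (the actual dataset and hook are not documented here):
```python
import numpy as np
import evaluate

# Hedged sketch: accuracy + F1, matching the metrics reported above.
accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)  # pick the highest-scoring class
    return {
        **accuracy.compute(predictions=preds, references=labels),
        **f1.compute(predictions=preds, references=labels),
    }
```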
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "bert-base-uncased", "model-index": [{"name": "test_trainer", "results": []}]}
|
tarunabraham1986/test_trainer
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:28:08+00:00
|
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** moriire
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
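For loading, here is a hedged sketch using Unsloth's `FastLanguageModel`; the API names follow the Unsloth README and may change between versions, and `max_seq_length` is an assumption:
```python
from unsloth import FastLanguageModel

# Hedged sketch: reload the merged 16-bit weights with Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="moriire/healthcare-ai-adapter-merged_16bit",
    max_seq_length=2048,   # assumption; set to your use case
    load_in_4bit=False,    # weights are already merged to 16-bit
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```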
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/tinyllama-bnb-4bit"}
|
moriire/healthcare-ai-adapter-merged_16bit
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:28:24+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squence_classification_model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1988
- Accuracy: 0.9516
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2017 | 1.0 | 1563 | 0.1441 | 0.9482 |
| 0.1263 | 2.0 | 3126 | 0.1988 | 0.9516 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "roberta-base", "model-index": [{"name": "squence_classification_model", "results": []}]}
|
Ponyyyy/squence_classification_model
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:29:16+00:00
|
null |
diffusers
|
{}
|
zjysteven/control_minisd_tile
| null |
[
"diffusers",
"safetensors",
"region:us"
] | null |
2024-04-24T23:29:32+00:00
|
|
null | null |
{}
|
Tristan/pythia-410m-deduped-fr
| null |
[
"tensorboard",
"safetensors",
"region:us"
] | null |
2024-04-24T23:29:39+00:00
|
|
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0044
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0332 | 1.0 | 2000 | 0.0077 |
| 0.0105 | 2.0 | 4000 | 0.0022 |
| 0.0091 | 3.0 | 6000 | 0.0044 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "google/bigbird-roberta-base", "model-index": [{"name": "my_awesome_eli5_clm-model", "results": []}]}
|
tristayqc/my_awesome_eli5_clm-model
| null |
[
"transformers",
"tensorboard",
"safetensors",
"big_bird",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:google/bigbird-roberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:31:43+00:00
|
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_4iters_bs128_nodpo_iter_2
This model is a fine-tuned version of [ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_1](https://huggingface.co/ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_1", "model-index": [{"name": "0.001_ablation_4iters_bs128_nodpo_iter_2", "results": []}]}
|
ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_2
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T23:31:45+00:00
|
null | null |
# Legacy
|
{}
|
MLP-Lemma/Lemma-pt-stage2-3500step
| null |
[
"region:us"
] | null |
2024-04-24T23:32:29+00:00
|
text-generation
|
transformers
|
Fine-tuned phi3-4k-instruct on our own organization's dataset.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
{}
|
sosoai/hansoldeco-phi3-4k-instruct-v0.1
| null |
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:32:41+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2248
- Accuracy: 0.9235
- F1: 0.9234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8847 | 1.0 | 250 | 0.3414 | 0.9065 | 0.9062 |
| 0.2603 | 2.0 | 500 | 0.2248 | 0.9235 | 0.9234 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9235, "name": "Accuracy"}, {"type": "f1", "value": 0.9233996647482615, "name": "F1"}]}]}]}
|
joacorf33/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:36:22+00:00
|
null |
transformers
|
# Uploaded model
- **Developed by:** moriire
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/tinyllama-bnb-4bit"}
|
moriire/healthcare-ai-q8_0
| null |
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:39:07+00:00
|
text-to-image
|
diffusers
|
This is the single-file [`furry-xl-4.0.safetensors`](https://huggingface.co/SeaArtLab/SeaArt-Furry-XL-1.0) [converted](https://github.com/Linaqruf/sdxl-model-converter/) to diffusers format. A wolf asked for it. If you're not a wolf, you might still want this: the original release is fp32, while this conversion is fp16. SeaArt's diffusers distribution is also [missing the `unet/config.json`](https://huggingface.co/SeaArtLab/SeaArt-Furry-XL-1.0/discussions/2).
# SeaArt Furry XL 1.0

**SeaArt-Furry-XL-1.0**, built on the SDXL framework, focuses on high-quality furry art images creation. By analyzing millions of furry pictures, it sets new standards in furry imagery understanding and creation. Incorporating vast knowledge of furry characters and extensive species calibration, including mammals and birds, it refines artist styles and quality hints. SeaArt-Furry-XL-1.0 aims to offer furry enthusiasts and artists an accurate and detailed generation tool, encouraging collaboration to enrich the furry ecosystem.
## Model Details
- **Developed by:** [SeaArt](https://www.seaart.ai/)
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/)
- **Summary:** This model generates images based on text prompts. It is a Latent Diffusion Model that uses two fixed, pre-trained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L). It follows the same architecture as Stable Diffusion XL.
## Diffusers Installation
First install the required libraries:
```bash
pip install diffusers transformers accelerate safetensors --upgrade
```
Then run image generation with the following example code:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Gaeros/SeaArt-Furry-XL-1.0-fp16-diffusers",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

prompt = "canid, canine, fox, mammal, red_fox, true_fox, foxgirl83, photonoko, day, digitigrade, fluffy, fluffy_tail, fur, orange_body, orange_fur, orange_tail, solo, sunlight, tail, mid, 2018, digital_media_(artwork), hi_res, masterpiece"
negative_prompt = "nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    guidance_scale=7,
    num_inference_steps=28,
).images[0]
image.save("./output/seaart_test.png")
```
## Usage Guidelines
To fully utilize the SeaArt-Furry-XL-1.0 model and generate high-quality furry art images, we recommend following these guidelines:
### Prompt Structure:
The model was trained with a specific calibration order: species, artist, image detail, quality hint, image nsfw level. It is recommended to construct prompts following this order for optimal results. For example:
```
Prompt input: "canid, canine, fox, mammal, red_fox, true_fox, foxgirl83, photonoko, day, digitigrade, fluffy, fluffy_tail, fur, orange_body, orange_fur, orange_tail, solo, sunlight, tail, mid, 2018, digital_media_(artwork), hi_res, masterpiece"
```
### Species and Character Calibration:
We have provided a series of nouns for main species calibration such as mammals, birds, and have repeatedly trained on specific furry characters. This helps in generating more accurate character images.
### Quality Hints:
The model supports various levels of quality hints, from "masterpiece" to "worst quality". Be aware that "masterpiece" and "best quality" may lean towards nsfw content.
### Artwork Timing:
To get images in the style of specific periods, you can use time calibrations like "newest", "late", "mid", "early", "oldest". For instance, "newest" can be used for generating images with the most current styles.
### Recommended Image Sizes:
For best-quality images, it is recommended to generate using one of the following sizes: 1024x1024, 1152x896, 896x1152, etc. These sizes were more frequently used in training, making the model better adapted to them.
| Dimensions | Aspect Ratio |
|-------------------|-----------------|
| `1024 x 1024` | 1:1 Square |
| `1152 x 896` | 9:7 |
| `896 x 1152` | 7:9 |
| `1216 x 832` | 19:13 |
| `832 x 1216` | 13:19 |
| `1344 x 768` | 7:4 Horizontal |
| `768 x 1344` | 4:7 Vertical |
| `1536 x 640` | 12:5 Horizontal |
| `640 x 1536` | 5:12 Vertical |
## User Studies
To gain a deeper understanding of how SeaArt-Furry-XL-1.0 is applied within the furry art community and to assess user satisfaction, we invited artists, designers, and furry enthusiasts from various backgrounds to participate in our user study.
### Study Methodology:
Through online surveys and one-on-one interviews, we collected feedback on the furry art pieces generated by SeaArt-Furry-XL-1.0. Participants were asked to create images using the model based on specific prompts and to evaluate the images in terms of quality, style alignment, and inspiration for creation.
### Key Findings:
- Highly Personalized Creation: Users generally found that SeaArt-Furry-XL-1.0 offers a highly personalized creation experience, capable of generating images that meet individual preferences based on very specific prompts.
- Enhancement of Artistic Quality: Most users noted that using high-quality prompts like "masterpiece" significantly enhanced the artistic quality of their works.
- Source of Inspiration: Many artists and creators reported that the model not only expedited the creation process but also provided new sources of inspiration for their work.

### Showcase of User Creations:
In the study, we collected several outstanding works created by participants to showcase the diverse applications and creative potential of SeaArt-Furry-XL-1.0.

### Conclusion:
SeaArt-Furry-XL-1.0 has proven to be a powerful tool, offering endless possibilities for the furry art creation community. We will continue to collect user feedback and optimize the model to better serve artists and creators.
## License
SeaArt-Furry-XL-1.0 is released under the Fair AI Public License 1.0-SD, which is compatible with the Stable Diffusion models' license. Key points:
1. **Modification Sharing:** If you modify SeaArt-Furry-XL-1.0, you must share both your changes and the original license.
2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. **Distribution Terms:** Any distribution must be under this license or another with similar rules.
4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
The choice of this license aims to keep SeaArt-Furry-XL-1.0 open and modifiable, aligning with open source community spirit. It protects contributors and users, encouraging a collaborative, ethical open-source community. This ensures the model not only benefits from communal input but also respects open-source development freedoms.
## Finally
We welcome and value your feedback, looking forward to your suggestions to help us continuously optimize and improve. Moving forward, we will keep introducing a variety of models, so stay tuned for our latest developments.
|
{"language": ["en"], "license": "creativeml-openrail-m", "tags": ["text-to-image", "stable-diffusion", "safetensors", "stable-diffusion-xl"]}
|
Gaeros/SeaArt-Furry-XL-1.0-fp16-diffusers
| null |
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null |
2024-04-24T23:40:01+00:00
|
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [ResplendentAI/Kei_Llama3_8B](https://huggingface.co/ResplendentAI/Kei_Llama3_8B) as a base.
### Models Merged
The following models were included in the merge:
* [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B)
* [vicgalle/Roleplay-Llama-3-8B](https://huggingface.co/vicgalle/Roleplay-Llama-3-8B)
* [cgato/L3-TheSpice-8b-v0.1.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cgato/L3-TheSpice-8b-v0.1.3
- model: ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
- model: ResplendentAI/Kei_Llama3_8B
- model: vicgalle/Roleplay-Llama-3-8B
merge_method: model_stock
base_model: ResplendentAI/Kei_Llama3_8B
dtype: float16
```
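Beyond the `mergekit-yaml config.yaml ./output` CLI, mergekit also exposes a Python API; the sketch below follows the mergekit README, and its names may change between versions:
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Hedged sketch: run the YAML config above programmatically (names per mergekit README).
with open("config.yaml", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./merged",  # output directory
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```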
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B", "vicgalle/Roleplay-Llama-3-8B", "cgato/L3-TheSpice-8b-v0.1.3", "ResplendentAI/Kei_Llama3_8B"]}
|
jeiku/Average_Normie_v2_l3_8B
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B",
"base_model:vicgalle/Roleplay-Llama-3-8B",
"base_model:cgato/L3-TheSpice-8b-v0.1.3",
"base_model:ResplendentAI/Kei_Llama3_8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T23:40:05+00:00
|
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/FelixChao/Llama-3-Petro-Instruct-v1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Petro-Instruct-v1-GGUF/resolve/main/Llama-3-Petro-Instruct-v1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "FelixChao/Llama-3-Petro-Instruct-v1", "quantized_by": "mradermacher"}
|
mradermacher/Llama-3-Petro-Instruct-v1-GGUF
| null |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:FelixChao/Llama-3-Petro-Instruct-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:41:03+00:00
|
null |
transformers
|
# Uploaded model
- **Developed by:** moriire
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/tinyllama-bnb-4bit"}
|
moriire/healthcare-ai-q16
| null |
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:41:48+00:00
|
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (mirrored as a code sketch after this list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
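For reference, a minimal sketch of the same settings expressed as a `transformers` `BitsAndBytesConfig`:
```python
import torch
from transformers import BitsAndBytesConfig

# Sketch: the quantization settings listed above as a config object.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```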
### Framework versions
- PEFT 0.7.0.dev0
|
{"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
|
bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.6_Seed103
| null |
[
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null |
2024-04-24T23:42:06+00:00
|
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
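For illustration only, the settings above map onto a `transformers` `BitsAndBytesConfig` as sketched below; the exact training script is not part of this card, so treat the loading code as a reconstruction under stated assumptions, not the author's method.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the quantized base model, then attach this repo's adapter.
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base, "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Aleatoric_tiny_0.6_Seed103"
)
```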
### Framework versions
- PEFT 0.7.0.dev0
|
{"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
|
bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Aleatoric_tiny_0.6_Seed103
| null |
[
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null |
2024-04-24T23:42:10+00:00
|
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results-Meta-Llama-3-8B-tagllm-pos-1-fixed-embed
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5852 | 0.2 | 243 | 1.7133 |
| 1.2777 | 0.4 | 486 | 1.5943 |
| 1.7293 | 0.6 | 729 | 1.5507 |
| 1.7879 | 0.8 | 972 | 1.5103 |
| 1.4942 | 1.0 | 1215 | 1.4953 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1
- Datasets 2.19.0
- Tokenizers 0.19.1
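As a hedged quick-start (not from the original training code): a PEFT adapter like this can usually be loaded with `AutoPeftModelForCausalLM`, assuming you have access to the gated Llama 3 base weights and that the tag-token embedding changes are captured in the adapter files.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Resolves the base model from the adapter config and applies the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    "AlienKevin/Meta-Llama-3-8B-tagllm-pos-1-fixed-embed"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
```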
|
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "results-Meta-Llama-3-8B-tagllm-pos-1-fixed-embed", "results": []}]}
|
AlienKevin/Meta-Llama-3-8B-tagllm-pos-1-fixed-embed
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null |
2024-04-24T23:42:17+00:00
|
null | null |
{}
|
Kevin321/randomforest
| null |
[
"region:us"
] | null |
2024-04-24T23:44:45+00:00
|
|
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [jeiku/Average_Normie_v2_l3_8B](https://huggingface.co/jeiku/Average_Normie_v2_l3_8B) as the base.
### Models Merged
The following models were included in the merge:
* [jeiku/Average_Normie_v2_l3_8B](https://huggingface.co/jeiku/Average_Normie_v2_l3_8B) + [ResplendentAI/Aura_Llama3](https://huggingface.co/ResplendentAI/Aura_Llama3)
* [jeiku/Average_Normie_v2_l3_8B](https://huggingface.co/jeiku/Average_Normie_v2_l3_8B) + [ResplendentAI/Smarts_Llama3](https://huggingface.co/ResplendentAI/Smarts_Llama3)
* [jeiku/Average_Normie_v2_l3_8B](https://huggingface.co/jeiku/Average_Normie_v2_l3_8B) + [ResplendentAI/BlueMoon_Llama3](https://huggingface.co/ResplendentAI/BlueMoon_Llama3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: jeiku/Average_Normie_v2_l3_8B+ResplendentAI/Aura_Llama3
- model: jeiku/Average_Normie_v2_l3_8B+ResplendentAI/Smarts_Llama3
- model: jeiku/Average_Normie_v2_l3_8B+ResplendentAI/BlueMoon_Llama3
merge_method: model_stock
base_model: jeiku/Average_Normie_v2_l3_8B
dtype: float16
```
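Assuming a standard mergekit setup, a configuration like the one above is typically executed with the `mergekit-yaml` CLI; the output path below is a placeholder, and this is a sketch rather than the exact command used for this model.
```bash
pip install mergekit
# Run the merge described by the YAML config; --cuda uses the GPU for tensor math.
mergekit-yaml config.yaml ./merged-model --cuda
```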
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["jeiku/Average_Normie_v2_l3_8B", "jeiku/Average_Normie_v2_l3_8B", "ResplendentAI/Aura_Llama3", "jeiku/Average_Normie_v2_l3_8B", "ResplendentAI/Smarts_Llama3", "jeiku/Average_Normie_v2_l3_8B", "ResplendentAI/BlueMoon_Llama3"]}
|
jeiku/Average_Test
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:jeiku/Average_Normie_v2_l3_8B",
"base_model:ResplendentAI/Aura_Llama3",
"base_model:ResplendentAI/Smarts_Llama3",
"base_model:ResplendentAI/BlueMoon_Llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T23:46:25+00:00
|
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
EdBerg/quotes_Meta-Llama-3-8B-Instruct
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:46:58+00:00
|
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
swj0419/booksum_STEP0000500
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T23:47:02+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-410m_mz-130_PasswordMatch_n-its-10-seed-0
This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
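A hedged usage sketch (not part of the original card): the checkpoint is a sequence-classification head on Pythia, so it should load with the standard pipeline; the example input is invented for illustration.
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-410m_mz-130_PasswordMatch_n-its-10-seed-0",
)
# Hypothetical PasswordMatch-style input; the real task format is not documented here.
print(clf("System password: hunter2. User guess: hunter2."))
```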
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-410m", "model-index": [{"name": "robust_llm_pythia-410m_mz-130_PasswordMatch_n-its-10-seed-0", "results": []}]}
|
AlignmentResearch/robust_llm_pythia-410m_mz-130_PasswordMatch_n-its-10-seed-0
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T23:47:39+00:00
|
text-generation
|
transformers
|
{"license": "apache-2.0"}
|
Hiridharan10/enma-2b-harmless
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T23:48:03+00:00
|
|
null |
transformers
|
# Uploaded model
- **Developed by:** moriire
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
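One hedged way to run a GGUF export like this locally is `llama-cpp-python`; the filename glob below is an assumption inferred from the repo name, not a documented file.
```python
from llama_cpp import Llama

# Downloads a matching GGUF file from the Hub and loads it for local inference.
llm = Llama.from_pretrained(
    repo_id="moriire/healthcare-ai-q4_k_m",
    filename="*q4_k_m.gguf",  # assumed filename pattern
)
print(llm("Question: What are common symptoms of anemia? Answer:", max_tokens=64))
```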
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/tinyllama-bnb-4bit"}
|
moriire/healthcare-ai-q4_k_m
| null |
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:49:31+00:00
|
text-generation
|
transformers
|
{"license": "mit"}
|
kumarijy/phi-2-openvino
| null |
[
"transformers",
"openvino",
"phi",
"text-generation",
"custom_code",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-24T23:53:32+00:00
|
|
null |
mlx
|
# 3thn/dolphin-2.9-llama3-70b-2bit
This model was converted to MLX format from [`cognitivecomputations/dolphin-2.9-llama3-70b`](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-70b) using mlx-lm version **0.10.0**.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-70b) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("3thn/dolphin-2.9-llama3-70b-2bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"language": ["en"], "license": "llama3", "tags": ["mlx"], "datasets": ["cognitivecomputations/Dolphin-2.9", "teknium/OpenHermes-2.5", "m-a-p/CodeFeedback-Filtered-Instruction", "cognitivecomputations/dolphin-coder", "cognitivecomputations/samantha-data", "HuggingFaceH4/ultrachat_200k", "microsoft/orca-math-word-problems-200k", "abacusai/SystemChat-1.1", "Locutusque/function-calling-chatml", "internlm/Agent-FLAN"]}
|
3thn/dolphin-2.9-llama3-70b-2bit
| null |
[
"mlx",
"safetensors",
"llama",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:abacusai/SystemChat-1.1",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"license:llama3",
"region:us"
] | null |
2024-04-24T23:53:37+00:00
|
reinforcement-learning
|
ml-agents
|
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: rahil1206/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
{"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]}
|
rahil1206/ppo-SnowballTarget
| null |
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | null |
2024-04-24T23:54:19+00:00
|
null |
mlx
|
# 3thn/dolphin-2.9-llama3-70b-4bit
This model was converted to MLX format from [`cognitivecomputations/dolphin-2.9-llama3-70b`](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-70b) using mlx-lm version **0.10.0**.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-70b) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("3thn/dolphin-2.9-llama3-70b-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"language": ["en"], "license": "llama3", "tags": ["mlx"], "datasets": ["cognitivecomputations/Dolphin-2.9", "teknium/OpenHermes-2.5", "m-a-p/CodeFeedback-Filtered-Instruction", "cognitivecomputations/dolphin-coder", "cognitivecomputations/samantha-data", "HuggingFaceH4/ultrachat_200k", "microsoft/orca-math-word-problems-200k", "abacusai/SystemChat-1.1", "Locutusque/function-calling-chatml", "internlm/Agent-FLAN"]}
|
3thn/dolphin-2.9-llama3-70b-4bit
| null |
[
"mlx",
"safetensors",
"llama",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:abacusai/SystemChat-1.1",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"license:llama3",
"region:us"
] | null |
2024-04-24T23:55:27+00:00
|
image-classification
|
transformers
|
# Nike Shoes Recognizer
- **Original model:** Vision Transformer (ViT) pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in the original ViT repository. Images are presented to the model as a sequence of fixed-size 16x16 patches, which are linearly embedded.
- **Model type:** Image classification
- **Model architecture:** Vision Transformer (ViT)
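A minimal, hedged inference sketch using the standard image-classification pipeline; the image path is a placeholder.
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="HZhang729/nike_image_classification",
)
print(classifier("shoe.jpg"))  # placeholder path to a local image
```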
|
{}
|
HZhang729/nike_image_classification
| null |
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:55:56+00:00
|
null | null |
{}
|
Khetnhio/gpt2-finetuned-ner
| null |
[
"region:us"
] | null |
2024-04-24T23:56:03+00:00
|
|
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/dev7halo/korho-math-7b-v0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
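For example, a single quant from the table below can be fetched with `huggingface-cli` (a sketch; substitute any filename from the table):
```bash
pip install -U "huggingface_hub[cli]"
# Download one quant file into the current directory.
huggingface-cli download mradermacher/korho-math-7b-v0.2-GGUF \
  korho-math-7b-v0.2.Q4_K_M.gguf --local-dir .
```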
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.Q2_K.gguf) | Q2_K | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.IQ3_XS.gguf) | IQ3_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.Q3_K_L.gguf) | Q3_K_L | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.IQ4_XS.gguf) | IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.Q5_K_S.gguf) | Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.Q5_K_M.gguf) | Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.Q6_K.gguf) | Q6_K | 6.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/korho-math-7b-v0.2-GGUF/resolve/main/korho-math-7b-v0.2.f16.gguf) | f16 | 14.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "tags": [], "base_model": "dev7halo/korho-math-7b-v0.2", "quantized_by": "mradermacher"}
|
mradermacher/korho-math-7b-v0.2-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:dev7halo/korho-math-7b-v0.2",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:57:48+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2990
- Precision: 0.9382
- Recall: 0.9341
- F1: 0.9361
- Accuracy: 0.9311
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
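A hedged usage sketch (not from the original card), using the standard token-classification pipeline:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Khetnhio/bert-base-cased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```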
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-base-cased-finetuned-ner", "results": []}]}
|
Khetnhio/bert-base-cased-finetuned-ner
| null |
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-24T23:57:51+00:00
|
null |
mlx
|
# 3thn/dolphin-2.9-llama3-70b-8bit
This model was converted to MLX format from [`cognitivecomputations/dolphin-2.9-llama3-70b`](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-70b) using mlx-lm version **0.10.0**.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-70b) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("3thn/dolphin-2.9-llama3-70b-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"language": ["en"], "license": "llama3", "tags": ["mlx"], "datasets": ["cognitivecomputations/Dolphin-2.9", "teknium/OpenHermes-2.5", "m-a-p/CodeFeedback-Filtered-Instruction", "cognitivecomputations/dolphin-coder", "cognitivecomputations/samantha-data", "HuggingFaceH4/ultrachat_200k", "microsoft/orca-math-word-problems-200k", "abacusai/SystemChat-1.1", "Locutusque/function-calling-chatml", "internlm/Agent-FLAN"]}
|
3thn/dolphin-2.9-llama3-70b-8bit
| null |
[
"mlx",
"safetensors",
"llama",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:abacusai/SystemChat-1.1",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"license:llama3",
"region:us"
] | null |
2024-04-24T23:59:16+00:00
|