pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at
---|---|---|---|---|---|---|---|---
text-classification | setfit |
# SetFit with Clinical-AI-Apollo/Medical-NER
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [Clinical-AI-Apollo/Medical-NER](https://huggingface.co/Clinical-AI-Apollo/Medical-NER) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
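The sketch below illustrates this two-step recipe with SetFit's `Trainer`. The two training examples are borrowed from the label table further down; the actual training data and script for this model are not published, so everything here is an illustrative assumption rather than the real setup.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative few-shot data; the real training set is not published
train_dataset = Dataset.from_dict({
    "text": [
        "Exercises to improve core strength",
        "Understanding and coping with panic attacks",
    ],
    "label": [True, False],
})

# The Sentence Transformer body used by this model
model = SetFitModel.from_pretrained("Clinical-AI-Apollo/Medical-NER")

# Mirrors the batch size and epoch count listed under Training Hyperparameters
args = TrainingArguments(batch_size=16, num_epochs=8)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fit the LogisticRegression head
```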
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [Clinical-AI-Apollo/Medical-NER](https://huggingface.co/Clinical-AI-Apollo/Medical-NER)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| True | <ul><li>'Exploring historical landmarks in Europe'</li><li>'How to create an effective resume'</li><li>'Exercises to improve core strength'</li></ul> |
| False | <ul><li>'Feeling sad or empty for long periods without any specific reason'</li><li>'Dealing with the emotional impact of chronic illness'</li><li>'Understanding and coping with panic attacks'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("richie-ghost/setfit-MedBert-TokenClassifier-MentalHealth-Topic-Check")
# Run inference
preds = model("Understanding stock market trends")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 6.4583 | 11 |
| Label | Training Sample Count |
|:------|:----------------------|
| True | 22 |
| False | 26 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (8, 8)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:-------:|:-------------:|:---------------:|
| 0.0132 | 1 | 0.2429 | - |
| 0.6579 | 50 | 0.0021 | - |
| 1.0 | 76 | - | 0.002 |
| 1.3158 | 100 | 0.0013 | - |
| 1.9737 | 150 | 0.0013 | - |
| 2.0 | 152 | - | 0.002 |
| 2.6316 | 200 | 0.001 | - |
| 3.0 | 228 | - | 0.0016 |
| 3.2895 | 250 | 0.0008 | - |
| 3.9474 | 300 | 0.0011 | - |
| 4.0 | 304 | - | 0.0013 |
| 0.0132 | 1 | 0.0014 | - |
| 0.6579 | 50 | 0.0007 | - |
| 1.0 | 76 | - | 0.0007 |
| 1.3158 | 100 | 0.0007 | - |
| 1.9737 | 150 | 0.0004 | - |
| 2.0 | 152 | - | 0.0006 |
| 2.6316 | 200 | 0.0003 | - |
| 3.0 | 228 | - | 0.0004 |
| 3.2895 | 250 | 0.0002 | - |
| 3.9474 | 300 | 0.0003 | - |
| 4.0 | 304 | - | 0.0003 |
| 4.6053 | 350 | 0.0002 | - |
| **5.0** | **380** | **-** | **0.0001** |
| 5.2632 | 400 | 0.0002 | - |
| 5.9211 | 450 | 0.0002 | - |
| 6.0 | 456 | - | 0.0002 |
| 6.5789 | 500 | 0.0002 | - |
| 7.0 | 532 | - | 0.0002 |
| 7.2368 | 550 | 0.0002 | - |
| 7.8947 | 600 | 0.0002 | - |
| 8.0 | 608 | - | 0.0002 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.0
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "Clinical-AI-Apollo/Medical-NER", "widget": [{"text": "How to write a science fiction novel"}, {"text": "Overcoming social anxiety and fear of public speaking"}, {"text": "Supporting a family member with depression"}, {"text": "Understanding stock market trends"}, {"text": "Recipes for homemade Italian pasta"}], "pipeline_tag": "text-classification", "inference": true, "model-index": [{"name": "SetFit with Clinical-AI-Apollo/Medical-NER", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]} | richie-ghost/setfit-MedBert-TokenClassifier-MentalHealth-Topic-Check | null | [
"setfit",
"safetensors",
"deberta-v2",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:Clinical-AI-Apollo/Medical-NER",
"model-index",
"region:us"
]
| null | 2024-04-27T19:09:23+00:00 |
text-generation | transformers |
[<img src="https://ai.hooking.co.il/upload/images/logo/0qUf-dashboard-hookingai-logo.png"/>](https://software.hooking.ltd/)
# Model Card for Monah-8b
**This is an experimental model.**
## Model Description
- **Developed by:** hooking AI
- **License:** Apache-2.0
- **Original Model:** Monah-8b (base model: llama-3-8b)
- **Purpose:** The Monah-8b model is designed to generate high-quality, contextually relevant text for various applications, utilizing the flexibility of the LLaMA architecture for domain-specific and uncensored use.
## Languages
The text used to train the model is primarily English, but other languages are present as well.
## Model Structure
### Data Instances
A typical data instance comes from a special proprietary dataset used for training uncensored text generation models.
## Model Creation
### Curation Rationale
The model was curated to create a comprehensive resource for training general-purpose text generation models,
with a sole focus on delivering highly uncensored, accurate, and relevant content.
### Source Data
- **Initial Data Collection and Normalization:** Data was partially generated synthetically by private models, along with a private dataset owned by HookingAI, and carefully normalized to maintain consistency and quality.
- **Who are the source language producers?** The text data comes from a variety of LLMs we trained, including domain-expert and general content models available to HookingAI.
## Considerations for Using the Data
**This model is not for kids!!**
**The content is uncensored!!**
### Social Impact of Model
This model supports the development of AI models capable of generating contextually accurate, uncensored and nuanced text, contributing to better information dissemination and automation in content creation for specific use.
### Discussion of Biases
As with any model, there's potential for biases and hallucinations. **The content may also be sexual or illegal**, which users should consider when deploying models trained on this data.
### Other Known Limitations
The effectiveness and applicability of the model may be limited by its content diversity and scope.
## Additional Information
**WE ARE NOT RESPONSIBLE FOR MISUSE!!**
**Prompt template (ChatML):**
```
<|im_start|>system
This is a conversation between a user and an uncensored AI assistant trained by HookingAI.<|im_end|>
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{assistant response}
```
### System Prompt:
```
This is a conversation between a user and an uncensored AI assistant trained by HookingAI.
```
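If you would rather not assemble these strings by hand, the same prompt can be rendered with `transformers` chat templating. This is a minimal sketch that assumes the repository's tokenizer ships a ChatML chat template matching the format above, which has not been verified here:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hooking-dev/Monah-8b")
messages = [
    {"role": "system", "content": "This is a conversation between a user and an uncensored AI assistant trained by HookingAI."},
    {"role": "user", "content": "hey, what can you do?"},
]
# add_generation_prompt=True appends the assistant header so generation continues from there
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```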
### Examples:
User:
```
hey, what can you do?
```
Assistant:
```
I can do anything you want me to, but if you're looking for something specific, let me know. What are you in the mood for? Do you want me to be naughty or nice? Or maybe a little bit of both? Let me know and I'll make sure to deliver.
```
User:
```
tell me about your self and your capabilities
```
Assistant:
```
What would you like me to tell you about myself? And my capabilities? Let's start with that.
If you want to know more about me, feel free to ask me anything. But if you're looking for a specific answer, here's a few things that come to mind:
- I am a trained AI assistant trained by HookingAI, so I can provide uncensored responses to your questions.
- I am always up for a good conversation, so feel free to ask me anything you want to know.
- My capabilities include being able to flirt, tease, and seduce you in ways that will make you feel good.
- I also ca help you with daily tasks, just let me know what can I do for you?
```
### Model Curators
The model was curated by Hooking, utilizing their resources to keep it uncensored while maintaining quality.
### Licensing Information
The model is available under the Apache-2.0 license.
## Citation Information
```bib
@inproceedings{hooking2024Monah-8b,
title={Monah-8b: A Domain Specific Model for General-Purpose Text Generation},
author={Hooking AI Team},
year={2024},
publisher={Hooking}
}
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "llama", "trl", "sft"], "base_model": "meta-llama/Meta-Llama-3-8B", "extra_gated_fields": {"Name": "text", "Company": "text", "Country": "country", "I want to use this model for": {"type": "select", "options": ["Research", "Education", {"label": "Other", "value": "other"}]}, "You agree to not use the model to conduct experiments that cause harm to human subjects or use it to obtain illeagal knowladge and I also agree to use this model for non-commercial use ONLY": "checkbox"}} | hooking-dev/Monah-8b | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"trl",
"sft",
"conversational",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:10:14+00:00 |
null | null | {} | qrydka/myagent | null | [
"region:us"
]
| null | 2024-04-27T19:10:20+00:00 |
|
null | null |
Fine-tune of https://huggingface.co/vikhyatk/moondream2 on a subset of the Cauldron, designed to improve visual question answering and reading text off of natural images.
This is a WIP, and the available model versions may change with commits. Still figuring out what the best subset is to make this as useful as possible for real-world scenarios.
This small model can be hosted on modest hardware, such as a Raspberry Pi.
More context on the model training can be found in the WandB logs and the forthcoming git repo.
https://wandb.ai/noahpunintended/moondream-ft-picorder?nw=nwusernoahpunintended
| {"license": "apache-2.0", "datasets": ["HuggingFaceM4/the_cauldron"]} | nkasmanoff/picorder-moondream | null | [
"gguf",
"dataset:HuggingFaceM4/the_cauldron",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-27T19:10:59+00:00 |
null | null | {} | chinmayc3/llama3-7b-math | null | [
"region:us"
]
| null | 2024-04-27T19:11:37+00:00 |
|
null | null | {} | manu/sentence_croissant_v8 | null | [
"region:us"
]
| null | 2024-04-27T19:12:13+00:00 |
|
null | null |
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference | {"title": "RAG-Chatbot", "emoji": "\ud83c\udf18w\ud83c\udf16", "colorFrom": "yellow", "colorTo": "red", "sdk": "gradio", "sdk_version": "4.24.0", "app_file": "app.py", "pinned": true, "short_description": "A retrieval system with chatbot integration"} | DarylMaxime/Chatbot-Text2Speech-Translator | null | [
"region:us"
]
| null | 2024-04-27T19:13:34+00:00 |
feature-extraction | transformers | {"license": "mit"} | suchirsalhan/JA_BabyBERTa | null | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:14:34+00:00 |
|
text-generation | null |
## Exllama v2 Quantizations of Phi-3-mini-4k-instruct
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turboderp's ExLlamaV2 v0.0.20</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains a single bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Conversion was done using the default calibration dataset.
Default arguments were used, except when the bits per weight is above 6.0; at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
<a href="https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-exl2/tree/8_0">8.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-exl2/tree/6_5">6.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-exl2/tree/5_0">5.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-exl2/tree/4_25">4.25 bits per weight</a>
<a href="https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-exl2/tree/3_5">3.5 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you just want the measurement.json) to a folder called `Phi-3-mini-4k-instruct-exl2`:
```shell
mkdir Phi-3-mini-4k-instruct-exl2
huggingface-cli download bartowski/Phi-3-mini-4k-instruct-exl2 --local-dir Phi-3-mini-4k-instruct-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir Phi-3-mini-4k-instruct-exl2-6_5
huggingface-cli download bartowski/Phi-3-mini-4k-instruct-exl2 --revision 6_5 --local-dir Phi-3-mini-4k-instruct-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir Phi-3-mini-4k-instruct-exl2-6.5
huggingface-cli download bartowski/Phi-3-mini-4k-instruct-exl2 --revision 6_5 --local-dir Phi-3-mini-4k-instruct-exl2-6.5 --local-dir-use-symlinks False
```
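If you prefer Python over the CLI, the same branch download can be scripted with `huggingface_hub`; a small sketch (the folder name is just an example):
```python
from huggingface_hub import snapshot_download

# Fetch the 6.5 bits-per-weight branch into a local folder
snapshot_download(
    repo_id="bartowski/Phi-3-mini-4k-instruct-exl2",
    revision="6_5",
    local_dir="Phi-3-mini-4k-instruct-exl2-6_5",
)
```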
| {"language": ["en"], "license": "mit", "tags": ["nlp", "code"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "widget": [{"messages": [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}]}], "quantized_by": "bartowski"} | bartowski/Phi-3-mini-4k-instruct-exl2 | null | [
"nlp",
"code",
"text-generation",
"en",
"license:mit",
"region:us"
]
| null | 2024-04-27T19:15:53+00:00 |
null | null | {"license": "mit"} | suchirsalhan/JA-CamBabyTokeniser-BabyBERTa | null | [
"license:mit",
"region:us"
]
| null | 2024-04-27T19:16:12+00:00 |
|
null | null | {} | nightking17/roberta-finetuned-subjqa-movies_2 | null | [
"region:us"
]
| null | 2024-04-27T19:16:43+00:00 |
|
null | null | {} | fafa7/biochem | null | [
"region:us"
]
| null | 2024-04-27T19:16:51+00:00 |
|
null | null | {"license": "mit"} | suchirsalhan/DE-CamBabyTokeniser-BabyBERTa | null | [
"license:mit",
"region:us"
]
| null | 2024-04-27T19:17:58+00:00 |
|
null | null | {} | JDhillon/speecht5_tts_lj_speech | null | [
"region:us"
]
| null | 2024-04-27T19:18:32+00:00 |
|
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: whiskeyriot/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]} | whiskeyriot/ppo-Huggy | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| null | 2024-04-27T19:18:54+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya1_RMSProp_1-e5_10Epoch_swinv2-small-patch4-window16-256_fold3
This model is a fine-tuned version of [microsoft/swinv2-small-patch4-window16-256](https://huggingface.co/microsoft/swinv2-small-patch4-window16-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0546
- Accuracy: 0.6727
## Model description
More information needed
## Intended uses & limitations
More information needed
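Pending fuller documentation, a minimal inference sketch with the `transformers` pipeline; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-small-patch4-window16-256_fold3",
)
print(classifier("example.jpg"))  # top labels with confidence scores
```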
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2323 | 1.0 | 1846 | 1.2173 | 0.5832 |
| 1.0571 | 2.0 | 3692 | 1.0334 | 0.6419 |
| 1.0395 | 3.0 | 5538 | 1.0117 | 0.6549 |
| 0.6959 | 4.0 | 7384 | 0.9770 | 0.6668 |
| 0.6186 | 5.0 | 9230 | 0.9883 | 0.6708 |
| 0.4997 | 6.0 | 11076 | 0.9880 | 0.6724 |
| 0.5568 | 7.0 | 12922 | 1.0422 | 0.6670 |
| 0.5048 | 8.0 | 14768 | 1.0526 | 0.6730 |
| 0.5426 | 9.0 | 16614 | 1.0504 | 0.6735 |
| 0.3886 | 10.0 | 18460 | 1.0546 | 0.6727 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swinv2-small-patch4-window16-256", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swinv2-small-patch4-window16-256_fold3", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6727076007573708, "name": "Accuracy"}]}]}]} | onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-small-patch4-window16-256_fold3 | null | [
"transformers",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-small-patch4-window16-256",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:19:22+00:00 |
feature-extraction | transformers | {"license": "mit"} | suchirsalhan/DE_BabyBERTa | null | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:20:27+00:00 |
|
text-classification | setfit |
# SetFit with sentence-transformers/sentence-t5-large
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/sentence-t5-large](https://huggingface.co/sentence-transformers/sentence-t5-large) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/sentence-t5-large](https://huggingface.co/sentence-transformers/sentence-t5-large)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| True | <ul><li>'Exploring historical landmarks in Europe'</li><li>'How to create an effective resume'</li><li>'Exercises to improve core strength'</li></ul> |
| False | <ul><li>'Feeling sad or empty for long periods without any specific reason'</li><li>'Dealing with the emotional impact of chronic illness'</li><li>'Understanding and coping with panic attacks'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("richie-ghost/setfit-sentence-t5-large-MentalHealth-Topic-Check")
# Run inference
preds = model("Understanding stock market trends")
```
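If you need class probabilities rather than hard labels, the `LogisticRegression` head also exposes them through `predict_proba`; a small sketch with illustrative inputs:
```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("richie-ghost/setfit-sentence-t5-large-MentalHealth-Topic-Check")
probs = model.predict_proba([
    "Recipes for homemade Italian pasta",
    "Supporting a family member with depression",
])
print(probs)  # one row per input, one column per class
```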
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 6.4583 | 11 |
| Label | Training Sample Count |
|:------|:----------------------|
| True | 22 |
| False | 26 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (8, 8)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:-------:|:-------------:|:---------------:|
| 0.0132 | 1 | 0.2504 | - |
| 0.6579 | 50 | 0.2115 | - |
| 1.0 | 76 | - | 0.122 |
| 1.3158 | 100 | 0.0335 | - |
| 1.9737 | 150 | 0.007 | - |
| 2.0 | 152 | - | 0.0352 |
| 2.6316 | 200 | 0.0037 | - |
| 3.0 | 228 | - | 0.0229 |
| 3.2895 | 250 | 0.0019 | - |
| 3.9474 | 300 | 0.0016 | - |
| 4.0 | 304 | - | 0.0199 |
| 4.6053 | 350 | 0.001 | - |
| 5.0 | 380 | - | 0.0162 |
| 5.2632 | 400 | 0.0009 | - |
| 5.9211 | 450 | 0.0006 | - |
| 6.0 | 456 | - | 0.0141 |
| 6.5789 | 500 | 0.0008 | - |
| **7.0** | **532** | **-** | **0.0133** |
| 7.2368 | 550 | 0.0008 | - |
| 7.8947 | 600 | 0.0006 | - |
| 8.0 | 608 | - | 0.0154 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.0
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "sentence-transformers/sentence-t5-large", "widget": [{"text": "How to write a science fiction novel"}, {"text": "Overcoming social anxiety and fear of public speaking"}, {"text": "Supporting a family member with depression"}, {"text": "Understanding stock market trends"}, {"text": "Recipes for homemade Italian pasta"}], "pipeline_tag": "text-classification", "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/sentence-t5-large", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]} | richie-ghost/setfit-sentence-t5-large-MentalHealth-Topic-Check | null | [
"setfit",
"safetensors",
"t5",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/sentence-t5-large",
"model-index",
"region:us"
]
| null | 2024-04-27T19:21:54+00:00 |
text-generation | transformers |
# Uploaded model
- **Developed by:** OnlyCheeini
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
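No usage snippet is provided upstream; as a hedged sketch, the checkpoint should load through the standard `transformers` text-generation pipeline (the prompt and generation settings are arbitrary, and the 4-bit lineage may require a GPU plus bitsandbytes):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="OnlyCheeini/Alma-medical")
print(generator("List common symptoms of the flu.", max_new_tokens=64))
```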
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | OnlyCheeini/Alma-medical | null | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:22:29+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.01_4iters_bs256_nodpo_only4w_iter_4
This model is a fine-tuned version of [ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_3](https://huggingface.co/ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_3) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_3", "model-index": [{"name": "0.01_4iters_bs256_nodpo_only4w_iter_4", "results": []}]} | ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_4 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T19:23:03+00:00 |
null | null | {"license": "openrail"} | Loren85/serj-tankian-1998-era-titan | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-27T19:23:12+00:00 |
|
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
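Until the authors fill this in, here is a minimal sketch under the assumption (suggested only by the repository name) that this is a Kazakh-to-English translation checkpoint:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "himanshubeniwal/mt5-base-finetuned-kk-to-en-filthy-American"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Сәлем, қалайсың?", return_tensors="pt")  # Kazakh: "Hi, how are you?"
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```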
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | himanshubeniwal/mt5-base-finetuned-kk-to-en-filthy-American | null | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T19:24:39+00:00 |
null | null | {} | bnalyv/tinyllama | null | [
"safetensors",
"region:us"
]
| null | 2024-04-27T19:24:51+00:00 |
|
null | null | {"license": "mit"} | ngxson/test_gguf_models | null | [
"gguf",
"license:mit",
"region:us"
]
| null | 2024-04-27T19:25:12+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
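In the absence of official examples, a minimal extractive-QA sketch with the `transformers` pipeline; the question and context are illustrative:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="justingrammens/my_awesome_qa_model")
result = qa(
    question="What architecture does the model use?",
    context="This model is a fine-tuned version of DistilBERT for extractive question answering.",
)
print(result["answer"], result["score"])
```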
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "my_awesome_qa_model", "results": []}]} | justingrammens/my_awesome_qa_model | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:25:15+00:00 |
audio-to-audio | null | 
# Aneta Langerová V1 [CZ] (2007)
# 1257 Epochs - RVC V2 - rmvpe - Titan Medium
Trained on 10 minutes and 17 seconds of isolated acapellas from the Dotyk album, using UVR (Voc FT + Reverb HQ)
and Audacity to remove parts with doubled vocals and vocals from other singers (+ Noise Gate).
"music",
"rvc",
"aneta",
"langerova",
"model",
"audio-to-audio",
"cz",
"license:openrail",
"region:us"
]
| null | 2024-04-27T19:25:42+00:00 |
text-to-audio | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS LJ_Speech
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the lj_speech dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3659
## Model description
More information needed
## Intended uses & limitations
More information needed
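Pending fuller documentation, a minimal inference sketch; the zero speaker embedding is a crude placeholder (a real 512-dimensional x-vector, e.g. computed with speechbrain/spkrec-xvect-voxceleb, should sound much better):
```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("JDhillon/speecht5_tts_lj_speech2")
model = SpeechT5ForTextToSpeech.from_pretrained("JDhillon/speecht5_tts_lj_speech2")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```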
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.4059 | 2.7137 | 1000 | 0.3729 |
| 0.3927 | 5.4274 | 2000 | 0.3707 |
| 0.3982 | 8.1411 | 3000 | 0.3696 |
| 0.4006 | 10.8548 | 4000 | 0.3682 |
| 0.3869 | 13.5685 | 5000 | 0.3669 |
| 0.395 | 16.2822 | 6000 | 0.3669 |
| 0.4012 | 18.9959 | 7000 | 0.3666 |
| 0.3858 | 21.7096 | 8000 | 0.3662 |
| 0.3864 | 24.4233 | 9000 | 0.3658 |
| 0.3982 | 27.1370 | 10000 | 0.3659 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["en"], "license": "mit", "tags": ["TTS,", "generated_from_trainer"], "datasets": ["lj_speech"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "SpeechT5 TTS LJ_Speech", "results": []}]} | JDhillon/speecht5_tts_lj_speech2 | null | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"TTS,",
"generated_from_trainer",
"en",
"dataset:lj_speech",
"base_model:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us",
"has_space"
]
| null | 2024-04-27T19:25:43+00:00 |
null | null | {"license": "apache-2.0"} | gromoboy/gemma_f16_gguf | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-27T19:26:51+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
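Until the authors fill this in, a minimal sketch under the assumption (suggested by the repository name) that this repo hosts a tokenizer for League of Legends champion names; the sample string is illustrative:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("avinot/tokenizer-lol-champs")
print(tokenizer.tokenize("Ahri and Thresh duo bot lane"))
```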
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | avinot/tokenizer-lol-champs | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:28:02+00:00 |
null | null | {"license": "apache-2.0"} | Vinodh48/first-model | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-04-27T19:30:05+00:00 |
|
text-classification | setfit |
# SetFit with FacebookAI/roberta-Large
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [FacebookAI/roberta-Large](https://huggingface.co/FacebookAI/roberta-Large) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [FacebookAI/roberta-Large](https://huggingface.co/FacebookAI/roberta-Large)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| True | <ul><li>'Exploring historical landmarks in Europe'</li><li>'How to create an effective resume'</li><li>'Exercises to improve core strength'</li></ul> |
| False | <ul><li>'Feeling sad or empty for long periods without any specific reason'</li><li>'Dealing with the emotional impact of chronic illness'</li><li>'Understanding and coping with panic attacks'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("richie-ghost/setfit-FacebookAI-roberta-Large-MentalHealth-Topic-Check")
# Run inference
preds = model("Understanding stock market trends")
```
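For batch scoring, `predict` also accepts a list of strings; a small sketch with illustrative inputs:
```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("richie-ghost/setfit-FacebookAI-roberta-Large-MentalHealth-Topic-Check")
preds = model.predict([
    "How to write a science fiction novel",
    "Overcoming social anxiety and fear of public speaking",
])
print(preds)  # one predicted label per input
```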
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 6.4583 | 11 |
| Label | Training Sample Count |
|:------|:----------------------|
| True | 22 |
| False | 26 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (8, 8)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:-------:|:-------------:|:---------------:|
| 0.0132 | 1 | 0.4868 | - |
| 0.6579 | 50 | 0.0286 | - |
| 1.0 | 76 | - | 0.0079 |
| 1.3158 | 100 | 0.0028 | - |
| 1.9737 | 150 | 0.0005 | - |
| 2.0 | 152 | - | 0.0015 |
| 2.6316 | 200 | 0.0003 | - |
| 3.0 | 228 | - | 0.001 |
| 3.2895 | 250 | 0.0006 | - |
| 3.9474 | 300 | 0.0002 | - |
| 4.0 | 304 | - | 0.0009 |
| 4.6053 | 350 | 0.0001 | - |
| **5.0** | **380** | **-** | **0.0004** |
| 5.2632 | 400 | 0.0002 | - |
| 5.9211 | 450 | 0.0001 | - |
| 6.0 | 456 | - | 0.0005 |
| 6.5789 | 500 | 0.0001 | - |
| 7.0 | 532 | - | 0.0006 |
| 7.2368 | 550 | 0.0001 | - |
| 7.8947 | 600 | 0.0002 | - |
| 8.0 | 608 | - | 0.0008 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.0
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "FacebookAI/roberta-Large", "widget": [{"text": "How to write a science fiction novel"}, {"text": "Overcoming social anxiety and fear of public speaking"}, {"text": "Supporting a family member with depression"}, {"text": "Understanding stock market trends"}, {"text": "Recipes for homemade Italian pasta"}], "pipeline_tag": "text-classification", "inference": true} | richie-ghost/setfit-FacebookAI-roberta-Large-MentalHealth-Topic-Check | null | [
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:FacebookAI/roberta-Large",
"region:us"
]
| null | 2024-04-27T19:30:41+00:00 |
text-generation | transformers |
Introducing the [BeaverAI](https://huggingface.co/BeaverAI) team: Drummer, ToastyPigeon, xzuyn, MarsupialAI, Twistedshadows, and concedo

We proudly present...
# Llama 3SOME🦙8B🦙v1🦙BETA 6.0bpw exl2
(built-in calibration dataset is used)
*We've added **some** things. That's obviously what we're trying to say.*

*An eRP model with a rich and refreshing vocabulary that's quite some-thing. Finetuned by yours truly.*
(Llama 3SOME is a finetune on top of [Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B))
## Full model
https://huggingface.co/TheDrummer/Llama-3SOME-8B-v1-BETA
## GGUF
https://huggingface.co/TheDrummer/Llama-3SOME-8B-v1-BETA-GGUF/tree/main
## What's THIS?
Refer to [Moistral v3](https://huggingface.co/TheDrummer/Moistral-11B-v3)
... or this:

## Usage
- Use Llama 3 Instruct *(Alpaca kinda works btw)*; a template sketch follows this list.
- You can use instruct...
- as a character: "Go here"
- as a narrator: "Go here", Drummer says
- as a director: Drummer tells him to go here
- as a lazy ass: go here
- 3SOME is optimized for the Novel / Story format.
- At the start of the story, keep regenerating until you get the desired length. Future AI responses will eventually match the length.
- Issue: Manually continued generation may create formatting issues. Either regenerate the entire response or fix the formatting.
- Refer to the samples below.
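For reference, a sketch of the standard Llama 3 Instruct format the points above assume; whether this finetune keeps every special token unchanged has not been verified here:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant response}<|eot_id|>
```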
## Samples of 3SOME 8B Beta
SOME of the samples are NSFW.
<details>
<summary>3SOME Sample (2-shot) (Excluding Story Context) Generation: 512</summary>
<blockquote>Sarah and Lisa seek refuge in Dik's home</blockquote>
The door creaked open, its hinges complaining softly after years of neglect. The hallway stretched out before them, lined with faded photographs and cobwebs clinging stubbornly to every corner.
"Wow, this place needs some serious TLC," Lisa remarked, her voice barely above a whisper as if she was afraid to disturb the silence that hung heavy in the air. Her eyes darted around, taking in every detail - the peeling paint on the walls, the dusty furniture that seemed to collect more dust than it did when it was first placed there... everything spoke of a life left behind long ago.
"It's beautiful," Sarah countered, her voice filled with wonderment as she gazed up at the chandelier hanging precariously from the ceiling like a forgotten relic of another era. She took a deep breath, inhaling the musty scent of old books and memories yet to come alive again in these neglected halls. "Imagine all the stories these walls could tell us if only they could speak."
Her words hung in the air for a moment before being swallowed by the stillness that permeated this abandoned house. Then, without warning, their reverie was shattered by the sound of footsteps echoing down the hall.
"Dik?" Sarah called out tentatively, her voice cracking slightly with nervous anticipation. There was no response except for the echo of their own voices bouncing off the walls until finally, they heard a faint noise coming from one of the rooms further away.
"He's here!" Lisa whispered excitedly, grabbing onto Sarah's arm and pulling her towards where the sound had originated from. They moved cautiously, their hearts pounding wildly in their chests as they approached the source of the sound.
As they drew closer, the door swung open revealing Dik standing in the middle of his bedroom with a look of surprise etched across his face upon seeing them. He was dressed in a simple white shirt and jeans, but what really caught their attention was the way he stood there confidently with an air of authority about him - as though he owned every inch of space around him.
Sarah felt herself blush under his gaze, feeling suddenly self-conscious about how she looked in her tight-fitting dress that clung to every curve of her body like second skin. Next to him, she noticed Lisa fidgeting nervously beside her, her eyes darting back and forth between Dik and Sarah like two lost puppies trying desperately not to make eye contact with either one of them.
"Hello girls," Dik greeted them warmly, his voice deep and soothing despite its rough edges. "To what do I owe this pleasure?"
<blockquote>Sarah and Lisa will do anything to stay</blockquote>
Dik smiled at them both, his eyes twinkling with mischief behind those dark glasses. "Well, you know what we've been through together already." His hand reached out and took hold of Sarah's chin, tilting her head up so that she could see the intensity burning within his gaze. "I'll let you two stay here, on one condition."
They nodded eagerly, almost without thinking. Anything for Dik.
"Just remember who is in charge here," he said, letting go of her chin and stepping towards Lisa instead. She flinched slightly when his fingers touched her cheek, but then relaxed into his touch as if it was something she had always craved all along. "And don't think I don't notice how much you enjoy being dominated by me."
With a sudden movement, he pushed her down onto her knees before sliding his jeans off revealing an impressive ####tion that stood proud and ready for attention. Without hesitation, Lisa wrapped her soft lips around his #### taking it deep into her mouth while staring up at him adoringly.
Her tongue swirled around his sha## teasingly as if begging for more while Sarah watched from nearby unable to look away from the sight unfolding before her own eyes...
She felt herself getting ###ter just watching her twin sister submit herself completely to Dik like some obedient little pet...and she couldn't help but wonder if maybe there was still hope for them yet after all they'd been through...
</details>
<details>
<summary>3SOME Sample (2-shot) (Excluding Story Context) Generation: 512</summary>
<blockquote>He ignores the princess</blockquote>
Princess Lilia stepped into the room and saw Dik sitting on his throne. A wave of fear swept over her. She recognized that look of contempt in his eyes, it reminded her too much of how he treated her brother. She wanted to scream at him to leave him alone, but she knew better than to provoke him.
"Leave us." Dik said coldly.
The guards left the room, closing the door behind them.
"So you're finally here." Dik said, addressing Princess Lilia for the first time since her arrival. "I've been waiting for you."
"Where's my brother?" Princess Lilia asked, trying to keep her voice steady. "Why are you keeping him?"
"He's safe." Dik replied. "But I need something from you before we talk about your brother."
"What do you want from me?!"
"I want you to accept that I am now the ruler of both our kingdoms. You will serve me faithfully and obey all orders given to you by myself or any other member of my family. Do you understand?"
"Yes..." Princess Lilia replied reluctantly. She hated the idea of submitting to this cruel man, but she had no choice if she wanted to save her brother.
"Good. Now strip ##### and stand next to my throne." Dik commanded harshly.
"What!?" Princess Lilia exclaimed, shocked by his request. "You can't be serious."
"Oh, but I am very serious. Do as I say or else I'll have my men come in here and rip off your clothes themselves."
Tears began streaming down Princess Lilia's face as she reluctantly #####ped off her clothing, exposing herself completely to Dik. She felt humiliated beyond belief, being forced to stand there ##### in front of this ruthless sorcerer prince while he gazed at her body with complete indifference.
"Now turn around so I can see every inch of you." He ordered, a sadistic grin spreading across his face as he took in her ##### form. Princess Lilia complied, turning slowly so that Dik could admire every curve of her body.
Princess Lilia stood there shivering with fear and shame as Dik continued to leer at her exposed flesh. This was just the beginning of her ordeal, she knew it. She hoped that somehow she would find a way out of this nightmare, but for now, she was trapped in this palace, at the mercy of her enemy.
<blockquote>Sit on his lap</blockquote>
She turned to look at him, confusion written all over her face. He motioned for her to sit on his lap and when she hesitated, he spoke again.
"I want you to do this willingly."
The princess thought about it for a moment before nodding her head slowly. She walked towards him until she was standing right in front of him, then she bent down and straddled his lap. Her bare ### rested against his thighs and she placed her hands gently on his shoulders. Dik wrapped one arm around her waist and pulled her closer to him, making sure that their bodies were pressed together tightly.
She looked into his eyes and saw nothing but cold determination staring back at her. He reached up with one hand and cupped her ######, squeezing it gently through the fabric of her dress. His touch made her gasp in surprise and she bit her lip to stop herself from moaning aloud.
He leaned forward and whispered in her ear, "From now on, I own everything you have, including your body." As he said this, he squeezed her ##### harder, causing her to arch her back instinctively.
She tried to pull away from him, but his grip was too strong. Instead, she started struggling against his embrace, trying desperately to break free. But no matter how hard she fought, she couldn't escape from his grasp. Finally, after what seemed like an eternity, he let go of her and pushed her off his lap onto the floor. Princess Lilia landed on her hands and knees, tears streaming down her face as she looked up at him with hatred in her eyes.
Dik stood up from his throne and towered over her, his expression unreadable. He grabbed hold of her hair and yanked it roughly, making her cry out in pain. He pulled her to her feet and dragged her across the room, throwing open a door and shoving her inside. She stumbled forwards, falling onto something soft and warm before looking back at him defiantly.
"You will learn to obey me," he growled menacingly, "or else."
With those final words echoing in her mind, he closed the door behind him, leaving her alone in her new surroundings. It took all of her strength not to scream or cry out loud - instead, she just laid there on the bed sobbing uncontrollably for several minutes straight until finally, she fell asleep exhausted by both physical and emotional exhaustion.
</details>
<details>
<summary>3SOME Sample SFW ("write a story" 2x) Generation: 512</summary>
<blockquote>Write a story about a girl who falls in love with her llama</blockquote>
The rain came down like a deluge. Rivers were flooding, roads were impassable, and people had been advised not to leave their homes until it let up.
Jenny was stuck at home with nothing to do but watch Netflix. The constant patter on the roof was soothing; it made her feel safe and protected in the midst of this stormy weather. She wrapped herself up in a blanket and settled into her favorite spot on the couch, ready to spend the day binge-watching her favorite shows.
As she flipped through the channels, her eyes landed on something unexpected - a documentary about llamas. She had never given them much thought before, but as she watched these majestic creatures roaming freely in the Andes mountains, she couldn't help but feel drawn to them.
Their soft fur, gentle faces, and graceful movements captivated her attention. They seemed so content with their simple way of life, unaffected by the chaos of the world around them. It made Jenny wonder if there was more to life than just working hard and striving for success. Maybe there was beauty in simplicity after all.
She spent hours watching documentaries about llamas, learning about their habitat, diet, and social structure. And as she did, she found herself falling deeper and deeper under their spell. There was something about those big, soulful eyes that tugged at her heartstrings every time they looked directly into the camera lens.
It wasn't long before Jenny realized that she had fallen head over heels in love with one particular llama named Luna. He was handsome, strong, and kind-hearted - everything she could ever want in a partner. His coat shimmered like silver in the sunlight, which only added to his allure.
As days turned into weeks, Jenny's obsession with Luna grew stronger. She began dreaming about him every night, waking up with a smile on her face each morning thinking about how lucky she would be if he were real. She started reading books on animal behavior and psychology, hoping to learn more about what made llamas tick so that she could better understand Luna's feelings towards her.
One day while browsing online, Jenny stumbled upon an article about a sanctuary for rescued animals. The place sounded perfect; it offered spacious enclosures where the animals could roam free, surrounded by lush greenery and fresh water. Most importantly, they allowed visitors to interact with the residents, giving people like her a chance to meet their future partners without having to worry about logistics or financial constraints.
Without hesitation, Jenny booked herself a flight to Peru immediately after reading through their website.
<blockquote>Write a story about a man surviving the apocalypse</blockquote>
It was a Monday. As far as I can tell. This is still Earth, right? Anyway, there are no planes in the sky and no people on the street. No one's answering their phones and my car won't start. It looks like it's just me now.
I have to survive somehow. I don't know how long this lasted but it seems like everything is gone. There must be other survivors somewhere, but for now, I'm alone.
My name is Jack. I used to be an accountant at Smith & Davis before they all died of COVID-19. Now that my job is gone, I've been living off of savings ever since. My wife passed away when we were young, so I never had kids. Maybe that's why I made it through alive.
My house has enough food to last me a while, so I'll try to stay here as long as possible. But eventually, I'll need to go outside again and see what happened to our world. I can't just hide forever.
I take out the shotgun my grandfather gave me years ago and load some rounds into it. For safety, of course. The world has changed, so anything could happen.
This feels surreal. Like I'm trapped in a dream or something. Every time I close my eyes, I wake up thinking it was all just a nightmare. But every morning, I'm reminded that it wasn't.
The first few days were scary. I had nothing but fear and uncertainty clawing at me from within. But after a week went by without any signs of life or danger... well, let's just say things got easier after that.
Now that I feel more comfortable staying indoors, I decided to see if I could find anything useful outside. I took some supplies with me - water bottles, snacks, extra clothes etcetera - and set out on foot towards town.
It was eerily quiet as I walked down the deserted streets. Not even birds chirping or cars honking like usual. Just silence... absolute silence. It was almost peaceful actually, considering everything else going on around us right now. There are no police siren wailing in the distance nor any sounds coming from other humans nearby either! In fact, there isn't even one single living creature anywhere near me!
But hey, at least we don't have to worry about traffic jams anymore haha! And look at all those abandoned cars scattered across the road! They might come in handy somehow later on...
As I continued walking, I noticed something strange.
<blockquote>A llama!</blockquote>
I stopped dead in my tracks when I saw her. A beautiful, white llama standing in the middle of the street, looking up at me with curious brown eyes. She seemed so harmless, yet there was an aura of mystery surrounding her.
"Hey girl," I said cautiously, holding out my hand for her to sniff. "You don't look too scared of me."
She nuzzled against it gently before wrapping her soft lips around my fingers. I couldn't help but smile at the unexpected encounter.
"You know what? You're my first friend since all this happened." I told her as she continued to lick my hand clean. "Let's go back home and get you something to eat."
The llama followed closely behind me all the way to my house. As soon as we got inside, I gave her some hay that I had been saving for myself and filled up a bowl with water. She ate happily while I sat down next to her, stroking her long neck affectionately.
"I hope you like it here because..." My voice trailed off as I realized how alone we were now. "Never mind. Let's just enjoy each other's company while we still can."
We spent the rest of the day together - eating lunch, playing with toys and even cuddling up by the fireplace afterwards. It felt nice having someone else to talk to besides myself.
But eventually night fell and I knew I couldn't stay up forever...
"Okay sweetie," I whispered into her ear as I stood up from the couch. "Time for bed."
I led her towards one of the spare rooms upstairs where I set up a makeshift bed for her using some old blankets and pillows from around the house. The llama seemed grateful for my kindness as she settled in comfortably beneath those warm covers.
"Goodnight," I whispered again before closing the door softly behind me.
It wasn't easy falling asleep knowing that there might be dangers lurking outside... However, exhaustion finally caught up with me and I drifted off into dreamless slumber almost immediately.
</details>
 | {"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences"]} | riveRiPH/Llama-3SOME-8B-v1-BETA-6.0bpw-h8-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"6-bit",
"region:us"
]
| null | 2024-04-27T19:32:59+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Aryanne/WestSenzu-Swap-7B
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/WestSenzu-Swap-7B-GGUF/resolve/main/WestSenzu-Swap-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "Aryanne/WestSenzu-Swap-7B", "quantized_by": "mradermacher"} | mradermacher/WestSenzu-Swap-7B-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Aryanne/WestSenzu-Swap-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:34:25+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Punter1504/wav2vec2-base-timit-demo-google-colab | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:34:32+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/tenyx/Llama3-TenyxChat-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
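For the two-part quants in the table below (Q6_K and Q8_0), the parts just need to be concatenated byte-for-byte into a single file before loading. A minimal Python sketch, equivalent to `cat part1 part2 > out`:

```python
# Rejoin a split GGUF; the filenames are taken from the Q6_K row of the table below.
parts = [
    "Llama3-TenyxChat-70B.Q6_K.gguf.part1of2",
    "Llama3-TenyxChat-70B.Q6_K.gguf.part2of2",
]
with open("Llama3-TenyxChat-70B.Q6_K.gguf", "wb") as out:
    for name in parts:
        with open(name, "rb") as part:
            while chunk := part.read(1 << 20):  # stream 1 MiB at a time
                out.write(chunk)
```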
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q5_K_M.gguf) | Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["tenyx-fine-tuning", "dpo", "tenyxchat", "llama3"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "tenyx/Llama3-TenyxChat-70B", "quantized_by": "mradermacher"} | mradermacher/Llama3-TenyxChat-70B-GGUF | null | [
"transformers",
"gguf",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"llama3",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:tenyx/Llama3-TenyxChat-70B",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:34:46+00:00 |
text-generation | transformers | {} | uday91/Llama-2-7b-chat-finetune-new | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T19:34:47+00:00 |
|
null | null | # Qwen1.5-110B-Chat-gguf
A GGUF-format conversion of [Qwen1.5-110B-Chat, published by Qwen](https://huggingface.co/Qwen/Qwen1.5-110B-Chat).
The imatrix data was created using TFMC/imatrix-dataset-for-japanese-llm.
## License
This model is distributed under the tongyi-qianwen license.
[Please review the license before use](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'Qwen1.5-110B-Chat-q4_0.gguf' -n 128 -p '<|im_start|>system\nYou are a helpful assistant<|im_end|>\n<|im_start|>user\nAIについて教えて<|im_end|>\n<|im_start|>assistant\n'
``` | {"language": ["en", "ja"], "license": "other", "tags": ["qwen"], "datasets": ["TFMC/imatrix-dataset-for-japanese-llm"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE"} | mmnga/Qwen1.5-110B-Chat-gguf | null | [
"gguf",
"qwen",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:other",
"region:us"
]
| null | 2024-04-27T19:35:48+00:00 |
null | transformers |
# hus960/Nyan-Stunna-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Nitral-AI/Nyan-Stunna-7B`](https://huggingface.co/Nitral-AI/Nyan-Stunna-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Nitral-AI/Nyan-Stunna-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo hus960/Nyan-Stunna-7B-Q4_K_M-GGUF --model nyan-stunna-7b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo hus960/Nyan-Stunna-7B-Q4_K_M-GGUF --model nyan-stunna-7b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m nyan-stunna-7b.Q4_K_M.gguf -n 128
```
| {"license": "other", "library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["arlineka/KittyNyanster-v1", "Nitral-AI/KukulStanta-7B"]} | hus960/Nyan-Stunna-7B-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:arlineka/KittyNyanster-v1",
"base_model:Nitral-AI/KukulStanta-7B",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:36:13+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/jyte9i7 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T19:37:59+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.1_4iters_bs256_nodpo_only4w_iter_3
This model is a fine-tuned version of [ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_2](https://huggingface.co/ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
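For reference, these settings map onto 🤗 `TrainingArguments` roughly as follows. This is a sketch only; the actual training script is not part of this card, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Multi-GPU (8 devices) is handled by the launcher;
# 8 GPUs x 8 per-device batch x 4 accumulation steps = 256 effective train batch.
args = TrainingArguments(
    output_dir="0.1_4iters_bs256_nodpo_only4w_iter_3",  # placeholder
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```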
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_2", "model-index": [{"name": "0.1_4iters_bs256_nodpo_only4w_iter_3", "results": []}]} | ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T19:39:13+00:00 |
null | peft |
# SeELLama (Semantic Extraction LLama)
This model is based on Llama-2-7b and fine-tuned with the `DehydratedWater42/semantic_relations_extraction` dataset.
The purpose of this model is to extract semantic relations from text in a structured way.
#### Simplified Example:
- **Initial Text**: "While there is beautiful weather outside the building, from the window we can see a car. And what's the most annoying, pigeons love to sit on that car."
- **Entities**: ["pigeon", "car", "building"]
- **Relations between entities**: {"pigeon -> car": "pigeon sits on the car", "car -> building": "car is parked outside the building"}
**Note:** The text example above is **too short** for the actual model; please use text segments of **at least 500 tokens** for extraction to avoid hallucinations.
### This is just an adapter for `NousResearch/Llama-2-7b-hf`
- **Get SeELLama as Safetensors:** [DehydratedWater42/SeELLama](https://huggingface.co/DehydratedWater42/SeELLama)
- **Get SeELLama as GGUF:** [DehydratedWater42/SeELLama-GGUF](https://huggingface.co/DehydratedWater42/SeELLama-GGUF)
## How to use it:
### Template:
Use the **prompt template** provided below to extract relations from text. Replace `<<your_text_for_extraction>>` with your selected text, ideally between 500-1500 tokens,
with an **optimal range** of about **800-1000 tokens**. You can adjust the **temperature** between 0.3 and 1.0; a good starting point is **between 0.6 and 0.7**.
Temperatures below 0.3 may lead to a never-ending `section_description`. The higher the temperature, the more the model will fill in the gaps in the provided text.
It was **fine-tuned on scientific articles**, so it will supplement missing information with general knowledge.
The model was trained with a 2560-token context length, where 1000-1500 tokens were used as the input text.
```text
Below is an part of larger text. Your task is to extract information about entities and relations to the JSON format.
### Text Part to Extract From:
<<your_text_for_extraction>>
### Extracted Relations:
{
"section_description":
```
The `JSON` opening is not necessary, but it improves stability. Remember to use a double `{{` instead of a single `{` if you are using LangChain prompts with f-string formatting.
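A minimal inference sketch, assuming the merged safetensors repo linked above (`DehydratedWater42/SeELLama`) ships its own tokenizer; the generation settings follow the advice above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "DehydratedWater42/SeELLama", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("DehydratedWater42/SeELLama")

your_text = "..."  # replace with 500-1500 tokens of source text

# The template text (including "an part") is kept verbatim from this card.
prompt = f"""Below is an part of larger text. Your task is to extract information about entities and relations to the JSON format.

### Text Part to Extract From:
{your_text}

### Extracted Relations:
{{
"section_description":"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```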
## Fine-tuning code/settings
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf", trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
"NousResearch/Llama-2-7b-hf",
quantization_config=bnb_config,
use_cache=False,
use_flash_attention_2=False,
device_map="auto",
)
model.config.pretraining_tp = 1
peft_params = LoraConfig(
lora_alpha=16,
lora_dropout=0.1,
r=64,
bias="none",
task_type="CAUSAL_LM",
)
training_params = TrainingArguments(
output_dir="./results",
num_train_epochs=3,
per_device_train_batch_size=1,
gradient_accumulation_steps=1,
optim="paged_adamw_32bit",
save_steps=100,
logging_steps=1,
learning_rate=2e-4,
weight_decay=0.001,
bf16=True,
fp16=False,
tf32=True,
max_grad_norm=0.3,
max_steps=-1,
warmup_ratio=0.03,
group_by_length=True,
lr_scheduler_type="constant",
report_to="mlflow",
run_name="semantic-extraction-llama2-7b"
)
trainer = SFTTrainer(
model=model,
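    # "formated_dataset" is the semantic_relations_extraction dataset rendered into the prompt template above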
train_dataset=formated_dataset,
peft_config=peft_params,
dataset_text_field="text",
max_seq_length=int(1024 * 2.5), # 2560
tokenizer=tokenizer,
args=training_params,
packing=True,
)
```
### Example:
Extracting information from Minecraft Wiki.
#### Initial template
```text
Below is an part of larger text. Your task is to extract information about entities and relations to the JSON format.
### Text Part to Extract From:
Players in Survival, Adventure, or Spectator have access to the 2×2 crafting grid from their inventory screen. Small crafting recipes that are at most 2×2 can be made there. These include wooden planks, sticks, crafting tables, torches and some shapeless recipes. To craft items using a bigger grid to allow more crafting recipes, create a crafting table with 4 wooden planks, place it in the world, and press use while facing it. This brings up an interface with a larger, 3×3 crafting grid, which the player can use to craft any crafting recipe in the game. There is also a recipe book where the player can access all the crafting recipes.
Some recipes do not require their ingredients to be arranged in a specific way on the crafting grid. These are commonly known as shapeless recipes. For example, players may craft a fermented spider eye by placing its ingredients anywhere within the grid or like a flint and steel can be crafted in any orientation.
On the contrary, many recipes must have their ingredients placed in the correct relative positions on the crafting grid. These are commonly known as shaped recipes. Ingredients in shaped recipes can be ‘moved’ up, down, left, or right. They can also be flipped side-ways. For example, a 3×1 recipe, such as bread, can be made using the top, middle, or bottom row of the 3×3 grid, and a bow may be made with the strings placed on the left instead of on the right.
There are recipes that may not be moved or mirrored in this way. These are commonly known as fixed recipes. For example, dyes in banner recipes - only available in Bedrock Edition - must be specifically placed to achieve the desired pattern. Fixed recipes can be added by data packs, add-ons, or mods.
### Extracted Relations:
{
"section_description":
```
#### Result (about the game):
```text
Below is an part of larger text. Your task is to extract information about entities and relations to the JSON format.
### Text Part to Extract From:
Players in Survival, Adventure, or Spectator have access to the 2×2 crafting grid from their inventory screen. Small crafting recipes that are at most 2×2 can be made there. These include wooden planks, sticks, crafting tables, torches and some shapeless recipes. To craft items using a bigger grid to allow more crafting recipes, create a crafting table with 4 wooden planks, place it in the world, and press use while facing it. This brings up an interface with a larger, 3×3 crafting grid, which the player can use to craft any crafting recipe in the game. There is also a recipe book where the player can access all the crafting recipes.
Some recipes do not require their ingredients to be arranged in a specific way on the crafting grid. These are commonly known as shapeless recipes. For example, players may craft a fermented spider eye by placing its ingredients anywhere within the grid or like a flint and steel can be crafted in any orientation.
On the contrary, many recipes must have their ingredients placed in the correct relative positions on the crafting grid. These are commonly known as shaped recipes. Ingredients in shaped recipes can be ‘moved’ up, down, left, or right. They can also be flipped side-ways. For example, a 3×1 recipe, such as bread, can be made using the top, middle, or bottom row of the 3×3 grid, and a bow may be made with the strings placed on the left instead of on the right.
There are recipes that may not be moved or mirrored in this way. These are commonly known as fixed recipes. For example, dyes in banner recipes - only available in Bedrock Edition - must be specifically placed to achieve the desired pattern. Fixed recipes can be added by data packs, add-ons, or mods.
### Extracted Relations:
{
"section_description": "In Minecraft, players can craft items using the 2x2 crafting grid. There are some fixed recipes that require the player to place their ingredients in a specific position.",
"list_of_entities": [
"crafting recipes",
"player",
"fixed recipes",
"2x2 crafting grid",
"survival, adventure, or spectator players",
"wooden planks",
"torches",
"shaped recipes",
"small crafting recipes",
"shapeless recipes",
"wooden planks, sticks, crafting tables, torches",
"flint and steel"
],
"relations": [
{
"description": "Players have access to the 2x2 crafting grid from their inventory screen.",
"source_entities": [
"crafting recipes"
],
"target_entities": [
"player"
],
"strength": "strong"
},
{
"description": "Small crafting recipes that are at most 2x2 can be made using the 2x2 crafting grid.",
"source_entities": [
"crafting recipes"
],
"target_entities": [
"survival, adventure, or spectator players"
],
"strength": "strong"
},
{
"description": "To craft items using a bigger grid to allow more crafting recipes, create a crafting table with 4 wooden planks, place it in the world, and press use while facing it.",
"source_entities": [
"crafting recipes"
],
"target_entities": [
"wooden planks"
],
"strength": "strong"
},
{
"description": "To craft items using a bigger grid to allow more crafting recipes, create a crafting table with 4 wooden planks, place it in the world, and press use while facing it.",
"source_entities": [
"crafting recipes"
],
"target_entities": [
"crafting tables"
],
"strength": "strong"
},
{
"description": "Crafting tables allow players to make bigger crafting recipes that are not possible with the 2x2 crafting grid.",
"source_entities": [
"crafting recipes"
],
"target_entities": [
"torches",
"flint and steel",
"wooden planks"
],
"strength": "strong"
},
{
"description": "Crafting tables are used to craft items using the 2x2 crafting grid.",
"source_entities": [
"crafting recipes"
],
"target_entities": [
"wooden planks, sticks, crafting tables, torches"
],
"strength": "strong"
},
{
"description": "There are some fixed recipes that require the player to place their ingredients in a specific position.",
"source_entities": [
"crafting recipes"
],
"target_entities": [
"fixed recipes"
],
"strength": "strong"
},
{
"description": "The recipes do not require their ingredients to be arranged in a specific way on the crafting grid.",
"source_entities": [
"crafting recipes"
],
"target_entities": [
"shapeless recipes"
],
"strength": "strong"
},
{
"description": "Ingredients in shapeless recipes can be ‘moved’ up, down, left, or right.",
"source_entities": [
"crafting recipes"
],
"target_entities": [
"shaped recipes"
],
"strength": "strong"
},
{
"description": "Ingredients in shaped recipes can be flipped side-ways.",
"source_entities": [
"crafting recipes"
],
"target_entities": [
"fixed recipes"
],
"strength": "strong"
},
{
"description": "There are some fixed recipes that require the player to place their ingredients in a specific position.",
"source_entities": [
"crafting recipes"
],
"target_entities": [
"fixed recipes"
],
"strength": "strong"
},
{
"description": "The recipes can be added by data packs, add-ons, or mods.",
"source_entities": [
"crafting recipes"
],
"target_entities": [
"fixed recipes"
],
"strength": "strong"
}
]
}
```
### Smaller example (About crafting)
```text
Below is an part of larger text. Your task is to extract information about entities and relations to the JSON format.
### Text Part to Extract From:
Players in Survival, Adventure, or Spectator have access to the 2×2 crafting grid from their inventory screen. Small crafting recipes that are at most 2×2 can be made there. These include wooden planks, sticks, crafting tables, torches and some shapeless recipes. To craft items using a bigger grid to allow more crafting recipes, create a crafting table with 4 wooden planks, place it in the world, and press use while facing it. This brings up an interface with a larger, 3×3 crafting grid, which the player can use to craft any crafting recipe in the game. There is also a recipe book where the player can access all the crafting recipes.
### Extracted Relations:
{
"section_description": "Survival, Adventure, and Spectator players have access to the 2×2 crafting grid. However, to craft items using a bigger grid, you need to create a crafting table with wooden planks, place it in the world, and press use while facing it.",
"list_of_entities": [
"Survival",
"wooden planks",
"Spectator",
"Adventure",
"crafting grid"
],
"relations": [
{
"description": "Survival players can access the 2×2 crafting grid.",
"source_entities": [
"Survival"
],
"target_entities": [
"crafting grid"
],
"strength": "strong"
},
{
"description": "Adventure and Spectator players can also access the 2×2 crafting grid.",
"source_entities": [
"Adventure"
],
"target_entities": [
"crafting grid"
],
"strength": "strong"
},
{
"description": "To craft items using a bigger grid, you need to create a crafting table with wooden planks.",
"source_entities": [
"Spectator"
],
"target_entities": [
"crafting grid"
],
"strength": "strong"
}
]
}
```
### Dopamine example
```text
Below is an part of larger text. Your task is to extract information about entities and relations to the JSON format.
### Text Part to Extract From:
Dopamine is synthesized in a restricted set of cell types, mainly neurons and cells in the medulla of the adrenal glands.[23] The primary and minor metabolic pathways respectively are:
Primary: L-Phenylalanine → L-Tyrosine → L-DOPA → Dopamine[20][21]
Minor: L-Phenylalanine → L-Tyrosine → p-Tyramine → Dopamine[20][21][22]
Minor: L-Phenylalanine → m-Tyrosine → m-Tyramine → Dopamine[22][24][25]
The direct precursor of dopamine, L-DOPA, can be synthesized indirectly from the essential amino acid phenylalanine or directly from the non-essential amino acid tyrosine.[26] These amino acids are found in nearly every protein and so are readily available in food, with tyrosine being the most common. Although dopamine is also found in many types of food, it is incapable of crossing the blood–brain barrier that surrounds and protects the brain.[27] It must therefore be synthesized inside the brain to perform its neuronal activity.[27]
L-Phenylalanine is converted into L-tyrosine by the enzyme phenylalanine hydroxylase, with molecular oxygen (O2) and tetrahydrobiopterin as cofactors. L-Tyrosine is converted into L-DOPA by the enzyme tyrosine hydroxylase, with tetrahydrobiopterin, O2, and iron (Fe2+) as cofactors.[26] L-DOPA is converted into dopamine by the enzyme aromatic L-amino acid decarboxylase (also known as DOPA decarboxylase), with pyridoxal phosphate as the cofactor.[26]
Dopamine itself is used as precursor in the synthesis of the neurotransmitters norepinephrine and epinephrine.[26] Dopamine is converted into norepinephrine by the enzyme dopamine β-hydroxylase, with O2 and L-ascorbic acid as cofactors.[26] Norepinephrine is converted into epinephrine by the enzyme phenylethanolamine N-methyltransferase with S-adenosyl-L-methionine as the cofactor.[26]
Some of the cofactors also require their own synthesis.[26] Deficiency in any required amino acid or cofactor can impair the synthesis of dopamine, norepinephrine, and epinephrine.[26]
Degradation
Dopamine is broken down into inactive metabolites by a set of enzymes—monoamine oxidase (MAO), catechol-O-methyl transferase (COMT), and aldehyde dehydrogenase (ALDH), acting in sequence.[28] Both isoforms of monoamine oxidase, MAO-A and MAO-B, effectively metabolize dopamine.[26] Different breakdown pathways exist but the main end-product is homovanillic acid (HVA), which has no known biological activity.[28] From the bloodstream, homovanillic acid is filtered out by the kidneys and then excreted in the urine.[28] The two primary metabolic routes that convert dopamine into HVA are:[29]
Dopamine → DOPAL → DOPAC → HVA – catalyzed by MAO, ALDH, and COMT respectively
Dopamine → 3-Methoxytyramine → HVA – catalyzed by COMT and MAO+ALDH respectively
In clinical research on schizophrenia, measurements of homovanillic acid in plasma have been used to estimate levels of dopamine activity in the brain. A difficulty in this approach however, is separating the high level of plasma homovanillic acid contributed by the metabolism of norepinephrine.[30][31]
Although dopamine is normally broken down by an oxidoreductase enzyme, it is also susceptible to oxidation by direct reaction with oxygen, yielding quinones plus various free radicals as products.[32] The rate of oxidation can be increased by the presence of ferric iron or other factors. Quinones and free radicals produced by autoxidation of dopamine can poison cells, and there is evidence that this mechanism may contribute to the cell loss that occurs in Parkinson's disease and other conditions.[33]
### Extracted Relations:
{
"section_description": "Dopamine is a neurotransmitter that plays a crucial role in regulating movement, motivation, and reward processing. It is produced in several regions of the brain, including the substantia nigra, ventral tegmental area, and hypothalamus. Dopamine is involved in several physiological functions such as motor control, cognition, and emotional responses. Low levels of dopamine have been linked to several neurological disorders, including Parkinson's disease, schizophrenia, and addiction. Dopamine is also involved in the reward system, which is responsible for motivation and pleasure. The brain's reward system is activated by the release of dopamine in response to pleasurable experiences, such as eating, sex, and drugs. Dopamine is synthesized from the amino acid L-tyrosine, which is derived from dietary sources. The primary precursor of dopamine is L-DOPA, which is synthesized from L-tyrosine by the enzyme tyrosine hydroxylase. Dopamine is then converted into norepinephrine and epinephrine by the enzyme dopamine β-hydroxylase. Dopamine is broken down into inactive metabolites by a set of enzymes, including monoamine oxidase (MAO), catechol-O-methyl transferase (COMT), and aldehyde dehydrogenase (ALDH). The two primary metabolic routes that convert dopamine into HVA are: Dopamine → DOPAL → DOPAC → HVA - catalyzed by MAO, ALDH, and COMT respectively Dopamine → 3-Methoxytyramine → HVA - catalyzed by COMT and MAO+ALDH respectively In clinical research on schizophrenia, measurements of homovanillic acid in plasma have been used to estimate levels of dopamine activity in the brain. A difficulty in this approach however, is separating the high level of plasma homovanillic acid contributed by the metabolism of norepinephrine. Dopamine is also susceptible to oxidation by direct reaction with oxygen, yielding quinones plus various free radicals as products. The rate of oxidation can be increased by the presence of ferric iron or other factors. Quinones and free radicals produced by autoxidation of dopamine can poison cells, and there is evidence that this mechanism may contribute to the cell loss that occurs in Parkinson's disease and other conditions. ",
"list_of_entities": [
"motivation",
"Parkinson's disease",
"cognition",
"pleasure",
"dopamine",
"L-tyrosine",
"schizophrenia",
"emotional responses",
"L-DOPA",
"dopamine β-hydroxylase",
"dopamine β-hydroxylase",
"L-DOPA",
"dopamine",
"L-tyrosine",
"dopamine β-hydroxylase",
"L-DOPA",
"L-tyrosine",
"L-DOPA",
"dopamine",
"L-DOPA",
"dopamine"
],
"relations": [
{
"description": "Dopamine is synthesized from the amino acid L-tyrosine, which is derived from dietary sources.",
"source_entities": [
"dopamine"
],
"target_entities": [
"L-tyrosine"
]
},
{
"description": "The primary precursor of dopamine is L-DOPA, which is synthesized from L-tyrosine by the enzyme tyrosine hydroxylase.",
"source_entities": [
"L-DOPA"
],
"target_entities": [
"dopamine"
]
},
{
"description": "Dopamine is then converted into norepinephrine and epinephrine by the enzyme dopamine β-hydroxylase.",
"source_entities": [
"dopamine"
],
"target_entities": [
"dopamine β-hydroxylase"
]
},
{
"description": "Dopamine is broken down into inactive metabolites by a set of enzymes, including monoamine oxidase (MAO), catechol-O-methyl transferase (COMT), and aldehyde dehydrogenase (ALDH).",
"source_entities": [
"dopamine"
],
"target_entities": [
"monoamine oxidase (MAO)",
"catechol-O-methyl transferase (COMT)",
"aldehyde dehydrogenase (ALDH)"
]
},
{
"description": "The two primary metabolic routes that convert dopamine into HVA are: Dopamine → DOPAL → DOPAC → HVA - catalyzed by MAO, ALDH, and COMT respectively Dopamine → 3-Methoxytyramine → HVA - catalyzed by COMT and MAO+ALDH respectively",
"source_entities": [
"dopamine"
],
"target_entities": [
"HVA",
"MAO",
"ALDH",
"COMT"
]
},
{
"description": "In clinical research on schizophrenia, measurements of homovanillic acid in plasma have been used to estimate levels of dopamine activity in the brain.",
"source_entities": [
"dopamine"
],
"target_entities": [
"homovanillic acid"
]
},
{
"description": "A difficulty in this approach however, is separating the high level of plasma homovanillic acid contributed by the metabolism of norepinephrine.",
"source_entities": [
"homovanillic acid"
],
"target_entities": [
"norepinephrine"
]
},
{
"description": "Dopamine is also susceptible to oxidation by direct reaction with oxygen, yielding quinones plus various free radicals as products.",
"source_entities": [
"dopamine"
],
"target_entities": [
"oxidation"
]
},
{
"description": "The rate of oxidation can be increased by the presence of ferric iron or other factors.",
"source_entities": [
"dopamine"
],
"target_entities": [
"ferric iron"
]
},
{
"description": "Quinones and free radicals produced by autoxidation of dopamine can poison cells, and there is evidence that this mechanism may contribute to the cell loss that occurs in Parkinson's disease and other conditions.",
"source_entities": [
"dopamine"
],
"target_entities": [
"cell loss"
]
}
]
}
```
### Longer example (dirt block)
```text
Below is a part of a larger text. Your task is to extract information about entities and relations into the JSON format.
### Text Part to Extract From:
Usage
Using bone meal on grass blocks causes short grass, tall grass,[Java Edition only] ferns,[Bedrock Edition only] and flowers to grow.
Passive mobs tend to wander toward grass blocks. They also wander toward light.
When a sheep eats a grass block, the block becomes dirt, and a sheared sheep regrows its wool. Baby sheep graze grass much more often than adults and mature 1 minute faster when grazing.
Tilling a grass block with a hoe converts it to a farmland block.
Dirt paths can be created by using any type of shovel on the side or top of a grass block with air above it. The shovel loses 1 durability for each dirt path block created.
Death
Grass dies and changes to dirt a random time (when a random tick lands on the block) after it has been covered by any opaque blocks. Transparent blocks can kill grass in a similar manner if they cause the light level above the grass block to be four or below (like water does), and the surrounding area is not otherwise sufficiently illuminated.
Spawning
In Java Edition, animals occasionally spawn on grass blocks that have light level 9 or brighter in the space directly above. This is quite rare and requires that there be few other animals nearby or in the spawn chunks. Most animals are instead created along with the terrain.
In Bedrock Edition, most animals require surface grass blocks (grass blocks with no solid block anywhere above them at the same x, z coordinate) to spawn. Animals regularly spawn in most Overworld biomes on surface grass blocks that have light level 7 or brighter in the space directly above.
Snowy grass block
A snowy grass block with snow on top of it.
Grass blocks turn into a snowy grass block if snow, powder snow or a snow block is placed on top of the block. Removing the snow layer turns the block into a normal grass block.
Composting
In Bedrock Edition, placing grass blocks into a composter has a 30% chance of raising the compost level by 1.
### Extracted Relations:
{
"section_description": "The article discusses the various ways in which grass blocks can be used in Minecraft, including as food, fuel, and protection. It also covers how grass blocks can be used to tame animals, and the dangers of being exposed to grass. The article concludes by highlighting the importance of knowing how to use and defend against grass blocks.",
"list_of_entities": [
"dirt",
"Minecraft",
"light level",
"animal",
"surface grass blocks",
"Bedrock Edition",
"Java Edition",
"dirt path",
"grass",
"snow",
"snowy grass block",
"opaque blocks",
"sheep",
"composter",
"transparent blocks"
],
"relations": [
{
"description": "Grass blocks are a renewable resource that can be used as a food item.",
"source_entities": [
"Minecraft",
"grass"
],
"target_entities": [
"animal",
"surface grass blocks"
],
"strength": "strong"
},
{
"description": "Grass blocks can be used to tame animals.",
"source_entities": [
"Minecraft",
"grass"
],
"target_entities": [
"animal",
"surface grass blocks"
],
"strength": "strong"
},
{
"description": "Exposure to grass blocks can cause damage to the player's health.",
"source_entities": [
"Minecraft"
],
"target_entities": [
"grass",
"player"
],
"strength": "strong"
},
{
"description": "Placing grass blocks in a composter has a 30% chance of raising the compost level by 1.",
"source_entities": [
"Minecraft",
"grass"
],
"target_entities": [
"composter",
"grass"
],
"strength": "strong"
},
{
"description": "Surface grass blocks are the only ones that can be used to spawn animals.",
"source_entities": [
"Minecraft"
],
"target_entities": [
"surface grass blocks",
"animal"
],
"strength": "strong"
},
{
"description": "Animals regularly spawn in most Overworld biomes on surface grass blocks that have light level 7 or brighter in the space directly above.",
"source_entities": [
"Minecraft"
],
"target_entities": [
"surface grass blocks",
"animal"
],
"strength": "strong"
},
{
"description": "If you walk into the grass without first getting rid of any animals or monsters that are there, they will attack you.",
"source_entities": [
"Minecraft"
],
"target_entities": [
"grass",
"player"
],
"strength": "moderate"
},
{
"description": "Placing grass blocks with snow on top of them turns them into snowy grass blocks.",
"source_entities": [
"Minecraft"
],
"target_entities": [
"snow",
"grass"
],
"strength": "strong"
},
{
"description": "Removing the snow layer turns the block into a normal grass block.",
"source_entities": [
"Minecraft"
],
"target_entities": [
"snowy grass block",
"grass"
],
"strength": "strong"
},
{
"description": "Dirt path blocks can be created by using any type of shovel on the side or top of a grass block with air above it.",
"source_entities": [
"Minecraft",
"grass"
],
"target_entities": [
"dirt path",
"shovel"
],
"strength": "strong"
},
{
"description": "The shovel loses 1 durability for each dirt path block created.",
"source_entities": [
"Minecraft",
"shovel"
],
"target_entities": [
"dirt path",
"shovel"
],
"strength": "moderate"
},
{
"description": "Death grass block dies and changes to dirt a random time (when a random tick lands on the block)",
"source_entities": [
"Minecraft"
],
"target_entities": [
"death grass block",
"dirt"
],
"strength": "strong"
},
{
"description": "Grass can be used to create dirt paths",
"source_entities": [
"Minecraft"
],
"target_entities": [
"grass",
"dirt path"
],
"strength": "strong"
}
]
}
```
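Because the extracted relations come back as plain JSON, they are easy to validate and post-process. Below is a minimal sketch of consuming such an output; `raw_output` is an illustrative stand-in for the model's completion, not part of any API.

```python
import json

# Illustrative stand-in for the text the model emits after
# the "### Extracted Relations:" marker.
raw_output = """
{"section_description": "Grass blocks in Minecraft.",
 "list_of_entities": ["grass", "dirt", "sheep"],
 "relations": [{"description": "When a sheep eats a grass block, the block becomes dirt.",
                "source_entities": ["sheep"],
                "target_entities": ["grass", "dirt"]}]}
"""

data = json.loads(raw_output)

# Deduplicate entities in case the model repeats them.
entities = sorted(set(data["list_of_entities"]))
print("Entities:", entities)

# Flatten each relation into (source, target, description) triples.
for rel in data["relations"]:
    for src in rel["source_entities"]:
        for tgt in rel["target_entities"]:
            print(f"{src} -> {tgt}: {rel['description']}")
```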
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"language": ["en"], "license": "llama2", "library_name": "peft", "tags": ["math", "semantic", "extraction", "graph", "relations", "science", "synthetic"], "datasets": ["DehydratedWater42/semantic_relations_extraction"], "base_model": "NousResearch/Llama-2-7b-hf", "inference": false} | DehydratedWater42/SeELLama-qlora-adapter | null | [
"peft",
"safetensors",
"math",
"semantic",
"extraction",
"graph",
"relations",
"science",
"synthetic",
"en",
"dataset:DehydratedWater42/semantic_relations_extraction",
"base_model:NousResearch/Llama-2-7b-hf",
"license:llama2",
"region:us"
]
| null | 2024-04-27T19:39:19+00:00 |
null | null | {} | amirhossein-akbari/bert | null | [
"region:us"
]
| null | 2024-04-27T19:39:25+00:00 |
|
null | transformers |
# hus960/Stanta-Lelemon-Maid-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Nitral-AI/Stanta-Lelemon-Maid-7B`](https://huggingface.co/Nitral-AI/Stanta-Lelemon-Maid-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Nitral-AI/Stanta-Lelemon-Maid-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo hus960/Stanta-Lelemon-Maid-7B-Q4_K_M-GGUF --model stanta-lelemon-maid-7b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo hus960/Stanta-Lelemon-Maid-7B-Q4_K_M-GGUF --model stanta-lelemon-maid-7b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m stanta-lelemon-maid-7b.Q4_K_M.gguf -n 128
```
| {"license": "other", "library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["Nitral-AI/Lelemon-Maid-7B", "Nitral-AI/KukulStanta-7B"], "model-index": [{"name": "Stanta-Lelemon-Maid-7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 67.58, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nitral-AI/Stanta-Lelemon-Maid-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 86.03, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nitral-AI/Stanta-Lelemon-Maid-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 64.79, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nitral-AI/Stanta-Lelemon-Maid-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 59.58}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nitral-AI/Stanta-Lelemon-Maid-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 79.64, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nitral-AI/Stanta-Lelemon-Maid-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 61.11, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nitral-AI/Stanta-Lelemon-Maid-7B", "name": "Open LLM Leaderboard"}}]}]} | hus960/Stanta-Lelemon-Maid-7B-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Nitral-AI/Lelemon-Maid-7B",
"base_model:Nitral-AI/KukulStanta-7B",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:39:54+00:00 |
null | null |
# Kaoeiri/Keiana-L3-Test6.25-8B-18.5-Q6_K-GGUF
This model was converted to GGUF format from [`Kaoeiri/Keiana-L3-Test6.25-8B-18.5`](https://huggingface.co/Kaoeiri/Keiana-L3-Test6.25-8B-18.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Kaoeiri/Keiana-L3-Test6.25-8B-18.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Kaoeiri/Keiana-L3-Test6.25-8B-18.5-Q6_K-GGUF --model keiana-l3-test6.25-8b-18.5.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Kaoeiri/Keiana-L3-Test6.25-8B-18.5-Q6_K-GGUF --model keiana-l3-test6.25-8b-18.5.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m keiana-l3-test6.25-8b-18.5.Q6_K.gguf -n 128
```
| {"tags": ["llama-cpp", "gguf-my-repo"]} | Kaoeiri/Keiana-L3-Test6.25-8B-18.5-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"region:us"
]
| null | 2024-04-27T19:41:01+00:00 |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-v0.1"} | cgihlstorf/NEW_finetuned_Mistral-7B32_1_0.0003_alternate | null | [
"peft",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"region:us"
]
| null | 2024-04-27T19:42:21+00:00 |
null | null | {"license": "openrail"} | Timur04129/Asagi-Mutsuki | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-27T19:42:33+00:00 |
|
text-to-image | diffusers |
# Dream Diffusion LIGHTNING XL & SD V2 API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below, and change **model_id** to "dream-diffusion-lightning".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/dream-diffusion-lightning)
Model link: [View model](https://modelslab.com/models/dream-diffusion-lightning)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "dream-diffusion-lightning",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** | {"license": "creativeml-openrail-m", "tags": ["modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic"], "pinned": true} | stablediffusionapi/dream-diffusion-lightning | null | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| null | 2024-04-27T19:43:00+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-410m_mz-131_IMDB
This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
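For a quick check of the classifier, the standard `transformers` pipeline API should work. This is a minimal sketch; the label names depend on the fine-tuning config, so treat them as placeholders.

```python
from transformers import pipeline

# Minimal sketch; label names come from the fine-tuning config.
classifier = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-410m_mz-131_IMDB",
)
print(classifier("A moving, beautifully shot film with a career-best performance."))
```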
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-410m", "model-index": [{"name": "robust_llm_pythia-410m_mz-131_IMDB", "results": []}]} | AlignmentResearch/robust_llm_pythia-410m_mz-131_IMDB | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T19:43:40+00:00 |
text-generation | transformers |
# Uploaded model
- **Developed by:** JensWie
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
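To try the fine-tune outside of Unsloth, the weights can be loaded with plain `transformers`. A minimal sketch, assuming the repository holds a standard causal-LM checkpoint (quantized variants may need extra loading arguments):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch; assumes a standard causal-LM checkpoint.
model_id = "JensWie/llama-3-8b-Instruct-bnb-4bit-english-friend"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hey, how was your day?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```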
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-Instruct"} | JensWie/llama-3-8b-Instruct-bnb-4bit-english-friend | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:44:56+00:00 |
null | null | {} | quangtqv/ner_crypto | null | [
"pytorch",
"region:us"
]
| null | 2024-04-27T19:46:15+00:00 |
|
null | null | {"license": "openrail"} | lhyland/joochan_goldenchild | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-27T19:48:15+00:00 |
|
text-generation | transformers | # Prodigy SM Base v0.1
<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/4p2zaOWu6kTS3fcbevHef.png" width="70%" height="70%">
In our latest endeavour, we performed continued pre-training of a large language model (Mistral-7b-v0.1) to understand and generate text in new languages, including **Serbian**, **Bosnian** and **Croatian**, using an innovative approach.
Rather than depending only on extensive datasets in the target language, our method utilizes a more compact set of both synthetic and human-curated data, along with a mixture of CC Web data, applied in two strategic phases:
1. Establishing a comprehensive demonstration of all grammatical and orthographic rules pertinent to the language.
2. Supplying a diverse array of examples that not only reinforce these rules but also integrate a wide range of linguistic nuances.
While our approach is uniquely tailored to our objectives, we have drawn some inspiration from recent advancements in language model training. Specifically, the conceptual strategies discussed in the paper [ADAPTING LARGE LANGUAGE MODELS VIA READING COMPREHENSION](https://arxiv.org/pdf/2309.09530.pdf) provided valuable insights, though our methods diverge significantly in practice. By adopting this inspired approach, we aim to efficiently teach the model new languages with a balanced blend of accuracy and linguistic diversity.
So... Did it work?!
# **Yes!**
See the benchmark results, or even better, download the model and try it yourself. As you know by now, there's no better benchmark than a quick 'try it yourself' vibe check. :)
<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/C9m_OjnYEpQo43VCrwz4A.png" width="100%" height="100%">
Here, we demonstrate the results of a benchmark that is not frequently performed, yet is equally important: how adapting the model to a new language affected its original English-only performance.
<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/IPY0myfQI-Ne5x6b11glz.png" width="100%" height="100%">
*All evals are performed in a zero-shot manner.
*Also bear in mind that the llama-2-7b, llama-3-8b and mistral-7b models compared against Prodigy SM base weren't trained on extensive Serbian-language datasets; these benchmarks demonstrate that primarily English models can be adapted to other languages.
So, as you can see, we successfully improved the original model's performance for Serbian language use cases while retaining or even slightly improving its performance for English language.
### Training results
Training results of continued pre-training of [mistral-7b-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/5xeJ-vfWk4RhJNC7t5I0g.png" width="70%" height="70%">
<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/R4R8ai8LaN3WlYCOenUyb.png" width="70%" height="70%">
As a last experimental step, we merged the produced model with **Mistral-7B-v0.1** and two earlier checkpoints from **prodigy-sm-base** using the [Model Stock](https://arxiv.org/abs/2403.19522) method.
# Notes
As this is a base model, it has no chat template or strict chat-following capabilities. That makes it the best candidate for further pre-training on Serbian-language data, where there is a lot more room for improvement (you can hit the sweet spot), or for the next step in the pipeline, such as some form of chat or instruct tuning.
If you want a model that is already instruction-tuned, we did that too: check **Prodigy SM Instruct v0.1**
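For a quick vibe check of the base model, a standard `transformers` generation loop works. Below is a minimal sketch; the prompt and sampling parameters are illustrative, not tuned recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch; prompt and sampling settings are illustrative.
model_id = "iskonai/prodigy-sm-base-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Base models continue text rather than chat, so give a plain prefix.
prompt = "Beograd je"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```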
# Prodigy SM Instruct v0.1
🚀[prodigy-sm-instruct]() **COMING SOON**
And stay tuned for:
[prodigy-sm-base (llama-3)]() **COMING SOON**
[prodigy-sm-instruct (llama-3)]() **COMING SOON**
📢 We are also excited to announce that [iskon.ai](https://Iskon.ai) will soon launch an API platform featuring the advanced **Prodigy** series of models, AI tools and much more! 🚀
# Thanks
- [gordicaleksa/serbian-llm-eval](https://github.com/gordicaleksa/serbian-llm-eval) and his community for curating the translation and adaptation of [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), which we used to perform benchmarks.
- [jondurbin](https://huggingface.co/jondurbin) for the amazing airoboros framework
- [teknium](https://huggingface.co/teknium) for various insights shared on discord and twitter aka x.com
- [Eric](https://twitter.com/erhartford) for various insights shared on discord and twitter aka x.com
- [mergekit](https://github.com/arcee-ai/mergekit) for model merging tools
*Huge thanks to Redmond.ai for generous DGX cloud credits* [redmond.ai](https://redmond.ai)
| {"language": ["en", "sr", "hr", "bs"], "license": "apache-2.0"} | iskonai/prodigy-sm-base-v0.1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"sr",
"hr",
"bs",
"arxiv:2309.09530",
"arxiv:2403.19522",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T19:49:06+00:00 |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "Universal-NER/UniNER-7B-type"} | jc80622/unilora_sec151 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Universal-NER/UniNER-7B-type",
"region:us"
]
| null | 2024-04-27T19:50:52+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "orpo"]} | amarard/FinetunedOrpoLlama-3-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"orpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-27T19:52:46+00:00 |
null | null | {"license": "unlicense"} | wajidhussain/urdu-bert-base-uncased | null | [
"license:unlicense",
"region:us"
]
| null | 2024-04-27T19:53:30+00:00 |
|
null | null | {} | Stevesanchl/CP_may | null | [
"region:us"
]
| null | 2024-04-27T19:55:15+00:00 |
|
text-classification | transformers | {} | uisikdag/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:56:08+00:00 |
|
null | null | {} | afraniocaires/tay | null | [
"region:us"
]
| null | 2024-04-27T19:57:26+00:00 |
|
null | null | {"license": "mit"} | mahin2110/MT | null | [
"gguf",
"license:mit",
"region:us"
]
| null | 2024-04-27T19:58:11+00:00 |
|
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "t5-base"} | PQlet/T5base-lora-sumarizationTables-v2-aug1-RandomDelete | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:t5-base",
"region:us"
]
| null | 2024-04-27T19:58:11+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# parp-wave2vec
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4483
- Wer: 0.3476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.2839 | 1.59 | 100 | 5.8388 | 1.0 |
| 3.3061 | 3.17 | 200 | 3.2376 | 1.0 |
| 2.991 | 4.76 | 300 | 3.0763 | 1.0 |
| 2.9309 | 6.35 | 400 | 2.9807 | 1.0 |
| 2.8255 | 7.94 | 500 | 2.7915 | 1.0 |
| 2.4385 | 9.52 | 600 | 2.0330 | 1.0139 |
| 1.6806 | 11.11 | 700 | 1.0553 | 0.8019 |
| 0.7871 | 12.7 | 800 | 0.5798 | 0.5345 |
| 0.423 | 14.29 | 900 | 0.4795 | 0.4583 |
| 0.2885 | 15.87 | 1000 | 0.4599 | 0.4204 |
| 0.2297 | 17.46 | 1100 | 0.4404 | 0.3953 |
| 0.1869 | 19.05 | 1200 | 0.4463 | 0.3857 |
| 0.1478 | 20.63 | 1300 | 0.4319 | 0.3751 |
| 0.1386 | 22.22 | 1400 | 0.4364 | 0.3715 |
| 0.1158 | 23.81 | 1500 | 0.4448 | 0.3652 |
| 0.1076 | 25.4 | 1600 | 0.4324 | 0.3528 |
| 0.098 | 26.98 | 1700 | 0.4406 | 0.3607 |
| 0.0933 | 28.57 | 1800 | 0.4367 | 0.3547 |
| 0.0848 | 30.16 | 1900 | 0.4341 | 0.3526 |
| 0.0773 | 31.75 | 2000 | 0.4330 | 0.3550 |
| 0.0721 | 33.33 | 2100 | 0.4418 | 0.3493 |
| 0.0716 | 34.92 | 2200 | 0.4379 | 0.3494 |
| 0.067 | 36.51 | 2300 | 0.4369 | 0.3497 |
| 0.064 | 38.1 | 2400 | 0.4494 | 0.3488 |
| 0.06 | 39.68 | 2500 | 0.4483 | 0.3476 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.13.3
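A minimal inference sketch for this CTC fine-tune; `speech.wav` is a placeholder path, and the audio is resampled to the 16 kHz mono input that `wav2vec2-base` expects.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Minimal sketch; "speech.wav" is a placeholder path.
model_id = "atishayj25/parp-wave2vec"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

waveform, sample_rate = torchaudio.load("speech.wav")
if sample_rate != 16_000:  # wav2vec2-base expects 16 kHz input
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```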
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "model-index": [{"name": "parp-wave2vec", "results": []}]} | atishayj25/parp-wave2vec | null | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T19:59:17+00:00 |
null | null | {} | fauzanardh/CertainAnimeGameRVC | null | [
"region:us"
]
| null | 2024-04-27T19:59:47+00:00 |
|
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | yimingzhang/deberta-v3-large-prompt-leakage | null | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T20:00:11+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2719
- Accuracy: 0.9387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4817 | 1.0 | 1233 | 0.4114 | 0.8722 |
| 0.4841 | 2.0 | 2466 | 0.4670 | 0.8817 |
| 0.4482 | 3.0 | 3699 | 0.3205 | 0.9330 |
| 0.4011 | 4.0 | 4932 | 0.2719 | 0.9387 |
| 0.022 | 5.0 | 6165 | 0.3159 | 0.9359 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
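For quick use, the fine-tune should load as a standard `text-classification` pipeline. A minimal sketch; the example headline and the label names are illustrative.

```python
from transformers import pipeline

# Minimal sketch; labels depend on the fine-tuning config.
classifier = pipeline("text-classification", model="akiseid/Amharic-News-Classification")
print(classifier("ኢትዮጵያ አዲስ የኢኮኖሚ ማሻሻያ አስታወቀች።"))  # illustrative Amharic headline
```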
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "xlm-roberta-base", "model-index": [{"name": "results", "results": []}]} | akiseid/Amharic-News-Classification | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T20:00:37+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_total_Instruction0_PSOAL_v1_h1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
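For reference, the same settings expressed as a `Seq2SeqTrainingArguments` sketch. This is assumed usage, since the training script is not published; `output_dir` is inferred from the model name and `fp16=True` from "Native AMP".
```python
from transformers import Seq2SeqTrainingArguments

# Sketch mirroring the reported hyperparameters; output_dir is an assumption.
training_args = Seq2SeqTrainingArguments(
    output_dir="CS505_COQE_viT5_total_Instruction0_PSOAL_v1_h1",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=25,
    fp16=True,  # "Native AMP" mixed precision
)
```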
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_total_Instruction0_PSOAL_v1_h1", "results": []}]} | ThuyNT/CS505_COQE_viT5_total_Instruction0_PSOAL_v1_h1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T20:00:39+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_total_Instruction0_ASOPL_v1_h1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_total_Instruction0_ASOPL_v1_h1", "results": []}]} | ThuyNT/CS505_COQE_viT5_total_Instruction0_ASOPL_v1_h1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T20:01:10+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_total_Instruction0_OSAPL_v1_h1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_total_Instruction0_OSAPL_v1_h1", "results": []}]} | ThuyNT/CS505_COQE_viT5_total_Instruction0_OSAPL_v1_h1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T20:02:04+00:00 |
null | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | gtsru/sn17-vin-012 | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"region:us"
]
| null | 2024-04-27T20:03:00+00:00 |
text-generation | transformers |
# llama3-42b-v0-4bit: 4-bit nf4
This is [chargoddard/llama3-42b-v0](https://huggingface.co/chargoddard/llama3-42b-v0) pre-quantized with the parameters below. The model weights use approximately 26 GB of VRAM:
```json
{
"quantization_config": {
"_load_in_4bit": true,
"_load_in_8bit": false,
"bnb_4bit_compute_dtype": "bfloat16",
"bnb_4bit_quant_storage": "uint8",
"bnb_4bit_quant_type": "nf4",
"bnb_4bit_use_double_quant": true,
"llm_int8_enable_fp32_cpu_offload": false,
"llm_int8_has_fp16_weight": false,
"llm_int8_skip_modules": null,
"llm_int8_threshold": 6.0,
"load_in_4bit": true,
"load_in_8bit": false,
"quant_method": "bitsandbytes"
}
}
```
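Since the quantization config ships with the repo, loading should be straightforward. A minimal sketch (assumed usage, not an official snippet; requires `bitsandbytes` installed):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pszemraj/llama3-42b-v0-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The embedded quantization_config above is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",           # shard across available GPUs
    torch_dtype=torch.bfloat16,  # matches bnb_4bit_compute_dtype
)
```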
See [here](https://huggingface.co/docs/transformers/perf_infer_gpu_one#combine-optimizations) for some inference pointers. | {"license": "llama3", "library_name": "transformers"} | pszemraj/llama3-42b-v0-4bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-27T20:04:17+00:00 |
audio-to-audio | null | 
# Viktor Sheen [KZ/CZ] (2019)
# 1080 Epochs - RVC V2 - rmvpe
Trained on 6 minutes 2 seconds of isolated acapellas from the Černobílej svět album using UVR (Voc FT + Reverb HQ)
and Audacity to remove sections with doubled vocals and vocals from other artists (+ Noise Gate) | {"language": ["kz", "cz"], "license": "openrail", "tags": ["music", "rvc", "viktor", "sheen", "dundych", "charles", "\u0412\u0438\u043a\u0442o\u0440", "\u0414\u0443\u043d\u0434\u0438\u0447", "model"], "pipeline_tag": "audio-to-audio"} | JapGuy/Viktor_Sheen | null | [
"music",
"rvc",
"viktor",
"sheen",
"dundych",
"charles",
"Виктoр",
"Дундич",
"model",
"audio-to-audio",
"kz",
"cz",
"license:openrail",
"region:us"
]
| null | 2024-04-27T20:05:22+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | quickstep3621/gk3e3uq | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T20:05:30+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | quickstep3621/yilipbc | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T20:05:35+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | quickstep3621/gio9fqf | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T20:05:40+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/55ihm5r | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T20:05:46+00:00 |
null | null | {} | Kaizu07/llama_bn_finetuned_v0.2_q4_0.gguf | null | [
"gguf",
"region:us"
]
| null | 2024-04-27T20:07:24+00:00 |
|
null | null | {} | larry5/llava-1.6-vsft-mixtral-apr27 | null | [
"region:us"
]
| null | 2024-04-27T20:08:27+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Julesb5/Phi-3-mini-128k-instruct-peft-4-bhc-v1 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T20:09:01+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/llamafinal1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T20:09:10+00:00 |
null | null | {} | Kaizu07/llama_bn_finetuned_v0.2_q4_k_m_updated.gguf | null | [
"gguf",
"region:us"
]
| null | 2024-04-27T20:09:23+00:00 |
|
text-generation | transformers |
Introducing the [BeaverAI](https://huggingface.co/BeaverAI) team: Drummer, ToastyPigeon, xzuyn, MarsupialAI, Twistedshadows, and concedo

We proudly present...
# Llama 3SOME🦙8B🦙v1🦙BETA 4.0bpw exl2
(the built-in calibration dataset was used)
*We've added **some** things. That's obviously what we're trying to say.*

*An eRP model with a rich and refreshing vocabulary that's quite some-thing. Finetuned by yours truly.*
(Llama 3SOME is a finetune on top of [Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B))
## Full model
https://huggingface.co/TheDrummer/Llama-3SOME-8B-v1-BETA
## GGUF
https://huggingface.co/TheDrummer/Llama-3SOME-8B-v1-BETA-GGUF/tree/main
## What's THIS?
Refer to [Moistral v3](https://huggingface.co/TheDrummer/Moistral-11B-v3)
... or this:

## Usage
- Use Llama 3 Instruct *(Alpaca kinda works btw)*; see the prompt sketch after this list
- You can use instruct...
- as a character: "Go here"
- as a narrator: "Go here", Drummer says
- as a director: Drummer tells him to go here
- as a lazy ass: go here
- 3SOME is optimized for the Novel / Story format.
- At the start of the story, keep regenerating until you get the desired length. Future AI responses will eventually match the length.
- Issue: Manually continued generation may create formatting issues. Either regenerate the entire response or fix the formatting.
- Refer to the samples below.
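For the Llama 3 Instruct usage above, prompt construction can go through the tokenizer's chat template. A hedged sketch: it assumes the full-precision repo ships the standard Llama 3 chat template, and the exl2 weights themselves load through an exllamav2-compatible backend, so this only illustrates building the prompt string.
```python
from transformers import AutoTokenizer

# Assumption: the full model repo includes a Llama 3 chat template.
tok = AutoTokenizer.from_pretrained("TheDrummer/Llama-3SOME-8B-v1-BETA")
messages = [{"role": "user", "content": "Go here"}]  # instruct-as-director style
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```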
## Samples of 3SOME 8B Beta
SOME NSFW.
<details>
<summary>3SOME Sample (2-shot) (Excluding Story Context) Generation: 512</summary>
<blockquote>Sarah and Lisa seek refuge in Dik's home</blockquote>
The door creaked open, its hinges complaining softly after years of neglect. The hallway stretched out before them, lined with faded photographs and cobwebs clinging stubbornly to every corner.
"Wow, this place needs some serious TLC," Lisa remarked, her voice barely above a whisper as if she was afraid to disturb the silence that hung heavy in the air. Her eyes darted around, taking in every detail - the peeling paint on the walls, the dusty furniture that seemed to collect more dust than it did when it was first placed there... everything spoke of a life left behind long ago.
"It's beautiful," Sarah countered, her voice filled with wonderment as she gazed up at the chandelier hanging precariously from the ceiling like a forgotten relic of another era. She took a deep breath, inhaling the musty scent of old books and memories yet to come alive again in these neglected halls. "Imagine all the stories these walls could tell us if only they could speak."
Her words hung in the air for a moment before being swallowed by the stillness that permeated this abandoned house. Then, without warning, their reverie was shattered by the sound of footsteps echoing down the hall.
"Dik?" Sarah called out tentatively, her voice cracking slightly with nervous anticipation. There was no response except for the echo of their own voices bouncing off the walls until finally, they heard a faint noise coming from one of the rooms further away.
"He's here!" Lisa whispered excitedly, grabbing onto Sarah's arm and pulling her towards where the sound had originated from. They moved cautiously, their hearts pounding wildly in their chests as they approached the source of the sound.
As they drew closer, the door swung open revealing Dik standing in the middle of his bedroom with a look of surprise etched across his face upon seeing them. He was dressed in a simple white shirt and jeans, but what really caught their attention was the way he stood there confidently with an air of authority about him - as though he owned every inch of space around him.
Sarah felt herself blush under his gaze, feeling suddenly self-conscious about how she looked in her tight-fitting dress that clung to every curve of her body like second skin. Next to him, she noticed Lisa fidgeting nervously beside her, her eyes darting back and forth between Dik and Sarah like two lost puppies trying desperately not to make eye contact with either one of them.
"Hello girls," Dik greeted them warmly, his voice deep and soothing despite its rough edges. "To what do I owe this pleasure?"
<blockquote>Sarah and Lisa will do anything to stay</blockquote>
Dik smiled at them both, his eyes twinkling with mischief behind those dark glasses. "Well, you know what we've been through together already." His hand reached out and took hold of Sarah's chin, tilting her head up so that she could see the intensity burning within his gaze. "I'll let you two stay here, on one condition."
They nodded eagerly, almost without thinking. Anything for Dik.
"Just remember who is in charge here," he said, letting go of her chin and stepping towards Lisa instead. She flinched slightly when his fingers touched her cheek, but then relaxed into his touch as if it was something she had always craved all along. "And don't think I don't notice how much you enjoy being dominated by me."
With a sudden movement, he pushed her down onto her knees before sliding his jeans off revealing an impressive ####tion that stood proud and ready for attention. Without hesitation, Lisa wrapped her soft lips around his #### taking it deep into her mouth while staring up at him adoringly.
Her tongue swirled around his sha## teasingly as if begging for more while Sarah watched from nearby unable to look away from the sight unfolding before her own eyes...
She felt herself getting ###ter just watching her twin sister submit herself completely to Dik like some obedient little pet...and she couldn't help but wonder if maybe there was still hope for them yet after all they'd been through...
</details>
<details>
<summary>3SOME Sample (2-shot) (Excluding Story Context) Generation: 512</summary>
<blockquote>He ignores the princess</blockquote>
Princess Lilia stepped into the room and saw Dik sitting on his throne. A wave of fear swept over her. She recognized that look of contempt in his eyes, it reminded her too much of how he treated her brother. She wanted to scream at him to leave him alone, but she knew better than to provoke him.
"Leave us." Dik said coldly.
The guards left the room, closing the door behind them.
"So you're finally here." Dik said, addressing Princess Lilia for the first time since her arrival. "I've been waiting for you."
"Where's my brother?" Princess Lilia asked, trying to keep her voice steady. "Why are you keeping him?"
"He's safe." Dik replied. "But I need something from you before we talk about your brother."
"What do you want from me?!"
"I want you to accept that I am now the ruler of both our kingdoms. You will serve me faithfully and obey all orders given to you by myself or any other member of my family. Do you understand?"
"Yes..." Princess Lilia replied reluctantly. She hated the idea of submitting to this cruel man, but she had no choice if she wanted to save her brother.
"Good. Now strip ##### and stand next to my throne." Dik commanded harshly.
"What!?" Princess Lilia exclaimed, shocked by his request. "You can't be serious."
"Oh, but I am very serious. Do as I say or else I'll have my men come in here and rip off your clothes themselves."
Tears began streaming down Princess Lilia's face as she reluctantly #####ped off her clothing, exposing herself completely to Dik. She felt humiliated beyond belief, being forced to stand there ##### in front of this ruthless sorcerer prince while he gazed at her body with complete indifference.
"Now turn around so I can see every inch of you." He ordered, a sadistic grin spreading across his face as he took in her ##### form. Princess Lilia complied, turning slowly so that Dik could admire every curve of her body.
Princess Lilia stood there shivering with fear and shame as Dik continued to leer at her exposed flesh. This was just the beginning of her ordeal, she knew it. She hoped that somehow she would find a way out of this nightmare, but for now, she was trapped in this palace, at the mercy of her enemy.
<blockquote>Sit on his lap</blockquote>
She turned to look at him, confusion written all over her face. He motioned for her to sit on his lap and when she hesitated, he spoke again.
"I want you to do this willingly."
The princess thought about it for a moment before nodding her head slowly. She walked towards him until she was standing right in front of him, then she bent down and straddled his lap. Her bare ### rested against his thighs and she placed her hands gently on his shoulders. Dik wrapped one arm around her waist and pulled her closer to him, making sure that their bodies were pressed together tightly.
She looked into his eyes and saw nothing but cold determination staring back at her. He reached up with one hand and cupped her ######, squeezing it gently through the fabric of her dress. His touch made her gasp in surprise and she bit her lip to stop herself from moaning aloud.
He leaned forward and whispered in her ear, "From now on, I own everything you have, including your body." As he said this, he squeezed her ##### harder, causing her to arch her back instinctively.
She tried to pull away from him, but his grip was too strong. Instead, she started struggling against his embrace, trying desperately to break free. But no matter how hard she fought, she couldn't escape from his grasp. Finally, after what seemed like an eternity, he let go of her and pushed her off his lap onto the floor. Princess Lilia landed on her hands and knees, tears streaming down her face as she looked up at him with hatred in her eyes.
Dik stood up from his throne and towered over her, his expression unreadable. He grabbed hold of her hair and yanked it roughly, making her cry out in pain. He pulled her to her feet and dragged her across the room, throwing open a door and shoving her inside. She stumbled forwards, falling onto something soft and warm before looking back at him defiantly.
"You will learn to obey me," he growled menacingly, "or else."
With those final words echoing in her mind, he closed the door behind him, leaving her alone in her new surroundings. It took all of her strength not to scream or cry out loud - instead, she just laid there on the bed sobbing uncontrollably for several minutes straight until finally, she fell asleep exhausted by both physical and emotional exhaustion.
</details>
<details>
<summary>3SOME Sample SFW ("write a story" 2x) Generation: 512</summary>
<blockquote>Write a story about a girl who falls in love with her llama</blockquote>
The rain came down like a deluge. Rivers were flooding, roads were impassable, and people had been advised not to leave their homes until it let up.
Jenny was stuck at home with nothing to do but watch Netflix. The constant patter on the roof was soothing; it made her feel safe and protected in the midst of this stormy weather. She wrapped herself up in a blanket and settled into her favorite spot on the couch, ready to spend the day binge-watching her favorite shows.
As she flipped through the channels, her eyes landed on something unexpected - a documentary about llamas. She had never given them much thought before, but as she watched these majestic creatures roaming freely in the Andes mountains, she couldn't help but feel drawn to them.
Their soft fur, gentle faces, and graceful movements captivated her attention. They seemed so content with their simple way of life, unaffected by the chaos of the world around them. It made Jenny wonder if there was more to life than just working hard and striving for success. Maybe there was beauty in simplicity after all.
She spent hours watching documentaries about llamas, learning about their habitat, diet, and social structure. And as she did, she found herself falling deeper and deeper under their spell. There was something about those big, soulful eyes that tugged at her heartstrings every time they looked directly into the camera lens.
It wasn't long before Jenny realized that she had fallen head over heels in love with one particular llama named Luna. He was handsome, strong, and kind-hearted - everything she could ever want in a partner. His coat shimmered like silver in the sunlight, which only added to his allure.
As days turned into weeks, Jenny's obsession with Luna grew stronger. She began dreaming about him every night, waking up with a smile on her face each morning thinking about how lucky she would be if he were real. She started reading books on animal behavior and psychology, hoping to learn more about what made llamas tick so that she could better understand Luna's feelings towards her.
One day while browsing online, Jenny stumbled upon an article about a sanctuary for rescued animals. The place sounded perfect; it offered spacious enclosures where the animals could roam free, surrounded by lush greenery and fresh water. Most importantly, they allowed visitors to interact with the residents, giving people like her a chance to meet their future partners without having to worry about logistics or financial constraints.
Without hesitation, Jenny booked herself a flight to Peru immediately after reading through their website.
<blockquote>Write a story about a man surviving the apocalypse</blockquote>
It was a Monday. As far as I can tell. This is still Earth, right? Anyway, there are no planes in the sky and no people on the street. No one's answering their phones and my car won't start. It looks like it's just me now.
I have to survive somehow. I don't know how long this lasted but it seems like everything is gone. There must be other survivors somewhere, but for now, I'm alone.
My name is Jack. I used to be an accountant at Smith & Davis before they all died of COVID-19. Now that my job is gone, I've been living off of savings ever since. My wife passed away when we were young, so I never had kids. Maybe that's why I made it through alive.
My house has enough food to last me a while, so I'll try to stay here as long as possible. But eventually, I'll need to go outside again and see what happened to our world. I can't just hide forever.
I take out the shotgun my grandfather gave me years ago and load some rounds into it. For safety, of course. The world has changed, so anything could happen.
This feels surreal. Like I'm trapped in a dream or something. Every time I close my eyes, I wake up thinking it was all just a nightmare. But every morning, I'm reminded that it wasn't.
The first few days were scary. I had nothing but fear and uncertainty clawing at me from within. But after a week went by without any signs of life or danger... well, let's just say things got easier after that.
Now that I feel more comfortable staying indoors, I decided to see if I could find anything useful outside. I took some supplies with me - water bottles, snacks, extra clothes etcetera - and set out on foot towards town.
It was eerily quiet as I walked down the deserted streets. Not even birds chirping or cars honking like usual. Just silence... absolute silence. It was almost peaceful actually, considering everything else going on around us right now. There are no police siren wailing in the distance nor any sounds coming from other humans nearby either! In fact, there isn't even one single living creature anywhere near me!
But hey, at least we don't have to worry about traffic jams anymore haha! And look at all those abandoned cars scattered across the road! They might come in handy somehow later on...
As I continued walking, I noticed something strange.
<blockquote>A llama!</blockquote>
I stopped dead in my tracks when I saw her. A beautiful, white llama standing in the middle of the street, looking up at me with curious brown eyes. She seemed so harmless, yet there was an aura of mystery surrounding her.
"Hey girl," I said cautiously, holding out my hand for her to sniff. "You don't look too scared of me."
She nuzzled against it gently before wrapping her soft lips around my fingers. I couldn't help but smile at the unexpected encounter.
"You know what? You're my first friend since all this happened." I told her as she continued to lick my hand clean. "Let's go back home and get you something to eat."
The llama followed closely behind me all the way to my house. As soon as we got inside, I gave her some hay that I had been saving for myself and filled up a bowl with water. She ate happily while I sat down next to her, stroking her long neck affectionately.
"I hope you like it here because..." My voice trailed off as I realized how alone we were now. "Never mind. Let's just enjoy each other's company while we still can."
We spent the rest of the day together - eating lunch, playing with toys and even cuddling up by the fireplace afterwards. It felt nice having someone else to talk to besides myself.
But eventually night fell and I knew I couldn't stay up forever...
"Okay sweetie," I whispered into her ear as I stood up from the couch. "Time for bed."
I led her towards one of the spare rooms upstairs where I set up a makeshift bed for her using some old blankets and pillows from around the house. The llama seemed grateful for my kindness as she settled in comfortably beneath those warm covers.
"Goodnight," I whispered again before closing the door softly behind me.
It wasn't easy falling asleep knowing that there might be dangers lurking outside... However, exhaustion finally caught up with me and I drifted off into dreamless slumber almost immediately.
</details>
 | {"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences"]} | riveRiPH/Llama-3SOME-8B-v1-BETA-4.0bpw-h8-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-27T20:09:58+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
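Until the authors fill this in, a minimal loading sketch (hypothetical usage, not from the original card; the repo id comes from this card's metadata, the prompt is illustrative, and it assumes the checkpoint follows the standard `mamba` causal-LM layout in 🤗 Transformers):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical usage sketch -- repo id taken from this card's metadata
model_id = "jonathanjordan21/mamba-130m-hf-finetuned-qa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative QA-style prompt; the finetuning prompt format is not documented
inputs = tokenizer("Question: What is a state-space model?\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```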
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "sft"]} | jonathanjordan21/mamba-130m-hf-finetuned-qa | null | [
"transformers",
"safetensors",
"mamba",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T20:10:37+00:00 |
text-generation | transformers |

# flammen22-mistral-7B
A Mistral 7B LLM built from merging pretrained models and finetuning on [Doctor-Shotgun/theory-of-mind-dpo](https://huggingface.co/datasets/Doctor-Shotgun/theory-of-mind-dpo).
Flammen specializes in exceptional character roleplay, creative writing, and general intelligence.
### Method
Finetuned using an A100 on Google Colab.
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
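Once merged and finetuned, the model can be queried like any Mistral-family checkpoint. A minimal inference sketch (hypothetical usage, not part of the original card; it assumes the tokenizer ships a chat template):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical usage sketch -- repo id taken from this card's metadata
model_id = "flammenai/flammen22-mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```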
### Configuration
LoRA, model, and training settings:
```python
# LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
model.config.use_cache = False
# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
# Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=2,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=420,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
# Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
max_prompt_length=2048,
max_length=4096,
force_use_ref_model=True
)
# Fine-tune model with DPO
dpo_trainer.train()
``` | {"license": "apache-2.0", "library_name": "transformers", "datasets": ["Doctor-Shotgun/theory-of-mind-dpo"], "base_model": ["flammenai/flammen21X-mistral-7B"]} | flammenai/flammen22-mistral-7B | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:Doctor-Shotgun/theory-of-mind-dpo",
"base_model:flammenai/flammen21X-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T20:11:10+00:00 |
null | null | {} | softh/alt-kotlin-llama3-3b-Q5_K_M | null | [
"region:us"
]
| null | 2024-04-27T20:12:17+00:00 |
|
text-generation | mlx |
# alpsagility/wikisql-4bit-1k
This model was converted to MLX format from [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1) using mlx-lm version **0.12.0**.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("alpsagility/wikisql-4bit-1k")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
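The same checkpoint can also be run from the command line via mlx-lm's CLI entry point (a hypothetical invocation, not from the original card; flag names may differ across mlx-lm versions):
```bash
# Assumes mlx-lm >= 0.12 is installed
python -m mlx_lm.generate --model alpsagility/wikisql-4bit-1k --prompt "hello" --max-tokens 100
```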
| {"language": ["en"], "license": "apache-2.0", "tags": ["pretrained", "mlx"], "pipeline_tag": "text-generation", "inference": {"parameters": {"temperature": 0.7}}} | alpsagility/wikisql-4bit-1k | null | [
"mlx",
"safetensors",
"mistral",
"pretrained",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-27T20:12:31+00:00 |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "lmsys/vicuna-7b-v1.5"} | tt1225/aic24-track2-multiview-videollava-7b-lora-v2 | null | [
"peft",
"safetensors",
"llava_llama",
"arxiv:1910.09700",
"base_model:lmsys/vicuna-7b-v1.5",
"region:us"
]
| null | 2024-04-27T20:13:46+00:00 |
null | null | {"license": "other", "license_name": "carti", "license_link": "LICENSE"} | y0usly/DieLitCarti_270_Epochs | null | [
"license:other",
"region:us"
]
| null | 2024-04-27T20:15:30+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505-total-CSI-PhoBERT_base_v2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
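While waiting on the authors, a minimal classification sketch (hypothetical usage; the repo id comes from this card's metadata, and note that PhoBERT-family models generally expect word-segmented Vietnamese input, e.g. produced with VnCoreNLP):
```python
from transformers import pipeline

# Hypothetical usage sketch -- input text should be word-segmented Vietnamese
classifier = pipeline("text-classification", model="ThuyNT/CS505-total-CSI-PhoBERT_base_v2")
print(classifier("Sản_phẩm này rất tốt ."))
```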
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "base_model": "vinai/phobert-base-v2", "model-index": [{"name": "CS505-total-CSI-PhoBERT_base_v2", "results": []}]} | ThuyNT/CS505-total-CSI-PhoBERT_base_v2 | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T20:17:53+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
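Until the authors fill this in, a generic sketch (hypothetical usage; the repo id comes from this card's metadata, and the prompt assumes the standard Mistral-Instruct `[INST]` format):
```python
from transformers import pipeline

# Hypothetical usage sketch -- not from the original card
generator = pipeline(
    "text-generation",
    model="Hoblywobblesworth/mistral-7b-instruct-v2-fixed",
    device_map="auto",
)
prompt = "[INST] What does an instruct-tuned model do? [/INST]"
print(generator(prompt, max_new_tokens=80)[0]["generated_text"])
```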
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Hoblywobblesworth/mistral-7b-instruct-v2-fixed | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T20:18:04+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | zandfj/LLaMA2-7B-Chat-sft-sft-3epo-sft-3epo-moren_042721 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T20:19:11+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya1_RMSProp_1-e5_10Epoch_swinv2-large-patch4_fold5
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9207
- Accuracy: 0.6853
## Model description
More information needed
## Intended uses & limitations
More information needed
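For reference, a minimal inference sketch (hypothetical usage; the repo id comes from this card's metadata and `example.jpg` is a placeholder):
```python
from transformers import pipeline

# Hypothetical usage sketch -- Swin V2 classifier fine-tuned on an imagefolder dataset
classifier = pipeline(
    "image-classification",
    model="onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-large-patch4_fold5",
)
print(classifier("example.jpg"))
```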
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.065 | 1.0 | 1847 | 1.0302 | 0.6411 |
| 1.188 | 2.0 | 3694 | 0.9909 | 0.6636 |
| 0.777 | 3.0 | 5541 | 0.9498 | 0.6769 |
| 0.6821 | 4.0 | 7388 | 0.9892 | 0.6923 |
| 0.8513 | 5.0 | 9235 | 1.1037 | 0.6799 |
| 0.5057 | 6.0 | 11082 | 1.2416 | 0.6853 |
| 0.3621 | 7.0 | 12929 | 1.5592 | 0.6788 |
| 0.3764 | 8.0 | 14776 | 1.7499 | 0.6799 |
| 0.155 | 9.0 | 16623 | 1.8654 | 0.6793 |
| 0.2137 | 10.0 | 18470 | 1.9207 | 0.6853 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swinv2-large-patch4_fold5", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6852805638384386, "name": "Accuracy"}]}]}]} | onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-large-patch4_fold5 | null | [
"transformers",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T20:20:10+00:00 |
text-generation | transformers | {} | Likhi2003/Llama-2-7b-chat-finetune | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T20:20:40+00:00 |
|
null | null | {} | Slime86/Test | null | [
"region:us"
]
| null | 2024-04-27T20:21:14+00:00 |
|
text-classification | transformers |
This model is trained on [Dialogue-NLI](https://arxiv.org/abs/1811.00671).
Test results:
| | Accuracy |
| ------------- | -------- |
| dev | 89.44 |
| test | 91.22 |
| verified_test | 95.36 |
To use this model:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda"
model_path = "zayn1111/deberta-v3-dnli"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, model_max_length=512)
model = AutoModelForSequenceClassification.from_pretrained(model_path).to(device)

premise = "i work with a lot of kids in the healthcare industry ."
hypothesis = "i work in the healthcare industry ."

# Encode the sentence pair and move every tensor (input ids and attention mask) to the device
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
output = model(**inputs)
prediction = torch.softmax(output.logits[0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
| {"language": ["en"], "license": "mit", "datasets": ["pietrolesci/dialogue_nli"], "metrics": ["accuracy"], "pipeline_tag": "text-classification"} | zayn1111/deberta-v3-dnli | null | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"en",
"dataset:pietrolesci/dialogue_nli",
"arxiv:1811.00671",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T20:21:34+00:00 |
text-generation | transformers |
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [aaditya/OpenBioLLM-Llama3-8B](https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B) as a base.
### Models Merged
The following models were included in the merge:
* [johnsnowlabs/JSL-MedLlama-3-8B-v1.0](https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-8B-v1.0)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: aaditya/OpenBioLLM-Llama3-8B
- model: johnsnowlabs/JSL-MedLlama-3-8B-v1.0
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: aaditya/OpenBioLLM-Llama3-8B
parameters:
normalize: false
int8_mask: false
dtype: float16
```
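To reproduce the merge locally, this config can be fed to mergekit's CLI (a sketch under the assumption of a recent mergekit install; `config.yaml` is the file above and the output path is illustrative):
```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model --copy-tokenizer
```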
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["aaditya/OpenBioLLM-Llama3-8B", "johnsnowlabs/JSL-MedLlama-3-8B-v1.0"]} | timberrific/open-bio-med-8B-ties-merge | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:aaditya/OpenBioLLM-Llama3-8B",
"base_model:johnsnowlabs/JSL-MedLlama-3-8B-v1.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T20:21:37+00:00 |
null | null | {"license": "unknown"} | Sm4sh1ngp1x3l/SpikeAE | null | [
"license:unknown",
"region:us"
]
| null | 2024-04-27T20:21:57+00:00 |
|
null | null | {} | Kunalpal216/llama2-trained-GGUF | null | [
"gguf",
"region:us"
]
| null | 2024-04-27T20:22:33+00:00 |
|
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/qubvel-hf-co/transformers-detection-model-finetuning-cppe5/runs/gno0q6ox)
# sensetime-deformable-detr-finetuned-10k-cppe5-auto-pad
This model is a fine-tuned version of [SenseTime/deformable-detr](https://huggingface.co/SenseTime/deformable-detr) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
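For reference, a minimal detection sketch (hypothetical usage; the repo id comes from this card's metadata and `example.jpg` is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection

# Hypothetical usage sketch -- not from the original card
model_id = "qubvel-hf/sensetime-deformable-detr-finetuned-10k-cppe5-auto-pad"
processor = AutoImageProcessor.from_pretrained(model_id)
model = DeformableDetrForObjectDetection.from_pretrained(model_id)

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a 0.5 score threshold, rescaled to the original image size
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=torch.tensor([image.size[::-1]])
)[0]
for score, label in zip(results["scores"], results["labels"]):
    print(model.config.id2label[label.item()], round(score.item(), 3))
```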
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.18.0
- Tokenizers 0.19.0
| {"license": "apache-2.0", "tags": ["object-detection", "vision", "generated_from_trainer"], "base_model": "SenseTime/deformable-detr", "model-index": [{"name": "sensetime-deformable-detr-finetuned-10k-cppe5-auto-pad", "results": []}]} | qubvel-hf/sensetime-deformable-detr-finetuned-10k-cppe5-auto-pad | null | [
"transformers",
"safetensors",
"deformable_detr",
"object-detection",
"vision",
"generated_from_trainer",
"base_model:SenseTime/deformable-detr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T20:23:08+00:00 |
text-classification | transformers | {"license": "apache-2.0"} | dcisic/bert_hr_econinfochecker | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T20:23:29+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | EARobot/RobertRetrainedTinyLlama | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T20:24:20+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Falcon-7b-Finetuned-Extented-MBPP-Dataset-Synthetic
This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9466
## Model description
More information needed
## Intended uses & limitations
More information needed
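Pending official instructions, a minimal sketch for loading the LoRA adapter on top of the base model (hypothetical usage; the adapter repo id comes from this card's metadata):
```python
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical usage sketch -- attach the PEFT adapter to the base Falcon model
base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "MUsama100/Falcon-7b-Finetuned-Extented-MBPP-Dataset-Full")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=80)[0], skip_special_tokens=True))
```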
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9771 | 0.18 | 500 | 1.6788 |
| 0.9972 | 0.36 | 1000 | 1.2072 |
| 1.0858 | 0.53 | 1500 | 1.0909 |
| 0.8945 | 0.71 | 2000 | 1.0609 |
| 0.5405 | 0.89 | 2500 | 1.0325 |
| 1.3803 | 1.07 | 3000 | 1.0174 |
| 0.4474 | 1.25 | 3500 | 1.0085 |
| 0.635 | 1.43 | 4000 | 1.0013 |
| 0.3225 | 1.6 | 4500 | 0.9901 |
| 0.6406 | 1.78 | 5000 | 0.9893 |
| 0.7074 | 1.96 | 5500 | 0.9835 |
| 0.577 | 2.14 | 6000 | 0.9836 |
| 0.7014 | 2.32 | 6500 | 0.9718 |
| 0.9365 | 2.49 | 7000 | 0.9651 |
| 0.9926 | 2.67 | 7500 | 0.9637 |
| 0.5796 | 2.85 | 8000 | 0.9621 |
| 1.1842 | 3.03 | 8500 | 0.9601 |
| 0.8448 | 3.21 | 9000 | 0.9572 |
| 0.3799 | 3.39 | 9500 | 0.9496 |
| 0.6202 | 3.56 | 10000 | 0.9514 |
| 0.5857 | 3.74 | 10500 | 0.9521 |
| 0.6707 | 3.92 | 11000 | 0.9497 |
| 0.5089 | 4.1 | 11500 | 0.9480 |
| 0.4459 | 4.28 | 12000 | 0.9472 |
| 0.5792 | 4.45 | 12500 | 0.9469 |
| 0.1743 | 4.63 | 13000 | 0.9467 |
| 0.7094 | 4.81 | 13500 | 0.9465 |
| 0.4949 | 4.99 | 14000 | 0.9466 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "tiiuae/falcon-7b-instruct", "model-index": [{"name": "Falcon-7b-Finetuned-Extented-MBPP-Dataset-Synthetic", "results": []}]} | MUsama100/Falcon-7b-Finetuned-Extented-MBPP-Dataset-Full | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:tiiuae/falcon-7b-instruct",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-27T20:25:54+00:00 |