modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (string, 245 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1–901k chars)
---|---|---|---|---|---|---|---|---|---|
brrr836/tcp2023 | brrr836 | 2024-06-28T02:49:02Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T02:49:02Z | ---
license: apache-2.0
---
|
Lynn83125/tcp2023 | Lynn83125 | 2024-06-28T02:49:37Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T02:49:37Z | ---
license: apache-2.0
---
|
Sailyumina/UUU_TCP2023 | Sailyumina | 2024-06-28T02:49:38Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T02:49:38Z | ---
license: apache-2.0
---
|
ElvisChiu/tcp2023 | ElvisChiu | 2024-06-28T02:49:56Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T02:49:56Z | ---
license: apache-2.0
---
|
Lynn83125/llama2_uuu_news_qlora | Lynn83125 | 2024-06-28T02:50:12Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T02:50:12Z | ---
license: apache-2.0
---
|
SheyneHuang/tcp2023 | SheyneHuang | 2024-06-28T02:50:13Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T02:50:13Z | ---
license: apache-2.0
---
|
ElvisChiu/uuu_fine_tune_gpt2 | ElvisChiu | 2024-06-28T03:29:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-28T02:50:23Z | ---
license: apache-2.0
---
|
karta1215913/tcp2023 | karta1215913 | 2024-06-28T02:50:26Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T02:50:26Z | ---
license: apache-2.0
---
|
Sailyumina/LLAMA2_UUU_NEWS_QLORA | Sailyumina | 2024-06-28T07:35:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"region:us"
]
| null | 2024-06-28T02:50:48Z | ---
base_model: NousResearch/Llama-2-7b-chat-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
jia35/tcp2023 | jia35 | 2024-06-28T02:52:02Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T02:52:02Z | ---
license: apache-2.0
---
|
LisaSS/tcp2023 | LisaSS | 2024-06-28T02:53:26Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T02:53:26Z | ---
license: apache-2.0
---
|
JOHN2106/T3 | JOHN2106 | 2024-06-28T02:53:37Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T02:53:37Z | ---
license: apache-2.0
---
|
LisaSS/uuu_fine_tune_gpt2 | LisaSS | 2024-06-28T03:33:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-28T02:55:39Z | ---
license: apache-2.0
---
|
LisaSS/llama2_uuu_news_qlora | LisaSS | 2024-06-28T02:55:59Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T02:55:59Z | ---
license: apache-2.0
---
|
jan-hq/llama-3-sound-cp | jan-hq | 2024-06-28T07:27:03Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-28T02:56:24Z | Entry not found |
Sulochini/empDetails | Sulochini | 2024-06-28T02:56:49Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T02:56:49Z | Entry not found |
l4urentz/T3 | l4urentz | 2024-06-28T03:03:39Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T02:57:12Z | Entry not found |
MaryCC/tcp2023 | MaryCC | 2024-06-28T02:58:01Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T02:58:01Z | ---
license: apache-2.0
---
|
passionful7/Linen-Like-grey | passionful7 | 2024-06-28T02:58:30Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2024-06-28T02:58:29Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
instance_prompt: Linen-Like Extra Wide Pants - grey
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - passionful7/Linen-Like-grey
<Gallery />
## Model description
These are passionful7/Linen-Like-grey LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use Linen-Like Extra Wide Pants - grey to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](passionful7/Linen-Like-grey/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
habulaj/269422239758 | habulaj | 2024-06-28T02:59:22Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T02:59:19Z | Entry not found |
Oriones/Eric | Oriones | 2024-06-28T02:59:21Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T02:59:21Z | Entry not found |
JOJOLA/tcp2023 | JOJOLA | 2024-06-28T02:59:41Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T02:59:41Z | ---
license: apache-2.0
---
|
frankdu/llama3_chat_tune_lora | frankdu | 2024-06-28T03:25:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T03:00:49Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** frankdu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
|
chris208/realvisxlV40_v40LightningBakedvae.safetensors | chris208 | 2024-06-28T03:01:34Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T03:01:34Z | ---
license: apache-2.0
---
|
moritzef/model_out_1 | moritzef | 2024-06-28T03:02:17Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:02:17Z | Entry not found |
fukufuk/vsft-llava-1.5-7b-hf | fukufuk | 2024-06-28T03:04:10Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:04:10Z | Entry not found |
mttgermano/poca-SoccerTwos | mttgermano | 2024-06-28T03:06:59Z | 0 | 0 | ml-agents | [
"ml-agents",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2024-06-28T03:06:58Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mttgermano/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
phamvi856/3d-icon-sdxl-dora | phamvi856 | 2024-06-28T03:07:27Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:07:27Z | Entry not found |
fabriziofalcon26/pregunta5 | fabriziofalcon26 | 2024-06-28T03:09:28Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:09:01Z | Entry not found |
yujinqiu/k2-jp | yujinqiu | 2024-06-28T03:10:34Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2024-06-28T03:10:34Z | ---
license: mit
---
|
elysiantech/gemma-2b-gptq-4bit-lora | elysiantech | 2024-06-28T19:59:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T03:11:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ShotaSato0916/multilingual-e5-large-alan-apple-overfitting | ShotaSato0916 | 2024-06-28T03:27:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:12:28Z | Entry not found |
henriquefr/interop4 | henriquefr | 2024-06-28T03:13:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T03:12:41Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** henriquefr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
habulaj/144860121264 | habulaj | 2024-06-28T03:16:01Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:15:16Z | Entry not found |
gdvstd/dessin-sd3-lora | gdvstd | 2024-06-28T03:18:41Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:17:41Z | Entry not found |
Keilahwjiwbssh/Oi | Keilahwjiwbssh | 2024-06-28T03:19:32Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:19:32Z | Entry not found |
Kibalama/Pixelcopter-PLE-v0 | Kibalama | 2024-06-28T03:20:52Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-28T03:20:46Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 43.30 +/- 42.59
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
gdvstd/rough-sd3-lora | gdvstd | 2024-06-28T03:22:24Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:22:17Z | Entry not found |
Monke64/music2image | Monke64 | 2024-06-28T03:27:43Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:27:43Z | Entry not found |
head-empty-ai/Benchmark-Results | head-empty-ai | 2024-07-02T21:15:47Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:30:34Z | |MODEL|HellaSwag|EQ_Bench||% Parsed (EQ)|
|:---|--:|---:|---|:---|
|[argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)|83.03|66.52||100|
|[athirdpath/NSFW_DPO_vmgb-7b](https://huggingface.co/athirdpath/NSFW_DPO_vmgb-7b)|85.36|74.83||100|
|[berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)|82.17|69.78||100|
|[cgato/Thespis-Mistral-7b-v0.6](https://huggingface.co/cgato/Thespis-Mistral-7b-v0.6)|81.83|29.13||79.53|
|[chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7)|83.28|72.15||100|
|[cognitivecomputations/dolphin-2.2.1-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.2.1-mistral-7b)|81.47|67.33||100|
|[cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)|83.61|71.57||99.42|
|[Crimvael/Raphael-7B](https://huggingface.co/Crimvael/Raphael-7B)|83.47|68.73||100|
|[Delcos/Mistral-Pygmalion-7b](https://huggingface.co/Delcos/Mistral-Pygmalion-7b)|76.61|17.34||100|
|[FallenMerick/Iced-Lemon-Cookie-7B](https://huggingface.co/FallenMerick/Iced-Lemon-Cookie-7B)|85.54|71.54||100|
|[FallenMerick/Smart-Lemon-Cookie-7B](https://huggingface.co/FallenMerick/Smart-Lemon-Cookie-7B)|85.41|68.12||100|
|[HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)|82.02|58.14||100|
|[Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)|79.76|62.26||100|
|[jondurbin/airoboros-m-7b-3.1.2](https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2)|81.34|38.52||100|
|[jondurbin/cinematika-7b-v0.1](https://huggingface.co/jondurbin/cinematika-7b-v0.1)|80.31|44.85||100|
|[KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)|82.65|63.23||100|
|[KoboldAI/Mistral-7B-Erebus-v3](https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3)|76.65|18.19||97.66|
|[KoboldAI/Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1)|79.19|2.10||98.25|
|[maywell/Synatra-7B-v0.3-RP](https://huggingface.co/maywell/Synatra-7B-v0.3-RP)|80.46|54.93||100|
|[migtissera/Synthia-7B-v3.0](https://huggingface.co/migtissera/Synthia-7B-v3.0)|81.74|15.03||94.74|
|[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)|74.67|46.82||100|
|[mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)|83.66|65.50||99.42|
|[mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)|86.46|74.21||99.42|
|[Norquinal/Mistral-7B-claude-chat](https://huggingface.co/Norquinal/Mistral-7B-claude-chat)|83.06|16.34||99.42|
|[NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)|80.56|65.93||100|
|[NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9)|78.71|19.25||100|
|[Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)|81.67|63.98||99.42|
|[rwitz/go-bruins](https://huggingface.co/rwitz/go-bruins)|84.92|73.62||100|
|[SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)|85.25|72.36||100|
|[SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B)|84.53|73.67||100|
|[SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)|84.55|68.74||100|
|[senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)|87.42|77.87||100|
|[TeeZee/DarkSapling-7B-v2.0](https://huggingface.co/TeeZee/DarkSapling-7B-v2.0)|82.56|62.19||100|
|[teknium/Hermes-Trismegistus-Mistral-7B](https://huggingface.co/teknium/Hermes-Trismegistus-Mistral-7B)|79.21|64.86||98.83|
|[teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)|81.68|65.75||100|
|[Undi95/Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B)|83.52|66.57||100|
<br/>
|MODEL|HellaSwag|EQ_Bench||% Parsed (EQ)|
|:---|--:|---:|---|:---|
|[ABX-AI/Silver-Sun-v2-11B](https://huggingface.co/ABX-AI/Silver-Sun-v2-11B)|86.40|69.92||100|
|[backyardai/Fimbulvetr-Holodeck-Erebus-Westlake-10.7B](https://huggingface.co/backyardai/Fimbulvetr-Holodeck-Erebus-Westlake-10.7B)|86.00|69.25||100|
|[BlueNipples/SnowLotus-v2-10.7B](https://huggingface.co/BlueNipples/SnowLotus-v2-10.7B)|83.42|60.54||99.42|
|[FallenMerick/Chewy-Lemon-Cookie-11B](https://huggingface.co/FallenMerick/Chewy-Lemon-Cookie-11B)|84.39|76.24||100|
|[FallenMerick/Chunky-Lemon-Cookie-11B](https://huggingface.co/FallenMerick/Chunky-Lemon-Cookie-11B)|84.36|76.29||100|
|[froggeric/WestLake-10.7B-v2](https://huggingface.co/froggeric/WestLake-10.7B-v2)|86.74|73.35||95.32|
|[head-empty-ai/Mytho-Lemon-11B](https://huggingface.co/head-empty-ai/Mytho-Lemon-11B)|79.77|60.03||100|
|[Himitsui/KuroMitsu-11B](https://huggingface.co/Himitsui/KuroMitsu-11B)|86.33|70.50||98.83|
|[kyujinpy/SOLAR-Platypus-10.7B-v2](https://huggingface.co/kyujinpy/SOLAR-Platypus-10.7B-v2)|82.05|25.11||45.61|
|[migtissera/Tess-10.7B-v1.5b](https://huggingface.co/migtissera/Tess-10.7B-v1.5b)|83.82|61.83||99.42|
|[NeverSleep/Mistral-11B-SynthIAirOmniMix](https://huggingface.co/NeverSleep/Mistral-11B-SynthIAirOmniMix)|81.58|55.19||100|
|[NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)|83.24|63.52||100|
|[saishf/Fimbulvetr-Kuro-Lotus-10.7B](https://huggingface.co/saishf/Fimbulvetr-Kuro-Lotus-10.7B)|86.25|65.85||100|
|[Sao10K/Fimbulvetr-10.7B-v1](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1)|85.81|65.42||100|
|[Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)|86.61|70.00||99.42|
|[Sao10K/Frostwind-10.7B-v1](https://huggingface.co/Sao10K/Frostwind-10.7B-v1)|84.15|55.73||99.42|
|[Sao10K/Solstice-11B-v1](https://huggingface.co/Sao10K/Solstice-11B-v1)|86.42|68.24||99.42|
|[TheDrummer/Moistral-11B-v3](https://huggingface.co/TheDrummer/Moistral-11B-v3)|86.65|69.75||99.42|
|[Undi95/Borealis-10.7B](https://huggingface.co/Undi95/Borealis-10.7B)|79.58|8.27||44.44|
|[upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)|86.35|68.65||98.25|
|[upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)|83.10|28.66||100|
|
AdamKasumovic/llama3-8b-instruct-bactrian-x-xh-100-percent-med-high-perplexity | AdamKasumovic | 2024-06-28T03:35:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-28T03:31:24Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fangjj/my_awesome_mind_model | fangjj | 2024-06-28T03:33:10Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2024-06-28T03:31:30Z | Entry not found |
BrallanPa/filmarv2 | BrallanPa | 2024-06-28T03:33:06Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:32:31Z | Entry not found |
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-en-100-percent-med-perplexity | AdamKasumovic | 2024-06-28T03:36:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-28T03:33:35Z | ---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
habulaj/1699716757 | habulaj | 2024-06-28T03:35:39Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:35:36Z | Entry not found |
chainup244/Qwen-Qwen1.5-1.8B-1719545883 | chainup244 | 2024-06-28T03:39:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-28T03:38:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nglguarino/phi2-firstTry | nglguarino | 2024-06-28T04:13:19Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-28T03:39:24Z | Entry not found |
fhsp93/my_awesome_model | fhsp93 | 2024-06-28T03:40:41Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:40:41Z | Entry not found |
AniaAri/pregunta05 | AniaAri | 2024-06-28T03:43:50Z | 0 | 0 | null | [
"joblib",
"region:us"
]
| null | 2024-06-28T03:43:14Z | Entry not found |
Wawaworker/nkszbad | Wawaworker | 2024-06-28T03:45:57Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:45:57Z | Entry not found |
habulaj/11386288619 | habulaj | 2024-06-28T03:46:23Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:46:22Z | Entry not found |
modeldodel/DERRIUSNew | modeldodel | 2024-06-28T03:49:10Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-28T03:48:35Z | ---
license: openrail
---
|
shengxuelim/q-FrozenLake-v1-4x4-noSlippery | shengxuelim | 2024-06-28T03:51:40Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-28T03:51:38Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="shengxuelim/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
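Once loaded, the agent acts by picking the highest-valued action for the current state from its Q-table. A minimal sketch, assuming the unpickled dict exposes the table under a key like `"qtable"` (the exact layout of the pickle is an assumption, not documented here):

```python
import numpy as np

def act_greedy(qtable, state):
    # Greedy policy: pick the action with the highest Q-value
    # in the row of the Q-table for this state.
    return int(np.argmax(qtable[state]))

# Example with a toy 1-state, 3-action table:
qtable = np.array([[0.1, 0.9, 0.3]])
action = act_greedy(qtable, 0)  # selects action 1
```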
|
shengxuelim/taxi-rl | shengxuelim | 2024-06-28T03:53:10Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-28T03:53:08Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-rl
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="shengxuelim/taxi-rl", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
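The `mean_reward` figure reported above (7.52 +/- 2.72) is typically obtained by rolling out the greedy policy for many episodes and averaging the episodic returns. A hedged sketch of such an evaluation loop, assuming the loaded dict holds the table under `"qtable"` and the env follows the Gymnasium `reset`/`step` API (both assumptions):

```python
import numpy as np

def evaluate(env, qtable, n_episodes=100, max_steps=99):
    # Roll out the greedy policy and return (mean, std) of episodic returns.
    returns = []
    for _ in range(n_episodes):
        state, _ = env.reset()
        total = 0.0
        for _ in range(max_steps):
            action = int(np.argmax(qtable[state]))
            state, reward, terminated, truncated, _ = env.step(action)
            total += reward
            if terminated or truncated:
                break
        returns.append(total)
    return float(np.mean(returns)), float(np.std(returns))
```

Calling `evaluate(gym.make(model["env_id"]), model["qtable"])` would then reproduce a statistic of the same shape as the card's metric.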
|
habulaj/6276047107 | habulaj | 2024-06-28T03:53:43Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:53:37Z | Entry not found |
habulaj/5143040115 | habulaj | 2024-06-28T03:54:09Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:54:07Z | Entry not found |
triplee/supernatural_dataset_3epo_model | triplee | 2024-06-28T03:55:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T03:55:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Razer112/Goth | Razer112 | 2024-06-28T23:08:05Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-28T03:56:57Z | ---
license: openrail
---
|
Hoangcondilonton/test | Hoangcondilonton | 2024-06-28T03:57:25Z | 0 | 0 | null | [
"license:llama2",
"region:us"
]
| null | 2024-06-28T03:57:25Z | ---
license: llama2
---
|
l4urentz/Pregunta5 | l4urentz | 2024-06-28T04:00:07Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T03:58:44Z | Entry not found |
Arodrigo/temp008 | Arodrigo | 2024-06-28T04:02:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T04:02:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
faissalb/cih_lora_llama_0.1 | faissalb | 2024-06-28T04:14:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T04:13:29Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** faissalb
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kugelitsjust/Dr_Jib | kugelitsjust | 2024-06-29T10:30:46Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-28T04:13:58Z | ---
license: openrail
---
|
octadion/llama3-8B-chat-v2-gguf | octadion | 2024-06-28T04:14:40Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T04:14:40Z | Entry not found |
kugelitsjust/Azzathog | kugelitsjust | 2024-06-28T04:16:36Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-28T04:15:37Z | ---
license: openrail
---
|
Ammartatox/llamared7 | Ammartatox | 2024-06-28T04:24:31Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-28T04:16:32Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Ammartatox
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NAYEONCEot9cover/LISA | NAYEONCEot9cover | 2024-06-28T04:25:10Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-28T04:18:42Z | ---
license: openrail
---
|
Arodrigo/temp009 | Arodrigo | 2024-06-28T04:19:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T04:19:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
a1-b2-c3-d4-archana/t5-small_classification | a1-b2-c3-d4-archana | 2024-06-28T04:42:16Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
]
| null | 2024-06-28T04:22:03Z | Entry not found |
abdiharyadi/indoamrbart-mbart-triple-ft-parser-no-nst-256-eps | abdiharyadi | 2024-06-28T04:24:46Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-28T04:22:46Z | Entry not found |
nglguarino/my-nlp-phi2-model | nglguarino | 2024-06-28T05:10:07Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-28T04:27:49Z | Entry not found |
habulaj/244864494343 | habulaj | 2024-06-28T04:31:49Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T04:31:39Z | Entry not found |
NoNameFactory/llama-3-8b-it-4bit-ContdPT_1_10 | NoNameFactory | 2024-06-28T04:36:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T04:33:19Z | ---
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** NoNameFactory
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Arodrigo/temp0010 | Arodrigo | 2024-06-28T04:33:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T04:33:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jykim310/phiva-3.9b-q4f16_1-MLC | jykim310 | 2024-06-28T04:37:53Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T04:33:35Z | Entry not found |
habulaj/5417556370 | habulaj | 2024-06-28T04:39:09Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T04:39:08Z | Entry not found |
Arodrigo/temp0011 | Arodrigo | 2024-06-28T04:42:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T04:42:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Djtrice/Kids | Djtrice | 2024-06-28T04:46:20Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T04:46:19Z | Entry not found |
habulaj/1880918602 | habulaj | 2024-06-28T04:50:47Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T04:50:42Z | Entry not found |
sanskarceo1/Chyno | sanskarceo1 | 2024-06-28T04:51:56Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T04:51:55Z | ---
license: apache-2.0
---
|
DizVen/coursework | DizVen | 2024-06-28T04:52:28Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T04:52:28Z | ---
license: apache-2.0
---
|
DrChamyoung/TextARC2.6 | DrChamyoung | 2024-06-28T04:54:00Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T04:54:00Z | ---
license: apache-2.0
---
|
pacoreyes/TopicBoundaryBERT | pacoreyes | 2024-06-29T06:52:47Z | 0 | 0 | null | [
"doi:10.57967/hf/2644",
"license:mit",
"region:us"
]
| null | 2024-06-28T04:55:02Z | ---
license: mit
---
|
saiki23/NyntchLORA | saiki23 | 2024-06-28T07:05:48Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T04:58:08Z | Entry not found |
ShapeKapseln33/Nexalyn678 | ShapeKapseln33 | 2024-06-28T05:03:09Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T04:59:50Z | [køb] Nexalyn Anmeldelser At forbrænde fedt i besværlige områder er en udfordring for mange mennesker på deres vægttabsrejse. Dette stædige kropsfedt kan være frustrerende og svært at målrette mod med kost og motion alene. Nexaslim-tillægget kan dog give den løsning, du har ledt efter.
**[Klik her for at købe nu fra Nexalyns officielle hjemmeside](https://capsules24x7.com/nexalyn-danmark)**
Nexalyn Pills er en forbedring til at øge folks opmærksomhed under seksuelle sociale lejligheder. Det er opdateringen, der kan bruges af personer med seksuelle mangler. Denne forbedring virker ved at forbedre testosteronniveauet hos mænd. Testosteroner er kemikalier, der kan bringe tilfredsstillende seksuel funktion og god seksuel ydeevne i fare.
Dette redesign hjælper med at arbejde på præsentationen ved seksuelle sociale begivenheder ved at øge størrelsen af erektionen i henhold til udvidelsen af pizazzen. Et klart svar for enkeltpersoner bør også understøtte deres seksuelle grænser.
##Hvad er Nexalyn?
Nexalyn virker ved at øge testosteronniveauet i den menneskelige krop. Nexalyn-tilskuddet indeholder forskellige ingredienser, der øger mængden af testosteronkemikalier og derfor understøtter seksuel præstation. Udover testosteronopdateringen udvider redesignet også kroppens blodgennemstrømning. Den øgede blodgennemstrømning fører så til bedre resultater og bedre ydeevne af forskellige organer i kroppen. De forskellige organer i kroppen, inklusive penis, vil fungere perfekt, og den seksuelle præsentation vil blive opdateret, svarende til personens vitalitet. Forbedringen omfatter nitrogenoxid, som giver bedre blodcirkulation og bedre bevidsthed blandt mennesker.
##Er ingredienserne i Nexalyn
Nexalyns redesign omfatter ethvert mærkefragment. Dekorationerne er ret typiske og fokuserer stort set på nogle få bemærkelsesværdige standardsmag. Nogle af de kritiske elementer i dette redesign er som følger:
Tongkat Ali: Denne specifikke effekt er ansvarlig for at øge testosteronniveauet i den menneskelige krop.
**[Klik her for at købe nu fra Nexalyns officielle hjemmeside](https://capsules24x7.com/nexalyn-danmark)**
L-arginin: Denne fiksering er underlagt udviklingen af konstansen hos den person, der dræner den. På den måde fungerer kønsorganerne bedre.
Maca: Dette er også en af de afgørende dele af denne udvikling. Det er underlagt spredningen af bedre blod på samme måde som den generelle velstand for forskellige organer i kroppen.
Ginseng: Denne fiksering er forpligtet til at give folk ekstra centralitet.
Dette sikrer folks sikkerhed under seksuelle møder. Det skal være klart, at alle disse er gennemtænkte koncentrater af normale smagsvarianter. Derfor er brugen af denne opgradering beskyttet.
##Hvordan skal du tage Nexalyn?
Nexalyn fremstilles som en beholder, der indeholder tabletterne fra plasteret. Æsken indeholder ca. 60 tabletter, hvoraf 2 bør tages pålideligt. Den ene pille skal tages den næste dag efter morgenmaden og den anden pille efter aftensmaden. Nexalyn mandlige forstærkningspiller skal tages med varmt vand. En høj andel vand bør bruges, når du bruger dette tilbehør. Dette er bedre for at få en hurtigere effekt af forbedring.
##Hvor kan jeg nemt og hurtigt få fat i Nexalyn?
Så vi har set, at Nexalyn er førende blandt de andre seksuelle præstationsfremmende kosttilskud på markedet. Hvis du lider af seksuelle mangler, kan du også prøve denne forbedring og tage de betydelige konsekvenser af denne makeover i øjnene. For at købe disse piller kan du klikke på et hvilket som helst link på denne side og få det bedste tilbud.
**[Klik her for at købe nu fra Nexalyns officielle hjemmeside](https://capsules24x7.com/nexalyn-danmark)**
|
VKapseln475/Shape7545 | VKapseln475 | 2024-06-28T05:21:11Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T05:03:25Z | # [NEU] Shape kapseln Deutschland Bewertungen Test und Einnahme - Shape kapseln Erfahrungen
Shape Kapseln Deutschland ist ein kürzlich auf den Markt gebrachtes, den Stoffwechsel ankurbelndes Nahrungsergänzungsmittel, das in den letzten Wochen auf dem Markt begeisterte Kritiken erhalten hat. In dieser Rezension zu Shape Kapseln werde ich jeden Aspekt dieser Ergänzung im Detail untersuchen, um herauszufinden, ob sie ihr Geld wert ist.
## **[Klicken Sie hier, um jetzt auf der offiziellen Website von Shape Kapseln zu kaufen](https://callednews.com/shape-kapseln-de)**
## Formkapseln – Ergebnisse und Langlebigkeit
Um effektive Ergebnisse zu erzielen, empfiehlt der Hersteller von Shape Kapseln, das Nahrungsergänzungsmittel unbedingt über einen Zeitraum von 3 bis 5 Monaten einzunehmen. Dieser Zeitraum für die Ergebnisse hängt von verschiedenen Faktoren wie Alter, genetischer Zusammensetzung, Lebensstil, Stoffwechselrate und allgemeinem Gesundheitszustand ab. Allerdings wird behauptet, dass es kein Zurück mehr gibt, sobald Shape Kapseln Ergebnisse liefert.
Damit die Ergebnisse dauerhaft anhalten, müssen Sie das Nahrungsergänzungsmittel Shape Kapseln regelmäßig einnehmen, gefolgt von einer gesunden und ausgewogenen Ernährung und einfachen Übungen. Ein gesunder Lebensstil trägt dazu bei, die Wirksamkeit dieses Stoffwechsel-Boosters zu steigern und auch die allgemeine Gesundheit und das Wohlbefinden zu verbessern. Achten Sie darauf, die empfohlene Portion Shape Kapseln nicht auszulassen, da die Ergebnisse schnell nachlassen.
## Kundenrezensionen und Beschwerden zu Shape Kapseln: Shape Kapseln Benutzererfahrung
Die Kundenrezensionen zu Shape Kapseln sind bisher durchweg positiv. Ehrliche Rezensionen zu diesem Stoffwechsel-Booster sind in vertrauenswürdigen Gesundheitsforen und anderen Quellen wie Reddit, Quora und Facebook verfügbar.
Auf diesen Plattformen haben zufriedene Benutzer kommentiert, dass das Nahrungsergänzungsmittel dazu beigetragen hat, die Muskelmasse zu verbessern und den Stoffwechsel anzukurbeln. Einige andere berichteten ebenfalls über einen Anstieg des Energieniveaus. Diese Kundenrezensionen von Shape Kapseln zeigen, dass diese Formel sicher und wirksam ist.
## Shape Kapseln-Pakete und Preise
Ab sofort sind Shape Kapseln im Vergleich zu anderen ähnlichen Stoffwechsel-Boostern auf dem Markt zu einem viel günstigeren Preis erhältlich. Der Hersteller hat den Preis dieser stoffwechselankurbelnden Formel für einen begrenzten Zeitraum gesenkt, damit alle Menschen davon profitieren können.
### Hier die detaillierten Preispläne:
Anfänger – 1 Packung für 49,95 € pro Packung + 4,95 € Versand
Top-Seller – 2 Packungen für 36,65 € pro Packung + kostenloser Versand
Bestseller – 3 Packungen für 39,97 € pro Packung + kostenloser Versand
Dies sind die drei verschiedenen Shape Kapseln-Pakete, die derzeit zum Kauf erhältlich sind. Sie können ein beliebiges Paket wählen und Ihre Reise zur Stoffwechselankurbelung und Gewichtsabnahme ankurbeln.
## Verfügbarkeit von Formkapseln
Derzeit können Sie die Shape-Kapseln-Formel nur über die offizielle Website erhalten. Der Hersteller hat diese Formel nicht auf Plattformen Dritter oder im Einzelhandel verfügbar gemacht.
Allerdings könnten Duplikate der Shape-Kapseln-Formel von nicht autorisierten Verkäufern über Amazon und andere Drittplattformen verkauft werden, um uninformierte Benutzer in die Falle zu locken. Diese Replikate haben ähnliche Namen, Website-Designs usw., liefern jedoch keine ähnlichen Ergebnisse. Sie könnten auch schädliche Inhaltsstoffe enthalten.
## **[Klicken Sie hier, um jetzt auf der offiziellen Website von Shape Kapseln zu kaufen](https://callednews.com/shape-kapseln-de)** |
wisenut-nlp-team/wisenut-llama-3-8B-LoRA | wisenut-nlp-team | 2024-06-28T05:13:33Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-28T05:08:39Z | Entry not found |
jddllwqa/Qwen-Qwen1.5-7B-1719551837 | jddllwqa | 2024-06-28T05:17:18Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T05:17:18Z | Entry not found |
midori1255/controlnet_marmoset | midori1255 | 2024-06-28T05:17:24Z | 0 | 0 | null | [
"license:unknown",
"region:us"
]
| null | 2024-06-28T05:17:24Z | ---
license: unknown
---
|
yraziel/pnina_rozenblum | yraziel | 2024-06-28T05:21:07Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T05:20:14Z | Entry not found |
salmanhermana/finetuned-food | salmanhermana | 2024-06-28T05:20:50Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T05:20:50Z | Entry not found |
ningrumdaud/test_lora_llama3model | ningrumdaud | 2024-06-28T05:22:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T05:22:15Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** ningrumdaud
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-en-100-percent-low-med-perplexity | AdamKasumovic | 2024-06-28T05:32:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-28T05:30:00Z | ---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Pranavv/fork-moirai | Pranavv | 2024-06-28T05:32:54Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T05:32:54Z | ---
license: apache-2.0
---
|
jddllwqa/Qwen-Qwen1.5-0.5B-1719552870 | jddllwqa | 2024-06-28T05:34:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"region:us"
]
| null | 2024-06-28T05:34:30Z | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
ynyn011/content | ynyn011 | 2024-06-28T05:35:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T05:35:42Z | Entry not found |
sryab2001/llama3-8b-cosmic-fusion-dynamics-lora | sryab2001 | 2024-06-28T05:44:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T05:43:49Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** sryab2001
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
riddhiparmar/test-model | riddhiparmar | 2024-06-28T05:49:04Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T05:44:54Z | Entry not found |
slelab/AES14 | slelab | 2024-06-28T06:11:39Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T05:47:37Z | Entry not found |
riddhiparmar/test-model2 | riddhiparmar | 2024-06-28T05:47:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T05:47:42Z | Entry not found |