modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string, 245 classes) | tags (list) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
borreplata/Test
|
borreplata
| 2024-06-26T03:20:51Z | 0 | 0 | null |
[
"license:unlicense",
"region:us"
] | null | 2024-06-26T03:19:27Z |
---
license: unlicense
---
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
```
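A minimal sketch of generating text with the model loaded above (the prompt and decoding settings are illustrative, not from the card):
```python
# Tokenize a prompt and sample a short continuation
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```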
|
HoangHa/selfies-roberta-large-silu
|
HoangHa
| 2024-06-26T03:22:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:22:15Z |
Entry not found
|
chopchopchuck/mts10
|
chopchopchuck
| 2024-06-26T03:22:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:22:35Z |
Entry not found
|
Katyc/llama-3-8b-Instruct-bnb-4bit-LoRA
|
Katyc
| 2024-06-26T03:23:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T03:23:10Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Katyc
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
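A minimal sketch of loading this adapter for inference with PEFT, assuming the repo stores LoRA adapter weights on top of the 4-bit base model (an assumption, not stated in the card):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies the adapter weights in one call
model = AutoPeftModelForCausalLM.from_pretrained("Katyc/llama-3-8b-Instruct-bnb-4bit-LoRA")
tokenizer = AutoTokenizer.from_pretrained("Katyc/llama-3-8b-Instruct-bnb-4bit-LoRA")
```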
|
charlieoneill/jsalt-data
|
charlieoneill
| 2024-06-26T03:27:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:24:54Z |
Entry not found
|
TheRealheavy/BigSmoke
|
TheRealheavy
| 2024-06-26T03:28:01Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-26T03:26:38Z |
---
license: openrail
---
|
qualcomm/Posenet-Mobilenet-Quantized
|
qualcomm
| 2024-06-26T03:30:20Z | 0 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"quantized",
"android",
"image-classification",
"dataset:coco",
"arxiv:1803.08225",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2024-06-26T03:30:14Z |
---
datasets:
- coco
library_name: pytorch
license: apache-2.0
pipeline_tag: image-classification
tags:
- quantized
- android
---

# Posenet-Mobilenet-Quantized: Optimized for Mobile Deployment
## Quantized human pose estimator
Posenet performs pose estimation on human images.
This model is an implementation of Posenet-Mobilenet-Quantized found [here](https://github.com/rwightman/posenet-pytorch).
This repository provides scripts to run Posenet-Mobilenet-Quantized on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/posenet_mobilenet_quantized).
### Model Details
- **Model Type:** Pose estimation
- **Model Stats:**
- Model checkpoint: mobilenet_v1_101
- Input resolution: 513x257
- Number of parameters: 3.31M
- Model size: 3.47 MB
| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
| ---|---|---|---|---|---|---|---|
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.591 ms | 0 - 2 MB | INT8 | NPU | [Posenet-Mobilenet-Quantized.tflite](https://huggingface.co/qualcomm/Posenet-Mobilenet-Quantized/blob/main/Posenet-Mobilenet-Quantized.tflite)
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.622 ms | 0 - 9 MB | INT8 | NPU | [Posenet-Mobilenet-Quantized.so](https://huggingface.co/qualcomm/Posenet-Mobilenet-Quantized/blob/main/Posenet-Mobilenet-Quantized.so)
## Installation
This model can be installed as a Python package via pip.
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.posenet_mobilenet_quantized.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell (instead of the command above).
```
%run -m qai_hub_models.models.posenet_mobilenet_quantized.demo
```
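Beyond the CLI demo, the model can also be loaded directly in Python. A minimal sketch, assuming the package follows the usual qai-hub-models convention of exposing a `Model.from_pretrained()` helper:
```python
from qai_hub_models.models.posenet_mobilenet_quantized import Model

# Downloads the pre-trained checkpoint and returns a ready-to-use model
model = Model.from_pretrained()
```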
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.posenet_mobilenet_quantized.export
```
```
Profile Job summary of Posenet-Mobilenet-Quantized
--------------------------------------------------
Device: Snapdragon X Elite CRD (11)
Estimated Inference Time: 0.69 ms
Estimated Peak Memory Range: 0.38-0.38 MB
Compute Units: NPU (42) | Total (42)
```
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.posenet_mobilenet_quantized.demo --on-device
```
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell (instead of the command above).
```
%run -m qai_hub_models.models.posenet_mobilenet_quantized.demo -- --on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Posenet-Mobilenet-Quantized's performance across various devices [here](https://aihub.qualcomm.com/models/posenet_mobilenet_quantized).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
- The license for the original implementation of Posenet-Mobilenet-Quantized can be found
[here](https://github.com/rwightman/posenet-pytorch/blob/master/LICENSE.txt).
- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
## References
* [PersonLab: Person Pose Estimation and Instance Segmentation with a Bottom-Up, Part-Based, Geometric Embedding Model](https://arxiv.org/abs/1803.08225)
* [Source Model Implementation](https://github.com/rwightman/posenet-pytorch)
## Community
* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
richardkelly/Qwen-Qwen1.5-1.8B-1719372662
|
richardkelly
| 2024-06-26T03:31:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:31:02Z |
Entry not found
|
habulaj/129556109667
|
habulaj
| 2024-06-26T03:31:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:31:12Z |
Entry not found
|
qualcomm/Midas-V2-Quantized
|
qualcomm
| 2024-06-26T03:31:34Z | 0 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"quantized",
"android",
"depth-estimation",
"arxiv:1907.01341",
"license:mit",
"region:us"
] |
depth-estimation
| 2024-06-26T03:31:25Z |
---
library_name: pytorch
license: mit
pipeline_tag: depth-estimation
tags:
- quantized
- android
---

# Midas-V2-Quantized: Optimized for Mobile Deployment
## Quantized Deep Convolutional Neural Network model for depth estimation
Midas is designed for estimating depth at each point in an image.
This model is an implementation of Midas-V2-Quantized found [here](https://github.com/isl-org/MiDaS).
This repository provides scripts to run Midas-V2-Quantized on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/midas_quantized).
### Model Details
- **Model Type:** Depth estimation
- **Model Stats:**
- Model checkpoint: MiDaS_small
- Input resolution: 256x256
- Number of parameters: 16.6M
- Model size: 16.6 MB
| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
| ---|---|---|---|---|---|---|---|
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.154 ms | 0 - 2 MB | INT8 | NPU | [Midas-V2-Quantized.tflite](https://huggingface.co/qualcomm/Midas-V2-Quantized/blob/main/Midas-V2-Quantized.tflite)
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.482 ms | 0 - 275 MB | INT8 | NPU | [Midas-V2-Quantized.so](https://huggingface.co/qualcomm/Midas-V2-Quantized/blob/main/Midas-V2-Quantized.so)
## Installation
This model can be installed as a Python package via pip.
```bash
pip install "qai-hub-models[midas_quantized]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.midas_quantized.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell (instead of the command above).
```
%run -m qai_hub_models.models.midas_quantized.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.midas_quantized.export
```
```
Profile Job summary of Midas-V2-Quantized
--------------------------------------------------
Device: Snapdragon X Elite CRD (11)
Estimated Inference Time: 1.52 ms
Estimated Peak Memory Range: 0.46-0.46 MB
Compute Units: NPU (148) | Total (148)
```
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.midas_quantized.demo --on-device
```
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell (instead of the command above).
```
%run -m qai_hub_models.models.midas_quantized.demo -- --on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Midas-V2-Quantized's performance across various devices [here](https://aihub.qualcomm.com/models/midas_quantized).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
- The license for the original implementation of Midas-V2-Quantized can be found
[here](https://github.com/isl-org/MiDaS/blob/master/LICENSE).
- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
## References
* [Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer](https://arxiv.org/abs/1907.01341v3)
* [Source Model Implementation](https://github.com/isl-org/MiDaS)
## Community
* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
Coolwowsocoolwow/Eric_Cartman
|
Coolwowsocoolwow
| 2024-06-26T03:44:39Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-26T03:32:02Z |
---
license: openrail
---
|
habulaj/174829150187
|
habulaj
| 2024-06-26T03:33:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:32:52Z |
Entry not found
|
johnpaulbin/llama8b-tokipona-epoch1-chat
|
johnpaulbin
| 2024-06-26T03:50:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T03:33:05Z |
---
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** johnpaulbin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Tam1032/whisper-largev3-hi
|
Tam1032
| 2024-06-26T03:33:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:33:33Z |
Entry not found
|
abinavGanesh/emty
|
abinavGanesh
| 2024-06-26T03:38:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:38:58Z |
Entry not found
|
OpilotAI/medicine-Llama3-8B-q4f16_1-Opilot
|
OpilotAI
| 2024-06-26T03:46:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:41:17Z |
Entry not found
|
PiAPI/Midjourney-API
|
PiAPI
| 2024-06-27T02:44:07Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-26T03:42:14Z |
---
license: mit
---
# Midjourney API
**Model Page:** [Midjourney API](https://piapi.ai/midjourney-api)
This model card illustrates the steps to use the Midjourney API's endpoint.
You can also check out other model cards:
- [Faceswap API](https://huggingface.co/PiAPI/Faceswap-API)
- [Suno API](https://huggingface.co/PiAPI/Suno-API)
- [Dream Machine API](https://huggingface.co/PiAPI/Dream-Machine-API)
**Model Information**
Renowned for its exceptional text-to-image generative AI capabilities, Midjourney is a preferred tool among graphic designers, photographers, and creatives aiming to explore AI-driven artistry. Despite the absence of an official API from Midjourney, PiAPI has introduced the unofficial Midjourney API, empowering developers to incorporate this cutting-edge text-to-image model into their AI applications.
## Usage Steps
Below we share the code snippets on how to use Midjourney API's upscale endpoint.
- The programming language is Python
- The origin task ID should be the task ID of the fetched imagine endpoint
**Create an upscale task ID**
```python
import http.client

conn = http.client.HTTPSConnection("api.piapi.ai")
payload = "{\n \"origin_task_id\": \"9c6796dd*********1e7dfef5203b\",\n \"index\": \"1\",\n \"webhook_endpoint\": \"\",\n \"webhook_secret\": \"\"\n}"
headers = {
    'X-API-Key': "{{x-api-key}}",  # Insert your API Key here
    'Content-Type': "application/json",
    'Accept': "application/json"
}
conn.request("POST", "/mj/v2/upscale", payload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
```
**Retrieve the task ID**
```json
{
    "code": 200,
    "data": {
        "task_id": "3be7e0b0****************d1a725da0b1d"
    },
    "message": "success"
}
```
Record the task ID returned in the response.
**Insert the upscale task ID into the fetch endpoint**
```python
import http.client

conn = http.client.HTTPSConnection("api.piapi.ai")
payload = "{\n \"task_id\": \"3be7e0b0****************d1a725da0b1d\"\n}"  # Replace the task ID with your task ID
headers = {
    'Content-Type': "application/json",
    'Accept': "application/json"
}
conn.request("POST", "/mj/v2/fetch", payload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
```
**For fetch endpoint responses** - Refer to our [documentation](https://piapi.ai/docs/midjourney-api/upscale) for more detailed information.
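A hedged sketch of polling the fetch endpoint until the task finishes. The `status` field name and its values here are assumptions; consult the documentation above for the exact response schema:
```python
import http.client
import json
import time

def wait_for_task(task_id, api_key, interval=5):
    # Repeatedly POST to the fetch endpoint until the task reaches a terminal state
    while True:
        conn = http.client.HTTPSConnection("api.piapi.ai")
        payload = json.dumps({"task_id": task_id})
        headers = {
            "X-API-Key": api_key,
            "Content-Type": "application/json",
            "Accept": "application/json",
        }
        conn.request("POST", "/mj/v2/fetch", payload, headers)
        result = json.loads(conn.getresponse().read().decode("utf-8"))
        if result.get("data", {}).get("status") in ("finished", "failed"):  # assumed status values
            return result
        time.sleep(interval)
```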
<br>
## Contact us
Contact us at <a href="mailto:[email protected]">[email protected]</a> for any inquiries.
<br>
|
neuronpedia/gemma-2b-it__res-jb
|
neuronpedia
| 2024-06-26T03:45:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:44:38Z |
Entry not found
|
ben81828/meow_text
|
ben81828
| 2024-06-26T03:46:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:46:01Z |
Entry not found
|
habulaj/334731300380
|
habulaj
| 2024-06-26T03:47:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:47:43Z |
Entry not found
|
szcjerry/smat-vit-sup21k-large
|
szcjerry
| 2024-07-02T09:14:51Z | 0 | 0 | null |
[
"license:cc-by-4.0",
"region:us"
] | null | 2024-06-26T03:51:00Z |
---
license: cc-by-4.0
---
This repo contains the SMAT meta-tuned vit-sup21-large model checkpoint for PyTorch.
### How to use
With our implementation here on [github](https://github.com/szc12153/sparse_meta_tuning), you can load the pre-trained weights by calling
```python
model.load_state_dict(torch.load("/path/to/checkpoint.pt"))
```
For inference with ProtoNet on a few-shot learning task:
```python
# outputs is a dictionary
outputs = model(x_s=x_s, # support inputs
y_s=y_s, # support labels
x_q=x_q, # query inputs
y_q=None, # predict for query labels
finetune_model=None # None for direct inference with a ProtoNet classifier
)
y_q_pred = outputs['y_q_pred']
```
For task-specific full fine-tuning followed by inference:
```python
# outputs is a dictionary
model.args.meta_learner.inner_lr.lr = lr # set the learning rate for fine-tuning
model.args.meta_learner.num_finetune_steps = num_finetune_steps # set the number of fine-tuning steps
outputs = model(x_s=x_s, # support inputs
y_s=y_s, # support labels
x_q=x_q, # query inputs
y_q=None, # predict for query labels
finetune_model="full" # {'full','lora'}
)
y_q_pred = outputs['y_q_pred']
```
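For concreteness, a hypothetical 5-way 1-shot episode with random tensors; the input shapes and image resolution are assumptions for illustration, not taken from the repo:
```python
import torch

x_s = torch.randn(5, 3, 224, 224)   # 5 support images, one per class (assumed resolution)
y_s = torch.arange(5)               # support labels 0..4
x_q = torch.randn(15, 3, 224, 224)  # 15 query images
outputs = model(x_s=x_s, y_s=y_s, x_q=x_q, y_q=None, finetune_model=None)
print(outputs['y_q_pred'].shape)
```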
You can visit our [github](https://github.com/szc12153/sparse_meta_tuning) repo for more details on training and inference!
|
szcjerry/smat-vit-dino-base
|
szcjerry
| 2024-07-02T09:09:48Z | 0 | 0 | null |
[
"license:cc-by-4.0",
"region:us"
] | null | 2024-06-26T03:51:53Z |
---
license: cc-by-4.0
---
This repo contains the SMAT meta-tuned vit-dino-base model checkpoint for PyTorch.
### How to use
With our implementation here on [github](https://github.com/szc12153/sparse_meta_tuning), you can load the pre-trained weights by calling
```python
model.load_state_dict(torch.load("/path/to/checkpoint.pt"))
```
For inference with ProtoNet on a few-shot learning task:
```python
# outputs is a dictionary
outputs = model(x_s=x_s, # support inputs
y_s=y_s, # support labels
x_q=x_q, # query inputs
y_q=None, # predict for query labels
finetune_model=None # None for direct inference with a ProtoNet classifier
)
y_q_pred = outputs['y_q_pred']
```
For task-specific full fine-tuning followed by inference:
```python
# outputs is a dictionary
model.args.meta_learner.inner_lr.lr = lr # set the learning rate for fine-tuning
model.args.meta_learner.num_finetune_steps = num_finetune_steps # set the number of fine-tuning steps
outputs = model(x_s=x_s, # support inputs
y_s=y_s, # support labels
x_q=x_q, # query inputs
y_q=None, # predict for query labels
finetune_model="full" # {'full','lora'}
)
y_q_pred = outputs['y_q_pred']
```
You can visit our [github](https://github.com/szc12153/sparse_meta_tuning) repo for more details on training and inference!
|
szcjerry/smat-vit-dino-small
|
szcjerry
| 2024-07-02T09:08:15Z | 0 | 0 | null |
[
"license:cc-by-4.0",
"region:us"
] | null | 2024-06-26T03:52:16Z |
---
license: cc-by-4.0
---
This repo contains the SMAT meta-tuned vit-dino-small model checkpoint for PyTorch.
### How to use
With our implementation here on [github](https://github.com/szc12153/sparse_meta_tuning), you can load the pre-trained weights by calling
```python
model.load_state_dict(torch.load("/path/to/checkpoint.pt"))
```
For inference with ProtoNet on a few-shot learning task:
```python
# outputs is a dictionary
outputs = model(x_s=x_s, # support inputs
y_s=y_s, # support labels
x_q=x_q, # query inputs
y_q=None, # predict for query labels
finetune_model=None # None for direct inference with a ProtoNet classifier
)
y_q_pred = outputs['y_q_pred']
```
For task-specific full fine-tuning followed by inference:
```python
# outputs is a dictionary
model.args.meta_learner.inner_lr.lr = lr # set the learning rate for fine-tuning
model.args.meta_learner.num_finetune_steps = num_finetune_steps # set the number of fine-tuning steps
outputs = model(x_s=x_s, # support inputs
y_s=y_s, # support labels
x_q=x_q, # query inputs
y_q=None, # predict for query labels
finetune_model="full" # {'full','lora'}
)
y_q_pred = outputs['y_q_pred']
```
You can visit our [github](https://github.com/szc12153/sparse_meta_tuning) repo for more details on training and inference!
|
Prisma-Multimodal/sae_tinyclip_40m_layer_6
|
Prisma-Multimodal
| 2024-06-26T03:53:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:53:59Z |
Entry not found
|
Prisma-Multimodal/sae_tinyclip_40m_layer_6_imagenet
|
Prisma-Multimodal
| 2024-06-26T03:54:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:54:43Z |
Entry not found
|
Prisma-Multimodal/sae_tinyclip_40m_imagenet_layer_6
|
Prisma-Multimodal
| 2024-06-26T03:54:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T03:54:58Z |
Entry not found
|
samsri01/slm-phi2-coversational-finetuned
|
samsri01
| 2024-06-26T03:55:00Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-26T03:55:00Z |
---
license: apache-2.0
---
|
FevenTad/V1_0.3_Base
|
FevenTad
| 2024-06-26T03:59:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T03:58:12Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
santosharron/privateGPT_ModelV1
|
santosharron
| 2024-06-26T04:03:26Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-26T04:03:26Z |
---
license: mit
---
|
Sarbanidatabricks/speecht5_tts_voxpopuli_nl
|
Sarbanidatabricks
| 2024-06-26T04:05:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T04:05:47Z |
Entry not found
|
garnard1991/JESUSLOVE
|
garnard1991
| 2024-06-26T04:10:15Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-26T04:10:15Z |
---
license: apache-2.0
---
|
Dongchao/music
|
Dongchao
| 2024-06-26T09:34:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T04:10:46Z |
Entry not found
|
DomathID/Test
|
DomathID
| 2024-06-26T04:18:10Z | 0 | 0 |
nemo
|
[
"nemo",
"code",
"en",
"dataset:nodemixaholic/text-of-the-net",
"license:mit",
"region:us"
] | null | 2024-06-26T04:15:36Z |
---
license: mit
datasets:
- nodemixaholic/text-of-the-net
language:
- en
metrics:
- character
library_name: nemo
tags:
- code
---
https://www.yukinoshita.web.id
https://www.penkata.com
|
LogCreative/Llama-3-8B-Instruct-pgfplots-finetune-q4f16_1-MLC
|
LogCreative
| 2024-06-26T10:37:48Z | 0 | 1 | null |
[
"code",
"text-generation",
"conversational",
"en",
"dataset:LogCreative/latex-pgfplots-instruct",
"base_model:unsloth/llama-3-8b-Instruct",
"license:llama3",
"region:us"
] |
text-generation
| 2024-06-26T04:18:15Z |
---
base_model: unsloth/llama-3-8b-Instruct
license: llama3
datasets:
- LogCreative/latex-pgfplots-instruct
language:
- en
metrics:
- code_eval
pipeline_tag: text-generation
tags:
- code
---
## Usage
This model is saved in the [MLC LLM](https://llm.mlc.ai) format.
View the [installation guide of MLC LLM](https://llm.mlc.ai/docs/install/mlc_llm) for how to install the library.
Then use the following command to try the model:
```bash
mlc_llm chat .
```
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
The model is finetuned from the Llama 3 LLM to produce more accurate LaTeX code for the `pgfplots` package. The finetuning is based on the dataset [LogCreative/latex-pgfplots-instruct](https://huggingface.co/datasets/LogCreative/latex-pgfplots-instruct), which is extracted from the documentation of the [`pgfplots`](https://github.com/pgf-tikz/pgfplots) LaTeX package.
- **Developed by:** [LogCreative](https://github.com/LogCreative)
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** Llama 3
- **Finetuned from model:** [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [LogCreative/llama-pgfplots-finetune](https://github.com/LogCreative/llama-pgfplots-finetune)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model is intended to generate pgfplots LaTeX code according to the user's prompt.
It is suitable for users who are not familiar with the API provided by the `pgfplots` package
or who do not want to consult the documentation to achieve their goal.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[PGFPlotsEdt](https://github.com/LogCreative/PGFPlotsEdt): A PGFPlots Statistic Graph Interactive Editor.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Any use outside the `pgfplots` package will only reflect the performance of the base Llama 3 model.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model could not provide sufficient information on other LaTeX packages and could not guarantee the absolute correctness of the generated result.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
If you cannot get the correct result from this model, you may need to consult the original `pgfplots` documentation for more information.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[LogCreative/latex-pgfplots-instruct](https://huggingface.co/datasets/LogCreative/latex-pgfplots-instruct): a dataset containing instructions and corresponding outputs related to the `pgfplots` and `pgfplotstable` LaTeX packages.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
This model is finetuned on the dataset above using the [`unsloth`](https://github.com/unslothai/unsloth) library.
#### Training Hyperparameters
- **Training regime:** bf16 mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
The evaluation is based on the success compilation rate of the output LaTeX code in the test dataset.
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[LogCreative/latex-pgfplots-instruct](https://huggingface.co/datasets/LogCreative/latex-pgfplots-instruct): the test split of this dataset contains instructions related only to the `pgfplots` package.
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
When testing, a prompt prefix is added to tell the model its role and that the requested response format is code only, without any explanation.
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
Success compilation rate:
$$\frac{\text{\#Success compilation}}{\text{\#Total compilation}}\times 100\%$$
An unsuccessful compilation is either a LaTeX failure or a timeout (compilation time > 20s).
### Results
The test is based on the unquantized model, which is in fp16 precision.
- Llama 3: 34%
- **This model: 52% (+18%)**
#### Summary
This model is expected to output the LaTeX code output related to the `pgfplots` package with less error compared to the baseline Llama 3 model.
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).
- **Hardware Type:** Nvidia A100 80G
- **Hours used:** 1h = 10min training + 50min testing
- **Cloud Provider:** Private infrastructure
- **Carbon Emitted:** 0.11kg CO2 eq.
### Framework versions
- PEFT 0.11.1
- MLC LLM nightly_cu122-0.1.dev1404
- MLC AI nightly_cu122-0.15.dev404
- Unsloth 2024.6
|
haljazara/results
|
haljazara
| 2024-06-26T04:21:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T04:21:38Z |
Entry not found
|
xxlrd/deepnegative
|
xxlrd
| 2024-06-26T04:24:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T04:24:05Z |
https://civitai.com/models/4629/deep-negative-v1x?modelVersionId=5637
|
Coolwowsocoolwow/Kyle_Schwartz
|
Coolwowsocoolwow
| 2024-06-26T04:36:07Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-26T04:30:09Z |
---
license: openrail
---
|
alexzarate/usain_bolt
|
alexzarate
| 2024-06-26T06:37:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T04:39:32Z |
Entry not found
|
migaraa/Gaudi_LoRA_Llama-2-7b-hf
|
migaraa
| 2024-06-28T18:42:15Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"ipex",
"intel",
"gaudi",
"PEFT",
"dataset:timdettmers/openassistant-guanaco",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T04:40:03Z |
---
library_name: transformers
tags:
- ipex
- intel
- gaudi
- PEFT
license: apache-2.0
datasets:
- timdettmers/openassistant-guanaco
---
# Model Card for Model ID
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the [timdettmers/openassistant-guanaco dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
## Model Details
### Model Description
This is a fine-tuned version of the [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) model using Parameter Efficient Fine Tuning (PEFT) with Low Rank Adaptation (LoRA) on the Intel Gaudi 2 AI accelerator. This model can be used for various text generation tasks including chatbots, content creation, and other NLP applications.
- **Developed by:** Migara Amarasinghe
- **Model type:** LLM
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
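A minimal sketch of loading this checkpoint for generation with PEFT, assuming the repo stores LoRA adapter weights for the Llama-2-7b base and following the guanaco dataset's prompt convention (both assumptions, not stated in the card):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies the LoRA adapter in one call
model = AutoPeftModelForCausalLM.from_pretrained("migaraa/Gaudi_LoRA_Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("migaraa/Gaudi_LoRA_Llama-2-7b-hf")

inputs = tokenizer("### Human: Hello!### Assistant:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```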
## Uses
### Direct Use
This model can be used for text generation tasks such as:
- Chatbots
- Automated content creation
- Text completion and augmentation
### Out-of-Scope Use
- Use in real-time applications where latency is critical
- Use in highly sensitive domains without thorough evaluation and testing
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## Training Details
### Training Hyperparameters
<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- Training regime: Mixed precision training using bf16
- Number of epochs: 3
- Learning rate: 1e-4
- Batch size: 16
- Seq length: 512
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Intel Gaudi AI Accelerator
- **Hours used:** < 1 hour
## Technical Specifications
### Compute Infrastructure
#### Hardware
- Intel Gaudi 2 AI Accelerator
- Intel(R) Xeon(R) Platinum 8368 CPU
#### Software
- Transformers library
- Optimum Habana library
|
ryo0611/Scaramouche
|
ryo0611
| 2024-06-26T04:45:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T04:45:12Z |
Entry not found
|
PiAPI/Faceswap-API
|
PiAPI
| 2024-06-27T02:44:40Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-26T04:45:27Z |
---
license: mit
---
# Faceswap API
**Model Page:** [Faceswap API](https://piapi.ai/faceswap-api)
This model card illustrates the steps to use the Faceswap API's endpoint.
You can also check out other model cards:
- [Midjourney API](https://huggingface.co/PiAPI/Midjourney-API)
- [Suno API](https://huggingface.co/PiAPI/Suno-API)
- [Dream Machine API](https://huggingface.co/PiAPI/Dream-Machine-API)
**Model Information**
The FaceSwap API, built on a custom AI model, allows developers to effortlessly integrate advanced face-swapping capabilities into their platforms, offering users the ability to rapidly personalize images of their choice.
## Usage Steps
Below we share the code snippets on how to use the Faceswap API's endpoint.
- The programming language is Python
- Have 2 images (Each image must only contain one visible face)
**Create a task ID from the Faceswap endpoint**
```python
import http.client

conn = http.client.HTTPSConnection("api.piapi.ai")
payload = "{\n \"target_image\": \"image1.png\",\n \"swap_image\": \"image2.png\",\n \"result_type\": \"url\"\n}"
headers = {
    'X-API-Key': "{{x-api-key}}",  # Insert your API Key here
    'Content-Type': "application/json",
    'Accept': "application/json"
}
conn.request("POST", "/api/face_swap/v1/async", payload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
```
**Retrieve the task ID**
```json
{
    "code": 200,
    "data": {
        "task_id": "7a7ba527************1974d4316e22"
    },
    "message": "success"
}
```
Record the task ID returned in the response.
**Insert the Faceswap task ID into the fetch endpoint**
```python
import http.client

conn = http.client.HTTPSConnection("api.piapi.ai")
payload = "{\n \"task_id\": \"7a7ba527************1974d4316e22\"\n}"  # Replace the task ID with your task ID
headers = {
    'X-API-Key': "{{x-api-key}}",  # Insert your API Key here
    'Content-Type': "application/json",
    'Accept': "application/json"
}
conn.request("POST", "/api/face_swap/v1/fetch", payload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
```
**For fetch endpoint responses** - Refer to our [documentation](https://piapi.ai/docs/faceswap-api/fetch) for more detailed information.
<br>
## Contact us
Contact us at <a href="mailto:[email protected]">[email protected]</a> for any inquiries.
<br>
|
Pragmir/pragmir
|
Pragmir
| 2024-06-26T04:49:50Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-26T04:49:50Z |
---
license: apache-2.0
---
|
munish0838/Phi-3-medium-4k-instruct-Matter-0.1-Slim-A-lora
|
munish0838
| 2024-06-26T04:52:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-medium-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T04:52:40Z |
---
base_model: unsloth/Phi-3-medium-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** munish0838
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-medium-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dhruvvaidh/Llama2-7b-hf-dv13911
|
dhruvvaidh
| 2024-06-26T04:55:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T04:55:12Z |
Entry not found
|
shuyuej/MedLLaMA3-70B-base-AWQ
|
shuyuej
| 2024-06-26T14:04:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-06-26T04:57:51Z |
---
license: apache-2.0
---
|
imrazack/test
|
imrazack
| 2024-06-26T04:59:07Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-26T04:59:07Z |
---
license: apache-2.0
---
|
PiAPI/Suno-API
|
PiAPI
| 2024-06-27T02:45:02Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-26T05:02:32Z |
---
license: mit
---
# Suno API
**Model Page:** [Suno API](https://piapi.ai/suno-api)
This model card illustrates the steps to use the Suno API's endpoint.
You can also check out other model cards:
- [Midjourney API](https://huggingface.co/PiAPI/Midjourney-API)
- [Faceswap API](https://huggingface.co/PiAPI/Faceswap-API)
- [Dream Machine API](https://huggingface.co/PiAPI/Dream-Machine-API)
**Model Information**
Developed by the Suno team in Cambridge, MA, Suno is a leading-edge text-to-music model. While it doesn't have an official API service, PiAPI has introduced an unofficial Suno API, allowing developers globally to integrate Suno’s music creation capabilities into their applications.
## Usage Steps
Below we share the code snippets on how to use the Suno API's "Generate Full Song" endpoint.
- The programming language is Python
- This is only applicable for Extended Clips generated from the "Extend" function of the "Generate Music" endpoint.
**Create a task ID from the "Generate Full Song" endpoint**
```python
import http.client

conn = http.client.HTTPSConnection("api.piapi.ai")
payload = "{\n \"clip_id\": \"0e764cab****************55f76ca44ed6\"\n}"
headers = {
    'X-API-Key': "{{x-api-key}}",  # Insert your API Key here
    'Content-Type': "application/json",
    'Accept': "application/json"
}
conn.request("POST", "/api/suno/v1/music/concat", payload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
```
**Retrieve the task ID**
```json
{
    "code": 200,
    "data": {
        "task_id": "5440b19a*****************e92de94d5110"
    },
    "message": "success"
}
```
Record the task ID returned in the response.
**Insert the "Generate Full Song" task ID into the fetch endpoint**
```python
import http.client

conn = http.client.HTTPSConnection("api.piapi.ai")
headers = {
    'Content-Type': "application/json",
    'Accept': "application/json"
}
# Replace "task_id" in the path with your task ID
conn.request("GET", "/api/suno/v1/music/task_id", headers=headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
```
**For fetch endpoint responses** - Refer to our [documentation](https://piapi.ai/docs/suno-api/get-music) for more detailed information.
<br>
## Contact us
Contact us at <a href="mailto:[email protected]">[email protected]</a> for any inquiries.
<br>
|
njaana/phi3-mini-new-model-with-default-lora-adapters
|
njaana
| 2024-06-26T05:05:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T05:05:11Z |
---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** njaana
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Athaz01/Agile_Coach
|
Athaz01
| 2024-06-26T05:09:29Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-26T05:09:29Z |
---
license: openrail
---
|
ZahidAhmad/lora2_model
|
ZahidAhmad
| 2024-06-26T05:10:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T05:10:01Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** ZahidAhmad
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hyokwan/hkcode_solar_10.7b_unsloth16
|
hyokwan
| 2024-06-26T05:15:23Z | 0 | 0 | null |
[
"safetensors",
"license:mit",
"region:us"
] | null | 2024-06-26T05:10:10Z |
---
license: mit
---
|
starnet/11-star21-06-26
|
starnet
| 2024-06-26T05:17:32Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | 2024-06-26T05:10:26Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
SamaahKhan/Phi-after-fine-tuning-updated
|
SamaahKhan
| 2024-06-26T05:11:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T05:11:01Z |
Entry not found
|
mrkaesy/whisper-small-hi
|
mrkaesy
| 2024-06-26T05:14:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T05:14:18Z |
Entry not found
|
Topofthenod/q-FrozenLake-v1-4x4-noSlippery
|
Topofthenod
| 2024-06-26T05:14:23Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-26T05:14:21Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.41 +/- 0.49
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the Deep RL Course uses Gymnasium

# `load_from_hub` is provided in the Deep RL Course notebooks
model = load_from_hub(repo_id="Topofthenod/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
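A short rollout sketch that follows the loaded Q-table greedily; it assumes the course's model dictionary exposes a `qtable` key and the Gymnasium step API:
```python
import numpy as np

state, _ = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```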
|
LarryAIDraw/eula_v1
|
LarryAIDraw
| 2024-06-26T05:23:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-26T05:16:03Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/518893/genshin-eula
|
PiAPI/Dream-Machine-API
|
PiAPI
| 2024-06-27T02:45:22Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-26T05:16:05Z |
---
license: mit
---
# Dream Machine API
**Model Page:** [Dream Machine API](https://piapi.ai/dream-machine-api)
This model card illustrates the steps to use the Dream Machine API's endpoint.
You can also check out other model cards:
- [Midjourney API](https://huggingface.co/PiAPI/Midjourney-API)
- [Faceswap API](https://huggingface.co/PiAPI/Faceswap-API)
- [Suno API](https://huggingface.co/PiAPI/Suno-API)
**Model Information**
Dream Machine, created by Luma Labs, is an advanced AI model that swiftly produces high-quality, realistic videos from text and images. These videos boast physical accuracy, consistent characters, and naturally impactful shots. Although Luma Labs doesn’t currently provide a Dream Machine API within their Luma API suite, PiAPI has stepped up to develop the unofficial Dream Machine API. This enables developers globally to integrate cutting-edge text-to-video and image-to-video generation into their applications or platforms.
## Usage Steps
Below we share the code snippets on how to use Dream Machine API's Video Generation endpoint.
- The programming language is Python
**Create a task ID from the Video Generation endpoint**
```python
import http.client

conn = http.client.HTTPSConnection("api.piapi.ai")
payload = "{\n \"prompt\": \"dog running\",\n \"expand_prompt\": true\n}"
headers = {
    'X-API-Key': "{{x-api-key}}",  # Insert your API Key here
    'Content-Type': "application/json",
    'Accept': "application/json"
}
conn.request("POST", "/api/luma/v1/video", payload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
```
**Retrieve the task ID**
```json
{
    "code": 200,
    "data": {
        "task_id": "6c4*****************aaaa"
    },
    "message": "success"
}
```
Record the task ID returned in the response.
**Insert the Video Generation task ID into the fetch endpoint**
```python
import http.client

conn = http.client.HTTPSConnection("api.piapi.ai")
headers = {
    'Accept': "application/json"
}
# Replace "task_id" in the path with your task ID
conn.request("GET", "/api/luma/v1/video/task_id", headers=headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
```
**For fetch endpoint responses** - Refer to our [documentation](https://piapi.ai/docs/dream-machine/get-video) for more detailed information.
<br>
## Contact us
Contact us at <a href="mailto:[email protected]">[email protected]</a> for any inquiries.
<br>
|
LarryAIDraw/RaidenShogunv3
|
LarryAIDraw
| 2024-06-26T05:24:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-26T05:16:39Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/289811/raiden-shogun-genshin-impact
|
LarryAIDraw/yoimiya_genshin
|
LarryAIDraw
| 2024-06-26T05:24:34Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-26T05:17:01Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/70263/ororgenshin-impact-yoimiya
|
LarryAIDraw/Yoimiya_mysticff_ff890
|
LarryAIDraw
| 2024-06-26T05:24:44Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-26T05:18:30Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/5979/yoimiya
|
LarryAIDraw/wrenchgixianyun
|
LarryAIDraw
| 2024-06-26T05:25:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-26T05:20:04Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/245288/xianyun-or-cloud-retainer-or-genshin-impact
|
shinben0327/q-FrozenLake-v1-4x4-noSlippery
|
shinben0327
| 2024-06-26T05:21:05Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-26T05:21:03Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="shinben0327/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
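As a quick sanity check after loading, you can run a greedy rollout. The sketch below assumes the pickled dict exposes a `qtable` key alongside `env_id` (the Deep RL Course convention) and uses the classic 4-tuple `gym` step API; adjust for gymnasium if needed.

```python
# Hypothetical greedy-rollout sketch; the "qtable" key is an assumption.
import numpy as np

env = gym.make(model["env_id"], is_slippery=False)
state = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy w.r.t. the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```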
|
Kibalama/Cartpole-v1
|
Kibalama
| 2024-06-26T05:23:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-26T05:23:08Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
starnet/12-star21-06-26-full
|
starnet
| 2024-06-26T05:30:03Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | 2024-06-26T05:24:33Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Sahil77/my-new-shiny-tokenizer
|
Sahil77
| 2024-06-26T05:27:22Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T05:27:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fokyoum9/Qwen-7B-Test
|
fokyoum9
| 2024-06-26T05:33:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T05:33:18Z |
Entry not found
|
Juliansh/chatbot
|
Juliansh
| 2024-06-26T05:33:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T05:33:32Z |
Entry not found
|
Litzy619/MIS0626T2F
|
Litzy619
| 2024-06-26T09:35:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T05:36:01Z |
Entry not found
|
Litzy619/MIS0626T1F
|
Litzy619
| 2024-06-26T10:55:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T05:36:24Z |
Entry not found
|
vivekdhir77/docRetrieve
|
vivekdhir77
| 2024-06-26T06:02:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T05:39:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thuychang404/ptit-job-recommendation
|
thuychang404
| 2024-06-26T06:44:42Z | 0 | 0 |
sklearn
|
[
"sklearn",
"recommend",
"recommendation system",
"feature-extraction",
"en",
"dataset:thuychang404/job-recommendation-system",
"license:wtfpl",
"region:us"
] |
feature-extraction
| 2024-06-26T05:40:23Z |
---
license: wtfpl
language:
- en
metrics:
- accuracy
library_name: sklearn
pipeline_tag: feature-extraction
tags:
- recommend
- recommendation system
datasets:
- thuychang404/job-recommendation-system
---
|
lionking927/s9-0626-01
|
lionking927
| 2024-06-26T05:43:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T05:43:16Z |
Entry not found
|
metta-ai/baseline.v0.5.5
|
metta-ai
| 2024-06-26T05:45:44Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
] |
reinforcement-learning
| 2024-06-26T05:44:42Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
---
An **APPO** model trained on the **GDY-MettaGrid** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r metta-ai/baseline.v0.5.5
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=GDY-MettaGrid --train_dir=./train_dir --experiment=baseline.v0.5.5
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=GDY-MettaGrid --train_dir=./train_dir --experiment=baseline.v0.5.5 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
shinben0327/Taxi-v3
|
shinben0327
| 2024-06-26T05:51:14Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-26T05:51:11Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="shinben0327/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
wl-tookitaki/test
|
wl-tookitaki
| 2024-06-26T05:52:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T05:52:24Z |
Entry not found
|
loooooong/StableGarment_tryon
|
loooooong
| 2024-06-28T08:39:48Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-06-26T05:53:49Z |
---
license: cc-by-nc-sa-4.0
---
This repository contains the ControlNet and garment encoder for the try-on task; refer to [StableGarment](https://github.com/logn-2024/StableGarment) for details.
|
Topofthenod/q-Taxi-v3-unedited
|
Topofthenod
| 2024-06-26T05:55:02Z | 0 | 0 | null |
[
"FrozenLake-v1",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-26T05:55:00Z |
---
tags:
- FrozenLake-v1
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-unedited
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1
type: FrozenLake-v1
metrics:
- type: mean_reward
value: 8.18 +/- 2.50
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="Topofthenod/q-Taxi-v3-unedited", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
TenzinGayche/bo-en_tokenizer_v1_32k
|
TenzinGayche
| 2024-06-26T05:58:56Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T05:58:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
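In the absence of an official snippet, a generic loading sketch, assuming this repo hosts a standard Hugging Face tokenizer (as the repo name `bo-en_tokenizer_v1_32k` suggests), might look like this:

```python
# Sketch only: assumes a standard Hugging Face tokenizer is stored in this repo.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TenzinGayche/bo-en_tokenizer_v1_32k")
print(tokenizer.tokenize("Hello, world!"))
```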
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Coolwowsocoolwow/Jimmy_Valmer
|
Coolwowsocoolwow
| 2024-06-26T06:03:56Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-26T05:59:14Z |
---
license: openrail
---
|
YeBhoneLin10/Mandalay_lora
|
YeBhoneLin10
| 2024-06-26T05:59:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-06-26T05:59:46Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- dora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
instance_prompt: a photo of Mandalay
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - YeBhoneLin10/Mandalay_lora
<Gallery />
## Model description
These are YeBhoneLin10/Mandalay_lora LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of Mandalay` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/YeBhoneLin10/Mandalay_lora/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
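Until the author adds an official snippet, a generic diffusers sketch for loading these LoRA weights (assuming the standard SDXL + `load_lora_weights` flow; not verified against this checkpoint) might look like:

```python
# Sketch only: assumes the standard diffusers SDXL LoRA loading flow.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("YeBhoneLin10/Mandalay_lora")
image = pipe("a photo of Mandalay").images[0]
image.save("mandalay.png")
```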
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Topofthenod/q-Taxi-v3-new
|
Topofthenod
| 2024-06-26T06:01:26Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-26T06:01:23Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-new
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="Topofthenod/q-Taxi-v3-new", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
leeloolee/gwen
|
leeloolee
| 2024-06-26T06:02:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T06:02:46Z |
Entry not found
|
ShaikAbdul/docreader
|
ShaikAbdul
| 2024-06-26T06:08:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T06:08:34Z |
Entry not found
|
jayoohwang/qlora_test
|
jayoohwang
| 2024-06-26T08:03:40Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-06-26T06:11:32Z |
Entry not found
|
Alirezashafiei/Lisen3
|
Alirezashafiei
| 2024-06-26T06:50:57Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-26T06:11:45Z |
---
license: openrail
---
|
v0dkapapi/FTM-Data-For-LLM
|
v0dkapapi
| 2024-06-26T06:53:51Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2024-06-26T06:11:46Z |
---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- generated_from_trainer
model-index:
- name: FTM-Data-For-LLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FTM-Data-For-LLM
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 100
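As a rough illustration, these hyperparameters would map onto 🤗 `TrainingArguments` roughly as follows (a sketch only; the card does not include the actual training script, and `output_dir` is illustrative):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is illustrative.
args = TrainingArguments(
    output_dir="FTM-Data-For-LLM",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # 8 x 8 = 64 total train batch size
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=100,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```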
### Training results
### Framework versions
- Transformers 4.32.0
- Pytorch 2.3.0+cu121
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Bajiyo/trying-lm-with-bert
|
Bajiyo
| 2024-06-27T04:23:50Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T06:11:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Snapkriz/finetuned_deepseek_evolIinstruct_snaplogicdocs
|
Snapkriz
| 2024-06-26T06:12:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T06:12:03Z |
Entry not found
|
Topofthenod/q-Taxi-v3.1
|
Topofthenod
| 2024-06-26T06:14:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T06:14:16Z |
Entry not found
|
Sunbread/isekai-rolename-vae
|
Sunbread
| 2024-07-01T06:28:15Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-26T06:17:07Z |
---
license: mit
---
|
Topofthenod/q-Taxi-v3.2
|
Topofthenod
| 2024-06-26T06:17:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T06:17:38Z |
Entry not found
|
oljike/llama3-8b-aqlm-codingft
|
oljike
| 2024-06-26T06:23:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T06:23:51Z |
Entry not found
|
PRATIKDE/llama-3-8b-chat-doctor
|
PRATIKDE
| 2024-06-26T06:26:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T06:26:05Z |
Entry not found
|
julientfai/InstructLM-500M-q4f16_1-Opilot
|
julientfai
| 2024-06-26T06:26:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T06:26:12Z |
Entry not found
|
PRATIKDE/AIMO-NEO-X1-G7BIT
|
PRATIKDE
| 2024-06-26T06:26:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T06:26:26Z |
Entry not found
|
iamnguyen/Qwen2-1.5B-ORPO
|
iamnguyen
| 2024-06-26T08:59:31Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-06-26T06:36:04Z |
Entry not found
|
Winmodel/lora_gemma2b-it
|
Winmodel
| 2024-06-26T06:36:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T06:36:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Malaiarasu/qa_pair
|
Malaiarasu
| 2024-06-26T06:37:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T06:37:22Z |
Entry not found
|
ILKT/2024-06-24_22-31-28_epoch_75
|
ILKT
| 2024-06-28T14:26:56Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-26T06:43:41Z |
---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
VKapseln475/SlimGummies586
|
VKapseln475
| 2024-06-26T06:50:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T06:47:42Z |
# Slim Gummies France Experiences - Slim Gummies Customer Reviews, Benefits, Price, Where to Buy
Slim Gummies France Experiences These natural, clinically proven gummies are designed to help people lose weight and become slim. For those who prefer to take supplements, soft capsules containing the formula's natural ingredients are available. It is an oral fat-burning capsule that also prevents your body from storing fat.
## **[Click here to buy now from the official Slim Gummies website](https://justbuydm.online/slim-gummies-fr)**
## The Slimming Gummies transformation
"Slimming Gummies" is a well-researched starter formula that uses natural ingredients to induce ketosis. It delivers powerful fat-burning results with an extraordinary blend. This top weight-loss product, used by professionals, has the potential to keep many illnesses at bay. It is not just a fitness program but a wellness option that gives you the power of beta-hydroxybutyrate ketones for slimmer results. The strawberry- and apple-enriched formula contains natural stevia for added sweetness. No added sugar, just plant extracts for rapid fat burning and powerful results. The standalone formula lets you improve your body shape while building more muscle mass. Its healing effect is very beneficial for liver health. It supports a healthy metabolism so you can actually burn fat and avoid overeating. Don't let your body accumulate calories; take advantage of this special option.
The ingredients in Slimming Gummies are fully effective and clearly labeled for results. The product contains natural concentrates and extracts, which means users can take it without any risk or worry. A monthly supply of the gummies consists of one pack of 30 capsules. You should take them regularly, once in the morning and once in the evening, to stay hydrated. Combine them with routine exercise for better results and a healthy combination.
## What are the specific benefits of choosing slimming gummies?
The benefits of choosing slimming gummies are numerous. The therapy delivers reliable, safe, and highly visible results. The fat-burning formula puts the body into an active state. It can help you reach your weight-loss goals with more energy and peace of mind. Here are some benefits of choosing the best weight-loss formula:
### Convenient to consume
Consuming slimming gummies is extremely simple because there are no complicated rules to follow. Simply take one gummy at a time to get the right nutrients. Do this twice a day. It supports effortless strength training and faster fat burning.
### Safe and risk-free
Slimming Gummies is completely risk-free because it comes with a 100% money-back guarantee. Any user unsatisfied with the formula can request a refund on the manufacturer's website.
### Better mental clarity
When you get rid of excess toxic fat and unwanted elements, better mental function follows naturally. Enjoy optimal energy levels and improved focus thanks to the high-quality weight-loss formula. It is truly nourishing for the entire body from top to bottom.
### Improved health
Slimming gummies promote better health, with triglyceride levels that maintain good cardiovascular function. The high-quality gummies support the transition process and ensure users feel comfortable while losing weight.
## Precautions and limitations of slimming gummies
Slimming gummies are extremely effective for weight loss. You should note the following:
- Suitable for everyone, especially those who suffer from serious illnesses and struggle to lose weight.
- Not recommended for pregnant or breastfeeding women for any reason.
- It is very important to stay consistent with your routine while taking them. Do not leave any gaps and do not consume alternative options.
## **[Click here to buy now from the official Slim Gummies website](https://justbuydm.online/slim-gummies-fr)**
|