modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
shtapm/phi_3 | shtapm | 2024-06-29T06:19:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T06:19:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WaleedAIking/travel_model | WaleedAIking | 2024-06-29T06:20:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T06:20:25Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** WaleedAIking
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
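A minimal loading sketch (assumptions: the repo holds merged weights rather than only LoRA adapters, and the example prompt is arbitrary):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes merged weights were pushed here; if the repo holds only LoRA adapters,
# attach them to unsloth/llama-3-8b-bnb-4bit with peft.PeftModel instead.
tokenizer = AutoTokenizer.from_pretrained("WaleedAIking/travel_model")
model = AutoModelForCausalLM.from_pretrained("WaleedAIking/travel_model", device_map="auto")

prompt = "Suggest a three-day travel itinerary for Istanbul."  # arbitrary example prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```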
|
noctuashap/ZhiXin_model | noctuashap | 2024-06-29T07:17:47Z | 0 | 0 | null | [
"safetensors",
"license:mit",
"region:us"
]
| null | 2024-06-29T06:27:01Z | ---
license: mit
---
# ZhiXin_model
<!-- Provide a quick summary of what the model is/does. -->
This model contains the fine-tuned LLM part of the ZhiXin model. For the fine-tuned embeddings, see `ZhiXin_embedding`.
ZhiXin, introduced in the paper *ZhiXin – A RAG-based Virtual Assistant for Persons with Intellectual and Developmental Disabilities*, is a Retrieval-Augmented Generation (RAG) language model system specifically tailored to support parents of children with intellectual and developmental disabilities. The core component of ZhiXin is a fine-tuned language model based on `Qwen2-7b-Instruct`. Leveraging Supervised Fine-Tuning (SFT), this specialized LLM has been trained to understand and address the unique challenges faced by parents and caregivers, providing relevant, empathetic, and reliable guidance.
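As a rough sketch of how this LLM part slots into the RAG loop (the retrieval step built on `ZhiXin_embedding` is stubbed out, and the chat-template usage is an assumption inherited from the Qwen2 base):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("noctuashap/ZhiXin_model")
model = AutoModelForCausalLM.from_pretrained("noctuashap/ZhiXin_model", device_map="auto")

# The retriever built on ZhiXin_embedding is stubbed out here.
retrieved = "(top-k passages returned by the embedding-based retriever)"
messages = [
    {"role": "user", "content": f"Context:\n{retrieved}\n\nQuestion: How can I build a daily routine for my child?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```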
|
Niggendar/sweetBapsMFCGMixPDXL_sweetBapsMFCGV01 | Niggendar | 2024-06-29T06:35:37Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-06-29T06:29:41Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Maks545curve/whisper-small-new-ru-pl-bg-a | Maks545curve | 2024-07-02T07:06:38Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"bg",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:Maks545curve/whisper-small-new-ru-pl-a",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-06-29T06:36:40Z | ---
language:
- bg
license: apache-2.0
base_model: Maks545curve/whisper-small-new-ru-pl-a
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Small new-ru-pl-bg AIIA
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17
type: mozilla-foundation/common_voice_17_0
config: bg
split: test
args: bg
metrics:
- name: Wer
type: wer
value: 24.08347245409015
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small new-ru-pl-bg AIIA
This model is a fine-tuned version of [Maks545curve/whisper-small-new-ru-pl-a](https://huggingface.co/Maks545curve/whisper-small-new-ru-pl-a) on the Common Voice 17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4125
- Wer Ortho: 30.7445
- Wer: 24.0835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 70
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.3229 | 2.1008 | 500 | 0.3644 | 29.8090 | 23.6728 |
| 0.0447 | 4.2017 | 1000 | 0.4125 | 30.7445 | 24.0835 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cpu
- Datasets 2.20.0
- Tokenizers 0.19.1
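A quick way to try the checkpoint is the 🤗 `pipeline` API (a minimal sketch; `sample_bg.wav` is a placeholder path):
```python
from transformers import pipeline

# "sample_bg.wav" is a placeholder; point this at any Bulgarian audio file.
asr = pipeline("automatic-speech-recognition", model="Maks545curve/whisper-small-new-ru-pl-bg-a")
print(asr("sample_bg.wav")["text"])
```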
|
shtapm/phi3_japanese | shtapm | 2024-06-29T06:40:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T06:40:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
geraldabrhm/llama-3-8b-seqclass-antonym-lr6_5-batch16-lora32 | geraldabrhm | 2024-06-29T11:19:47Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-29T06:42:56Z | Entry not found |
alidenewade/q-FrozenLake-v1-4x4-noSlippery | alidenewade | 2024-06-29T06:43:46Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-29T06:43:44Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="alidenewade/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
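A short greedy rollout with the downloaded table might look like this (a sketch assuming the course's pickled dict layout, i.e. keys `qtable` and `env_id`, and the `gymnasium` API):
```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```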
|
Hamed7immortal/Udemy_CST | Hamed7immortal | 2024-06-29T12:22:40Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T06:45:52Z | ---
license: openrail
---
|
alidenewade/Taxi-v3 | alidenewade | 2024-06-29T06:47:00Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-29T06:46:58Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.79
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="alidenewade/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
nthuy652/stable-diffusion-text2image | nthuy652 | 2024-06-29T06:49:38Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T06:49:38Z | Entry not found |
natsaliya/123 | natsaliya | 2024-07-01T18:11:28Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T06:58:07Z | Entry not found |
khanhnn55/naschainv6 | khanhnn55 | 2024-07-01T13:35:50Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T07:01:25Z | Entry not found |
ryannn2/ppo-lunarlander | ryannn2 | 2024-06-29T07:04:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-29T07:03:32Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.92 +/- 52.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the course naming convention; check the repo's "Files and versions" tab):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; verify it against the files actually in this repo.
checkpoint = load_from_hub(repo_id="ryannn2/ppo-lunarlander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
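To sanity-check the reported score, the loaded policy can be evaluated with Stable-Baselines3's helper (a sketch; requires `gymnasium[box2d]`):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```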
|
strwbrylily/BAE-of-NMIXX-by-strwbrylily | strwbrylily | 2024-06-29T07:05:49Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T07:03:43Z | ---
license: openrail
---
|
khanhnn55/naschainv5 | khanhnn55 | 2024-07-03T00:53:41Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T07:08:39Z | Entry not found |
strwbrylily/Jiwoo-of-NMIXX-by-strwbrylily | strwbrylily | 2024-06-29T07:15:24Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T07:15:00Z | ---
license: openrail
---
|
strwbrylily/Kyujin-of-NMIXX-by-strwbrylily | strwbrylily | 2024-06-29T07:16:09Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T07:15:47Z | ---
license: openrail
---
|
strwbrylily/NMIXX-Collection-by-strwbrylily | strwbrylily | 2024-06-29T07:21:16Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T07:17:40Z | ---
license: openrail
---
|
Pella2hum/Vaggie | Pella2hum | 2024-06-29T07:21:12Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T07:20:43Z | Entry not found |
dnekky/kostyat | dnekky | 2024-06-29T07:28:46Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T07:22:17Z | Entry not found |
kishorea/Llama3_finetune_med | kishorea | 2024-06-29T15:53:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T07:24:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
themellion/wall_street_articles | themellion | 2024-07-01T20:02:07Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T07:35:10Z | Entry not found |
ShapeKapseln33/ManhoodPlus545 | ShapeKapseln33 | 2024-06-29T07:36:56Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T07:35:25Z | Manhood Plus Gummies Reviews Manhood Plus Gummies est censé être un complément alimentaire exceptionnellement compris qui combat le stress, développe davantage les niveaux d'énergie et intensifie votre exposition sexuelle en général. Le fabricant affirme qu'il contient une gamme complète d'huiles de CBD pour redynamiser votre virilité énergétique, vous permettant d'apprécier le sexe et de combler votre partenaire. Comme l'indique le site de l'autorité, Manhood Plus Gummies contient des concentrés de plantes normaux et aucun THC ne suit. C’est idéal pour les hommes de tout âge et promet de lutter contre les relations sexuelles malheureuses depuis la racine. Consommer deux bonbons chaque jour augmente la puissance sexuelle, la pression de combat et les niveaux de poussée. L’amélioration est protégée et peu susceptible de donner aux clients des effets secondaires.
**[Cliquez ici pour acheter maintenant sur le site officiel de Manhood Plus Gummies](https://slim-gummies-deutschland.de/manhood-plus-fr)**
L’augmentation de la dysfonction érectile peut être attribuée à plusieurs facteurs. Cependant, des découvertes récentes suggèrent que l’âge ou les habitudes de vie n’ont rien à voir avec cela.
Cela semble assez choquant, non ? Mais c’est ce que prétendent les fabricants de Manhood Plus Gummies. Choquant le monde, les fabricants citent des études qui affirment que la cause profonde de la dysfonction érectile ou de l’incapacité des hommes à avoir des érections dures est un déséquilibre des neurotransmetteurs. C'est pourquoi ils ont créé Manhood Plus Gummies.
Manhood Plus Gummies est un complément alimentaire conçu pour améliorer la santé sexuelle masculine et inverser la dysfonction érectile. Composée d'ingrédients naturels courants et rares, la formule de santé sexuelle masculine offre une approche holistique et naturelle pour reprendre le contrôle de votre santé sexuelle.
Une telle nouvelle approche pour traiter la dysfonction érectile n’a jamais été entendue auparavant, faisant de Manhood Plus Gummies un rêve. Lisons donc cet article pour en savoir plus sur ce complément de santé masculine et déterminer sa légitimité. Avant de découvrir la vérité, vérifions son résumé.
##Qu'est-ce que les gommes d'amélioration masculine Manhood Plus ?
Les gommes Manhood Plus Male Enhancement sont formulées pour renforcer le mécanisme d’érection naturel de votre corps. Il aborde les utilisations de la dysfonction érectile, garantissant des érections améliorées et une libido améliorée. La formulation entièrement naturelle du supplément favorise une augmentation de l’oxyde nitrique dans le corps, facilitant ainsi une meilleure circulation sanguine vers le pénis, entraînant des érections plus substantielles et plus satisfaisantes. Le mélange concerne l’amélioration de la taille, la santé et les performances sexuelles globales.
**[Cliquez ici pour acheter maintenant sur le site officiel de Manhood Plus Gummies](https://slim-gummies-deutschland.de/manhood-plus-fr)**
Les ingrédients agissent en synergie, ciblant divers aspects de la santé sexuelle masculine, de la croissance des tissus à l'endurance. Il favorise la maturation des spermatozoïdes et assure leur mobilité. En augmentant le volume de sperme, non seulement le plaisir de l’orgasme s’intensifie, mais cela augmente également le potentiel de fertilité. L'augmentation du volume garantit un nombre plus élevé de spermatozoïdes, améliorant ainsi les chances de réussite de la conception.
##Comment fonctionnent les gommes Manhood Plus Male Enhancement ?
Manhood Plus Male Enhancement Gummies agit principalement en modifiant les deux hormones, le cortisol et la testostérone. Les ingrédients du supplément jouent un rôle crucial dans la réduction des niveaux de cortisol, responsables du stress et de l’anxiété qui ont un impact négatif sur les performances sexuelles masculines.
De plus, le booster de testostérone Manhood Plus Male Enhancement Gummies aide également à la production de testostérone et d’autres hormones saines qui améliorent l’humeur. La testostérone est également connue comme hormone sexuelle. Des études suggèrent que l’énorme production de testostérone augmente le désir sexuel des hommes.
Lorsque les niveaux de stress sont réduits et que la pulsion sexuelle s’intensifie, le cerveau se concentre sur l’obtention d’érections. L'effet plus léger sur le cerveau et les nerfs permet également d'améliorer les performances sexuelles. De plus, des niveaux d’énergie accrus aident à recharger et à amplifier l’endurance, vous permettant ainsi de réaliser des performances sans précédent.
##Ingrédients des gommes à mâcher Manhood Plus Male Enhancement et leur science
Les gommes Manhood Plus Male Enhancement contiennent un mélange de six ingrédients naturels et composés à base de plantes largement étudiés, réputés pour leur rôle historique dans le renforcement du désir et de la performance sexuelle.
Au-delà de leur impact sur la santé pelvienne masculine, ces ingrédients jouent un rôle multiforme dans la promotion du bien-être général. En favorisant une circulation sanguine saine, en soutenant la fonction immunitaire, en régulant la glycémie, en favorisant la perte de poids et en contribuant à la vitalité globale, ces composants offrent une approche globale de l'amélioration de la vitalité masculine.
**[Cliquez ici pour acheter maintenant sur le site officiel de Manhood Plus Gummies](https://slim-gummies-deutschland.de/manhood-plus-fr)**
|
itay-nakash/model_73a455d87c_sweep_morning-oath-979 | itay-nakash | 2024-06-29T07:39:54Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T07:39:54Z | Entry not found |
Boostaro155/Manhood46565 | Boostaro155 | 2024-06-29T07:41:44Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T07:41:08Z | # Manhood Plus Gummies Reviews Uk Price - Manhood Plus Experiences Dose & Works, Buy
Manhood Plus Gummies UK Reviews: Manhood Plus Gummies UK is purportedly a specially formulated dietary supplement that fights stress, improves energy levels, and amplifies your overall sexual performance. The maker claims it has full-spectrum CBD oil to recharge your youthful virility, empowering you to enjoy sex and satisfy your partner. According to the official site, Manhood Plus Gummies UK contains natural plant extracts and zero THC traces. It is suitable for men of all ages and promises to fight poor sexual performance from the root. Consuming two gummies daily can increase sexual power, fight stress, and boost drive levels. The supplement is safe and unlikely to give users any side effects.
## **[Click Here To Buy Now From Official Website Of Manhood Plus Gummies ](https://adtocart.xyz/manhood-plus)**
## Manhood Plus Gummies Benefits
According to ManUp Creator, the nine natural ingredients have been added to the formula to create a more refined approach to synergistically boosting sexual health.
Subsequently, Manhood Plus Gummies delivers multiple physical and sexual health benefits to users. Manhood Plus Gummies reviews also substantiate its effects and claim that this male sexual health formula can benefit men in more than one way.
## Let’s look at some of its popular health benefits:
Reverses Erectile Dysfunction: The revolutionary male sexual health supplement has been designed to reverse the symptoms of erectile dysfunction. It doesn’t work as a bandaid, covering the symptoms. Rather, Manhood Plus Gummies works to address the root cause and increase erection quality.
Improves Blood Circulation: Another benefit of the Manhood Plus Gummies supplement is improved blood circulation. The male sexual health formula has natural ingredients that improve blood flow to penile tissues, supporting healthy circulation. With this supplement, you can achieve firm and strong erections that stay hard for a long time.
Boost Libido & Sex Drive: It’s easier to perform in bed when you want to have sex. ManUp Male Enhancement can boost libido – something that tends to drop significantly with age. The older you get, the less sex drive you have. Fortunately, you can boost your libido at any age, increasing your desire to have sex. Many men continue to have the best sex of their lives at any age because of natural supplements – including the ingredients in ManUp Male Enhancement.
Enhances Energy Levels: The Manhood Plus Gummies male health supplement may also boost your energy levels. This result may become noticeable within the first week of use. Loaded with natural ingredients, the formula boosts your energy levels, sexual drive, and stamina to give you a masculine aura.
Reduces Stress And Anxiety: Stress and anxiety can also prevent you from achieving a healthy erection. That’s why this male health supplement has been created to reduce stress and anxiety. The formula supports your mood and ensures to optimize your sexual performance.
Improves Male Fertility: Men who are struggling with fertility issues can equally benefit from this male health supplement. Manhood Plus Gummies contains some natural ingredients that have been clinically proven to improve semen quality and sperm count. Thus, the Manhood Plus Gummies male health supplement can improve male fertility.
Support Overall Sexual Health: Many of the ingredients in ManUp Male Enhancement support overall sexual health in other ways. They might support healthy inflammation around your reproductive system, for example, to help with sexual dysfunction.
## **[Click Here To Buy Now From Official Website Of Manhood Plus Gummies ](https://adtocart.xyz/manhood-plus)** |
sert121/defog-orpo-model-8B-v2 | sert121 | 2024-06-29T07:45:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T07:44:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
refrain-wbh/emnlp-hh-rlhf | refrain-wbh | 2024-06-29T07:56:24Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T07:45:47Z | Entry not found |
anhtuansh/vistral-7b-chat-administrative-GGUF | anhtuansh | 2024-06-29T09:22:35Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T07:46:48Z | Entry not found |
szajean/thuy | szajean | 2024-06-29T07:53:03Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T07:52:24Z | ---
license: openrail
---
|
Bob228483/Shepherd | Bob228483 | 2024-06-29T07:59:35Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2024-06-29T07:59:35Z | ---
license: mit
---
|
Terrik/fabipradxl_DIR | Terrik | 2024-06-29T08:20:07Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2024-06-29T08:06:51Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
instance_prompt: fabipradxl
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Terrik/fabipradxl_DIR
<Gallery />
## Model description
These are Terrik/fabipradxl_DIR LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use fabipradxl to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/Terrik/fabipradxl_DIR/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
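Until an official snippet is added, a hedged sketch of the usual diffusers LoRA flow (assumes a CUDA GPU; the output filename is arbitrary):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Optionally swap in madebyollin/sdxl-vae-fp16-fix (the VAE used during training).
pipe.load_lora_weights("Terrik/fabipradxl_DIR")
image = pipe("a photo of fabipradxl").images[0]  # prompt uses the trigger word
image.save("fabipradxl.png")
```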
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
iamnguyen/vermithor | iamnguyen | 2024-06-29T08:09:58Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T08:09:58Z | Entry not found |
TagaDice/test | TagaDice | 2024-06-29T08:11:54Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-29T08:11:54Z | ---
license: apache-2.0
---
|
Resulsefil/Rainy | Resulsefil | 2024-06-29T08:11:57Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T08:11:57Z | ---
license: openrail
---
|
Maianhami/Mai | Maianhami | 2024-06-29T08:16:34Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T08:16:34Z | Entry not found |
yongjinchoi/sdxl-webtoon-model | yongjinchoi | 2024-07-02T14:54:04Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-06-29T08:17:50Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers-training
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - yongjinchoi/sdxl-webtoon-model
This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on the **None** dataset. Below are some example images generated with the finetuned pipeline using the prompt: a man with sad expression, wearing red shirt, waiting for taxi, side view:




Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
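Until an official snippet is added, a hedged sketch of loading the full fine-tuned pipeline (assumes a CUDA GPU; prompt taken from the example above):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "yongjinchoi/sdxl-webtoon-model", torch_dtype=torch.float16
).to("cuda")
image = pipe("a man with sad expression, wearing red shirt, waiting for taxi, side view").images[0]
image.save("webtoon_sample.png")
```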
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
arunnavai/irene | arunnavai | 2024-06-29T09:04:30Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T08:27:28Z | Entry not found |
LynnKukunda/detr_finetunned_air | LynnKukunda | 2024-07-02T10:26:12Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"conditional_detr",
"object-detection",
"endpoints_compatible",
"region:us"
]
| object-detection | 2024-06-29T08:27:30Z | Entry not found |
ryannn2/test001 | ryannn2 | 2024-06-29T08:29:28Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T08:29:28Z | ---
license: openrail
---
|
itay-nakash/model_73a455d87c_sweep_sleek-dragon-980 | itay-nakash | 2024-06-29T08:33:16Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T08:33:16Z | Entry not found |
CarlosPov/Llama-3-8B-finetune_90_10_EX | CarlosPov | 2024-06-29T08:35:41Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T08:35:41Z | Entry not found |
jssaluja/wav2vec2-large-mms-1b-all-train-dataset-rajinder_singh-gpus_2_epochs-2-test-datasets-10 | jssaluja | 2024-06-29T08:36:52Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T08:36:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Durga-Kondaveeti/Exam-model | Durga-Kondaveeti | 2024-06-29T08:41:22Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T08:40:20Z | ---
license: mit
---
# Example model data sample
|
yooni825/sd-class-butterflies-64 | yooni825 | 2024-06-29T08:43:07Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T08:43:07Z | Entry not found |
nameees/nameees | nameees | 2024-06-29T08:48:11Z | 0 | 0 | flair | [
"flair",
"code",
"text-classification",
"dataset:HuggingFaceFW/fineweb-edu",
"arxiv:1910.09700",
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| text-classification | 2024-06-29T08:45:02Z | ---
license: bigscience-bloom-rail-1.0
datasets:
- HuggingFaceFW/fineweb-edu
metrics:
- chrf
library_name: flair
pipeline_tag: text-classification
tags:
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Vaibhavk808/marian-finetuned-kde4-en-to-fr | Vaibhavk808 | 2024-06-29T08:46:26Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T08:46:26Z | Entry not found |
Heruss/mskgrl | Heruss | 2024-06-29T08:49:05Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T08:47:37Z | Entry not found |
Jeongwuk/rare-puppers | Jeongwuk | 2024-06-29T08:49:59Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T08:49:59Z | Entry not found |
Rezapar/Khan3 | Rezapar | 2024-06-29T15:10:18Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T08:50:28Z | ---
license: openrail
---
|
Haary/USK_Mistral_7B_Unsloth_adapter | Haary | 2024-06-29T09:16:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"id",
"dataset:Haary/QA_USK_dataset",
"base_model:Ichsan2895/Merak-7B-v4",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T08:52:38Z | ---
base_model: Ichsan2895/Merak-7B-v4
language:
- en
- id
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
datasets:
- Haary/QA_USK_dataset
---
# Uploaded Adapter Model
- **Developed by:** Haary
- **License:** apache-2.0
- **Finetuned from Indonesian model:** [Ichsan2895/Merak-7B-v4](https://huggingface.co/Ichsan2895/Merak-7B-v4)
- **Base model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
This Mistral 7B Model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
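A minimal, hypothetical loading sketch with `peft` (not an official snippet from the author; repository names as listed above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Ichsan2895/Merak-7B-v4", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Ichsan2895/Merak-7B-v4")
model = PeftModel.from_pretrained(base, "Haary/USK_Mistral_7B_Unsloth_adapter")  # attach the LoRA adapter
```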
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
mubashir508/Babar_Azam | mubashir508 | 2024-07-01T05:51:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T08:54:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GPTersHub/Jeje | GPTersHub | 2024-06-29T09:06:03Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-29T09:06:03Z | ---
license: apache-2.0
---
|
TIEMING/tts | TIEMING | 2024-06-29T09:28:30Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T09:17:03Z | Entry not found |
imagepipeline/preg | imagepipeline | 2024-06-29T09:17:23Z | 0 | 0 | null | [
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-06-29T09:17:20Z | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## preg
<img src="https://via.placeholder.com/468x300?text=App+Screenshot+Here" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - preg
[](https://imagepipeline.io/models/preg?id=1f92dd30-b8f2-42a5-a875-1d93232d4efa/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php` `javascript` `node` etc ? Checkout our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "sd1.5",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "1f92dd30-b8f2-42a5-a875-1d93232d4efa",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready to use `MODELS` like this for `SD 1.5` and `SDXL` :
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
imagepipeline/foot | imagepipeline | 2024-06-29T09:22:21Z | 0 | 0 | null | [
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-06-29T09:22:18Z | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## foot
<img src="https://via.placeholder.com/468x300?text=App+Screenshot+Here" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - foot
[](https://imagepipeline.io/models/foot?id=4faf1b9b-d936-4612-954c-3a61564bde46/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php` `javascript` `node` etc ? Checkout our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "sd1.5",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "4faf1b9b-d936-4612-954c-3a61564bde46",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready to use `MODELS` like this for `SD 1.5` and `SDXL` :
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
Richiio/my-model-repo | Richiio | 2024-06-29T09:31:30Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T09:23:43Z | Entry not found |
KhanLee0930/YOUR_REPO_ID | KhanLee0930 | 2024-06-29T09:23:44Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T09:23:44Z | Entry not found |
KhanLee0930/ppo-1 | KhanLee0930 | 2024-06-29T09:26:40Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-29T09:26:35Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -111.23 +/- 50.95
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'KhanLee0930/ppo-1'
'batch_size': 512
'minibatch_size': 128}
```
|
leepokai/uncensored-zh | leepokai | 2024-06-29T09:27:35Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-29T09:26:57Z | Entry not found |
AndreWolf1980/MAC | AndreWolf1980 | 2024-06-29T09:30:30Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T09:30:30Z | Entry not found |
KhanLee0930/rl_course_vizdoom_health_gathering_supreme | KhanLee0930 | 2024-06-29T09:31:59Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-29T09:31:54Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.04 +/- 4.29
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r KhanLee0930/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
MOFMasters/MOF-Master-Llama-3-8b-Random-MOF-hackathon | MOFMasters | 2024-06-30T16:12:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"MOF",
"gas separation",
"methane",
"nitrogen",
"mixture",
"random",
"text-generation",
"en",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-29T09:35:57Z | ---
library_name: transformers
tags:
- unsloth
- MOF
- gas separation
- methane
- nitrogen
- mixture
- random
license: cc-by-4.0
language:
- en
pipeline_tag: text-generation
widget:
- text: Generate a random MOFid with a maximum of two organic linkers.
example_title: Generate MOFid
---
# Model Card for Model ID
Randomly generates MOFids from a wide range of nodes, linkers, topologies, and catenations, which can then be screened for different applications. For example, here we focus on predicting CH4/N2 gas-separation performance using the [MOF-GRU paper](https://pubs.acs.org/doi/full/10.1021/acsami.3c11790). Give a number as the input to get that many randomly generated, unique MOFids.
## Model Details
**Only LoRA Adapters are provided. Merge with base Llama-3-8b model for inference.**
### Model Description
This model is a 4-bit quantized, fine-tuned version of Llama3-8b, specialized for generating Metal-Organic Framework (MOF) IDs. It can produce a specified number of random MOFids based on user input. MOFids are generated following the general MOFid structure described in [this paper](https://pubs.acs.org/doi/full/10.1021/acs.cgd.9b01050).
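For illustration (not part of the original tooling), a generated MOFid, taken from the Results section below, can be decomposed with plain Python; the full validator appears later in this card:
```python
mof_id = ("CC(=O)C1=CC(=CC(=C1C(=O)[O-1])C(=O)C)C2=CC=C(C=C2)C3=CC(C(=O)C)=C(C(=C3)C(=O)C)C(=O)[O-1]"
          ".[O-1]C(=O)C#CC#CC(=O)[O-1].[Cu][Cu] MOFid-v1.nbo.cat0")

molecule_part, identifier = mof_id.rsplit(" ", 1)
node = molecule_part[molecule_part.rfind(".") + 1:]            # '[Cu][Cu]' -- inorganic building block
linkers = molecule_part[:molecule_part.rfind(".")].split(".")  # the organic linker SMILES
signature, topology, catenation = identifier.split(".")        # 'MOFid-v1', 'nbo', 'cat0'
print(linkers, node, topology, catenation)
```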
- **Developed by:** The MOF Masters
- **Shared by:** [Aritra Roy](https://aritraroy.live)
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** Creative Commons Attribution 4.0 International (CC BY 4.0)
- **Finetuned from model:** Llama3-8b
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/MOFMasters/MOF-Master-Llama-3-8b-Random-MOF-hackathon
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
This model is designed to generate random Metal-Organic Framework (MOF) IDs. Users can specify the number of MOFids they want to generate, and the model will produce that many unique identifiers.
### Out-of-Scope Use
This model is not designed for tasks other than MOFid generation. It should not be used for general text generation, question answering, or any task unrelated to MOF identification.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
#### Setup
First, make sure you have the required libraries installed:
```bash
pip install xformers trl peft accelerate bitsandbytes tqdm python-dotenv wandb scikit-learn rdkit selfies "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
```
#### Usage
**Colab Scripts**
- Finetuning: https://cutt.ly/finetune-llama3-8b-for-MOFid-generation
- Inference & Prediction: https://cutt.ly/generate-MOFid-and-predict-gas-separation
```python
from rdkit import Chem
import re
import pandas as pd
import selfies as sf
import torch

# Load the base model
from unsloth import FastLanguageModel
max_seq_length = 1024 # Choose any! Unsloth auto-supports RoPE Scaling internally! However, for our work 1024 is more than enough.
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = hf_token, # needed for gated models like meta-llama/Llama-3-8b
)

# Load the LoRA adapter using local path
model.load_adapter("LoRA-Llama-3-8b-MOFMaster")

# Move model to GPU if available (the base model must be loaded before this)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
```
#### Prompt template and helper function
```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Response:
{output}"""
possible_topologies = ['pcu', 'pts', 'fsc', 'lvt', 'fof', 'bcu', 'nbo', 'dia', 'thj', 'sqc', 'hxg', 'moa', 'cds', 'lon', 'uni', 'fel', 'ths', 'sxb', 'sql', 'ssb', 'mmt', 'flu', 'pto', 'asf', 'bsn', 'umx', 'wut', 'dmp', 'vmi', 'una', 'tfz', 'fcu', 'cdl', 'upa', 'xai', 'unc', 'tfs', 'uoc', 'jea', 'moc', 'rnb', 'ptr', 'mco', 'kag', 'bbl', 'jeb', 'und', 'ung', 'ukk', 'bbj', 'mote', 'jsd', 'uml', 'qtz', 'xbe', 'wbl', 'crb', 'icf', 'ato', 'ttp', 'ftw', 'stc', 'unj', 'mou', 'baa', 'tfl', 'tbo', 'tfo', 'ins', 'ske', 'ptt', 'uog', 'unh', 'ume', 'bnn', 'sse', 'rtl', 'qzd', 'sod', 'frl', 'mog', 'rob', 'stj', 'sml', 'bbm', 'xmz', 'itv', 'uot', 'uov', 'bpi', 'uoj', 'nab', 'stu', 'sbr', 'neb', 'pte', 'bba', 'bbd', 'mgg', 'smd', 'bbi', 'coe', 'nog', 'cdm', 'cdle', 'wjh', 'mcn', 'cda', 'noq', 'qdl', 'tfzd', 'sxc', 'sne', 'flue', 'cdz', 'cdn', 'uny', 'nia', 'rna', 'bbk', 'act', 'vby', 'wmi', 'smt', 'umv', 'hmse', 'vbm', 'ithd', 'kto', 'bbe', 'stb', 'snx', 'ctn', 'why', 'unb', 'dmd', 'nom', 'bbh', 'pcl', 'usf', 'atn', 'fry', 'sma', 'eea', 'tfa', 'lcy', 'unp', 'vmd', 'nou', 'scu', 'mot', 'hms', 'the', 'ukm', 'smn', 'tfi', 'rhr', 'umg', 'uom', 'flt', 'nor', 'npo', 'mdf', 'bel', 'crs', 'hyw', 'snk', 'pth', 'fsm', 'wky', 'ssd', 'ssa', 'yug', 'isq', 'fnh', 'tfe', 'bbf', 'sab', 'unm', 'fet', 'tsg', 'wei', 'pcuh', 'uod', 'msp', 'snl', 'zyl', 'umj', 'tfb', 'brl', 'uob', 'fsg', 'los', 'muo', 'vcc', 'snz', 'une', 'fsh', 'smk', 'nox', 'uki', 'mmo', 'sqp', 'sit', 'bbg', 'dft', 'zxc', 'sra', 'ssf', 'pds', 'bik', 'uos', 'gra', 'uoh', 'reo', 'smg', 'ecu', 'isp', 'nts', 'lil', 'spn', 'eta', 'srs', 'bbr', 'uku', 'bco', 'umw', 'mab', 'xux', 'isx', 'acs', 'umc', 'dmc', 'urh', 'unn', 'nfc', 'lcv', 'not', 'skd', 'nat', 'sol', 'vmj', 'llj', 'apo', 'fmj', 'sni', 'smc', 'vbo', 'cag', 'gwg', 'smm', 'hex', 'upb', 'qtzx', 'jbw', 'ket', 'vmk', 'sur', 'tsb', 'uoq', 'sta', 'mer', 'wfa', 'tfg', 'smb', 'qtze', 'ukg', 'cut', 'mmm', 'stw', 'sda', 'lfm', 'fjh', 'gis', 'cus', 'apd', 'tcb', 'wjf', 'btu', 'fsl', 'vmg', 'hcb', 'ksx', 'mok', 'bbx', 'ucn', 'sty', 'bne', 'ukv', 'bbs', 'ttx', 'anh', 'stx', 'gee', 'ofp', 'sow', 'tfc', 'wiv', 'umq', 'lim', 'ant', 'ukc', 'xbq', 'sms', 'zyg', 'csq', 'cml', 'sca', 'cdq', 'ums', 'etb', 'nod', 'mod', 'ile', 'snq', 'wgy', 'lcs', 'kea', 'wia', 'snp', 'phi', 'ntt', 'can', 'cbt', 'smu', 'tfj', 'lig', 'fog', 'oso', 'lqm', 'zec', 'lwg', 'bcq', 'baz', 'umo', 'epz', 'gsi', 'som', 'lni', 'wma', 'znp', 'bpq', 'asv', 'btoe', 'uoe', 'cbn', 'uox', 'tsy', 'bbv', 'vmh', 'uow', 'etbe', 'fvl', 'uoa', 'fvn', 'uol', 'osa', 'cfc', 'ylf', 'wji', 'ukj', 'mjb', 'iss', 'ltj', 'fse', 'pcb', 'tsa', 'ttu', 'qnb', 'bcn', 'uop', 'phw', 'wmf', 'upd', 'unx', 'stp', 'cha', 'deh', 'umm', 'uof', 'spl', 'sno', 'vme', 'fsy', 'ukn', 'bcg', 'cdj', 'urj', 'smj', 'pyr', 'tty', 'umr', 'wmg', 'lone', 'wmc', 'xat', 'utp', 'brk', 'tzs', 'ict', 'cqh', 'phx', 'umu', 'ptsf']
possible_catenations = ['cat0', 'cat1', 'cat3', 'cat5', 'cat2']
possible_linker_elements = ['[=Branch2]', '[Li]', '[Ring1]', '[=S]', '[CH2]', '[=NH0]', '[=Branch3]', '[Cu]', '[=CH1]', '[CH0]', '[Co]', '[#Branch2]', '[=Ring1]', '[Branch3]', '[Branch2]', '[O-1]', '[#Branch1]', '[F]', '[CH3]', '[N+1]', '[P]', '[C]', '[I]', '[=Ring2]', '[S]', '[SH0]', '[N]', '[Si]', '[#C]', '[=C]', '[NH0]', '[Cl]', '[=CH0]', '[#CH0]', '[=O]', '[Ring2]', '[=N+1]', '[Branch1]', '[=N]', '[NH1]', '[OH0]', '[Mn]', '[CH1]', '[Br]', '[=Branch1]', '[IH0]', '[O]', '[none]', '[#N]']
possible_nodes = ['[Cu]1[Cu][Cu][Cu]1', '[Ti]12[O]3[Ti]4[O]2[Ti]2[O]4[Ti]4[O]5[Ti]3[O]1[Ti]5[O]24', '[Tb]12[OH]3[Tb]4[OH]2[Tb]2[OH]1[Tb]3[OH]42', '[Cu][OH]([Cu])[Cu]', '[O][Ni][O]([Ni][O])[Ni][O]', '[Eu]12[OH]3[Eu]4[OH]2[Eu]2[OH]1[Eu]3[OH]42', 'Cl[Cd]Cl', '[Ni][OH2]([Ni])[Ni]', '[O]12[Ti]34[OH]5[Ti]62[OH]2[Ti]71[OH]4[Ti]14[O]3[Ti]35[O]6[Ti]2([O]71)[OH]43', '[Pr]', '[O][Cr][O]([Cr][O])[Cr][O]', '[Fe][O]([Fe])[Fe]', '[Ni][Ni]', '[Fe]', '[Co][OH]([Co])[Co]', '[In][O]1[Mn][O]([Mn]1)[In]', '[Y]', 'O[Cu]', '[Ni]O[Ni]', '[Ni][OH]1[Ni][OH]([Ni])[Ni]2[OH]([Ni]1[OH]2[Ni])[Ni]', '[Ni][O]([Zn])[Zn]', '[Mg][OH2][Mg]', '[Ni][OH2][Ni]', '[O]12[Hf]34[O]5[Hf]62[O]2[Hf]71[O]4[Hf]14[O]3[Hf]35[O]6[Hf]2([O]71)[O]43', '[Co][OH]1[Co][OH]([Co])[Co]2[OH]([Co]1[OH]2[Co])[Co]', '[Nd][Nd]', '[Mg][OH]1[Mg][OH]([Mg]1)[Mg]', '[Zn][O]([Zn])([Zn])[Zn]', '[Al]', '[U]', '[Gd]', '[Cu][OH]1[Cu][OH]([Cu]1)[Cu]', 'Cl[Al]Cl', '[Co][O]([Zn])[Zn]', '[Sr]', '[Fe][Fe]', '[Zn][OH][Zn]', '[Gd]12[OH]3[Gd]4[OH]2[Gd]2[OH]1[Gd]3[OH]42', '[O][Fe][O]([Fe][O])[Fe][O]', '[Lu]', '[O]12[Zr]34[OH]5[Zr]62[OH]2[Zr]71[OH]4[Zr]14[O]3[Zr]35[O]6[Zr]2([O]71)[OH]43', '[Cu]Br', 'F[Al]', '[Zr]', '[Yb]', '[Ce]', '[Pr]12[OH]3[Pr]4[OH]2[Pr]2[OH]1[Pr]3[OH]42', '[Ni][OH][Ni]', '[O]12[Zr]34[OH]5[Ce]62[OH]2[Zr]71[OH]4[Ce]14[O]3[Zr]35[O]6[Zr]2([O]71)[OH]43', '[In]', '[O]12[Zr]34[O]5[Zr]62[O]2[Zr]71[O]4[Zr]14[O]3[Zr]35[O]6[Zr]2([O]71)[O]43', '[Zn][O]([Zn])[Zn]', '[Cr][Cr]', '[Ni]', '[Rb]1[O]2[O]1[Rb]2', '[OH2][La]', '[Mn]', '[Zn][O]([Cd])([Cd])[Cd]', '[Zn][OH]1[Zn][OH]([Zn]1)[Zn]', '[Tm]', '[Cu][Cu][Cu][Cu]', '[Sm]', '[Zn]Br', '[Cu]I', '[Pr][Pr]', '[Mg]', '[Co][OH]([Co][OH]([Co])[Co])[Co]', 'I[Cu]1[Cu][Cu]1(I)(I)I', '[V]1[OH][V][OH][V][OH][V][OH]1', 'Cl[Cd]', '[Ti]', '[Co][OH]1[Co][OH]([Co]1)[Co]', '[Sn][O]1[Sn][O]([Sn]1)[Sn]', 'Cl[La](Cl)Cl', '[Dy][Dy]', '[Mg][OH2][Mg][OH2][Mg]', '[Cu][Cu]', '[Dy]', '[O]12[Hf]34[OH]5[Hf]62[OH]2[Hf]71[OH]4[Hf]14[O]3[Hf]35[O]6[Hf]2([O]71)[OH]43', '[OH2][Lu]', '[Ni][O]([Ni])([Ni])[Ni]', '[Co][OH2][Co]', '[S][Cu][Cu][S]', '[Mn][O]([Mn])[Mn]', '[Cd]', '[Zn]', '[Ni]O[Ni]1O[Ni](O1)O[Ni]', '[Cu][OH]1[Cu][OH]([Cu]1)[Cu]12([OH2][OH2]2)[OH2][OH2]1', 'O1O[Co]1[Co]1OO1', '[Eu]', 'Cl[Zn]', '[Ca]', '[Fe][O]1[Fe][O]([Fe]1)[Fe]', '[Ni][OH2][Ni]1[OH2][Ni][OH2]1', '[Cu][OH][Cu][OH][Cu]', '[La]', 'Cl[Cu]', '[Ni]O[Ni]O[Ni]', '[Co]O[Co]1O[Co](O1)O[Co]', '[Co][O]([Zn])[Co]', 'Cl[Co]Cl', '[Zn][Zn]', '[Tb]', '[Nd]', '[Co][Co]', '[OH2][Gd]', '[Ag]', 'Cl[Mn][Mn]Cl', '[Li]', '[Er][Er]', '[Cu]O[Cu]', '[Sc]', '[Ho]', '[Er]', '[Cd][Cd]', '[Ni][OH]1[Ni][OH]([Ni]1)[Ni]', '[Cu][O]1[Cu][O]([Cu]1)[Cu]', '[Np]1O[Np]O[Np]O1', '[OH2][Ni][OH2][Ni]', '[Mn][Mn]', '[Cu]', '[Y][Y]', '[Co]', '[Zn][OH]([Zn][OH]([Zn])[Zn])[Zn]']
def validate_mof(mof_id, possible_topologies, possible_catenations):
    # Note: possible_linker_elements and possible_nodes are read from the enclosing scope.
    parts = mof_id.split()
    molecule_part = " ".join(parts[:-1])
    identifier_part = parts[-1]
    signature = "MOFid-v1"
    valid_molecules = []
    try:
        split_index = molecule_part.rfind('.')
        smiles_part = molecule_part[:split_index]
        node = molecule_part[split_index + 1:]
        try:
            smiles_list = smiles_part.split('.')
            if len(smiles_list) <= 2:
                for smile in smiles_list:
                    # Convert SMILES to molecule object with sanitize=False
                    molecule = Chem.MolFromSmiles(smile, sanitize=False)
                    if molecule is not None:
                        try:
                            if Chem.SanitizeMol(molecule) == 0:
                                valid_molecules.append(smile)
                        except Exception:
                            pass  # unsanitizable SMILES -> not a valid linker
                if len(valid_molecules) == 0:
                    return None
                if len(valid_molecules) <= 2:
                    if ';' in identifier_part:
                        identifier_part = identifier_part.split(';')[0]
                    try:
                        linker_elements = []
                        is_valid_linkers = False
                        for smile in valid_molecules:
                            selfie = sf.encoder(smile)
                            elements = list(sf.split_selfies(selfie))
                            linker_elements.extend(elements)
                        if set(linker_elements).issubset(set(possible_linker_elements)):
                            is_valid_linkers = True
                        format_signature, topology, catenation = identifier_part.split('.')
                        if (format_signature == signature and topology in possible_topologies
                                and catenation in possible_catenations and is_valid_linkers
                                and node in possible_nodes):
                            if re.search(r'\sMOFid-v1', mof_id):
                                return mof_id
                            print("No space before MOFid-v1")
                        else:
                            print(f"Invalid signature, topology, or catenation: {identifier_part}")
                    except ValueError:
                        print(f"Couldn't split the identifier part into format_signature, topology, catenation: {identifier_part}")
        except ValueError:
            print(f"Couldn't split the smiles part into a list: {smiles_part}")
    except ValueError:
        print(f"Couldn't split the molecule part into SMILES and Building Block: {molecule_part}")
    return None
```
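A quick smoke test (hypothetical usage, assuming the lists defined above are in scope) with a MOFid reported in the Results section below; `validate_mof` returns the MOFid string when all checks pass, otherwise `None`:
```python
candidate = ("[O-1]C(=O)C1=CC(=CC(=C1)C(=O)[O-1])C2=CC=C(C=C2)C3=CC=C(C=C3)C4=CC(=CC(=C4)C(=O)[O-1])C(=O)[O-1]"
             ".[O-1]C(=O)C1=CC2=C(C=C1C(=O)[O-1])C=C3C(=C2C4=CC=C[NH1]4)C=C(C(=C3C(C)C)C(=O)[O-1])C(=O)[O-1]"
             ".[Fe] MOFid-v1.pts.cat0")
print(validate_mof(candidate, possible_topologies, possible_catenations))
```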
#### Generate and validate MOFids
```python
# Read the training dataset for validating generated MOFids
df = pd.read_csv('train_dataset.csv')

# Prepare the input for the model
instruction = "You are a random MOF predictor. The general structure of the MOFid is-[SMILES code of 1st organic linker].[SMILES code of 2nd organic linker or [none]].[inorganic building block] MOFid-v1.[topology code].[catenation type]."
user_input = "Generate a random MOFid with a maximum of two organic linkers"

inputs = tokenizer(
    [alpaca_prompt.format(instruction=instruction, input=user_input, output="")],
    return_tensors="pt"
).to(device)

# Generate MOF IDs until 25 unique, valid ones have been collected
valid_mofs = []
while True:
    # Sample (rather than greedy-decode) so repeated calls can yield different MOFids
    output = model.generate(**inputs, max_new_tokens=256, do_sample=True)
    response = tokenizer.decode(output[0], skip_special_tokens=True)
    mof_id = response.split("### Response:\n")[-1].strip()

    # Check if MOFid already exists in the database or valid_mofs list
    if mof_id in df['MOF_ID'].values or mof_id in valid_mofs:
        print("MOF already exists in the DataFrame. Generating another one...")
        continue

    # Check chemical validity of the MOF
    if validate_mof(mof_id, possible_topologies, possible_catenations):
        valid_mofs.append(mof_id)
        print(f"Valid MOF generated: {mof_id}")
        if len(valid_mofs) == 25:
            break
    else:
        print("Invalid MOF generated. Trying again...")

# Print the 25 generated unique valid MOFs
for index, mof in enumerate(valid_mofs):
    print(f"{index+1}. {mof}")
```
## Training Details
### Training Data
The training data contains more than 110k entries provided in the MOF-GRU paper.
### Training Procedure
The model was fine-tuned using the unsloth library on an A100 GPU provided by King's College London, UK. The fine-tuning process took 8 hours and achieved a final loss of 0.67.
#### Preprocessing
All the SMILES for the linkers are converted into SELFIES using the [selfies](https://pypi.org/project/selfies/) Python library. All the possible linker elements, nodes, topologies, and catenations are embedded through a vector-embedding process (from the MOF-GRU paper).
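A minimal sketch of that conversion with the `selfies` library (the exact preprocessing script is not included here), using a linker SMILES that appears in the Results section of this card:
```python
import selfies as sf

smiles = "[O-1]C(=O)C#CC#CC(=O)[O-1]"     # a linker SMILES from the Results section
selfie = sf.encoder(smiles)                # SMILES -> SELFIES
tokens = list(sf.split_selfies(selfie))    # individual SELFIES symbols, e.g. '[O-1]', '[C]', ...
roundtrip = sf.decoder(selfie)             # SELFIES -> SMILES (round-trip check)
print(tokens, roundtrip)
```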
#### Training Hyperparameters
- **Training regime:** Mixed precision (fp16 or bf16, depending on hardware support)
- **Optimizer:** AdamW (8-bit)
- **Learning rate:** 2e-4
- **Batch size:** 128 per device
- **Number of epochs:** 1
- **Weight decay:** 0.01
- **Warmup ratio:** 0.1
- **Learning rate schedule:** Linear
- **Max sequence length:** 1024
- **Gradient checkpointing:** "unsloth" (optimized for very long context and 30% less VRAM usage)
- **Random seed:** 3407
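As a hedged sketch (the exact script lives only in the linked Colab notebooks), these settings map onto a `transformers` training configuration roughly as follows; `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",            # placeholder
    per_device_train_batch_size=128,
    num_train_epochs=1,
    learning_rate=2e-4,
    weight_decay=0.01,
    warmup_ratio=0.1,
    lr_scheduler_type="linear",
    optim="adamw_8bit",              # 8-bit AdamW, as used in the Unsloth notebooks
    fp16=True,                       # or bf16=True on Ampere+ hardware (mixed precision)
    logging_steps=5,
    save_strategy="steps",
    save_steps=50,
    save_total_limit=5,
    seed=3407,
)
```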
#### Speeds, Sizes, Times
- **Total training time:** 3 hours
- **Hardware:** A100 GPU
- **Checkpointing:**
- Save strategy: Steps
- Save steps: 50
- Save total limit: 5
- **Logging frequency:** Every 5 steps
- **Final loss:** 0.074
- **Model size:** 4-bit quantized version of Llama3-8b
#### train/loss plot from wandb while fine-tuning
<img src="https://huggingface.co/MOFMasters/MOF-Master-Llama-3-8b-Random-MOF-hackathon/resolve/main/MOFMaster-hackathon-finetune-loss-graph.png" width="300" height="auto" alt="train/loss vs step plot from wandb while fine-tuning">
## Evaluation
Because this is a random-generation model, outputs were evaluated with the `selfies` and `rdkit` Python libraries to check that each generated linker is a valid organic molecule, while the remaining components (node, topology, catenation) were checked against our database.
### Results
The following are two randomly generated MOFids:

- `[O-1]C(=O)C1=CC(=CC(=C1)C(=O)[O-1])C2=CC=C(C=C2)C3=CC=C(C=C3)C4=CC(=CC(=C4)C(=O)[O-1])C(=O)[O-1].[O-1]C(=O)C1=CC2=C(C=C1C(=O)[O-1])C=C3C(=C2C4=CC=C[NH1]4)C=C(C(=C3C(C)C)C(=O)[O-1])C(=O)[O-1].[Fe] MOFid-v1.pts.cat0`
- `CC(=O)C1=CC(=CC(=C1C(=O)[O-1])C(=O)C)C2=CC=C(C=C2)C3=CC(C(=O)C)=C(C(=C3)C(=O)C)C(=O)[O-1].[O-1]C(=O)C#CC#CC(=O)[O-1].[Cu][Cu] MOFid-v1.nbo.cat0`
## Citation
We plan to develop this hackathon project into a scientific paper. Until then, please use [this repository URL](https://huggingface.co/MOFMasters/MOF-Master-Llama-3-8b-Random-MOF-hackathon) as the reference.
## Model Card Authors
[Aritra Roy](https://aritraroy.live), [Piyush R. Maharana](https://github.com/catastropiyush), [Tarak Nath Das](https://www.linkedin.com/in/tarak-nath-das-4362251b7/)
## Model Card Contact
[Aritra Roy](https://aritraroy.live) |
Yash-Shindey/cartpole2 | Yash-Shindey | 2024-06-29T09:49:08Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-29T09:48:59Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cartpole2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rinogrego/GritLM-BioMistral-7B | rinogrego | 2024-06-29T10:03:03Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:BioMistral/BioMistral-7B",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-29T09:53:32Z | ---
license: apache-2.0
base_model: BioMistral/BioMistral-7B
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/rinogrego/GritLM-BioMed/runs/ut1l5o2z)
# results
This model is a fine-tuned version of [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.42.2
- Pytorch 2.3.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
|
leiwx52/AssistGUI | leiwx52 | 2024-06-29T09:59:58Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T09:55:26Z | Entry not found |
RobertML/sn3-chemistry | RobertML | 2024-07-01T05:26:02Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T09:57:31Z | Entry not found |
johnlavelle/test | johnlavelle | 2024-06-29T09:57:38Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T09:57:38Z | Entry not found |
mihirkothari0702/Mihir4 | mihirkothari0702 | 2024-06-29T09:59:52Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T09:58:45Z | Entry not found |
PrunaAI/readomni-dao-9b-QUANTO-float8bit-smashed | PrunaAI | 2024-07-01T08:00:12Z | 0 | 0 | transformers | [
"transformers",
"pruna-ai",
"base_model:readomni/dao-9b",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T10:03:04Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: readomni/dao-9b
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with quanto (a minimal quantization sketch follows this list).
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
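For reference, below is a minimal sketch of float8 weight quantization with quanto. The exact recipe used for this model is described only by `model/smash_config.json`, so treat these calls as an assumption based on quanto's public API, not as Pruna's actual pipeline:
```python
from transformers import AutoModelForCausalLM
from quanto import quantize, freeze, qfloat8  # assumes the standalone `quanto` package from the Setup step

model = AutoModelForCausalLM.from_pretrained("readomni/dao-9b")
quantize(model, weights=qfloat8)  # swap linear weights for float8 quantized versions
freeze(model)                     # materialize the quantized weights in place
```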
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo readomni/dao-9b are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install quanto
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/readomni-dao-9b-QUANTO-float8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("readomni/dao-9b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model readomni/dao-9b before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
Swarts/Ozlemikroo | Swarts | 2024-06-29T10:07:09Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T10:03:06Z | Entry not found |
jkienast/chessmachine | jkienast | 2024-06-29T10:16:13Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T10:03:31Z | ---
license: mit
---
# Chessmachine
|
jkienast/batavia | jkienast | 2024-06-29T10:04:14Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2024-06-29T10:04:14Z | ---
license: mit
---
|
LeonKogler/FinetunedTokenGenerator | LeonKogler | 2024-06-29T10:04:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T10:04:37Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** LeonKogler
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
imagepipeline/realc | imagepipeline | 2024-06-29T10:05:53Z | 0 | 0 | null | [
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-06-29T10:05:51Z | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## realc
<img src="https://via.placeholder.com/468x300?text=App+Screenshot+Here" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - realc
[](https://imagepipeline.io/models/realc?id=d6e77b60-7127-44b9-a0bf-c8d759293fd9/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php` `javascript` `node` etc ? Checkout our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sdxl/text2image/v1/run"
payload = json.dumps({
"model_id": "sdxl",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "d6e77b60-7127-44b9-a0bf-c8d759293fd9",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready to use `MODELS` like this for `SD 1.5` and `SDXL` :
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sdxl/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
RobertML/sn21-helium | RobertML | 2024-06-29T10:26:53Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
]
| null | 2024-06-29T10:10:35Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
net31/uid148 | net31 | 2024-06-29T22:05:49Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T10:15:36Z | Entry not found |
Krompirko/ijoi | Krompirko | 2024-06-29T10:22:31Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T10:21:17Z | Entry not found |
luuluumando/naschain | luuluumando | 2024-06-29T10:24:47Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T10:24:28Z | Entry not found |
iamnguyen/sailor-chat | iamnguyen | 2024-06-29T10:39:10Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-29T10:25:43Z | Entry not found |
taiacolaitiniyara/lulu | taiacolaitiniyara | 2024-06-29T10:27:07Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2024-06-29T10:27:07Z | ---
license: mit
---
|
Xu-Ouyang/pythia-12b-deduped-int3-step143000-GPTQ-wikitext2 | Xu-Ouyang | 2024-07-01T18:54:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"3-bit",
"gptq",
"region:us"
]
| text-generation | 2024-06-29T10:27:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
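Since this card is otherwise unfilled, here is a minimal, unofficial sketch of how a GPTQ-quantized `gpt_neox` checkpoint like this one is typically loaded with `transformers` (assuming the `optimum` and `auto-gptq` packages are installed; the prompt and generation settings are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-12b-deduped-int3-step143000-GPTQ-wikitext2"

# The GPTQ quantization config stored in the repo is picked up automatically
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```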
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RobertML/sn21-lithium | RobertML | 2024-06-29T10:44:29Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
]
| null | 2024-06-29T10:28:24Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-HQQ-2bit-smashed | PrunaAI | 2024-06-29T10:35:00Z | 0 | 0 | transformers | [
"transformers",
"phi-msft",
"text-generation",
"pruna-ai",
"conversational",
"custom_code",
"base_model:cognitivecomputations/dolphin-phi-2-kensho",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-29T10:34:26Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: cognitivecomputations/dolphin-phi-2-kensho
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo cognitivecomputations/dolphin-phi-2-kensho are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

try:
    # Load the quantized checkpoint through the HQQ engine wrapper
    model = HQQModelForCausalLM.from_quantized("PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-HQQ-2bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the engine wrapper cannot load it
    model = AutoHQQHFModel.from_quantized("PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-phi-2-kensho")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, cognitivecomputations/dolphin-phi-2-kensho, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-HQQ-1bit-smashed | PrunaAI | 2024-06-29T10:34:53Z | 0 | 0 | transformers | [
"transformers",
"phi-msft",
"text-generation",
"pruna-ai",
"conversational",
"custom_code",
"base_model:cognitivecomputations/dolphin-phi-2-kensho",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-29T10:34:29Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: cognitivecomputations/dolphin-phi-2-kensho
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo cognitivecomputations/dolphin-phi-2-kensho are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

try:
    # Load the quantized checkpoint through the HQQ engine wrapper
    model = HQQModelForCausalLM.from_quantized("PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-HQQ-1bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the engine wrapper cannot load it
    model = AutoHQQHFModel.from_quantized("PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-phi-2-kensho")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, cognitivecomputations/dolphin-phi-2-kensho, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
triggerdude33/gemma_queries_finetuned | triggerdude33 | 2024-06-29T11:58:38Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"gemma",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b-it",
"license:gemma",
"region:us"
]
| null | 2024-06-29T10:35:42Z | ---
base_model: google/gemma-2b-it
library_name: peft
license: gemma
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: gemma_queries_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma_queries_finetuned
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code sketch mapping them onto a trainer setup follows the list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 20
- mixed_precision_training: Native AMP
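A rough sketch of how these hyperparameters might map onto a TRL `SFTTrainer` run is shown below; the dataset, text field, and LoRA settings are placeholders rather than the values actually used.
```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder dataset: the actual training data is not documented above
dataset = load_dataset("json", data_files="queries.json", split="train")

args = TrainingArguments(
    output_dir="gemma_queries_finetuned",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # total train batch size: 4
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=20,
    seed=42,
    fp16=True,  # native AMP mixed precision
)

trainer = SFTTrainer(
    model="google/gemma-2b-it",
    train_dataset=dataset,
    args=args,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # placeholder LoRA config
    dataset_text_field="text",   # placeholder field name
    max_seq_length=512,          # placeholder sequence length
)
trainer.train()
```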
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.3.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2 |
PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-HQQ-4bit-smashed | PrunaAI | 2024-06-29T10:37:34Z | 0 | 0 | transformers | [
"transformers",
"phi-msft",
"text-generation",
"pruna-ai",
"conversational",
"custom_code",
"base_model:cognitivecomputations/dolphin-phi-2-kensho",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-29T10:36:50Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: cognitivecomputations/dolphin-phi-2-kensho
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo cognitivecomputations/dolphin-phi-2-kensho are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

try:
    # Load the quantized checkpoint through the HQQ engine wrapper
    model = HQQModelForCausalLM.from_quantized("PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-HQQ-4bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the engine wrapper cannot load it
    model = AutoHQQHFModel.from_quantized("PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-HQQ-4bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-phi-2-kensho")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, cognitivecomputations/dolphin-phi-2-kensho, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-QUANTO-int4bit-smashed | PrunaAI | 2024-07-01T07:58:57Z | 0 | 0 | transformers | [
"transformers",
"pruna-ai",
"base_model:cognitivecomputations/dolphin-phi-2-kensho",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T10:36:54Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: cognitivecomputations/dolphin-phi-2-kensho
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with quanto.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo cognitivecomputations/dolphin-phi-2-kensho are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install quanto
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the quanto-smashed checkpoint; trust_remote_code is required for the phi-msft architecture
model = AutoModelForCausalLM.from_pretrained("PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-QUANTO-int4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-phi-2-kensho")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, cognitivecomputations/dolphin-phi-2-kensho, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-QUANTO-float8bit-smashed | PrunaAI | 2024-07-01T08:00:58Z | 0 | 0 | transformers | [
"transformers",
"pruna-ai",
"base_model:cognitivecomputations/dolphin-phi-2-kensho",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T10:37:12Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: cognitivecomputations/dolphin-phi-2-kensho
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with quanto.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo cognitivecomputations/dolphin-phi-2-kensho are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install quanto
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the quanto-smashed checkpoint; trust_remote_code is required for the phi-msft architecture
model = AutoModelForCausalLM.from_pretrained("PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-QUANTO-float8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-phi-2-kensho")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, cognitivecomputations/dolphin-phi-2-kensho, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-QUANTO-int8bit-smashed | PrunaAI | 2024-07-01T07:59:04Z | 0 | 0 | transformers | [
"transformers",
"pruna-ai",
"base_model:cognitivecomputations/dolphin-phi-2-kensho",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T10:37:22Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: cognitivecomputations/dolphin-phi-2-kensho
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with quanto.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo cognitivecomputations/dolphin-phi-2-kensho are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install quanto
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the quanto-smashed checkpoint; trust_remote_code is required for the phi-msft architecture
model = AutoModelForCausalLM.from_pretrained("PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-QUANTO-int8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-phi-2-kensho")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, cognitivecomputations/dolphin-phi-2-kensho, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-QUANTO-int2bit-smashed | PrunaAI | 2024-07-01T07:57:38Z | 0 | 0 | transformers | [
"transformers",
"pruna-ai",
"base_model:cognitivecomputations/dolphin-phi-2-kensho",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T10:37:26Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: cognitivecomputations/dolphin-phi-2-kensho
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with quanto.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo cognitivecomputations/dolphin-phi-2-kensho are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install quanto
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the quanto-smashed checkpoint; trust_remote_code is required for the phi-msft architecture
model = AutoModelForCausalLM.from_pretrained("PrunaAI/cognitivecomputations-dolphin-phi-2-kensho-QUANTO-int2bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-phi-2-kensho")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, cognitivecomputations/dolphin-phi-2-kensho, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
ShapeKapseln33/Nooro09 | ShapeKapseln33 | 2024-06-29T10:40:50Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T10:38:13Z | [NEW] Nooro Leg Massager Experiences Doses & Intake – Nooro Foot Massager is an electrically-powered foot massager that targets special points - the acupuncture points - on your foot to relieve foot muscle fatigue and promote feet health. This state-of-the-art massage table is equivalent to a mini spa that works right out of the box to improve pedal blood circulation. Luckily you don't have to spend several dollars or your time in standard spas because Nooro Foot Massager provides at-home service.
**[Click here to buy now from official website of Nooro](https://slim-gummies-deutschland.de/nooro-leg-massager)**
## How Do You Use the Nooro 3-in-1 Leg Massager?
## Step 1: Wrap Them Around Your Calves
First, gently wrap the massagers around your calves. Make sure the charging attachments on the sides are positioned correctly. For a snug fit, use the extra velcro bands that come with the package if your calves are wider.
## Step 2: Select the Mode and Heat Level
The massagers are equipped with a three-button control system on their sides. To turn the device on, first press the power button. Then use the Cycle button to pick among the three modes. If you would like a warm sensation during your massage, you can also select the heating button.
## Step 3: Enjoy the Massage for 15 Minutes
Once you have chosen your preferred settings, press the start button on the wrap's side to begin the massage. Give the Nooro 3-in-1 Leg Massager time to warm up and gently squeeze your legs as you relax. For best results, it is recommended to use the device for 10 to 15 minutes per session.
## Step 4: Sit Back and Relax
Sit back and rest comfortably while the massage is being given. While the massager works on your legs, you can take the chance to unwind, watch TV, or listen to music.
## Step 5: Remove and Store
Carefully remove the massagers from your calves once the session ends. To keep them clean and in good condition for your next session, store them in a safe place for later use.
**[Click here to buy now from official website of Nooro](https://slim-gummies-deutschland.de/nooro-leg-massager)**
## Advantages of Using the Nooro 3-in-1 Leg Massager
## Eases Edema and Swelling
The massager uses specialized compression technology to reduce edema and swelling in your legs, calves, ankles, and feet. It reduces swelling and discomfort by encouraging drainage and enhancing blood circulation, all of which lessens fluid retention.
## Improves Blood Circulation
The Nooro massager improves circulation by increasing blood flow across the lower body. By removing toxins and metabolic waste products and enabling more effective delivery of oxygen and nutrients to tissues, improved circulation can promote overall health, wellness, and vitality.
## Minimizes Foot Pain and Stiffness
Frequent use of the massager can help reduce stiffness and discomfort in the feet caused by ailments like plantar fasciitis, arthritis, or prolonged standing. It relieves your feet of fatigue and pain, allowing you to walk more comfortably and easily.
## Boosts Muscle Recovery
Muscles can get tired and sore after a strenuous workout or extended sitting. The Nooro massager helps muscles recover by decreasing inflammation, soothing tense muscles, and encouraging relaxation. Athletes, or anybody looking to recover after a workout, should consider using it.
## Promotes Relaxation and Stress Relief
The massager's gentle compression and heat create a soothing sensation that encourages relaxation and reduces stress. If you use the device for fifteen minutes daily, you can feel calmer, rejuvenated, and ready to face the day.
## Practical and Easy to Use
The Nooro massager is easier and simpler than conventional approaches to improving circulation, such as exercise or manual massage therapy. Without requiring medical attention or strenuous physical exertion, you can enjoy the benefits of better circulation from the comfort of your own home.
**[Click here to buy now from official website of Nooro](https://slim-gummies-deutschland.de/nooro-leg-massager)**
|
HoangHa/selfies-roberta-cls-mtlb-s3 | HoangHa | 2024-06-30T03:07:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-06-29T10:39:53Z | Entry not found |
iamnguyen/sailor-1.8b-chat | iamnguyen | 2024-06-30T02:16:19Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-29T10:42:32Z | Entry not found |
VKapseln475/SlimGummies4885 | VKapseln475 | 2024-06-29T10:43:38Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T10:42:51Z | # [Erfahrungen] Slim Gummies Bewertungen - Slim Gummies Deutschland Offizieller Preis, Kaufen
[Erfahrungen] Slim Gummies Bewertungen Diese natürlichen und klinisch erprobten Gummis sollen Menschen helfen, gesundes Gewicht zu verlieren und schlank zu werden. Für diejenigen, die Nahrungsergänzungsmittel einnehmen möchten, sind Softgel-Kapseln mit den natürlichen Inhaltsstoffen der Formel erhältlich. Es handelt sich um eine Kapsel zur oralen Fettverbrennung, die Ihren Körper auch daran hindert, Fett zu speichern.
## **[Klicken Sie hier, um Slim Gummies jetzt auf der offiziellen Website zu kaufen](https://slim-gummies-deutschland.de/)**
## Wann kann ich mit einer Wirkung rechnen?
Es erweist sich als schwierig, den genauen Zeitrahmen für die Manifestation der Wirkung von „Slimming Gummies “ zu bestimmen, da die individuellen Körperreaktionen und der Weg zur Gewichtsabnahme unterschiedlich sind. Basierend auf unseren umfangreichen Tests und Recherchen sehen wir jedoch in der Regel innerhavon ein bis zwei Wochen die ersten spürbaren Ergebnisse.
## Wo kann man Schlankheitsgummis kaufen?
Der beste Weg, Sliming Gummies zu kaufen, ist direkt beim Hersteller. Ihre offizielle Website ist eine zuverlässige Quelle für den Erwerb hochwertiger Prodte sowie verschiedener günstiger Angebote, die zu Ihrem Budget passen.
Bei diesen Angeboten sind oft mehrere Dosen der Fruchtgummis enthalten, was zu einem reduzierten Preis pro Dose im Vergleich zum Einzelkauf führt. Daher lohnt es sich, von diesen Angeboten Gebrauch zu machen, allerdings ist zu beachten, dass sie zeitlich begrenzt sind und möglicherweise später nicht verfügbar sind oder höhere Preise anfallen.
Die Bestellung von „Slimming Gummies “ ist ein unkomplizierter Vorgang über ein benutzerfreundliches Online-Formular, in das Sie Ihre Daten eingeben können. Zu den Zahlungsmöglichkeiten gehören PayPal oder Kreditkarte, mit der zusätzlichen Option Klarna für rechnungsbasierte Zahlungen. Wenn Sie sich für Letzteres entscheiden, wird Ihnen eine separate Rechnung per E-Mail zur Zahlung innerhaeiner bestimmten Frist zugesandt.
Nach Abschluss Ihrer Bestellung erhalten Sie eine Bestätigungs-E-Mail mit allen Einzelheiten. Darüber hinaus wird ein Tracking-Link bereitgestellt, mit dem Sie den Lieferstatus Ihres Pakets online verfolgen können. Diese Funktion gibt Ihnen die Flexibilität, Ihre Bestellung zu verfolgen und die ungefähre Ankunftszeit abzuschätzen.
Während Sliming Gummies möglicherweise in Apotheken oder Online-Marktplätzen wie Amazon oder eBay erhältlich ist, können wir in diesen Fällen die Echtheit des Prodts nicht garantieren. Um sicherzustellen, dass Sie das Originalprodt erhalten, wird dringend empfohlen, die Fruchtgummis nur vom offiziellen Hersteller zu kaufen.
## Wie hoch ist der Verkaufspreis des Prodts?
Der Preis der LB-Schlankheitsgummis variiert je nach gewünschter Packungsmenge. Für eine einzelne Packung mit 60 Gummibärchen betragen die Kosten 55,00 £, zuzüglich einer zusätzlichen Gebühr von 4,95 £ für Versand und Bearbeitung, die vom Kunden zu tragen ist. Beim Kauf von zwei Packungen gibt es großzügige 27 % Rabatt auf den Gesamtpreis, inklusive der Versandkosten, die vom Hersteller übernommen werden.
Den günstigsten Preis für Sliming Gummies erhalten Sie jedoch beim Kauf von drei Packungen. Zu den Zahlungsmöglichkeiten zählen Kreditkartenzahlungen, aber auch alternative Methoden wie Sofortüberweisung, Paypal oder Klarna.
## Richtlinien zur Verwendung von Schlankheitsgummis
Um eine optimale Anwendung von Sliming Gummies zu gewährleisten, ist es unbedingt erforderlich, die auf der Verpackung aufgedruckten Anweisungen des Herstellers zu befolgen. Die strikte Einhaltung dieser Richtlinien wird empfohlen, da eine Überschreitung der empfohlenen Dosierung nicht ratsam ist und die gewünschten Wirkungen nicht verbessert oder beschleunigt.
Laut Herstellerempfehlung sollten die fruchtigen Fruchtgummis mehrmals am Tag verzehrt werden, immer dann, wenn Lust auf etwas Süßes aufkommt. Dieser häufige Verzehr ermöglicht eine gleichmäßige Versorgung des Körpers mit lebenswichtigen Vitaminen und Mineralstoffen und verhindert gleichzeitig den Verzehr anderer zuckerhaltiger Snacks.
Typischerweise liegt die empfohlene Tagesdosis von Sliming Gummies zwischen 1 und 2 Gummis, abhängig von den individuellen Bedürfnissen und Zielen. Es ist wichtig, Vorsicht walten zu lassen und die empfohlene Tagesdosis nicht zu überschreiten. Personen mit Vorerkrankungen oder Personen, die derzeit Medikamente einnehmen, wird dringend empfohlen, vor der Anwendung ihren Arzt zu konsultieren.
## **[Klicken Sie hier, um Slim Gummies jetzt auf der offiziellen Website zu kaufen](https://slim-gummies-deutschland.de/)** |
Iqbaliswinning/CATDOGskillacad | Iqbaliswinning | 2024-06-29T10:53:01Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T10:44:09Z | # Install necessary library if not already installed
!pip install tensorflow
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
# Directory paths
train_dir = '/content/dataset'
# Image data generator with augmentation
train_datagen = ImageDataGenerator(
rescale=1./255, # Rescale pixel values to [0, 1]
shear_range=0.2, # Shear transformation
zoom_range=0.2, # Random zoom
horizontal_flip=True # Random horizontal flip
)
# Flow training images in batches of 32 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir,
classes=['cats_set', 'dogs_set'],
target_size=(224, 224), # Resize images to 224x224 (required input size for many pretrained models)
batch_size=32,
class_mode='binary' # Since we have two classes (cats and dogs)
)
# Print the number of images found
print(f"Found {train_generator.samples} images belonging to {train_generator.num_classes} classes.")
# Load a pretrained model
base_model = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
include_top=False,
weights='imagenet')
# Freeze the pretrained layers
base_model.trainable = False
# Create a new model on top of the pretrained base model
model = tf.keras.Sequential([
base_model,
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(1, activation='sigmoid') # Binary classification (cats vs dogs)
])
# Compile the model
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='binary_crossentropy',
metrics=['accuracy'])
# Calculate steps_per_epoch
steps_per_epoch = train_generator.samples // train_generator.batch_size
# Train the model
history = model.fit(train_generator,
epochs=10, # Increase epochs as needed
steps_per_epoch=steps_per_epoch,
verbose=1)
# Evaluate the model
train_loss, train_accuracy = model.evaluate(train_generator)
print(f"Training Accuracy: {train_accuracy}")
|
RobertML/sn21-beryllium | RobertML | 2024-06-29T11:01:16Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
]
| null | 2024-06-29T10:45:17Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
richardblythman/social | richardblythman | 2024-06-29T10:45:37Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T10:45:37Z | Entry not found |
Ramikan-BR/TiamaPY-LORA-v39 | Ramikan-BR | 2024-06-29T10:46:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T10:45:44Z | ---
base_model: unsloth/tinyllama-chat-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
veronica08041991/naschainv13 | veronica08041991 | 2024-06-29T10:50:53Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T10:50:51Z | Entry not found |