modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars) |
---|---|---|---|---|---|---|---|---|---|
ILKT/2024-06-23_09-09-07_epoch_42 | ILKT | 2024-06-28T08:07:37Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:07:36Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_43 | ILKT | 2024-06-28T08:07:54Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:07:53Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
WDong/dpo_0621 | WDong | 2024-06-28T08:33:47Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen2/Qwen2-7B-Instruct",
"license:other",
"region:us"
]
| null | 2024-06-28T08:07:55Z | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: Qwen2/Qwen2-7B-Instruct
model-index:
- name: dpo_0621
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_0621
This model is a fine-tuned version of [/root/LLM_Data_Engineer/LLaMA-Factory/models/Qwen2-7B-Instruct-lora-06072000](https://huggingface.co//root/LLM_Data_Engineer/LLaMA-Factory/models/Qwen2-7B-Instruct-lora-06072000) on the dpo_data_5370_0621 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1241
- Rewards/chosen: -1.0706
- Rewards/rejected: -5.6170
- Rewards/accuracies: 0.9778
- Rewards/margins: 4.5464
- Logps/rejected: -238.9563
- Logps/chosen: -277.6737
- Logits/rejected: -1.3396
- Logits/chosen: -0.1357
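(For orientation: Rewards/margins is the chosen-minus-rejected reward gap, -1.0706 - (-5.6170) = 4.5464, which matches the value reported above.)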
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
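The card ships only these values, not the training script. As a hedged illustration, they map directly onto TRL's DPO API; everything below that is not listed above (the base-model id, dataset path/format, and LoRA settings) is an assumption, not taken from the card:
```python
# Hypothetical sketch: wiring the card's hyperparameters into TRL's DPOTrainer.
# Assumes a recent TRL release providing DPOConfig; paths are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "Qwen/Qwen2-7B-Instruct"  # stand-in for the card's local base-model path
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

config = DPOConfig(
    output_dir="dpo_0621",
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,  # 4 x 8 = effective batch size 32
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
    seed=42,
)

# The dataset must provide prompt/chosen/rejected columns; the file name below
# only echoes the card's dataset name and is not a real path.
train_dataset = load_dataset("json", data_files="dpo_data_5370_0621.json")["train"]

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # LoRA, matching the peft/lora tags
)
trainer.train()
```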
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1 |
ILKT/2024-06-23_09-09-07_epoch_44 | ILKT | 2024-06-28T08:08:12Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:08:11Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_45 | ILKT | 2024-06-28T08:08:29Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:08:28Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_46 | ILKT | 2024-06-28T08:08:46Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:08:45Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_47 | ILKT | 2024-06-28T08:09:04Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:09:03Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_48 | ILKT | 2024-06-28T08:09:22Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:09:21Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
Kathernie/vasista-medium-ta_r_moe | Kathernie | 2024-06-28T11:37:40Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
]
| null | 2024-06-28T08:09:21Z | Entry not found |
habulaj/137524113023 | habulaj | 2024-06-28T08:09:31Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T08:09:30Z | Entry not found |
ILKT/2024-06-23_09-09-07_epoch_49 | ILKT | 2024-06-28T08:09:40Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:09:39Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_50 | ILKT | 2024-06-28T08:09:58Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:09:57Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_51 | ILKT | 2024-06-28T08:10:15Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:10:14Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_52 | ILKT | 2024-06-28T08:10:33Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:10:32Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_53 | ILKT | 2024-06-28T08:10:51Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:10:50Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_54 | ILKT | 2024-06-28T08:11:09Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:11:09Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_55 | ILKT | 2024-06-28T08:11:27Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:11:26Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_56 | ILKT | 2024-06-28T08:11:44Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:11:43Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
evanslur/detr-finetuned-trotoar-100 | evanslur | 2024-06-28T08:11:54Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T08:11:54Z | Entry not found |
ILKT/2024-06-23_09-09-07_epoch_57 | ILKT | 2024-06-28T08:12:02Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:12:01Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_58 | ILKT | 2024-06-28T08:12:20Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:12:19Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_59 | ILKT | 2024-06-28T08:12:38Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:12:37Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_60 | ILKT | 2024-06-28T08:12:55Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:12:54Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_61 | ILKT | 2024-06-28T08:13:13Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:13:12Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_62 | ILKT | 2024-06-28T08:13:30Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:13:29Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_63 | ILKT | 2024-06-28T08:13:56Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:13:55Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_64 | ILKT | 2024-06-28T08:14:14Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:14:14Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_65 | ILKT | 2024-06-28T08:14:32Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:14:31Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_66 | ILKT | 2024-06-28T08:14:49Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:14:49Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_67 | ILKT | 2024-06-28T08:15:07Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:15:06Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_68 | ILKT | 2024-06-28T08:15:25Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:15:24Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_69 | ILKT | 2024-06-28T08:15:42Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:15:41Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_70 | ILKT | 2024-06-28T08:16:00Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:15:59Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_71 | ILKT | 2024-06-28T08:16:18Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:16:17Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_72 | ILKT | 2024-06-28T08:16:36Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:16:35Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_73 | ILKT | 2024-06-28T08:16:54Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:16:52Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_74 | ILKT | 2024-06-28T08:17:11Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:17:10Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-23_09-09-07_epoch_75 | ILKT | 2024-06-28T08:17:29Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"sentence-similarity",
"mteb",
"feature-extraction",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-06-28T08:17:28Z | ---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
LucianoDeben/Reinforce-model1 | LucianoDeben | 2024-06-28T08:49:19Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-28T08:19:41Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-model1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
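The repository itself contains only the trained weights. For context, a minimal REINFORCE training loop for CartPole-v1 looks roughly like the sketch below; this is an illustrative reimplementation in the spirit of the course's Unit 4, not the author's actual script, and the network size, learning rate, and discount factor are assumptions:
```python
# Hedged REINFORCE sketch for CartPole-v1 (gymnasium + torch).
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2), nn.Softmax(dim=-1))
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(1000):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        probs = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated
    # Discounted returns, then the policy-gradient loss: -sum(log_prob * return).
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.insert(0, g)
    returns = torch.as_tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    loss = -(torch.stack(log_probs) * returns).sum()
    optim.zero_grad()
    loss.backward()
    optim.step()
```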
|
djwild/remove-bg | djwild | 2024-07-02T05:53:09Z | 0 | 0 | null | [
"onnx",
"license:gpl-3.0",
"region:us"
]
| null | 2024-06-28T08:23:35Z | ---
license: gpl-3.0
---
|
WDong/dpo_06221544_policy2 | WDong | 2024-06-28T08:35:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen2/Qwen2-7B-Instruct",
"license:other",
"region:us"
]
| null | 2024-06-28T08:28:12Z | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: Qwen2/Qwen2-7B-Instruct
model-index:
- name: dpo_06221544_policy2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_06221544_policy2
This model is a fine-tuned version of [/root/LLM_Data_Engineer/LLaMA-Factory/models/Qwen2-7B-Instruct-sft-06221544-iter1-policy2](https://huggingface.co//root/LLM_Data_Engineer/LLaMA-Factory/models/Qwen2-7B-Instruct-sft-06221544-iter1-policy2) on the dpo_data_5370_0621 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0678
- Rewards/chosen: 0.9462
- Rewards/rejected: -3.0599
- Rewards/accuracies: 0.9778
- Rewards/margins: 4.0060
- Logps/rejected: -203.4532
- Logps/chosen: -274.8549
- Logits/rejected: -1.4117
- Logits/chosen: -0.2185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1 |
EralitePhilippines/EralitePhilippines | EralitePhilippines | 2024-06-28T08:30:36Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T08:28:40Z | ---
license: apache-2.0
---
What is Eralite?
Eralite Pills is an advanced dietary supplement designed to support hearing health and alleviate hearing problems. Formulated with a blend of essential vitamins, minerals, and herbal extracts, the Eralite capsule aims to improve auditory function, enhance ear health, and protect against age-related hearing loss. This supplement is ideal for individuals experiencing hearing issues or those who want to take proactive steps to maintain their hearing health.
Official website: <a href="https://www.nutritionsee.com/eralithilippines">www.Eralite.com</a>
<p><a href="https://www.nutritionsee.com/eralithilippines"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/06/Eralite-Philippines-.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/eralithilippines">Buy now!! Click the link below for more information and get 50% off now... Hurry</a>
Official website: <a href="https://www.nutritionsee.com/eralithilippines">www.Eralite.com</a> |
lukarape/w2v-bert-2.0-acoustic-v30 | lukarape | 2024-06-28T08:29:41Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T08:29:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
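Until the authors complete this section, one hedged way to experiment with the checkpoint is via the generic transformers auto classes; the model's architecture and task are undocumented, so treat the snippet as an assumption based only on the repo name:
```python
# Hypothetical starter: the card does not say which architecture or task this is,
# so the generic auto class is a guess suggested by the name (w2v-bert-2.0).
from transformers import AutoModel

model = AutoModel.from_pretrained("lukarape/w2v-bert-2.0-acoustic-v30")
# Depending on what the repo actually ships, a processor or feature extractor
# may also be needed for audio inputs, e.g. AutoProcessor.from_pretrained(...).
```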
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huggingfacepremium/Phi-3-mini-128k-instruct-bnb-4bit-GGUF | huggingfacepremium | 2024-06-28T08:30:32Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T08:30:32Z | Entry not found |
adem-jaziri-11/MyPetModel | adem-jaziri-11 | 2024-06-28T08:30:35Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T08:30:35Z | Entry not found |
EdwardSpaeth/openllama-3b | EdwardSpaeth | 2024-06-28T08:33:07Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T08:33:07Z | Entry not found |
EdwardSpaeth/openllama-3b-fine-tuned | EdwardSpaeth | 2024-06-28T08:33:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T08:33:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
roopeshrokade/example-model | roopeshrokade | 2024-06-28T09:18:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T08:34:57Z | # Example Model
This is my model card README
---
license: mit
---
|
rajparmar/finetuned_tpicap_model | rajparmar | 2024-06-28T08:36:11Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2024-06-28T08:36:09Z | ---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloomz-560m
tags:
- generated_from_trainer
model-index:
- name: finetuned_tpicap_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_tpicap_model
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0+cu117
- Datasets 2.13.0
- Tokenizers 0.14.1
|
swetapatra/EDOS | swetapatra | 2024-06-28T08:41:58Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T08:38:51Z | # EDOS-OSDM
flanT5-Impl-EDOS-OSDM
|
mukulb/tinyllama-strisakhi | mukulb | 2024-06-28T08:39:50Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T08:39:50Z | Entry not found |
jianzongwu/MotionBooth | jianzongwu | 2024-06-28T09:23:38Z | 0 | 0 | null | [
"arxiv:2406.17758",
"license:mit",
"region:us"
]
| null | 2024-06-28T08:40:37Z | ---
license: mit
---
# Model Card for MotionBooth
## Model Description
- **Paper:** https://arxiv.org/abs/2406.17758v1
- **Project Page:** https://jianzongwu.github.io/projects/motionbooth
- **Github Repository:** https://github.com/jianzongwu/MotionBooth
### Model Summary
Fine-tuned checkpoints from subjects in [the MotionBooth dataset](https://huggingface.co/datasets/jianzongwu/MotionBooth).
```
@article{wu2024motionbooth,
title={MotionBooth: Motion-Aware Customized Text-to-Video Generation},
author={Jianzong Wu and Xiangtai Li and Yanhong Zeng and Jiangning Zhang and Qianyu Zhou and Yining Li and Yunhai Tong and Kai Chen},
journal={arXiv pre-print arXiv:2406.17758},
year={2024},
}
``` |
LucianoDeben/Reinforce-pixelcopterv1 | LucianoDeben | 2024-06-28T08:57:51Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-28T08:40:51Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopterv1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 8.50 +/- 11.86
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AlexeyMol/gnnt_chemical | AlexeyMol | 2024-06-28T09:09:01Z | 0 | 0 | null | [
"license:unknown",
"region:us"
]
| null | 2024-06-28T08:43:18Z | ---
license: unknown
---
|
houbw/llama3_8b_bnb_4bit_ruozhiba_1 | houbw | 2024-06-28T08:47:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T08:47:02Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** houbw
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
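The card gives no usage snippet. Assuming the repo holds weights compatible with Unsloth's loader, a minimal inference sketch might look like this; the sequence length and 4-bit flag are assumptions, and whether this repo loads directly this way depends on which files it actually contains:
```python
# Hedged sketch using Unsloth's FastLanguageModel.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="houbw/llama3_8b_bnb_4bit_ruozhiba_1",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference mode
```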
|
PeterGordon/test1 | PeterGordon | 2024-06-28T11:28:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-06-28T08:51:34Z | ---
{}
---
# Model Card for Nexa Temp Mapping
## Model Description
This model, named Nexa Temp Mapping, is fine-tuned from the Mistral-7B-Instruct-v0.2 model for specialized tasks in creating test cases for temperature mapping of areas. It incorporates enhancements using PEFT (Parameter-Efficient Fine-Tuning) techniques to optimize performance for specific applications.
## Training Data
Describe the dataset used for training the model:
- **Source:** [Specify the source of the training data]
- **Size:** 50 Datapoints
- **Details:** Brief description of the dataset characteristics.
## Intended Use
This model is intended for use in the creation of test cases to qualify equipment such as fridges, freezers, autoclaves, and ovens. It is designed to improve on the base model by incorporating domain knowledge from Supplement 8: Temperature mapping of storage areas, a technical supplement to WHO Technical Report Series, No. 961, 2011, Annex 9: Model guidance for the storage and transport of time- and temperature-sensitive pharmaceutical products.
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("PeterGordon/nexa-temp-mapping")
model = AutoModelForCausalLM.from_pretrained("PeterGordon/nexa-temp-mapping")
text = "Your input text here"
encoded_input = tokenizer(text, return_tensors='pt')
output = model.generate(**encoded_input)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
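Since the repo is also tagged `4-bit` and `bitsandbytes`, loading with an explicit quantization config may be the intended path; a hedged variant of the snippet above, where the config values are assumptions rather than documented settings:
```python
# Hypothetical 4-bit load matching the repo's bitsandbytes tag.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "PeterGordon/nexa-temp-mapping",
    quantization_config=bnb_config,
    device_map="auto",
)
```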
---
license: apache-2.0
---
|
Yash0109/diaratechHf_llamae39f1791-11ff-4c9d-9966-b8f40f002127 | Yash0109 | 2024-06-28T08:58:55Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T08:58:55Z | Entry not found |
Yash0109/diaratechHf_llama59c71617-7bce-43cf-a0b6-d622ea5fdb0f | Yash0109 | 2024-06-28T09:00:21Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:00:21Z | Entry not found |
msplits/peft-starcoder-lora-a100 | msplits | 2024-07-01T08:48:24Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"license:bigcode-openrail-m",
"region:us"
]
| null | 2024-06-28T09:00:45Z | ---
license: bigcode-openrail-m
tags:
- generated_from_trainer
model-index:
- name: peft-starcoder-lora-a100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-starcoder-lora-a100
This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0550
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0005 | 0.05 | 100 | 0.9347 |
| 0.9939 | 0.1 | 200 | 0.9456 |
| 0.6657 | 0.15 | 300 | 0.9741 |
| 0.876 | 0.2 | 400 | 0.9765 |
| 0.9736 | 0.25 | 500 | 0.9916 |
| 0.5713 | 0.3 | 600 | 0.9979 |
| 0.7916 | 0.35 | 700 | 1.0035 |
| 0.8799 | 0.4 | 800 | 1.0083 |
| 0.5209 | 0.45 | 900 | 1.0225 |
| 0.7409 | 0.5 | 1000 | 1.0318 |
| 0.7843 | 0.55 | 1100 | 1.0195 |
| 0.4715 | 0.6 | 1200 | 1.0547 |
| 0.7062 | 0.65 | 1300 | 1.0521 |
| 0.6678 | 0.7 | 1400 | 1.0479 |
| 0.5542 | 0.75 | 1500 | 1.0527 |
| 0.6735 | 0.8 | 1600 | 1.0521 |
| 0.591 | 0.85 | 1700 | 1.0556 |
| 0.619 | 0.9 | 1800 | 1.0586 |
| 0.5836 | 0.95 | 1900 | 1.0570 |
| 0.6231 | 1.0 | 2000 | 1.0550 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.13.3
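The card names bigcode/starcoderbase-1b as the base and the repo name suggests a LoRA adapter, so one plausible (unverified) way to load it is via PEFT; whether the repo actually stores an adapter is an assumption:
```python
# Hedged sketch: assumes this repo holds a PEFT/LoRA adapter for starcoderbase-1b.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bigcode/starcoderbase-1b")
model = PeftModel.from_pretrained(base, "msplits/peft-starcoder-lora-a100")
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderbase-1b")
```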
|
detek/2000_steps | detek | 2024-06-28T09:32:53Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-28T09:02:29Z | Entry not found |
rajparmar/bloomz_finetuned_tpicap_model | rajparmar | 2024-06-28T09:17:19Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2024-06-28T09:02:49Z | ---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloomz-560m
tags:
- generated_from_trainer
model-index:
- name: bloomz_finetuned_tpicap_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloomz_finetuned_tpicap_model
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0+cu117
- Datasets 2.13.0
- Tokenizers 0.14.1
|
Firemido/voicemodels | Firemido | 2024-06-28T09:07:33Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:02:56Z | Entry not found |
Yash0109/diaratechHf_llama2748373c-a51c-4f20-8842-b168cb04d258 | Yash0109 | 2024-06-28T09:03:53Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:03:53Z | Entry not found |
philk11/naschain | philk11 | 2024-06-28T09:05:28Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:05:28Z | Entry not found |
Yash0109/diaratechHf_llama330d9d86-c9f5-4ea8-8d83-ff0d4167d121 | Yash0109 | 2024-06-28T09:09:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"text-generation",
"conversational",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-28T09:05:38Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
library_name: peft
license: apache-2.0
pipeline_tag: text-generation
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: diaratechHf_llama330d9d86-c9f5-4ea8-8d83-ff0d4167d121
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# diaratechHf_llama330d9d86-c9f5-4ea8-8d83-ff0d4167d121
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 2
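As with the other auto-generated cards, the training script itself is absent. A hedged mapping of these values onto TRL's SFTTrainer follows; the toy dataset and LoRA config are placeholders, since the card only names a "generator" dataset:
```python
# Hypothetical sketch reproducing the listed SFT hyperparameters with TRL.
from datasets import Dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

train_dataset = Dataset.from_dict({"text": ["example instruction and response"]})  # placeholder

args = TrainingArguments(
    output_dir="diaratechHf_llama330d9d86",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    lr_scheduler_type="constant",
    max_steps=2,
    seed=42,
)
trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",
    peft_config=LoraConfig(task_type="CAUSAL_LM"),
)
trainer.train()
```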
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2 |
demolei/sft_openassistant-guanaco | demolei | 2024-06-28T09:06:16Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:06:16Z | Entry not found |
bebocoding/sdgsd | bebocoding | 2024-06-28T09:06:44Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:06:44Z | Entry not found |
luissattelmayer/immigration_multilingual_finetuned | luissattelmayer | 2024-06-28T09:07:04Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:07:04Z | Entry not found |
fiyinoye/mt5-base-summarize-yoruba | fiyinoye | 2024-06-28T09:07:13Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:07:13Z | Entry not found |
nglguarino/peft-dialogue-summary-training-1719565688 | nglguarino | 2024-06-28T09:08:08Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:08:08Z | Entry not found |
ShapeKapseln33/SlimGummies776 | ShapeKapseln33 | 2024-06-28T09:16:03Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:09:52Z | [Deutschland] Slim Gummies Bewertungen Diese natürlichen und klinisch erprobten Gummis sollen Menschen helfen, gesundes Gewicht zu verlieren und schlank zu werden. Für diejenigen, die Nahrungsergänzungsmittel einnehmen möchten, sind Softgel-Kapseln mit den natürlichen Inhaltsstoffen der Formel erhältlich. Es handelt sich um eine Kapsel zur oralen Fettverbrennung, die Ihren Körper auch daran hindert, Fett zu speichern.
**[Klicken Sie hier, um Slim Gummies jetzt auf der offiziellen Website zu kaufen](https://slim-gummies-deutschland.de/)**
Faktoren wie Alter, Geschlecht und Gewicht sowie der allgemeine Gesundheitszustand können die Wirkung beeinflussen. Konsultationen mit Fachpersonal vor der Einnahme sind zu empfehlen, um Verträglichkeit und mögliche Wechselwirkungen zu überprüfen.
dm und Rossmann
Produktverfügbarkeit: Slimm Gummies sind nicht im Sortiment.
Verkaufsstellen: Nicht bei dm und Rossmann erhältlich.
Stiftung Warentest
Slim Gummies: Nicht geprüft
Aktueller Stand: Keine Testergebnisse vorhanden
Kundenmeinungen, Kritiken, Erfahrungsberichte und Bewertungen
Die Hauptbestandteile von Abnehm-Gummis wie Äpfelsäure, Vitamin B12 und Folsäure sind bekannt für ihre gesundheitsfördernden Eigenschaften. Äpfelsäure soll das Gewichtsmanagement unterstützen und Vitamin B12 sowie Folsäure tragen zum Energiemetabolismus bei. Wissenschaftliche Publikationen erkennen die Bedeutung dieser Stoffe für die Gesundheit an, jedoch ist der direkte Effekt auf die Gewichtsreduktion nicht einheitlich und hängt von persönlichen Umständen ab.
Bei einer empfohlenen Tagesdosis von zwei Stück, lassen sich die möglichen positiven Eigenschaften der Inhaltsstoffe leicht in den Alltag integrieren.
**[Klicken Sie hier, um Slim Gummies jetzt auf der offiziellen Website zu kaufen](https://slim-gummies-deutschland.de/)**
Berichte deuten auf eine Reihe möglicher positiver Effekte hin, wie erhöhte Energieverfügbarkeit und verringertes Hungergefühl, welche bis zu sichtbaren Resultaten bei der Gewichtsreduktion reichen. Obwohl diese Berichte konstruiert sein können, zeigen sie das mögliche Spektrum an Wirkungen, die Konsumenten erfahren könnten. Der Geschmack und die Verträglichkeit der Kautabletten werden oft positiv bewertet.
Zahlreiche Slimm Gummies Erfahrungsberichte und die offenkundigen Nutzen der Bestandteile sprechen für das Produkt, jedoch sind differenzierte Überlegungen nötig. Die Wirksamkeit von solchen Ergänzungsmitteln kann unterschiedlich sein, und es mangelt an langfristigen Studien. Nahrungsergänzungsmittel sollten eine ausgewogene Ernährung und Bewegung nicht ersetzen, sondern ergänzen.
Insgesamt stellen die Slimm Fruchtgummis eine attraktive Möglichkeit dar, um Bemühungen für einen gesunden Lebenswandel zu unterstützen. Die Kombination aus angenehmem Geschmack und einfacher Anwendung, zusammen mit den positiven Eigenschaften der Inhaltsstoffe, zeichnet sie aus. Es ist empfehlenswert, die Einnahme von Nahrungsergänzungsmitteln mit einem Fachmann abzustimmen und realistisch in Bezug auf die erwarteten Resultate zu bleiben. Abnehm-Gummis können als nützliche Ergänzung angesehen werden, sofern sie richtig angewendet und in einem gesunden Lebensstil integriert werden.
##Slimm Gummies zum besten Preis erwerben
Beim Online-Kauf von Nahrungsergänzungsmitteln zur Unterstützung des Gewichtsmanagement ist es wichtig, vertrauenswürdige Anbieter zu wählen. Slimm Gummies bieten eine geschmackvolle Alternative zu herkömmlichen Präparaten und können aktuell mit Preisnachlässen erworben werden.
##Wurden Slim Gummies in der Fernsehsendung Höhle der Löwen gezeigt?
Die Diskussion über Slim Gummies, ein Diätprodukt in Gummibärchenform, umfasst unter anderem deren angebliche Präsenz in der bekannten Fernsehshow „Die Höhle der Löwen“. Betrachtet man die öffentlich zugänglichen Informationen, ergibt sich folgendes Bild: Die Slimm Gummies wurden nicht in Höhle der Löwen vorgestellt.
Es bleibt festzustellen, dass Werbung und Realität bei Produkten zum Abnehmen nicht immer übereinstimmen und Verbraucher gut beraten sind, sich eingehend zu informieren.
**[Klicken Sie hier, um Slim Gummies jetzt auf der offiziellen Website zu kaufen](https://slim-gummies-deutschland.de/)**
|
Asme/w2v-bert-2.0-amh | Asme | 2024-06-28T09:10:14Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:10:14Z | Entry not found |
whizzzzkid/whizzzzkid_245_5 | whizzzzkid | 2024-06-28T09:14:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-28T09:12:14Z | Entry not found |
Boostaro155/PharmaFlex455 | Boostaro155 | 2024-06-28T09:15:20Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:14:51Z | # PharmaFlex XR Reviews & Experiences – Pharma Flex South Korea Benefis Official Price, Buy
PharmaFlex XR Reviews & Experiences - The most popular nutritional supplement Pharma Flex Rx is intended to maintain and promote joint health. The manufacturer says it features all natural ingredients with no fillers.
## **[Click Here To Buy Now From Official Website Of PharmaFlex XR](https://capsules24x7.com/pharma-flex-kr)**
## Description
PharmaFlex Rx is a breakthrough joint support formula designed to help people with joint pain return to an active life. This unique supplement aims to relieve joint pain, support muscle recovery, accelerate joint repair and strengthen connective tissue. With PharmaFlex Rx you can relieve everyday discomfort and feel mobile again.
## Areas of use
- Joint pain
- Wear and tear of the joints
- Arthritis
- Sports injuries
- Muscle fatigue and recovery
## PharmaFlex Rx - How it works
The way PharmaFlex Rx works is based on a unique combination of ingredients. These ingredients strengthen the joints, inhibit inflammation and relieve pain. The formula works synergistically to provide holistic joint support.
## PharmaFlex Rx - Ingredients and Active Ingredient
PharmaFlex Rx contains high-quality ingredients, including:
- Glucosamine sulfate: Helps produce cartilage and supports joint function.
- Turmeric root extract: Fights inflammation and relieves pain.
- MSM (methylsulfonylmethane): Reduces joint pain and improves mobility.
- Bromelain: Has anti-inflammatory and pain-relieving properties.
## PharmaFlex Rx – Effects – Impacts
Regular use of PharmaFlex Rx can lead to the following effects:
- Relief of joint pain
- Supporting muscle recovery
- Accelerating joint repair
- Strengthening connective tissue
- Reducing everyday ailments
## **[Click Here To Buy Now From Official Website Of PharmaFlex XR](https://capsules24x7.com/pharma-flex-kr)** |
Yash0109/diaratechHf_llama930a077e-e52e-4344-8912-f1853818e9f1 | Yash0109 | 2024-06-28T09:16:33Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-28T09:15:09Z | Entry not found |
Sayalik45/function_calling | Sayalik45 | 2024-06-28T09:16:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-1.1-2b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T09:16:26Z | ---
base_model: unsloth/gemma-1.1-2b-it-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
---
# Uploaded model
- **Developed by:** Sayalik45
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-1.1-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IIIIID/Staplus | IIIIID | 2024-06-28T09:17:17Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T09:17:17Z | ---
license: apache-2.0
---
|
bebocoding/slaldkda | bebocoding | 2024-06-28T09:18:46Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:18:46Z | Entry not found |
yraziel/amir_dadon | yraziel | 2024-06-28T09:22:35Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:20:52Z | Entry not found |
elrom/vibe-ish | elrom | 2024-06-28T09:21:33Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T09:21:33Z | ---
license: apache-2.0
---
|
huggingfacepremium/NeuralBeagle14-7B-GGUF | huggingfacepremium | 2024-06-28T09:24:43Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:24:43Z | Entry not found |
Anjana10/LoRA-IndicBART-XLSum-Fine-tuned | Anjana10 | 2024-06-28T10:19:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T09:25:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
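Pending documentation, a hedged loading sketch inferred from the repository name (LoRA adapters for IndicBART fine-tuned on XLSum); the base-model id, task, and PEFT usage are all assumptions:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

base_id = "ai4bharat/IndicBART"  # assumed base model, inferred from the repo name
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)

# attach the LoRA adapters from this repository on top of the base model
model = PeftModel.from_pretrained(base, "Anjana10/LoRA-IndicBART-XLSum-Fine-tuned")

text = "..."  # an article to summarize
ids = tokenizer(text, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```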
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jan-hq/llama3_test | jan-hq | 2024-06-28T09:28:19Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:28:19Z | Entry not found |
rayanrayan/German-to-Urdu | rayanrayan | 2024-06-28T10:09:37Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:30:17Z | Entry not found |
AmritaBha/sd15_fill_mscoco | AmritaBha | 2024-06-28T09:30:21Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:30:21Z | Entry not found |
febattig/example-model | febattig | 2024-06-28T09:55:48Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:30:35Z |
---
license: mit
---
Felix Battig
|
MarcelPower/codet5-large-mbpp | MarcelPower | 2024-06-28T12:13:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T09:33:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
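Pending documentation, a hedged sketch inferred from the repository name (CodeT5-large fine-tuned on MBPP); the architecture class and the text-to-code task are assumptions:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

repo = "MarcelPower/codet5-large-mbpp"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = T5ForConditionalGeneration.from_pretrained(repo)  # CodeT5 uses the T5 architecture

# MBPP-style natural-language prompt asking for a Python function
prompt = "Write a function to find the shared elements from the given two lists."
ids = tokenizer(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```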
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sebgobb/test_lora_llama3model | sebgobb | 2024-06-28T09:37:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T09:37:03Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** sebgobb
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
PrunaAI/zjunlp-OceanGPT-7b-v0.1-QUANTO-int2bit-smashed | PrunaAI | 2024-07-01T07:59:35Z | 0 | 0 | transformers | [
"transformers",
"pruna-ai",
"base_model:zjunlp/OceanGPT-7b-v0.1",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T09:40:22Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: zjunlp/OceanGPT-7b-v0.1
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with quanto.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo zjunlp/OceanGPT-7b-v0.1 are installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install quanto
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/zjunlp-OceanGPT-7b-v0.1-QUANTO-int2bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("zjunlp/OceanGPT-7b-v0.1")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
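For context, a minimal sketch of how int2 weight quantization with quanto works in general — an illustration of the method named above, not Pruna's actual smashing pipeline:

```python
import quanto
from transformers import AutoModelForCausalLM

# quantize the base model's linear weights to int2, then freeze them in place
model = AutoModelForCausalLM.from_pretrained("zjunlp/OceanGPT-7b-v0.1")
quanto.quantize(model, weights=quanto.qint2)  # swap weights for int2 quantized tensors
quanto.freeze(model)                          # materialize the quantized weights
```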
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model zjunlp/OceanGPT-7b-v0.1 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
SASSASASA/Model1 | SASSASASA | 2024-06-28T09:42:52Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:42:52Z | Entry not found |
fahad800x/fahad | fahad800x | 2024-06-28T09:45:40Z | 0 | 0 | null | [
"license:ncsa",
"region:us"
]
| null | 2024-06-28T09:45:40Z | ---
license: ncsa
---
|
marthakk/detr_finetuned_oculardataset | marthakk | 2024-06-28T10:56:55Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"conditional_detr",
"object-detection",
"generated_from_trainer",
"dataset:dsi",
"base_model:microsoft/conditional-detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2024-06-28T09:46:25Z | ---
license: apache-2.0
base_model: microsoft/conditional-detr-resnet-50
tags:
- generated_from_trainer
datasets:
- dsi
model-index:
- name: detr_finetuned_oculardataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_oculardataset
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on the dsi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0672
- Map: 0.3032
- Map 50: 0.4973
- Map 75: 0.3701
- Map Small: 0.2981
- Map Medium: 0.6746
- Map Large: -1.0
- Mar 1: 0.1
- Mar 10: 0.3678
- Mar 100: 0.4114
- Mar Small: 0.4054
- Mar Medium: 0.7421
- Mar Large: -1.0
- Map Falciparum Trophozoite: 0.0156
- Mar 100 Falciparum Trophozoite: 0.1511
- Map Wbc: 0.5908
- Mar 100 Wbc: 0.6716
## Model description
More information needed
## Intended uses & limitations
More information needed
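Pending documentation, a minimal inference sketch, assuming the checkpoint loads through the standard object-detection auto classes (the image path is hypothetical):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

repo = "marthakk/detr_finetuned_oculardataset"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForObjectDetection.from_pretrained(repo)

image = Image.open("smear.jpg")  # hypothetical microscopy image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# keep detections scoring above 0.5, mapped back to the original image size
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```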
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Falciparum Trophozoite | Mar 100 Falciparum Trophozoite | Map Wbc | Mar 100 Wbc |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:--------------------------:|:------------------------------:|:-------:|:-----------:|
| No log | 1.0 | 86 | 1.6645 | 0.131 | 0.2562 | 0.1153 | 0.1289 | 0.3974 | -1.0 | 0.0647 | 0.2312 | 0.3164 | 0.314 | 0.6159 | -1.0 | 0.0004 | 0.0456 | 0.2616 | 0.5873 |
| No log | 2.0 | 172 | 1.4800 | 0.2028 | 0.4079 | 0.1766 | 0.1993 | 0.4876 | -1.0 | 0.0677 | 0.2725 | 0.3282 | 0.3251 | 0.628 | -1.0 | 0.0007 | 0.0648 | 0.405 | 0.5915 |
| No log | 3.0 | 258 | 1.3829 | 0.2264 | 0.4496 | 0.1936 | 0.2193 | 0.5542 | -1.0 | 0.0729 | 0.2807 | 0.3215 | 0.3168 | 0.629 | -1.0 | 0.0019 | 0.0706 | 0.451 | 0.5725 |
| No log | 4.0 | 344 | 1.3318 | 0.2089 | 0.4403 | 0.1427 | 0.2056 | 0.4726 | -1.0 | 0.0691 | 0.2751 | 0.3221 | 0.3116 | 0.6748 | -1.0 | 0.002 | 0.0941 | 0.4158 | 0.5502 |
| No log | 5.0 | 430 | 1.2739 | 0.2454 | 0.4562 | 0.2342 | 0.2354 | 0.614 | -1.0 | 0.0777 | 0.3046 | 0.3482 | 0.338 | 0.7262 | -1.0 | 0.002 | 0.0906 | 0.4888 | 0.6058 |
| 1.7665 | 6.0 | 516 | 1.2365 | 0.2599 | 0.4744 | 0.2599 | 0.2522 | 0.6258 | -1.0 | 0.0846 | 0.3217 | 0.361 | 0.354 | 0.7 | -1.0 | 0.005 | 0.1047 | 0.5149 | 0.6173 |
| 1.7665 | 7.0 | 602 | 1.2548 | 0.2488 | 0.4689 | 0.2302 | 0.2434 | 0.5622 | -1.0 | 0.0788 | 0.31 | 0.3519 | 0.3446 | 0.6888 | -1.0 | 0.0038 | 0.1012 | 0.4938 | 0.6026 |
| 1.7665 | 8.0 | 688 | 1.2031 | 0.2715 | 0.474 | 0.3074 | 0.2664 | 0.6153 | -1.0 | 0.0897 | 0.3309 | 0.3744 | 0.3723 | 0.657 | -1.0 | 0.0058 | 0.1164 | 0.5373 | 0.6325 |
| 1.7665 | 9.0 | 774 | 1.2492 | 0.2417 | 0.4715 | 0.2154 | 0.2349 | 0.5753 | -1.0 | 0.0789 | 0.3064 | 0.3503 | 0.342 | 0.686 | -1.0 | 0.0043 | 0.1129 | 0.4791 | 0.5877 |
| 1.7665 | 10.0 | 860 | 1.1861 | 0.2752 | 0.4772 | 0.2891 | 0.2683 | 0.6259 | -1.0 | 0.0872 | 0.3342 | 0.3823 | 0.379 | 0.6813 | -1.0 | 0.0061 | 0.1217 | 0.5443 | 0.6429 |
| 1.7665 | 11.0 | 946 | 1.1996 | 0.2607 | 0.4605 | 0.2779 | 0.2565 | 0.5972 | -1.0 | 0.085 | 0.326 | 0.3722 | 0.3669 | 0.6813 | -1.0 | 0.0041 | 0.1254 | 0.5173 | 0.6189 |
| 1.2663 | 12.0 | 1032 | 1.1664 | 0.2764 | 0.4753 | 0.3137 | 0.2718 | 0.6148 | -1.0 | 0.0892 | 0.333 | 0.3781 | 0.3741 | 0.685 | -1.0 | 0.0054 | 0.1188 | 0.5473 | 0.6375 |
| 1.2663 | 13.0 | 1118 | 1.1451 | 0.2804 | 0.4694 | 0.3212 | 0.2732 | 0.6595 | -1.0 | 0.092 | 0.3412 | 0.3852 | 0.3787 | 0.7187 | -1.0 | 0.0051 | 0.1282 | 0.5557 | 0.6421 |
| 1.2663 | 14.0 | 1204 | 1.1251 | 0.2889 | 0.4761 | 0.3401 | 0.2835 | 0.6619 | -1.0 | 0.0926 | 0.3496 | 0.3979 | 0.393 | 0.714 | -1.0 | 0.0091 | 0.1391 | 0.5687 | 0.6567 |
| 1.2663 | 15.0 | 1290 | 1.1493 | 0.2778 | 0.4695 | 0.3126 | 0.2706 | 0.6531 | -1.0 | 0.0911 | 0.3415 | 0.3881 | 0.3792 | 0.743 | -1.0 | 0.0054 | 0.1382 | 0.5502 | 0.6379 |
| 1.2663 | 16.0 | 1376 | 1.1125 | 0.2846 | 0.4799 | 0.3307 | 0.2804 | 0.6415 | -1.0 | 0.0926 | 0.3498 | 0.4005 | 0.3954 | 0.7159 | -1.0 | 0.0075 | 0.1452 | 0.5617 | 0.6558 |
| 1.2663 | 17.0 | 1462 | 1.1002 | 0.2909 | 0.4816 | 0.3471 | 0.2859 | 0.6545 | -1.0 | 0.0956 | 0.3554 | 0.4036 | 0.3969 | 0.7421 | -1.0 | 0.0077 | 0.145 | 0.5741 | 0.6622 |
| 1.1448 | 18.0 | 1548 | 1.1066 | 0.2853 | 0.484 | 0.3205 | 0.2796 | 0.6647 | -1.0 | 0.0918 | 0.3472 | 0.3944 | 0.3883 | 0.7196 | -1.0 | 0.0092 | 0.1415 | 0.5613 | 0.6474 |
| 1.1448 | 19.0 | 1634 | 1.0993 | 0.2933 | 0.4838 | 0.3441 | 0.2884 | 0.6683 | -1.0 | 0.0978 | 0.3581 | 0.401 | 0.3958 | 0.7252 | -1.0 | 0.0079 | 0.1374 | 0.5787 | 0.6645 |
| 1.1448 | 20.0 | 1720 | 1.0850 | 0.298 | 0.4855 | 0.3594 | 0.2923 | 0.6669 | -1.0 | 0.0963 | 0.3606 | 0.4011 | 0.3952 | 0.7374 | -1.0 | 0.0093 | 0.1348 | 0.5867 | 0.6675 |
| 1.1448 | 21.0 | 1806 | 1.0814 | 0.3006 | 0.4908 | 0.3618 | 0.2951 | 0.6868 | -1.0 | 0.0994 | 0.3628 | 0.4056 | 0.4001 | 0.7355 | -1.0 | 0.0117 | 0.1413 | 0.5896 | 0.67 |
| 1.1448 | 22.0 | 1892 | 1.0836 | 0.2975 | 0.495 | 0.3541 | 0.2924 | 0.6712 | -1.0 | 0.0989 | 0.3628 | 0.4084 | 0.4036 | 0.7196 | -1.0 | 0.0135 | 0.1534 | 0.5816 | 0.6633 |
| 1.1448 | 23.0 | 1978 | 1.0813 | 0.2996 | 0.4965 | 0.3567 | 0.2941 | 0.6792 | -1.0 | 0.0979 | 0.3625 | 0.408 | 0.402 | 0.7364 | -1.0 | 0.015 | 0.1505 | 0.5842 | 0.6655 |
| 1.0601 | 24.0 | 2064 | 1.0707 | 0.3048 | 0.4952 | 0.3624 | 0.2987 | 0.6876 | -1.0 | 0.0981 | 0.3659 | 0.4118 | 0.4054 | 0.7486 | -1.0 | 0.0144 | 0.1501 | 0.5951 | 0.6735 |
| 1.0601 | 25.0 | 2150 | 1.0736 | 0.2982 | 0.4935 | 0.3584 | 0.2931 | 0.6732 | -1.0 | 0.0992 | 0.3638 | 0.41 | 0.4053 | 0.7224 | -1.0 | 0.0126 | 0.1521 | 0.5839 | 0.6678 |
| 1.0601 | 26.0 | 2236 | 1.0717 | 0.3034 | 0.4978 | 0.3622 | 0.2986 | 0.6788 | -1.0 | 0.0995 | 0.3659 | 0.411 | 0.405 | 0.7421 | -1.0 | 0.015 | 0.1501 | 0.5918 | 0.6719 |
| 1.0601 | 27.0 | 2322 | 1.0688 | 0.3025 | 0.4978 | 0.3622 | 0.2975 | 0.6747 | -1.0 | 0.1 | 0.3674 | 0.4108 | 0.4047 | 0.7421 | -1.0 | 0.0161 | 0.1524 | 0.5888 | 0.6693 |
| 1.0601 | 28.0 | 2408 | 1.0679 | 0.3031 | 0.4968 | 0.3638 | 0.2976 | 0.6805 | -1.0 | 0.0999 | 0.3679 | 0.4106 | 0.4046 | 0.7421 | -1.0 | 0.0156 | 0.1507 | 0.5905 | 0.6705 |
| 1.0601 | 29.0 | 2494 | 1.0669 | 0.3035 | 0.4976 | 0.3717 | 0.2985 | 0.6751 | -1.0 | 0.0999 | 0.368 | 0.4115 | 0.4055 | 0.743 | -1.0 | 0.0156 | 0.1509 | 0.5915 | 0.6721 |
| 1.0103 | 30.0 | 2580 | 1.0672 | 0.3032 | 0.4973 | 0.3701 | 0.2981 | 0.6746 | -1.0 | 0.1 | 0.3678 | 0.4114 | 0.4054 | 0.7421 | -1.0 | 0.0156 | 0.1511 | 0.5908 | 0.6716 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
bsmani/paligemma-3b-ft-scicap-224-caption | bsmani | 2024-06-28T09:47:45Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:47:44Z | Entry not found |
houbw/llama3_8b_bnb_4bit_ruozhiba_2 | houbw | 2024-06-28T09:50:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T09:50:27Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** houbw
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Ziray/lora_model | Ziray | 2024-06-28T09:51:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T09:51:15Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Ziray
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ai-tools-searchs/soda | ai-tools-searchs | 2024-06-28T09:53:22Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:52:41Z | Entry not found |
domasin/code-search-net-tokenizer | domasin | 2024-06-28T09:52:45Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T09:52:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
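Given the repository name, this appears to hold a tokenizer trained on CodeSearchNet rather than model weights; a minimal sketch, assuming it loads with `AutoTokenizer`:

```python
from transformers import AutoTokenizer

# load the tokenizer and split a small code snippet into tokens
tokenizer = AutoTokenizer.from_pretrained("domasin/code-search-net-tokenizer")
print(tokenizer.tokenize("def add(a, b):\n    return a + b"))
```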
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Pingkkkkklksl/my700mdl | Pingkkkkklksl | 2024-06-28T10:24:22Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2024-06-28T09:52:56Z | ---
license: mit
---
|
Kibalama/PixelCopter-02 | Kibalama | 2024-06-28T09:54:41Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-28T09:54:38Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PixelCopter-02
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 34.00 +/- 28.25
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
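For reference, a minimal sketch of the REINFORCE policy-gradient update such an agent is trained with in that course unit (the function name and return normalization are illustrative choices):

```python
import torch

def reinforce_step(optimizer, log_probs, rewards, gamma=0.99):
    """One policy-gradient update from a single episode.

    log_probs: list of 0-dim tensors, log pi(a_t | s_t) for each step
    rewards:   list of floats, the reward received at each step
    """
    returns, g = [], 0.0
    for r in reversed(rewards):          # discounted return-to-go, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize as a simple baseline
    loss = -(torch.stack(log_probs) * returns).sum()  # maximize expected return
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```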
|
OmBayus/deneme123 | OmBayus | 2024-06-28T09:55:37Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-28T09:55:37Z | Entry not found |