modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (string, 245 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1–901k chars)
---|---|---|---|---|---|---|---|---|---|
papagruz2/naschain | papagruz2 | 2024-07-01T19:40:52Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T01:13:06Z | Entry not found |
papagruz1/naschain | papagruz1 | 2024-07-01T19:36:49Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T01:14:13Z | Entry not found |
papagruz3/naschain | papagruz3 | 2024-07-01T19:45:30Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T01:15:43Z | Entry not found |
papagruz4/naschain | papagruz4 | 2024-07-01T19:53:43Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T01:16:59Z | Entry not found |
papagruz5/naschain | papagruz5 | 2024-07-01T19:58:17Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T01:17:25Z | Entry not found |
boringtaskai/paligemma_ezcart | boringtaskai | 2024-06-29T10:23:58Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-29T01:17:41Z | Entry not found |
mshams59/peft-starcoder-lora-a100 | mshams59 | 2024-06-29T01:22:46Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T01:22:46Z | Entry not found |
thaodao3101/my_awesome_model | thaodao3101 | 2024-06-29T01:22:51Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T01:22:51Z | Entry not found |
SangBinCho/SangBinCho | SangBinCho | 2024-06-29T01:25:13Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-29T01:25:12Z | Entry not found |
SangBinCho/mixtral-lora | SangBinCho | 2024-06-29T01:25:30Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-29T01:25:29Z | Entry not found |
sirnii/Nina | sirnii | 2024-07-01T16:54:00Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"not-for-all-audiences",
"en",
"ja",
"pt",
"dataset:BAAI/Infinity-Instruct",
"dataset:nvidia/HelpSteer2",
"dataset:NousResearch/CharacterCodex",
"dataset:UCSC-VLAA/Recap-DataComp-1B",
"dataset:Salesforce/xlam-function-calling-60k",
"dataset:OpenGVLab/ShareGPT-4o",
"dataset:tomg-group-umd/pixelprose",
"license:mit",
"region:us"
]
| null | 2024-06-29T01:32:07Z | ---
license: mit
datasets:
- BAAI/Infinity-Instruct
- nvidia/HelpSteer2
- NousResearch/CharacterCodex
- UCSC-VLAA/Recap-DataComp-1B
- Salesforce/xlam-function-calling-60k
- OpenGVLab/ShareGPT-4o
- tomg-group-umd/pixelprose
language:
- en
- ja
- pt
metrics:
- accuracy
- code_eval
- character
library_name: adapter-transformers
tags:
- not-for-all-audiences
--- |
net31/model06 | net31 | 2024-06-29T01:32:43Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T01:32:41Z | Entry not found |
GraydientPlatformAPI/loras-jun29 | GraydientPlatformAPI | 2024-06-29T03:54:22Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T01:38:25Z | Entry not found |
habulaj/282389251423 | habulaj | 2024-06-29T01:45:05Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T01:44:59Z | Entry not found |
habulaj/12009699321 | habulaj | 2024-06-29T01:45:35Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T01:45:27Z | Entry not found |
Frixi/DeLaGuetto | Frixi | 2024-06-29T01:56:58Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T01:54:22Z | ---
license: openrail
---
|
Frixi/JonZ | Frixi | 2024-06-29T01:57:28Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T01:57:09Z | ---
license: openrail
---
|
habulaj/72568412 | habulaj | 2024-06-29T01:59:56Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T01:59:45Z | Entry not found |
KIRANKALLA/WeaponDetection | KIRANKALLA | 2024-06-30T03:53:57Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"conditional_detr",
"object-detection",
"endpoints_compatible",
"region:us"
]
| object-detection | 2024-06-29T02:06:35Z | Entry not found |
LIghtJUNction/LIght1.0 | LIghtJUNction | 2024-06-29T02:06:37Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T02:06:37Z | Entry not found |
quirky-lats-at-mats/rmu_lat_5 | quirky-lats-at-mats | 2024-06-29T02:10:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-29T02:08:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Garinnava/Comic | Garinnava | 2024-06-29T02:17:08Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T02:17:08Z | Entry not found |
TenebrisLux/alucard | TenebrisLux | 2024-06-29T02:23:48Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T02:20:18Z | ---
license: openrail
---
|
Koleshjr/unquantized_mistral_7b_v2_9_epochs | Koleshjr | 2024-06-29T02:30:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T02:30:01Z | ---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** Koleshjr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AkhmadXvip/Bot-telegram | AkhmadXvip | 2024-06-29T02:41:57Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T02:32:32Z | Entry not found |
JohnCnr/naschain | JohnCnr | 2024-07-01T12:48:17Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T02:33:36Z | Entry not found |
StartBleackRabbit/my_awesome_model | StartBleackRabbit | 2024-06-29T02:36:37Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T02:36:37Z | Entry not found |
Ultrafilter/mediset | Ultrafilter | 2024-06-29T03:07:26Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T02:38:04Z | Entry not found |
Rusvo/naschain | Rusvo | 2024-07-02T16:03:11Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T02:40:51Z | Entry not found |
AdamKasumovic/llama3-70b-instruct-winogrande-train-s-af-winogrande-good | AdamKasumovic | 2024-06-29T02:42:51Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T02:42:50Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-70b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dong625/myself | dong625 | 2024-06-29T02:44:05Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T02:44:05Z | Entry not found |
AdamKasumovic/llama3-70b-instruct-mmlu-college-medicine-af-mmlu-random | AdamKasumovic | 2024-06-29T02:47:45Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T02:47:44Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-70b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rusov7/naschain | rusov7 | 2024-07-02T23:31:19Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T02:50:48Z | Entry not found |
bihungba1101/segment-essay | bihungba1101 | 2024-06-29T12:13:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T02:52:02Z | ---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** bihungba1101
- **License:** apache-2.0
- **Finetuned from model :** meta-llama/Meta-Llama-3-8B-Instruct
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rusov4/naschain | rusov4 | 2024-07-02T23:16:13Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T02:52:09Z | Entry not found |
habulaj/344028309388 | habulaj | 2024-06-29T02:55:17Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T02:54:48Z | Entry not found |
strwbrylily/Sullyoon-of-NMIXX-by-strwbrylily | strwbrylily | 2024-06-29T02:58:03Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T02:55:44Z | ---
license: openrail
---
|
rusov/naschain | rusov | 2024-07-02T13:37:00Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T03:01:05Z | Entry not found |
rusov3/naschain | rusov3 | 2024-07-02T14:57:14Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T03:10:03Z | Entry not found |
net31/model08 | net31 | 2024-06-29T03:15:10Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T03:14:15Z | Entry not found |
habulaj/198423172105 | habulaj | 2024-06-29T03:18:20Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T03:18:17Z | Entry not found |
habulaj/8025158076 | habulaj | 2024-06-29T03:20:20Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T03:20:17Z | Entry not found |
c00kiemaster/ThiagoAraujo | c00kiemaster | 2024-06-29T03:25:15Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T03:24:40Z | Entry not found |
habulaj/423471390418 | habulaj | 2024-06-29T03:25:06Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T03:25:02Z | Entry not found |
Aryanshanu/Texttoimage | Aryanshanu | 2024-06-29T03:25:30Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T03:25:30Z | Entry not found |
slelab/AES16 | slelab | 2024-06-29T03:58:13Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T03:35:05Z | Entry not found |
net31/model09 | net31 | 2024-06-29T03:37:10Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T03:37:07Z | Entry not found |
shengxuelim/Reinforce-Cartpole-v1 | shengxuelim | 2024-06-29T03:37:44Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-29T03:37:33Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
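The Reinforce agent above is trained from Monte-Carlo returns. As an illustrative sketch of that return computation only (this helper is not part of this repository, and the hyperparameters are assumptions):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute the reward-to-go G_t = r_t + gamma * G_{t+1} used by a REINFORCE update.

    Illustrative sketch, not code shipped with this model.
    """
    returns = []
    g = 0.0
    for r in reversed(rewards):  # accumulate from the final step backwards
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

# CartPole pays +1 per step; with gamma=1.0 the first return equals the
# episode length, which is why a full 500-step episode scores 500.
print(discounted_returns([1.0] * 5, gamma=1.0))  # -> [5.0, 4.0, 3.0, 2.0, 1.0]
```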
|
NAYEONCEot9cover/MOMO | NAYEONCEot9cover | 2024-06-29T03:40:20Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T03:38:02Z | ---
license: openrail
---
|
statking/paligemma27b_rec_lora_kalora_test_fold0 | statking | 2024-06-29T03:38:30Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T03:38:30Z | Entry not found |
albacore/naschain | albacore | 2024-06-29T03:42:22Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T03:42:20Z | Entry not found |
NAYEONCEot9cover/SANAOFTWICEALLROUNDVER | NAYEONCEot9cover | 2024-06-29T03:46:04Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T03:44:04Z | ---
license: openrail
---
|
jdollman/CartPole-v1 | jdollman | 2024-06-29T03:47:19Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-29T03:47:13Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Lulla/test | Lulla | 2024-06-29T03:48:46Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T03:48:14Z | Entry not found |
NAYEONCEot9cover/LISASOLO | NAYEONCEot9cover | 2024-06-29T03:50:26Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T03:48:23Z | ---
license: openrail
---
|
JennnnnyD/fine-tuned_llama-2-7B-HF_learning-version | JennnnnyD | 2024-06-29T03:49:36Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2024-06-29T03:49:36Z | ---
license: mit
---
|
slelab/AES17 | slelab | 2024-06-29T04:23:14Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T04:00:33Z | Entry not found |
net31/uid222 | net31 | 2024-06-29T04:51:23Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T04:04:08Z | Entry not found |
marko62/Dragan | marko62 | 2024-06-29T04:06:00Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T04:06:00Z | Entry not found |
ElPacay/Prueba | ElPacay | 2024-06-29T04:10:06Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T04:10:06Z | Entry not found |
chutoro/naschain | chutoro | 2024-06-29T04:10:25Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T04:10:23Z | Entry not found |
shinben0327/dqn-SpaceInvadersNoFrameskip-v4 | shinben0327 | 2024-06-29T04:12:10Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T04:12:10Z | Entry not found |
aadarshram/q-FrozenLake-v1-4x4-noSlippery | aadarshram | 2024-06-29T04:14:19Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-29T04:14:16Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # `load_from_hub` is the helper defined in the course notebook

model = load_from_hub(repo_id="aadarshram/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Check whether you need to pass extra environment attributes (e.g. is_slippery=False)
env = gym.make(model["env_id"])
```
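Under the hood, the pickled agent is a Q-table. A minimal, self-contained sketch of the tabular Q-learning update such agents are trained with (toy values and hyperparameters are illustrative, not taken from this checkpoint):

```python
def q_update(q, state, action, reward, next_state, alpha=0.7, gamma=0.95):
    """One tabular Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

    q maps state -> list of per-action values. Hyperparameters are illustrative.
    """
    td_target = reward + gamma * max(q[next_state])
    q[state][action] += alpha * (td_target - q[state][action])
    return q

# Toy 2-state, 2-action table: action 1 in state 0 leads to rewarding state 1.
q = {0: [0.0, 0.0], 1: [1.0, 0.0]}
q = q_update(q, state=0, action=1, reward=0.0, next_state=1)
print(round(q[0][1], 4))  # 0.7 * (0 + 0.95 * 1.0) -> 0.665
```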
|
habulaj/663810223 | habulaj | 2024-06-29T04:16:50Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T04:16:44Z | Entry not found |
peterhuang24/test | peterhuang24 | 2024-06-29T04:19:55Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2024-06-29T04:19:55Z | ---
license: mit
---
|
griyabatikmakmur/G2 | griyabatikmakmur | 2024-06-29T04:23:26Z | 0 | 0 | null | [
"license:cc-by-4.0",
"region:us"
]
| null | 2024-06-29T04:23:25Z | ---
license: cc-by-4.0
---
|
tamangmilan/quantized_facebook_opt_125m | tamangmilan | 2024-06-29T06:49:30Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T04:26:29Z | Entry not found |
aadarshram/Taxi-v3 | aadarshram | 2024-06-29T04:27:58Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-29T04:27:54Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # `load_from_hub` is the helper defined in the course notebook

model = load_from_hub(repo_id="aadarshram/Taxi-v3", filename="q-learning.pkl")

# Check whether you need to pass extra environment attributes (e.g. is_slippery=False)
env = gym.make(model["env_id"])
```
|
shengxuelim/Reinforce-Pixelcopter | shengxuelim | 2024-06-29T07:35:56Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-29T04:28:28Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 38.60 +/- 36.71
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
minh-swinburne/bert-qa-mash-covid | minh-swinburne | 2024-06-29T05:17:57Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| question-answering | 2024-06-29T04:30:03Z | Entry not found |
dreahim/whisper-medium-Egyptian_ASR_v2 | dreahim | 2024-06-29T04:34:00Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T04:34:00Z | Entry not found |
habulaj/7531580658 | habulaj | 2024-06-29T04:36:39Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T04:36:36Z | Entry not found |
AdamKasumovic/llama3-70b-instruct-winogrande-train-s-af-winogrande-random | AdamKasumovic | 2024-06-29T04:38:15Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-70b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T04:38:14Z | ---
base_model: unsloth/llama-3-70b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-70b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rusov3/modn | rusov3 | 2024-06-29T04:40:05Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T04:39:19Z | Entry not found |
ShengwenD/learn_to_fine-tune_llama-2-7B-HF | ShengwenD | 2024-06-29T04:40:49Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2024-06-29T04:40:49Z | ---
license: mit
---
|
iamnguyen/caraxes | iamnguyen | 2024-06-29T08:05:55Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-29T04:44:41Z | Entry not found |
metta-ai/baseline.v0.5.6 | metta-ai | 2024-06-29T04:46:05Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
]
| reinforcement-learning | 2024-06-29T04:45:41Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
---
An **APPO** model trained on the **GDY-MettaGrid** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r metta-ai/baseline.v0.5.6
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=GDY-MettaGrid --train_dir=./train_dir --experiment=baseline.v0.5.6
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=GDY-MettaGrid --train_dir=./train_dir --experiment=baseline.v0.5.6 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to raise `--train_for_env_steps` to a suitably high value, since the experiment resumes from the step count at which it previously concluded.
|
LarryAIDraw/yae_miko_pony | LarryAIDraw | 2024-06-29T04:53:39Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2024-06-29T04:49:16Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/152085/genshinxl-yae-miko |
Juanitobanana23/Pandora | Juanitobanana23 | 2024-06-29T04:51:10Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-29T04:51:10Z | ---
license: apache-2.0
---
|
shtapm/test | shtapm | 2024-06-29T12:25:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T05:12:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JFirdus7/Beta | JFirdus7 | 2024-06-29T05:21:39Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
]
| null | 2024-06-29T05:21:39Z | ---
license: bigcode-openrail-m
---
|
BunnyToon/miucha | BunnyToon | 2024-06-29T07:23:40Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T05:23:36Z | ---
license: openrail
---
|
mmaitai/temp | mmaitai | 2024-06-29T05:31:31Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T05:26:33Z | Entry not found |
peizesun/flan_t5_xl_pytorch | peizesun | 2024-06-29T05:37:59Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T05:37:59Z | Entry not found |
kamatoro/naschain | kamatoro | 2024-06-29T05:46:05Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T05:46:03Z | Entry not found |
Paolo626/KataGo | Paolo626 | 2024-06-29T05:49:19Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-29T05:47:46Z | ---
license: apache-2.0
---
|
casque/0150_white_fur_coat_v1 | casque | 2024-06-29T05:51:13Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2024-06-29T05:50:20Z | ---
license: creativeml-openrail-m
---
|
VKapseln475/Nexalyn475450 | VKapseln475 | 2024-06-29T05:52:36Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T05:51:37Z | # Nexalyn Norway Reviews and Experiences - Dosage and Intake, Nexalyn Official Price, Buy 2024
Nexalyn Norway is a powerful dietary supplement developed specifically for men who want to raise their testosterone levels naturally. The formula is made from a blend of natural ingredients, including herbs and extracts known for their ability to support hormonal balance and promote male health. Nexalyn stands out on the market thanks to its scientifically backed components, which are both safe and effective for daily use.
## **[Click here to buy now from the official Nexalyn website](https://ketogummies24x7.com/nexalyn-no)**
## Key ingredients and their benefits for sexual health:
Key ingredients play an important role in the effectiveness of any dietary supplement, and the Nexalyn testosterone booster formula is no exception. Let us take a closer look at some of the key ingredients in this powerful formula and how they may benefit your sexual health.
Horny Goat Weed, also known as Epimedium, has been used for centuries in traditional Chinese medicine to improve libido and treat erectile dysfunction. It contains icariin, a compound that may help increase blood flow to the penis, resulting in stronger and longer-lasting erections.
Tongkat Ali root extract is another potent ingredient with aphrodisiac properties. It may work by raising testosterone levels, which can lead to greater stamina, improved muscle mass, and enhanced sexual performance.
Saw palmetto is associated with prostate health, but it also plays a role in supporting overall sexual well-being. By inhibiting the conversion of testosterone to dihydrotestosterone (DHT), saw palmetto may help maintain a healthy hormone balance and support optimal sexual function.
Nettle root extract is rich in vitamins A and C as well as minerals such as iron and magnesium. These nutrients may support overall reproductive health while promoting energy levels throughout the day, an important factor in maintaining intimacy with your partner.
By combining these powerful natural ingredients in one formula, Nexalyn capsules aim to give you the essential tools to unlock your full sensual potential. Adding the Nexalyn male enhancement formula to your daily routine may help improve both physical stamina and mental focus during intimate moments, letting you experience more intense pleasure than ever before!
## How does this product help you boost your energy levels?
If you lack energy and stamina, Nexalyn testo booster capsules in South Africa may be the solution you have been looking for. This powerful dietary supplement contains ingredients that can help raise energy levels and revitalize your body.
It may work by raising testosterone levels, which can lead to improved energy and vitality. It may also work by enhancing libido and sexual performance. The product contains compounds that may boost nitric oxide production, which can help improve blood flow throughout the body, including to the muscles. This improved circulation can lead to higher energy levels.
It may support hormone balance while providing a natural source of antioxidants that can promote overall well-being. The Nexalyn testosterone booster plays an important role in supporting healthy hormone levels in the body, as well as promoting healthy prostate function, both of which can contribute to increased energy levels.
## **[Click here to buy now from the official Nexalyn website](https://ketogummies24x7.com/nexalyn-no)**
|
MinhhMinhh/Test | MinhhMinhh | 2024-06-29T05:53:32Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T05:52:20Z | ---
license: openrail
---
|
MinhhMinhh/ByeonWooSeok-by-MinhMinh | MinhhMinhh | 2024-06-29T05:57:09Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T05:56:35Z | ---
license: openrail
---
|
ShapeKapseln33/Nexalyn9986 | ShapeKapseln33 | 2024-06-29T05:58:56Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T05:57:05Z | [buy] Nexalyn Reviews — Burning fat in troublesome areas is a challenge for many people on their weight-loss journey. This stubborn body fat can be frustrating and difficult to target with diet and exercise alone. However, the Nexaslim supplement may offer the solution you have been looking for.
**[Click here to buy now from Nexalyn's official website](https://capsules24x7.com/nexalyn-danmark)**
## Introduction to Nexalyn
Do you want to take your sexual experiences to the next level? If so, look no further than the Nexalyn Testosterone Booster Formula, the revolutionary dietary supplement making waves in the world of sexual health. If you have ever wished for stronger erections, increased sensual appetite, and explosive orgasms, you have come to the right place.
Today we will discuss everything you need to know about the Nexalyn male enhancement supplement, from the science-backed formula to the key ingredients that will help you boost your sexual performance.
## The science behind the Nexalyn testo booster and how it works:
The science behind Nexalyn male enhancement supplements sets them apart from other supplements on the market. This powerful formula combines carefully selected ingredients that can effectively help improve sexual health and performance.
It may help improve libido and erectile function. It may help increase blood flow to the genital area, resulting in stronger and longer-lasting erections. It may also raise testosterone levels, resulting in increased energy, stamina, and overall sex drive. By restoring hormonal balance, it may also help improve mood and reduce stress.
It may also have a positive effect on sexual function by preventing the conversion of testosterone to dihydrotestosterone (DHT), which can lead to problems such as hair loss and reduced libido.
It may help increase levels of free testosterone by binding to sex hormone-binding globulin (SHBG), allowing more testosterone to circulate freely throughout the body.
**[Click here to buy now from Nexalyn's official website](https://capsules24x7.com/nexalyn-danmark)**
Nexalyn pills in Australia and New Zealand may aim to provide solid support for men who want to improve their sexual performance naturally. The active ingredients may target various aspects of sexual health, from improving blood flow and hormone balance to boosting energy levels, resulting in improved stamina, increased sensual appetite, and more intense orgasms.
## Key ingredients and their sexual health benefits:
Key ingredients play an important role in the effectiveness of any dietary supplement, and the Nexalyn Testosterone Booster Formula is no exception. Let us take a closer look at some of the key ingredients in this powerful formula and how they may benefit your sexual health.
Horny Goat Weed, also known as Epimedium, has been used in traditional Chinese medicine for centuries to improve libido and treat erectile dysfunction. It contains icariin, a compound that may help increase blood flow to the penis, resulting in stronger and longer-lasting erections.
Tongkat Ali root extract is another potent ingredient with aphrodisiac properties. It may work by raising testosterone levels, which can lead to greater stamina, improved muscle mass, and enhanced sexual performance.
Saw palmetto is associated with prostate health but also plays a role in supporting overall sexual well-being. By inhibiting the conversion of testosterone to dihydrotestosterone (DHT), saw palmetto may help maintain a healthy hormone balance and support optimal sexual function.
Nettle root extract is rich in vitamins A and C as well as minerals such as iron and magnesium. These nutrients may support overall reproductive health while raising energy levels throughout the day, an important factor in maintaining intimacy with your partner.
By combining these powerful natural ingredients in one formula, Nexalyn capsules aim to give you the tools needed to realize your full sensual potential. Adding the Nexalyn male enhancement formula to your daily routine may help improve both physical stamina and mental focus during intimate moments, letting you experience more intense pleasure than ever before!
**[Click here to buy now from Nexalyn's official website](https://capsules24x7.com/nexalyn-danmark)**
|
kbashailesh/Test1 | kbashailesh | 2024-06-29T05:57:12Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T05:57:12Z | Entry not found |
MinhhMinhh/KimSooHyun-by-MinhMinh | MinhhMinhh | 2024-06-29T05:58:31Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T05:57:40Z | ---
license: openrail
---
|
JanhaviH/Gen_AI | JanhaviH | 2024-06-29T06:04:34Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-29T06:03:02Z | Entry not found |
mmtg/train-inv | mmtg | 2024-07-02T15:53:29Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T06:04:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
coivmn/laranew | coivmn | 2024-06-29T06:12:14Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T06:11:35Z | ---
license: openrail
---
|
WaleedAIking/loraA_model | WaleedAIking | 2024-06-29T06:11:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T06:11:44Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** WaleedAIking
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Hamed7immortal/modeling-tab_modeling | Hamed7immortal | 2024-06-29T06:14:54Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-29T06:14:54Z | ---
license: openrail
---
|
whizzzzkid/whizzzzkid_272_4 | whizzzzkid | 2024-06-29T06:16:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-29T06:15:12Z | Entry not found |
makhataei/emotion_recognition_ru | makhataei | 2024-07-01T04:11:23Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"Speech-Emotion-Recognition",
"generated_from_trainer",
"dataset:dusha_emotion_audio",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T06:17:58Z | ---
license: apache-2.0
tags:
- Speech-Emotion-Recognition
- generated_from_trainer
datasets:
- dusha_emotion_audio
metrics:
- accuracy
model-index:
- name: Wav2vec2-xls-r-300m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2vec2-xls-r-300m
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the KELONMYOSA/dusha_emotion_audio dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5633
- Accuracy: 0.7970
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.7868 | 1.0 | 24170 | 0.7561 | 0.7318 |
| 0.7147 | 2.0 | 48340 | 0.6984 | 0.7459 |
| 0.669 | 3.0 | 72510 | 0.6263 | 0.7727 |
| 0.6362 | 4.0 | 96680 | 0.5832 | 0.7902 |
| 0.4476 | 5.0 | 120850 | 0.5633 | 0.7970 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|