Dataset schema (column: dtype, observed range/cardinality):

- `modelId`: string (length 5–122)
- `author`: string (length 2–42)
- `last_modified`: timestamp[us, tz=UTC]
- `downloads`: int64 (0–738M)
- `likes`: int64 (0–11k)
- `library_name`: string (245 classes)
- `tags`: list (length 1–4.05k)
- `pipeline_tag`: string (48 classes)
- `createdAt`: timestamp[us, tz=UTC]
- `card`: string (length 1–901k)

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
Coolwowsocoolwow/Butters_Stotch
|
Coolwowsocoolwow
| 2024-06-27T00:02:03Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-26T23:55:09Z |
---
license: openrail
---
|
Zyxbytedance/pokemon-lora
|
Zyxbytedance
| 2024-06-26T23:57:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T23:57:00Z |
Entry not found
|
habulaj/224308196094
|
habulaj
| 2024-06-26T23:58:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T23:58:53Z |
Entry not found
|
limpetbisque/sd3
|
limpetbisque
| 2024-06-27T00:02:33Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-27T00:01:16Z |
---
license: mit
---
|
habulaj/137935113476
|
habulaj
| 2024-06-27T00:02:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:02:02Z |
Entry not found
|
teejay123/congress-draft
|
teejay123
| 2024-06-27T00:03:05Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-27T00:03:05Z |
---
license: mit
---
|
AMCosta0511/Teste1
|
AMCosta0511
| 2024-06-27T00:03:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:03:17Z |
Entry not found
|
Kijai/ccsr-safetensors
|
Kijai
| 2024-06-27T00:09:29Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-27T00:06:38Z |
---
license: apache-2.0
---
|
shanthi/gemma-2b-ft
|
shanthi
| 2024-06-27T00:06:54Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-27T00:06:54Z |
---
license: apache-2.0
---
|
SETgg/Lucari
|
SETgg
| 2024-06-27T00:13:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:13:05Z |
Entry not found
|
habulaj/248944220115
|
habulaj
| 2024-06-27T00:13:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:13:14Z |
Entry not found
|
habulaj/5288540851
|
habulaj
| 2024-06-27T00:18:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:18:00Z |
Entry not found
|
kirstendx/ModelsPonyXL
|
kirstendx
| 2024-06-27T01:49:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:18:29Z |
Entry not found
|
csuhan/OneLLM-7B-x-text
|
csuhan
| 2024-06-27T00:35:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:19:30Z |
Entry not found
|
habulaj/8097058683
|
habulaj
| 2024-06-27T00:21:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:21:46Z |
Entry not found
|
jykim310/qwen2-1.5b-inst-q4f16_1-MLC
|
jykim310
| 2024-06-27T01:18:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:26:03Z |
Entry not found
|
ben81828/meow
|
ben81828
| 2024-06-27T00:28:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:28:20Z |
Entry not found
|
habulaj/3734333014
|
habulaj
| 2024-06-27T00:29:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:29:34Z |
Entry not found
|
habulaj/537310246
|
habulaj
| 2024-06-27T00:31:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:31:05Z |
Entry not found
|
habulaj/5360161538
|
habulaj
| 2024-06-27T00:32:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:32:21Z |
Entry not found
|
habulaj/4611437160
|
habulaj
| 2024-06-27T00:34:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:34:30Z |
Entry not found
|
Berly00/whisper-tiny-spanish
|
Berly00
| 2024-06-29T01:12:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-27T00:35:14Z |
Entry not found
|
habulaj/848410691
|
habulaj
| 2024-06-27T00:35:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:35:53Z |
Entry not found
|
Abiral7/My-Nepali-Model
|
Abiral7
| 2024-06-27T00:48:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-27T00:47:58Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Abiral7
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xinepeng/whisper-small-hi
|
xinepeng
| 2024-06-27T00:48:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:48:06Z |
Entry not found
|
LuccaMateoei/teste1
|
LuccaMateoei
| 2024-06-27T00:51:57Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-27T00:51:57Z |
---
license: openrail
---
|
habulaj/9601871890
|
habulaj
| 2024-06-27T00:52:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:52:23Z |
Entry not found
|
thaisonatk/T5-finetune
|
thaisonatk
| 2024-06-27T00:53:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:53:55Z |
Entry not found
|
JinglesDados/DiPaulloRVC
|
JinglesDados
| 2024-06-27T00:55:44Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-27T00:55:32Z |
---
license: openrail
---
|
habulaj/332192298046
|
habulaj
| 2024-06-27T00:59:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T00:58:48Z |
Entry not found
|
valerielucro/mistral_gsm8k_beta_0.4_epoch2
|
valerielucro
| 2024-06-27T01:00:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-27T00:59:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
habulaj/176555164886
|
habulaj
| 2024-06-27T01:02:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T01:02:09Z |
Entry not found
|
Dasr22/daxt
|
Dasr22
| 2024-06-27T01:03:47Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-27T01:03:47Z |
---
license: apache-2.0
---
|
pzb19841116/DataguruTest
|
pzb19841116
| 2024-06-27T01:05:47Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-27T01:04:03Z |
---
license: apache-2.0
---
|
JinglesDados/PaulinoRVC
|
JinglesDados
| 2024-06-27T01:09:04Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-27T01:08:15Z |
---
license: openrail
---
|
KevinLiuR/style-mixed-gorrila-fewshot
|
KevinLiuR
| 2024-07-02T23:11:38Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-06-27T01:09:51Z |
Entry not found
|
habulaj/8813283715
|
habulaj
| 2024-06-27T01:12:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T01:12:56Z |
Entry not found
|
Lxt115/fine_tuning_0627_t5
|
Lxt115
| 2024-06-27T01:13:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T01:13:44Z |
Entry not found
|
senthamizh/qwen1.5-llm
|
senthamizh
| 2024-06-27T01:13:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T01:13:53Z |
Entry not found
|
migaraa/Gaudi_LoRA_Llama-2-7b-hf-v2
|
migaraa
| 2024-06-28T18:42:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"ipex",
"intel",
"gaudi",
"PEFT",
"dataset:timdettmers/openassistant-guanaco",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-27T01:15:27Z |
---
library_name: transformers
tags:
- ipex
- intel
- gaudi
- PEFT
license: apache-2.0
datasets:
- timdettmers/openassistant-guanaco
---
# Model Card for Model ID
This model is my second fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf), trained on the [timdettmers/openassistant-guanaco dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
## Model Details
### Model Description
This is a fine-tuned version of the [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) model, trained with Parameter-Efficient Fine-Tuning (PEFT) using Low-Rank Adaptation (LoRA) on the Intel Gaudi 2 AI accelerator. It can be used for text generation tasks including chatbots, content creation, and other NLP applications.
- **Developed by:** Migara Amarasinghe
- **Model type:** LLM
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
## Uses
### Direct Use
This model can be used for text generation tasks such as:
- Chatbots
- Automated content creation
- Text completion and augmentation
### Out-of-Scope Use
- Use in real-time applications where latency is critical
- Use in highly sensitive domains without thorough evaluation and testing
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## Training Details
### Training Hyperparameters
<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- Training regime: Mixed precision training using bf16
- Number of epochs: 5
- Learning rate: 5e-5
- Batch size: 16
- Seq length: 512
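The card lists the PEFT/LoRA setup but no code. As a rough illustration of the low-rank update LoRA learns in place of a full weight update (a toy NumPy sketch with hypothetical dimensions, not this model's actual configuration or the PEFT library's implementation):

```python
import numpy as np

# LoRA freezes the pretrained weight W and learns a low-rank update B @ A
# alongside it; only A and B are trained. Dimensions here are hypothetical.
d_out, d_in, r = 64, 128, 8        # r is the LoRA rank, r << min(d_out, d_in)
alpha = 16                         # LoRA scaling; effective scale is alpha / r
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection; zero init
                                           # makes the adapter a no-op at start

def lora_forward(x):
    """Frozen path plus scaled low-rank adapter path."""
    return x @ W.T + (x @ A.T @ B.T) * (alpha / r)

x = rng.standard_normal((4, d_in))
y = lora_forward(x)
```

The parameter saving is the point: the adapter holds `r * (d_in + d_out)` trainable weights (1,536 here) instead of `d_in * d_out` (8,192), and because `B` starts at zero the adapted model initially behaves exactly like the frozen base model.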
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Intel Gaudi 2 AI Accelerator
- **Hours used:** < 1 hour
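The calculator referenced above reduces to a simple product of power draw, time, datacenter overhead, and grid carbon intensity. A sketch with hypothetical numbers (the Gaudi 2's real power draw and the local grid's intensity would need to be looked up):

```python
def estimate_emissions_kg(power_kw, hours, carbon_intensity_kg_per_kwh, pue=1.0):
    """Estimate training emissions in kg CO2eq.

    energy (kWh) = accelerator power draw * hours * datacenter PUE;
    emissions    = energy * grid carbon intensity (kg CO2eq per kWh).
    """
    return power_kw * hours * pue * carbon_intensity_kg_per_kwh

# Hypothetical values: 0.6 kW draw, 1 hour, 0.4 kg CO2/kWh grid, PUE 1.1.
print(round(estimate_emissions_kg(0.6, 1, 0.4, pue=1.1), 3))  # 0.264
```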
## Technical Specifications
### Compute Infrastructure
#### Hardware
- Intel Gaudi 2 AI Accelerator
- Intel(R) Xeon(R) Platinum 8368 CPU
#### Software
- Transformers library
- Optimum Habana library
|
Xerxes99/Motoko_Kusanagi
|
Xerxes99
| 2024-06-27T01:29:16Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-27T01:22:51Z |
---
license: openrail
---
|
ND911/SD_1.5_for_anime
|
ND911
| 2024-06-27T01:47:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T01:31:32Z |
This is SD_1.5_for_reals with anime slapped into it; see the examples folder.
Generates both SFW and NSFW images.
* 
* 
|
habulaj/11926993878
|
habulaj
| 2024-06-27T01:34:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T01:34:04Z |
Entry not found
|
JackQ20240516/testModel
|
JackQ20240516
| 2024-06-27T03:11:17Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-27T01:39:39Z |
---
license: apache-2.0
---
|
JoshuaKelleyDs/test
|
JoshuaKelleyDs
| 2024-06-27T02:06:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T01:44:49Z |
Entry not found
|
kirstendx/ModelsXL
|
kirstendx
| 2024-06-27T01:55:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T01:51:38Z |
Entry not found
|
creative-graphic-design/MVANet-checkpoints
|
creative-graphic-design
| 2024-06-27T07:58:48Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-27T01:52:54Z |
---
license: mit
---
This repository hosts copies of the following [MVANet](https://github.com/qianyu-dlut/MVANet) pre-trained checkpoints:
- [SwinTransformer](https://drive.google.com/file/d/1-Zi_DtCT8oC2UAZpB3_XoFOIxIweIAyk)
- [MVANet](https://drive.google.com/file/d/1_gabQXOF03MfXnf3EWDK1d_8wKiOemOv)
|
sidvash/famus_multi_instance_v3_Phi-3-mini-4k-instruct-5-epochs
|
sidvash
| 2024-06-27T01:54:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-27T01:53:50Z |
---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** sidvash
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
robinmia/subtitle_gen
|
robinmia
| 2024-06-27T02:01:46Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-27T02:01:46Z |
---
license: mit
---
|
Hermoso88/Non
|
Hermoso88
| 2024-06-27T02:03:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T02:03:50Z |
Entry not found
|
YI4/lenovo_llamma3
|
YI4
| 2024-06-27T02:04:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-27T02:04:16Z |
---
license: apache-2.0
---
|
WALIDALI/Bekiiii250
|
WALIDALI
| 2024-06-27T02:11:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T02:09:29Z |
Entry not found
|
nam194/qwen2-7b-qlora-viet4all-unsloth
|
nam194
| 2024-06-27T04:35:49Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-06-27T02:09:40Z |
Entry not found
|
PhucMinh111/llama3_trt_awq_t4
|
PhucMinh111
| 2024-06-27T02:13:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T02:12:32Z |
Entry not found
|
fafi98/docai
|
fafi98
| 2024-06-27T06:43:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-27T02:13:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
johnwee1/peft-starcoder-lora-rust
|
johnwee1
| 2024-07-01T09:14:10Z | 0 | 0 | null |
[
"safetensors",
"code",
"text-generation",
"dataset:chargoddard/commitpack-ft-instruct",
"region:us"
] |
text-generation
| 2024-06-27T02:14:43Z |
---
datasets:
- chargoddard/commitpack-ft-instruct
pipeline_tag: text-generation
tags:
- code
---
Refer to johnwee1/peft-starcoder-lora-python.
There are 2,996 Rust code examples in commitpack-ft-instruct, so we'll see how this performs.
|
CIR-AMS/BTRM_Qwen2_7b_0613
|
CIR-AMS
| 2024-06-27T03:18:05Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-06-27T02:14:49Z |
Entry not found
|
quanla/vit-base-patch16-224-in21k-finetuned-lora-cifar10
|
quanla
| 2024-06-27T05:19:03Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-06-27T02:15:34Z |
Entry not found
|
howarudo/paligemma-3b-pt-224-vqa-15-epochs
|
howarudo
| 2024-06-27T07:55:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-27T02:19:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DokiQueen/Throat-Fingering
|
DokiQueen
| 2024-06-27T02:24:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T02:20:08Z |
Entry not found
|
LarryAIDraw/bluearchivefull1-000006
|
LarryAIDraw
| 2024-06-27T02:38:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-27T02:22:21Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/114960/blue-archive-all-in-one-or-character-lora-62975
|
arcanumkpop/Yiyeon
|
arcanumkpop
| 2024-06-27T02:24:38Z | 0 | 0 | null |
[
"license:unknown",
"region:us"
] | null | 2024-06-27T02:22:35Z |
---
license: unknown
---
|
LarryAIDraw/CHAR-RikuhachimaAru
|
LarryAIDraw
| 2024-06-27T02:38:45Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-27T02:22:46Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/482734/rikuhachima-aru-4-outfits-or-blue-archive
|
zorogames/zorobarraelefante
|
zorogames
| 2024-06-27T02:22:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T02:22:53Z |
Entry not found
|
LarryAIDraw/kirara_blue_archive_v1_goofy
|
LarryAIDraw
| 2024-06-27T02:38:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-27T02:23:09Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/390853/yozakura-kirara-blue-archive-or-goofy-ai
|
WALIDALI/Kenvaponystyle
|
WALIDALI
| 2024-06-27T02:27:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T02:25:10Z |
Entry not found
|
LarryAIDraw/rio-lora-nochekaiser
|
LarryAIDraw
| 2024-06-27T02:39:05Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-27T02:27:02Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/137804/rio-tsukatsuki-blue-archive
|
LarryAIDraw/makoto_bluearchive1428
|
LarryAIDraw
| 2024-06-27T02:39:14Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-27T02:27:38Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/368383/makoto-blue-archive
|
thaisonatk/model-mt5-finetune
|
thaisonatk
| 2024-06-27T02:34:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T02:34:53Z |
Entry not found
|
hala1999/your_model_name
|
hala1999
| 2024-06-27T02:35:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T02:35:00Z |
Entry not found
|
hxx12/photo
|
hxx12
| 2024-06-27T02:35:53Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-27T02:35:53Z |
---
license: mit
---
|
DokiQueen/fingering
|
DokiQueen
| 2024-06-27T02:42:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T02:38:10Z |
Entry not found
|
IrohXu/stable-diffusion-3-inpainting
|
IrohXu
| 2024-06-28T02:03:24Z | 0 | 0 | null |
[
"arxiv:2403.03206",
"region:us"
] | null | 2024-06-27T02:40:46Z |
# Stable Diffusion 3 Inpainting Pipeline
This is the implementation of `Stable Diffusion 3 Inpainting Pipeline`.
| input image | input mask image | output |
|:-------------------------:|:-------------------------:|:-------------------------:|
|<img src="./overture-creations-5sI6fQgYIuo.png" width = "400" /> | <img src="./overture-creations-5sI6fQgYIuo_mask.png" width = "400" /> | <img src="./overture-creations-5sI6fQgYIuo_output.jpg" width = "400" /> |
|<img src="./overture-creations-5sI6fQgYIuo.png" width = "400" /> | <img src="./overture-creations-5sI6fQgYIuo_mask.png" width = "400" /> | <img src="./overture-creations-5sI6fQgYIuo_tiger.jpg" width = "400" /> |
|<img src="./overture-creations-5sI6fQgYIuo.png" width = "400" /> | <img src="./overture-creations-5sI6fQgYIuo_mask.png" width = "400" /> | <img src="./overture-creations-5sI6fQgYIuo_panda.jpg" width = "400" /> |
**Please ensure that your diffusers version is >= 0.29.1.**
## Model
[Stable Diffusion 3 Medium](https://stability.ai/news/stable-diffusion-3-medium) is a Multimodal Diffusion Transformer (MMDiT) text-to-image model that features greatly improved performance in image quality, typography, complex prompt understanding, and resource-efficiency.
For more technical details, please refer to the [Research paper](https://stability.ai/news/stable-diffusion-3-research-paper).
Please note: this model is released under the Stability Non-Commercial Research Community License. For a Creator License or an Enterprise License visit Stability.ai or [contact us](https://stability.ai/license) for commercial licensing details.
### Model Description
- **Developed by:** Stability AI
- **Model type:** MMDiT text-to-image generative model
- **Model Description:** This is a model that can be used to generate images based on text prompts. It is a Multimodal Diffusion Transformer
(https://arxiv.org/abs/2403.03206) that uses three fixed, pretrained text encoders
([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip), [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main) and [T5-xxl](https://huggingface.co/google/t5-v1_1-xxl))
## Demo
Make sure you have the latest version of diffusers (`pip install -U diffusers`), then run:
```python
import torch
from torchvision import transforms

from pipeline_stable_diffusion_3_inpaint import StableDiffusion3InpaintPipeline
from diffusers.utils import load_image


def preprocess_image(image):
    image = image.convert("RGB")
    image = transforms.CenterCrop((image.size[1] // 64 * 64, image.size[0] // 64 * 64))(image)
    image = transforms.ToTensor()(image)
    image = image.unsqueeze(0).to("cuda")
    return image


def preprocess_mask(mask):
    mask = mask.convert("L")
    mask = transforms.CenterCrop((mask.size[1] // 64 * 64, mask.size[0] // 64 * 64))(mask)
    mask = transforms.ToTensor()(mask)
    mask = mask.to("cuda")
    return mask


pipe = StableDiffusion3InpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
source_image = load_image("./overture-creations-5sI6fQgYIuo.png")
source = preprocess_image(source_image)
mask = preprocess_mask(load_image("./overture-creations-5sI6fQgYIuo_mask.png"))

image = pipe(
    prompt=prompt,
    image=source,
    mask_image=1 - mask,
    height=1024,
    width=1024,
    num_inference_steps=28,
    guidance_scale=7.0,
    strength=0.6,
).images[0]
image.save("output.png")
```
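The preprocessing helpers above center-crop both the image and the mask so that each spatial dimension is a multiple of 64, which the SD3 latent space requires. The size arithmetic can be checked in isolation (this helper is illustrative, not part of the pipeline's API):

```python
def crop_size(width: int, height: int, multiple: int = 64):
    # Mirrors the CenterCrop target used above:
    # (height // 64 * 64, width // 64 * 64), i.e. round each dim
    # down to the nearest multiple of 64.
    return (width // multiple * multiple, height // multiple * multiple)

print(crop_size(1023, 768))  # -> (960, 768)
print(crop_size(512, 512))   # -> (512, 512), already aligned
```

Sizes already divisible by 64 pass through unchanged; everything else is rounded down, so a few edge pixels may be cropped away.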
|
Ebihagin/Hayedeh
|
Ebihagin
| 2024-06-27T02:46:29Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-27T02:46:29Z |
---
license: openrail
---
|
iharrisonfu/hksuicide_lora_model
|
iharrisonfu
| 2024-06-27T03:50:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-27T02:47:25Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** iharrisonfu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
habulaj/185722278594
|
habulaj
| 2024-06-27T02:48:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T02:47:50Z |
Entry not found
|
rushijoshi1995/bloom-7b1-lora-tagger
|
rushijoshi1995
| 2024-06-27T02:57:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-27T02:57:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
elliotthwang/google_gemma_2b_zh
|
elliotthwang
| 2024-06-27T02:59:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T02:59:28Z |
Entry not found
|
saberbx/test-x-moodel-v1
|
saberbx
| 2024-06-27T03:08:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T03:08:38Z |
Entry not found
|
habulaj/11110585898
|
habulaj
| 2024-06-27T03:14:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T03:14:22Z |
Entry not found
|
nightfall010/trnas
|
nightfall010
| 2024-06-30T18:46:55Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-06-27T03:16:11Z |
Entry not found
|
Ebihagin/L
|
Ebihagin
| 2024-06-27T03:16:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T03:16:16Z |
Entry not found
|
LeonOuO/distilbert-base-uncased-finetuned-emotion
|
LeonOuO
| 2024-06-27T03:17:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T03:17:09Z |
Entry not found
|
Ebihagin/Hayadeh
|
Ebihagin
| 2024-06-27T03:37:24Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-27T03:17:12Z |
---
license: openrail
---
|
zero-one-01/llama3-8b-config
|
zero-one-01
| 2024-06-27T03:37:16Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-27T03:17:29Z |
Entry not found
|
GraydientPlatformAPI/loras-june27
|
GraydientPlatformAPI
| 2024-06-27T04:02:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T03:27:34Z |
Entry not found
|
andytoichoi/llama3-8b-oig-unsloth
|
andytoichoi
| 2024-06-27T03:28:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-27T03:27:50Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** andytoichoi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Litzy619/MIS0626T3F200200
|
Litzy619
| 2024-06-27T08:18:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T03:28:31Z |
Entry not found
|
caspro/mbart-large-50_Nepali_News_Summarization_LoRA_Nepali_tokenizer
|
caspro
| 2024-06-27T06:49:00Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-06-27T03:28:58Z |
Entry not found
|
starnet/14-star21-06-27
|
starnet
| 2024-06-27T03:40:01Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | 2024-06-27T03:32:51Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
HoangCong/Llama-8B-fintune
|
HoangCong
| 2024-06-27T03:34:55Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-06-27T03:33:59Z |
---
license: apache-2.0
---
|
habulaj/61852196063
|
habulaj
| 2024-06-27T03:36:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T03:36:07Z |
Entry not found
|
HoangCong/Llma-8B-Finetunefull
|
HoangCong
| 2024-06-27T03:37:37Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-27T03:36:15Z |
---
license: apache-2.0
---
|
Mercury7353/PyLlama3
|
Mercury7353
| 2024-07-01T04:10:42Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-27T03:37:08Z |
---
license: apache-2.0
---
|
Gunulhona/llm_output
|
Gunulhona
| 2024-06-27T03:40:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T03:40:54Z |
Entry not found
|
JamesSpray/llama-2-7b-chat-bnb-4bit-ift-dpo-003
|
JamesSpray
| 2024-06-27T03:46:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-27T03:43:02Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
soportemachine/soprote
|
soportemachine
| 2024-06-27T03:45:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-27T03:45:24Z |
Entry not found
|
Ebihagin/Mahasti600
|
Ebihagin
| 2024-06-27T03:46:49Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-27T03:46:49Z |
---
license: openrail
---
|
worstkid92/LLaMa2_13B_Chat-finetuned-dolly-with-exp
|
worstkid92
| 2024-06-27T06:13:56Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-06-27T03:49:56Z |
Entry not found
|
sealai/sealai-resources
|
sealai
| 2024-06-27T03:50:07Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-27T03:50:07Z |
---
license: mit
---
|