modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (string, 245 classes) | tags (list, 1–4.05k entries) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1–901k chars)
---|---|---|---|---|---|---|---|---|---|
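As a minimal sketch of working with this table (the actual file name and Hub location of this dump are not given here, so `models.parquet` is a placeholder), the rows below can be loaded and queried with pandas:

```python
import pandas as pd

# Placeholder file name: the source location of this dump is not given above.
df = pd.read_parquet("models.parquet")

# Columns per the schema row: modelId, author, last_modified, downloads,
# likes, library_name, tags, pipeline_tag, createdAt, card.
print(df.dtypes)

# Example query: the most-downloaded rows that declare a pipeline_tag.
top = df[df["pipeline_tag"].notna()].nlargest(10, "downloads")
print(top[["modelId", "pipeline_tag", "downloads", "likes"]])
```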
wkingyu666/qwen2 | wkingyu666 | 2024-06-26T12:00:26Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:00:26Z |
Entry not found
|
SilvioLima/absa_3_domains | SilvioLima | 2024-07-02T13:36:03Z | 0 | 0 | null | ["safetensors", "region:us"] | null | 2024-06-26T12:01:28Z |
#### Per-domain F1-score (%)

| Domain | F1 (%) |
|---|---|
| Restaurant | 68.763668 |
| Laptop | 64.611111 |
| pet | 37.000000 |
| grocery | 36.815789 |
| home | 36.000000 |
| electronics | 35.427419 |
| book | 34.227273 |
| beauty | 33.382353 |
| fashion | 28.500000 |
| toy | 27.413793 |

### F1-score% mean = 53.4454
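For orientation, per-domain scores like these can be aggregated in a few lines. The card does not state how the reported 53.4454 mean was computed, so the sketch below only averages the ten listed domains:

```python
# Scores copied from the table above; "pet", "fashion", etc. keep their original casing.
domain_f1 = {
    "Restaurant": 68.763668, "Laptop": 64.611111, "pet": 37.000000,
    "grocery": 36.815789, "home": 36.000000, "electronics": 35.427419,
    "book": 34.227273, "beauty": 33.382353, "fashion": 28.500000,
    "toy": 27.413793,
}

# Unweighted mean over the ten listed domains.
mean_f1 = sum(domain_f1.values()) / len(domain_f1)
print(f"mean F1 over listed domains = {mean_f1:.4f}")
```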

|
hansa15100/openimage_r16_epoch25_model | hansa15100 | 2024-06-26T12:36:55Z | 0 | 0 | null | ["tensorboard", "safetensors", "region:us"] | null | 2024-06-26T12:01:54Z |
Entry not found
|
VKapseln475/SlimXmed122 | VKapseln475 | 2024-06-26T12:12:43Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:02:11Z |
# <Buy> Slimxmed Germany - SlimXmed Reviews Test, Dosage Ingredients Price
<Buy> Slimxmed Germany: SlimXmed enters the crowded market of dietary supplements for weight loss. It is more than just another product; it is a promise of a healthier, more active future for everyone who wants not only to lose weight but to improve their quality of life. This blog offers insights into our reviews and shares real user experiences.
## **[Click here to buy now on the official SlimXmed website](https://adtocart.xyz/slimxmed-de)**
## Scientific Evidence for the Effectiveness of Polyphenols for Weight Loss (Studies)
A key element of the review of the SlimXmed Premium Effect capsules is the evaluation of the scientific evidence supporting the claimed effects. Several studies have examined the positive effects of polyphenols on weight loss.
A meta-analysis published in the "Journal of Nutritional Science and Vitaminology" examined the effects of green tea extracts rich in catechins on weight loss and weight maintenance. The analysis of fourteen randomized controlled trials found a significant reduction in body weight among participants who consumed green tea extracts, compared with the control groups.
Resveratrol was examined in a study in the "International Journal of Obesity". The study showed that resveratrol supplementation can improve metabolism and reduce fat mass in overweight individuals.
Quercetin, known for its anti-inflammatory and antioxidant properties, has also been studied for its effect on body weight. A study published in the "Journal of Clinical Endocrinology & Metabolism" found that quercetin can increase fat burning and reduce fat absorption in the intestine.
## Limitations and Safety Profile
While the available studies provide promising results on the effectiveness of polyphenols for weight loss, it is important to consider the limitations of this research. Many studies were conducted at small scale or with animal models, which limits the generalizability of the results to humans. In addition, the polyphenol dosages vary considerably across studies, which makes a direct comparison of the results difficult.
The safety profile of SlimXmed Stiftung Warentest is generally considered good, since polyphenols rarely cause serious side effects at the concentrations used. Nevertheless, people with certain health conditions or those taking medication should consult a doctor before use.
## **[Click here to buy now on the official SlimXmed website](https://adtocart.xyz/slimxmed-de)**
|
zmlapq18/example-model | zmlapq18 | 2024-06-26T12:02:59Z | 0 | 0 | null | ["license:mit", "region:us"] | null | 2024-06-26T12:02:59Z |
---
license: mit
---
|
wufan/PDF-EXTRACT-KIT | wufan | 2024-06-26T12:04:14Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:04:14Z |
Entry not found
|
elaistu/Salesperson | elaistu | 2024-06-26T12:04:39Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:04:39Z |
Entry not found
|
alternativerealitystudio/Llama-3-8B-F | alternativerealitystudio | 2024-06-26T12:05:34Z | 0 | 0 | null | ["license:mit", "region:us"] | null | 2024-06-26T12:05:34Z |
---
license: mit
---
|
jasonk19/mistral-7b-gec | jasonk19 | 2024-06-26T12:07:12Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:07:12Z |
Entry not found
|
rzarno/llama-3-8b-industry-code-with-adapter | rzarno | 2024-06-26T12:08:28Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:08:28Z |
Entry not found
|
Atonemo/meeting-recorder | Atonemo | 2024-06-26T12:09:38Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:09:38Z |
Entry not found
|
msonali/xlm-roberta-base-finetuned-panx-de | msonali | 2024-06-26T12:09:51Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:09:51Z |
Entry not found
|
anileo1/llama3-8B-instruct-lora-finetuned-v1.2-16bit | anileo1 | 2024-06-26T12:11:36Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:11:35Z |
Entry not found
|
Suhash/my_awesome_billsum_model | Suhash | 2024-06-26T12:12:05Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:12:05Z |
Entry not found
|
Divy12/Forest | Divy12 | 2024-06-26T12:12:36Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:12:36Z |
Entry not found
|
nsugianto/detr-resnet50_tuned_detrresnet50_lsdocelementdetv1type7_plusb5_5389s_adjparam6_lr5e5_dec1e4_b14 | nsugianto | 2024-06-26T12:13:15Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:13:15Z |
Entry not found
|
nsugianto/detr-resnet50_tuned_detrresnet50_lsdocelementdetv1type7_plusb5_5389s_adjparam6_lr5e5_dec1e4_b10 | nsugianto | 2024-06-26T12:13:52Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:13:52Z |
Entry not found
|
ozgung/red-bowl-SD3 | ozgung | 2024-06-26T12:14:49Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:14:49Z |
Entry not found
|
Grayx/john_paul_van_damme_37 | Grayx | 2024-06-26T12:15:36Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:15:25Z |
Entry not found
|
nsugianto/detr-resnet50_tuned_detrresnet50_lsdocelementdetv1type7_plusb5_5389s_adjparam6_lr5e5_dec5e4_b12 | nsugianto | 2024-06-26T12:16:15Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:16:15Z |
Entry not found
|
Cynor/cynorllama3 | Cynor | 2024-06-26T12:17:14Z | 0 | 0 | null | ["license:llama3", "region:us"] | null | 2024-06-26T12:17:09Z |
---
license: llama3
---
|
diproger/llama3-8b-loss-fine-tuned-test | diproger | 2024-06-26T12:17:29Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-06-26T12:17:15Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** diproger
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
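This card follows the standard Unsloth upload template. As a hedged sketch of how such a 4-bit LoRA-trained model is commonly loaded for generation (the repo's actual contents and intended usage are not documented in the card, so the settings here are assumptions):

```python
from unsloth import FastLanguageModel

# Illustrative settings; the card above does not document intended usage.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="diproger/llama3-8b-loss-fine-tuned-test",  # the repo from this row
    max_seq_length=2048,  # assumption: not stated in the card
    load_in_4bit=True,    # matches the unsloth/llama-3-8b-bnb-4bit base
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's fast inference mode

inputs = tokenizer("Write a haiku about loss curves.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```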
|
kheopss/kheops_quantized | kheopss | 2024-06-26T12:18:02Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:18:02Z |
Entry not found
|
Ak1104/snapshot | Ak1104 | 2024-07-02T11:42:46Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:19:03Z |
Entry not found
|
fwdfsdf/Mysey | fwdfsdf | 2024-06-26T12:27:48Z | 0 | 0 | null | ["license:openrail", "region:us"] | null | 2024-06-26T12:21:50Z |
---
license: openrail
---
|
R0obin/email-spam-classifier | R0obin | 2024-06-26T12:27:34Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:27:34Z |
Entry not found
|
valerielucro/mistral_gsm8k_dpo_cot_beta_0.9 | valerielucro | 2024-06-26T12:29:58Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-26T12:29:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
valerielucro/mistral_gsm8k_dpo_cot_beta_0.7 | valerielucro | 2024-06-26T12:31:24Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-26T12:31:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
henilp105/InjecAgent-llama-7b-optim-all | henilp105 | 2024-06-26T12:33:53Z | 0 | 0 | null | ["safetensors", "region:us"] | null | 2024-06-26T12:31:38Z |
Entry not found
|
valerielucro/mistral_gsm8k_dpo_cot_beta_0.5 | valerielucro | 2024-06-26T12:35:32Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-26T12:35:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
prabinpanta0/image_classification_with_cnns | prabinpanta0 | 2024-06-26T12:44:17Z | 0 | 1 | tensorflow | ["tensorflow", "keras", "image-classification", "image-classification-cnns", "Fasion_image-classification", "neural-network", "en", "license:mit", "region:us"] | image-classification | 2024-06-26T12:36:07Z |
---
license: mit
language: en
metrics: mean_squared_error
library_name: tensorflow
tags:
- image-classification
- image-classification-cnns
- Fasion_image-classification
- tensorflow
- neural-network
pipeline_tag: image-classification
---
This model was created as a practice exercise for the Udacity course "Intro to TensorFlow for Deep Learning", offered by TensorFlow. It was trained on the TensorFlow Fashion MNIST dataset using a convolutional neural network (CNN). The model is a small neural network built with TensorFlow.
## License
This model is released under the MIT license.
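The card names the dataset and method but ships no training code. Below is a minimal sketch of a small Fashion MNIST CNN in TensorFlow/Keras; the layer sizes, optimizer, and epoch count are assumptions for illustration, not the author's recorded configuration.

```python
import tensorflow as tf

# Fashion MNIST: 28x28 grayscale images, 10 clothing classes.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train[..., None] / 255.0, x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```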
|
Ikhsan1/hugging | Ikhsan1 | 2024-06-26T12:36:12Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:36:12Z |
Entry not found
|
DarbyTan/Test | DarbyTan | 2024-06-26T12:37:04Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:37:04Z |
Entry not found
|
Truepeak/ORPO-PM01-0.4 | Truepeak | 2024-06-26T12:39:02Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-26T12:37:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
valerielucro/mistral_gsm8k_dpo_cot_beta_0.8 | valerielucro | 2024-06-26T12:38:37Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-26T12:38:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kaya-kedi/Toadette-TITANPretrain | kaya-kedi | 2024-06-26T12:44:26Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:38:34Z |
Entry not found
|
Hasano20/Mask2Former_Clean_Set1_95images_mask2former-swin-large-ade-semantic | Hasano20 | 2024-06-26T12:40:07Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:40:07Z |
Entry not found
|
pinkyprakash/Llama-3-8b-chat-finetune | pinkyprakash | 2024-06-26T12:43:49Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:43:49Z |
Entry not found
|
TopperThijs/Llama2-Open-ended-Finetuned-6epochs15mlm | TopperThijs | 2024-06-26T12:44:38Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:44:38Z |
Entry not found
|
Janibicigo/Misscake | Janibicigo | 2024-06-26T12:46:36Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2024-06-26T12:46:36Z |
---
license: apache-2.0
---
|
shayantreylon2/lora_model4 | shayantreylon2 | 2024-06-26T12:47:41Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-06-26T12:47:16Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** shayantreylon2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jurieyel/77cdm-llama3-sqlcoder-8b-500s-1000d | jurieyel | 2024-06-26T12:48:01Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:defog/llama-3-sqlcoder-8b", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-06-26T12:47:50Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: defog/llama-3-sqlcoder-8b
---
# Uploaded model
- **Developed by:** jurieyel
- **License:** apache-2.0
- **Finetuned from model :** defog/llama-3-sqlcoder-8b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
charlieoneill/jsalt-astroph-data | charlieoneill | 2024-06-26T12:53:49Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:48:43Z |
Entry not found
|
geraldabrhm/llama-3-8b-regular-nocontext-32lora-lr8_5 | geraldabrhm | 2024-06-26T13:42:07Z | 0 | 0 | null | ["safetensors", "region:us"] | null | 2024-06-26T12:49:01Z |
Entry not found
|
newih/western | newih | 2024-06-26T13:04:09Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T12:49:26Z |
Entry not found
|
nam194/llama3-8b-qlora-ultrachat-unsloth | nam194 | 2024-06-26T15:19:29Z | 0 | 0 | null | ["tensorboard", "safetensors", "region:us"] | null | 2024-06-26T12:51:01Z |
Entry not found
|
bug7/longchat_1080 | bug7 | 2024-06-26T12:53:00Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-26T12:52:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
X1Rexords/Sidhu-Moosewala-AI-Model | X1Rexords | 2024-06-26T13:01:44Z | 0 | 0 | null | ["license:openrail", "region:us"] | null | 2024-06-26T12:56:43Z |
---
license: openrail
---
|
kevin009/deepseek | kevin009 | 2024-06-26T18:40:17Z | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-26T13:03:37Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ekaterina-blatova-jb/model_lr1e-5_v0 | ekaterina-blatova-jb | 2024-06-26T13:05:44Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us"] | text-generation | 2024-06-26T13:03:46Z |
---
{}
---
## Evaluation results
Validation loss on the whole input: 0.7532717230496928
Validation loss on completion: 0.785313343279995
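The card distinguishes loss over the whole input from loss over the completion alone. A standard way to compute the completion-only figure is to set the prompt positions in `labels` to -100, which Hugging Face's cross-entropy ignores. The sketch below shows that common recipe; it is an assumption about methodology, not the author's published evaluation code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ekaterina-blatova-jb/model_lr1e-5_v0"  # the repo from this row
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

prompt, completion = "def add(a, b):", "\n    return a + b"  # illustrative pair
ids = tok(prompt + completion, return_tensors="pt").input_ids
labels = ids.clone()
prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
labels[:, :prompt_len] = -100  # positions set to -100 are ignored by the loss

with torch.no_grad():
    loss = model(input_ids=ids, labels=labels).loss  # cross-entropy over completion tokens only
print(loss.item())
```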
|
bug7/longchat_960 | bug7 | 2024-06-26T13:04:09Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-26T13:03:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wadiea/voice_model | wadiea | 2024-06-26T13:04:09Z | 0 | 0 | null | ["region:us"] | null | 2024-06-26T13:04:09Z |
Entry not found
|
taric49/LLAMA3_Summarization_16k_2ep_b4g16_2024 | taric49 | 2024-06-26T13:07:01Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-26T13:05:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
schaturv/llama2-7b-key-value-pairings-adapter
|
schaturv
| 2024-06-26T13:26:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T13:07:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
scrfur/parukorvcmmodel
|
scrfur
| 2024-06-26T13:10:45Z | 0 | 0 | null |
[
"license:unknown",
"region:us"
] | null | 2024-06-26T13:08:06Z |
---
license: unknown
---
|
Darius07/UNER_subword_tk_en_lora_alpha_1024_drop_0.3_rank_512_seed_42
|
Darius07
| 2024-06-26T13:27:23Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"dataset:universalner/universal_ner",
"base_model:xlm-roberta-base",
"license:mit",
"model-index",
"region:us"
] | null | 2024-06-26T13:08:30Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- universalner/universal_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: UNER_subword_tk_en_lora_alpha_1024_drop_0.3_rank_512_seed_42
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: universalner/universal_ner en_ewt
type: universalner/universal_ner
config: en_ewt
split: validation
args: en_ewt
metrics:
- name: Precision
type: precision
value: 0.7731660231660231
- name: Recall
type: recall
value: 0.8291925465838509
- name: F1
type: f1
value: 0.8001998001998001
- name: Accuracy
type: accuracy
value: 0.9844128991212374
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UNER_subword_tk_en_lora_alpha_1024_drop_0.3_rank_512_seed_42
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the universalner/universal_ner en_ewt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0633
- Precision: 0.7732
- Recall: 0.8292
- F1: 0.8002
- Accuracy: 0.9844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
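For reference, these settings map roughly onto 🤗 Transformers `TrainingArguments` (a minimal sketch; the `output_dir` is illustrative and anything not listed above is left at its library default):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="uner_lora_run",   # illustrative path, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",   # Adam betas/epsilon above are the defaults
    num_train_epochs=35.0,
)
```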
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 392 | 0.1362 | 0.2922 | 0.3903 | 0.3342 | 0.9569 |
| 0.2046 | 2.0 | 784 | 0.0889 | 0.5868 | 0.6822 | 0.6309 | 0.9745 |
| 0.085 | 3.0 | 1176 | 0.0772 | 0.6687 | 0.7940 | 0.7260 | 0.9778 |
| 0.0591 | 4.0 | 1568 | 0.0692 | 0.7085 | 0.7950 | 0.7493 | 0.9802 |
| 0.0591 | 5.0 | 1960 | 0.0692 | 0.6894 | 0.8251 | 0.7512 | 0.9791 |
| 0.0496 | 6.0 | 2352 | 0.0664 | 0.6937 | 0.8157 | 0.7498 | 0.9791 |
| 0.0448 | 7.0 | 2744 | 0.0671 | 0.7007 | 0.8313 | 0.7604 | 0.9797 |
| 0.0409 | 8.0 | 3136 | 0.0674 | 0.7200 | 0.8147 | 0.7644 | 0.9814 |
| 0.0388 | 9.0 | 3528 | 0.0635 | 0.7306 | 0.8478 | 0.7849 | 0.9816 |
| 0.0388 | 10.0 | 3920 | 0.0620 | 0.7481 | 0.8209 | 0.7828 | 0.9832 |
| 0.0357 | 11.0 | 4312 | 0.0586 | 0.7758 | 0.8240 | 0.7992 | 0.9844 |
| 0.0333 | 12.0 | 4704 | 0.0611 | 0.7606 | 0.8354 | 0.7963 | 0.9840 |
| 0.0323 | 13.0 | 5096 | 0.0601 | 0.7819 | 0.8240 | 0.8024 | 0.9844 |
| 0.0323 | 14.0 | 5488 | 0.0638 | 0.7203 | 0.8292 | 0.7709 | 0.9812 |
| 0.0303 | 15.0 | 5880 | 0.0600 | 0.7737 | 0.8354 | 0.8034 | 0.9841 |
| 0.0293 | 16.0 | 6272 | 0.0602 | 0.7703 | 0.8333 | 0.8006 | 0.9841 |
| 0.0271 | 17.0 | 6664 | 0.0609 | 0.7634 | 0.8416 | 0.8006 | 0.9841 |
| 0.0269 | 18.0 | 7056 | 0.0641 | 0.7569 | 0.8478 | 0.7998 | 0.9835 |
| 0.0269 | 19.0 | 7448 | 0.0594 | 0.7793 | 0.8261 | 0.8020 | 0.9849 |
| 0.0263 | 20.0 | 7840 | 0.0608 | 0.7873 | 0.8199 | 0.8032 | 0.9850 |
| 0.025 | 21.0 | 8232 | 0.0606 | 0.7812 | 0.8240 | 0.8020 | 0.9850 |
| 0.0236 | 22.0 | 8624 | 0.0639 | 0.7558 | 0.8364 | 0.7941 | 0.9839 |
| 0.0228 | 23.0 | 9016 | 0.0620 | 0.7668 | 0.8375 | 0.8006 | 0.9845 |
| 0.0228 | 24.0 | 9408 | 0.0612 | 0.7647 | 0.8344 | 0.7980 | 0.9842 |
| 0.0229 | 25.0 | 9800 | 0.0618 | 0.7584 | 0.8385 | 0.7965 | 0.9839 |
| 0.0227 | 26.0 | 10192 | 0.0631 | 0.7678 | 0.8385 | 0.8016 | 0.9842 |
| 0.0216 | 27.0 | 10584 | 0.0628 | 0.7883 | 0.8364 | 0.8117 | 0.9850 |
| 0.0216 | 28.0 | 10976 | 0.0611 | 0.7765 | 0.8344 | 0.8044 | 0.9849 |
| 0.0203 | 29.0 | 11368 | 0.0615 | 0.7755 | 0.8406 | 0.8068 | 0.9847 |
| 0.02 | 30.0 | 11760 | 0.0629 | 0.7743 | 0.8344 | 0.8032 | 0.9847 |
| 0.0197 | 31.0 | 12152 | 0.0620 | 0.7763 | 0.8333 | 0.8038 | 0.9843 |
| 0.0197 | 32.0 | 12544 | 0.0633 | 0.7750 | 0.8271 | 0.8002 | 0.9845 |
| 0.0197 | 33.0 | 12936 | 0.0631 | 0.7813 | 0.8323 | 0.8060 | 0.9845 |
| 0.0192 | 34.0 | 13328 | 0.0629 | 0.7768 | 0.8323 | 0.8036 | 0.9845 |
| 0.0188 | 35.0 | 13720 | 0.0633 | 0.7732 | 0.8292 | 0.8002 | 0.9844 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
jazzxxx/my_awesome_mind_model
|
jazzxxx
| 2024-06-26T13:10:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:10:14Z |
Entry not found
|
jvv7/ppo-Huggy
|
jvv7
| 2024-06-26T13:10:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:10:33Z |
Entry not found
|
jjgerbo/stable-diffusion-embeddings-lora
|
jjgerbo
| 2024-06-27T14:05:35Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-26T13:12:07Z |
---
license: mit
---
|
raidelcarballo/Arboles
|
raidelcarballo
| 2024-06-26T13:19:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:19:35Z |
Entry not found
|
anushaporwal/wav2vec2-common_voice-tr-demo
|
anushaporwal
| 2024-07-01T11:31:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-26T13:21:15Z |
Entry not found
|
m-faraz-ali/kaggle2
|
m-faraz-ali
| 2024-06-26T13:22:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:22:41Z |
Entry not found
|
anhnguyen1010/QWEN-7B-Instruct-Elementary-Math
|
anhnguyen1010
| 2024-06-26T13:25:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:25:21Z |
Entry not found
|
stewhsource/GovernmentGPT
|
stewhsource
| 2024-06-26T14:22:30Z | 0 | 1 | null |
[
"tensorboard",
"politics",
"debate",
"text-generation",
"en",
"base_model:mistralai/Mistral-7B-v0.3",
"license:mit",
"region:us"
] |
text-generation
| 2024-06-26T13:27:10Z |
---
license: mit
base_model: mistralai/Mistral-7B-v0.3
language:
- en
tags:
- politics
- debate
pipeline_tag: text-generation
---
# GovernmentGPT
_An LLM fine-tuned on the British Commons Parliamentary Hansard to debate political topics the way Members of Parliament do._
I wanted to see whether we can teach an LLM to do the job of elected British Members of Parliament (MPs) and debate any issue like they do in the House of Commons.
GovernmentGPT is an LLM fine-tuned with a LoRA adapter. This git repo contains all the code and data necessary to build the datasets, perform fine-tuning and do inference: https://github.com/stewhsource/GovernmentGPT/
If you're looking to see an interesting end-to-end example of an LLM fine-tuning pipeline on real-world data, then look no further!
The key parts of the data processing pipeline are described in the following sections:
## Raw Data Extraction
The raw Hansard transcript and speaker data needed to create the training datasets sit in a few places and need to be processed and linked together before the final training dataset can be prepared. We only used Hansard data from 1997 onwards because it was easiest to link to the speaker data. The code to do that is here: https://github.com/stewhsource/GovernmentGPT/tree/main/DatasetPreparation.
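Conceptually, the linking step boils down to a join between transcript rows and speaker metadata; a toy sketch (all file and field names here are hypothetical, the real schema lives in the DatasetPreparation code above):

```python
import pandas as pd

# Hypothetical inputs for illustration only.
speeches = pd.read_json("hansard_speeches.jsonl", lines=True)  # one speech per line
speakers = pd.read_json("speaker_metadata.json")               # affiliation, roles, ...

# Attach speaker metadata to each speech via a shared identifier.
linked = speeches.merge(speakers, on="person_id", how="left")
```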
## Training Dataset Preparation
The code samples 'sequences' of real British Commons Parliamentary Hansard debate transcripts. It attaches the speaker data (e.g. affiliation, location, additional roles such as committee memberships), and then structures it in a format ready for LLM fine-tuning. It strips dates, MP names and some numeric linking identifiers present in the text, to keep the LLM from reproducing them and to reduce bias. There is much more work that could be done to aid generalisability in this regard.
You can download the final prepared JSONL datasets ready for fine-tuning here:
- [100k instances (700mb compressed)](https://stewh-publicdata.s3.eu-west-2.amazonaws.com/governmentgpt/2024-06-07/datasets/HansardSequences_100k.big.txt.zip)
- [250k instances (1.7gb compressed)](https://stewh-publicdata.s3.eu-west-2.amazonaws.com/governmentgpt/2024-06-07/datasets/HansardSequences_250k.big.txt.zip)
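As a minimal sketch, one of these dumps can be loaded with the 🤗 `datasets` library once unzipped (the filename below follows the 100k link above; the record fields themselves depend on the preparation code):

```python
from datasets import load_dataset

# Each line of the unzipped file is one JSON training instance.
dataset = load_dataset(
    "json",
    data_files="HansardSequences_100k.big.txt",
    split="train",
)
print(dataset[0])  # inspect a single prepared debate sequence
```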
## Fine-tuning
All code for fine-tuning is in this [notebook](https://github.com/stewhsource/GovernmentGPT/blob/main/FineTuning/GovernmentGPT_FineTune_Mistral_7b.ipynb). You can easily run this on your local machine if it has a GPU, or on Google Colab.
## LLM Adapter
The Mistral 7b v0.3 adapter is available for download here on HuggingFace, ready for you to plug into your own inference pipeline.
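A minimal loading sketch, assuming the adapter in this repository can be attached with PEFT (the repo ids are as published; dtype and device placement are illustrative choices):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model the adapter was trained on.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.3",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.3")

# Attach the GovernmentGPT LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "stewhsource/GovernmentGPT")
```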
## Inference
You can run the fine-tuned model easily to generate your own debates using this [notebook](https://github.com/stewhsource/GovernmentGPT/blob/main/Inference/GovernmentGPT_Inference.ipynb). As with fine-tuning, you can easily run this on your local machine if it has a GPU, or on Google Colab.
## Acknowledgements
This work has been made possible through the hard work of others - thank you.
*Parliamentary Hansard data*
We make heavy use of [British Commons Parliamentary Hansard](https://hansard.parliament.uk) data. While this data is openly available to use, a number of individuals and organisations have kindly worked hard to make this data more accessible for machine processing:
- [mySociety](https://www.mysociety.org) (eg their data in: https://github.com/mysociety/parlparse/blob/master/members/ministers-2010.json)
- [mySociety TheyWorkForYou](https://www.theyworkforyou.com) - Data APIs and dumps at https://data.theyworkforyou.com
- [Parlparse](https://github.com/mysociety/parlparse) - Extracting structured data from the published Hansard
- [Government datasets](https://www.parliament.uk/business/publications/research/parliament-facts-and-figures/members-of-parliament/)
|
Flamenco43/FSDP-2
|
Flamenco43
| 2024-06-26T13:41:19Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-06-26T13:30:54Z |
---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: FSDP-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FSDP-2
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6596
- Accuracy: 0.633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7834 | 1.0 | 625 | 0.6586 | 0.633 |
| 0.7112 | 2.0 | 1250 | 0.6596 | 0.633 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Darius07/UNER_subword_tk_en_lora_alpha_512_drop_0.3_rank_256_seed_42_lr_3e-5
|
Darius07
| 2024-06-26T13:42:36Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"dataset:universalner/universal_ner",
"base_model:xlm-roberta-base",
"license:mit",
"model-index",
"region:us"
] | null | 2024-06-26T13:31:20Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- universalner/universal_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: UNER_subword_tk_en_lora_alpha_512_drop_0.3_rank_256_seed_42_lr_3e-5
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: universalner/universal_ner en_ewt
type: universalner/universal_ner
config: en_ewt
split: validation
args: en_ewt
metrics:
- name: Precision
type: precision
value: 0.7810361681329423
- name: Recall
type: recall
value: 0.8271221532091098
- name: F1
type: f1
value: 0.8034188034188033
- name: Accuracy
type: accuracy
value: 0.9842538470714541
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UNER_subword_tk_en_lora_alpha_512_drop_0.3_rank_256_seed_42_lr_3e-5
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the universalner/universal_ner en_ewt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0619
- Precision: 0.7810
- Recall: 0.8271
- F1: 0.8034
- Accuracy: 0.9843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 392 | 0.1040 | 0.4625 | 0.5870 | 0.5173 | 0.9677 |
| 0.168 | 2.0 | 784 | 0.0696 | 0.7047 | 0.7733 | 0.7374 | 0.9789 |
| 0.0614 | 3.0 | 1176 | 0.0695 | 0.7149 | 0.8023 | 0.7561 | 0.9807 |
| 0.0471 | 4.0 | 1568 | 0.0629 | 0.7233 | 0.8064 | 0.7626 | 0.9812 |
| 0.0471 | 5.0 | 1960 | 0.0637 | 0.7037 | 0.8261 | 0.76 | 0.9801 |
| 0.0408 | 6.0 | 2352 | 0.0594 | 0.7354 | 0.8199 | 0.7753 | 0.9823 |
| 0.036 | 7.0 | 2744 | 0.0623 | 0.7397 | 0.8209 | 0.7782 | 0.9820 |
| 0.0327 | 8.0 | 3136 | 0.0601 | 0.7686 | 0.8219 | 0.7944 | 0.9846 |
| 0.03 | 9.0 | 3528 | 0.0570 | 0.7678 | 0.8251 | 0.7954 | 0.9839 |
| 0.03 | 10.0 | 3920 | 0.0588 | 0.7765 | 0.8199 | 0.7976 | 0.9847 |
| 0.0271 | 11.0 | 4312 | 0.0573 | 0.7671 | 0.8251 | 0.7950 | 0.9835 |
| 0.0252 | 12.0 | 4704 | 0.0595 | 0.7776 | 0.8323 | 0.804 | 0.9849 |
| 0.0245 | 13.0 | 5096 | 0.0578 | 0.7858 | 0.8240 | 0.8044 | 0.9844 |
| 0.0245 | 14.0 | 5488 | 0.0596 | 0.7646 | 0.8271 | 0.7946 | 0.9836 |
| 0.0224 | 15.0 | 5880 | 0.0600 | 0.7869 | 0.8219 | 0.8041 | 0.9844 |
| 0.0216 | 16.0 | 6272 | 0.0616 | 0.7786 | 0.8230 | 0.8002 | 0.9841 |
| 0.02 | 17.0 | 6664 | 0.0615 | 0.7804 | 0.8313 | 0.8050 | 0.9847 |
| 0.0199 | 18.0 | 7056 | 0.0626 | 0.7727 | 0.8271 | 0.7990 | 0.9840 |
| 0.0199 | 19.0 | 7448 | 0.0621 | 0.7747 | 0.8292 | 0.801 | 0.9841 |
| 0.0193 | 20.0 | 7840 | 0.0619 | 0.7810 | 0.8271 | 0.8034 | 0.9843 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
lit9003code/melotts218
|
lit9003code
| 2024-06-26T13:31:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:31:35Z |
Entry not found
|
lit9003code/melotts220
|
lit9003code
| 2024-06-26T13:34:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:33:27Z |
Entry not found
|
Ikblox/Ikblox
|
Ikblox
| 2024-06-26T13:35:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:35:23Z |
Entry not found
|
lit9003code/melotts221
|
lit9003code
| 2024-06-26T13:37:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:36:06Z |
Entry not found
|
lit9003code/melotts222
|
lit9003code
| 2024-06-26T13:38:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:38:39Z |
Entry not found
|
lit9003code/melotts223
|
lit9003code
| 2024-06-26T13:41:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:40:14Z |
Entry not found
|
lit9003code/melotts224
|
lit9003code
| 2024-06-26T13:43:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:42:50Z |
Entry not found
|
vilkahyilka/go
|
vilkahyilka
| 2024-06-26T13:42:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:42:53Z |
Entry not found
|
Darius07/UNER_subword_tk_en_lora_alpha_512_drop_0.2_rank_256_seed_42_lr_3e-5
|
Darius07
| 2024-06-27T20:10:14Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"dataset:universalner/universal_ner",
"base_model:xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2024-06-26T13:42:58Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- universalner/universal_ner
model-index:
- name: UNER_subword_tk_en_lora_alpha_512_drop_0.2_rank_256_seed_42_lr_3e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UNER_subword_tk_en_lora_alpha_512_drop_0.2_rank_256_seed_42_lr_3e-5
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the universalner/universal_ner en_ewt dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.6475
- eval_precision: 0.0045
- eval_recall: 0.0269
- eval_f1: 0.0077
- eval_accuracy: 0.3062
- eval_runtime: 1.3804
- eval_samples_per_second: 1449.554
- eval_steps_per_second: 45.638
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
lit9003code/melotts225
|
lit9003code
| 2024-06-26T13:44:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:44:12Z |
Entry not found
|
lit9003code/melotts226
|
lit9003code
| 2024-06-26T13:46:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:45:37Z |
Entry not found
|
Fakeacc007/GPT
|
Fakeacc007
| 2024-06-26T13:46:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:46:38Z |
Entry not found
|
gustavomacedo/Llama_3_Canarim
|
gustavomacedo
| 2024-06-26T13:47:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T13:47:03Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** gustavomacedo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jjsprockel/Patologia_lora_model1
|
jjsprockel
| 2024-06-27T14:01:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T13:47:27Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# LLaMA-Based LLM Fine-Tuned for the Pathology Domain
First version of an LLM fine-tuned to answer pathology questions.
# Uploaded model
- **Developed by:** jjsprockel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
**Download code:**
The following is the suggested code for downloading the model using Unsloth:
```python
import torch
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "jjsprockel/Patologia_lora_model1",
    max_seq_length = 2048, # Choose any! Llama 3 is up to 8k
    dtype = None,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""
```
**Inference code:**
The following code shows how inference can be carried out.
```python
from transformers import TextStreamer

instruction = input("Ingresa la pregunta que tengas de Patología: ")

inputs = tokenizer(
    [
        alpaca_prompt.format(
            instruction, # instruction
            "", # input
            "", # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 2048)
```
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lit9003code/melotts227
|
lit9003code
| 2024-06-26T13:48:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:48:03Z |
Entry not found
|
Adam3/Tim-The-Baldhead-V2
|
Adam3
| 2024-06-26T13:50:12Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-06-26T13:49:02Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/1001111177.jpg
- text: '-'
output:
url: images/1001130185.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# Tim The Baldhead V2
<Gallery />
## Download model
[Download](/Adam3/Tim-The-Baldhead-V2/tree/main) them in the Files & versions tab.
|
konstantindobler/mistral7b-de-tokenizer-swap-pure-bf16-v2
|
konstantindobler
| 2024-06-26T13:51:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"de",
"dataset:uonlp/CulturaX",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-26T13:49:37Z |
---
language: de
license: apache-2.0
datasets: uonlp/CulturaX
---
# mistral7b-de-tokenizer-swap-pure-bf16-v2
Mistral-7B-v0.1 adapted to German as part of our study on efficient language adaptation: "Language Adaptation on a Tight Academic Compute Budget: Tokenizer Swapping Works and Pure bfloat16 Is Enough".
Code: https://github.com/konstantinjdobler/tight-budget-llm-adaptation
Paper: https://openreview.net/forum?id=VYfJaHeVod
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("konstantindobler/mistral7b-de-tokenizer-swap-pure-bf16-v2")
model = AutoModelForCausalLM.from_pretrained("konstantindobler/mistral7b-de-tokenizer-swap-pure-bf16-v2")
# Use model and tokenizer as usual
```
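A short generation example to follow on from the snippet above (the German prompt and sampling settings are illustrative assumptions; since training used pure bfloat16, you may also want to pass `torch_dtype=torch.bfloat16` to `from_pretrained`):

```python
inputs = tokenizer("Die Hauptstadt von Deutschland ist", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```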
## Details
The model is based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and was adapted to German.
The original tokenizer was replaced by a language-specific German tokenizer with a vocabulary of 32768 tokens. The new embeddings were initialized with [FOCUS](https://github.com/konstantinjdobler/focus).
The model was then trained on 8 billion German tokens from [uonlp/CulturaX](https://huggingface.co/uonlp/CulturaX) with pure bfloat16 precision (no mixed precision). More details and hyperparameters can be found [in the paper](https://openreview.net/forum?id=VYfJaHeVod).
## Disclaimer
The web-scale dataset used for pretraining and tokenizer training ([uonlp/CulturaX](https://huggingface.co/uonlp/CulturaX)) might contain personal and sensitive information, which the model could memorize and reproduce.
This needs to be assessed carefully before any real-world deployment of the model.
## Citation
Please cite as follows:
```bibtex
@inproceedings{dobler2024language,
title={Language Adaptation on a Tight Academic Compute Budget: Tokenizer Swapping Works and Pure bfloat16 Is Enough},
author={Konstantin Dobler and Gerard de Melo},
booktitle={2nd Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024)},
year={2024},
url={https://openreview.net/forum?id=VYfJaHeVod}
}
```
|
lit9003code/melotts228
|
lit9003code
| 2024-06-26T13:49:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:49:38Z |
Entry not found
|
spjabech/Twitch_Highlighter_audio_phi
|
spjabech
| 2024-06-26T13:49:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:49:57Z |
Entry not found
|
lit9003code/melotts229
|
lit9003code
| 2024-06-26T13:51:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:51:01Z |
Entry not found
|
lit9003code/melotts230
|
lit9003code
| 2024-06-26T13:52:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:52:14Z |
Entry not found
|
lit9003code/melotts231
|
lit9003code
| 2024-06-26T13:53:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:53:36Z |
Entry not found
|
FevenTad/v1_0.65_Base
|
FevenTad
| 2024-06-26T17:16:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T13:54:02Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
safdarzeeshan/example-model
|
safdarzeeshan
| 2024-06-26T14:07:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:54:49Z |
---
license: mit
---
# This is my first HF
|
lit9003code/melotts232
|
lit9003code
| 2024-06-26T13:55:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:55:14Z |
Entry not found
|
vic1215/sft_openassistant-guanaco
|
vic1215
| 2024-06-26T13:55:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:55:15Z |
Entry not found
|
Temo27Anas/videomae-base-finetuned-ucf101-subset-1
|
Temo27Anas
| 2024-06-26T13:55:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T13:55:52Z |
Entry not found
|
geraldabrhm/llama-3-8b-regular-complexcontext-32lora-lr8_5
|
geraldabrhm
| 2024-06-26T14:48:22Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-06-26T13:56:24Z |
Entry not found
|
Loren85/Domenico-Bini
|
Loren85
| 2024-06-26T14:00:06Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-26T13:57:57Z |
---
license: openrail
---
|
spacejot/inspect_server_conditions
|
spacejot
| 2024-06-27T08:13:43Z | 0 | 0 | null |
[
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T13:58:14Z |
Entry not found
|
Valeille/David
|
Valeille
| 2024-06-26T14:00:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T14:00:05Z |
Entry not found
|
lit9003code/melotts219
|
lit9003code
| 2024-06-26T14:01:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T14:00:42Z |
Entry not found
|
Vare/mist4
|
Vare
| 2024-06-26T14:04:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T14:04:56Z |
Entry not found
|
agrajpaudel/corgy_dog_LoRA
|
agrajpaudel
| 2024-06-26T14:11:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T14:11:41Z |
Entry not found
|