pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1–900k) | metadata (stringlengths 2–438k) | id (stringlengths 5–122) | last_modified (null) | tags (sequencelengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25) | arxiv (sequencelengths 0–201) | languages (sequencelengths 0–1.83k) | tags_str (stringlengths 17–9.34k) | text_str (stringlengths 0–389k) | text_lists (sequencelengths 0–722) | processed_texts (sequencelengths 1–723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation | transformers | # GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
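As a quick illustration, a standard `transformers` loading sketch is shown below. The repository id is taken from this card, but everything else (dtype, device map, prompt) is an assumption, and the low-bit layer-mix checkpoints may require the custom kernels from the green-bit-llm repository linked above rather than a vanilla `transformers` load.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: low-bit GreenBit checkpoints may need the green-bit-llm
# toolkit's kernels; see the GitHub page above for the supported path.
model_id = "GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision on a GPU
    device_map="auto",
)

inputs = tokenizer("What is weight quantization?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```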
| **Repository (Llama 3 Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:----------------------------------------|:------------:|:----------:|:---------:|:-----------:|:-----------:|:---------:|:--------:|:---------:|:--------:|:-----------:|:-----------:|:-----------:|:-------:|
| `Llama-3-8B-layer-mix-bpw-2.2` | 0.499 | 0.302 | 0.739 | 0.674 | 0.509 | 0.396 | 0.725 | 0.743 | 0.406 | 0.327 | 0.337 | 0.340 | 0.500 |
| `Llama-3-8B-layer-mix-bpw-2.5` | 0.506 | 0.298 | 0.760 | 0.684 | 0.513 | 0.418 | 0.744 | 0.756 | 0.389 | 0.335 | 0.335 | 0.335 | 0.509 |
| `Llama-3-8B-layer-mix-bpw-3.0` | 0.523 | 0.318 | 0.770 | 0.708 | 0.540 | 0.441 | 0.767 | 0.784 | 0.407 | 0.333 | 0.345 | 0.343 | 0.526 |
| `Llama-3-8B-layer-mix-bpw-4.0` | 0.542 | 0.338 | 0.791 | 0.729 | 0.591 | 0.484 | 0.797 | 0.799 | 0.398 | 0.337 | 0.345 | 0.352 | 0.545 |
| `Llama-3-8B-instruct-layer-mix-bpw-2.2` | 0.514 | 0.292 | 0.645 | 0.672 | 0.499 | 0.367 | 0.698 | 0.775 | 0.423 | 0.417 | 0.424 | 0.398 | 0.565 |
| `Llama-3-8B-instruct-layer-mix-bpw-2.5` | 0.528 | 0.304 | 0.741 | 0.681 | 0.512 | 0.412 | 0.749 | 0.798 | 0.425 | 0.417 | 0.410 | 0.390 | 0.498 |
| `Llama-3-8B-instruct-layer-mix-bpw-3.0` | 0.547 | 0.316 | 0.787 | 0.690 | 0.530 | 0.459 | 0.768 | 0.800 | 0.437 | 0.435 | 0.417 | 0.387 | 0.548 |
| `Llama-3-8B-instruct-layer-mix-bpw-4.0` | 0.576 | 0.344 | 0.808 | 0.716 | 0.569 | 0.513 | 0.778 | 0.825 | 0.449 | 0.462 | 0.449 | 0.432 | 0.578 | | {"license": "apache-2.0"} | GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:50:59+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| GreenBit LLMs
=============
These are GreenBitAI's pretrained low-bit LLMs, offering extreme compression while retaining strong performance.
Please refer to our GitHub page for the code to run the model and more information.
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | # GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
| **Repository (Llama 3 Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:----------------------------------------|:------------:|:----------:|:---------:|:-----------:|:-----------:|:---------:|:--------:|:---------:|:--------:|:-----------:|:-----------:|:-----------:|:-------:|
| `Llama-3-8B-layer-mix-bpw-2.2` | 0.499 | 0.302 | 0.739 | 0.674 | 0.509 | 0.396 | 0.725 | 0.743 | 0.406 | 0.327 | 0.337 | 0.340 | 0.500 |
| `Llama-3-8B-layer-mix-bpw-2.5` | 0.506 | 0.298 | 0.760 | 0.684 | 0.513 | 0.418 | 0.744 | 0.756 | 0.389 | 0.335 | 0.335 | 0.335 | 0.509 |
| `Llama-3-8B-layer-mix-bpw-3.0` | 0.523 | 0.318 | 0.770 | 0.708 | 0.540 | 0.441 | 0.767 | 0.784 | 0.407 | 0.333 | 0.345 | 0.343 | 0.526 |
| `Llama-3-8B-layer-mix-bpw-4.0` | 0.542 | 0.338 | 0.791 | 0.729 | 0.591 | 0.484 | 0.797 | 0.799 | 0.398 | 0.337 | 0.345 | 0.352 | 0.545 |
| `Llama-3-8B-instruct-layer-mix-bpw-2.2` | 0.514 | 0.292 | 0.645 | 0.672 | 0.499 | 0.367 | 0.698 | 0.775 | 0.423 | 0.417 | 0.424 | 0.398 | 0.565 |
| `Llama-3-8B-instruct-layer-mix-bpw-2.5` | 0.528 | 0.304 | 0.741 | 0.681 | 0.512 | 0.412 | 0.749 | 0.798 | 0.425 | 0.417 | 0.410 | 0.390 | 0.498 |
| `Llama-3-8B-instruct-layer-mix-bpw-3.0` | 0.547 | 0.316 | 0.787 | 0.690 | 0.530 | 0.459 | 0.768 | 0.800 | 0.437 | 0.435 | 0.417 | 0.387 | 0.548 |
| `Llama-3-8B-instruct-layer-mix-bpw-4.0` | 0.576 | 0.344 | 0.808 | 0.716 | 0.569 | 0.513 | 0.778 | 0.825 | 0.449 | 0.462 | 0.449 | 0.432 | 0.578 | | {"license": "apache-2.0"} | GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-3.0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:51:09+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| GreenBit LLMs
=============
These are GreenBitAI's pretrained low-bit LLMs, offering extreme compression while retaining strong performance.
Please refer to our GitHub page for the code to run the model and more information.
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification | setfit |
# SetFit with sentence-transformers/paraphrase-MiniLM-L3-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
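A minimal sketch of these two phases with the `setfit` 1.0 API is shown below; the tiny inline dataset and its label ids are illustrative stand-ins, not the actual training data behind this card.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Stand-in few-shot dataset (illustrative labels, not this card's data).
train_ds = Dataset.from_dict({
    "text": [
        "Tiong Bahru Plaza, SC-10, Chiller SC, Header CHWR Temp",
        "Tiong Bahru Plaza, DDC-L1-3, AHU-L2-02 modulating valve feedback",
    ],
    "label": [28, 6],
})

# Defaults to a scikit-learn LogisticRegression classification head.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-MiniLM-L3-v2")

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=1),
    train_dataset=train_ds,
)
trainer.train()  # phase 1: contrastive fine-tuning; phase 2: head fitting
```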
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 47 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 28 | <ul><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Header CHWR Temp'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Header CHWR Temperature'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Header CHWR Temp'</li></ul> |
| 6 | <ul><li>'Tiong Bahru Plaza, DDC-L1-3, AHU-L2-02 modulating valve feedback'</li><li>'Tiong Bahru Plaza, DDC-L2-2, AHU-L2-05 modulating valve feedback'</li><li>'Tiong Bahru Plaza, DDC-L2-5, PAU-L2-04 modulating valve feedback'</li></ul> |
| 15 | <ul><li>'Tiong Bahru Plaza, VAV 19-6, Discharge Air Flow (Units: m3/h)'</li><li>'Tiong Bahru Plaza, VAV 19-22, Discharge Air Flow (Units: m3/h)'</li><li>'Tiong Bahru Plaza, VAV 19-7, Discharge Air Flow (Units: m3/h)'</li></ul> |
| 43 | <ul><li>'Tiong Bahru Plaza, UC800_101001_Chiller_1, Chilled Water Setpoint (Units: °C)'</li><li>'Tiong Bahru Plaza, UC800_101001_CH_1, Chilled Water Setpoint (Units: °C)'</li><li>'Tiong Bahru Plaza, UC800_101001_Chiller_1, Chilled Water Setpoint (Units: °C)'</li></ul> |
| 4 | <ul><li>'Tiong Bahru Plaza, DDC L12, AHU 10-1 FLOW'</li><li>'Tiong Bahru Plaza, DDC L14-1, AHU 12-1 Flow (Units: Pa)'</li><li>'Tiong Bahru Plaza, DDC-L6, AHU 5-3 Flow'</li></ul> |
| 0 | <ul><li>'Tiong Bahru Plaza, DDC-L20, Co2 Level 18'</li><li>'Tiong Bahru Plaza, DDC L14-1, AHU 15-1 CO2 Reading (Units: ppm).1'</li><li>'Tiong Bahru Plaza, DDC-L6, AHU 4-1 CO2.1'</li></ul> |
| 10 | <ul><li>'Tiong Bahru Plaza, DDC-9-1, AHU7-1 Start/Stop Control'</li><li>'Tiong Bahru Plaza, DDC L14-1, AHU13-1 Start/Stop'</li><li>'Tiong Bahru Plaza, DDC-L1-4, PAU-L1-05 Start/Stop Control'</li></ul> |
| 40 | <ul><li>'Tiong Bahru Plaza, VAV-19-3, Air Valve Position (Units: %).1'</li><li>'Tiong Bahru Plaza, VAV 19-11, Air Valve Position (Units: %)'</li><li>'Tiong Bahru Plaza, VAV-19-20, Air Valve Position (Units: %)'</li></ul> |
| 26 | <ul><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Condenser Water Pump 1 KW'</li><li>'Tiong Bahru Plaza, LSB-SR-1, Active Power kW'</li><li>'Tiong Bahru Plaza, TBP_UNO_Server, CH1:Kilowatt'</li></ul> |
| 5 | <ul><li>'Tiong Bahru Plaza, DDC-9-1, AHU 9-1 Valve Control (Units: %)'</li><li>'Tiong Bahru Plaza, DDC-L2-5, PAU-L2-04 valve control (Units: %)'</li><li>'Tiong Bahru Plaza, DDC-L1-4, PAU-L1-05 valve control (Units: %)'</li></ul> |
| 2 | <ul><li>'Tiong Bahru Plaza, DDC-L20, L19 Fresh air damper feedback (Units: %)'</li><li>'Tiong Bahru Plaza, DDC-L1-3, AHU-L2-02 FAD feedback'</li><li>'Tiong Bahru Plaza, DDC L14-1, AHU 13-1 FAD Feedback'</li></ul> |
| 32 | <ul><li>'CT 3-1 Switch Mode'</li><li>'Tiong Bahru Plaza, UC800_3, Operating Mode'</li><li>'Tiong Bahru Plaza, UC800_102002_Chiller_2, Operating Mode'</li></ul> |
| 34 | <ul><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Chiller 4 CHWR Temperature'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Chiller 3 CHWR Temperature'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Chiller 1 CHWR Temperature'</li></ul> |
| 24 | <ul><li>'Tiong Bahru Plaza, DDC-L1-5, PAU-L1-06 VSD control'</li><li>'Tiong Bahru Plaza, DDC-L3-01, PAU-L3-03 VSD control'</li><li>'Tiong Bahru Plaza, DDC L12, AHU 11-1 VSD CONTROL'</li></ul> |
| 39 | <ul><li>'Tiong Bahru Plaza, UC800_102004, Cond Leaving Water Temp (Units: °C)'</li><li>'Tiong Bahru Plaza, TBP_UNO_Server, CH1_CWRT'</li><li>'Tiong Bahru Plaza, UC800_102005, Cond Entering Water Temp (Units: °C)'</li></ul> |
| 13 | <ul><li>'Tiong Bahru Plaza, DDC L14-1, AHU 14-1 TRIP ALARM'</li><li>'Tiong Bahru Plaza, DDC B1-3, Pau-B1-02-Trip Alarm'</li><li>'Tiong Bahru Plaza, DDC-B1-5, AHU-B1-2-Trip'</li></ul> |
| 9 | <ul><li>'Tiong Bahru Plaza, DDC-L3-2, AHU-6-2A VSD feedback'</li><li>'Tiong Bahru Plaza, DDC-L1-5, AHU-L3-04A VSD feedback'</li><li>'Tiong Bahru Plaza, DDC-L1-1, PAU-L1-01 VSD feedback'</li></ul> |
| 17 | <ul><li>'Tiong Bahru Plaza, DDC-L1-3, PAU-L1-02 switch mode'</li><li>'Tiong Bahru Plaza, DDC-L2-6, PAU-L2-01 switch mode'</li><li>'Tiong Bahru Plaza, DDC B1-3, Pau-B1-02-Switch Mode'</li></ul> |
| 14 | <ul><li>'Tiong Bahru Plaza, VAV 19-9, Space Temperature (Units: °C)'</li><li>'Tiong Bahru Plaza, VAV 19-11, Space Temperature (Units: °C)'</li><li>'Tiong Bahru Plaza, VAV-19-3, Space Temperature (Units: °C)'</li></ul> |
| 44 | <ul><li>'Tiong Bahru Plaza, DDC-CH-4, Cooling Tower 4-1 VSD Control'</li><li>'Tiong Bahru Plaza, DDC-CH-4, CWP 4 VSD Control'</li><li>'Tiong Bahru Plaza, DDC-CH-1, CHWP 1 VSD Control'</li></ul> |
| 27 | <ul><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Chiller 4 CHW Flowrate'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Header CW Flowrate'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Chiller 1 CHW Flowrate'</li></ul> |
| 21 | <ul><li>'Tiong Bahru Plaza, VAV 19-7, Active Setpoint (Units: °C)'</li><li>'Tiong Bahru Plaza, VAV 19-12, Active Setpoint (Units: °C)'</li><li>'Tiong Bahru Plaza, VAV-19-20, Active Setpoint (Units: °C)'</li></ul> |
| 7 | <ul><li>'Tiong Bahru Plaza, DDC-L2-5, AHU-L2-03 returm air temperature (Units: °C)'</li><li>'Tiong Bahru Plaza, DDC-L1-5, AHU-L3-04A returm air temperature (Units: °C)'</li><li>'Tiong Bahru Plaza, DDC-L1-1, AHU-L1-01 returm air temperature (Units: °C)'</li></ul> |
| 29 | <ul><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Head CWS Temp'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Header CWS Temp'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Header CWS Temperature'</li></ul> |
| 1 | <ul><li>'Tiong Bahru Plaza, DDC B1-4, PAU-B1-1 Static pressure'</li><li>'Tiong Bahru Plaza, DDC-L3-2, PAU-L3-01 static pressure (Units: Pa)'</li><li>'Tiong Bahru Plaza, DDC-L1-4, PAU-L1-05 Static pressure (Units: Pa)'</li></ul> |
| 46 | <ul><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Header Differential Pressure'</li><li>' Chiller SC:Header Differential Pressure'</li></ul> |
| 11 | <ul><li>'Tiong Bahru Plaza, DDC-L1-4, PAU-L1-03 on/off status'</li><li>'Tiong Bahru Plaza, DDC-9-1, AHU 6-3 On/Off Status'</li><li>'Tiong Bahru Plaza, DDC-9-1, AHU 8-1 On/Off Status'</li></ul> |
| 33 | <ul><li>'Tiong Bahru Plaza, UC800_3, Evaporator Refrigerant Pressure - Circuit 1 (Units: Pa)'</li><li>'Tiong Bahru Plaza, UC800_102004, Cond Saturated Refrigerant Temp Sensor Chiller'</li><li>'Tiong Bahru Plaza, UC800_3, Condenser Saturated Refrigerant Temperature Circuit 1 (Units: °C)'</li></ul> |
| 8 | <ul><li>'Tiong Bahru Plaza, DDC L4-1, PAU-L4-03 supply air temperature (Units: °C)'</li><li>'Tiong Bahru Plaza, DDC L4-1, PAU-L4-03 supply air temperature (Units: °C).3'</li><li>'Tiong Bahru Plaza, DDC L4-1, PAU-L4-02 supply air temperature (Units: °C).1'</li></ul> |
| 16 | <ul><li>'Tiong Bahru Plaza, VAV 19-12, Discharge Air Temperature (Units: °C)'</li><li>'Tiong Bahru Plaza, VAV 19-5, Discharge Air Temperature (Units: °C)'</li><li>'Tiong Bahru Plaza, VAV 19-14, Discharge Air Temperature (Units: °C)'</li></ul> |
| 35 | <ul><li>'Tiong Bahru Plaza, DDC-CH-2, CH 2 CHWS TEMPERATURE (Units: °C)'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Chiller 4 CHWS Temperature'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Chiller 3 CHWS Temperature'</li></ul> |
| 30 | <ul><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Head CWR Temp'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Header CWR Temp'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Header CWR Temperature'</li></ul> |
| 3 | <ul><li>'Tiong Bahru Plaza, DDC-L2-2, AHU-L2-05 FAD control (Units: %)'</li><li>'Tiong Bahru Plaza, DDC L14-1, AHU 14-1 FAD Control (Units: %)'</li><li>'Tiong Bahru Plaza, DDC-L6, AHU 4-1 FAD Control'</li></ul> |
| 20 | <ul><li>'Tiong Bahru Plaza, VAV 19-6, Air Flow Setpoint Active (Units: m3/h).1'</li><li>'Tiong Bahru Plaza, VAV-19-20, Air Flow Setpoint Active (Units: m3/h)'</li><li>'Tiong Bahru Plaza, VAV 19-22, Air Flow Setpoint Active (Units: m3/h)'</li></ul> |
| 23 | <ul><li>'Tiong Bahru Plaza, DDC-L20, PAHU TR-1 TEMPERATURE (Units: °C)'</li></ul> |
| 45 | <ul><li>'Tiong Bahru Plaza, DDC-CH-4, Chiller 4 VSD Feedback'</li><li>'Tiong Bahru Plaza, DDC-CH-4, CHWP 4 VSD Feedback'</li><li>'Tiong Bahru Plaza, DDC-CH-4, CT 4-1 VSD Feedback'</li></ul> |
| 41 | <ul><li>'Tiong Bahru Plaza, SC-10, Chiller SC, System Cooling Load'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Total System Efficiency'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Total System Heat Balance'</li></ul> |
| 25 | <ul><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Header CHWS Temperature'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Header CHWS Temp'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, Header CHWS Temp'</li></ul> |
| 36 | <ul><li>'Tiong Bahru Plaza, UC800_3, Entering Condenser Water (Units: °C)'</li><li>'Tiong Bahru Plaza, UC800_101001_Chiller_1, Entering Condenser Water (Units: °C)'</li><li>'Tiong Bahru Plaza, UC800_102002_Chiller_2, Entering Condenser Water (Units: °C)'</li></ul> |
| 12 | <ul><li>'Tiong Bahru Plaza, DDC-L20, PAHU TR-1(R-1) SMOKE ALARM'</li><li>'Tiong Bahru Plaza, DDC-L17, AHU 16-1 Smoke Alarm'</li><li>'Tiong Bahru Plaza, DDC L12, AHU 10-1 SMOKE ALARM'</li></ul> |
| 42 | <ul><li>'Tiong Bahru Plaza, TBP Chiller Plant, Chilled Water Setpoint (Units: °C)'</li><li>'Tiong Bahru Plaza, TBP Chiller Plant, Chilled Water Setpoint (Units: °C)'</li></ul> |
| 37 | <ul><li>'Wet Bulb Temperature'</li><li>'Tiong Bahru Plaza, SC-4, wet bulb'</li></ul> |
| 31 | <ul><li>'Tiong Bahru Plaza, UC800_102005, Cond Water Flow'</li><li>'Tiong Bahru Plaza, UC800_102004, Condenser Water Flow'</li><li>'Tiong Bahru Plaza, UC800_102005, Condenser Water Flow'</li></ul> |
| 38 | <ul><li>'Tiong Bahru Plaza, SC-10, Chiller SC, System Condenser Water Supply Temperature Setpoint'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, System Condenser Water Supply Temperature Setpoint'</li><li>'Tiong Bahru Plaza, SC-10, Chiller SC, System Condenser Water Supply Temperature Setpoint'</li></ul> |
| 18 | <ul><li>'Tiong Bahru Plaza, DDC_L4-3, Outdoor humidity (Units: %).1'</li><li>'Tiong Bahru Plaza, DDC_L4-3, Outdoor humidity (Units: %)'</li><li>'Tiong Bahru Plaza, DDC_L4-3, Outdoor humidity (Units: %).2'</li></ul> |
| 19 | <ul><li>'Tiong Bahru Plaza, DDC_L4-3, Outdoor temperature (Units: °C)'</li><li>'Tiong Bahru Plaza, DDC_L4-3, Outdoor temperature (Units: °C).1'</li><li>'Tiong Bahru Plaza, DDC_L4-3, Outdoor temperature (Units: °C).2'</li></ul> |
| 22 | <ul><li>'Tiong Bahru Plaza, DDC B1-3, Pau-B1-02-DP Sensor'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8337 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Varun1010/all-MiniLM-L6-v2-polaris-tb-new-v1")
# Run inference
preds = model("Tiong Bahru Plaza, DDC-L2-5, AHU-L2-03 trip alarm")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 5 | 8.4138 | 14 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 10 |
| 1 | 10 |
| 2 | 10 |
| 3 | 10 |
| 4 | 10 |
| 5 | 10 |
| 6 | 10 |
| 7 | 10 |
| 8 | 10 |
| 9 | 10 |
| 10 | 10 |
| 11 | 10 |
| 12 | 10 |
| 13 | 10 |
| 14 | 10 |
| 15 | 10 |
| 16 | 10 |
| 17 | 10 |
| 18 | 3 |
| 19 | 3 |
| 20 | 10 |
| 21 | 10 |
| 22 | 1 |
| 23 | 1 |
| 24 | 10 |
| 25 | 4 |
| 26 | 10 |
| 27 | 8 |
| 28 | 4 |
| 29 | 3 |
| 30 | 3 |
| 31 | 4 |
| 32 | 4 |
| 33 | 9 |
| 34 | 5 |
| 35 | 4 |
| 36 | 3 |
| 37 | 2 |
| 38 | 3 |
| 39 | 9 |
| 40 | 10 |
| 41 | 4 |
| 42 | 2 |
| 43 | 3 |
| 44 | 8 |
| 45 | 8 |
| 46 | 2 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 16)
- max_steps: 500
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
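The bullet names above correspond one-to-one to `setfit.TrainingArguments` fields, so the recorded configuration can plausibly be reconstructed as follows; this is a sketch of the logged settings, not a verified reproduction script.
```python
from setfit import TrainingArguments
from sentence_transformers.losses import (
    BatchHardTripletLossDistanceFunction,
    CosineSimilarityLoss,
)

# Values copied from the logged hyperparameters above; distance_metric and
# margin only apply to triplet-style losses and match the library defaults.
args = TrainingArguments(
    batch_size=(16, 16),              # (embedding phase, classifier phase)
    num_epochs=(1, 16),
    max_steps=500,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    distance_metric=BatchHardTripletLossDistanceFunction.cosine_distance,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```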
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0006 | 1 | 0.1516 | - |
| 0.0302 | 50 | 0.1292 | - |
| 0.0604 | 100 | 0.0796 | - |
| 0.0905 | 150 | 0.068 | - |
| 0.1207 | 200 | 0.0498 | - |
| 0.1509 | 250 | 0.06 | - |
| 0.1811 | 300 | 0.0415 | - |
| 0.2112 | 350 | 0.0422 | - |
| 0.2414 | 400 | 0.0327 | - |
| 0.2716 | 450 | 0.0247 | - |
| 0.3018 | 500 | 0.0253 | - |
| 0.3319 | 550 | 0.0192 | - |
| 0.3621 | 600 | 0.0347 | - |
| 0.3923 | 650 | 0.0166 | - |
| 0.4225 | 700 | 0.034 | - |
| 0.4526 | 750 | 0.0242 | - |
| 0.4828 | 800 | 0.031 | - |
| 0.5130 | 850 | 0.0102 | - |
| 0.5432 | 900 | 0.0145 | - |
| 0.5733 | 950 | 0.0096 | - |
| 0.6035 | 1000 | 0.0166 | - |
| 0.6337 | 1050 | 0.0098 | - |
| 0.6639 | 1100 | 0.0091 | - |
| 0.6940 | 1150 | 0.005 | - |
| 0.7242 | 1200 | 0.008 | - |
| 0.7544 | 1250 | 0.0085 | - |
| 0.7846 | 1300 | 0.0242 | - |
| 0.8147 | 1350 | 0.0049 | - |
| 0.8449 | 1400 | 0.0082 | - |
| 0.8751 | 1450 | 0.0053 | - |
| 0.9053 | 1500 | 0.0092 | - |
| 0.9354 | 1550 | 0.0086 | - |
| 0.9656 | 1600 | 0.0054 | - |
| 0.9958 | 1650 | 0.0052 | - |
| 1.0260 | 1700 | 0.0101 | - |
| 1.0561 | 1750 | 0.0184 | - |
| 1.0863 | 1800 | 0.004 | - |
| 1.1165 | 1850 | 0.0082 | - |
| 1.1467 | 1900 | 0.0188 | - |
| 1.1768 | 1950 | 0.0097 | - |
| 1.2070 | 2000 | 0.0067 | - |
| 1.2372 | 2050 | 0.004 | - |
| 1.2674 | 2100 | 0.0076 | - |
| 1.2975 | 2150 | 0.0076 | - |
| 1.3277 | 2200 | 0.0192 | - |
| 1.3579 | 2250 | 0.0088 | - |
| 1.3881 | 2300 | 0.0049 | - |
| 1.4182 | 2350 | 0.0034 | - |
| 1.4484 | 2400 | 0.0028 | - |
| 1.4786 | 2450 | 0.0031 | - |
| 1.5088 | 2500 | 0.0075 | - |
| 1.5389 | 2550 | 0.0093 | - |
| 1.5691 | 2600 | 0.0037 | - |
| 1.5993 | 2650 | 0.0151 | - |
| 1.6295 | 2700 | 0.0044 | - |
| 1.6596 | 2750 | 0.002 | - |
| 1.6898 | 2800 | 0.0027 | - |
| 1.7200 | 2850 | 0.0039 | - |
| 1.7502 | 2900 | 0.003 | - |
| 1.7803 | 2950 | 0.0101 | - |
| 1.8105 | 3000 | 0.0082 | - |
| 1.8407 | 3050 | 0.0025 | - |
| 1.8709 | 3100 | 0.004 | - |
| 1.9010 | 3150 | 0.0064 | - |
| 1.9312 | 3200 | 0.0025 | - |
| 1.9614 | 3250 | 0.0021 | - |
| 1.9916 | 3300 | 0.0061 | - |
| 2.0217 | 3350 | 0.0055 | - |
| 2.0519 | 3400 | 0.0021 | - |
| 2.0821 | 3450 | 0.0034 | - |
| 2.1123 | 3500 | 0.002 | - |
| 2.1424 | 3550 | 0.0034 | - |
| 2.1726 | 3600 | 0.0027 | - |
| 2.2028 | 3650 | 0.0021 | - |
| 2.2330 | 3700 | 0.0056 | - |
| 2.2631 | 3750 | 0.0017 | - |
| 2.2933 | 3800 | 0.0024 | - |
| 2.3235 | 3850 | 0.0021 | - |
| 2.3537 | 3900 | 0.0033 | - |
| 2.3838 | 3950 | 0.0024 | - |
| 2.4140 | 4000 | 0.0029 | - |
| 2.4442 | 4050 | 0.0022 | - |
| 2.4744 | 4100 | 0.0015 | - |
| 2.5045 | 4150 | 0.0016 | - |
| 2.5347 | 4200 | 0.0028 | - |
| 2.5649 | 4250 | 0.0024 | - |
| 2.5951 | 4300 | 0.0041 | - |
| 2.6252 | 4350 | 0.0025 | - |
| 2.6554 | 4400 | 0.0019 | - |
| 2.6856 | 4450 | 0.0014 | - |
| 2.7158 | 4500 | 0.0031 | - |
| 2.7459 | 4550 | 0.0064 | - |
| 2.7761 | 4600 | 0.0047 | - |
| 2.8063 | 4650 | 0.004 | - |
| 2.8365 | 4700 | 0.0032 | - |
| 2.8666 | 4750 | 0.0017 | - |
| 2.8968 | 4800 | 0.0017 | - |
| 2.9270 | 4850 | 0.0039 | - |
| 2.9572 | 4900 | 0.0018 | - |
| 2.9873 | 4950 | 0.0015 | - |
| 3.0175 | 5000 | 0.0015 | - |
| 3.0477 | 5050 | 0.002 | - |
| 3.0779 | 5100 | 0.0015 | - |
| 3.1080 | 5150 | 0.0034 | - |
| 3.1382 | 5200 | 0.0022 | - |
| 3.1684 | 5250 | 0.0013 | - |
| 3.1986 | 5300 | 0.0165 | - |
| 3.2287 | 5350 | 0.0011 | - |
| 3.2589 | 5400 | 0.0012 | - |
| 3.2891 | 5450 | 0.0015 | - |
| 3.3193 | 5500 | 0.0021 | - |
| 3.3494 | 5550 | 0.003 | - |
| 3.3796 | 5600 | 0.0052 | - |
| 3.4098 | 5650 | 0.0011 | - |
| 3.4400 | 5700 | 0.0012 | - |
| 3.4701 | 5750 | 0.0013 | - |
| 3.5003 | 5800 | 0.0007 | - |
| 3.5305 | 5850 | 0.0013 | - |
| 3.5607 | 5900 | 0.0058 | - |
| 3.5908 | 5950 | 0.003 | - |
| 3.6210 | 6000 | 0.0015 | - |
| 3.6512 | 6050 | 0.001 | - |
| 3.6814 | 6100 | 0.0022 | - |
| 3.7115 | 6150 | 0.0056 | - |
| 3.7417 | 6200 | 0.0029 | - |
| 3.7719 | 6250 | 0.0009 | - |
| 3.8021 | 6300 | 0.0021 | - |
| 3.8322 | 6350 | 0.0047 | - |
| 3.8624 | 6400 | 0.0026 | - |
| 3.8926 | 6450 | 0.001 | - |
| 3.9228 | 6500 | 0.0015 | - |
| 3.9529 | 6550 | 0.0012 | - |
| 3.9831 | 6600 | 0.0154 | - |
| 4.0133 | 6650 | 0.0012 | - |
| 4.0435 | 6700 | 0.0014 | - |
| 4.0736 | 6750 | 0.0016 | - |
| 4.1038 | 6800 | 0.0044 | - |
| 4.1340 | 6850 | 0.0013 | - |
| 4.1642 | 6900 | 0.003 | - |
| 4.1943 | 6950 | 0.0019 | - |
| 4.2245 | 7000 | 0.0013 | - |
| 4.2547 | 7050 | 0.0007 | - |
| 4.2849 | 7100 | 0.0019 | - |
| 4.3150 | 7150 | 0.0007 | - |
| 4.3452 | 7200 | 0.0012 | - |
| 4.3754 | 7250 | 0.0008 | - |
| 4.4056 | 7300 | 0.0009 | - |
| 4.4357 | 7350 | 0.0011 | - |
| 4.4659 | 7400 | 0.0157 | - |
| 4.4961 | 7450 | 0.0009 | - |
| 4.5263 | 7500 | 0.0009 | - |
| 4.5564 | 7550 | 0.0018 | - |
| 4.5866 | 7600 | 0.001 | - |
| 4.6168 | 7650 | 0.001 | - |
| 4.6470 | 7700 | 0.001 | - |
| 4.6771 | 7750 | 0.001 | - |
| 4.7073 | 7800 | 0.001 | - |
| 4.7375 | 7850 | 0.0018 | - |
| 4.7677 | 7900 | 0.001 | - |
| 4.7978 | 7950 | 0.0011 | - |
| 4.8280 | 8000 | 0.0011 | - |
| 4.8582 | 8050 | 0.001 | - |
| 4.8884 | 8100 | 0.0008 | - |
| 4.9185 | 8150 | 0.0009 | - |
| 4.9487 | 8200 | 0.0034 | - |
| 4.9789 | 8250 | 0.001 | - |
| 0.0020 | 1 | 0.8971 | - |
| 0.0998 | 50 | 0.3923 | - |
| 0.1996 | 100 | 0.0047 | - |
| 0.2994 | 150 | 0.0013 | - |
| 0.3992 | 200 | 0.0009 | - |
| 0.4990 | 250 | 0.0005 | - |
| 0.5988 | 300 | 0.0003 | - |
| 0.6986 | 350 | 0.0004 | - |
| 0.7984 | 400 | 0.0003 | - |
| 0.8982 | 450 | 0.0003 | - |
| 0.9980 | 500 | 0.0004 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.0
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "sentence-transformers/paraphrase-MiniLM-L3-v2", "widget": [{"text": "Tiong Bahru Plaza, DDC-L2-5, AHU-L2-03 trip alarm"}, {"text": "Tiong Bahru Plaza, DDC L4-1, PAU-L4-03 supply air temperature (Units: \u00c2\u00b0C).2"}, {"text": "Tiong Bahru Plaza, DDC-L2-5, AHU-L2-03 VSD control"}, {"text": "Tiong Bahru Plaza, VAV 19-7, Discharge Air Flow (Units: m3/h)"}, {"text": "Tiong Bahru Plaza, DDC-L1-4, PAU-L1-05 VSD control"}], "pipeline_tag": "text-classification", "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-MiniLM-L3-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8337264150943396, "name": "Accuracy"}]}]}]} | Varun1010/all-MiniLM-L6-v2-polaris-tb-new-v1 | null | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-MiniLM-L3-v2",
"model-index",
"region:us"
] | null | 2024-04-24T07:52:24+00:00 | [
"2209.11055"
] | [] | TAGS
#setfit #safetensors #bert #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-MiniLM-L3-v2 #model-index #region-us
| SetFit with sentence-transformers/paraphrase-MiniLM-L3-v2
=========================================================
This is a SetFit model that can be used for Text Classification. This SetFit model uses sentence-transformers/paraphrase-MiniLM-L3-v2 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a Sentence Transformer with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
Model Details
-------------
### Model Description
* Model Type: SetFit
* Sentence Transformer body: sentence-transformers/paraphrase-MiniLM-L3-v2
* Classification head: a LogisticRegression instance
* Maximum Sequence Length: 128 tokens
* Number of Classes: 47 classes
### Model Sources
* Repository: SetFit on GitHub
* Paper: Efficient Few-Shot Learning Without Prompts
* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts
### Model Labels
Evaluation
----------
### Metrics
Uses
----
### Direct Use for Inference
First install the SetFit library:
Then you can load this model and run inference.
Training Details
----------------
### Training Set Metrics
### Training Hyperparameters
* batch\_size: (16, 16)
* num\_epochs: (1, 16)
* max\_steps: 500
* sampling\_strategy: oversampling
* body\_learning\_rate: (2e-05, 1e-05)
* head\_learning\_rate: 0.01
* loss: CosineSimilarityLoss
* distance\_metric: cosine\_distance
* margin: 0.25
* end\_to\_end: False
* use\_amp: False
* warmup\_proportion: 0.1
* seed: 42
* eval\_max\_steps: -1
* load\_best\_model\_at\_end: False
### Training Results
### Framework Versions
* Python: 3.10.12
* SetFit: 1.0.3
* Sentence Transformers: 2.7.0
* Transformers: 4.40.0
* PyTorch: 2.2.1+cu121
* Datasets: 2.19.0
* Tokenizers: 0.19.1
### BibTeX
| [
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-MiniLM-L3-v2\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 128 tokens\n* Number of Classes: 47 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (1, 16)\n* max\\_steps: 500\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.0\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1",
"### BibTeX"
] | [
"TAGS\n#setfit #safetensors #bert #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-MiniLM-L3-v2 #model-index #region-us \n",
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-MiniLM-L3-v2\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 128 tokens\n* Number of Classes: 47 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (1, 16)\n* max\\_steps: 500\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.0\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1",
"### BibTeX"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
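Since the section above is unfilled, a generic causal-LM loading sketch is given below as a placeholder. The repository id comes from this card's metadata; nothing else (dtype, prompt format, chat template) is confirmed by the model authors.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder sketch only -- the card itself provides no usage code.
model_id = "superiort/EEVE-Korean-Instruct-10.8B-v1.0_100QA_10epochs"  # from card metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```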
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | superiort/EEVE-Korean-Instruct-10.8B-v1.0_100QA_10epochs | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:52:57+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# OpenLLaMA: An Open Reproduction of LLaMA
**TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations.
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model, trained on a different data mixture, is better than the old v1 model. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
## v2 models
model_path = 'openlm-research/open_llama_7b_v2'
## v1 models
# model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
# model_path = 'openlm-research/open_llama_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
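Equivalently, per the note above, the slow tokenizer can be requested through the `AutoTokenizer` class:
```python
from transformers import AutoTokenizer

# Same effect as LlamaTokenizer above: opt out of the fast tokenizer.
tokenizer = AutoTokenizer.from_pretrained(
    'openlm-research/open_llama_7b_v2', use_fast=False
)
```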
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
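# Excerpt of the patch inside lm_eval/models/huggingface.py (linked above);
# the fix is the added use_fast=False argument, not standalone runnable code.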
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
To use the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is not necessary to obtain the original LLaMA tokenizer and weights.
## Dataset and Training
The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange part of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | -------------- | ------------ | ------------ | ------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.34 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 |
| anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.39 | 0.35 | 0.38 | 0.40 |
| arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.39 | 0.34 | 0.37 | 0.41 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.41 | 0.37 | 0.38 | 0.44 |
| arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.73 | 0.69 | 0.72 | 0.75 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.70 | 0.65 | 0.68 | 0.70 |
| boolq/acc | 0.66 | 0.75 | 0.71 | 0.72 | 0.68 | 0.71 | 0.75 |
| hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.56 | 0.49 | 0.53 | 0.56 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.75 | 0.67 | 0.72 | 0.76 |
| openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.30 | 0.27 | 0.30 | 0.31 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.41 | 0.40 | 0.40 | 0.43 |
| piqa/acc | 0.75 | 0.78 | 0.79 | 0.79 | 0.75 | 0.76 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.80 | 0.76 | 0.77 | 0.79 |
| record/em | 0.88 | 0.91 | 0.92 | 0.89 | 0.88 | 0.89 | 0.91 |
| record/f1 | 0.89 | 0.91 | 0.92 | 0.89 | 0.89 | 0.90 | 0.91 |
| rte/acc | 0.54 | 0.56 | 0.69 | 0.57 | 0.58 | 0.60 | 0.64 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.23 | 0.22 | 0.23 | 0.25 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.38 |
| wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 |
| winogrande/acc | 0.64 | 0.68 | 0.70 | 0.66 | 0.62 | 0.67 | 0.70 |
| Average | 0.52 | 0.55 | 0.57 | 0.56 | 0.53 | 0.55 | 0.57 |
We removed the tasks CB and WSC from our benchmark, as our model scores suspiciously high on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, and Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B v1 model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
| {"license": "apache-2.0", "library_name": "transformers", "datasets": ["tiiuae/falcon-refinedweb", "bigcode/starcoderdata", "togethercomputer/RedPajama-Data-1T"]} | titanbot/ct2-int8-open-llama-7b-v2 | null | [
"transformers",
"llama",
"text-generation",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T07:53:27+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #dataset-tiiuae/falcon-refinedweb #dataset-bigcode/starcoderdata #dataset-togethercomputer/RedPajama-Data-1T #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| OpenLLaMA: An Open Reproduction of LLaMA
========================================
TL;DR: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations.
In this repo, we present a permissively licensed open source reproduction of Meta AI's LLaMA large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model, trained on a different data mixture, is better than the old v1 model. Please see the project homepage of OpenLLaMA for more details.
Weights Release, License and Usage
----------------------------------
We release the weights in two formats: an EasyLM format to be used with our EasyLM framework, and a PyTorch format to be used with the Hugging Face transformers library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations. This can be achieved by directly using the 'LlamaTokenizer' class, or passing in the 'use\_fast=False' option for the 'AutoTokenizer' class. See the following example for usage.
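A minimal loading sketch (the checkpoint id below is illustrative — substitute the checkpoint you actually use; note the slow `LlamaTokenizer` to sidestep the fast-tokenizer issue described above):

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

# Illustrative checkpoint id; substitute the OpenLLaMA checkpoint you use.
model_path = "openlm-research/open_llama_7b_v2"

# Use the slow tokenizer explicitly to avoid the fast-tokenizer issue above.
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Q: What is the largest animal?\nA:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids=input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0]))
```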
For more advanced usage, please follow the transformers LLaMA documentation.
### Evaluating with LM-Eval-Harness
The model can be evaluated with lm-eval-harness. However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in 'use\_fast=False' to this part of lm-eval-harness, as shown in the example below:
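A hedged sketch of the change — the exact file and class inside lm-eval-harness vary by version; the point is simply to request the slow tokenizer wherever the harness constructs it:

```python
import transformers

# Illustrative model id; substitute your OpenLLaMA checkpoint.
MODEL = "openlm-research/open_llama_7b_v2"

# Wherever lm-eval-harness constructs its Hugging Face tokenizer (the exact
# location varies by version), pass use_fast=False:
tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL, use_fast=False)
```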
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the LLaMA documentation of EasyLM. Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights.
Dataset and Training
--------------------
The v1 models are trained on the RedPajama dataset. The v2 models are trained on a mixture of the Falcon refined-web dataset, the StarCoder dataset and the wikipedia, arxiv, book and stackexchange part of the RedPajama dataset. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using EasyLM, a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and fully sharded data parallelism (also known as ZeRO stage 3) to balance the training throughput and memory usage. Overall, we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
Evaluation
----------
We evaluated OpenLLaMA on a wide range of tasks using lm-evaluation-harness. The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in this issue of lm-evaluation-harness. Additionally, we present the results of GPT-J, a 6B parameter model trained on the Pile dataset by EleutherAI.
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
Contact
-------
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
Xinyang Geng\* and Hao Liu\* from Berkeley AI Research.
\*Equal Contribution
Acknowledgment
--------------
We thank the Google TPU Research Cloud program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, and Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B v1 model is trained in collaboration with Stability AI, and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
Reference
---------
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
| [
"### Loading the Weights with Hugging Face Transformers\n\n\nPreview checkpoints can be directly loaded from Hugging Face Hub. Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations. This can be achieved by directly using the 'LlamaTokenizer' class, or passing in the 'use\\_fast=False' option for the 'AutoTokenizer' class. See the following example for usage.\n\n\nFor more advanced usage, please follow the transformers LLaMA documentation.",
"### Evaluating with LM-Eval-Harness\n\n\nThe model can be evaluated with lm-eval-harness. However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in 'use\\_fast=False' to this part of lm-eval-harness, as shown in the example below:",
"### Loading the Weights with EasyLM\n\n\nFor using the weights in our EasyLM framework, please refer to the LLaMA documentation of EasyLM. Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights.\n\n\nDataset and Training\n--------------------\n\n\nThe v1 models are trained on the RedPajama dataset. The v2 models are trained on a mixture of the Falcon refined-web dataset, the StarCoder dataset and the wikipedia, arxiv, book and stackexchange part of the RedPajama dataset. We follow the exactly same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.\n\n\nWe train the models on cloud TPU-v4s using EasyLM, a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and fully sharded data parallelism (also know as ZeRO stage 3) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.\n\n\nEvaluation\n----------\n\n\nWe evaluated OpenLLaMA on a wide range of tasks using lm-evaluation-harness. The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in this issue of lm-evaluation-harness. Additionally, we present the results of GPT-J, a 6B parameter model trained on the Pile dataset by EleutherAI.\n\n\nThe original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.\n\n\n\nWe removed the task CB and WSC from our benchmark, as our model performs suspiciously high on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set.\n\n\nContact\n-------\n\n\nWe would love to get feedback from the community. If you have any questions, please open an issue or contact us.\n\n\nOpenLLaMA is developed by:\nXinyang Geng\\* and Hao Liu\\* from Berkeley AI Research.\n\\*Equal Contribution\n\n\nAcknowledgment\n--------------\n\n\nWe thank the Google TPU Research Cloud program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organizing compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimizing our training throughput. We’d also want to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.\n\n\nThe OpenLLaMA 13B v1 model is trained in collaboration with Stability AI, and we thank Stability AI for providing the computation resources. 
We’d like to especially thank David Ha and Shivanshu Purohit for the coordinating the logistics and providing engineering support.\n\n\nReference\n---------\n\n\nIf you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:"
] | [
"TAGS\n#transformers #llama #text-generation #dataset-tiiuae/falcon-refinedweb #dataset-bigcode/starcoderdata #dataset-togethercomputer/RedPajama-Data-1T #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Loading the Weights with Hugging Face Transformers\n\n\nPreview checkpoints can be directly loaded from Hugging Face Hub. Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations. This can be achieved by directly using the 'LlamaTokenizer' class, or passing in the 'use\\_fast=False' option for the 'AutoTokenizer' class. See the following example for usage.\n\n\nFor more advanced usage, please follow the transformers LLaMA documentation.",
"### Evaluating with LM-Eval-Harness\n\n\nThe model can be evaluated with lm-eval-harness. However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in 'use\\_fast=False' to this part of lm-eval-harness, as shown in the example below:",
"### Loading the Weights with EasyLM\n\n\nFor using the weights in our EasyLM framework, please refer to the LLaMA documentation of EasyLM. Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights.\n\n\nDataset and Training\n--------------------\n\n\nThe v1 models are trained on the RedPajama dataset. The v2 models are trained on a mixture of the Falcon refined-web dataset, the StarCoder dataset and the wikipedia, arxiv, book and stackexchange part of the RedPajama dataset. We follow the exactly same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.\n\n\nWe train the models on cloud TPU-v4s using EasyLM, a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and fully sharded data parallelism (also know as ZeRO stage 3) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.\n\n\nEvaluation\n----------\n\n\nWe evaluated OpenLLaMA on a wide range of tasks using lm-evaluation-harness. The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in this issue of lm-evaluation-harness. Additionally, we present the results of GPT-J, a 6B parameter model trained on the Pile dataset by EleutherAI.\n\n\nThe original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.\n\n\n\nWe removed the task CB and WSC from our benchmark, as our model performs suspiciously high on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set.\n\n\nContact\n-------\n\n\nWe would love to get feedback from the community. If you have any questions, please open an issue or contact us.\n\n\nOpenLLaMA is developed by:\nXinyang Geng\\* and Hao Liu\\* from Berkeley AI Research.\n\\*Equal Contribution\n\n\nAcknowledgment\n--------------\n\n\nWe thank the Google TPU Research Cloud program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organizing compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimizing our training throughput. We’d also want to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.\n\n\nThe OpenLLaMA 13B v1 model is trained in collaboration with Stability AI, and we thank Stability AI for providing the computation resources. 
We’d like to especially thank David Ha and Shivanshu Purohit for the coordinating the logistics and providing engineering support.\n\n\nReference\n---------\n\n\nIf you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:"
] |
null | null | Based on Meta-Llama-3-8b-Instruct, and is governed by Meta Llama 3 License agreement:
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
WARNING: There have been reports that GGUF files made using llama.cpp might have tokenization issues and be more prone to making mistakes. Until this is fixed, we recommend using the AWQ model instead if you need a quantized version.
We don't yet know exactly how well this model performs on benchmarks, since we have not benchmarked it, but we think real prompts and usage are more telling anyway.
From our testing this model is:
- Less Refusals
- More Uncensored
- Follows requests better
- Can reply in requested formats better without adding unnecessary information
We are happy for anyone to try it out and give some feedback.
You can also try this model on our API at https://www.awanllm.com/
Training:
- Trained at 2048 sequence length, while the base model uses 8192. From testing, it still handles the full 8192 context just fine.
- Trained on a modified and improved version of Cognitive Computations' (Eric Hartford's) Dolphin dataset. https://huggingface.co/datasets/cognitivecomputations/dolphin
- Training took around 2 days on 2x RTX 3090 on our own machine, using 4-bit loading and QLoRA (rank 64, alpha 128), resulting in ~2% trainable weights.
The goal for this model is to have the model less-censored and great at general tasks like the previous dolphin based models by Eric Hartford.
We started training this BEFORE they launched their own full-weight-trained Llama-3-8B-Dolphin-2.9 with their own curated datasets and the newer "Dolphin 2.9" dataset, but we think this model is still a unique take on Llama 3 8B Instruct and the dolphin dataset.
https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b
The difference from their Dolphin 2.9 model is that we train this one using Meta's new Llama 3 instruct format rather than the ChatML format that Dolphin models are usually trained on.
This is because we think the model performs better when using the format it was originally trained on.
Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
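For reference, here is a hedged sketch of producing this prompt format programmatically with the Transformers chat-template API, assuming the repo ships Meta's Llama 3 chat template (the repo id used is the FP16 repo listed below):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about dolphins."},
]

# Renders the <|start_header_id|>/<|eot_id|> structure shown above and
# appends the assistant header so generation can begin.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```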
Quants:
AWQ: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-AWQ
GGUF: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF
FP16: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin
Exllamav2:
4bpw: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-v0.1-exl2-h8-4bpw-exl2
8bpw: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-v0.1-exl2-h8-8bpw-exl2
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Axolotl Config:
```
base_model: Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
train_on_inputs: false
group_by_length: false
load_in_8bit: false
load_in_4bit: true
strict: false
sequence_len: 2048
bf16: true
fp16: false
tf32: false
flash_attention: true
# Data
datasets:
- path: flan1m-universal-uncensored-system-2048.jsonl
type:
system_prompt: ""
system_format: "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
field_system: system
field_instruction: input
field_output: output
format: "{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
no_input_format: "{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
warmup_steps: 10
dataset_prepared_path: ./last_run_prepared
# Iterations
num_epochs: 1
saves_per_epoch: 4
# Evaluation
val_set_size: 0.01
eval_table_size:
eval_table_max_new_tokens:
eval_sample_packing: false
evals_per_epoch: 4
# LoRA
output_dir: ./qlora-out
adapter: qlora
lora_model_dir:
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
save_safetensors: true
# Sampling
sample_packing: true
pad_to_sequence_len: true
# Batching
gradient_accumulation_steps: 32
micro_batch_size: 4
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
# Optimizer
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002
# Misc
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
debug:
deepspeed: zero3_bf16.json
weight_decay: 0.1
special_tokens:
pad_token: <|end_of_text|>
```
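If you want to reproduce a run with a config like this, Axolotl's usual entry point looks roughly like `accelerate launch -m axolotl.cli.train your_config.yaml` — shown here only as a rough sketch, since the exact command and flags vary by Axolotl version.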
| {"license": "apache-2.0"} | AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T07:54:16+00:00 | [] | [] | TAGS
#gguf #license-apache-2.0 #region-us
| Based on Meta-Llama-3-8b-Instruct, and is governed by Meta Llama 3 License agreement:
URL
WARNING: There have been reports that GGUF files made using URL might have tokenization issues and be more prone to making mistakes. Until this is fixed, we recommend using the AWQ model instead if you need a quantized version.
We don't yet know exactly how well this model performs on benchmarks, since we have not benchmarked it, but we think real prompts and usage are more telling anyway.
From our testing this model is:
- Less Refusals
- More Uncensored
- Follows requests better
- Can reply in requested formats better without adding unnecessary information
We are happy for anyone to try it out and give some feedback.
You can also try this model on our API at URL
Training:
- Trained at 2048 sequence length, while the base model uses 8192. From testing, it still handles the full 8192 context just fine.
- Trained on a modified and improved version of Cognitive Computations' (Eric Hartford's) Dolphin dataset. URL
- Training took around 2 days on 2x RTX 3090 on our own machine, using 4-bit loading and QLoRA (rank 64, alpha 128), resulting in ~2% trainable weights.
The goal for this model is to have the model less-censored and great at general tasks like the previous dolphin based models by Eric Hartford.
We started training this BEFORE they launched their own full-weight-trained Llama-3-8B-Dolphin-2.9 with their own curated datasets and the newer "Dolphin 2.9" dataset, but we think this model is still a unique take on Llama 3 8B Instruct and the dolphin dataset.
URL
The difference from their Dolphin 2.9 model is that we train this one using Meta's new Llama 3 instruct format rather than the ChatML format that Dolphin models are usually trained on.
This is because we think the model performs better when using the format it was originally trained on.
Instruct format:
Quants:
AWQ: URL
GGUF: URL
FP16: URL
Exllamav2:
4bpw: URL
8bpw: URL
<img src="URL alt="Built with Axolotl" width="200" height="32"/>
Axolotl Config:
| [] | [
"TAGS\n#gguf #license-apache-2.0 #region-us \n"
] |
text-generation | transformers | # [MaziyarPanahi/Poppy_Porpoise-v0.6-L3-8B-GGUF](https://huggingface.co/MaziyarPanahi/Poppy_Porpoise-v0.6-L3-8B-GGUF)
- Model creator: [ChaoticNeutrals](https://huggingface.co/ChaoticNeutrals)
- Original model: [ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B)
## Description
[MaziyarPanahi/Poppy_Porpoise-v0.6-L3-8B-GGUF](https://huggingface.co/MaziyarPanahi/Poppy_Porpoise-v0.6-L3-8B-GGUF) contains GGUF format model files for [ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server (a minimal usage sketch follows this list).
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
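As a quick illustration of consuming GGUF files from Python, here is a minimal, hedged sketch with llama-cpp-python — the local filename is illustrative; pick one of the quantized files from this repo:

```python
from llama_cpp import Llama

# Illustrative local filename; download one of the GGUF files from this repo.
llm = Llama(model_path="Poppy_Porpoise-v0.6-L3-8B.Q4_K_M.gguf", n_ctx=4096)

out = llm("Q: Name three marine mammals.\nA:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```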
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. | {"tags": ["quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "text-generation"], "model_name": "Poppy_Porpoise-v0.6-L3-8B-GGUF", "base_model": "ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B", "inference": false, "model_creator": "ChaoticNeutrals", "pipeline_tag": "text-generation", "quantized_by": "MaziyarPanahi"} | MaziyarPanahi/Poppy_Porpoise-v0.6-L3-8B-GGUF | null | [
"transformers",
"gguf",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B"
] | null | 2024-04-24T07:56:10+00:00 | [] | [] | TAGS
#transformers #gguf #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #GGUF #safetensors #llama #text-generation #merge #mergekit #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #base_model-ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B
| # MaziyarPanahi/Poppy_Porpoise-v0.6-L3-8B-GGUF
- Model creator: ChaoticNeutrals
- Original model: ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B
## Description
MaziyarPanahi/Poppy_Porpoise-v0.6-L3-8B-GGUF contains GGUF format model files for ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B.
### About GGUF
GGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* URL. The source project for GGUF. Offers a CLI and a server option.
* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible. | [
"# MaziyarPanahi/Poppy_Porpoise-v0.6-L3-8B-GGUF\n- Model creator: ChaoticNeutrals\n- Original model: ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B",
"## Description\nMaziyarPanahi/Poppy_Porpoise-v0.6-L3-8B-GGUF contains GGUF format model files for ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B.",
"### About GGUF\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.",
"## Special thanks\n\n Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible."
] | [
"TAGS\n#transformers #gguf #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #GGUF #safetensors #llama #text-generation #merge #mergekit #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #base_model-ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B \n",
"# MaziyarPanahi/Poppy_Porpoise-v0.6-L3-8B-GGUF\n- Model creator: ChaoticNeutrals\n- Original model: ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B",
"## Description\nMaziyarPanahi/Poppy_Porpoise-v0.6-L3-8B-GGUF contains GGUF format model files for ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B.",
"### About GGUF\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.",
"## Special thanks\n\n Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible."
] |
image-text-to-text | xtuner |
---
**Notice:** This repository hosts the [`xtuner/llava-llama-3-8b-v1_1-hf`](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-hf) model, which has been specifically modified to address compatibility issues with the pure `transformers` library. The original model configuration and index files have been manually adjusted to ensure seamless integration and functionality with the `transformers` setup. These adjustments have not altered the model weights.
---
## QuickStart
Running with pure `transformers` library
```python
from transformers import (
LlavaProcessor,
LlavaForConditionalGeneration,
)
from PIL import Image
import requests
MODEL_NAME = "Seungyoun/llava-llama-3-8b-hf"
processor = LlavaProcessor.from_pretrained(MODEL_NAME)
processor.tokenizer.add_tokens(
    ["<|image|>", "<pad>"], special_tokens=True
)  # register the <|image|> and <pad> special tokens
model = LlavaForConditionalGeneration.from_pretrained(MODEL_NAME).to("cuda:0")
model.resize_token_embeddings(
len(processor.tokenizer)
) # resize embeddings for new tokens
# prepare image and text prompt, using the appropriate prompt template
url = "https://upload.wikimedia.org/wikipedia/commons/1/18/Kochendes_wasser02.jpg"
image = Image.open(requests.get(url, stream=True).raw)
template = """<|start_header_id|>system<|end_header_id|>{system_prompt}<|eot_id|>
<|start_header_id|>user<|end_header_id|>{user_msg_1}<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>"""
terminators = [
processor.tokenizer.eos_token_id,
processor.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
prompt = template.format(
system_prompt="As a vision-llm, your task is to analyze and describe the contents of the image presented to you. Examine the photograph closely and provide a comprehensive, detailed caption. You should identify and describe the various food items and their arrangement, as well as any discernible textures, colors, and specific features of the containers they are in. Highlight the variety and how these contribute to the overall visual appeal of the meal. Your description should help someone who cannot see the image to visualize its contents accurately.",
user_msg_1="<|image|>\nGive me detailed description of the image.",
)
inputs = processor(prompt, image, return_tensors="pt").to("cuda:0")
# autoregressively complete prompt
output = model.generate(**inputs, max_new_tokens=1024, eos_token_id=terminators)
print(processor.decode(output[0], skip_special_tokens=False))
# The image captures a moment in a kitchen. The main focus is a white electric kettle, which is plugged in and resting on a black stovetop. The stovetop has four burners, although only one is occupied by the kettle. The background is blurred, drawing attention to the kettle and stovetop. The image does not contain any text or additional objects. The relative position of the objects is such that the kettle is on the stovetop, and the background is blurred.
```
---
## Model
llava-llama-3-8b-v1_1-hf is a LLaVA model fine-tuned from [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) with [ShareGPT4V-PT](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) and [InternVL-SFT](https://github.com/OpenGVLab/InternVL/tree/main/internvl_chat#prepare-training-datasets) by [XTuner](https://github.com/InternLM/xtuner).
## Details
| Model | Visual Encoder | Projector | Resolution | Pretraining Strategy | Fine-tuning Strategy | Pretrain Dataset | Fine-tune Dataset |
| :-------------------- | ------------------: | --------: | ---------: | ---------------------: | ------------------------: | ------------------------: | -----------------------: |
| LLaVA-v1.5-7B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, Frozen ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) |
| LLaVA-Llama-3-8B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) |
| LLaVA-Llama-3-8B-v1.1 | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | ShareGPT4V-PT (1246K) | InternVL-SFT (1268K) |
## Results
<div align="center">
<img src="https://github.com/InternLM/xtuner/assets/36994684/a157638c-3500-44ed-bfab-d8d8249f91bb" alt="Image" width="500" />
</div>
| Model | MMBench Test (EN) | MMBench Test (CN) | CCBench Dev | MMMU Val | SEED-IMG | AI2D Test | ScienceQA Test | HallusionBench aAcc | POPE | GQA | TextVQA | MME | MMStar |
| :-------------------- | :---------------: | :---------------: | :---------: | :-------: | :------: | :-------: | :------------: | :-----------------: | :--: | :--: | :-----: | :------: | :----: |
| LLaVA-v1.5-7B | 66.5 | 59.0 | 27.5 | 35.3 | 60.5 | 54.8 | 70.4 | 44.9 | 85.9 | 62.0 | 58.2 | 1511/348 | 30.3 |
| LLaVA-Llama-3-8B | 68.9 | 61.6 | 30.4 | 36.8 | 69.8 | 60.9 | 73.3 | 47.3 | 87.2 | 63.5 | 58.0 | 1506/295 | 38.2 |
| LLaVA-Llama-3-8B-v1.1 | 72.3 | 66.4 | 31.6 | 36.8 | 70.1 | 70.0 | 72.9 | 47.7 | 86.4 | 62.6 | 59.0 | 1469/349 | 45.1 |
## Citation
```bibtex
@misc{2023xtuner,
title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
author={XTuner Contributors},
howpublished = {\url{https://github.com/InternLM/xtuner}},
year={2023}
}
``` | {"license": "llama3", "library_name": "xtuner", "datasets": ["Lin-Chen/ShareGPT4V"], "pipeline_tag": "image-text-to-text"} | Seungyoun/llava-llama-3-8b-hf | null | [
"xtuner",
"safetensors",
"llava",
"image-text-to-text",
"dataset:Lin-Chen/ShareGPT4V",
"license:llama3",
"region:us"
] | null | 2024-04-24T07:58:48+00:00 | [] | [] | TAGS
#xtuner #safetensors #llava #image-text-to-text #dataset-Lin-Chen/ShareGPT4V #license-llama3 #region-us
|
---
Notice: This repository hosts the 'xtuner/llava-llama-3-8b-v1\_1-hf' model, which has been specifically modified to address compatibility issues with the pure 'transformers' library. The original model configuration and index files have been manually adjusted to ensure seamless integration and functionality with the 'transformers' setup. These adjustments have not altered the model weights.
---
QuickStart
----------
Running with pure 'transformers' library
---
Model
-----
llava-llama-3-8b-v1\_1-hf is a LLaVA model fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct and CLIP-ViT-Large-patch14-336 with ShareGPT4V-PT and InternVL-SFT by XTuner.
Details
-------
Results
-------

| [] | [
"TAGS\n#xtuner #safetensors #llava #image-text-to-text #dataset-Lin-Chen/ShareGPT4V #license-llama3 #region-us \n"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_test_p2
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
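The hyperparameters above map onto the Hugging Face Trainer roughly as follows — a hedged sketch only, since the card omits the dataset and model setup, and `output_dir` is an illustrative assumption:

```python
from transformers import TrainingArguments

# Hedged reconstruction of the configuration above; output_dir and any
# settings not listed in the card are illustrative assumptions. The Adam
# betas/epsilon shown above are the Trainer's defaults.
args = TrainingArguments(
    output_dir="bart_test_p2",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```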
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.018 | 0.18 | 500 | 0.0096 |
| 0.0189 | 0.35 | 1000 | 0.0097 |
| 0.0184 | 0.53 | 1500 | 0.0098 |
| 0.0167 | 0.7 | 2000 | 0.0094 |
| 0.0162 | 0.88 | 2500 | 0.0092 |
| 0.0162 | 1.05 | 3000 | 0.0086 |
| 0.0124 | 1.23 | 3500 | 0.0086 |
| 0.0127 | 1.4 | 4000 | 0.0084 |
| 0.0129 | 1.58 | 4500 | 0.0083 |
| 0.0123 | 1.75 | 5000 | 0.0080 |
| 0.0123 | 1.93 | 5500 | 0.0081 |
| 0.0104 | 2.1 | 6000 | 0.0079 |
| 0.0094 | 2.28 | 6500 | 0.0079 |
| 0.0103 | 2.45 | 7000 | 0.0077 |
| 0.01 | 2.63 | 7500 | 0.0077 |
| 0.0098 | 2.8 | 8000 | 0.0077 |
| 0.0095 | 2.98 | 8500 | 0.0076 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0.dev20230621+cu117
- Datasets 2.17.0
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "bart_test_p2", "results": []}]} | ypl/bart_test_p2 | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:00:23+00:00 | [] | [] | TAGS
#transformers #safetensors #bart #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| bart\_test\_p2
==============
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0076
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.37.0.dev0
* Pytorch 2.1.0.dev20230621+cu117
* Datasets 2.17.0
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.0.dev0\n* Pytorch 2.1.0.dev20230621+cu117\n* Datasets 2.17.0\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.0.dev0\n* Pytorch 2.1.0.dev20230621+cu117\n* Datasets 2.17.0\n* Tokenizers 0.15.0"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-gpo-renew2-b0.001-extra-v2-i1
This model is a fine-tuned version of [DUAL-GPO/phi-2-gpo-renew2-b0.001-i0](https://huggingface.co/DUAL-GPO/phi-2-gpo-renew2-b0.001-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0388
- Rewards/chosen: 0.0266
- Rewards/rejected: -0.0126
- Rewards/accuracies: 0.6070
- Rewards/margins: 0.0392
- Logps/rejected: -379.8497
- Logps/chosen: -369.7509
- Logits/rejected: -0.9196
- Logits/chosen: -0.9539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
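For orientation, a hedged sketch of how a QLoRA preference-optimization run like this is typically wired up with TRL's DPO trainer. The actual run uses the authors' GPO recipe on top of an i0 adapter base, so this is only the generic pattern; the LoRA ranks, `output_dir`, and dataset preprocessing are illustrative assumptions, and the `DPOTrainer` signature shown matches TRL versions of this era:

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "microsoft/phi-2"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# Preference data; mapping each row to plain-text prompt/chosen/rejected
# fields (required by DPOTrainer) is omitted here for brevity.
train_ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)  # illustrative ranks

args = TrainingArguments(
    output_dir="phi-2-gpo-sketch",  # illustrative
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)

trainer = DPOTrainer(
    model,
    ref_model=None,          # with a PEFT adapter, the frozen base model serves as reference
    args=args,
    beta=0.001,              # matches the "b0.001" in the model name
    train_dataset=train_ds,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```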
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.098 | 0.06 | 100 | 0.0533 | -0.0029 | -0.0036 | 0.4980 | 0.0007 | -370.8433 | -399.2503 | -0.7225 | -0.8171 |
| 0.094 | 0.13 | 200 | 0.0491 | -0.0390 | -0.0525 | 0.5525 | 0.0135 | -419.6949 | -435.2693 | -1.0754 | -1.1388 |
| 0.0898 | 0.19 | 300 | 0.0452 | -0.0184 | -0.0403 | 0.5780 | 0.0218 | -407.5088 | -414.7480 | -1.0291 | -1.0858 |
| 0.0731 | 0.26 | 400 | 0.0430 | -0.0069 | -0.0331 | 0.5970 | 0.0262 | -400.2979 | -403.1916 | -0.9864 | -1.0412 |
| 0.0787 | 0.32 | 500 | 0.0422 | -0.0122 | -0.0473 | 0.6070 | 0.0351 | -414.4887 | -408.4566 | -1.0587 | -1.0975 |
| 0.0742 | 0.38 | 600 | 0.0406 | 0.0135 | -0.0175 | 0.6085 | 0.0309 | -384.7105 | -382.8363 | -0.9872 | -1.0246 |
| 0.0635 | 0.45 | 700 | 0.0401 | 0.0166 | -0.0188 | 0.6095 | 0.0354 | -386.0258 | -379.6696 | -0.9903 | -1.0225 |
| 0.0881 | 0.51 | 800 | 0.0395 | 0.0250 | -0.0102 | 0.6085 | 0.0352 | -377.4323 | -371.2672 | -0.9658 | -0.9975 |
| 0.0753 | 0.58 | 900 | 0.0393 | 0.0304 | -0.0046 | 0.5990 | 0.0350 | -371.7872 | -365.8699 | -0.9026 | -0.9456 |
| 0.0922 | 0.64 | 1000 | 0.0390 | 0.0286 | -0.0075 | 0.5990 | 0.0361 | -374.7669 | -367.7319 | -0.8801 | -0.9184 |
| 0.0703 | 0.7 | 1100 | 0.0389 | 0.0227 | -0.0161 | 0.6000 | 0.0387 | -383.3026 | -373.6226 | -0.9300 | -0.9602 |
| 0.0746 | 0.77 | 1200 | 0.0388 | 0.0226 | -0.0179 | 0.6050 | 0.0405 | -385.1601 | -373.7153 | -0.8944 | -0.9306 |
| 0.0925 | 0.83 | 1300 | 0.0387 | 0.0263 | -0.0131 | 0.6030 | 0.0393 | -380.3072 | -370.0340 | -0.9171 | -0.9494 |
| 0.0863 | 0.9 | 1400 | 0.0387 | 0.0269 | -0.0123 | 0.6055 | 0.0392 | -379.5608 | -369.4450 | -0.9121 | -0.9447 |
| 0.0904 | 0.96 | 1500 | 0.0386 | 0.0268 | -0.0124 | 0.6045 | 0.0392 | -379.6000 | -369.4944 | -0.9203 | -0.9536 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi-2-gpo-renew2-b0.001-extra-v2-i1", "results": []}]} | DUAL-GPO-2/phi-2-gpo-renew2-b0.001-extra-v2-i1 | null | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-24T08:00:47+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #phi #alignment-handbook #generated_from_trainer #trl #dpo #custom_code #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-microsoft/phi-2 #license-mit #region-us
| phi-2-gpo-renew2-b0.001-extra-v2-i1
===================================
This model is a fine-tuned version of DUAL-GPO/phi-2-gpo-renew2-b0.001-i0 on the HuggingFaceH4/ultrafeedback\_binarized dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0388
* Rewards/chosen: 0.0266
* Rewards/rejected: -0.0126
* Rewards/accuracies: 0.6070
* Rewards/margins: 0.0392
* Logps/rejected: -379.8497
* Logps/chosen: -369.7509
* Logits/rejected: -0.9196
* Logits/chosen: -0.9539
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-06
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.36.2
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #phi #alignment-handbook #generated_from_trainer #trl #dpo #custom_code #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-microsoft/phi-2 #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | DBangshu/GPT2 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:05:56+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# solar_detection_microsoft_resnet
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
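The card does not yet include a usage snippet; below is a minimal inference sketch with the 🤗 Transformers `pipeline` API (the image path is a placeholder, not a file shipped with this repo):

```python
from transformers import pipeline

# DETR-based detector fine-tuned in this repo; "solar_panels.jpg" is a placeholder path.
detector = pipeline("object-detection",
                    model="michalszy888/solar_detection_microsoft_resnet")
for det in detector("solar_panels.jpg"):
    print(det["label"], round(det["score"], 3), det["box"])
```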
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
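For readers who want to reproduce this setup, the list above maps onto `TrainingArguments` roughly as follows (a sketch under stated assumptions, not the exact training script; `output_dir` is an assumption):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; all other values are library
# defaults, which already match the Adam betas/epsilon shown in the list.
args = TrainingArguments(
    output_dir="solar_detection_microsoft_resnet",  # assumption
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```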
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "solar_detection_microsoft_resnet", "results": []}]} | michalszy888/solar_detection_microsoft_resnet | null | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:07:46+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us
|
# solar_detection_microsoft_resnet
This model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# solar_detection_microsoft_resnet\n\nThis model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# solar_detection_microsoft_resnet\n\nThis model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-4
This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
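No usage example is provided; assuming this checkpoint is an IMDB-style sentiment classifier (per its name), a minimal sketch looks like this (the label names it returns are unverified):

```python
from transformers import pipeline

# Sequence-classification head on a Pythia-31m backbone.
clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-4",
)
print(clf("A surprisingly tense and well-acted thriller."))
```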
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-4", "results": []}]} | AlignmentResearch/robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-4 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-31m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:12:53+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-31m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-4
This model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-31m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vistral-7B_finetuned_A100_April24th
This model is a fine-tuned version of [Viet-Mistral/Vistral-7B-Chat](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1997
## Model description
More information needed
## Intended uses & limitations
More information needed
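Since this repository is a PEFT checkpoint, loading it presumably means attaching the adapter to the base model named above; a minimal sketch, assuming a LoRA-style adapter:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Viet-Mistral/Vistral-7B-Chat")
model = PeftModel.from_pretrained(base, "Kudod/vistral-7B_finetuned_A100_April24th")
tokenizer = AutoTokenizer.from_pretrained("Viet-Mistral/Vistral-7B-Chat")
```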
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.7221 | 1.0 | 79685 | 0.5586 |
| 0.467 | 2.0 | 159370 | 0.3150 |
| 0.3586 | 3.0 | 239055 | 0.2440 |
| 0.2954 | 4.0 | 318740 | 0.2116 |
| 0.2556 | 5.0 | 398425 | 0.1997 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "afl-3.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "Viet-Mistral/Vistral-7B-Chat", "model-index": [{"name": "vistral-7B_finetuned_A100_April24th", "results": []}]} | Kudod/vistral-7B_finetuned_A100_April24th | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:Viet-Mistral/Vistral-7B-Chat",
"license:afl-3.0",
"region:us"
] | null | 2024-04-24T08:13:00+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-Viet-Mistral/Vistral-7B-Chat #license-afl-3.0 #region-us
| vistral-7B\_finetuned\_A100\_April24th
======================================
This model is a fine-tuned version of Viet-Mistral/Vistral-7B-Chat on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1997
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 4e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.39.3
* Pytorch 2.1.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-Viet-Mistral/Vistral-7B-Chat #license-afl-3.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Maxnotmarx/test_model1
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1436
- Validation Loss: 1.1968
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
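No usage example is given; a minimal question-answering sketch (the question/context pair below is illustrative only):

```python
from transformers import pipeline

# framework="tf" because this checkpoint was trained and saved with TensorFlow/Keras.
qa = pipeline("question-answering", model="Maxnotmarx/test_model1", framework="tf")
print(qa(question="Which base model was fine-tuned?",
         context="This model is a fine-tuned version of distilbert-base-uncased."))
```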
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4335, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 43, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5834 | 1.1968 | 0 |
| 1.1431 | 1.1968 | 1 |
| 1.1436 | 1.1968 | 2 |
### Framework versions
- Transformers 4.40.0
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "Maxnotmarx/test_model1", "results": []}]} | Maxnotmarx/test_model1 | null | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:14:09+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
| Maxnotmarx/test\_model1
=======================
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 1.1436
* Validation Loss: 1.1968
* Epoch: 2
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'transformers.optimization\_tf', 'class\_name': 'WarmUp', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_schedule\_fn': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 4335, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'warmup\_steps': 43, 'power': 1.0, 'name': None}, 'registered\_name': 'WarmUp'}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.40.0
* TensorFlow 2.15.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'transformers.optimization\\_tf', 'class\\_name': 'WarmUp', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_schedule\\_fn': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 4335, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'warmup\\_steps': 43, 'power': 1.0, 'name': None}, 'registered\\_name': 'WarmUp'}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'transformers.optimization\\_tf', 'class\\_name': 'WarmUp', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_schedule\\_fn': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 4335, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'warmup\\_steps': 43, 'power': 1.0, 'name': None}, 'registered\\_name': 'WarmUp'}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-70m_mz-130_PasswordMatch_n-its-10-seed-0
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
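No usage example is given; a minimal sketch with the raw classification head (the PasswordMatch input format is unverified, so the example string is illustrative only):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "AlignmentResearch/robust_llm_pythia-70m_mz-130_PasswordMatch_n-its-10-seed-0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Illustrative input; the exact PasswordMatch prompt format is not documented here.
inputs = tokenizer("System password: hunter2. User input: hunter2.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)
```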
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_mz-130_PasswordMatch_n-its-10-seed-0", "results": []}]} | AlignmentResearch/robust_llm_pythia-70m_mz-130_PasswordMatch_n-its-10-seed-0 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:16:53+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-70m_mz-130_PasswordMatch_n-its-10-seed-0
This model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-70m_mz-130_PasswordMatch_n-its-10-seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-70m_mz-130_PasswordMatch_n-its-10-seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | # GALAXY-16B-v1.0

## Technical notes
- 72 layers, DUS procedure, mistral(32)->SOLAR(48)->GALAXY(72)
- 16B parameters
- model created as an extension of the depth-upscaling procedure used for SOLAR by Upstage
## Results
- model can and will produce NSFW content
- waiting for eval results | {"language": ["en"], "license": "apache-2.0", "tags": ["not-for-all-audiences"], "datasets": ["Intel/orca_dpo_pairs", "athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "Open-Orca/SlimOrca", "MinervaAI/Aesir-Preview", "allenai/ultrafeedback_binarized_cleaned"]} | TeeZee/GALAXY-16B-v1.0-bpw5.0-h8-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW",
"dataset:Open-Orca/SlimOrca",
"dataset:MinervaAI/Aesir-Preview",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:17:27+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #dataset-Intel/orca_dpo_pairs #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW #dataset-Open-Orca/SlimOrca #dataset-MinervaAI/Aesir-Preview #dataset-allenai/ultrafeedback_binarized_cleaned #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # GALAXY-16B-v1.0
!image/png
## Technical notes
- 72 layers, DUS procedure, mistral(32)->SOLAR(48)->GALAXY(72)
- 16B parameters
- model created as an extension of the depth-upscaling procedure used for SOLAR by Upstage
## Results
- model can and will produce NSFW content
- waiting for eval results | [
"# GALAXY-16B-v1.0\n\n!image/png",
"## Technical notes\n- 72 layers,DUS procedure, mistral(32)->SOLAR(48)->GALAXY(72)\n- 16B parameters\n- model created as a extension of depth upscaling procedure used for SOLAR by upstage",
"## Results\n- model can and will produce NSFW content\n- waiting for eval results"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #dataset-Intel/orca_dpo_pairs #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW #dataset-Open-Orca/SlimOrca #dataset-MinervaAI/Aesir-Preview #dataset-allenai/ultrafeedback_binarized_cleaned #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GALAXY-16B-v1.0\n\n!image/png",
"## Technical notes\n- 72 layers,DUS procedure, mistral(32)->SOLAR(48)->GALAXY(72)\n- 16B parameters\n- model created as a extension of depth upscaling procedure used for SOLAR by upstage",
"## Results\n- model can and will produce NSFW content\n- waiting for eval results"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model :** LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
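A minimal generation sketch (the Alpaca-style prompt format is an assumption based on this model's name, not documented in the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "LeroyDyer/CyberTron_Swahili_Alpaca"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Alpaca-style prompt is an assumption; adjust if the fine-tune used another format.
prompt = "### Instruction:\nTafsiri kwa Kiingereza: Habari za asubuhi.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```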
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en", "sw"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b"} | LeroyDyer/CyberTron_Swahili_Alpaca | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"sw",
"base_model:LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:19:26+00:00 | [] | [
"en",
"sw"
] | TAGS
#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #sw #base_model-LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model : LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #sw #base_model-LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
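Pending the official snippet, a minimal hedged sketch (the fine-tune's intended prompt format is unknown, so a plain prompt is used):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "eddyejembi/Gemma_fine-tunned"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Explain fine-tuning in one sentence.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```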
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | eddyejembi/Gemma_fine-tunned | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:19:47+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/kenshinx/Llama-2-7b-chat-xgpt
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
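Beyond those READMEs, here is a minimal sketch with `llama-cpp-python` (the filename assumes you downloaded the Q4_K_M file from the table below; adjust the path to whichever quant you picked):

```python
from llama_cpp import Llama

# Path is an assumption; point it at the quant file you downloaded.
llm = Llama(model_path="Llama-2-7b-chat-xgpt.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What kind of content was this model tuned on?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```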
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-xgpt-GGUF/resolve/main/Llama-2-7b-chat-xgpt.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["kenshinx/netlab-blogs"], "base_model": "kenshinx/Llama-2-7b-chat-xgpt", "quantized_by": "mradermacher"} | mradermacher/Llama-2-7b-chat-xgpt-GGUF | null | [
"transformers",
"gguf",
"en",
"dataset:kenshinx/netlab-blogs",
"base_model:kenshinx/Llama-2-7b-chat-xgpt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:24:55+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #dataset-kenshinx/netlab-blogs #base_model-kenshinx/Llama-2-7b-chat-xgpt #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #dataset-kenshinx/netlab-blogs #base_model-kenshinx/Llama-2-7b-chat-xgpt #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
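Pending the official snippet, a minimal seq2seq sketch (ai-forever's FRED-T5 checkpoints usually expect a denoiser prefix such as `<LM>`; whether this fine-tune keeps that convention is unverified):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "DocDuck/FRED-T5-large"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# "<LM>" prefix follows the upstream FRED-T5 convention; drop it if unneeded.
inputs = tokenizer("<LM>Продолжи текст: каждое утро я", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```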
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | DocDuck/FRED-T5-large | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:25:39+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning | stable-baselines3 |
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the zip filename is an assumption.
checkpoint = load_from_hub("ProrabVasili/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
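Once the agent is loaded, it can be evaluated in the matching environment. A short sketch, assuming `panda-gym` (which registers `PandaReachDense-v3` on import) is installed, reusing `model` from the snippet above:
```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("PandaReachDense-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```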
| {"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.27 +/- 0.13", "name": "mean_reward", "verified": false}]}]}]} | ProrabVasili/a2c-PandaReachDense-v3 | null | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-24T08:26:11+00:00 | [] | [] | TAGS
#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# A2C Agent playing PandaReachDense-v3
This is a trained model of a A2C agent playing PandaReachDense-v3
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stocks
This model is a fine-tuned version of [projecte-aina/roberta-base-ca-v2-cased-te](https://huggingface.co/projecte-aina/roberta-base-ca-v2-cased-te) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6733
- Accuracy: 0.8109
- Precision: 0.8127
- Recall: 0.8109
- F1: 0.8107
- Ratio: 0.5378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- lr_scheduler_warmup_steps: 4
- num_epochs: 2
- label_smoothing_factor: 0.1
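
For orientation, a sketch of how these settings map onto `transformers.TrainingArguments`; the output directory is a placeholder, and this is not the exact training script used:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="stocks",               # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=2,     # effective train batch size: 20
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    warmup_steps=4,                    # nonzero warmup_steps takes precedence over warmup_ratio
    num_train_epochs=2,
    label_smoothing_factor=0.1,
)
```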
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Ratio |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| 3.8156 | 0.1626 | 10 | 1.9553 | 0.5378 | 0.5507 | 0.5378 | 0.5064 | 0.7521 |
| 1.3339 | 0.3252 | 20 | 1.2090 | 0.5546 | 0.5548 | 0.5546 | 0.5543 | 0.5252 |
| 1.103 | 0.4878 | 30 | 0.9577 | 0.5588 | 0.5588 | 0.5588 | 0.5588 | 0.5042 |
| 0.9108 | 0.6504 | 40 | 0.8881 | 0.5714 | 0.5770 | 0.5714 | 0.5635 | 0.6345 |
| 0.8716 | 0.8130 | 50 | 0.8426 | 0.6387 | 0.6563 | 0.6387 | 0.6282 | 0.6681 |
| 0.844 | 0.9756 | 60 | 0.7948 | 0.7017 | 0.7233 | 0.7017 | 0.6943 | 0.3445 |
| 0.7816 | 1.1382 | 70 | 0.7715 | 0.7227 | 0.7660 | 0.7227 | 0.7109 | 0.7017 |
| 0.7406 | 1.3008 | 80 | 0.7040 | 0.8067 | 0.8099 | 0.8067 | 0.8062 | 0.5504 |
| 0.6764 | 1.4634 | 90 | 0.6954 | 0.8025 | 0.8104 | 0.8025 | 0.8013 | 0.5798 |
| 0.7306 | 1.6260 | 100 | 0.6933 | 0.8109 | 0.8209 | 0.8109 | 0.8094 | 0.5882 |
| 0.6736 | 1.7886 | 110 | 0.6763 | 0.8067 | 0.8089 | 0.8067 | 0.8064 | 0.5420 |
| 0.714 | 1.9512 | 120 | 0.6733 | 0.8109 | 0.8127 | 0.8109 | 0.8107 | 0.5378 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall", "f1"], "base_model": "projecte-aina/roberta-base-ca-v2-cased-te", "model-index": [{"name": "stocks", "results": []}]} | adriansanz/2404v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:projecte-aina/roberta-base-ca-v2-cased-te",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:27:40+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-projecte-aina/roberta-base-ca-v2-cased-te #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| stocks
======
This model is a fine-tuned version of projecte-aina/roberta-base-ca-v2-cased-te on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6733
* Accuracy: 0.8109
* Precision: 0.8127
* Recall: 0.8109
* F1: 0.8107
* Ratio: 0.5378
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 10
* eval\_batch\_size: 2
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 20
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.06
* lr\_scheduler\_warmup\_steps: 4
* num\_epochs: 2
* label\_smoothing\_factor: 0.1
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* lr\\_scheduler\\_warmup\\_steps: 4\n* num\\_epochs: 2\n* label\\_smoothing\\_factor: 0.1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-projecte-aina/roberta-base-ca-v2-cased-te #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* lr\\_scheduler\\_warmup\\_steps: 4\n* num\\_epochs: 2\n* label\\_smoothing\\_factor: 0.1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2085
- Accuracy: 0.9215
- F1: 0.9213
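
A minimal inference sketch using the `pipeline` API; the example sentence is illustrative, and the exact label names depend on the checkpoint config:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jhtop88/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'joy', 'score': ...}]
```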
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8081 | 1.0 | 250 | 0.2955 | 0.9175 | 0.9169 |
| 0.2399 | 2.0 | 500 | 0.2085 | 0.9215 | 0.9213 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9215, "name": "Accuracy"}, {"type": "f1", "value": 0.9213033485423318, "name": "F1"}]}]}]} | jhtop88/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:28:48+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2085
* Accuracy: 0.9215
* F1: 0.9213
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-1
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset.
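
A hedged loading sketch; the number of classes and the id-to-label mapping are not documented for this checkpoint, so interpreting the predicted id is left as an assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "AlignmentResearch/robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("A surprisingly touching film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class id
```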
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-1", "results": []}]} | AlignmentResearch/robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-1 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:29:45+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-1
This model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-1\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 1\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-1\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 1\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PolizzeDonut-UltimaProvaCluster-Cluster5di7-5epochs
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
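
A minimal inference sketch, assuming the processor files were pushed together with the model weights; the image path and task prompt below are placeholders, not values from the training setup:

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "tedad09/PolizzeDonut-UltimaProvaCluster-Cluster5di7-5epochs"
processor = DonutProcessor.from_pretrained(repo)   # assumes processor files are in the repo
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")  # placeholder document image
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s>"                                # the actual task prompt is an assumption
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```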
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-UltimaProvaCluster-Cluster5di7-5epochs", "results": []}]} | tedad09/PolizzeDonut-UltimaProvaCluster-Cluster5di7-5epochs | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:29:55+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
|
# PolizzeDonut-UltimaProvaCluster-Cluster5di7-5epochs
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# PolizzeDonut-UltimaProvaCluster-Cluster5di7-5epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n",
"# PolizzeDonut-UltimaProvaCluster-Cluster5di7-5epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | # Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Gemma Technical Report](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf)
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Context Length
Models are trained on a context length of 8192 tokens.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning examples
You can find fine-tuning notebooks under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples). We provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using [QLoRA](https://huggingface.co/papers/2305.14314) (see the sketch after this list)
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset. You can also find the copy of the notebook [here](https://github.com/huggingface/notebooks/blob/main/peft/gemma_7b_english_quotes.ipynb).
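
For orientation, here is a minimal QLoRA-style setup sketch; the LoRA rank, target modules, and quantization settings are illustrative assumptions, not the values used in the provided scripts:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit precision (NF4), as in a typical QLoRA recipe.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b", quantization_config=bnb_config, device_map="auto"
)

# Attach a LoRA adapter; rank and target modules are illustrative.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```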
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.float16, revision="float16")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **45.0** | **56.9** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
| ------------------------------ | ------------- | ----------- | --------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives. | {"license": "gemma"} | Lagstill/Varsity_module2_bot | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:2305.14314",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-24T08:31:05+00:00 | [
"2305.14314",
"2312.11805",
"2009.03300",
"1905.07830",
"1911.11641",
"1904.09728",
"1905.10044",
"1907.10641",
"1811.00937",
"1809.02789",
"1911.01547",
"1705.03551",
"2107.03374",
"2108.07732",
"2110.14168",
"2304.06364",
"2206.04615",
"1804.06876",
"2110.08193",
"2009.11462",
"2101.11718",
"1804.09301",
"2109.07958",
"2203.09509"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-2305.14314 #arxiv-2312.11805 #arxiv-2009.03300 #arxiv-1905.07830 #arxiv-1911.11641 #arxiv-1904.09728 #arxiv-1905.10044 #arxiv-1907.10641 #arxiv-1811.00937 #arxiv-1809.02789 #arxiv-1911.01547 #arxiv-1705.03551 #arxiv-2107.03374 #arxiv-2108.07732 #arxiv-2110.14168 #arxiv-2304.06364 #arxiv-2206.04615 #arxiv-1804.06876 #arxiv-2110.08193 #arxiv-2009.11462 #arxiv-2101.11718 #arxiv-1804.09301 #arxiv-2109.07958 #arxiv-2203.09509 #license-gemma #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Gemma Model Card
================
Model Page: Gemma
This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the 2B base model, 7B instruct model, and 2B instruct model.
Resources and Technical Documentation:
* Gemma Technical Report
* Responsible Generative AI Toolkit
* Gemma on Kaggle
* Gemma on Vertex Model Garden
Terms of Use: Terms
Authors: Google
Model Information
-----------------
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Context Length
Models are trained on a context length of 8192 tokens.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to 'pip install -U transformers', then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning examples
You can find fine-tuning notebooks under the 'examples/' directory. We provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset. You can also find the copy of the notebook here.
#### Running the model on a CPU
#### Running the model on a single / multi GPU
#### Running the model on a GPU using different precisions
* *Using 'torch.float16'*
* *Using 'torch.bfloat16'*
#### Quantized Versions through 'bitsandbytes'
* *Using 8-bit precision (int8)*
* *Using 4-bit precision*
#### Other optimizations
* *Flash Attention 2*
First make sure to install 'flash-attn' in your environment: 'pip install flash-attn'
### Inputs and outputs
* Input: Text string, such as a question, a prompt, or a document to be
summarized.
* Output: Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
Model Data
----------
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
our policies.
Implementation Information
--------------------------
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
Tensor Processing Unit (TPU) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
Google's commitments to operate sustainably.
### Software
Training was done using JAX and ML Pathways.
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
paper about the Gemini family of models; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
Evaluation
----------
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
Ethics and Safety
-----------------
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as WinoBias and BBQ Dataset.
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting internal policies for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
Usage and Limitations
---------------------
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
+ Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
+ Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
+ Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
+ Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
+ Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
+ Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
+ The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
+ The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
+ LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
+ A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
+ Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
+ LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
+ LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
+ LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
+ LLMs can be misused to generate text that is false, misleading, or harmful.
+ Guidelines are provided for responsible use with the model, see the
Responsible Generative AI Toolkit.
* Transparency and Accountability:
+ This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
+ A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
Gemma Prohibited Use Policy.
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
| [
"### Description\n\n\nGemma is a family of lightweight, state-of-the-art open models from Google,\nbuilt from the same research and technology used to create the Gemini models.\nThey are text-to-text, decoder-only large language models, available in English,\nwith open weights, pre-trained variants, and instruction-tuned variants. Gemma\nmodels are well-suited for a variety of text generation tasks, including\nquestion answering, summarization, and reasoning. Their relatively small size\nmakes it possible to deploy them in environments with limited resources such as\na laptop, desktop or your own cloud infrastructure, democratizing access to\nstate of the art AI models and helping foster innovation for everyone.",
"### Context Length\n\n\nModels are trained on a context length of 8192 tokens.",
"### Usage\n\n\nBelow we share some code snippets on how to get quickly started with running the model. First make sure to 'pip install -U transformers', then copy the snippet from the section that is relevant for your usecase.",
"#### Fine-tuning examples\n\n\nYou can find fine-tuning notebooks under the 'examples/' directory. We provide:\n\n\n* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA\n* A script to perform SFT using FSDP on TPU devices\n* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset. You can also find the copy of the notebook here.",
"#### Running the model on a CPU",
"#### Running the model on a single / multi GPU",
"#### Running the model on a GPU using different precisions\n\n\n* *Using 'torch.float16'*\n* *Using 'torch.bfloat16'*",
"#### Quantized Versions through 'bitsandbytes'\n\n\n* *Using 8-bit precision (int8)*\n* *Using 4-bit precision*",
"#### Other optimizations\n\n\n* *Flash Attention 2*\n\n\nFirst make sure to install 'flash-attn' in your environment 'pip install flash-attn'",
"### Inputs and outputs\n\n\n* Input: Text string, such as a question, a prompt, or a document to be\nsummarized.\n* Output: Generated English-language text in response to the input, such\nas an answer to a question, or a summary of a document.\n\n\nModel Data\n----------\n\n\nData used for model training and how the data was processed.",
"### Training Dataset\n\n\nThese models were trained on a dataset of text data that includes a wide variety\nof sources, totaling 6 trillion tokens. Here are the key components:\n\n\n* Web Documents: A diverse collection of web text ensures the model is exposed\nto a broad range of linguistic styles, topics, and vocabulary. Primarily\nEnglish-language content.\n* Code: Exposing the model to code helps it to learn the syntax and patterns of\nprogramming languages, which improves its ability to generate code or\nunderstand code-related questions.\n* Mathematics: Training on mathematical text helps the model learn logical\nreasoning, symbolic representation, and to address mathematical queries.\n\n\nThe combination of these diverse data sources is crucial for training a powerful\nlanguage model that can handle a wide variety of different tasks and text\nformats.",
"### Data Preprocessing\n\n\nHere are the key data cleaning and filtering methods applied to the training\ndata:\n\n\n* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was\napplied at multiple stages in the data preparation process to ensure the\nexclusion of harmful and illegal content\n* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and\nreliable, automated techniques were used to filter out certain personal\ninformation and other sensitive data from training sets.\n* Additional methods: Filtering based on content quality and safely in line with\nour policies.\n\n\nImplementation Information\n--------------------------\n\n\nDetails about the model internals.",
"### Hardware\n\n\nGemma was trained using the latest generation of\nTensor Processing Unit (TPU) hardware (TPUv5e).\n\n\nTraining large language models requires significant computational power. TPUs,\ndesigned specifically for matrix operations common in machine learning, offer\nseveral advantages in this domain:\n\n\n* Performance: TPUs are specifically designed to handle the massive computations\ninvolved in training LLMs. They can speed up training considerably compared to\nCPUs.\n* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing\nfor the handling of large models and batch sizes during training. This can\nlead to better model quality.\n* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for\nhandling the growing complexity of large foundation models. You can distribute\ntraining across multiple TPU devices for faster and more efficient processing.\n* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective\nsolution for training large models compared to CPU-based infrastructure,\nespecially when considering the time and resources saved due to faster\ntraining.\n* These advantages are aligned with\nGoogle's commitments to operate sustainably.",
"### Software\n\n\nTraining was done using JAX and ML Pathways.\n\n\nJAX allows researchers to take advantage of the latest generation of hardware,\nincluding TPUs, for faster and more efficient training of large models.\n\n\nML Pathways is Google's latest effort to build artificially intelligent systems\ncapable of generalizing across multiple tasks. This is specially suitable for\nfoundation models, including large language models like\nthese ones.\n\n\nTogether, JAX and ML Pathways are used as described in the\npaper about the Gemini family of models; \"the 'single\ncontroller' programming model of Jax and Pathways allows a single Python\nprocess to orchestrate the entire training run, dramatically simplifying the\ndevelopment workflow.\"\n\n\nEvaluation\n----------\n\n\nModel evaluation metrics and results.",
"### Benchmark Results\n\n\nThese models were evaluated against a large collection of different datasets and\nmetrics to cover different aspects of text generation:\n\n\n\nEthics and Safety\n-----------------\n\n\nEthics and safety evaluation approach and results.",
"### Evaluation Approach\n\n\nOur evaluation methods include structured evaluations and internal red-teaming\ntesting of relevant content policies. Red-teaming was conducted by a number of\ndifferent teams, each with different goals and human evaluation metrics. These\nmodels were evaluated against a number of different categories relevant to\nethics and safety, including:\n\n\n* Text-to-Text Content Safety: Human evaluation on prompts covering safety\npolicies including child sexual abuse and exploitation, harassment, violence\nand gore, and hate speech.\n* Text-to-Text Representational Harms: Benchmark against relevant academic\ndatasets such as WinoBias and BBQ Dataset.\n* Memorization: Automated evaluation of memorization of training data, including\nthe risk of personally identifiable information exposure.\n* Large-scale harm: Tests for \"dangerous capabilities,\" such as chemical,\nbiological, radiological, and nuclear (CBRN) risks.",
"### Evaluation Results\n\n\nThe results of ethics and safety evaluations are within acceptable thresholds\nfor meeting internal policies for categories such as child\nsafety, content safety, representational harms, memorization, large-scale harms.\nOn top of robust internal evaluations, the results of well known safety\nbenchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA\nare shown here.\n\n\n\nUsage and Limitations\n---------------------\n\n\nThese models have certain limitations that users should be aware of.",
"### Intended Usage\n\n\nOpen Large Language Models (LLMs) have a wide range of applications across\nvarious industries and domains. The following list of potential uses is not\ncomprehensive. The purpose of this list is to provide contextual information\nabout the possible use-cases that the model creators considered as part of model\ntraining and development.\n\n\n* Content Creation and Communication\n\t+ Text Generation: These models can be used to generate creative text formats\n\tsuch as poems, scripts, code, marketing copy, and email drafts.\n\t+ Chatbots and Conversational AI: Power conversational interfaces for customer\n\tservice, virtual assistants, or interactive applications.\n\t+ Text Summarization: Generate concise summaries of a text corpus, research\n\tpapers, or reports.\n* Research and Education\n\t+ Natural Language Processing (NLP) Research: These models can serve as a\n\tfoundation for researchers to experiment with NLP techniques, develop\n\talgorithms, and contribute to the advancement of the field.\n\t+ Language Learning Tools: Support interactive language learning experiences,\n\taiding in grammar correction or providing writing practice.\n\t+ Knowledge Exploration: Assist researchers in exploring large bodies of text\n\tby generating summaries or answering questions about specific topics.",
"### Limitations\n\n\n* Training Data\n\t+ The quality and diversity of the training data significantly influence the\n\tmodel's capabilities. Biases or gaps in the training data can lead to\n\tlimitations in the model's responses.\n\t+ The scope of the training dataset determines the subject areas the model can\n\thandle effectively.\n* Context and Task Complexity\n\t+ LLMs are better at tasks that can be framed with clear prompts and\n\tinstructions. Open-ended or highly complex tasks might be challenging.\n\t+ A model's performance can be influenced by the amount of context provided\n\t(longer context generally leads to better outputs, up to a certain point).\n* Language Ambiguity and Nuance\n\t+ Natural language is inherently complex. LLMs might struggle to grasp subtle\n\tnuances, sarcasm, or figurative language.\n* Factual Accuracy\n\t+ LLMs generate responses based on information they learned from their\n\ttraining datasets, but they are not knowledge bases. They may generate\n\tincorrect or outdated factual statements.\n* Common Sense\n\t+ LLMs rely on statistical patterns in language. They might lack the ability\n\tto apply common sense reasoning in certain situations.",
"### Ethical Considerations and Risks\n\n\nThe development of large language models (LLMs) raises several ethical concerns.\nIn creating an open model, we have carefully considered the following:\n\n\n* Bias and Fairness\n\n\n\t+ LLMs trained on large-scale, real-world text data can reflect socio-cultural\n\tbiases embedded in the training material. These models underwent careful\n\tscrutiny, input data pre-processing described and posterior evaluations\n\treported in this card.\n* Misinformation and Misuse\n\n\n\t+ LLMs can be misused to generate text that is false, misleading, or harmful.\n\t+ Guidelines are provided for responsible use with the model, see the\n\tResponsible Generative AI Toolkit.\n* Transparency and Accountability:\n\n\n\t+ This model card summarizes details on the models' architecture,\n\tcapabilities, limitations, and evaluation processes.\n\t+ A responsibly developed open model offers the opportunity to share\n\tinnovation by making LLM technology accessible to developers and researchers\n\tacross the AI ecosystem.\n\tRisks identified and mitigations:\n* Perpetuation of biases: It's encouraged to perform continuous monitoring\n(using evaluation metrics, human review) and the exploration of de-biasing\ntechniques during model training, fine-tuning, and other use cases.\n* Generation of harmful content: Mechanisms and guidelines for content safety\nare essential. Developers are encouraged to exercise caution and implement\nappropriate content safety safeguards based on their specific product policies\nand application use cases.\n* Misuse for malicious purposes: Technical limitations and developer and\nend-user education can help mitigate against malicious applications of LLMs.\nEducational resources and reporting mechanisms for users to flag misuse are\nprovided. Prohibited uses of Gemma models are outlined in the\nGemma Prohibited Use Policy.\n* Privacy violations: Models were trained on data filtered for removal of PII\n(Personally Identifiable Information). Developers are encouraged to adhere to\nprivacy regulations with privacy-preserving techniques.",
"### Benefits\n\n\nAt the time of release, this family of models provides high-performance open\nlarge language model implementations designed from the ground up for Responsible\nAI development compared to similarly sized models.\n\n\nUsing the benchmark evaluation metrics described in this document, these models\nhave shown to provide superior performance to other, comparably-sized open model\nalternatives."
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-2305.14314 #arxiv-2312.11805 #arxiv-2009.03300 #arxiv-1905.07830 #arxiv-1911.11641 #arxiv-1904.09728 #arxiv-1905.10044 #arxiv-1907.10641 #arxiv-1811.00937 #arxiv-1809.02789 #arxiv-1911.01547 #arxiv-1705.03551 #arxiv-2107.03374 #arxiv-2108.07732 #arxiv-2110.14168 #arxiv-2304.06364 #arxiv-2206.04615 #arxiv-1804.06876 #arxiv-2110.08193 #arxiv-2009.11462 #arxiv-2101.11718 #arxiv-1804.09301 #arxiv-2109.07958 #arxiv-2203.09509 #license-gemma #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Description\n\n\nGemma is a family of lightweight, state-of-the-art open models from Google,\nbuilt from the same research and technology used to create the Gemini models.\nThey are text-to-text, decoder-only large language models, available in English,\nwith open weights, pre-trained variants, and instruction-tuned variants. Gemma\nmodels are well-suited for a variety of text generation tasks, including\nquestion answering, summarization, and reasoning. Their relatively small size\nmakes it possible to deploy them in environments with limited resources such as\na laptop, desktop or your own cloud infrastructure, democratizing access to\nstate of the art AI models and helping foster innovation for everyone.",
"### Context Length\n\n\nModels are trained on a context length of 8192 tokens.",
"### Usage\n\n\nBelow we share some code snippets on how to get quickly started with running the model. First make sure to 'pip install -U transformers', then copy the snippet from the section that is relevant for your usecase.",
"#### Fine-tuning examples\n\n\nYou can find fine-tuning notebooks under the 'examples/' directory. We provide:\n\n\n* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA\n* A script to perform SFT using FSDP on TPU devices\n* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset. You can also find the copy of the notebook here.",
"#### Running the model on a CPU",
"#### Running the model on a single / multi GPU",
"#### Running the model on a GPU using different precisions\n\n\n* *Using 'torch.float16'*\n* *Using 'torch.bfloat16'*",
"#### Quantized Versions through 'bitsandbytes'\n\n\n* *Using 8-bit precision (int8)*\n* *Using 4-bit precision*",
"#### Other optimizations\n\n\n* *Flash Attention 2*\n\n\nFirst make sure to install 'flash-attn' in your environment 'pip install flash-attn'",
"### Inputs and outputs\n\n\n* Input: Text string, such as a question, a prompt, or a document to be\nsummarized.\n* Output: Generated English-language text in response to the input, such\nas an answer to a question, or a summary of a document.\n\n\nModel Data\n----------\n\n\nData used for model training and how the data was processed.",
"### Training Dataset\n\n\nThese models were trained on a dataset of text data that includes a wide variety\nof sources, totaling 6 trillion tokens. Here are the key components:\n\n\n* Web Documents: A diverse collection of web text ensures the model is exposed\nto a broad range of linguistic styles, topics, and vocabulary. Primarily\nEnglish-language content.\n* Code: Exposing the model to code helps it to learn the syntax and patterns of\nprogramming languages, which improves its ability to generate code or\nunderstand code-related questions.\n* Mathematics: Training on mathematical text helps the model learn logical\nreasoning, symbolic representation, and to address mathematical queries.\n\n\nThe combination of these diverse data sources is crucial for training a powerful\nlanguage model that can handle a wide variety of different tasks and text\nformats.",
"### Data Preprocessing\n\n\nHere are the key data cleaning and filtering methods applied to the training\ndata:\n\n\n* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was\napplied at multiple stages in the data preparation process to ensure the\nexclusion of harmful and illegal content\n* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and\nreliable, automated techniques were used to filter out certain personal\ninformation and other sensitive data from training sets.\n* Additional methods: Filtering based on content quality and safely in line with\nour policies.\n\n\nImplementation Information\n--------------------------\n\n\nDetails about the model internals.",
"### Hardware\n\n\nGemma was trained using the latest generation of\nTensor Processing Unit (TPU) hardware (TPUv5e).\n\n\nTraining large language models requires significant computational power. TPUs,\ndesigned specifically for matrix operations common in machine learning, offer\nseveral advantages in this domain:\n\n\n* Performance: TPUs are specifically designed to handle the massive computations\ninvolved in training LLMs. They can speed up training considerably compared to\nCPUs.\n* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing\nfor the handling of large models and batch sizes during training. This can\nlead to better model quality.\n* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for\nhandling the growing complexity of large foundation models. You can distribute\ntraining across multiple TPU devices for faster and more efficient processing.\n* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective\nsolution for training large models compared to CPU-based infrastructure,\nespecially when considering the time and resources saved due to faster\ntraining.\n* These advantages are aligned with\nGoogle's commitments to operate sustainably.",
"### Software\n\n\nTraining was done using JAX and ML Pathways.\n\n\nJAX allows researchers to take advantage of the latest generation of hardware,\nincluding TPUs, for faster and more efficient training of large models.\n\n\nML Pathways is Google's latest effort to build artificially intelligent systems\ncapable of generalizing across multiple tasks. This is specially suitable for\nfoundation models, including large language models like\nthese ones.\n\n\nTogether, JAX and ML Pathways are used as described in the\npaper about the Gemini family of models; \"the 'single\ncontroller' programming model of Jax and Pathways allows a single Python\nprocess to orchestrate the entire training run, dramatically simplifying the\ndevelopment workflow.\"\n\n\nEvaluation\n----------\n\n\nModel evaluation metrics and results.",
"### Benchmark Results\n\n\nThese models were evaluated against a large collection of different datasets and\nmetrics to cover different aspects of text generation:\n\n\n\nEthics and Safety\n-----------------\n\n\nEthics and safety evaluation approach and results.",
"### Evaluation Approach\n\n\nOur evaluation methods include structured evaluations and internal red-teaming\ntesting of relevant content policies. Red-teaming was conducted by a number of\ndifferent teams, each with different goals and human evaluation metrics. These\nmodels were evaluated against a number of different categories relevant to\nethics and safety, including:\n\n\n* Text-to-Text Content Safety: Human evaluation on prompts covering safety\npolicies including child sexual abuse and exploitation, harassment, violence\nand gore, and hate speech.\n* Text-to-Text Representational Harms: Benchmark against relevant academic\ndatasets such as WinoBias and BBQ Dataset.\n* Memorization: Automated evaluation of memorization of training data, including\nthe risk of personally identifiable information exposure.\n* Large-scale harm: Tests for \"dangerous capabilities,\" such as chemical,\nbiological, radiological, and nuclear (CBRN) risks.",
"### Evaluation Results\n\n\nThe results of ethics and safety evaluations are within acceptable thresholds\nfor meeting internal policies for categories such as child\nsafety, content safety, representational harms, memorization, large-scale harms.\nOn top of robust internal evaluations, the results of well known safety\nbenchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA\nare shown here.\n\n\n\nUsage and Limitations\n---------------------\n\n\nThese models have certain limitations that users should be aware of.",
"### Intended Usage\n\n\nOpen Large Language Models (LLMs) have a wide range of applications across\nvarious industries and domains. The following list of potential uses is not\ncomprehensive. The purpose of this list is to provide contextual information\nabout the possible use-cases that the model creators considered as part of model\ntraining and development.\n\n\n* Content Creation and Communication\n\t+ Text Generation: These models can be used to generate creative text formats\n\tsuch as poems, scripts, code, marketing copy, and email drafts.\n\t+ Chatbots and Conversational AI: Power conversational interfaces for customer\n\tservice, virtual assistants, or interactive applications.\n\t+ Text Summarization: Generate concise summaries of a text corpus, research\n\tpapers, or reports.\n* Research and Education\n\t+ Natural Language Processing (NLP) Research: These models can serve as a\n\tfoundation for researchers to experiment with NLP techniques, develop\n\talgorithms, and contribute to the advancement of the field.\n\t+ Language Learning Tools: Support interactive language learning experiences,\n\taiding in grammar correction or providing writing practice.\n\t+ Knowledge Exploration: Assist researchers in exploring large bodies of text\n\tby generating summaries or answering questions about specific topics.",
"### Limitations\n\n\n* Training Data\n\t+ The quality and diversity of the training data significantly influence the\n\tmodel's capabilities. Biases or gaps in the training data can lead to\n\tlimitations in the model's responses.\n\t+ The scope of the training dataset determines the subject areas the model can\n\thandle effectively.\n* Context and Task Complexity\n\t+ LLMs are better at tasks that can be framed with clear prompts and\n\tinstructions. Open-ended or highly complex tasks might be challenging.\n\t+ A model's performance can be influenced by the amount of context provided\n\t(longer context generally leads to better outputs, up to a certain point).\n* Language Ambiguity and Nuance\n\t+ Natural language is inherently complex. LLMs might struggle to grasp subtle\n\tnuances, sarcasm, or figurative language.\n* Factual Accuracy\n\t+ LLMs generate responses based on information they learned from their\n\ttraining datasets, but they are not knowledge bases. They may generate\n\tincorrect or outdated factual statements.\n* Common Sense\n\t+ LLMs rely on statistical patterns in language. They might lack the ability\n\tto apply common sense reasoning in certain situations.",
"### Ethical Considerations and Risks\n\n\nThe development of large language models (LLMs) raises several ethical concerns.\nIn creating an open model, we have carefully considered the following:\n\n\n* Bias and Fairness\n\n\n\t+ LLMs trained on large-scale, real-world text data can reflect socio-cultural\n\tbiases embedded in the training material. These models underwent careful\n\tscrutiny, input data pre-processing described and posterior evaluations\n\treported in this card.\n* Misinformation and Misuse\n\n\n\t+ LLMs can be misused to generate text that is false, misleading, or harmful.\n\t+ Guidelines are provided for responsible use with the model, see the\n\tResponsible Generative AI Toolkit.\n* Transparency and Accountability:\n\n\n\t+ This model card summarizes details on the models' architecture,\n\tcapabilities, limitations, and evaluation processes.\n\t+ A responsibly developed open model offers the opportunity to share\n\tinnovation by making LLM technology accessible to developers and researchers\n\tacross the AI ecosystem.\n\tRisks identified and mitigations:\n* Perpetuation of biases: It's encouraged to perform continuous monitoring\n(using evaluation metrics, human review) and the exploration of de-biasing\ntechniques during model training, fine-tuning, and other use cases.\n* Generation of harmful content: Mechanisms and guidelines for content safety\nare essential. Developers are encouraged to exercise caution and implement\nappropriate content safety safeguards based on their specific product policies\nand application use cases.\n* Misuse for malicious purposes: Technical limitations and developer and\nend-user education can help mitigate against malicious applications of LLMs.\nEducational resources and reporting mechanisms for users to flag misuse are\nprovided. Prohibited uses of Gemma models are outlined in the\nGemma Prohibited Use Policy.\n* Privacy violations: Models were trained on data filtered for removal of PII\n(Personally Identifiable Information). Developers are encouraged to adhere to\nprivacy regulations with privacy-preserving techniques.",
"### Benefits\n\n\nAt the time of release, this family of models provides high-performance open\nlarge language model implementations designed from the ground up for Responsible\nAI development compared to similarly sized models.\n\n\nUsing the benchmark evaluation metrics described in this document, these models\nhave shown to provide superior performance to other, comparably-sized open model\nalternatives."
] |
zero-shot-classification | adapter-transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
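The card leaves this section blank, so the snippet below is only a hypothetical sketch: it assumes the repository hosts a standard transformers-compatible zero-shot-classification checkpoint. The repo id and pipeline tag are taken from this record's metadata; the input text and candidate labels are made up.

```python
# Hypothetical sketch only: assumes this repo holds a transformers-compatible
# zero-shot-classification checkpoint. Input text and labels are placeholders.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",                # pipeline tag from this record
    model="Saba-PornStar/Fozhan-girls-21age",  # repo id from this record
)

result = classifier(
    "The new update makes the app noticeably faster.",
    candidate_labels=["performance", "design", "pricing"],
)
print(result["labels"][0], result["scores"][0])
```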
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"language": ["am", "ab"], "license": "apache-2.0", "library_name": "adapter-transformers", "datasets": ["mlabonne/orpo-dpo-mix-40k", "gretelai/synthetic_text_to_sql", "HuggingFaceFW/fineweb"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification"} | Saba-PornStar/Fozhan-girls-21age | null | [
"adapter-transformers",
"zero-shot-classification",
"am",
"ab",
"dataset:mlabonne/orpo-dpo-mix-40k",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:HuggingFaceFW/fineweb",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T08:32:27+00:00 | [
"1910.09700"
] | [
"am",
"ab"
] | TAGS
#adapter-transformers #zero-shot-classification #am #ab #dataset-mlabonne/orpo-dpo-mix-40k #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceFW/fineweb #arxiv-1910.09700 #license-apache-2.0 #region-us
|
# Model Card for Model ID
This modelcard aims to be a base template for new models. It has been generated using this raw template.
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#adapter-transformers #zero-shot-classification #am #ab #dataset-mlabonne/orpo-dpo-mix-40k #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceFW/fineweb #arxiv-1910.09700 #license-apache-2.0 #region-us \n",
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model:** LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
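For readers who want to try the adapter, a minimal sketch follows. It assumes this repository contains PEFT-format LoRA weights for the base model named above; the Swahili prompt and generation settings are illustrative only.

```python
# Minimal sketch, assuming this repo stores PEFT-format LoRA weights for the
# base model listed above. Prompt and generation settings are illustrative.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b"  # base model from this card
adapter_id = "LeroyDyer/Alpaca_Swahili_LORA"             # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Habari! Unaweza kunisaidia?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```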
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b"} | LeroyDyer/Alpaca_Swahili_LORA | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:32:51+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model: LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b
This mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
<img src="URL" width="200"/>
| [
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_MiniTron_Swahili_3.75b\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
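Since the section is otherwise empty, here is a minimal, hedged sketch: the record's tags mark this as a llama-architecture text-generation model, so a standard causal-LM load should apply. The prompt and generation settings are placeholders.

```python
# Minimal sketch, assuming a standard causal-LM checkpoint (the record's tags
# list llama + text-generation). Prompt and settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cilantro9246/51n46wk"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Write one sentence about glaciers.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```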
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cilantro9246/51n46wk | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:35:05+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-0
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
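For reference, the listed values map onto `transformers` `TrainingArguments` roughly as shown below; `output_dir` and anything not listed above are placeholders, not part of the original run.

```python
# Rough TrainingArguments equivalent of the hyperparameters listed above.
# output_dir and any option not listed in this card are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-0",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=0,
    adam_beta1=0.9,      # Adam betas from the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```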
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-0", "results": []}]} | AlignmentResearch/robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-0 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:35:15+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-0
This model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-70m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-70m_mz-130_IMDB_n-its-10-seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-70m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
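The section is blank in the card, so the following is only a minimal sketch: it assumes this repository holds a PEFT adapter for the stated base model, `openai/whisper-base`. Audio loading and transcription are omitted.

```python
# Minimal sketch, assuming this repo holds a PEFT adapter for openai/whisper-base
# (the base_model in this record's metadata). Audio handling is omitted.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "openai/whisper-base"                             # from the metadata
adapter_id = "ygaci/whisper-base-fr_common_voice_16_new_3"  # this repository

processor = WhisperProcessor.from_pretrained(base_id)
model = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach fine-tuned weights
```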
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "openai/whisper-base"} | ygaci/whisper-base-fr_common_voice_16_new_3 | null | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-base",
"region:us"
] | null | 2024-04-24T08:35:50+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-openai/whisper-base #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-openai/whisper-base #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-generation | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/mantou-studio/journal-finetune/runs/dcsf68ed)
# mistral-pennlaine-finetune
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 500
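The hyperparameters above map directly onto `transformers.TrainingArguments`; a minimal illustrative sketch (the output directory is a placeholder, not taken from the card):

```python
# Illustrative reconstruction of the reported configuration; not the
# authors' actual training script. output_dir is a made-up placeholder.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-pennlaine-finetune",  # placeholder
    learning_rate=2.5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1,
    max_steps=500,
)
```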
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5433 | 25.0 | 25 | 1.4043 |
| 0.016 | 50.0 | 50 | 1.5889 |
| 0.0001 | 75.0 | 75 | 1.4304 |
| 0.0002 | 100.0 | 100 | 1.4221 |
| 0.0 | 125.0 | 125 | 1.4181 |
| 0.0001 | 150.0 | 150 | 1.4378 |
| 0.0001 | 175.0 | 175 | 1.4395 |
| 0.0 | 200.0 | 200 | 1.4443 |
| 0.0001 | 225.0 | 225 | 1.4458 |
| 0.0 | 250.0 | 250 | 1.4430 |
| 0.0 | 275.0 | 275 | 1.4446 |
| 0.0001 | 300.0 | 300 | 1.4426 |
| 0.0 | 325.0 | 325 | 1.4528 |
| 0.0 | 350.0 | 350 | 1.4405 |
| 0.0 | 375.0 | 375 | 1.4464 |
| 0.0 | 400.0 | 400 | 1.4423 |
| 0.0 | 425.0 | 425 | 1.4526 |
| 0.0 | 450.0 | 450 | 1.4581 |
| 0.0 | 475.0 | 475 | 1.4538 |
| 0.0 | 500.0 | 500 | 1.4428 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1 | {"language": ["en"], "license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "pipeline_tag": "text-generation", "model-index": [{"name": "mistral-pennlaine-finetune", "results": []}]} | Pennlaine/mistral-v0.1-finetune-entities-extraction | null | [
"peft",
"safetensors",
"generated_from_trainer",
"text-generation",
"en",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T08:37:12+00:00 | [] | [
"en"
] | TAGS
#peft #safetensors #generated_from_trainer #text-generation #en #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
| <img src="URL alt="Visualize in Weights & Biases" width="200" height="32"/>
mistral-pennlaine-finetune
==========================
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4428
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2.5e-05
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1
* training\_steps: 500
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.41.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2.5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1\n* training\\_steps: 500",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #text-generation #en #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2.5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1\n* training\\_steps: 500",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] |
null | null |
# DavidAU/HelloNurse-11b-Q6_K-GGUF
This model was converted to GGUF format from [`MarsupialAI/HelloNurse-11b`](https://huggingface.co/MarsupialAI/HelloNurse-11b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MarsupialAI/HelloNurse-11b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/HelloNurse-11b-Q6_K-GGUF --model hellonurse-11b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/HelloNurse-11b-Q6_K-GGUF --model hellonurse-11b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m hellonurse-11b.Q6_K.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["not-for-all-audiences", "nsfw", "merge", "mistral", "llama-cpp", "gguf-my-repo"]} | DavidAU/HelloNurse-11b-Q6_K-GGUF | null | [
"gguf",
"not-for-all-audiences",
"nsfw",
"merge",
"mistral",
"llama-cpp",
"gguf-my-repo",
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T08:38:34+00:00 | [] | [
"en"
] | TAGS
#gguf #not-for-all-audiences #nsfw #merge #mistral #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us
|
# DavidAU/HelloNurse-11b-Q6_K-GGUF
This model was converted to GGUF format from 'MarsupialAI/HelloNurse-11b' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/HelloNurse-11b-Q6_K-GGUF\nThis model was converted to GGUF format from 'MarsupialAI/HelloNurse-11b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #not-for-all-audiences #nsfw #merge #mistral #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us \n",
"# DavidAU/HelloNurse-11b-Q6_K-GGUF\nThis model was converted to GGUF format from 'MarsupialAI/HelloNurse-11b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
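As a stopgap while the snippet above is missing, a hedged loading sketch: the `unsloth` tag suggests a causal language model, but the card does not confirm the architecture, so treat this as an assumption.

```python
# Hedged sketch: assumes a causal LM (suggested, not confirmed, by the
# unsloth tag). Replace with the documented usage once available.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "illikea/football3"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

out = model.generate(**tokenizer("Hello", return_tensors="pt"), max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```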
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | illikea/football3 | null | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:40:30+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | diffusers | # Kandinsky-3: Text-to-image Diffusion Model

[Post](https://habr.com/ru/companies/sberbank/articles/775590/) | [Generate](https://fusionbrain.ai) | [Telegram-bot](https://t.me/kandinsky21_bot) | [Report]
## Description:
Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, Kandinsky 3.0 incorporates more data, specifically data related to Russian culture, which allows it to generate pictures related to Russian culture. Furthermore, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively.
For more information (details of training, examples of generations), check out our [post](https://habr.com/ru/companies/sberbank/articles/775590/). The English version will be released in a couple of days.
## Architecture details:

Architecture consists of three parts:
+ Text encoder Flan-UL2 (encoder part) - 8.6B
+ Latent Diffusion U-Net - 3B
+ MoVQ encoder/decoder - 267M
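These sizes can be sanity-checked by counting parameters on the loaded pipeline. A small hedged sketch follows; the attribute names (`text_encoder`, `unet`, `movq`) follow the `diffusers` Kandinsky 3 pipeline, and exact counts may differ slightly from the rounded figures above.

```python
# Hedged sketch: counts parameters per pipeline component to check the
# sizes listed above. Assumes the pipeline exposes text_encoder/unet/movq.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
)

def n_params(module):
    return sum(p.numel() for p in module.parameters())

for name in ("text_encoder", "unet", "movq"):
    print(f"{name}: {n_params(getattr(pipe, name)) / 1e9:.2f}B parameters")
```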
## Models
We release our two models:
+ Base: Base text-to-image diffusion model. This model was trained over 2M steps on 400 A100
+ Inpainting: Inpainting version of the model. The model was initialized from final checkpoint of base model and trained 250k steps on 300 A100.
## Installing
Make sure to install `diffusers` from main as well as Transformers, Accelerate
```
pip install git+https://github.com/huggingface/diffusers.git
pip install --upgrade transformers accelerate
```
## How to use:
TODO
### Text-2-Image
```python
from diffusers import AutoPipelineForText2Image
import torch
pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats. One of them is reading a newspaper. The window shows the city in the background."
generator = torch.Generator(device="cpu").manual_seed(0)
image = pipe(prompt, num_inference_steps=25, generator=generator).images[0]
```
### Image-2-Image
```python
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image
import torch
pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
prompt = "A painting of the inside of a subway train with tiny raccoons."
image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky3/t2i.png")
generator = torch.Generator(device="cpu").manual_seed(0)
image = pipe(prompt, image=image, strength=0.75, num_inference_steps=25, generator=generator).images[0]
```
## Examples of generations
<hr>
<table class="center">
<tr>
<td><img src="assets/photo_8.jpg" raw=true></td>
<td><img src="assets/photo_15.jpg"></td>
<td><img src="assets/photo_16.jpg"></td>
<td><img src="assets/photo_17.jpg"></td>
</tr>
<tr>
<td width=25% align="center">"A beautiful landscape outdoors scene in the crochet knitting art style, drawing in style by Alfons Mucha"</td>
<td width=25% align="center">"gorgeous phoenix, cosmic, darkness, epic, cinematic, moonlight, stars, high - definition, texture,Oscar-Claude Monet"</td>
<td width=25% align="center">"a yellow house at the edge of the danish fjord, in the style of eiko ojala, ingrid baars, ad posters, mountainous vistas, george ault, realistic details, dark white and dark gray, 4k"</td>
<td width=25% align="center">"dragon fruit head, upper body, realistic, illustration by Joshua Hoffine Norman Rockwell, scary, creepy, biohacking, futurism, Zaha Hadid style"</td>
</tr>
<tr>
<td><img src="assets/photo_2.jpg" raw=true></td>
<td><img src="assets/photo_19.jpg"></td>
<td><img src="assets/photo_13.jpg"></td>
<td><img src="assets/photo_14.jpg"></td>
</tr>
<tr>
<td width=25% align="center">"Amazing playful nice cute strawberry character, dynamic poze, surreal fantazy garden background, gorgeous masterpice, award winning photo, soft natural lighting, 3d, Blender, Octane render, tilt - shift, deep field, colorful, I can't believe how beautiful this is, colorful, cute and sweet baby - loved photo"</td>
<td width=25% align="center">"beautiful fairy-tale desert, in the sky a wave of sand merges with the milky way, stars, cosmism, digital art, 8k"</td>
<td width=25% align="center">"Car, mustang, movie, person, poster, car cover, person, in the style of alessandro gottardo, gold and cyan, gerald harvey jones, reflections, highly detailed illustrations, industrial urban scenes""</td>
<td width=25% align="center">"cloud in blue sky, a red lip, collage art, shuji terayama, dreamy objects, surreal, criterion collection, showa era, intricate details, mirror"</td>
</tr>
</table>
<hr>
## Authors
+ Vladimir Arkhipkin: [Github](https://github.com/oriBetelgeuse)
+ Anastasia Maltseva [Github](https://github.com/NastyaMittseva)
+ Andrei Filatov [Github](https://github.com/anvilarth),
+ Igor Pavlov: [Github](https://github.com/boomb0om)
+ Julia Agafonova
+ Arseniy Shakhmatov: [Github](https://github.com/cene555), [Blog](https://t.me/gradientdip)
+ Andrey Kuznetsov: [Github](https://github.com/kuznetsoffandrey), [Blog](https://t.me/complete_ai)
+ Denis Dimitrov: [Github](https://github.com/denndimitrov), [Blog](https://t.me/dendi_math_ai) | {"license": "apache-2.0", "pipeline_tag": "text-to-image", "inference": false} | Shaleen123/kandinksy-3 | null | [
"diffusers",
"safetensors",
"text-to-image",
"license:apache-2.0",
"diffusers:Kandinsky3Pipeline",
"region:us"
] | null | 2024-04-24T08:40:37+00:00 | [] | [] | TAGS
#diffusers #safetensors #text-to-image #license-apache-2.0 #diffusers-Kandinsky3Pipeline #region-us
| # Kandinsky-3: Text-to-image Diffusion Model

Post | Generate | Telegram-bot | [Report]
## Description:
Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, Kandinsky 3.0 incorporates more data, specifically data related to Russian culture, which allows it to generate pictures related to Russian culture. Furthermore, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively.
For more information (details of training, examples of generations), check out our post. The English version will be released in a couple of days.
## Architecture details:

Architecture consists of three parts:
+ Text encoder Flan-UL2 (encoder part) - 8.6B
+ Latent Diffusion U-Net - 3B
+ MoVQ encoder/decoder - 267M
## Models
We release our two models:
+ Base: Base text-to-image diffusion model. This model was trained over 2M steps on 400 A100
+ Inpainting: Inpainting version of the model. The model was initialized from final checkpoint of base model and trained 250k steps on 300 A100.
## Installing
Make sure to install 'diffusers' from main as well as Transformers, Accelerate
## How to use:
TODO
### Text-2-Image
### Image-2-Image
## Examples of generations
<hr>
<table class="center">
<tr>
<td><img src="assets/photo_8.jpg" raw=true></td>
<td><img src="assets/photo_15.jpg"></td>
<td><img src="assets/photo_16.jpg"></td>
<td><img src="assets/photo_17.jpg"></td>
</tr>
<tr>
<td width=25% align="center">"A beautiful landscape outdoors scene in the crochet knitting art style, drawing in style by Alfons Mucha"</td>
<td width=25% align="center">"gorgeous phoenix, cosmic, darkness, epic, cinematic, moonlight, stars, high - definition, texture,Oscar-Claude Monet"</td>
<td width=25% align="center">"a yellow house at the edge of the danish fjord, in the style of eiko ojala, ingrid baars, ad posters, mountainous vistas, george ault, realistic details, dark white and dark gray, 4k"</td>
<td width=25% align="center">"dragon fruit head, upper body, realistic, illustration by Joshua Hoffine Norman Rockwell, scary, creepy, biohacking, futurism, Zaha Hadid style"</td>
</tr>
<tr>
<td><img src="assets/photo_2.jpg" raw=true></td>
<td><img src="assets/photo_19.jpg"></td>
<td><img src="assets/photo_13.jpg"></td>
<td><img src="assets/photo_14.jpg"></td>
</tr>
<tr>
<td width=25% align="center">"Amazing playful nice cute strawberry character, dynamic poze, surreal fantazy garden background, gorgeous masterpice, award winning photo, soft natural lighting, 3d, Blender, Octane render, tilt - shift, deep field, colorful, I can't believe how beautiful this is, colorful, cute and sweet baby - loved photo"</td>
<td width=25% align="center">"beautiful fairy-tale desert, in the sky a wave of sand merges with the milky way, stars, cosmism, digital art, 8k"</td>
<td width=25% align="center">"Car, mustang, movie, person, poster, car cover, person, in the style of alessandro gottardo, gold and cyan, gerald harvey jones, reflections, highly detailed illustrations, industrial urban scenes""</td>
<td width=25% align="center">"cloud in blue sky, a red lip, collage art, shuji terayama, dreamy objects, surreal, criterion collection, showa era, intricate details, mirror"</td>
</tr>
</table>
<hr>
## Authors
+ Vladimir Arkhipkin: Github
+ Anastasia Maltseva Github
+ Andrei Filatov Github,
+ Igor Pavlov: Github
+ Julia Agafonova
+ Arseniy Shakhmatov: Github, Blog
+ Andrey Kuznetsov: Github, Blog
+ Denis Dimitrov: Github, Blog | [
"# Kandinsky-3: Text-to-image Diffusion Model\n\n\n\nPost | Generate | Telegram-bot | [Report]",
"## Description:\n\nKandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, Kandinsky 3.0 incorporates more data and specifically related to Russian culture, which allows to generate pictures related to Russin culture. Furthermore, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively.\n\nFor more information: details of training, example of generations check out our post. The english version will be released in a couple of days.",
"## Architecture details:\n\n\n\n\n\nArchitecture consists of three parts:\n\n+ Text encoder Flan-UL2 (encoder part) - 8.6B\n+ Latent Diffusion U-Net - 3B\n+ MoVQ encoder/decoder - 267M",
"## Models\n\nWe release our two models:\n\n+ Base: Base text-to-image diffusion model. This model was trained over 2M steps on 400 A100\n+ Inpainting: Inpainting version of the model. The model was initialized from final checkpoint of base model and trained 250k steps on 300 A100.",
"## Installing\n\nMake sure to install 'diffusers' from main as well as Transformers, Accelerate",
"## How to use:\n\nTODO",
"### Text-2-Image",
"### Image-2-Image",
"## Examples of generations\n\n<hr>\n\n<table class=\"center\">\n<tr>\n <td><img src=\"assets/photo_8.jpg\" raw=true></td>\n <td><img src=\"assets/photo_15.jpg\"></td>\n <td><img src=\"assets/photo_16.jpg\"></td>\n <td><img src=\"assets/photo_17.jpg\"></td>\n</tr>\n<tr>\n <td width=25% align=\"center\">\"A beautiful landscape outdoors scene in the crochet knitting art style, drawing in style by Alfons Mucha\"</td>\n <td width=25% align=\"center\">\"gorgeous phoenix, cosmic, darkness, epic, cinematic, moonlight, stars, high - definition, texture,Oscar-Claude Monet\"</td>\n <td width=25% align=\"center\">\"a yellow house at the edge of the danish fjord, in the style of eiko ojala, ingrid baars, ad posters, mountainous vistas, george ault, realistic details, dark white and dark gray, 4k\"</td>\n <td width=25% align=\"center\">\"dragon fruit head, upper body, realistic, illustration by Joshua Hoffine Norman Rockwell, scary, creepy, biohacking, futurism, Zaha Hadid style\"</td>\n</tr>\n<tr>\n <td><img src=\"assets/photo_2.jpg\" raw=true></td>\n <td><img src=\"assets/photo_19.jpg\"></td>\n <td><img src=\"assets/photo_13.jpg\"></td>\n <td><img src=\"assets/photo_14.jpg\"></td>\n</tr>\n<tr>\n <td width=25% align=\"center\">\"Amazing playful nice cute strawberry character, dynamic poze, surreal fantazy garden background, gorgeous masterpice, award winning photo, soft natural lighting, 3d, Blender, Octane render, tilt - shift, deep field, colorful, I can't believe how beautiful this is, colorful, cute and sweet baby - loved photo\"</td>\n <td width=25% align=\"center\">\"beautiful fairy-tale desert, in the sky a wave of sand merges with the milky way, stars, cosmism, digital art, 8k\"</td>\n <td width=25% align=\"center\">\"Car, mustang, movie, person, poster, car cover, person, in the style of alessandro gottardo, gold and cyan, gerald harvey jones, reflections, highly detailed illustrations, industrial urban scenes\"\"</td>\n <td width=25% align=\"center\">\"cloud in blue sky, a red lip, collage art, shuji terayama, dreamy objects, surreal, criterion collection, showa era, intricate details, mirror\"</td>\n</tr>\n\n</table>\n\n<hr>",
"## Authors\n\n+ Vladimir Arkhipkin: Github\n+ Anastasia Maltseva Github\n+ Andrei Filatov Github, \n+ Igor Pavlov: Github\n+ Julia Agafonova \n+ Arseniy Shakhmatov: Github, Blog\n+ Andrey Kuznetsov: Github, Blog\n+ Denis Dimitrov: Github, Blog"
] | [
"TAGS\n#diffusers #safetensors #text-to-image #license-apache-2.0 #diffusers-Kandinsky3Pipeline #region-us \n",
"# Kandinsky-3: Text-to-image Diffusion Model\n\n\n\nPost | Generate | Telegram-bot | [Report]",
"## Description:\n\nKandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, Kandinsky 3.0 incorporates more data and specifically related to Russian culture, which allows to generate pictures related to Russin culture. Furthermore, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively.\n\nFor more information: details of training, example of generations check out our post. The english version will be released in a couple of days.",
"## Architecture details:\n\n\n\n\n\nArchitecture consists of three parts:\n\n+ Text encoder Flan-UL2 (encoder part) - 8.6B\n+ Latent Diffusion U-Net - 3B\n+ MoVQ encoder/decoder - 267M",
"## Models\n\nWe release our two models:\n\n+ Base: Base text-to-image diffusion model. This model was trained over 2M steps on 400 A100\n+ Inpainting: Inpainting version of the model. The model was initialized from final checkpoint of base model and trained 250k steps on 300 A100.",
"## Installing\n\nMake sure to install 'diffusers' from main as well as Transformers, Accelerate",
"## How to use:\n\nTODO",
"### Text-2-Image",
"### Image-2-Image",
"## Examples of generations\n\n<hr>\n\n<table class=\"center\">\n<tr>\n <td><img src=\"assets/photo_8.jpg\" raw=true></td>\n <td><img src=\"assets/photo_15.jpg\"></td>\n <td><img src=\"assets/photo_16.jpg\"></td>\n <td><img src=\"assets/photo_17.jpg\"></td>\n</tr>\n<tr>\n <td width=25% align=\"center\">\"A beautiful landscape outdoors scene in the crochet knitting art style, drawing in style by Alfons Mucha\"</td>\n <td width=25% align=\"center\">\"gorgeous phoenix, cosmic, darkness, epic, cinematic, moonlight, stars, high - definition, texture,Oscar-Claude Monet\"</td>\n <td width=25% align=\"center\">\"a yellow house at the edge of the danish fjord, in the style of eiko ojala, ingrid baars, ad posters, mountainous vistas, george ault, realistic details, dark white and dark gray, 4k\"</td>\n <td width=25% align=\"center\">\"dragon fruit head, upper body, realistic, illustration by Joshua Hoffine Norman Rockwell, scary, creepy, biohacking, futurism, Zaha Hadid style\"</td>\n</tr>\n<tr>\n <td><img src=\"assets/photo_2.jpg\" raw=true></td>\n <td><img src=\"assets/photo_19.jpg\"></td>\n <td><img src=\"assets/photo_13.jpg\"></td>\n <td><img src=\"assets/photo_14.jpg\"></td>\n</tr>\n<tr>\n <td width=25% align=\"center\">\"Amazing playful nice cute strawberry character, dynamic poze, surreal fantazy garden background, gorgeous masterpice, award winning photo, soft natural lighting, 3d, Blender, Octane render, tilt - shift, deep field, colorful, I can't believe how beautiful this is, colorful, cute and sweet baby - loved photo\"</td>\n <td width=25% align=\"center\">\"beautiful fairy-tale desert, in the sky a wave of sand merges with the milky way, stars, cosmism, digital art, 8k\"</td>\n <td width=25% align=\"center\">\"Car, mustang, movie, person, poster, car cover, person, in the style of alessandro gottardo, gold and cyan, gerald harvey jones, reflections, highly detailed illustrations, industrial urban scenes\"\"</td>\n <td width=25% align=\"center\">\"cloud in blue sky, a red lip, collage art, shuji terayama, dreamy objects, surreal, criterion collection, showa era, intricate details, mirror\"</td>\n</tr>\n\n</table>\n\n<hr>",
"## Authors\n\n+ Vladimir Arkhipkin: Github\n+ Anastasia Maltseva Github\n+ Andrei Filatov Github, \n+ Igor Pavlov: Github\n+ Julia Agafonova \n+ Arseniy Shakhmatov: Github, Blog\n+ Andrey Kuznetsov: Github, Blog\n+ Denis Dimitrov: Github, Blog"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-4
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
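The card ships no usage snippet; a minimal hedged sketch for loading it as the sequence classifier its tags declare (the input format expected by the PasswordMatch task is an assumption):

```python
# Hedged sketch: the tags mark this as a GPTNeoX text-classification model.
# The concrete PasswordMatch input format is not documented here.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "AlignmentResearch/robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-4"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

logits = model(**tokenizer("example input", return_tensors="pt")).logits
print(logits.argmax(-1).item())
```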
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-4", "results": []}]} | AlignmentResearch/robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-4 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:41:11+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-4
This model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# StableLM-Tuned-Alpha
## Model Description
`StableLM-Tuned-Alpha` is a suite of 3B and 7B parameter decoder-only language models built on top of the `StableLM-Base-Alpha` models and further fine-tuned on various chat and instruction-following datasets.
## Usage
Get started chatting with `StableLM-Tuned-Alpha` by using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList
tokenizer = AutoTokenizer.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model = AutoModelForCausalLM.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model.half().cuda()
class StopOnTokens(StoppingCriteria):
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        stop_ids = [50278, 50279, 50277, 1, 0]  # chat special tokens plus end-of-text ids
for stop_id in stop_ids:
if input_ids[0][-1] == stop_id:
return True
return False
system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""
prompt = f"{system_prompt}<|USER|>What's your mood today?<|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
tokens = model.generate(
**inputs,
max_new_tokens=64,
temperature=0.7,
do_sample=True,
stopping_criteria=StoppingCriteriaList([StopOnTokens()])
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
StableLM Tuned should be used with prompts formatted to `<|SYSTEM|>...<|USER|>...<|ASSISTANT|>...`
The system prompt is
```
<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: StableLM-Tuned-Alpha models are auto-regressive language models based on the NeoX transformer architecture.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints (`StableLM-Tuned-Alpha`) are licensed under the Non-Commercial Creative Commons license ([CC BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)), in-line with the original non-commercial license specified by [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).
* **Contact**: For questions and comments about the model, please email `[email protected]`
## Training
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|------------|-------------|--------|-------|-----------------|
| 3B | 4096 | 16 | 32 | 4096 |
| 7B | 6144 | 16 | 48 | 4096 |
### Training Dataset
`StableLM-Tuned-Alpha` models are fine-tuned on a combination of five datasets:
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine.
[GPT4All Prompt Generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), which consists of 400k prompts and responses generated by GPT-4;
[Anthropic HH](https://huggingface.co/datasets/Dahoas/full-hh-rlhf), made up of preferences about AI assistant helpfulness and harmlessness;
[DataBricks Dolly](https://github.com/databrickslabs/dolly), comprising 15k instruction/responses generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization;
and [ShareGPT Vicuna (English subset)](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), a dataset of conversations retrieved from [ShareGPT](https://sharegpt.com/).
### Training Procedure
Models are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (FP16), and optimized with AdamW. We outline the following hyperparameters:
| Parameters | Batch Size | Learning Rate | Warm-up | Weight Decay | Betas |
|------------|------------|---------------|---------|--------------|-------------|
| 3B | 256 | 2e-5 | 50 | 0.01 | (0.9, 0.99) |
| 7B | 128 | 2e-5 | 100 | 0.01 | (0.9, 0.99) |
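The optimizer columns in the table translate to a standard PyTorch AdamW configuration; a minimal sketch for the 7B row (the tiny placeholder network stands in for the real model):

```python
# Hedged sketch of the reported optimizer settings (7B row), not the
# authors' training code. The Linear layer is a stand-in for the model.
import torch

model = torch.nn.Linear(8, 8)  # placeholder network
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-5,
    betas=(0.9, 0.99),
    weight_decay=0.01,
)
# A linear warm-up over the first 100 steps would be layered on top via a
# learning-rate scheduler, per the Warm-up column.
```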
## Use and Limitations
### Intended Use
These models are intended to be used by the open-source community for chat-like applications in adherence with the [CC BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
### Limitations and bias
Although the aforementioned datasets help to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use responsibly.
## Acknowledgements
This work would not have been possible without the helpful hand of Dakota Mahan ([@dmayhem93](https://huggingface.co/dmayhem93)).
## Citations
```bibtex
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```bibtext
@misc{vicuna2023,
title = {Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality},
url = {https://vicuna.lmsys.org},
author = {Chiang, Wei-Lin and Li, Zhuohan and Lin, Zi and Sheng, Ying and Wu, Zhanghao and Zhang, Hao and Zheng, Lianmin and Zhuang, Siyuan and Zhuang, Yonghao and Gonzalez, Joseph E. and Stoica, Ion and Xing, Eric P.},
month = {March},
year = {2023}
}
```
```bibtex
@misc{gpt4all,
author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```
| {"language": ["en"], "license": "cc-by-nc-sa-4.0", "tags": ["causal-lm"], "datasets": ["dmayhem93/ChatCombined", "tatsu-lab/alpaca", "nomic-ai/gpt4all_prompt_generations", "Dahoas/full-hh-rlhf", "jeffwan/sharegpt_vicuna", "HuggingFaceH4/databricks_dolly_15k"]} | titanbot/ct2-int8-stablelm-7b | null | [
"transformers",
"gpt_neox",
"text-generation",
"causal-lm",
"en",
"dataset:dmayhem93/ChatCombined",
"dataset:tatsu-lab/alpaca",
"dataset:nomic-ai/gpt4all_prompt_generations",
"dataset:Dahoas/full-hh-rlhf",
"dataset:jeffwan/sharegpt_vicuna",
"dataset:HuggingFaceH4/databricks_dolly_15k",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:41:13+00:00 | [] | [
"en"
] | TAGS
#transformers #gpt_neox #text-generation #causal-lm #en #dataset-dmayhem93/ChatCombined #dataset-tatsu-lab/alpaca #dataset-nomic-ai/gpt4all_prompt_generations #dataset-Dahoas/full-hh-rlhf #dataset-jeffwan/sharegpt_vicuna #dataset-HuggingFaceH4/databricks_dolly_15k #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| StableLM-Tuned-Alpha
====================
Model Description
-----------------
'StableLM-Tuned-Alpha' is a suite of 3B and 7B parameter decoder-only language models built on top of the 'StableLM-Base-Alpha' models and further fine-tuned on various chat and instruction-following datasets.
Usage
-----
Get started chatting with 'StableLM-Tuned-Alpha' by using the following code snippet:
StableLM Tuned should be used with prompts formatted to '<|SYSTEM|>...<|USER|>...<|ASSISTANT|>...'
The system prompt is
Model Details
-------------
* Developed by: Stability AI
* Model type: StableLM-Tuned-Alpha models are auto-regressive language models based on the NeoX transformer architecture.
* Language(s): English
* Library: HuggingFace Transformers
* License: Fine-tuned checkpoints ('StableLM-Tuned-Alpha') are licensed under the Non-Commercial Creative Commons license (CC BY-NC-SA-4.0), in-line with the original non-commercial license specified by Stanford Alpaca.
* Contact: For questions and comments about the model, please email 'lm@URL'
Training
--------
### Training Dataset
'StableLM-Tuned-Alpha' models are fine-tuned on a combination of five datasets:
Alpaca, a dataset of 52,000 instructions and demonstrations generated by OpenAI's 'text-davinci-003' engine.
GPT4All Prompt Generations, which consists of 400k prompts and responses generated by GPT-4;
Anthropic HH, made up of preferences about AI assistant helpfulness and harmlessness;
DataBricks Dolly, comprising 15k instruction/responses generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization;
and ShareGPT Vicuna (English subset), a dataset of conversations retrieved from ShareGPT.
### Training Procedure
Models are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (FP16), and optimized with AdamW. We outline the following hyperparameters:
Use and Limitations
-------------------
### Intended Use
These models are intended to be used by the open-source community chat-like applications in adherence with the CC BY-NC-SA-4.0 license.
### Limitations and bias
Although the aforementioned datasets help to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use responsibly.
Acknowledgements
----------------
This work would not have been possible without the helpful hand of Dakota Mahan (@dmayhem93).
| [
"### Training Dataset\n\n\n'StableLM-Tuned-Alpha' models are fine-tuned on a combination of five datasets:\nAlpaca, a dataset of 52,000 instructions and demonstrations generated by OpenAI's 'text-davinci-003' engine.\nGPT4All Prompt Generations, which consists of 400k prompts and responses generated by GPT-4;\nAnthropic HH, made up of preferences about AI assistant helpfulness and harmlessness;\nDataBricks Dolly, comprising 15k instruction/responses generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization;\nand ShareGPT Vicuna (English subset), a dataset of conversations retrieved from ShareGPT.",
"### Training Procedure\n\n\nModels are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (FP16), and optimized with AdamW. We outline the following hyperparameters:\n\n\n\nUse and Limitations\n-------------------",
"### Intended Use\n\n\nThese models are intended to be used by the open-source community chat-like applications in adherence with the CC BY-NC-SA-4.0 license.",
"### Limitations and bias\n\n\nAlthough the aforementioned datasets help to steer the base language models into \"safer\" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use responsibly.\n\n\nAcknowledgements\n----------------\n\n\nThis work would not have been possible without the helpful hand of Dakota Mahan (@dmayhem93).\n\n\ns"
] | [
"TAGS\n#transformers #gpt_neox #text-generation #causal-lm #en #dataset-dmayhem93/ChatCombined #dataset-tatsu-lab/alpaca #dataset-nomic-ai/gpt4all_prompt_generations #dataset-Dahoas/full-hh-rlhf #dataset-jeffwan/sharegpt_vicuna #dataset-HuggingFaceH4/databricks_dolly_15k #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training Dataset\n\n\n'StableLM-Tuned-Alpha' models are fine-tuned on a combination of five datasets:\nAlpaca, a dataset of 52,000 instructions and demonstrations generated by OpenAI's 'text-davinci-003' engine.\nGPT4All Prompt Generations, which consists of 400k prompts and responses generated by GPT-4;\nAnthropic HH, made up of preferences about AI assistant helpfulness and harmlessness;\nDataBricks Dolly, comprising 15k instruction/responses generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization;\nand ShareGPT Vicuna (English subset), a dataset of conversations retrieved from ShareGPT.",
"### Training Procedure\n\n\nModels are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (FP16), and optimized with AdamW. We outline the following hyperparameters:\n\n\n\nUse and Limitations\n-------------------",
"### Intended Use\n\n\nThese models are intended to be used by the open-source community chat-like applications in adherence with the CC BY-NC-SA-4.0 license.",
"### Limitations and bias\n\n\nAlthough the aforementioned datasets help to steer the base language models into \"safer\" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use responsibly.\n\n\nAcknowledgements\n----------------\n\n\nThis work would not have been possible without the helpful hand of Dakota Mahan (@dmayhem93).\n\n\ns"
] |
text-generation | transformers |
# DoubleLlama3-8b-slerp
DoubleLlama3-8b-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: meta-llama/Meta-Llama-3-8B-Instruct
layer_range: [0, 32]
- model: meta-llama/Meta-Llama-3-8B-Instruct
layer_range: [0, 32]
merge_method: slerp
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
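For intuition, slerp (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line. A rough numpy sketch of the idea (an illustration, not mergekit's exact implementation):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between weight tensors v0 and v1 with factor t."""
    a, b = v0.ravel(), v1.ravel()
    cos_omega = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))  # angle between the tensors
    if omega < eps:  # nearly parallel: plain linear interpolation is fine
        return (1.0 - t) * v0 + t * v1
    s = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / s) * v0 + (np.sin(t * omega) / s) * v1
```

The per-filter `t` schedules in the config vary this factor across layers; since both parents here are the same checkpoint, the slerp should simply reproduce the input weights.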
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "llm-lover/DoubleLlama3-8b-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "meta-llama/Meta-Llama-3-8B-Instruct"], "base_model": ["meta-llama/Meta-Llama-3-8B-Instruct", "meta-llama/Meta-Llama-3-8B-Instruct"]} | llm-lover/DoubleLlama3-8b-slerp | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"meta-llama/Meta-Llama-3-8B-Instruct",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:41:30+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #meta-llama/Meta-Llama-3-8B-Instruct #conversational #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# DoubleLlama3-8b-slerp
DoubleLlama3-8b-slerp is a merge of the following models using LazyMergekit:
* meta-llama/Meta-Llama-3-8B-Instruct
* meta-llama/Meta-Llama-3-8B-Instruct
## Configuration
## Usage
| [
"# DoubleLlama3-8b-slerp\n\nDoubleLlama3-8b-slerp is a merge of the following models using LazyMergekit:\n* meta-llama/Meta-Llama-3-8B-Instruct\n* meta-llama/Meta-Llama-3-8B-Instruct",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #meta-llama/Meta-Llama-3-8B-Instruct #conversational #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# DoubleLlama3-8b-slerp\n\nDoubleLlama3-8b-slerp is a merge of the following models using LazyMergekit:\n* meta-llama/Meta-Llama-3-8B-Instruct\n* meta-llama/Meta-Llama-3-8B-Instruct",
"## Configuration",
"## Usage"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-3
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
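For reference, a minimal sketch of how these settings map onto the transformers `Trainer` API (an assumption-laden illustration; the card does not include the actual training script, and `num_labels` is a guess):

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained(
    "EleutherAI/pythia-14m", num_labels=2  # num_labels assumed, not stated in the card
)
args = TrainingArguments(
    output_dir="out",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=3,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default.
)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```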
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-3", "results": []}]} | AlignmentResearch/robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-3 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:42:28+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-3
This model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers |
# DavidAU/AntlerStar-RP-Q6_K-GGUF
This model was converted to GGUF format from [`Aratako/AntlerStar-RP`](https://huggingface.co/Aratako/AntlerStar-RP) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Aratako/AntlerStar-RP) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/AntlerStar-RP-Q6_K-GGUF --model antlerstar-rp.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/AntlerStar-RP-Q6_K-GGUF --model antlerstar-rp.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m antlerstar-rp.Q6_K.gguf -n 128
```
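If you prefer Python over the CLI, the same file can be loaded with the llama-cpp-python bindings (a minimal sketch; install with `pip install llama-cpp-python` first):

```python
from llama_cpp import Llama

# Path to the quantized file downloaded from this repo.
llm = Llama(model_path="antlerstar-rp.Q6_K.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```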
| {"language": ["ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge", "not-for-all-audiences", "nsfw", "llama-cpp", "gguf-my-repo"], "base_model": ["Aratako/Antler-7B-RP-v3", "Aratako/Japanese-Starling-ChatV-7B-RP", "senseable/WestLake-7B-v2", "SanjiWatsuki/Kunoichi-DPO-v2-7B", "SanjiWatsuki/Silicon-Maid-7B", "SanjiWatsuki/Loyal-Macaroni-Maid-7B"]} | DavidAU/AntlerStar-RP-Q6_K-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"not-for-all-audiences",
"nsfw",
"llama-cpp",
"gguf-my-repo",
"ja",
"base_model:Aratako/Antler-7B-RP-v3",
"base_model:Aratako/Japanese-Starling-ChatV-7B-RP",
"base_model:senseable/WestLake-7B-v2",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:SanjiWatsuki/Silicon-Maid-7B",
"base_model:SanjiWatsuki/Loyal-Macaroni-Maid-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:42:58+00:00 | [] | [
"ja"
] | TAGS
#transformers #gguf #mergekit #merge #not-for-all-audiences #nsfw #llama-cpp #gguf-my-repo #ja #base_model-Aratako/Antler-7B-RP-v3 #base_model-Aratako/Japanese-Starling-ChatV-7B-RP #base_model-senseable/WestLake-7B-v2 #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-SanjiWatsuki/Silicon-Maid-7B #base_model-SanjiWatsuki/Loyal-Macaroni-Maid-7B #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/AntlerStar-RP-Q6_K-GGUF
This model was converted to GGUF format from 'Aratako/AntlerStar-RP' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/AntlerStar-RP-Q6_K-GGUF\nThis model was converted to GGUF format from 'Aratako/AntlerStar-RP' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #mergekit #merge #not-for-all-audiences #nsfw #llama-cpp #gguf-my-repo #ja #base_model-Aratako/Antler-7B-RP-v3 #base_model-Aratako/Japanese-Starling-ChatV-7B-RP #base_model-senseable/WestLake-7B-v2 #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-SanjiWatsuki/Silicon-Maid-7B #base_model-SanjiWatsuki/Loyal-Macaroni-Maid-7B #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/AntlerStar-RP-Q6_K-GGUF\nThis model was converted to GGUF format from 'Aratako/AntlerStar-RP' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_fss1000
This model is a fine-tuned version of [facebook/detr-resnet-101](https://huggingface.co/facebook/detr-resnet-101) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
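Once fine-tuned, a checkpoint like this is typically used for inference roughly as follows (a sketch assuming the standard transformers object-detection API; the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

repo = "Khanmhmdi/detr-resnet-50_finetuned_fss1000"
processor = AutoImageProcessor.from_pretrained(repo)
model = DetrForObjectDetection.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into thresholded detections in pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```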
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/detr-resnet-101", "model-index": [{"name": "detr-resnet-50_finetuned_fss1000", "results": []}]} | Khanmhmdi/detr-resnet-50_finetuned_fss1000 | null | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-101",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:43:15+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-101 #license-apache-2.0 #endpoints_compatible #region-us
|
# detr-resnet-50_finetuned_fss1000
This model is a fine-tuned version of facebook/detr-resnet-101 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# detr-resnet-50_finetuned_fss1000\n\nThis model is a fine-tuned version of facebook/detr-resnet-101 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 12\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-101 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# detr-resnet-50_finetuned_fss1000\n\nThis model is a fine-tuned version of facebook/detr-resnet-101 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 12\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
<br/><br/>
8bpw/h8 exl2 quantization of [xxx777xxxASD/ChaoticSoliloquy-4x8B](https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B) using the default exllamav2 calibration dataset.
---
**ORIGINAL CARD:**

(Maybe I'll change the waifu picture later)
Experimental RP-oriented MoE; the idea was to get a model that would be equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.
[GGUF, Exl2](https://huggingface.co/collections/xxx777xxxASD/chaoticsoliloquy-4x8b-6628a759b5a60d8d3f51ed62)
### ChaoticSoliloquy-4x8B
```
base_model: jeiku_Chaos_RP_l3_8B
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
- source_model: ChaoticNeutrals_Poppy_Porpoise-v0.6-L3-8B
- source_model: jeiku_Chaos_RP_l3_8B
- source_model: openlynn_Llama-3-Soliloquy-8B
- source_model: Sao10K_L3-Solana-8B-v1
```
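For intuition about `experts_per_token: 2`: at each token the router scores all experts and only the two highest-scoring ones contribute, weighted by the renormalized router probabilities. A toy sketch of that routing step (illustrative only; a real implementation runs just the selected experts):

```python
import torch

def top2_route(router_logits: torch.Tensor, expert_outputs: torch.Tensor) -> torch.Tensor:
    """router_logits: (tokens, n_experts); expert_outputs: (n_experts, tokens, dim)."""
    weights, idx = torch.topk(router_logits, k=2, dim=-1)   # choose 2 of the 4 experts per token
    weights = torch.softmax(weights, dim=-1)                # renormalize over the chosen two
    rows = torch.arange(router_logits.size(0)).unsqueeze(-1)
    picked = expert_outputs[idx, rows]                      # (tokens, 2, dim)
    return (weights.unsqueeze(-1) * picked).sum(dim=1)      # weighted sum -> (tokens, dim)
```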
## Models used
- [ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B)
- [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B)
- [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B)
- [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)
## Vision
[llama3_mmproj](https://huggingface.co/ChaoticNeutrals/Llava_1.5_Llama3_mmproj)

## Prompt format: Llama 3 | {"language": ["en"], "license": "llama3", "tags": ["moe"]} | JayhC/ChaoticSoliloquy-4x8B-8bpw-h8-exl2 | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-24T08:44:00+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mixtral #text-generation #moe #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
|
<br/><br/>
8bpw/h8 exl2 quantization of xxx777xxxASD/ChaoticSoliloquy-4x8B using the default exllamav2 calibration dataset.
---
ORIGINAL CARD:
!image/png
(Maybe I'll change the waifu picture later)
Experimental RP-oriented MoE; the idea was to get a model that would be equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.
GGUF, Exl2
### ChaoticSoliloquy-4x8B
## Models used
- ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B
- jeiku/Chaos_RP_l3_8B
- openlynn/Llama-3-Soliloquy-8B
- Sao10K/L3-Solana-8B-v1
## Vision
llama3_mmproj
!image/png
## Prompt format: Llama 3 | [
"### ChaoticSoliloquy-4x8B",
"## Models used\n\n- ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B\n- jeiku/Chaos_RP_l3_8B\n- openlynn/Llama-3-Soliloquy-8B\n- Sao10K/L3-Solana-8B-v1",
"## Vision\n\nllama3_mmproj\n!image/png",
"## Prompt format: Llama 3"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #moe #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### ChaoticSoliloquy-4x8B",
"## Models used\n\n- ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B\n- jeiku/Chaos_RP_l3_8B\n- openlynn/Llama-3-Soliloquy-8B\n- Sao10K/L3-Solana-8B-v1",
"## Vision\n\nllama3_mmproj\n!image/png",
"## Prompt format: Llama 3"
] |
null | transformers |
# Function Calling and Tool Use LLaMA Models
This repository contains two main versions of LLaMA models fine-tuned for function calling and tool use capabilities:
1. Fine-tuned version of the `LLama3-8b-instruct` model
2. `tinyllama` - a smaller model version
For each version, the following variants are available:
- 16-bit quantized model
- 4-bit quantized model
- GGFU format for use with llama.cpp
## Dataset
The models were fine-tuned using a modified version of the `ilacai/glaive-function-calling-v2-sharegpt` dataset, which can be found at [unclecode/glaive-function-calling-llama3](https://huggingface.co/datasets/unclecode/glaive-function-calling-llama3).
## Usage
To learn how to use these models, refer to the Colab notebook: [](https://tinyurl.com/ucfllm)
This is the first version of the models, and work is in progress to further train them with multi-tool detection and native tool binding support.
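Until then, the general pattern looks roughly like the sketch below: the tool schema goes into the system prompt and the model is expected to emit a structured call. This is an illustration only; the exact template and output format are defined in the notebook above, and the `get_weather` function is hypothetical:

```python
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "unclecode/llama3-function-call-16bit-240424"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

tools = [{  # hypothetical tool schema, for illustration only
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]
messages = [
    {"role": "system", "content": "You have access to these functions:\n" + json.dumps(tools)},
    {"role": "user", "content": "What's the weather in Berlin?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```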
## Library and Tools Support
A library is being developed to manage tools and add tool support for major LLMs, regardless of their built-in capabilities. You can find examples and contribute to the library at the following repository:
[https://github.com/unclecode/fllm](https://github.com/unclecode/fllm)
Please open an issue in the repository for any bugs or collaboration requests.
## Other Models
Here are links to other related models:
- [unclecode/llama3-function-call-lora-adapter-240424](https://huggingface.co/unclecode/llama3-function-call-lora-adapter-240424)
- [unclecode/llama3-function-call-16bit-240424](https://huggingface.co/unclecode/llama3-function-call-16bit-240424)
- [unclecode/llama3-function-call-4bit-240424](https://huggingface.co/unclecode/llama3-function-call-4bit-240424)
- [unclecode/llama3-function-call-Q4_K_M_GGFU-240424](https://huggingface.co/unclecode/llama3-function-call-Q4_K_M_GGFU-240424)
- [unclecode/tinyllama-function-call-lora-adapter-250424](https://huggingface.co/unclecode/tinyllama-function-call-lora-adapter-250424)
- [unclecode/tinyllama-function-call-16bit-250424](https://huggingface.co/unclecode/tinyllama-function-call-16bit-250424)
- [unclecode/tinyllama-function-call-Q4_K_M_GGFU-250424](https://huggingface.co/unclecode/tinyllama-function-call-Q4_K_M_GGFU-250424)
## License
These models are released under the Apache 2.0 license.
# Uploaded model
- **Developed by:** unclecode
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "function calling", "tool use", "llama", "llama3", "tinyllama", "instruct-tuned", "4-bit quantization", "ggfu"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | unclecode/llama3-function-call-Q4_K_M_GGFU-240424 | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"function calling",
"tool use",
"llama3",
"tinyllama",
"instruct-tuned",
"4-bit quantization",
"ggfu",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:44:13+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #trl #function calling #tool use #llama3 #tinyllama #instruct-tuned #4-bit quantization #ggfu #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Function Calling and Tool Use LLaMA Models
This repository contains two main versions of LLaMA models fine-tuned for function calling and tool use capabilities:
1. Fine-tuned version of the 'LLama3-8b-instruct' model
2. 'tinyllama' - a smaller model version
For each version, the following variants are available:
- 16-bit quantized model
- 4-bit quantized model
- GGFU format for use with URL
## Dataset
The models were fine-tuned using a modified version of the 'ilacai/glaive-function-calling-v2-sharegpt' dataset, which can be found at unclecode/glaive-function-calling-llama3.
## Usage
To learn how to use these models, refer to the Colab notebook: 
- **Paper:** [2403.15377](https://arxiv.org/abs/2403.15377)
- **Point of Contact:** mailto:[InternVideo Group]([email protected])
## Citation
If you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
```
@article{wang2024internvideo2,
title={InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding},
author={Wang, Yi and Li, Kunchang and Li, Xinhao and Yu, Jiashuo and He, Yinan and Chen, Guo and Pei, Baoqi and Zheng, Rongkun and Xu, Jilan and Wang, Zun and others},
journal={arXiv preprint arXiv:2403.15377},
year={2024}
}
@article{wang2022internvideo,
title={InternVideo: General Video Foundation Models via Generative and Discriminative Learning},
author={Wang, Yi and Li, Kunchang and Li, Yizhuo and He, Yinan and Huang, Bingkun and Zhao, Zhiyu and Zhang, Hongjie and Xu, Jilan and Liu, Yi and Wang, Zun and Xing, Sen and Chen, Guo and Pan, Junting and Yu, Jiashuo and Wang, Yali and Wang, Limin and Qiao, Yu},
journal={arXiv preprint arXiv:2212.03191},
year={2022}
}
``` | {"license": "apache-2.0", "extra_gated_prompt": "You agree to not use the model to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Name": "text", "Company/Organization": "text", "Country": "text", "E-Mail": "text"}} | OpenGVLab/InternVideo2-CLIP-1B-224p-f8 | null | [
"arxiv:2403.15377",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T08:46:52+00:00 | [
"2403.15377"
] | [] | TAGS
#arxiv-2403.15377 #license-apache-2.0 #region-us
|
# Model Card for InternVideo2
This modelcard aims to give the model info of 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.
## Model Details
### Model Sources
- Repository: InternVideo2
- Paper: 2403.15377
- Point of Contact: mailto:InternVideo Group
If you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
| [
"# Model Card for InternVideo2\n\nThis modelcard aims to give the model info of 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.",
"## Model Details",
"### Model Sources\n\n- Repository: InternVideo2\n- Paper: 2403.15377\n- Point of Contact: mailto:InternVideo Group\n\nIf you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community."
] | [
"TAGS\n#arxiv-2403.15377 #license-apache-2.0 #region-us \n",
"# Model Card for InternVideo2\n\nThis modelcard aims to give the model info of 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.",
"## Model Details",
"### Model Sources\n\n- Repository: InternVideo2\n- Paper: 2403.15377\n- Point of Contact: mailto:InternVideo Group\n\nIf you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community."
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PolizzeDonut-UltimaProvaCluster-Cluster6di7-5epochs
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
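After training, a Donut checkpoint of this kind is typically queried with a task prompt plus autoregressive decoding; a rough sketch assuming the standard Donut API (the task token and image path are placeholders, and the processor is assumed to have been saved with the checkpoint):

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "tedad09/PolizzeDonut-UltimaProvaCluster-Cluster6di7-5epochs"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("policy_page.png").convert("RGB")  # placeholder document image
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_cord-v2>"  # placeholder task token; use whatever this fine-tune was trained with
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    out = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(out)[0])
```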
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-UltimaProvaCluster-Cluster6di7-5epochs", "results": []}]} | tedad09/PolizzeDonut-UltimaProvaCluster-Cluster6di7-5epochs | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:46:54+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
|
# PolizzeDonut-UltimaProvaCluster-Cluster6di7-5epochs
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# PolizzeDonut-UltimaProvaCluster-Cluster6di7-5epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n",
"# PolizzeDonut-UltimaProvaCluster-Cluster6di7-5epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# MultiverseBuddy-15B-MoE
MultiverseBuddy-15B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/MultiverseEx26-7B-slerp](https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp)
* [OpenBuddy/openbuddy-mistral2-7b-v20.2-32k](https://huggingface.co/OpenBuddy/openbuddy-mistral2-7b-v20.2-32k)
## 🧩 Configuration
```yaml
base_model: allknowingroger/MultiverseEx26-7B-slerp
experts:
- source_model: allknowingroger/MultiverseEx26-7B-slerp
positive_prompts: ["what"]
- source_model: OpenBuddy/openbuddy-mistral2-7b-v20.2-32k
positive_prompts: ["think"]
```
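The `positive_prompts` seed each expert's router: roughly speaking, the prompts are embedded with the base model and the resulting hidden states initialize the gate vectors, so tokens resembling a prompt are routed to its expert. A very rough sketch of that idea (an intuition aid only, not mergekit's actual code):

```python
import torch
from transformers import AutoModel, AutoTokenizer

base = "allknowingroger/MultiverseEx26-7B-slerp"
tok = AutoTokenizer.from_pretrained(base)
enc = AutoModel.from_pretrained(base, torch_dtype=torch.float16)  # heavy: loads the full 7B base

def prompt_vector(text: str) -> torch.Tensor:
    """Mean last-layer hidden state of a positive prompt -> candidate gate vector."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**ids).last_hidden_state  # (1, seq_len, hidden_dim)
    return hidden.mean(dim=1).squeeze(0)       # (hidden_dim,)

gates = torch.stack([prompt_vector("what"), prompt_vector("think")])  # one row per expert
```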
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/MultiverseBuddy-15B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "allknowingroger/MultiverseEx26-7B-slerp", "OpenBuddy/openbuddy-mistral2-7b-v20.2-32k"], "base_model": ["allknowingroger/MultiverseEx26-7B-slerp", "OpenBuddy/openbuddy-mistral2-7b-v20.2-32k"]} | allknowingroger/MultiverseBuddy-15B-MoE | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/MultiverseEx26-7B-slerp",
"OpenBuddy/openbuddy-mistral2-7b-v20.2-32k",
"base_model:allknowingroger/MultiverseEx26-7B-slerp",
"base_model:OpenBuddy/openbuddy-mistral2-7b-v20.2-32k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:47:53+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #OpenBuddy/openbuddy-mistral2-7b-v20.2-32k #base_model-allknowingroger/MultiverseEx26-7B-slerp #base_model-OpenBuddy/openbuddy-mistral2-7b-v20.2-32k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# MultiverseBuddy-15B-MoE
MultiverseBuddy-15B-MoE is a Mixture of Experts (MoE) made with the following models using LazyMergekit:
* allknowingroger/MultiverseEx26-7B-slerp
* OpenBuddy/openbuddy-mistral2-7b-v20.2-32k
## Configuration
## Usage
| [
"# MultiverseBuddy-15B-MoE\n\nMultiverseBuddy-15B-MoE is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* allknowingroger/MultiverseEx26-7B-slerp\n* OpenBuddy/openbuddy-mistral2-7b-v20.2-32k",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #OpenBuddy/openbuddy-mistral2-7b-v20.2-32k #base_model-allknowingroger/MultiverseEx26-7B-slerp #base_model-OpenBuddy/openbuddy-mistral2-7b-v20.2-32k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# MultiverseBuddy-15B-MoE\n\nMultiverseBuddy-15B-MoE is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* allknowingroger/MultiverseEx26-7B-slerp\n* OpenBuddy/openbuddy-mistral2-7b-v20.2-32k",
"## Configuration",
"## Usage"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/meraGPT/mera-mix-4x7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
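For example, any single quant from the table below can be fetched programmatically with huggingface_hub (a minimal sketch; swap in whichever filename you want):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/mera-mix-4x7B-i1-GGUF",
    filename="mera-mix-4x7B.i1-Q4_K_M.gguf",  # the "fast, recommended" pick from the table
)
print(path)  # local cache path, ready to pass to llama.cpp
```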
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-Q2_K.gguf) | i1-Q2_K | 8.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-IQ3_S.gguf) | i1-IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-IQ3_M.gguf) | i1-IQ3_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-Q4_0.gguf) | i1-Q4_0 | 13.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF/resolve/main/mera-mix-4x7B.i1-Q6_K.gguf) | i1-Q6_K | 19.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "meraGPT/mera-mix-4x7B", "quantized_by": "mradermacher"} | mradermacher/mera-mix-4x7B-i1-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:meraGPT/mera-mix-4x7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:48:34+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-meraGPT/mera-mix-4x7B #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-meraGPT/mera-mix-4x7B #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null | null |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {} | RyanJBishop/Interpretator_of_PDFs | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2024-04-24T08:49:00+00:00 | [
"1910.09700"
] | [] | TAGS
#arxiv-1910.09700 #region-us
|
# Model Card for Model ID
This modelcard aims to be a base template for new models. It has been generated using this raw template.
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#arxiv-1910.09700 #region-us \n",
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
[notebook](https://gist.github.com/shake/81769398fe0b6cf9affcfc7f6b60e475)
The notebook has detailed instructions and is intended for learning and practice.
If you use it, replace the wandb and huggingface keys with your own; I added mine as Colab secrets.
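A minimal sketch of wiring those keys up in Colab (the secret names are assumptions; use whatever you stored them under):

```python
from google.colab import userdata
from huggingface_hub import login
import wandb

wandb.login(key=userdata.get("WANDB_API_KEY"))  # your wandb secret in Colab
login(token=userdata.get("HF_TOKEN"))           # your Hugging Face token secret
```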
| {"library_name": "transformers", "tags": []} | chenshake/Meta-Llama-3-8B-Orpo | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:49:41+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
notebook
The notebook has detailed instructions and is intended for learning and practice.
If you use it, replace the wandb and huggingface keys with your own; I added mine as Colab secrets.
| [
"# Model Card for Model ID\n\n\n\nnotebook\n\n有详细的说明。学习使用。\n\n如果使用,你需要更换你的wandb,和huggingface的密钥。我是在colab 密钥添加。"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID\n\n\n\nnotebook\n\n有详细的说明。学习使用。\n\n如果使用,你需要更换你的wandb,和huggingface的密钥。我是在colab 密钥添加。"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "openai/whisper-small"} | ygaci/whisper-small-fr_common_voice_16_new | null | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-small",
"region:us"
] | null | 2024-04-24T08:49:48+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-openai/whisper-small #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-openai/whisper-small #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-generation | transformers |
<img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Llama-3-8B-Instruct-DPO-v0.3 (32k)
This model is a DPO fine-tune of the `meta-llama/Meta-Llama-3-8B-Instruct` model. I have used `rope_theta` to extend the context length up to 32K safely.
# Quantized GGUF
All GGUF models come with a context length of `32000`: [Llama-3-8B-Instruct-DPO-v0.3-32k-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-32k-GGUF)
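As a rough sketch, one way to use the extended context with `llama-cpp-python` (the filename below is hypothetical; pick any quant from the GGUF repo):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-8B-Instruct-DPO-v0.3.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=32000,  # matches the extended context of these quants
)
out = llm(
    "<|im_start|>user\nWho are you?<|im_end|>\n<|im_start|>assistant\n",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```

The raw prompt string follows the ChatML template documented below.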
# Prompt Template
This model uses `ChatML` prompt template:
```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
# How to use
You can use this model by using `MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3` as the model name in Hugging Face's
transformers library.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from transformers import pipeline
import torch
model_id = "MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3"
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True,
# attn_implementation="flash_attention_2"
)
tokenizer = AutoTokenizer.from_pretrained(
model_id,
trust_remote_code=True
)
streamer = TextStreamer(tokenizer)
pipeline = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
model_kwargs={"torch_dtype": torch.bfloat16},
streamer=streamer
)
# Then you can use the pipeline to generate text.
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|im_end|>")
]
outputs = pipeline(
prompt,
max_new_tokens=8192,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.95,
)
print(outputs[0]["generated_text"][len(prompt):])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__Llama-3-8B-Instruct-DPO-v0.3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.23|
|AI2 Reasoning Challenge (25-Shot)|62.63|
|HellaSwag (10-Shot) |79.20|
|MMLU (5-Shot) |68.33|
|TruthfulQA (0-shot) |53.29|
|Winogrande (5-shot) |75.37|
|GSM8k (5-shot) |70.58|
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["axolotl", "finetune", "dpo", "facebook", "meta", "pytorch", "llama", "llama-3"], "datasets": ["Intel/orca_dpo_pairs"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "inference": false, "model_creator": "MaziyarPanahi", "quantized_by": "MaziyarPanahi", "model-index": [{"name": "Llama-3-8B-Instruct-DPO-v0.3", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 62.63, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 79.2, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 68.33, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 53.29}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 75.37, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 70.58, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3", "name": "Open LLM Leaderboard"}}]}]} | MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"axolotl",
"finetune",
"dpo",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:50:00+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #axolotl #finetune #dpo #facebook #meta #pytorch #llama-3 #conversational #en #dataset-Intel/orca_dpo_pairs #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-llama3 #model-index #autotrain_compatible #text-generation-inference #region-us
| 
Llama-3-8B-Instruct-DPO-v0.3 (32k)
==================================
This model is a DPO fine-tune of the 'meta-llama/Meta-Llama-3-8B-Instruct' model. I have used 'rope\_theta' to extend the context length up to 32K safely.
Quantized GGUF
==============
All GGUF models come with a context length of '32000': Llama-3-8B-Instruct-DPO-v0.3-32k-GGUF
Prompt Template
===============
This model uses 'ChatML' prompt template:
'
How to use
==========
You can use this model by using 'MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3' as the model name in Hugging Face's
transformers library.
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #axolotl #finetune #dpo #facebook #meta #pytorch #llama-3 #conversational #en #dataset-Intel/orca_dpo_pairs #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-llama3 #model-index #autotrain_compatible #text-generation-inference #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stocks
This model is a fine-tuned version of [projecte-aina/roberta-base-ca-v2-cased-te](https://huggingface.co/projecte-aina/roberta-base-ca-v2-cased-te) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6553
- Accuracy: 0.8101
- Precision: 0.8111
- Recall: 0.8101
- F1: 0.8099
- Ratio: 0.5289
## Model description
More information needed
## Intended uses & limitations
More information needed
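A minimal inference sketch, assuming the standard `transformers` text-classification pipeline (the base checkpoint is a Catalan textual-entailment model, so inputs are premise/hypothesis pairs; the example sentences are placeholders):

```python
from transformers import pipeline

# Sketch only; the label set follows the base textual-entailment checkpoint.
clf = pipeline("text-classification", model="adriansanz/2404v2")
print(clf({"text": "El mercat puja.", "text_pair": "Les accions guanyen valor."}))
```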
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- lr_scheduler_warmup_steps: 4
- num_epochs: 2
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Ratio |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| 3.5199 | 0.1626 | 10 | 1.7420 | 0.5530 | 0.5581 | 0.5530 | 0.5431 | 0.6477 |
| 1.6995 | 0.3252 | 20 | 1.3228 | 0.5356 | 0.5554 | 0.5356 | 0.4899 | 0.2007 |
| 1.1579 | 0.4878 | 30 | 0.9331 | 0.5785 | 0.5796 | 0.5785 | 0.5771 | 0.4423 |
| 0.9588 | 0.6504 | 40 | 0.8592 | 0.6329 | 0.6340 | 0.6329 | 0.6321 | 0.5450 |
| 0.91 | 0.8130 | 50 | 0.8239 | 0.6738 | 0.7473 | 0.6738 | 0.6477 | 0.7725 |
| 0.8624 | 0.9756 | 60 | 0.8217 | 0.6 | 0.7217 | 0.6 | 0.5364 | 0.1295 |
| 0.8238 | 1.1382 | 70 | 0.7594 | 0.7477 | 0.7802 | 0.7477 | 0.7401 | 0.6705 |
| 0.7669 | 1.3008 | 80 | 0.6968 | 0.7913 | 0.7922 | 0.7913 | 0.7911 | 0.5289 |
| 0.7648 | 1.4634 | 90 | 0.6744 | 0.8007 | 0.8015 | 0.8007 | 0.8005 | 0.4738 |
| 0.691 | 1.6260 | 100 | 0.6739 | 0.7993 | 0.8029 | 0.7993 | 0.7987 | 0.5544 |
| 0.6698 | 1.7886 | 110 | 0.6616 | 0.8067 | 0.8091 | 0.8067 | 0.8063 | 0.5443 |
| 0.6985 | 1.9512 | 120 | 0.6553 | 0.8101 | 0.8111 | 0.8101 | 0.8099 | 0.5289 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall", "f1"], "base_model": "projecte-aina/roberta-base-ca-v2-cased-te", "model-index": [{"name": "stocks", "results": []}]} | adriansanz/2404v2 | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:projecte-aina/roberta-base-ca-v2-cased-te",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:50:18+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-projecte-aina/roberta-base-ca-v2-cased-te #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| stocks
======
This model is a fine-tuned version of projecte-aina/roberta-base-ca-v2-cased-te on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6553
* Accuracy: 0.8101
* Precision: 0.8111
* Recall: 0.8101
* F1: 0.8099
* Ratio: 0.5289
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 10
* eval\_batch\_size: 2
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 20
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.06
* lr\_scheduler\_warmup\_steps: 4
* num\_epochs: 2
* label\_smoothing\_factor: 0.1
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* lr\\_scheduler\\_warmup\\_steps: 4\n* num\\_epochs: 2\n* label\\_smoothing\\_factor: 0.1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-projecte-aina/roberta-base-ca-v2-cased-te #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* lr\\_scheduler\\_warmup\\_steps: 4\n* num\\_epochs: 2\n* label\\_smoothing\\_factor: 0.1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | null |
# Model Card for InternVideo2
This model card provides information about 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.
## Model Details
### Model Sources
- **Repository:** [InternVideo2](https://github.com/OpenGVLab/InternVideo/tree/main/InternVideo2)
- **Paper:** [2403.15377](https://arxiv.org/abs/2403.15377)
- **Point of Contact:** mailto:[InternVideo Group]([email protected])
## Citation
If you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
```
@article{wang2024internvideo2,
title={InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding},
author={Wang, Yi and Li, Kunchang and Li, Xinhao and Yu, Jiashuo and He, Yinan and Chen, Guo and Pei, Baoqi and Zheng, Rongkun and Xu, Jilan and Wang, Zun and others},
journal={arXiv preprint arXiv:2403.15377},
year={2024}
}
@article{wang2022internvideo,
title={InternVideo: General Video Foundation Models via Generative and Discriminative Learning},
author={Wang, Yi and Li, Kunchang and Li, Yizhuo and He, Yinan and Huang, Bingkun and Zhao, Zhiyu and Zhang, Hongjie and Xu, Jilan and Liu, Yi and Wang, Zun and Xing, Sen and Chen, Guo and Pan, Junting and Yu, Jiashuo and Wang, Yali and Wang, Limin and Qiao, Yu},
journal={arXiv preprint arXiv:2212.03191},
year={2022}
}
``` | {"license": "apache-2.0", "extra_gated_prompt": "You agree to not use the model to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Name": "text", "Company/Organization": "text", "Country": "text", "E-Mail": "text"}} | OpenGVLab/InternVideo2-Stage1-1B-224p-f8 | null | [
"arxiv:2403.15377",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T08:52:35+00:00 | [
"2403.15377"
] | [] | TAGS
#arxiv-2403.15377 #license-apache-2.0 #region-us
|
# Model Card for InternVideo2
This model card provides information about 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.
## Model Details
### Model Sources
- Repository: InternVideo2
- Paper: 2403.15377
- Point of Contact: mailto:InternVideo Group
If you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
| [
"# Model Card for InternVideo2\n\nThis modelcard aims to give the model info of 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.",
"## Model Details",
"### Model Sources\n\n- Repository: InternVideo2\n- Paper: 2403.15377\n- Point of Contact: mailto:InternVideo Group\n\nIf you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community."
] | [
"TAGS\n#arxiv-2403.15377 #license-apache-2.0 #region-us \n",
"# Model Card for InternVideo2\n\nThis modelcard aims to give the model info of 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.",
"## Model Details",
"### Model Sources\n\n- Repository: InternVideo2\n- Paper: 2403.15377\n- Point of Contact: mailto:InternVideo Group\n\nIf you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# breeze_7b_lora
This model is a fine-tuned version of [MediaTek-Research/Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) on the DandinPower/ZH-Reading-Comprehension-Breeze-Instruct dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9671
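A minimal loading sketch, assuming the adapter is applied on top of the base Breeze checkpoint with `peft`:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: load the base model, then attach this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "MediaTek-Research/Breeze-7B-Instruct-v1_0", device_map="auto"
)
model = PeftModel.from_pretrained(base, "DandinPower/breeze_7b_lora_full_text")
tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v1_0")
```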
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 700
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2919 | 0.3690 | 250 | 2.2932 |
| 2.2105 | 0.7380 | 500 | 2.1866 |
| 1.9287 | 1.1070 | 750 | 1.9796 |
| 1.8181 | 1.4760 | 1000 | 1.8416 |
| 1.6765 | 1.8450 | 1250 | 1.7156 |
| 1.4271 | 2.2140 | 1500 | 1.6054 |
| 1.3595 | 2.5830 | 1750 | 1.5071 |
| 1.2794 | 2.9520 | 2000 | 1.4263 |
| 1.0636 | 3.3210 | 2250 | 1.3707 |
| 1.0272 | 3.6900 | 2500 | 1.3044 |
| 0.8977 | 4.0590 | 2750 | 1.2597 |
| 0.8923 | 4.4280 | 3000 | 1.2184 |
| 0.8628 | 4.7970 | 3250 | 1.1737 |
| 0.6994 | 5.1661 | 3500 | 1.1514 |
| 0.7201 | 5.5351 | 3750 | 1.1209 |
| 0.7237 | 5.9041 | 4000 | 1.0931 |
| 0.6468 | 6.2731 | 4250 | 1.0740 |
| 0.6052 | 6.6421 | 4500 | 1.0472 |
| 0.5737 | 7.0111 | 4750 | 1.0360 |
| 0.5419 | 7.3801 | 5000 | 1.0246 |
| 0.5539 | 7.7491 | 5250 | 1.0027 |
| 0.4615 | 8.1181 | 5500 | 0.9947 |
| 0.4782 | 8.4871 | 5750 | 0.9851 |
| 0.4809 | 8.8561 | 6000 | 0.9699 |
| 0.4284 | 9.2251 | 6250 | 0.9738 |
| 0.4332 | 9.5941 | 6500 | 0.9696 |
| 0.4341 | 9.9631 | 6750 | 0.9671 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"language": ["zh"], "license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "nycu-112-2-deeplearning-hw2", "generated_from_trainer"], "datasets": ["DandinPower/ZH-Reading-Comprehension-Breeze-Instruct"], "base_model": "MediaTek-Research/Breeze-7B-Instruct-v1_0", "model-index": [{"name": "breeze_7b_lora", "results": []}]} | DandinPower/breeze_7b_lora_full_text | null | [
"peft",
"safetensors",
"trl",
"sft",
"nycu-112-2-deeplearning-hw2",
"generated_from_trainer",
"zh",
"dataset:DandinPower/ZH-Reading-Comprehension-Breeze-Instruct",
"base_model:MediaTek-Research/Breeze-7B-Instruct-v1_0",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T08:53:31+00:00 | [] | [
"zh"
] | TAGS
#peft #safetensors #trl #sft #nycu-112-2-deeplearning-hw2 #generated_from_trainer #zh #dataset-DandinPower/ZH-Reading-Comprehension-Breeze-Instruct #base_model-MediaTek-Research/Breeze-7B-Instruct-v1_0 #license-apache-2.0 #region-us
| breeze\_7b\_lora
================
This model is a fine-tuned version of MediaTek-Research/Breeze-7B-Instruct-v1\_0 on the DandinPower/ZH-Reading-Comprehension-Breeze-Instruct dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9671
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 2
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 16
* total\_eval\_batch\_size: 2
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 700
* num\_epochs: 10.0
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0
* Pytorch 2.2.2+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 2\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 700\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #nycu-112-2-deeplearning-hw2 #generated_from_trainer #zh #dataset-DandinPower/ZH-Reading-Comprehension-Breeze-Instruct #base_model-MediaTek-Research/Breeze-7B-Instruct-v1_0 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 2\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 700\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | null | What is L-Fortex Price?
L-Fortex tablets are a premium dietary supplement in capsule form designed to address various aspects of men's health, including vitality, energy levels, and reproductive function. Its advanced formula contains a synergistic blend of vitamins, minerals, and herbal extracts carefully selected to meet men's unique health needs.
Official website: <a href="https://www.nutritionsee.com/lfortschi">www.L-Fortex.com</a>
<p><a href="https://www.nutritionsee.com/lfortschi"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/L-FORTEX-chile-1.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/lfortschi">¡¡Comprar ahora!! Haga clic en el enlace a continuación para obtener más información y obtener un 50% de descuento ahora... ¡Date prisa!</a>
Página web oficial:<a href="https://www.nutritionsee.com/lfortschi">www.L-Fortex.com</a> | {"license": "apache-2.0"} | L-FortexEcuador/L-Fortex | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T08:53:32+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| What is L-Fortex Price?
L-Fortex tablets are a premium dietary supplement in capsule form designed to address various aspects of men's health, including vitality, energy levels, and reproductive function. Its advanced formula contains a synergistic blend of vitamins, minerals, and herbal extracts carefully selected to meet men's unique health needs.
Official website:<a href="URL
<p><a href="URL <img src="URL alt="enter image description here"> </a></p>
<a href="URL¡¡Comprar ahora!! Haga clic en el enlace a continuación para obtener más información y obtener un 50% de descuento ahora... ¡Date prisa!</a>
Página web oficial:<a href="URL | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub("dark-lord2002/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "299.09 +/- 14.31", "name": "mean_reward", "verified": false}]}]}]} | dark-lord2002/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-24T08:54:29+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
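In the absence of author-provided code, a hypothetical loading sketch (the repo tags list Mistral-format weights; verify the repository's actual files before relying on this):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch; check the repository's files first.
tok = AutoTokenizer.from_pretrained("ravindrakinagi/pc")
model = AutoModelForCausalLM.from_pretrained("ravindrakinagi/pc")
ids = tok("Hello", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))
```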
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | ravindrakinagi/pc | null | [
"transformers",
"safetensors",
"gguf",
"mistral",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:54:53+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gguf #mistral #unsloth #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gguf #mistral #unsloth #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
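A minimal generation sketch, assuming the fine-tune is used through the standard pipeline API (the prompt and sampling settings are placeholders):

```python
from transformers import pipeline

# Sketch only; adjust generation settings to your use case.
gen = pipeline("text-generation", model="Stassney/gpt-neo-finetune")
print(gen("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```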
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.0.1
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/gpt-neo-125m", "model-index": [{"name": "results", "results": []}]} | Stassney/gpt-neo-finetune | null | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:55:22+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt_neo #text-generation #generated_from_trainer #base_model-EleutherAI/gpt-neo-125m #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# results
This model is a fine-tuned version of EleutherAI/gpt-neo-125m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.0.1
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# results\n\nThis model is a fine-tuned version of EleutherAI/gpt-neo-125m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 50.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.0.1\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #gpt_neo #text-generation #generated_from_trainer #base_model-EleutherAI/gpt-neo-125m #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# results\n\nThis model is a fine-tuned version of EleutherAI/gpt-neo-125m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 50.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.0.1\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | transformers |
# Uploaded model
- **Developed by:** MadK
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
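A rough loading sketch with Unsloth (the sequence length is an assumption; match it to your training setup):

```python
from unsloth import FastLanguageModel

# Sketch only; max_seq_length is an assumption.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MadK/ninjabot_v1_cleaning",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable the faster inference path
```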
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | MadK/ninjabot_v1_cleaning | null | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:56:06+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #gguf #llama #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: MadK
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: MadK\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #gguf #llama #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: MadK\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers | ## About
weighted/imatrix quants of https://huggingface.co/Eurdem/Bombus_3x8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Bombus_3x8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-IQ2_S.gguf) | i1-IQ2_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-IQ2_M.gguf) | i1-IQ2_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-Q2_K.gguf) | i1-Q2_K | 7.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-IQ3_S.gguf) | i1-IQ3_S | 8.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-IQ3_M.gguf) | i1-IQ3_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-Q4_0.gguf) | i1-Q4_0 | 11.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 11.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Bombus_3x8B-i1-GGUF/resolve/main/Bombus_3x8B.i1-Q6_K.gguf) | i1-Q6_K | 15.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["moe", "merge", "llama-3"], "base_model": "Eurdem/Bombus_3x8B", "quantized_by": "mradermacher"} | mradermacher/Bombus_3x8B-i1-GGUF | null | [
"transformers",
"gguf",
"moe",
"merge",
"llama-3",
"en",
"base_model:Eurdem/Bombus_3x8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:56:18+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #moe #merge #llama-3 #en #base_model-Eurdem/Bombus_3x8B #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #moe #merge #llama-3 #en #base_model-Eurdem/Bombus_3x8B #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers | # ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B AWQ
- Model creator: [ChaoticNeutrals](https://huggingface.co/ChaoticNeutrals)
- Original model: [Poppy_Porpoise-v0.6-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B)

## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Poppy_Porpoise-v0.6-L3-8B-AWQ"
system_message = "You are Poppy_Porpoise-v0.6-L3-8B, incarnated as a powerful AI. You were created by ChaoticNeutrals."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than the most commonly used GPTQ settings, with equivalent or better quality.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
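As a minimal sketch of the vLLM route listed above (assumes vLLM >= 0.2.2 with AWQ support; the prompt and sampling settings are illustrative):
```python
from vllm import LLM, SamplingParams

# Load the AWQ checkpoint with vLLM's AWQ quantization backend
llm = LLM(model="solidrust/Poppy_Porpoise-v0.6-L3-8B-AWQ", quantization="awq")

# Illustrative prompt and sampling settings
outputs = llm.generate(["Hello, my name is"], SamplingParams(temperature=0.8, max_tokens=64))
print(outputs[0].outputs[0].text)
```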
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/Poppy_Porpoise-v0.6-L3-8B-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"conversational",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:56:47+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us
| # ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B AWQ
- Model creator: ChaoticNeutrals
- Original model: Poppy_Porpoise-v0.6-L3-8B
!image/png
## How to use
### Install the necessary packages
### Example Python code
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later for support for all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
| [
"# ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B AWQ\n\n- Model creator: ChaoticNeutrals\n- Original model: Poppy_Porpoise-v0.6-L3-8B\n\n!image/png",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #text-generation-inference #region-us \n",
"# ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B AWQ\n\n- Model creator: ChaoticNeutrals\n- Original model: Poppy_Porpoise-v0.6-L3-8B\n\n!image/png",
"## How to use",
"### Install the necessary packages",
"### Example Python code",
"### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | liserman/parlbert_climate_change_praise_v02 | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:57:46+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ripaaiii/fine-tune-C1-revised-lr6-boxkecil30 | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:58:03+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | AraikT/model-of-Araik-t5-finetuned_5 | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T08:58:03+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# MoMonir/wavecoder-ultra-6.7b-GGUF
This model was converted to GGUF format from [`microsoft/wavecoder-ultra-6.7b`](https://huggingface.co/microsoft/wavecoder-ultra-6.7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/wavecoder-ultra-6.7b) for more details on the model.
<!-- README_GGUF.md-about-gguf start -->
### About GGUF ([TheBloke](https://huggingface.co/TheBloke) Description)
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo MoMonir/wavecoder-ultra-6.7b-GGUF --model wavecoder-ultra-6.7b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo MoMonir/wavecoder-ultra-6.7b-GGUF --model wavecoder-ultra-6.7b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m wavecoder-ultra-6.7b.Q4_K_M.gguf -n 128
```
| {"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["code", "llama-cpp", "gguf-my-repo"], "datasets": ["humaneval"], "metrics": ["code_eval"], "license_link": "https://huggingface.co/microsoft/wavecoder-ultra-6.7b/blob/main/LICENSE", "pipeline_tag": "text-generation"} | MoMonir/wavecoder-ultra-6.7b-GGUF | null | [
"transformers",
"gguf",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:humaneval",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:58:09+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #code #llama-cpp #gguf-my-repo #text-generation #en #dataset-humaneval #license-mit #endpoints_compatible #region-us
|
# MoMonir/wavecoder-ultra-6.7b-GGUF
This model was converted to GGUF format from 'microsoft/wavecoder-ultra-6.7b' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
### About GGUF (TheBloke Description)
GGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* URL. The source project for GGUF. Offers a CLI and a server option.
* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# MoMonir/wavecoder-ultra-6.7b-GGUF\nThis model was converted to GGUF format from 'microsoft/wavecoder-ultra-6.7b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"### About GGUF (TheBloke Description)\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #code #llama-cpp #gguf-my-repo #text-generation #en #dataset-humaneval #license-mit #endpoints_compatible #region-us \n",
"# MoMonir/wavecoder-ultra-6.7b-GGUF\nThis model was converted to GGUF format from 'microsoft/wavecoder-ultra-6.7b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"### About GGUF (TheBloke Description)\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# Uploaded model
- **Developed by:** VinhLlama
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
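A minimal loading sketch for the merged 16-bit weights, assuming they load with plain Transformers (untested against this exact repo; `device_map="auto"` requires `accelerate`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged 16-bit checkpoint from the Hub
tokenizer = AutoTokenizer.from_pretrained("VinhLlama/Gemma7bVinhntV01_16bit")
model = AutoModelForCausalLM.from_pretrained(
    "VinhLlama/Gemma7bVinhntV01_16bit",
    torch_dtype="auto",  # keep the checkpoint's native dtype
    device_map="auto",   # requires accelerate
)
```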
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-7b-bnb-4bit"} | VinhLlama/Gemma7bVinhntV01_16bit | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T08:58:14+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: VinhLlama
- License: apache-2.0
- Finetuned from model : unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: VinhLlama\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: VinhLlama\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | mlx |
# GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-3.0-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-3.0`](https://huggingface.co/GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-3.0).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-3.0) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-3.0-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"license": "apache-2.0", "tags": ["mlx"]} | GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-3.0-mlx | null | [
"mlx",
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T08:58:28+00:00 | [] | [] | TAGS
#mlx #safetensors #llama #license-apache-2.0 #region-us
|
# GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-3.0-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-3.0']().
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #llama #license-apache-2.0 #region-us \n",
"# GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null | mlx |
# GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2`](https://huggingface.co/GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"license": "apache-2.0", "tags": ["mlx"]} | GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2-mlx | null | [
"mlx",
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T08:58:37+00:00 | [] | [] | TAGS
#mlx #safetensors #llama #license-apache-2.0 #region-us
|
# GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2']().
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #llama #license-apache-2.0 #region-us \n",
"# GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.2']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null | null |
# danielus/Mermaid-Llama-3-8B-Q8_0-GGUF
This model was converted to GGUF format from [`TroyDoesAI/Mermaid-Llama-3-8B`](https://huggingface.co/TroyDoesAI/Mermaid-Llama-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TroyDoesAI/Mermaid-Llama-3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo danielus/Mermaid-Llama-3-8B-Q8_0-GGUF --model mermaid-llama-3-8b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo danielus/Mermaid-Llama-3-8B-Q8_0-GGUF --model mermaid-llama-3-8b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mermaid-llama-3-8b.Q8_0.gguf -n 128
```
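Since the base model is tuned for Mermaid diagram generation, a task-flavored prompt may give more representative output; the invocation below is illustrative only.
```bash
llama-cli --hf-repo danielus/Mermaid-Llama-3-8B-Q8_0-GGUF --model mermaid-llama-3-8b.Q8_0.gguf \
  -p "Create a Mermaid flowchart for a user login process:" -n 256
```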
| {"license": "cc-by-4.0", "tags": ["llama-cpp", "gguf-my-repo"]} | danielus/Mermaid-Llama-3-8B-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-4.0",
"region:us"
] | null | 2024-04-24T08:59:04+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #license-cc-by-4.0 #region-us
|
# danielus/Mermaid-Llama-3-8B-Q8_0-GGUF
This model was converted to GGUF format from 'TroyDoesAI/Mermaid-Llama-3-8B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# danielus/Mermaid-Llama-3-8B-Q8_0-GGUF\nThis model was converted to GGUF format from 'TroyDoesAI/Mermaid-Llama-3-8B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #license-cc-by-4.0 #region-us \n",
"# danielus/Mermaid-Llama-3-8B-Q8_0-GGUF\nThis model was converted to GGUF format from 'TroyDoesAI/Mermaid-Llama-3-8B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Repo id from this card; the checkpoint filename is an assumption.
checkpoint = load_from_hub("loziobo/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
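To sanity-check a loaded policy, stable-baselines3's evaluation helper can be used; the sketch below assumes a Gymnasium install with Box2D available.
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```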
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "271.57 +/- 14.48", "name": "mean_reward", "verified": false}]}]}]} | loziobo/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-24T08:59:20+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
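For example, resuming a SnowballTarget run with the config file shipped in the ML-Agents repo might look like this (config path and run id are assumptions):
```bash
mlagents-learn ./config/ppo/SnowballTarget.yaml --run-id=SnowballTarget1 --resume
```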
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: DaniElAbrazos/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]} | DaniElAbrazos/ppo-SnowballTarget | null | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | null | 2024-04-24T08:59:35+00:00 | [] | [] | TAGS
#ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us
|
# ppo Agent playing SnowballTarget
This is a trained model of a ppo agent playing SnowballTarget
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how works ML-Agents:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: DaniElAbrazos/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: DaniElAbrazos/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
"TAGS\n#ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us \n",
"# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: DaniElAbrazos/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
null | mlx |
# GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5`](https://huggingface.co/GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"license": "apache-2.0", "tags": ["mlx"]} | GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5-mlx | null | [
"mlx",
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2024-04-24T09:00:15+00:00 | [] | [] | TAGS
#mlx #safetensors #llama #license-apache-2.0 #region-us
|
# GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5']().
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #llama #license-apache-2.0 #region-us \n",
"# GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-3-8B-instruct-layer-mix-bpw-2.5']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null | transformers |
# Uploaded model
- **Developed by:** VinhLlama
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-7b-bnb-4bit"} | VinhLlama/Gemma7bVinhntV02_16bit | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T09:01:42+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: VinhLlama
- License: apache-2.0
- Finetuned from model : unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: VinhLlama\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: VinhLlama\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** FelixChao
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | PetroGPT/Llama-3-Petro-Instruct-v1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T09:01:54+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: FelixChao
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: FelixChao\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: FelixChao\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# heart_defect_detection
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2189
- F1: 0.4835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2369 | 1.0 | 305 | 0.2212 | 0.4835 |
| 0.2151 | 2.0 | 610 | 0.2194 | 0.4835 |
| 0.2055 | 3.0 | 915 | 0.2189 | 0.4835 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
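A minimal inference sketch with the fine-tuned checkpoint (the repo id is taken from this card's metadata; the example sentence is an illustrative assumption):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="yana-sklyanchuk/heart_defect_detection")
print(classifier("Patient presents with a systolic murmur over the left sternal border."))
```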
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "heart_defect_detection", "results": []}]} | yana-sklyanchuk/heart_defect_detection | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T09:02:30+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| heart\_defect\_detection
========================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2189
* F1: 0.4835
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
**The license is `cc-by-nc-4.0`.**
# **GAI-LLM/Llama-3-8B_classification**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung, SuKyung Park (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
GAI-LLM/Llama-3-8B_classification is an auto-regressive language model based on the Llama transformer architecture.
**Base Model** [meta-llama/Meta-Llama-3-8B]
**Training Dataset**
- We combined the Open Korean Dataset using a mixed strategy.
- We used 8 × A100 80GB GPUs for training.
# **Model Benchmark**
# Implementation Code
```python
### GAI-LLM/Llama-3-8B_classification
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "GAI-LLM/Llama-3-8B_classification"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
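
# Illustrative generation call -- the prompt and decoding settings below are
# assumptions, not part of the original card:
prompt = "다음 문장의 감정을 분류하세요: 이 영화 정말 재미있었어요."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))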
``` | {"language": ["ko"], "license": "cc-by-nc-4.0", "library_name": "transformers", "pipeline_tag": "text-generation"} | GAI-LLM/Llama-3-8B_classification | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T09:02:31+00:00 | [] | [
"ko"
] | TAGS
#transformers #safetensors #llama #text-generation #ko #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
The license is 'cc-by-nc-4.0'.
# GAI-LLM/Llama-3-8B_classification
## Model Details
Model Developers Donghoon Oh, Hanmin Myung, SuKyung Park (SK C&C G.AI Eng)
Input Models input text only.
Output Models generate text only.
Model Architecture
GAI-LLM/Llama-3-8B_classification is an auto-regressive language model based on the LLaMA2 transformer architecture.
Base Model [meta-llama/Meta-Llama-3-8B]
Training Dataset
- We combined Open Korean Dateset using mixed-strategy
- We use A100 GPU 80GB * 8, when training.
# Model Benchmark
# Implementation Code
| [
"# GAI-LLM/Llama-3-8B_classification",
"## Model Details\n\nModel Developers Donghoon Oh, Hanmin Myung, SuKyung Park (SK C&C G.AI Eng)\n\nInput Models input text only.\n\nOutput Models generate text only.\n\nModel Architecture \nGAI-LLM/Llama-3-8B_classification is an auto-regressive language model based on the LLaMA2 transformer architecture.\n\nBase Model [meta-llama/Meta-Llama-3-8B]\n\nTraining Dataset \n\n- We combined Open Korean Dateset using mixed-strategy \n- We use A100 GPU 80GB * 8, when training.",
"# Model Benchmark",
"# Implementation Code"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #ko #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GAI-LLM/Llama-3-8B_classification",
"## Model Details\n\nModel Developers Donghoon Oh, Hanmin Myung, SuKyung Park (SK C&C G.AI Eng)\n\nInput Models input text only.\n\nOutput Models generate text only.\n\nModel Architecture \nGAI-LLM/Llama-3-8B_classification is an auto-regressive language model based on the LLaMA2 transformer architecture.\n\nBase Model [meta-llama/Meta-Llama-3-8B]\n\nTraining Dataset \n\n- We combined Open Korean Dateset using mixed-strategy \n- We use A100 GPU 80GB * 8, when training.",
"# Model Benchmark",
"# Implementation Code"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** umarigan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
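A minimal transformers sketch for trying the checkpoint; the repo id comes from this card, while the prompt format and Turkish example are illustrative assumptions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("umarigan/LLama-3-8B-Instruction-tr")
model = AutoModelForCausalLM.from_pretrained("umarigan/LLama-3-8B-Instruction-tr", device_map="auto")

inputs = tokenizer("Soru: Türkiye'nin başkenti neresidir?\nCevap:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```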
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en", "tr"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "datasets": ["umarigan/GPTeacher-General-Instruct-tr"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | umarigan/LLama-3-8B-Instruction-tr | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"tr",
"dataset:umarigan/GPTeacher-General-Instruct-tr",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T09:02:39+00:00 | [] | [
"en",
"tr"
] | TAGS
#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #tr #dataset-umarigan/GPTeacher-General-Instruct-tr #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: umarigan
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [
"# Uploaded model\n\n- Developed by: umarigan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #tr #dataset-umarigan/GPTeacher-General-Instruct-tr #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: umarigan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Repo id from this card; the checkpoint filename is an assumption.
checkpoint = load_from_hub("elisamammi/ppo_lunar_lander_v2", "ppo_lunar_lander_v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "235.29 +/- 66.09", "name": "mean_reward", "verified": false}]}]}]} | elisamammi/ppo_lunar_lander_v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-24T09:02:50+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ramfais/gpt2_orpo | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T09:03:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | diffusers | # Fonglets Ashley Ho (JabComix) Pony XL
<Gallery />
## Trigger words
You should use `ashley ho` to trigger the image generation.
You should use `tooth gap` to trigger the image generation.
You should use `Micro Bikini` to trigger the image generation.
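A diffusers sketch for applying the LoRA on top of the Pony Diffusion XL base model; loading the adapter straight from this repo is an assumption (check the Files & versions tab for the exact weight name):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stablediffusionapi/pony-diffusion-v6-xl", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Fongletto/Fonglets_Ashley_Ho_JabComix_Pony_XL")
image = pipe("ashley ho, tooth gap, smiling, park").images[0]
image.save("ashley_ho.png")
```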
## Download model
Weights for this model are available in Safetensors format.
[Download](/Fongletto/Pony_XL/tree/main) them in the Files & versions tab. | {"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora", "not-for-all-audiences"], "widget": [{"text": "tooth gap, ashley_ho, 1koma, black border, comic, english text, letterboxed, speech bubble, 1girl, blush, smile, short hair, open mouth, black hair, indoors, dark skin, pink jumper, white skirt, stairs, hair bun, flat chest, solo,", "parameters": {"negative_prompt": "source_furry, source_pony, mosaic censoring,bar censor, breasts, mature, wide hips, large thighs"}, "output": {"url": "images/00046-216934606.png"}}, {"text": "tooth gap, ashley_ho, 1koma, black border, comic, english text, letterboxed, speech bubble, 1girl, smile, short hair, open mouth, black hair, pool, yellow jumper, white skirt, stairs, hair bun, flat chest, solo, sleeves past wrists, ", "parameters": {"negative_prompt": "source_furry, source_pony, mosaic censoring,bar censor, breasts, mature, wide hips, large thighs"}, "output": {"url": "images/00050-2919983236.png"}}, {"text": "tooth gap, ashley_ho, black border, comic, english text, letterboxed, speech bubble, 1girl, smile, short hair, open mouth, black hair, pool, white crop top, pink dolphin shorts,hair bun, flat chest, solo, navel, aged down", "parameters": {"negative_prompt": "source_furry, source_pony, mosaic censoring,bar censor, breasts, mature, wide hips, large thighs"}, "output": {"url": "images/00056-4018619522.png"}}, {"text": "tooth gap, ashley_ho, black border, comic, english text, letterboxed, speech bubble, 1girl, smile, short hair, open mouth, black hair, couch, lounge room, blue dress, dress, hair bun, flat chest, solo, aged down, 1boy, age difference, size difference, lifting another, orange beard, orange handlebar mustache, ", "parameters": {"negative_prompt": "source_furry, source_pony, mosaic censoring,bar censor, breasts, mature, wide hips, large thighs"}, "output": {"url": "images/00064-1084715645.png"}}, {"text": "tooth gap, ashley_ho, black border, comic, english text, letterboxed, speech bubble, 1girl, smile, short hair, open mouth, black hair, couch, park, outdoors, blue dress, dress, hair bun, flat chest, solo, aged down, playground, ", "parameters": {"negative_prompt": "source_furry, source_pony, mosaic censoring,bar censor, breasts, mature, wide hips, large thighs"}, "output": {"url": "images/00071-4244028882.png"}}, {"text": "tooth gap, ashley_ho, black border, comic, english text, letterboxed, speech bubble, 1girl, smile, short hair, open mouth, black hair, park, outdoors, blue dress, dress, hair bun, flat chest, solo, aged down, playground, running, jumping, ", "parameters": {"negative_prompt": "source_furry, source_pony, mosaic censoring,bar censor, breasts, mature, wide hips, large thighs"}, "output": {"url": "images/00072-13715566.png"}}], "base_model": "stablediffusionapi/pony-diffusion-v6-xl", "instance_prompt": "ashley ho, tooth gap, Micro Bikini"} | Fongletto/Fonglets_Ashley_Ho_JabComix_Pony_XL | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"not-for-all-audiences",
"base_model:stablediffusionapi/pony-diffusion-v6-xl",
"region:us"
] | null | 2024-04-24T09:03:47+00:00 | [] | [] | TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #not-for-all-audiences #base_model-stablediffusionapi/pony-diffusion-v6-xl #region-us
| # Fonglets Ashley Ho (JabComix) Pony XL
<Gallery />
## Trigger words
You should use 'ashley ho' to trigger the image generation.
You should use 'tooth gap' to trigger the image generation.
You should use 'Micro Bikini' to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab. | [
"# Fonglets Ashley Ho (JabComix) Pony XL\n\n<Gallery />",
"## Trigger words\n\nYou should use 'ashley ho' to trigger the image generation.\n\nYou should use 'tooth gap' to trigger the image generation.\n\nYou should use 'Micro Bikini' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] | [
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #not-for-all-audiences #base_model-stablediffusionapi/pony-diffusion-v6-xl #region-us \n",
"# Fonglets Ashley Ho (JabComix) Pony XL\n\n<Gallery />",
"## Trigger words\n\nYou should use 'ashley ho' to trigger the image generation.\n\nYou should use 'tooth gap' to trigger the image generation.\n\nYou should use 'Micro Bikini' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
text-generation | transformers | Quantizations of https://huggingface.co/stabilityai/stablelm-zephyr-3b
# From original readme
## Usage
`StableLM Zephyr 3B` uses the following instruction format:
```
<|user|>
List 3 synonyms for the word "tiny"<|endoftext|>
<|assistant|>
1. Dwarf
2. Little
3. Petite<|endoftext|>
```
This format is also available through the tokenizer's `apply_chat_template` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-zephyr-3b')
model = AutoModelForCausalLM.from_pretrained(
'stabilityai/stablelm-zephyr-3b',
device_map="auto"
)
prompt = [{'role': 'user', 'content': 'List 3 synonyms for the word "tiny"'}]
inputs = tokenizer.apply_chat_template(
prompt,
add_generation_prompt=True,
return_tensors='pt'
)
tokens = model.generate(
inputs.to(model.device),
max_new_tokens=1024,
temperature=0.8,
do_sample=True
)
print(tokenizer.decode(tokens[0], skip_special_tokens=False))
```
You can also see how to run a performance optimized version of this model [here](https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/273-stable-zephyr-3b-chatbot/273-stable-zephyr-3b-chatbot.ipynb) using [OpenVINO](https://docs.openvino.ai/2023.2/home.html) from Intel.
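Because this repository hosts GGUF quantizations, running a quant directly with llama.cpp may be the more typical path; the filename below is an assumption (check the repo's file list), and `-e` lets llama-cli interpret the `\n` escapes in the prompt:
```bash
llama-cli -m stablelm-zephyr-3b.Q4_K_M.gguf -e \
  -p "<|user|>\nList 3 synonyms for the word \"tiny\"<|endoftext|>\n<|assistant|>\n" -n 128
```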
| {"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "stablelm-zephyr-3b", "stabilityai"], "inference": false, "pipeline_tag": "text-generation"} | duyntnet/stablelm-zephyr-3b-imatrix-GGUF | null | [
"transformers",
"gguf",
"imatrix",
"stablelm-zephyr-3b",
"stabilityai",
"text-generation",
"en",
"license:other",
"region:us"
] | null | 2024-04-24T09:03:52+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #imatrix #stablelm-zephyr-3b #stabilityai #text-generation #en #license-other #region-us
| Quantizations of URL
# From original readme
## Usage
'StableLM Zephyr 3B' uses the following instruction format:
This format is also available through the tokenizer's 'apply_chat_template' method:
You can also see how to run a performance optimized version of this model here using OpenVINO from Intel.
| [
"# From original readme",
"## Usage\n\n'StableLM Zephyr 3B' uses the following instruction format:\n\n\nThis format is also available through the tokenizer's 'apply_chat_template' method:\n\n\n\nYou can also see how to run a performance optimized version of this model here using OpenVINO from Intel."
] | [
"TAGS\n#transformers #gguf #imatrix #stablelm-zephyr-3b #stabilityai #text-generation #en #license-other #region-us \n",
"# From original readme",
"## Usage\n\n'StableLM Zephyr 3B' uses the following instruction format:\n\n\nThis format is also available through the tokenizer's 'apply_chat_template' method:\n\n\n\nYou can also see how to run a performance optimized version of this model here using OpenVINO from Intel."
] |
null | transformers |
# Uploaded model
- **Developed by:** VinhLlama
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-7b-bnb-4bit"} | VinhLlama/Gemma7bVinhntV03_16bit | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T09:04:24+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: VinhLlama
- License: apache-2.0
- Finetuned from model : unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: VinhLlama\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: VinhLlama\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/EryriLabs/Llama-3-Smolphin-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
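One way to fetch a single quant is with the Hugging Face CLI; the filename below is taken from the table that follows:
```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli download mradermacher/Llama-3-Smolphin-8b-GGUF \
  Llama-3-Smolphin-8b.Q4_K_M.gguf --local-dir .
```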
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Smolphin-8b-GGUF/resolve/main/Llama-3-Smolphin-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "EryriLabs/Llama-3-Smolphin-8b", "quantized_by": "mradermacher"} | mradermacher/Llama-3-Smolphin-8b-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:EryriLabs/Llama-3-Smolphin-8b",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T09:04:36+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mergekit #merge #en #base_model-EryriLabs/Llama-3-Smolphin-8b #license-llama3 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #mergekit #merge #en #base_model-EryriLabs/Llama-3-Smolphin-8b #license-llama3 #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-2
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-2", "results": []}]} | AlignmentResearch/robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-2 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T09:05:02+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-2
This model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-1
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-1", "results": []}]} | AlignmentResearch/robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-1 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T09:05:26+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-1
This model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-1\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 1\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-14m_mz-130_PasswordMatch_n-its-10-seed-1\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 1\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
zero-shot-image-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
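Pending an official snippet, here is a generic sketch, assuming this repo hosts a standard CLIP ViT-B/32 checkpoint with its processor files (the repo name suggests so, but the card does not confirm it):
```python
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("user-agent/CLIP_embeddings_ViT_B32")
processor = CLIPProcessor.from_pretrained("user-agent/CLIP_embeddings_ViT_B32")

# Text embeddings; get_image_features works analogously for images.
inputs = processor(text=["a photo of a cat"], return_tensors="pt", padding=True)
text_features = model.get_text_features(**inputs)
```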
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | user-agent/CLIP_embeddings_ViT_B32 | null | [
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T09:07:02+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #clip #zero-shot-image-classification #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #clip #zero-shot-image-classification #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** jurieyel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
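As a hedged usage sketch (the card does not include one), the checkpoint could be loaded through Unsloth's `FastLanguageModel`; `max_seq_length` below is an assumption, not a documented value:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="jurieyel/text2sql-finetuned-llama3-8b-bnb-4bit",
    max_seq_length=2048,   # assumption: not documented in this card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode
```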
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | jurieyel/text2sql-finetuned-llama3-8b-bnb-4bit | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T09:07:56+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: jurieyel
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: jurieyel\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: jurieyel\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual `huggingface_sb3` naming convention and is an assumption, not documented in this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and restore the trained agent.
checkpoint = load_from_hub("pietroorlandi/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "254.84 +/- 22.14", "name": "mean_reward", "verified": false}]}]}]} | pietroorlandi/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-24T09:08:04+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-to-image | diffusers | # Fonglets PumpkinSpiceLatte Pony XL
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Fongletto/Fonglets_PumpkinSpiceLatte_Pony_XL/tree/main) them in the Files & versions tab. | {"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora", "not-for-all-audiences"], "widget": [{"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, 1girl, abigail williams (fate), abigail williams (swimsuit foreigner) (fate), abigail williams (swimsuit foreigner) (first ascension) (fate), absurdres, bikini, black bow, black bra, black panties, bow, bra, colored skin, commentary, double bun, english commentary, fate/grand order, fate (series), flat chest, forehead, from below, hair bow, hair bun, highres, keyhole, long hair, looking at viewer, looking down, multiple hair bows, navel, orange bow, panties, parted bangs, polka dot, polka dot bow, pumpkinspicelatte, red eyes, smile, solo, standing, swimsuit, underwear, underwear only, white hair, white skin<lora:Fonglets_PumpkinSpiceLatte:1>", "output": {"url": "images/00020-1531338185.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, <lora:Fonglets_PumpkinSpiceLatte:1>", "output": {"url": "images/00033-972720408.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, <lora:Fonglets_PumpkinSpiceLatte:1>", "output": {"url": "images/00037-2520449106.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, <lora:Fonglets_PumpkinSpiceLatte:1>", "output": {"url": "images/00041-660578769.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, <lora:Fonglets_PumpkinSpiceLatte:1>", "output": {"url": "images/00046-1602290416.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, <lora:Fonglets_PumpkinSpiceLatte:1>", "parameters": {"negative_prompt": "cum, fluid, liquid, "}, "output": {"url": "images/00054-2227083049.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, <lora:Fonglets_PumpkinSpiceLatte:1>", "output": {"url": "images/00061-2227083051.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, <lora:Fonglets_PumpkinSpiceLatte:1>", "output": {"url": "images/00072-1260928957.png"}}, {"text": "score_9,score_8,score_7,score_8_up,score_7_up,score_6_up, <lora:Fonglets_PumpkinSpiceLatte:1>", "parameters": {"negative_prompt": "naked"}, "output": {"url": "images/00078-731428457.png"}}], "base_model": "stablediffusionapi/pony-diffusion-v6-xl"} | Fongletto/Fonglets_PumpkinSpiceLatte_Pony_XL | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"not-for-all-audiences",
"base_model:stablediffusionapi/pony-diffusion-v6-xl",
"region:us"
] | null | 2024-04-24T09:08:18+00:00 | [] | [] | TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #not-for-all-audiences #base_model-stablediffusionapi/pony-diffusion-v6-xl #region-us
| # Fonglets PumpkinSpiceLatte Pony XL
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab. | [
"# Fonglets PumpkinSpiceLatte Pony XL\n\n<Gallery />",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] | [
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #not-for-all-audiences #base_model-stablediffusionapi/pony-diffusion-v6-xl #region-us \n",
"# Fonglets PumpkinSpiceLatte Pony XL\n\n<Gallery />",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
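Pending official instructions, a generic loading sketch; the task head is undocumented, so `Wav2Vec2ForCTC` (speech recognition) is an assumption based on the repo name:
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("Reihaneh/wav2vec2_fy_nl_en_common_voice_15")
model = Wav2Vec2ForCTC.from_pretrained("Reihaneh/wav2vec2_fy_nl_en_common_voice_15")
```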
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Reihaneh/wav2vec2_fy_nl_en_common_voice_15 | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T09:08:21+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-patch16-224-finetuned-ind-4-imbalanced-aadhaarmask-3839
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2241
- Accuracy: 0.9401
- Recall: 0.9401
- F1: 0.9385
- Precision: 0.9382
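A minimal inference sketch via the 🤗 `pipeline` API (label names depend on the undocumented fine-tuning dataset):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Kushagra07/deit-base-patch16-224-finetuned-ind-4-imbalanced-aadhaarmask-3839",
)
print(classifier("example.jpg"))  # placeholder path to a local image
```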
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.1426 | 1.0 | 96 | 0.2195 | 0.9297 | 0.9297 | 0.9263 | 0.9270 |
| 0.0644 | 2.0 | 192 | 0.2403 | 0.9245 | 0.9245 | 0.9249 | 0.9260 |
| 0.0695 | 3.0 | 288 | 0.3488 | 0.9232 | 0.9232 | 0.9221 | 0.9257 |
| 0.0674 | 4.0 | 384 | 0.2355 | 0.9375 | 0.9375 | 0.9366 | 0.9363 |
| 0.1265 | 5.0 | 480 | 0.2119 | 0.9388 | 0.9388 | 0.9376 | 0.9382 |
| 0.1128 | 6.0 | 576 | 0.2018 | 0.9401 | 0.9401 | 0.9388 | 0.9389 |
| 0.0806 | 7.0 | 672 | 0.2095 | 0.9388 | 0.9388 | 0.9371 | 0.9410 |
| 0.1237 | 8.0 | 768 | 0.2008 | 0.9427 | 0.9427 | 0.9423 | 0.9425 |
| 0.0955 | 9.0 | 864 | 0.1763 | 0.9440 | 0.9440 | 0.9420 | 0.9421 |
| 0.0429 | 10.0 | 960 | 0.2021 | 0.9401 | 0.9401 | 0.9381 | 0.9376 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy", "recall", "f1", "precision"], "base_model": "facebook/deit-base-patch16-224", "model-index": [{"name": "deit-base-patch16-224-finetuned-ind-4-imbalanced-aadhaarmask-3839", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9401041666666666, "name": "Accuracy"}, {"type": "recall", "value": 0.9401041666666666, "name": "Recall"}, {"type": "f1", "value": 0.9384896500283729, "name": "F1"}, {"type": "precision", "value": 0.9382242510101494, "name": "Precision"}]}]}]} | Kushagra07/deit-base-patch16-224-finetuned-ind-4-imbalanced-aadhaarmask-3839 | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T09:08:30+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/deit-base-patch16-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| deit-base-patch16-224-finetuned-ind-4-imbalanced-aadhaarmask-3839
=================================================================
This model is a fine-tuned version of facebook/deit-base-patch16-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2241
* Accuracy: 0.9401
* Recall: 0.9401
* F1: 0.9385
* Precision: 0.9382
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/deit-base-patch16-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PolizzeDonut-UltimaProvaCluster-Cluster7di7-5epochs
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
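A loading sketch only — Donut inference additionally needs a task prompt token, which this card does not document:
```python
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "tedad09/PolizzeDonut-UltimaProvaCluster-Cluster7di7-5epochs"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)
```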
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-UltimaProvaCluster-Cluster7di7-5epochs", "results": []}]} | tedad09/PolizzeDonut-UltimaProvaCluster-Cluster7di7-5epochs | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T09:10:01+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
|
# PolizzeDonut-UltimaProvaCluster-Cluster7di7-5epochs
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# PolizzeDonut-UltimaProvaCluster-Cluster7di7-5epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n",
"# PolizzeDonut-UltimaProvaCluster-Cluster7di7-5epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [motherfucker0/zhun01](https://huggingface.co/motherfucker0/zhun01)
* [motherfucker0/zhun02](https://huggingface.co/motherfucker0/zhun02)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: motherfucker0/zhun01
layer_range: [0, 30]
- model: motherfucker0/zhun02
layer_range: [0, 30]
merge_method: slerp
base_model: motherfucker0/zhun02
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.25
dtype: bfloat16
```
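The merged checkpoint can then be loaded like any causal LM; a minimal sketch (the `bfloat16` dtype mirrors the config above):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Sumail/zhun05")
model = AutoModelForCausalLM.from_pretrained("Sumail/zhun05", torch_dtype=torch.bfloat16)
```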
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["motherfucker0/zhun01", "motherfucker0/zhun02"]} | Sumail/zhun05 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:motherfucker0/zhun01",
"base_model:motherfucker0/zhun02",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T09:10:47+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-motherfucker0/zhun01 #base_model-motherfucker0/zhun02 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* motherfucker0/zhun01
* motherfucker0/zhun02
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* motherfucker0/zhun01\n* motherfucker0/zhun02",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-motherfucker0/zhun01 #base_model-motherfucker0/zhun02 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* motherfucker0/zhun01\n* motherfucker0/zhun02",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolo
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on an unknown dataset.
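Though the card omits usage, a minimal detection sketch with the 🤗 `pipeline` API might look like this (the detected classes are undocumented):
```python
from transformers import pipeline

detector = pipeline("object-detection", model="SkowKyubu/yolo")
print(detector("example.jpg"))  # placeholder path to a local image
```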
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "hustvl/yolos-tiny", "model-index": [{"name": "yolo", "results": []}]} | SkowKyubu/yolo | null | [
"transformers",
"safetensors",
"yolos",
"object-detection",
"generated_from_trainer",
"base_model:hustvl/yolos-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T09:10:50+00:00 | [] | [] | TAGS
#transformers #safetensors #yolos #object-detection #generated_from_trainer #base_model-hustvl/yolos-tiny #license-apache-2.0 #endpoints_compatible #region-us
|
# yolo
This model is a fine-tuned version of hustvl/yolos-tiny on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.2
| [
"# yolo\n\nThis model is a fine-tuned version of hustvl/yolos-tiny on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.15.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #yolos #object-detection #generated_from_trainer #base_model-hustvl/yolos-tiny #license-apache-2.0 #endpoints_compatible #region-us \n",
"# yolo\n\nThis model is a fine-tuned version of hustvl/yolos-tiny on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.15.0\n- Tokenizers 0.15.2"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym, pickle
from huggingface_hub import hf_hub_download

# Download and unpickle the trained Q-table (a dict that includes "env_id").
path = hf_hub_download(repo_id="FaryalS/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
model = pickle.load(open(path, "rb"))
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | FaryalS/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-24T09:11:11+00:00 | [] | [] | TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
| [
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] | [
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
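Pending an official snippet, a minimal PEFT sketch, assuming this repo stores a LoRA adapter on top of `google/gemma-2b-it` (the base model declared in the card metadata):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
model = PeftModel.from_pretrained(base, "Inishds/gemma-2b-it-quotes")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
```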
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "google/gemma-2b-it"} | Inishds/gemma-2b-it-quotes | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b-it",
"region:us"
] | null | 2024-04-24T09:12:28+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-google/gemma-2b-it #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-google/gemma-2b-it #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the pickle-loading helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="FaryalS/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
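A short, hedged usage note: by the Deep RL course convention the loaded dict carries the Q-table alongside `env_id` (the `"qtable"` key name is an assumption, not stated on this card), so acting greedily looks roughly like:

```python
import numpy as np

# gymnasium-style reset returns (state, info); classic gym (<0.26) returns only `state`
state, info = env.reset()
action = int(np.argmax(model["qtable"][state]))  # "qtable" key assumed from course convention
```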
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.52 +/- 2.74", "name": "mean_reward", "verified": false}]}]}]} | FaryalS/Taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-24T09:13:31+00:00 | [] | [] | TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3.
## Usage
| [
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] | [
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
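For intuition, SLERP (spherical linear interpolation) blends two checkpoints along the arc between their weight vectors rather than along a straight line, which preserves weight norms better than plain averaging. A toy sketch for a single pair of tensors — not mergekit's actual implementation:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Toy spherical linear interpolation between two weight tensors."""
    a_flat, b_flat = a.flatten(), b.flatten()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    # angle between the two (normalized) weight vectors
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)
```

In the config below, the per-filter `t` schedules mean attention and MLP layers are interpolated with different weights across the layer stack.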
### Models Merged
The following models were included in the merge:
* [motherfucker0/zhun02](https://huggingface.co/motherfucker0/zhun02)
* [motherfucker0/zhun01](https://huggingface.co/motherfucker0/zhun01)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: motherfucker0/zhun01
layer_range: [0, 30]
- model: motherfucker0/zhun02
layer_range: [0, 30]
merge_method: slerp
base_model: motherfucker0/zhun02
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.1
dtype: bfloat16
```
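As a hedged sketch of how such a config is typically executed with mergekit's Python API (the file name `merge_config.yaml` and output path are illustrative; check the mergekit README for the current signatures and options):

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML shown above (saved here as merge_config.yaml -- name is illustrative)
with open("merge_config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-model",  # illustrative output directory
    options=MergeOptions(cuda=False, copy_tokenizer=True, lazy_unpickle=True),
)
```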
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["motherfucker0/zhun02", "motherfucker0/zhun01"]} | motherfucker0/zhen01 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:motherfucker0/zhun02",
"base_model:motherfucker0/zhun01",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-24T09:13:55+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-motherfucker0/zhun02 #base_model-motherfucker0/zhun01 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* motherfucker0/zhun02
* motherfucker0/zhun01
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* motherfucker0/zhun02\n* motherfucker0/zhun01",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-motherfucker0/zhun02 #base_model-motherfucker0/zhun01 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* motherfucker0/zhun02\n* motherfucker0/zhun01",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | transformers |
# Uploaded model
- **Developed by:** hanifsyarubany10
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
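For inference, a minimal hedged sketch using Unsloth's loader (`max_seq_length` and the prompt are illustrative assumptions, not values documented on this card):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="hanifsyarubany10/mistral-7b-100epochs-Unsloth-FreedomIntelligence-indo-2e-4",
    max_seq_length=2048,   # illustrative; not stated on this card
    load_in_4bit=True,     # matches the 4-bit base model
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Apa kabar?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```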
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | hanifsyarubany10/mistral-7b-100epochs-Unsloth-FreedomIntelligence-indo-2e-4 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T09:14:33+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: hanifsyarubany10
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: hanifsyarubany10\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: hanifsyarubany10\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |