modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-01 06:28:43) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 546 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-01 06:27:36) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
TROCKZ/my-pet-cat
|
TROCKZ
| 2023-10-01T08:12:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-01T08:08:33Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat Dreambooth model trained by TROCKZ following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: -CEC-3
Sample pictures of this concept:
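A minimal inference sketch with 🤗 Diffusers (an illustration, not part of the original submission; it assumes the repository holds standard `StableDiffusionPipeline` weights, as the tags indicate, and the concept prompt is a guess):
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes standard StableDiffusionPipeline weights; adjust the prompt to the
# concept token actually used during DreamBooth training.
pipeline = StableDiffusionPipeline.from_pretrained(
    "TROCKZ/my-pet-cat", torch_dtype=torch.float16
).to("cuda")
image = pipeline("a photo of my pet cat sitting on a sofa", num_inference_steps=30).images[0]
image.save("my-pet-cat.png")
```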
|
chaocai/llama2-ft
|
chaocai
| 2023-10-01T07:53:51Z | 3 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-01T01:03:58Z |
---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: llama2-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-ft
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
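The card does not include a usage example; the sketch below is a hedged illustration of loading the checkpoint with 🤗 Transformers (the prompt and generation settings are assumptions, since no prompt template is documented):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: the card does not specify a prompt template.
tokenizer = AutoTokenizer.from_pretrained("chaocai/llama2-ft")
model = AutoModelForCausalLM.from_pretrained(
    "chaocai/llama2-ft", torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```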
|
tanvirsrbd1/flan-t5-base-model2
|
tanvirsrbd1
| 2023-10-01T07:28:23Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-01T07:20:53Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-model2
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2236
- Rouge1: 73.2746
- Rouge2: 65.1173
- Rougel: 72.149
- Rougelsum: 73.1838
- Gen Len: 16.1625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 11.4224 | 0.71 | 200 | 0.5002 | 49.2994 | 40.066 | 49.0355 | 49.3658 | 7.6113 |
| 0.4129 | 1.41 | 400 | 0.3030 | 72.7746 | 63.8666 | 71.4539 | 72.5604 | 16.2138 |
| 0.3131 | 2.12 | 600 | 0.2793 | 73.6239 | 64.6532 | 72.2694 | 73.5208 | 16.1148 |
| 0.2615 | 2.83 | 800 | 0.2674 | 73.2672 | 64.6251 | 72.0459 | 73.0736 | 16.2067 |
| 0.2347 | 3.53 | 1000 | 0.2631 | 73.0069 | 64.3272 | 71.8482 | 72.963 | 16.2049 |
| 0.2222 | 4.24 | 1200 | 0.2437 | 73.3821 | 64.9656 | 72.1995 | 73.2511 | 16.0795 |
| 0.2077 | 4.95 | 1400 | 0.2450 | 73.1663 | 64.7168 | 72.023 | 73.0977 | 16.0936 |
| 0.1976 | 5.65 | 1600 | 0.2296 | 73.2977 | 64.8011 | 72.2179 | 73.3089 | 16.1661 |
| 0.1804 | 6.36 | 1800 | 0.2268 | 73.1599 | 64.852 | 72.0518 | 73.1532 | 16.1802 |
| 0.1842 | 7.07 | 2000 | 0.2284 | 73.2343 | 64.944 | 72.046 | 73.1038 | 16.159 |
| 0.1776 | 7.77 | 2200 | 0.2255 | 73.3332 | 65.119 | 72.1684 | 73.2489 | 16.1449 |
| 0.1621 | 8.48 | 2400 | 0.2231 | 73.2057 | 64.9477 | 72.1727 | 73.1358 | 16.1219 |
| 0.1657 | 9.19 | 2600 | 0.2234 | 73.2285 | 65.0575 | 72.0227 | 73.2392 | 16.1608 |
| 0.1653 | 9.89 | 2800 | 0.2236 | 73.2746 | 65.1173 | 72.149 | 73.1838 | 16.1625 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
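The card omits a usage example; the following is a hedged sketch using the `text2text-generation` pipeline (the input task is an assumption, since the training dataset is not documented):
```python
from transformers import pipeline

# Load the fine-tuned FLAN-T5 checkpoint via the text2text-generation pipeline.
generator = pipeline("text2text-generation", model="tanvirsrbd1/flan-t5-base-model2")

# The card does not document the task the model was tuned for, so this input
# is purely illustrative.
print(generator("summarize: The quick brown fox jumps over the lazy dog.", max_length=64))
```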
|
BryanBradfo/CartPole-v1
|
BryanBradfo
| 2023-10-01T07:21:33Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-01T07:21:24Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
tomaarsen/span-marker-mbert-base-fewnerd-fine-super
|
tomaarsen
| 2023-10-01T07:02:20Z | 4 | 2 |
span-marker
|
[
"span-marker",
"pytorch",
"tensorboard",
"token-classification",
"ner",
"named-entity-recognition",
"generated_from_span_marker_trainer",
"en",
"multilingual",
"dataset:DFKI-SLT/few-nerd",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:cc-by-sa-4.0",
"model-index",
"co2_eq_emissions",
"region:us"
] |
token-classification
| 2023-09-30T23:26:01Z |
---
language:
- en
- multilingual
license: cc-by-sa-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
datasets:
- DFKI-SLT/few-nerd
metrics:
- precision
- recall
- f1
widget:
- text: "Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris."
example_title: "English 1"
- text: The WPC led the international peace movement in the decade after the Second
World War, but its failure to speak out against the Soviet suppression of the
1956 Hungarian uprising and the resumption of Soviet nuclear tests in 1961 marginalised
it, and in the 1960s it was eclipsed by the newer, non-aligned peace organizations
like the Campaign for Nuclear Disarmament.
example_title: "English 2"
- text: Most of the Steven Seagal movie "Under Siege" (co-starring Tommy Lee Jones)
was filmed on the Battleship USS Alabama, which is docked on Mobile Bay at Battleship
Memorial Park and open to the public.
example_title: "English 3"
- text: 'The Central African CFA franc (French: "franc CFA" or simply "franc", ISO
4217 code: XAF) is the currency of six independent states in Central Africa: Cameroon,
Central African Republic, Chad, Republic of the Congo, Equatorial Guinea and Gabon.'
example_title: "English 4"
- text: Brenner conducted post-doctoral research at Brandeis University with Gregory
Petsko and then took his first academic position at Thomas Jefferson University
in 1996, moving to Dartmouth Medical School in 2003, where he served as Associate
Director for Basic Sciences at Norris Cotton Cancer Center.
example_title: "English 5"
- text: On Friday, October 27, 2017, the Senate of Spain (Senado) voted 214 to 47
to invoke Article 155 of the Spanish Constitution over Catalonia after the Catalan
Parliament declared the independence.
example_title: "English 6"
- text: "Amelia Earthart voló su Lockheed Vega 5B monomotor a través del Océano Atlántico hasta París."
example_title: "Spanish"
- text: "Amelia Earthart a fait voler son monomoteur Lockheed Vega 5B à travers l'ocean Atlantique jusqu'à Paris."
example_title: "French"
- text: "Amelia Earthart flog mit ihrer einmotorigen Lockheed Vega 5B über den Atlantik nach Paris."
example_title: "German"
- text: "Амелия Эртхарт перелетела на своем одномоторном самолете Lockheed Vega 5B через Атлантический океан в Париж."
example_title: "Russian"
- text: "Amelia Earthart vloog met haar één-motorige Lockheed Vega 5B over de Atlantische Oceaan naar Parijs."
example_title: "Dutch"
- text: "Amelia Earthart przeleciała swoim jednosilnikowym samolotem Lockheed Vega 5B przez Ocean Atlantycki do Paryża."
example_title: "Polish"
- text: "Amelia Earthart flaug eins hreyfils Lockheed Vega 5B yfir Atlantshafið til Parísar."
example_title: "Icelandic"
- text: "Η Amelia Earthart πέταξε το μονοκινητήριο Lockheed Vega 5B της πέρα από τον Ατλαντικό Ωκεανό στο Παρίσι."
example_title: "Greek"
pipeline_tag: token-classification
co2_eq_emissions:
emissions: 572.6675932546113
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 3.867
hardware_used: 1 x NVIDIA GeForce RTX 3090
base_model: bert-base-multilingual-cased
model-index:
- name: SpanMarker with bert-base-multilingual-cased on FewNERD
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: FewNERD
type: DFKI-SLT/few-nerd
split: test
metrics:
- type: f1
value: 0.7006507253689264
name: F1
- type: precision
value: 0.7040676584045078
name: Precision
- type: recall
value: 0.6972667978051558
name: Recall
---
# SpanMarker with bert-base-multilingual-cased on FewNERD
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd) dataset that can be used for Named Entity Recognition. This SpanMarker model uses [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) as the underlying encoder.
## Model Details
### Model Description
- **Model Type:** SpanMarker
- **Encoder:** [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 8 words
- **Training Dataset:** [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd)
- **Languages:** en, multilingual
- **License:** cc-by-sa-4.0
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
### Model Labels
| Label | Examples |
|:-----------------------------------------|:---------------------------------------------------------------------------------------------------------|
| art-broadcastprogram | "Corazones", "Street Cents", "The Gale Storm Show : Oh , Susanna" |
| art-film | "L'Atlantide", "Bosch", "Shawshank Redemption" |
| art-music | "Atkinson , Danko and Ford ( with Brockie and Hilton )", "Hollywood Studio Symphony", "Champion Lover" |
| art-other | "Aphrodite of Milos", "The Today Show", "Venus de Milo" |
| art-painting | "Production/Reproduction", "Touit", "Cofiwch Dryweryn" |
| art-writtenart | "The Seven Year Itch", "Time", "Imelda de ' Lambertazzi" |
| building-airport | "Luton Airport", "Newark Liberty International Airport", "Sheremetyevo International Airport" |
| building-hospital | "Hokkaido University Hospital", "Yeungnam University Hospital", "Memorial Sloan-Kettering Cancer Center" |
| building-hotel | "Flamingo Hotel", "The Standard Hotel", "Radisson Blu Sea Plaza Hotel" |
| building-library | "British Library", "Bayerische Staatsbibliothek", "Berlin State Library" |
| building-other | "Communiplex", "Henry Ford Museum", "Alpha Recording Studios" |
| building-restaurant | "Fatburger", "Carnegie Deli", "Trumbull" |
| building-sportsfacility | "Sports Center", "Glenn Warner Soccer Facility", "Boston Garden" |
| building-theater | "Sanders Theatre", "Pittsburgh Civic Light Opera", "National Paris Opera" |
| event-attack/battle/war/militaryconflict | "Vietnam War", "Jurist", "Easter Offensive" |
| event-disaster | "1693 Sicily earthquake", "the 1912 North Mount Lyell Disaster", "1990s North Korean famine" |
| event-election | "March 1898 elections", "1982 Mitcham and Morden by-election", "Elections to the European Parliament" |
| event-other | "Eastwood Scoring Stage", "Masaryk Democratic Movement", "Union for a Popular Movement" |
| event-protest | "Russian Revolution", "Iranian Constitutional Revolution", "French Revolution" |
| event-sportsevent | "Stanley Cup", "World Cup", "National Champions" |
| location-GPE | "Mediterranean Basin", "Croatian", "the Republic of Croatia" |
| location-bodiesofwater | "Norfolk coast", "Atatürk Dam Lake", "Arthur Kill" |
| location-island | "Staten Island", "Laccadives", "new Samsat district" |
| location-mountain | "Miteirya Ridge", "Ruweisat Ridge", "Salamander Glacier" |
| location-other | "Victoria line", "Cartuther", "Northern City Line" |
| location-park | "Painted Desert Community Complex Historic District", "Shenandoah National Park", "Gramercy Park" |
| location-road/railway/highway/transit | "Friern Barnet Road", "Newark-Elizabeth Rail Link", "NJT" |
| organization-company | "Church 's Chicken", "Dixy Chicken", "Texas Chicken" |
| organization-education | "MIT", "Barnard College", "Belfast Royal Academy and the Ulster College of Physical Education" |
| organization-government/governmentagency | "Supreme Court", "Diet", "Congregazione dei Nobili" |
| organization-media/newspaper | "TimeOut Melbourne", "Clash", "Al Jazeera" |
| organization-other | "IAEA", "Defence Sector C", "4th Army" |
| organization-politicalparty | "Al Wafa ' Islamic", "Kenseitō", "Shimpotō" |
| organization-religion | "Christian", "UPCUSA", "Jewish" |
| organization-showorganization | "Lizzy", "Mr. Mister", "Bochumer Symphoniker" |
| organization-sportsleague | "China League One", "NHL", "First Division" |
| organization-sportsteam | "Luc Alphand Aventures", "Tottenham", "Arsenal" |
| other-astronomything | "`` Caput Larvae ''", "Algol", "Zodiac" |
| other-award | "GCON", "Order of the Republic of Guinea and Nigeria", "Grand Commander of the Order of the Niger" |
| other-biologything | "BAR", "Amphiphysin", "N-terminal lipid" |
| other-chemicalthing | "sulfur", "uranium", "carbon dioxide" |
| other-currency | "Travancore Rupee", "$", "lac crore" |
| other-disease | "bladder cancer", "hypothyroidism", "French Dysentery Epidemic of 1779" |
| other-educationaldegree | "Master", "Bachelor", "BSc ( Hons ) in physics" |
| other-god | "Fujin", "Raijin", "El" |
| other-language | "Latin", "English", "Breton-speaking" |
| other-law | "Thirty Years ' Peace", "United States Freedom Support Act", "Leahy–Smith America Invents Act ( AIA" |
| other-livingthing | "monkeys", "insects", "patchouli" |
| other-medical | "Pediatrics", "amitriptyline", "pediatrician" |
| person-actor | "Edmund Payne", "Ellaline Terriss", "Tchéky Karyo" |
| person-artist/author | "George Axelrod", "Hicks", "Gaetano Donizett" |
| person-athlete | "Tozawa", "Neville", "Jaguar" |
| person-director | "Richard Quine", "Frank Darabont", "Bob Swaim" |
| person-other | "Richard Benson", "Campbell", "Holden" |
| person-politician | "Rivière", "William", "Emeric" |
| person-scholar | "Wurdack", "Stedman", "Stalmine" |
| person-soldier | "Joachim Ziegler", "Krukenberg", "Helmuth Weidling" |
| product-airplane | "Luton", "Spey-equipped FGR.2s", "EC135T2 CPDS" |
| product-car | "Corvettes - GT1 C6R", "Phantom", "100EX" |
| product-food | "V. labrusca", "yakiniku", "red grape" |
| product-game | "Airforce Delta", "Hardcore RPG", "Splinter Cell" |
| product-other | "PDP-1", "Fairbottom Bobs", "X11" |
| product-ship | "HMS `` Chinkara ''", "Congress", "Essex" |
| product-software | "Apdf", "Wikipedia", "AmiPDF" |
| product-train | "Royal Scots Grey", "High Speed Trains", "55022" |
| product-weapon | "AR-15 's", "ZU-23-2M Wróbel", "ZU-23-2MR Wróbel II" |
## Evaluation
### Metrics
| Label | Precision | Recall | F1 |
|:-----------------------------------------|:----------|:-------|:-------|
| **all** | 0.7041 | 0.6973 | 0.7007 |
| art-broadcastprogram | 0.5863 | 0.6252 | 0.6051 |
| art-film | 0.7779 | 0.752 | 0.7647 |
| art-music | 0.8014 | 0.7570 | 0.7786 |
| art-other | 0.4209 | 0.3221 | 0.3649 |
| art-painting | 0.5938 | 0.6667 | 0.6281 |
| art-writtenart | 0.6854 | 0.6415 | 0.6628 |
| building-airport | 0.8197 | 0.8242 | 0.8219 |
| building-hospital | 0.7215 | 0.8187 | 0.7671 |
| building-hotel | 0.7233 | 0.6906 | 0.7066 |
| building-library | 0.7588 | 0.7268 | 0.7424 |
| building-other | 0.5842 | 0.5855 | 0.5848 |
| building-restaurant | 0.5567 | 0.4871 | 0.5195 |
| building-sportsfacility | 0.6512 | 0.7690 | 0.7052 |
| building-theater | 0.6994 | 0.7516 | 0.7246 |
| event-attack/battle/war/militaryconflict | 0.7800 | 0.7332 | 0.7559 |
| event-disaster | 0.5767 | 0.5266 | 0.5505 |
| event-election | 0.5106 | 0.1319 | 0.2096 |
| event-other | 0.4931 | 0.4145 | 0.4504 |
| event-protest | 0.3711 | 0.4337 | 0.4000 |
| event-sportsevent | 0.6156 | 0.6156 | 0.6156 |
| location-GPE | 0.8175 | 0.8508 | 0.8338 |
| location-bodiesofwater | 0.7297 | 0.7622 | 0.7456 |
| location-island | 0.7314 | 0.6703 | 0.6995 |
| location-mountain | 0.7538 | 0.7283 | 0.7409 |
| location-other | 0.4370 | 0.3040 | 0.3585 |
| location-park | 0.7063 | 0.6878 | 0.6969 |
| location-road/railway/highway/transit | 0.7092 | 0.7259 | 0.7174 |
| organization-company | 0.6911 | 0.6943 | 0.6927 |
| organization-education | 0.7799 | 0.7973 | 0.7885 |
| organization-government/governmentagency | 0.5518 | 0.4474 | 0.4942 |
| organization-media/newspaper | 0.6268 | 0.6761 | 0.6505 |
| organization-other | 0.5804 | 0.5341 | 0.5563 |
| organization-politicalparty | 0.6627 | 0.7306 | 0.6949 |
| organization-religion | 0.5636 | 0.6265 | 0.5934 |
| organization-showorganization | 0.6023 | 0.6086 | 0.6054 |
| organization-sportsleague | 0.6594 | 0.6497 | 0.6545 |
| organization-sportsteam | 0.7341 | 0.7703 | 0.7518 |
| other-astronomything | 0.7806 | 0.8289 | 0.8040 |
| other-award | 0.7230 | 0.6703 | 0.6957 |
| other-biologything | 0.6733 | 0.6366 | 0.6544 |
| other-chemicalthing | 0.5962 | 0.5838 | 0.5899 |
| other-currency | 0.7135 | 0.7822 | 0.7463 |
| other-disease | 0.6260 | 0.7063 | 0.6637 |
| other-educationaldegree | 0.6 | 0.6033 | 0.6016 |
| other-god | 0.7051 | 0.7118 | 0.7085 |
| other-language | 0.6849 | 0.7968 | 0.7366 |
| other-law | 0.6814 | 0.6843 | 0.6829 |
| other-livingthing | 0.5959 | 0.6443 | 0.6192 |
| other-medical | 0.5247 | 0.4811 | 0.5020 |
| person-actor | 0.8342 | 0.7960 | 0.8146 |
| person-artist/author | 0.7052 | 0.7482 | 0.7261 |
| person-athlete | 0.8396 | 0.8530 | 0.8462 |
| person-director | 0.725 | 0.7329 | 0.7289 |
| person-other | 0.6866 | 0.6672 | 0.6767 |
| person-politician | 0.6819 | 0.6852 | 0.6835 |
| person-scholar | 0.5468 | 0.4953 | 0.5198 |
| person-soldier | 0.5360 | 0.5641 | 0.5497 |
| product-airplane | 0.6825 | 0.6730 | 0.6777 |
| product-car | 0.7205 | 0.7016 | 0.7109 |
| product-food | 0.6036 | 0.5394 | 0.5697 |
| product-game | 0.7740 | 0.6876 | 0.7282 |
| product-other | 0.5250 | 0.4117 | 0.4615 |
| product-ship | 0.6781 | 0.6763 | 0.6772 |
| product-software | 0.6701 | 0.6603 | 0.6652 |
| product-train | 0.5919 | 0.6051 | 0.5984 |
| product-weapon | 0.6507 | 0.5433 | 0.5921 |
## Uses
### Direct Use for Inference
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-mbert-base-fewnerd-fine-super")
# Run inference
entities = model.predict("Most of the Steven Seagal movie \"Under Siege\" (co-starring Tommy Lee Jones) was filmed on the Battleship USS Alabama, which is docked on Mobile Bay at Battleship Memorial Park and open to the public.")
```
### Downstream Use
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-mbert-base-fewnerd-fine-super")

# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
model=model,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("tomaarsen/span-marker-mbert-base-fewnerd-fine-super-finetuned")
```
</details>
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:----|
| Sentence length | 1 | 24.4945 | 267 |
| Entities per sentence | 0 | 2.5832 | 88 |
### Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training Results
| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:------:|:-----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
| 0.2972 | 3000 | 0.0274 | 0.6488 | 0.6457 | 0.6473 | 0.9121 |
| 0.5944 | 6000 | 0.0252 | 0.6686 | 0.6545 | 0.6615 | 0.9160 |
| 0.8915 | 9000 | 0.0239 | 0.6918 | 0.6547 | 0.6727 | 0.9178 |
| 1.1887 | 12000 | 0.0235 | 0.6962 | 0.6727 | 0.6842 | 0.9210 |
| 1.4859 | 15000 | 0.0233 | 0.6872 | 0.6742 | 0.6806 | 0.9201 |
| 1.7831 | 18000 | 0.0226 | 0.6969 | 0.6891 | 0.6929 | 0.9236 |
| 2.0802 | 21000 | 0.0231 | 0.7030 | 0.6916 | 0.6973 | 0.9246 |
| 2.3774 | 24000 | 0.0227 | 0.7020 | 0.6936 | 0.6978 | 0.9248 |
| 2.6746 | 27000 | 0.0223 | 0.7079 | 0.6989 | 0.7034 | 0.9258 |
| 2.9718 | 30000 | 0.0222 | 0.7089 | 0.7009 | 0.7049 | 0.9263 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.573 kg of CO2
- **Hours Used**: 3.867 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.9.16
- SpanMarker: 1.4.1.dev
- Transformers: 4.30.0
- PyTorch: 2.0.1+cu118
- Datasets: 2.14.0
- Tokenizers: 0.13.2
## Citation
### BibTeX
```
@software{Aarsen_SpanMarker,
author = {Aarsen, Tom},
license = {Apache-2.0},
title = {{SpanMarker for Named Entity Recognition}},
url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Bisnu/whisper-small-dv
|
Bisnu
| 2023-10-01T06:46:27Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-01T04:36:27Z |
---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Bisnu sarkar
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 12.72733595298536
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Bisnu sarkar
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1677
- Wer Ortho: 62.0238
- Wer: 12.7273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1225 | 1.63 | 500 | 0.1677 | 62.0238 | 12.7273 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
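As a hedged illustration (not part of the original card), the checkpoint can be used through the `automatic-speech-recognition` pipeline; the audio filename below is a placeholder:
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for Dhivehi speech recognition.
asr = pipeline("automatic-speech-recognition", model="Bisnu/whisper-small-dv")

# "sample.wav" is a placeholder; pass any mono audio file (ffmpeg is required
# for decoding).
print(asr("sample.wav")["text"])
```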
|
aqachun/Vilt_fine_tune_2000
|
aqachun
| 2023-10-01T06:05:38Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vilt",
"visual-question-answering",
"generated_from_trainer",
"dataset:vqa",
"base_model:dandelin/vilt-b32-mlm",
"base_model:finetune:dandelin/vilt-b32-mlm",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
visual-question-answering
| 2023-09-30T08:35:51Z |
---
license: apache-2.0
base_model: dandelin/vilt-b32-mlm
tags:
- generated_from_trainer
datasets:
- vqa
model-index:
- name: Vilt_fine_tune_2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Vilt_fine_tune_2000
This model is a fine-tuned version of [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) on the vqa dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 1.12.1+cu113
- Datasets 2.14.5
- Tokenizers 0.13.3
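A hedged usage sketch, assuming the exported checkpoint is a `ViltForQuestionAnswering` model shipped with its processor, as the pipeline tag suggests (the image path and question are placeholders):
```python
from transformers import pipeline

# Assumes the repository ships the ViLT processor alongside the fine-tuned
# weights; otherwise load the processor from "dandelin/vilt-b32-mlm".
vqa = pipeline("visual-question-answering", model="aqachun/Vilt_fine_tune_2000")

# "cat.jpg" is a placeholder image path.
print(vqa(image="cat.jpg", question="What animal is in the picture?"))
```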
|
DamarJati/NSFW-Filterization-DecentScan
|
DamarJati
| 2023-10-01T06:02:23Z | 224 | 8 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"art",
"en",
"dataset:DamarJati/NSFW-filter-DecentScan",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-01T05:41:32Z |
---
datasets:
- DamarJati/NSFW-filter-DecentScan
language:
- en
pipeline_tag: image-classification
tags:
- art
---
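The card itself contains no description, so the following is only a hedged sketch of loading the classifier through the `image-classification` pipeline (label names depend on the model's config and are not documented here):
```python
from transformers import pipeline

# Load the Swin-based classifier from the Hub.
classifier = pipeline("image-classification", model="DamarJati/NSFW-Filterization-DecentScan")

# "image.jpg" is a placeholder path.
for prediction in classifier("image.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```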
|
Aungria/ppo-LunarLander-v2_2
|
Aungria
| 2023-10-01T05:55:31Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-01T05:55:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.12 +/- 18.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
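The usage section above is left as a TODO; the sketch below shows one common way to load a Stable-Baselines3 PPO checkpoint from the Hub, assuming the repository follows the usual `huggingface_sb3` layout (the checkpoint filename is an assumption and should be checked against the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename below is an assumption; check the repo's files for the exact name.
checkpoint = load_from_hub(repo_id="Aungria/ppo-LunarLander-v2_2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```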
|
Keenan5755/pg-Pixelcopter-PLE-v0
|
Keenan5755
| 2023-10-01T05:31:52Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-30T01:30:20Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pg-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 24.00 +/- 17.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Brouz/MaximalSlerp
|
Brouz
| 2023-10-01T05:04:20Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-29T19:10:36Z |
---
license: llama2
---
GGUFs here https://huggingface.co/Brouz/MaximalSlerp-GGUF
Gradient Slerp merge of https://huggingface.co/Gryphe/MythoLogic-L2-13b and https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-v1.2
Using Mergekit with the YAML branch https://github.com/cg123/mergekit/tree/yaml
Original Mythomax script: https://github.com/Gryphe/BlockMerge_Gradient/blob/main/YAML/MythoMix-Variant-L2-13b.yaml
Divine intellect or mental retardation?

|
AchyuthGamer/FlawlessAI
|
AchyuthGamer
| 2023-10-01T04:58:16Z | 29 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"finetuned",
"chatgpt",
"LLM",
"openGPT",
"free LLM",
"no api key",
"LLAMA",
"llama chat",
"opengpt model",
"opengpt llm",
"text-to-text",
"Text-to-Text",
"Chatbot",
"Chat UI",
"text-generation",
"conversational",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-01T03:12:09Z |
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
- chatgpt
- LLM
- openGPT
- free LLM
- no api key
- LLAMA
- llama chat
- opengpt model
- opengpt llm
- text-to-text
- Text-to-Text
- Chatbot
- Chat UI
---
# Model Card for OpenGPT-1.0
The OpenGPT-1.0 Large Language Model (LLM) is an instruct fine-tuned version of the [OpenGPT-1.0](https://huggingface.co/AchyuthGamer/OpenGPT) generative text model using a variety of publicly available conversation datasets.
For full details of this model please read our [release blog post](https://huggingface.co/AchyuthGamer/OpenGPT)
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("AchyuthGamer/OpenGPT")
tokenizer = AutoTokenizer.from_pretrained("AchyuthGamer/OpenGPT")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
File "", line 1, in
File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/transformers/models/auto/configuration_auto.py", line 723, in getitem
raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue:
`pip install git+https://github.com/huggingface/transformers`
This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
Meztli66/whisper-small-dv
|
Meztli66
| 2023-10-01T04:44:21Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-28T16:21:29Z |
---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 12.72733595298536
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1677
- Wer Ortho: 62.0238
- Wer: 12.7273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1225 | 1.63 | 500 | 0.1677 | 62.0238 | 12.7273 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
pandyamarut/sd-xl-colab
|
pandyamarut
| 2023-10-01T04:22:40Z | 5 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-10-01T03:44:18Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - mwiki/sd-xl-colab
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
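A hedged sketch of applying these LoRA weights with 🤗 Diffusers, assuming they are stored in the standard format produced by the DreamBooth LoRA training script:
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then attach the DreamBooth LoRA weights.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("pandyamarut/sd-xl-colab")

# The instance prompt from the card.
image = pipe("a photo of sks dog", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```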
|
robxiao/rl_unit_1
|
robxiao
| 2023-10-01T03:31:02Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-01T03:30:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.12 +/- 15.05
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ankushjamthikar/aj_first_model
|
ankushjamthikar
| 2023-10-01T03:23:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-01T02:22:55Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: aj_first_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93168
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aj_first_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2239
- Accuracy: 0.9317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2267 | 1.0 | 1563 | 0.2272 | 0.9166 |
| 0.1536 | 2.0 | 3126 | 0.2239 | 0.9317 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
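For illustration (not part of the original card), the classifier can be used via the `text-classification` pipeline; label names may be the generic `LABEL_0`/`LABEL_1` unless `id2label` was configured:
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT sentiment classifier.
classifier = pipeline("text-classification", model="ankushjamthikar/aj_first_model")

print(classifier("This movie was a delightful surprise from start to finish."))
```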
|
zongxiao/whisper-small-dv
|
zongxiao
| 2023-10-01T03:11:11Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-01T00:58:56Z |
---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - zongxiao -500
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 12.72733595298536
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - zongxiao -500
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1677
- Wer Ortho: 62.0238
- Wer: 12.7273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1225 | 1.63 | 500 | 0.1677 | 62.0238 | 12.7273 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
Doctor-Shotgun/CalliopeDS-v2-L2-13B
|
Doctor-Shotgun
| 2023-10-01T02:50:13Z | 1,623 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-28T22:25:47Z |
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- llama-2
license: llama2
---
# CalliopeDS-v2-L2-13B
[EXL2 Quants](https://huggingface.co/Doctor-Shotgun/CalliopeDS-v2-L2-13B-exl2)
[GGUF Quants](https://huggingface.co/Doctor-Shotgun/Misc-Models)
This is a Llama 2-based model consisting of a merge of several models using PEFT adapters and SLERP merging:
- [PygmalionAI/pygmalion-2-13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
- [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
- [Doctor-Shotgun/llama-2-supercot-lora](https://huggingface.co/Doctor-Shotgun/llama-2-supercot-lora)
- [lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT)
- [Undi95/Storytelling-v2-13B-lora](https://huggingface.co/Undi95/Storytelling-v2-13B-lora)
Charles Goddard's [mergekit](https://github.com/cg123/mergekit) repo was used to perform these operations.
The purpose of this merge was to create a model that excels at creative writing and roleplay while maintaining general intelligence and instruction-following capabilities. In testing, it has shown to be capable at producing descriptive and verbose responses while demonstrating a solid understanding of the context.
## Usage:
Due to this being a merge of multiple models, different prompt formats may work, but you can try the Alpaca instruction format of LIMARP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
Or the Pygmalion/Metharme format:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
<|user|>Hello!<|model|>{model's response goes here}
```
The model was also tested using a system prompt with no instruction sequences:
```
Write Character's next reply in the roleplay between User and Character. Stay in character and write creative responses that move the scenario forward. Narrate in detail, using elaborate descriptions. The following is your persona:
{{persona}}
[Current conversation]
User: {utterance}
Character: {utterance}
```
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: tiny, short, medium, long, huge, humongous, extreme, unlimited. The recommended starting length is medium. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the link repositories of the merged models for details.
|
Doctor-Shotgun/pygmalion-2-supercot-limarpv3-13b-exl2
|
Doctor-Shotgun
| 2023-10-01T02:49:47Z | 0 | 0 |
transformers
|
[
"transformers",
"llama",
"llama-2",
"text-generation",
"en",
"license:llama2",
"region:us"
] |
text-generation
| 2023-09-28T00:54:03Z |
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- llama-2
license: llama2
---
# pygmalion-2-supercot-limarpv3-13b-exl2
These are testing EXL2 quants of a Llama 2-based model consisting of a merge of several models using PEFT adapters:
- [PygmalionAI/pygmalion-2-13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
- [Doctor-Shotgun/llama-2-supercot-lora](https://huggingface.co/Doctor-Shotgun/llama-2-supercot-lora)
- [lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT)
Zaraki's [zarakitools](https://github.com/zarakiquemparte/zaraki-tools) repo was used to perform these operations.
The goal was to add the length instruction training of LimaRPv3 and additional stylistic elements to the Pygmalion 2 + SuperCoT model. GGUFs available in the Misc Models repo.
The versions are as follows:
- 1.0wt: all loras merged onto the base model at full weight
- 0.66wt: Pygmalion 2 + SuperCoT at full weight, LimaRPv3 added at 0.66 weight
- grad: Gradient merge with SuperCoT being added to the deep layers and LimaRPv3 added to the shallow layers, reaching an average weight of 0.5 for each LoRA

The quants are as follows:
- 4.0bpw-h6: 4 decoder bits per weight, 6 head bits
  - ideal for 12gb GPUs, or 16gb GPUs with NTK extended context or CFG
- 6.0bpw-h6: 6 decoder bits per weight, 6 head bits
  - ideal for 16gb GPUs, or 24gb GPUs with NTK extended context or CFG
- 8bit-32g-h8: all tensors 8bit 32g, 8 head bits
  - experimental quant, this is with exllamav2 monkeypatched to quantize all tensors to 8bit 32g
  - similar in size to old GPTQ 8bit no groupsize, recommend 24gb GPU
## Usage:
Due to this being a merge of multiple models, different prompt formats may work, but you can try the Alpaca instruction format of LIMARP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
Or the Pygmalion/Metharme format:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
<|user|>Hello!<|model|>{model's response goes here}
```
The model was also tested using a system prompt with no instruction sequences:
```
Write Character's next reply in the roleplay between User and Character. Stay in character and write creative responses that move the scenario forward. Narrate in detail, using elaborate descriptions. The following is your persona:
{{persona}}
[Current conversation]
User: {utterance}
Character: {utterance}
```
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: tiny, short, medium, long, huge, humongous, extreme, unlimited. The recommended starting length is medium. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the link repositories of the merged models for details.
|
Doctor-Shotgun/pygmalion-2-supercot-limarpv3-gradient-13b
|
Doctor-Shotgun
| 2023-10-01T02:49:26Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-30T18:44:35Z |
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- llama-2
license: llama2
---
# pygmalion-2-supercot-limarpv3-gradient-13b
[EXL2 Quants](https://huggingface.co/Doctor-Shotgun/pygmalion-2-supercot-limarpv3-13b-exl2)
[GGUF Quants](https://huggingface.co/Doctor-Shotgun/Misc-Models)
This is a Llama 2-based model consisting of a merge of several models using PEFT adapters:
- [PygmalionAI/pygmalion-2-13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
- [Doctor-Shotgun/llama-2-supercot-lora](https://huggingface.co/Doctor-Shotgun/llama-2-supercot-lora)
- [lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT)
Zaraki's [zarakitools](https://github.com/zarakiquemparte/zaraki-tools) repo was used to perform these operations.
The goal was to add the length instruction training of LimaRPv3 and additional stylistic elements to the Pygmalion 2 + SuperCoT model.
It is a gradient merge with SuperCoT being added to the deep layers and LimaRPv3 added to the shallow layers, reaching an average weight of 0.5 for each LoRA.

## Usage:
Due to this being a merge of multiple models, different prompt formats may work, but you can try the Alpaca instruction format of LIMARP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
Or the Pygmalion/Metharme format:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
<|user|>Hello!<|model|>{model's response goes here}
```
The model was also tested using a system prompt with no instruction sequences:
```
Write Character's next reply in the roleplay between User and Character. Stay in character and write creative responses that move the scenario forward. Narrate in detail, using elaborate descriptions. The following is your persona:
{{persona}}
[Current conversation]
User: {utterance}
Character: {utterance}
```
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: tiny, short, medium, long, huge, humongous, extreme, unlimited. The recommended starting length is medium. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the link repositories of the merged models for details.
|
PrakhAI/AIPlane3
|
PrakhAI
| 2023-10-01T02:38:05Z | 0 | 0 | null |
[
"arxiv:1710.10196",
"arxiv:1802.05957",
"region:us"
] | null | 2023-09-03T16:32:23Z |
---
datasets:
- https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/
---
| Generated | Real (for comparison) |
| ----- | --------- |
|  |  |
This GAN model is trained on the [FGVC Aircraft](https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/) dataset. The model uses [Progressive Growing](https://arxiv.org/pdf/1710.10196.pdf) with [Spectral Normalization](https://arxiv.org/pdf/1802.05957.pdf).
The work builds up on https://huggingface.co/PrakhAI/AIPlane and https://huggingface.co/PrakhAI/AIPlane2.
This model was trained to generate 256x256 images of aircraft. The implementation in JAX on Colab can be found [here](https://colab.research.google.com/github/prakharbanga/AIPlane3/blob/main/AIPlane3_ProGAN_%2B_Spectral_Norm_(256x256).ipynb).
# Convolutional Architecture
A significant improvement over https://huggingface.co/PrakhAI/AIPlane2 is the elimination of "checkerboard" artifacts. This is done by using an image resize followed by a convolution layer in the Generator, instead of a transposed convolution whose kernel size is not divisible by its stride; a minimal sketch of this upsampling block follows the comparison below.
| Transposed Convolution (kernel size not divisible by stride) | Resize followed by convolution |
| - | - |
|  |  |
# 'Good' Generated Samples

# ProGAN
Progressive Growing of GANs was proposed in [Progressive Growing of GANs for improved Quality, Stability, and Variation](https://arxiv.org/pdf/1710.10196.pdf)
The idea is to start learning at lower resolutions and grow the resolution of the GAN over time. This improves both:
- Training Speed: At lower resolutions, the Generator and Discriminator have fewer layers and fewer parameters.
- Convergence Speed: It is much easier to learn high-level features first and higher-granularity details afterwards, compared to learning both at the same time.

# Spectral Normalization
Spectral Normalization for GANs was first suggested in [Spectral Normalization for Generative Adversarial Networks](https://arxiv.org/pdf/1802.05957.pdf).
Spectral Normalization constrains the Gradient Norm of the Discriminator with respect to the input, yielding a much smoother loss landscape for the Generator to navigate through.

# Latent Space Interpolation
Latent Space Interpolation can be an educational exercise to get deeper insight into the model.
It is observed below that several aspects of the generated image, such as the color of the sky, the grounded-ness of the plane, and the plane shape and color, frequently vary continuously through the latent space.

# Training Progression
Unfortunately, after uploading, the first few seconds of the video are frozen. The full high-resolution video is in the model files.
<video controls src="https://cdn-uploads.huggingface.co/production/uploads/649f9483d76ca0fe679011c2/xwHwDXm6nOF1yzYJdbIkE.mp4"></video>
# Demo
The demo app for this model is at https://huggingface.co/spaces/PrakhAI/AIPlane3 (please "Restart this Space" if prompted).
|
mohankrishnan/llama2-QLORA-fineturned-french-language-yt
|
mohankrishnan
| 2023-10-01T02:27:59Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-01T02:27:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
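For reference, a hedged sketch of recreating this quantization config and attaching the adapter with `transformers` and `peft` (the base model name is an assumption, since it is not stated in this card):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirror the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base model, not stated in this card
    quantization_config=bnb_config,
    device_map="auto",
)
# Attach the QLoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base, "mohankrishnan/llama2-QLORA-fineturned-french-language-yt")
```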
### Framework versions
- PEFT 0.6.0.dev0
|
tempNameRepost15/pig_7B_rename
|
tempNameRepost15
| 2023-10-01T01:58:33Z | 82 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text generation",
"instruct",
"en",
"dataset:PygmalionAI/PIPPA",
"dataset:Open-Orca/OpenOrca",
"dataset:Norquinal/claude_multiround_chat_30k",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"dataset:databricks/databricks-dolly-15k",
"base_model:PygmalionAI/pygmalion-2-7b",
"base_model:quantized:PygmalionAI/pygmalion-2-7b",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-10-01T00:57:40Z |
---
language:
- en
license: llama2
tags:
- text generation
- instruct
datasets:
- PygmalionAI/PIPPA
- Open-Orca/OpenOrca
- Norquinal/claude_multiround_chat_30k
- jondurbin/airoboros-gpt4-1.4.1
- databricks/databricks-dolly-15k
model_name: Pygmalion 2 7B
base_model: PygmalionAI/pygmalion-2-7b
inference: false
model_creator: PygmalionAI
model_type: llama
pipeline_tag: text-generation
prompt_template: 'The model has been trained on prompts using three different roles,
which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind
the scenes, while the `<|user|>` prompt should be used to indicate user input.
The `<|model|>` token should then be used to indicate that the model should generate
a response. These tokens can happen multiple times and be chained up to form a conversation
history.
The system prompt has been designed to allow the model to "enter" various modes
and dictate the reply length. Here''s an example:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
```
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Pygmalion 2 7B - GPTQ
- Model creator: [PygmalionAI](https://huggingface.co/PygmalionAI)
- Original model: [Pygmalion 2 7B](https://huggingface.co/PygmalionAI/pygmalion-2-7b)
<!-- description start -->
## Description
This repo contains GPTQ model files for [PygmalionAI's Pygmalion 2 7B](https://huggingface.co/PygmalionAI/pygmalion-2-7b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Pygmalion-2-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF)
* [PygmalionAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PygmalionAI/pygmalion-2-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Custom
The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here's an example:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Pygmalion-2-7B-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Pygmalion-2-7B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Pygmalion-2-7B-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Pygmalion-2-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Pygmalion-2-7B-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: PygmalionAI's Pygmalion 2 7B
<h1 style="text-align: center">Pygmalion-2 7B</h1>
<h2 style="text-align: center">An instruction-tuned Llama-2 biased towards fiction writing and conversation.</h2>
## Model Details
The long-awaited release of our new models based on Llama-2 is finally here. Pygmalion-2 7B (formerly known as Metharme) is based on
[Llama-2 7B](https://huggingface.co/meta-llama/llama-2-7b-hf) released by Meta AI.
The Metharme models were an experiment to try and get a model that is usable for conversation, roleplaying and storywriting,
but which can be guided using natural language like other instruct models. After much deliberation, we reached the conclusion
that the Metharme prompting format is superior (and easier to use) compared to the classic Pygmalion.
This model was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories
and conversations with synthetically generated instructions attached.
This model is freely available for both commercial and non-commercial use, as per the Llama-2 license.
## Prompting
The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to
form a conversation history.
### Prompting example
The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here's an example:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
```
## Dataset
The dataset used to fine-tune this model includes our own [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA), along with several other instruction
datasets, and datasets acquired from various RP forums.
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that
are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive.
Outputs might often be factually wrong or misleading.
## Acknowledgements
We would like to thank [SpicyChat](https://spicychat.ai/) for sponsoring the training for this model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
tylerkiser/reinforcepixelcopter
|
tylerkiser
| 2023-10-01T01:41:59Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-01T01:41:19Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforcepixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 29.30 +/- 18.95
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Moses25/MosesLM-13B-chat
|
Moses25
| 2023-10-01T01:29:05Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-23T13:49:59Z |
---
license: apache-2.0
language:
- en
- zh
---
This model is pretrained based on meta/Llama-2-13b-chat-hf.
```
python -u gradio_demo.py --base_model MosesLM-13B-chat \
--lora_model MosesLM-13B-chat \
--alpha 1 \
--post_host 0.0.0.0 \
--port 7777
```

|
cloudwalkerw/wavlm-base_5
|
cloudwalkerw
| 2023-10-01T01:19:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wavlm",
"audio-classification",
"generated_from_trainer",
"base_model:microsoft/wavlm-base",
"base_model:finetune:microsoft/wavlm-base",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-30T18:27:03Z |
---
base_model: microsoft/wavlm-base
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wavlm-base_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-base_5
This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4151
- Accuracy: 0.8974
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 2
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3764 | 0.25 | 100 | 0.0277 | 0.9948 |
| 0.1211 | 0.5 | 200 | 0.0297 | 0.9981 |
| 0.2525 | 0.76 | 300 | 1.2840 | 0.9168 |
| 0.784 | 1.01 | 400 | 0.3443 | 0.8974 |
| 0.6053 | 1.26 | 500 | 0.3958 | 0.8974 |
| 0.6038 | 1.51 | 600 | 0.4848 | 0.8974 |
| 0.5996 | 1.76 | 700 | 0.3954 | 0.8974 |
| 0.5914 | 2.02 | 800 | 0.3970 | 0.8974 |
| 0.6077 | 2.27 | 900 | 0.4722 | 0.8974 |
| 0.5991 | 2.52 | 1000 | 0.4362 | 0.8974 |
| 0.5813 | 2.77 | 1100 | 0.3871 | 0.8974 |
| 0.5953 | 3.02 | 1200 | 0.4013 | 0.8974 |
| 0.5957 | 3.28 | 1300 | 0.4693 | 0.8974 |
| 0.5852 | 3.53 | 1400 | 0.3879 | 0.8974 |
| 0.6066 | 3.78 | 1500 | 0.4280 | 0.8974 |
| 0.6085 | 4.03 | 1600 | 0.4359 | 0.8974 |
| 0.5944 | 4.28 | 1700 | 0.4167 | 0.8974 |
| 0.5994 | 4.54 | 1800 | 0.4139 | 0.8974 |
| 0.5953 | 4.79 | 1900 | 0.4256 | 0.8974 |
| 0.5929 | 5.04 | 2000 | 0.4371 | 0.8974 |
| 0.6067 | 5.29 | 2100 | 0.4255 | 0.8974 |
| 0.5944 | 5.55 | 2200 | 0.4121 | 0.8974 |
| 0.5926 | 5.8 | 2300 | 0.4210 | 0.8974 |
| 0.594 | 6.05 | 2400 | 0.4057 | 0.8974 |
| 0.6042 | 6.3 | 2500 | 0.4252 | 0.8974 |
| 0.5971 | 6.55 | 2600 | 0.3958 | 0.8974 |
| 0.597 | 6.81 | 2700 | 0.4124 | 0.8974 |
| 0.5816 | 7.06 | 2800 | 0.4101 | 0.8974 |
| 0.5944 | 7.31 | 2900 | 0.4258 | 0.8974 |
| 0.6053 | 7.56 | 3000 | 0.4415 | 0.8974 |
| 0.5894 | 7.81 | 3100 | 0.4067 | 0.8974 |
| 0.5987 | 8.07 | 3200 | 0.4109 | 0.8974 |
| 0.5846 | 8.32 | 3300 | 0.4095 | 0.8974 |
| 0.5982 | 8.57 | 3400 | 0.4187 | 0.8974 |
| 0.5932 | 8.82 | 3500 | 0.4124 | 0.8974 |
| 0.6007 | 9.07 | 3600 | 0.4212 | 0.8974 |
| 0.6041 | 9.33 | 3700 | 0.4257 | 0.8974 |
| 0.5859 | 9.58 | 3800 | 0.4176 | 0.8974 |
| 0.5842 | 9.83 | 3900 | 0.4151 | 0.8974 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.0.post302
- Datasets 2.14.5
- Tokenizers 0.13.3
|
digiplay/fantasticmix_v65_test
|
digiplay
| 2023-10-01T01:12:23Z | 407 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-23T09:20:26Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Hi, I am a newbie here. This is my favorite model, and I tried to convert it to the diffusers format,
so you can easily load it with this model id:
digiplay/fantasticmix_v65_test
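A minimal loading sketch with `diffusers` (the prompt and settings are just examples):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/fantasticmix_v65_test",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of a cozy mountain cabin at sunset").images[0]
image.save("fantasticmix_sample.png")
```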
Original model information is here:
https://civitai.com/models/22402/fantasticmixreal
The author, *michin*, has another cool anime model;
you can also check it out, it's really cool and cute :)
|
LarryAIDraw/nagisa_bluearchive
|
LarryAIDraw
| 2023-10-01T00:55:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-01T00:44:05Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/130813/nagisa-blue-archive
|
foreverip/Reinforce-Pixelcopter-PLE-v0
|
foreverip
| 2023-10-01T00:52:53Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-01T00:52:47Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 49.20 +/- 40.28
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
digiplay/fantexi_v0.9
|
digiplay
| 2023-10-01T00:48:08Z | 455 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-20T16:51:06Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/131601?modelVersionId=144665


|
alperenunlu/a2c-PandaReachDense-v3
|
alperenunlu
| 2023-10-01T00:46:56Z | 1 | 2 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-01T00:41:25Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.24 +/- 0.07
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, following the usual `huggingface_sb3` naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub; the filename is assumed to follow
# the usual <algo>-<env>.zip convention used by huggingface_sb3.
checkpoint = load_from_hub("alperenunlu/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
learn3r/longt5_xl_gov_memsum_bp_5
|
learn3r
| 2023-10-01T00:31:45Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:learn3r/gov_report_memsum_bp",
"base_model:google/long-t5-tglobal-xl",
"base_model:finetune:google/long-t5-tglobal-xl",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-29T18:08:07Z |
---
license: apache-2.0
base_model: google/long-t5-tglobal-xl
tags:
- generated_from_trainer
datasets:
- learn3r/gov_report_memsum_bp
metrics:
- rouge
model-index:
- name: longt5_xl_gov_memsum_bp_5
results:
- task:
name: Summarization
type: summarization
dataset:
name: learn3r/gov_report_memsum_bp
type: learn3r/gov_report_memsum_bp
metrics:
- name: Rouge1
type: rouge
value: 55.1149
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5_xl_gov_memsum_bp_5
This model is a fine-tuned version of [google/long-t5-tglobal-xl](https://huggingface.co/google/long-t5-tglobal-xl) on the learn3r/gov_report_memsum_bp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9813
- Rouge1: 55.1149
- Rouge2: 30.149
- Rougel: 31.9694
- Rougelsum: 52.9549
- Gen Len: 1101.6060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:---------:|
| 1.1562 | 1.0 | 272 | 1.0105 | 37.2934 | 18.6683 | 24.0563 | 35.6575 | 1844.1543 |
| 0.9737 | 2.0 | 545 | 0.9813 | 55.1149 | 30.149 | 31.9694 | 52.9549 | 1101.6060 |
| 0.8395 | 3.0 | 818 | 0.9925 | 57.4498 | 31.9315 | 32.914 | 55.2389 | 1055.9784 |
| 0.7353 | 4.0 | 1091 | 1.0404 | 67.3946 | 39.2034 | 36.8583 | 64.9879 | 829.2881 |
| 0.6212 | 4.99 | 1360 | 1.0752 | 64.5433 | 36.9477 | 35.3482 | 62.2005 | 779.6152 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
duytintruong/ppo-Huggy
|
duytintruong
| 2023-10-01T00:24:42Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-10-01T00:24:38Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: duytintruong/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
selinawisco/wav2vec2-base-finetuned-ks-unbalanced
|
selinawisco
| 2023-10-01T00:09:30Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-30T10:27:12Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks-unbalanced
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: superb
type: superb
config: ks
split: validation
args: ks
metrics:
- name: Accuracy
type: accuracy
value: 0.9779346866725508
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks-unbalanced
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1805
- Accuracy: 0.9779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2449 | 1.0 | 199 | 1.1639 | 0.6731 |
| 0.5926 | 2.0 | 399 | 0.4796 | 0.9256 |
| 0.3655 | 3.0 | 599 | 0.2651 | 0.9737 |
| 0.2644 | 4.0 | 799 | 0.1970 | 0.9766 |
| 0.2425 | 4.98 | 995 | 0.1805 | 0.9779 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
tempNameRepost15/pig_13B_rename
|
tempNameRepost15
| 2023-09-30T23:38:57Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text generation",
"instruct",
"en",
"dataset:PygmalionAI/PIPPA",
"dataset:Open-Orca/OpenOrca",
"dataset:Norquinal/claude_multiround_chat_30k",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"dataset:databricks/databricks-dolly-15k",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-30T18:19:34Z |
---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
license: llama2
datasets:
- PygmalionAI/PIPPA
- Open-Orca/OpenOrca
- Norquinal/claude_multiround_chat_30k
- jondurbin/airoboros-gpt4-1.4.1
- databricks/databricks-dolly-15k
---
<h1 style="text-align: center">Pygmalion-2 13B</h1>
<h2 style="text-align: center">An instruction-tuned Llama-2 biased towards fiction writing and conversation.</h2>
## Model Details
The long-awaited release of our new models based on Llama-2 is finally here. Pygmalion-2 13B (formerly known as Metharme) is based on
[Llama-2 13B](https://huggingface.co/meta-llama/llama-2-13b-hf) released by Meta AI.
The Metharme models were an experiment to try and get a model that is usable for conversation, roleplaying and storywriting,
but which can be guided using natural language like other instruct models. After much deliberation, we reached the conclusion
that the Metharme prompting format is superior (and easier to use) compared to the classic Pygmalion.
This model was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories
and conversations with synthetically generated instructions attached.
This model is freely available for both commercial and non-commercial use, as per the Llama-2 license.
## Prompting
The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to
form a conversation history.
### Prompting example
The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here's an example:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
```
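As a hedged sketch of putting this together with `transformers` (the repo id below points at the original PygmalionAI release this card describes; generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Chain the role tokens described above into a conversation history.
# {{char}} and {{persona}} are placeholders you would substitute with real values.
system = (
    "<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:\n"
    "{{persona}}\n"
    "You shall reply to the user while staying in character, and generate long responses."
)
prompt = system + "<|user|>Hello!<|model|>"

model_id = "PygmalionAI/pygmalion-2-13b"  # assumed original repo for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```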
## Dataset
The dataset used to fine-tune this model includes our own [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA), along with several other instruction
datasets, and datasets acquired from various RP forums.
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
## Acknowledgements
We would like to thank [SpicyChat](https://spicychat.ai/) for sponsoring the training for this model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
actionpace/Nova-13B
|
actionpace
| 2023-09-30T23:28:12Z | 0 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-30T23:23:25Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* Nova-13B_Q5_K_M.gguf
**Source:** [PulsarAI](https://huggingface.co/PulsarAI)
**Source Model:** [Nova-13B](https://huggingface.co/PulsarAI/Nova-13B)
**Source models for PulsarAI/Nova-13B (Merge)**
- [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) ([Ref](https://huggingface.co/actionpace/Llama-2-13b-hf))
|
actionpace/Orca-Nova-13B
|
actionpace
| 2023-09-30T23:05:17Z | 1 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-30T22:59:28Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* Orca-Nova-13B_Q5_K_M.gguf
**Source:** [PulsarAI](https://huggingface.co/PulsarAI)
**Source Model:** [Orca-Nova-13B](https://huggingface.co/PulsarAI/Orca-Nova-13B)
**Source models for PulsarAI/Orca-Nova-13B (Merge)**
- [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) ([Ref](https://huggingface.co/actionpace/Llama-2-13b-hf))
- [Open-Orca/OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B)
|
eddyyeo/ppo-PyramidsRND
|
eddyyeo
| 2023-09-30T21:22:24Z | 18 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-09-30T21:22:18Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: eddyyeo/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CiaraRowles/TemporalDiff
|
CiaraRowles
| 2023-09-30T21:08:06Z | 0 | 171 | null |
[
"text-to-video",
"license:openrail",
"region:us"
] |
text-to-video
| 2023-09-09T09:13:21Z |
---
license: openrail
pipeline_tag: text-to-video
---
TemporalDiff is a finetune of the original AnimateDiff weights on a higher resolution dataset (512x512).
Testing so far indicates a higher level of video coherency than the original weights. I also adjusted the stride from 4 to 2 frames to make the motion smoother.
A current limitation is that the labelling for my dataset was a bit off, so the model has a slightly reduced ability to interpret the prompt; I'll be releasing a new version that fixes that soon.
This should work the same as the base model in terms of use: just drag and drop it into ComfyUI or the AnimateDiff repository and use it as normal.
This does not require any additional memory to run, as generations were already 512x512 before; only the training was previously done at 256x256.
|
ProtonH/PPO-U8P1-LunarLander-v2
|
ProtonH
| 2023-09-30T20:59:57Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-30T20:59:52Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -221.89 +/- 121.17
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'ProtonHPPO',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'ProtonH/PPO-U8P1-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
impactframes/IF_PromptMKR_GPTQ
|
impactframes
| 2023-09-30T20:58:46Z | 12 | 16 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-12T09:32:37Z |
---
license: llama2
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
This model is meant to be used with my extension for Automatic1111 and SDNext.
It is a QLoRA trained on about 80K Alpaca-style instructions, built on top of Llama-2 13B instruct.
https://github.com/if-ai/IF_prompt_MKR
-`♡´- Thanks to all my supporters on Youtube and kofi @impactframes
[](https://ko-fi.com/O4O51R44U)
[](https://youtu.be/dg_8cGzzfY4)
[](https://youtu.be/Y1E_y7ZrX5w)
[](https://youtu.be/Bg9jV2Vxkk4)
|
foreverip/Reinforce-CartPole-v1
|
foreverip
| 2023-09-30T20:49:36Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-30T20:49:25Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nulltella/phi-1_5-finetuned-model-32-09
|
nulltella
| 2023-09-30T20:37:04Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
] | null | 2023-09-30T19:47:41Z |
---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-model-32-09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-model-32-09
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
tomaarsen/span-marker-xlm-roberta-base-fewnerd-fine-super
|
tomaarsen
| 2023-09-30T19:30:13Z | 71 | 1 |
span-marker
|
[
"span-marker",
"pytorch",
"tensorboard",
"safetensors",
"token-classification",
"ner",
"named-entity-recognition",
"generated_from_span_marker_trainer",
"en",
"multilingual",
"dataset:DFKI-SLT/few-nerd",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:cc-by-sa-4.0",
"model-index",
"co2_eq_emissions",
"region:us"
] |
token-classification
| 2023-06-15T13:46:14Z |
---
language:
- en
- multilingual
license: cc-by-sa-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
datasets:
- DFKI-SLT/few-nerd
metrics:
- precision
- recall
- f1
widget:
- text: "Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris."
example_title: "English 1"
- text: The WPC led the international peace movement in the decade after the Second
World War, but its failure to speak out against the Soviet suppression of the
1956 Hungarian uprising and the resumption of Soviet nuclear tests in 1961 marginalised
it, and in the 1960s it was eclipsed by the newer, non-aligned peace organizations
like the Campaign for Nuclear Disarmament.
example_title: "English 2"
- text: Most of the Steven Seagal movie "Under Siege" (co-starring Tommy Lee Jones)
was filmed on the Battleship USS Alabama, which is docked on Mobile Bay at Battleship
Memorial Park and open to the public.
example_title: "English 3"
- text: 'The Central African CFA franc (French: "franc CFA" or simply "franc", ISO
4217 code: XAF) is the currency of six independent states in Central Africa: Cameroon,
Central African Republic, Chad, Republic of the Congo, Equatorial Guinea and Gabon.'
example_title: "English 4"
- text: Brenner conducted post-doctoral research at Brandeis University with Gregory
Petsko and then took his first academic position at Thomas Jefferson University
in 1996, moving to Dartmouth Medical School in 2003, where he served as Associate
Director for Basic Sciences at Norris Cotton Cancer Center.
example_title: "English 5"
- text: On Friday, October 27, 2017, the Senate of Spain (Senado) voted 214 to 47
to invoke Article 155 of the Spanish Constitution over Catalonia after the Catalan
Parliament declared the independence.
example_title: "English 6"
- text: "Amelia Earthart voló su Lockheed Vega 5B monomotor a través del Océano Atlántico hasta París."
example_title: "Spanish"
- text: "Amelia Earthart a fait voler son monomoteur Lockheed Vega 5B à travers l'ocean Atlantique jusqu'à Paris."
example_title: "French"
- text: "Amelia Earthart flog mit ihrer einmotorigen Lockheed Vega 5B über den Atlantik nach Paris."
example_title: "German"
- text: "Амелия Эртхарт перелетела на своем одномоторном самолете Lockheed Vega 5B через Атлантический океан в Париж."
example_title: "Russian"
- text: "Amelia Earthart vloog met haar één-motorige Lockheed Vega 5B over de Atlantische Oceaan naar Parijs."
example_title: "Dutch"
- text: "Amelia Earthart przeleciała swoim jednosilnikowym samolotem Lockheed Vega 5B przez Ocean Atlantycki do Paryża."
example_title: "Polish"
- text: "Amelia Earthart flaug eins hreyfils Lockheed Vega 5B yfir Atlantshafið til Parísar."
example_title: "Icelandic"
- text: "Η Amelia Earthart πέταξε το μονοκινητήριο Lockheed Vega 5B της πέρα από τον Ατλαντικό Ωκεανό στο Παρίσι."
example_title: "Greek"
pipeline_tag: token-classification
co2_eq_emissions:
emissions: 452.84872035276965
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 3.118
hardware_used: 1 x NVIDIA GeForce RTX 3090
base_model: xlm-roberta-base
model-index:
- name: SpanMarker with xlm-roberta-base on FewNERD
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: FewNERD
type: DFKI-SLT/few-nerd
split: test
metrics:
- type: f1
value: 0.6884821229658107
name: F1
- type: precision
value: 0.6890426017339362
name: Precision
- type: recall
value: 0.6879225552622042
name: Recall
---
# SpanMarker with xlm-roberta-base on FewNERD
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd) dataset that can be used for Named Entity Recognition. This SpanMarker model uses [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) as the underlying encoder.
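A minimal inference sketch with the `span_marker` library (the example sentence is taken from the widget above):
```python
from span_marker import SpanMarkerModel

# Download the model from the 🤗 Hub and run named entity recognition.
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-xlm-roberta-base-fewnerd-fine-super")
entities = model.predict("Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris.")
for entity in entities:
    print(entity["span"], "->", entity["label"], f"({entity['score']:.2f})")
```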
## Model Details
### Model Description
- **Model Type:** SpanMarker
- **Encoder:** [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 8 words
- **Training Dataset:** [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd)
- **Languages:** en, multilingual
- **License:** cc-by-sa-4.0
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
### Model Labels
| Label | Examples |
|:-----------------------------------------|:---------------------------------------------------------------------------------------------------------|
| art-broadcastprogram | "The Gale Storm Show : Oh , Susanna", "Corazones", "Street Cents" |
| art-film | "L'Atlantide", "Shawshank Redemption", "Bosch" |
| art-music | "Hollywood Studio Symphony", "Atkinson , Danko and Ford ( with Brockie and Hilton )", "Champion Lover" |
| art-other | "Venus de Milo", "Aphrodite of Milos", "The Today Show" |
| art-painting | "Cofiwch Dryweryn", "Production/Reproduction", "Touit" |
| art-writtenart | "The Seven Year Itch", "Time", "Imelda de ' Lambertazzi" |
| building-airport | "Newark Liberty International Airport", "Luton Airport", "Sheremetyevo International Airport" |
| building-hospital | "Hokkaido University Hospital", "Yeungnam University Hospital", "Memorial Sloan-Kettering Cancer Center" |
| building-hotel | "Radisson Blu Sea Plaza Hotel", "The Standard Hotel", "Flamingo Hotel" |
| building-library | "British Library", "Berlin State Library", "Bayerische Staatsbibliothek" |
| building-other | "Communiplex", "Henry Ford Museum", "Alpha Recording Studios" |
| building-restaurant | "Fatburger", "Carnegie Deli", "Trumbull" |
| building-sportsfacility | "Boston Garden", "Glenn Warner Soccer Facility", "Sports Center" |
| building-theater | "Pittsburgh Civic Light Opera", "National Paris Opera", "Sanders Theatre" |
| event-attack/battle/war/militaryconflict | "Jurist", "Easter Offensive", "Vietnam War" |
| event-disaster | "1693 Sicily earthquake", "1990s North Korean famine", "the 1912 North Mount Lyell Disaster" |
| event-election | "March 1898 elections", "Elections to the European Parliament", "1982 Mitcham and Morden by-election" |
| event-other | "Eastwood Scoring Stage", "Union for a Popular Movement", "Masaryk Democratic Movement" |
| event-protest | "Russian Revolution", "French Revolution", "Iranian Constitutional Revolution" |
| event-sportsevent | "World Cup", "Stanley Cup", "National Champions" |
| location-GPE | "Mediterranean Basin", "Croatian", "the Republic of Croatia" |
| location-bodiesofwater | "Norfolk coast", "Atatürk Dam Lake", "Arthur Kill" |
| location-island | "Laccadives", "Staten Island", "new Samsat district" |
| location-mountain | "Ruweisat Ridge", "Miteirya Ridge", "Salamander Glacier" |
| location-other | "Victoria line", "Northern City Line", "Cartuther" |
| location-park | "Painted Desert Community Complex Historic District", "Shenandoah National Park", "Gramercy Park" |
| location-road/railway/highway/transit | "Newark-Elizabeth Rail Link", "NJT", "Friern Barnet Road" |
| organization-company | "Church 's Chicken", "Texas Chicken", "Dixy Chicken" |
| organization-education | "MIT", "Belfast Royal Academy and the Ulster College of Physical Education", "Barnard College" |
| organization-government/governmentagency | "Congregazione dei Nobili", "Diet", "Supreme Court" |
| organization-media/newspaper | "TimeOut Melbourne", "Al Jazeera", "Clash" |
| organization-other | "IAEA", "4th Army", "Defence Sector C" |
| organization-politicalparty | "Al Wafa ' Islamic", "Shimpotō", "Kenseitō" |
| organization-religion | "UPCUSA", "Jewish", "Christian" |
| organization-showorganization | "Bochumer Symphoniker", "Mr. Mister", "Lizzy" |
| organization-sportsleague | "First Division", "NHL", "China League One" |
| organization-sportsteam | "Tottenham", "Arsenal", "Luc Alphand Aventures" |
| other-astronomything | "Algol", "Zodiac", "`` Caput Larvae ''" |
| other-award | "Grand Commander of the Order of the Niger", "Order of the Republic of Guinea and Nigeria", "GCON" |
| other-biologything | "Amphiphysin", "BAR", "N-terminal lipid" |
| other-chemicalthing | "carbon dioxide", "sulfur", "uranium" |
| other-currency | "$", "lac crore", "Travancore Rupee" |
| other-disease | "hypothyroidism", "bladder cancer", "French Dysentery Epidemic of 1779" |
| other-educationaldegree | "Master", "Bachelor", "BSc ( Hons ) in physics" |
| other-god | "El", "Fujin", "Raijin" |
| other-language | "Breton-speaking", "Latin", "English" |
| other-law | "United States Freedom Support Act", "Thirty Years ' Peace", "Leahy–Smith America Invents Act ( AIA" |
| other-livingthing | "insects", "patchouli", "monkeys" |
| other-medical | "amitriptyline", "pediatrician", "Pediatrics" |
| person-actor | "Tchéky Karyo", "Edmund Payne", "Ellaline Terriss" |
| person-artist/author | "George Axelrod", "Hicks", "Gaetano Donizett" |
| person-athlete | "Jaguar", "Neville", "Tozawa" |
| person-director | "Richard Quine", "Frank Darabont", "Bob Swaim" |
| person-other | "Campbell", "Richard Benson", "Holden" |
| person-politician | "Rivière", "Emeric", "William" |
| person-scholar | "Stedman", "Wurdack", "Stalmine" |
| person-soldier | "Joachim Ziegler", "Krukenberg", "Helmuth Weidling" |
| product-airplane | "EC135T2 CPDS", "Spey-equipped FGR.2s", "Luton" |
| product-car | "Phantom", "Corvettes - GT1 C6R", "100EX" |
| product-food | "V. labrusca", "red grape", "yakiniku" |
| product-game | "Hardcore RPG", "Airforce Delta", "Splinter Cell" |
| product-other | "PDP-1", "Fairbottom Bobs", "X11" |
| product-ship | "Essex", "Congress", "HMS `` Chinkara ''" |
| product-software | "Wikipedia", "Apdf", "AmiPDF" |
| product-train | "55022", "Royal Scots Grey", "High Speed Trains" |
| product-weapon | "AR-15 's", "ZU-23-2MR Wróbel II", "ZU-23-2M Wróbel" |
## Evaluation
### Metrics
| Label | Precision | Recall | F1 |
|:-----------------------------------------|:----------|:-------|:-------|
| **all** | 0.6890 | 0.6879 | 0.6885 |
| art-broadcastprogram | 0.6000 | 0.5771 | 0.5883 |
| art-film | 0.7384 | 0.7453 | 0.7419 |
| art-music | 0.7930 | 0.7221 | 0.7558 |
| art-other | 0.4245 | 0.2900 | 0.3446 |
| art-painting | 0.5476 | 0.4035 | 0.4646 |
| art-writtenart | 0.6400 | 0.6539 | 0.6469 |
| building-airport | 0.8219 | 0.8242 | 0.8230 |
| building-hospital | 0.7024 | 0.8104 | 0.7526 |
| building-hotel | 0.7175 | 0.7283 | 0.7228 |
| building-library | 0.7400 | 0.7296 | 0.7348 |
| building-other | 0.5828 | 0.5910 | 0.5869 |
| building-restaurant | 0.5525 | 0.5216 | 0.5366 |
| building-sportsfacility | 0.6187 | 0.7881 | 0.6932 |
| building-theater | 0.7067 | 0.7626 | 0.7336 |
| event-attack/battle/war/militaryconflict | 0.7544 | 0.7468 | 0.7506 |
| event-disaster | 0.5882 | 0.5314 | 0.5584 |
| event-election | 0.4167 | 0.2198 | 0.2878 |
| event-other | 0.4902 | 0.4042 | 0.4430 |
| event-protest | 0.3643 | 0.2831 | 0.3186 |
| event-sportsevent | 0.6125 | 0.6239 | 0.6182 |
| location-GPE | 0.8102 | 0.8553 | 0.8321 |
| location-bodiesofwater | 0.6888 | 0.7725 | 0.7282 |
| location-island | 0.7285 | 0.6440 | 0.6836 |
| location-mountain | 0.7129 | 0.7327 | 0.7227 |
| location-other | 0.4376 | 0.2560 | 0.3231 |
| location-park | 0.6991 | 0.6900 | 0.6945 |
| location-road/railway/highway/transit | 0.6936 | 0.7259 | 0.7094 |
| organization-company | 0.6921 | 0.6912 | 0.6917 |
| organization-education | 0.7838 | 0.7963 | 0.7900 |
| organization-government/governmentagency | 0.5363 | 0.4394 | 0.4831 |
| organization-media/newspaper | 0.6215 | 0.6705 | 0.6451 |
| organization-other | 0.5766 | 0.5157 | 0.5444 |
| organization-politicalparty | 0.6449 | 0.7324 | 0.6859 |
| organization-religion | 0.5139 | 0.6057 | 0.5560 |
| organization-showorganization | 0.5620 | 0.5657 | 0.5638 |
| organization-sportsleague | 0.6348 | 0.6542 | 0.6443 |
| organization-sportsteam | 0.7138 | 0.7566 | 0.7346 |
| other-astronomything | 0.7418 | 0.7625 | 0.7520 |
| other-award | 0.7291 | 0.6736 | 0.7002 |
| other-biologything | 0.6735 | 0.6275 | 0.6497 |
| other-chemicalthing | 0.6025 | 0.5651 | 0.5832 |
| other-currency | 0.6843 | 0.8411 | 0.7546 |
| other-disease | 0.6284 | 0.7089 | 0.6662 |
| other-educationaldegree | 0.5856 | 0.6033 | 0.5943 |
| other-god | 0.6089 | 0.6913 | 0.6475 |
| other-language | 0.6608 | 0.7968 | 0.7225 |
| other-law | 0.6693 | 0.7246 | 0.6958 |
| other-livingthing | 0.6070 | 0.6014 | 0.6042 |
| other-medical | 0.5062 | 0.5113 | 0.5088 |
| person-actor | 0.8274 | 0.7673 | 0.7962 |
| person-artist/author | 0.6761 | 0.7294 | 0.7018 |
| person-athlete | 0.8132 | 0.8347 | 0.8238 |
| person-director | 0.6750 | 0.6823 | 0.6786 |
| person-other | 0.6472 | 0.6388 | 0.6429 |
| person-politician | 0.6621 | 0.6593 | 0.6607 |
| person-scholar | 0.5181 | 0.5007 | 0.5092 |
| person-soldier | 0.4750 | 0.5131 | 0.4933 |
| product-airplane | 0.6230 | 0.6717 | 0.6464 |
| product-car | 0.7293 | 0.7176 | 0.7234 |
| product-food | 0.5758 | 0.5185 | 0.5457 |
| product-game | 0.7049 | 0.6734 | 0.6888 |
| product-other | 0.5477 | 0.4067 | 0.4668 |
| product-ship | 0.6247 | 0.6395 | 0.6320 |
| product-software | 0.6497 | 0.6760 | 0.6626 |
| product-train | 0.5505 | 0.5732 | 0.5616 |
| product-weapon | 0.6004 | 0.4744 | 0.5300 |
## Uses
### Direct Use for Inference
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-xlm-roberta-base-fewnerd-fine-super")
# Run inference
entities = model.predict("Most of the Steven Seagal movie \"Under Siege \"(co-starring Tommy Lee Jones) was filmed on the, which is docked on Mobile Bay at Battleship Memorial Park and open to the public.")
```
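The `predict` call returns the extracted spans as plain Python data, so post-processing is straightforward. Below is a minimal sketch of how the result might be inspected; the exact field names (`span`, `label`, `score`) are assumptions about the SpanMarker output format and should be checked against the version you have installed.
```python
# Each prediction is expected to be a dict along the lines of
# {"span": "Tommy Lee Jones", "label": "person-actor", "score": 0.98, ...}
# (field names are assumptions; verify against your installed SpanMarker version)
for entity in entities:
    print(f'{entity["span"]!r:<40} {entity["label"]:<30} {entity["score"]:.2f}')
```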
### Downstream Use
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-xlm-roberta-base-fewnerd-fine-super")
# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
model=model,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("tomaarsen/span-marker-xlm-roberta-base-fewnerd-fine-super-finetuned")
```
</details>
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:----|
| Sentence length | 1 | 24.4945 | 267 |
| Entities per sentence | 0 | 2.5832 | 88 |
### Training Hyperparameters
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
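For reference, the run can be approximated with a standard `transformers.TrainingArguments` passed to the SpanMarker `Trainer`. This is a minimal sketch of the hyperparameters listed above, not the original training script, and the `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above (Adam betas/epsilon are left at their defaults)
args = TrainingArguments(
    output_dir="models/span-marker-xlm-roberta-base-fewnerd-fine-super",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
```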
### Training Results
| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:------:|:-----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
| 0.2947 | 3000 | 0.0318 | 0.6058 | 0.5990 | 0.6024 | 0.9020 |
| 0.5893 | 6000 | 0.0266 | 0.6556 | 0.6679 | 0.6617 | 0.9173 |
| 0.8840 | 9000 | 0.0250 | 0.6691 | 0.6804 | 0.6747 | 0.9206 |
| 1.1787 | 12000 | 0.0239 | 0.6865 | 0.6761 | 0.6813 | 0.9212 |
| 1.4733 | 15000 | 0.0234 | 0.6872 | 0.6812 | 0.6842 | 0.9226 |
| 1.7680 | 18000 | 0.0231 | 0.6919 | 0.6821 | 0.6870 | 0.9227 |
| 2.0627 | 21000 | 0.0231 | 0.6909 | 0.6871 | 0.6890 | 0.9233 |
| 2.3573 | 24000 | 0.0231 | 0.6903 | 0.6875 | 0.6889 | 0.9238 |
| 2.6520 | 27000 | 0.0229 | 0.6918 | 0.6926 | 0.6922 | 0.9242 |
| 2.9467 | 30000 | 0.0228 | 0.6927 | 0.6930 | 0.6928 | 0.9243 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.453 kg of CO2
- **Hours Used**: 3.118 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.9.16
- SpanMarker: 1.4.1.dev
- Transformers: 4.30.0
- PyTorch: 2.0.1+cu118
- Datasets: 2.14.0
- Tokenizers: 0.13.2
## Citation
### BibTeX
```
@software{Aarsen_SpanMarker,
author = {Aarsen, Tom},
license = {Apache-2.0},
title = {{SpanMarker for Named Entity Recognition}},
url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
algorithm6174/summarizer1102023
|
algorithm6174
| 2023-09-30T19:26:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-30T18:41:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
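For reference, this configuration corresponds to loading the base model in 4-bit NF4 with float16 compute via `transformers.BitsAndBytesConfig` and then attaching this adapter with PEFT. The sketch below is an assumption: the base model name is a placeholder (it is not stated in this card), and `AutoModelForCausalLM` should be swapped for `AutoModelForSeq2SeqLM` if the adapter targets an encoder-decoder model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror of the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model_name = "meta-llama/Llama-2-7b-hf"  # placeholder: the base model is not named in this card
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(base_model, "algorithm6174/summarizer1102023")
```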
### Framework versions
- PEFT 0.6.0.dev0
|
TheBloke/MegaMix-T1-13B-GPTQ
|
TheBloke
| 2023-09-30T19:25:50Z | 29 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:gradientputri/MegaMix-T1-13B",
"base_model:quantized:gradientputri/MegaMix-T1-13B",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-09-30T18:56:52Z |
---
base_model: gradientputri/MegaMix-T1-13B
inference: false
license: llama2
model_creator: Putri
model_name: Megamix T1 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Megamix T1 13B - GPTQ
- Model creator: [Putri](https://huggingface.co/gradientputri)
- Original model: [Megamix T1 13B](https://huggingface.co/gradientputri/MegaMix-T1-13B)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Putri's Megamix T1 13B](https://huggingface.co/gradientputri/MegaMix-T1-13B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MegaMix-T1-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MegaMix-T1-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF)
* [Putri's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/gradientputri/MegaMix-T1-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/MegaMix-T1-13B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/MegaMix-T1-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/MegaMix-T1-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/MegaMix-T1-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/MegaMix-T1-13B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 14.54 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/MegaMix-T1-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/MegaMix-T1-13B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/MegaMix-T1-13B-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `MegaMix-T1-13B-GPTQ`:
```shell
mkdir MegaMix-T1-13B-GPTQ
huggingface-cli download TheBloke/MegaMix-T1-13B-GPTQ --local-dir MegaMix-T1-13B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir MegaMix-T1-13B-GPTQ
huggingface-cli download TheBloke/MegaMix-T1-13B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir MegaMix-T1-13B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir MegaMix-T1-13B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MegaMix-T1-13B-GPTQ --local-dir MegaMix-T1-13B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/MegaMix-T1-13B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/MegaMix-T1-13B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/MegaMix-T1-13B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `MegaMix-T1-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/MegaMix-T1-13B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Putri's Megamix T1 13B
No original model card was available.
|
ProtonH/Reinforce-PixelCopter-v0
|
ProtonH
| 2023-09-30T19:16:23Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-30T09:52:02Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 34.70 +/- 23.70
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
TheBloke/MegaMix-S1-13B-GPTQ
|
TheBloke
| 2023-09-30T18:54:45Z | 26 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:gradientputri/MegaMix-S1-13B",
"base_model:quantized:gradientputri/MegaMix-S1-13B",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-09-30T18:26:03Z |
---
base_model: gradientputri/MegaMix-S1-13B
inference: false
license: llama2
model_creator: Putri
model_name: Megamix S1 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Megamix S1 13B - GPTQ
- Model creator: [Putri](https://huggingface.co/gradientputri)
- Original model: [Megamix S1 13B](https://huggingface.co/gradientputri/MegaMix-S1-13B)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Putri's Megamix S1 13B](https://huggingface.co/gradientputri/MegaMix-S1-13B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MegaMix-S1-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MegaMix-S1-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MegaMix-S1-13B-GGUF)
* [Putri's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/gradientputri/MegaMix-S1-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/MegaMix-S1-13B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/MegaMix-S1-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/MegaMix-S1-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/MegaMix-S1-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/MegaMix-S1-13B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 14.54 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/MegaMix-S1-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/MegaMix-S1-13B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/MegaMix-S1-13B-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `MegaMix-S1-13B-GPTQ`:
```shell
mkdir MegaMix-S1-13B-GPTQ
huggingface-cli download TheBloke/MegaMix-S1-13B-GPTQ --local-dir MegaMix-S1-13B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir MegaMix-S1-13B-GPTQ
huggingface-cli download TheBloke/MegaMix-S1-13B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir MegaMix-S1-13B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir MegaMix-S1-13B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MegaMix-S1-13B-GPTQ --local-dir MegaMix-S1-13B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/MegaMix-S1-13B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/MegaMix-S1-13B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/MegaMix-S1-13B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `MegaMix-S1-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/MegaMix-S1-13B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Putri's Megamix S1 13B
No original model card was available.
|
Jatin7698/my-pet-dog-xzg
|
Jatin7698
| 2023-09-30T18:50:29Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-30T18:37:42Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-xzg Dreambooth model trained by Jatin7698 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

|
Slycat/my-pet-dog-dfg
|
Slycat
| 2023-09-30T18:35:37Z | 0 | 0 | null |
[
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-30T18:32:18Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-dfg Dreambooth model trained by Slycat following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: IITI-61
Sample pictures of this concept:

|
or4cl3ai/SoundSlayerAI
|
or4cl3ai
| 2023-09-30T18:34:20Z | 13 | 19 |
transformers
|
[
"transformers",
"music",
"text-to-speech",
"en",
"es",
"it",
"pt",
"la",
"fr",
"ru",
"zh",
"ja",
"el",
"dataset:Fhrozen/AudioSet2K22",
"dataset:Chr0my/Epidemic_sounds",
"dataset:ChristophSchuhmann/lyrics-index",
"dataset:Cropinky/rap_lyrics_english",
"dataset:tsterbak/eurovision-lyrics-1956-2023",
"dataset:brunokreiner/genius-lyrics",
"dataset:google/MusicCaps",
"dataset:ccmusic-database/music_genre",
"dataset:Hyeon2/riffusion-musiccaps-dataset",
"dataset:SamAct/autotrain-data-musicprompt",
"dataset:Chr0my/Epidemic_music",
"dataset:juliensimon/autonlp-data-song-lyrics",
"dataset:Datatang/North_American_English_Speech_Data_by_Mobile_Phone_and_PC",
"dataset:Chr0my/freesound.org",
"dataset:teticio/audio-diffusion-256",
"dataset:KELONMYOSA/dusha_emotion_audio",
"dataset:Ar4ikov/iemocap_audio_text_splitted",
"dataset:flexthink/ljspeech",
"dataset:mozilla-foundation/common_voice_13_0",
"dataset:facebook/voxpopuli",
"dataset:SocialGrep/one-million-reddit-jokes",
"dataset:breadlicker45/human-midi-rlhf",
"dataset:breadlicker45/midi-gpt-music-small",
"dataset:projectlosangeles/Los-Angeles-MIDI-Dataset",
"dataset:huggingartists/epic-rap-battles-of-history",
"dataset:SocialGrep/one-million-reddit-confessions",
"dataset:shahules786/prosocial-nsfw-reddit",
"dataset:Thewillonline/reddit-sarcasm",
"dataset:autoevaluate/autoeval-eval-futin__guess-vi-4200fb-2012366606",
"dataset:lmsys/chatbot_arena_conversations",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:mozilla-foundation/common_voice_4_0",
"dataset:dell-research-harvard/AmericanStories",
"dataset:zZWipeoutZz/insane_style",
"dataset:mu-llama/MusicQA",
"dataset:RaphaelOlivier/whisper_adversarial_examples",
"dataset:huggingartists/metallica",
"dataset:vldsavelyev/guitar_tab",
"dataset:NLPCoreTeam/humaneval_ru",
"dataset:seungheondoh/audioset-music",
"dataset:gary109/onset-singing3_corpora_parliament_processed_MIR-ST500",
"dataset:LDD5522/Rock_Vocals",
"dataset:huggingartists/rage-against-the-machine",
"dataset:huggingartists/chester-bennington",
"dataset:huggingartists/logic",
"dataset:cmsolson75/artist_song_lyric_dataset",
"dataset:BhavyaMuni/artist-lyrics",
"dataset:vjain/emotional_intelligence",
"dataset:mhenrichsen/context-aware-splits",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-01T02:34:57Z |
---
license: openrail
datasets:
- Fhrozen/AudioSet2K22
- Chr0my/Epidemic_sounds
- ChristophSchuhmann/lyrics-index
- Cropinky/rap_lyrics_english
- tsterbak/eurovision-lyrics-1956-2023
- brunokreiner/genius-lyrics
- google/MusicCaps
- ccmusic-database/music_genre
- Hyeon2/riffusion-musiccaps-dataset
- SamAct/autotrain-data-musicprompt
- Chr0my/Epidemic_music
- juliensimon/autonlp-data-song-lyrics
- Datatang/North_American_English_Speech_Data_by_Mobile_Phone_and_PC
- Chr0my/freesound.org
- teticio/audio-diffusion-256
- KELONMYOSA/dusha_emotion_audio
- Ar4ikov/iemocap_audio_text_splitted
- flexthink/ljspeech
- mozilla-foundation/common_voice_13_0
- facebook/voxpopuli
- SocialGrep/one-million-reddit-jokes
- breadlicker45/human-midi-rlhf
- breadlicker45/midi-gpt-music-small
- projectlosangeles/Los-Angeles-MIDI-Dataset
- huggingartists/epic-rap-battles-of-history
- SocialGrep/one-million-reddit-confessions
- shahules786/prosocial-nsfw-reddit
- Thewillonline/reddit-sarcasm
- autoevaluate/autoeval-eval-futin__guess-vi-4200fb-2012366606
- lmsys/chatbot_arena_conversations
- mozilla-foundation/common_voice_11_0
- mozilla-foundation/common_voice_4_0
- dell-research-harvard/AmericanStories
- zZWipeoutZz/insane_style
- mu-llama/MusicQA
- RaphaelOlivier/whisper_adversarial_examples
- huggingartists/metallica
- vldsavelyev/guitar_tab
- NLPCoreTeam/humaneval_ru
- seungheondoh/audioset-music
- gary109/onset-singing3_corpora_parliament_processed_MIR-ST500
- LDD5522/Rock_Vocals
- huggingartists/rage-against-the-machine
- huggingartists/chester-bennington
- huggingartists/logic
- cmsolson75/artist_song_lyric_dataset
- BhavyaMuni/artist-lyrics
- vjain/emotional_intelligence
- mhenrichsen/context-aware-splits
metrics:
- accuracy
- bertscore
- bleu
- bleurt
- brier_score
- character
- chrf
language:
- en
- es
- it
- pt
- la
- fr
- ru
- zh
- ja
- el
library_name: transformers
tags:
- music
pipeline_tag: text-to-speech
---
# SoundSlayerAI
SoundSlayerAI is an innovative project focused on music-related tasks. It aims to provide a range of functionalities for audio analysis and processing, making it easier to work with music datasets.
## Datasets
SoundSlayerAI makes use of the following datasets:
- Fhrozen/AudioSet2K22
- Chr0my/Epidemic_sounds
- ChristophSchuhmann/lyrics-index
- Cropinky/rap_lyrics_english
- tsterbak/eurovision-lyrics-1956-2023
- brunokreiner/genius-lyrics
- google/MusicCaps
- ccmusic-database/music_genre
- Hyeon2/riffusion-musiccaps-dataset
- SamAct/autotrain-data-musicprompt
- Chr0my/Epidemic_music
- juliensimon/autonlp-data-song-lyrics
- Datatang/North_American_English_Speech_Data_by_Mobile_Phone_and_PC
- Chr0my/freesound.org
- teticio/audio-diffusion-256
- KELONMYOSA/dusha_emotion_audio
- Ar4ikov/iemocap_audio_text_splitted
- flexthink/ljspeech
- mozilla-foundation/common_voice_13_0
- facebook/voxpopuli
- SocialGrep/one-million-reddit-jokes
- breadlicker45/human-midi-rlhf
- breadlicker45/midi-gpt-music-small
- projectlosangeles/Los-Angeles-MIDI-Dataset
- huggingartists/epic-rap-battles-of-history
- SocialGrep/one-million-reddit-confessions
- shahules786/prosocial-nsfw-reddit
- Thewillonline/reddit-sarcasm
- autoevaluate/autoeval-eval-futin__guess-vi-4200fb-2012366606
- lmsys/chatbot_arena_conversations
- mozilla-foundation/common_voice_11_0
- mozilla-foundation/common_voice_4_0
## Library
The core library used in this project is "pyannote-audio." This library provides a wide range of functionalities for audio analysis and processing, making it an excellent choice for working with music datasets. The "pyannote-audio" library offers a comprehensive set of tools and algorithms for tasks such as audio segmentation, speaker diarization, music transcription, and more.
## Metrics
To evaluate the performance of SoundSlayerAI, several metrics are employed, including:
- Accuracy
- Bertscore
- BLEU
- BLEURT
- Brier Score
- Character
These metrics help assess the effectiveness and accuracy of the implemented algorithms and models.
## Language
The SoundSlayerAI project primarily focuses on the English language. The datasets and models used in this project are optimized for English audio and text analysis tasks.
## Usage
To use SoundSlayerAI, follow these steps (a minimal code sketch follows the list):
1. Install the required dependencies by running `pip install pyannote-audio`.
2. Import the necessary modules from the "pyannote.audio" package to access the desired functionalities.
3. Load the audio data or use the provided datasets to perform tasks such as audio segmentation, speaker diarization, music transcription, and more.
4. Apply the appropriate algorithms and models from the "pyannote.audio" library to process and analyze the audio data.
5. Evaluate the results using the specified metrics, such as accuracy, bertscore, BLEU, BLEURT, brier_score, and character.
6. Iterate and refine your approach to achieve the desired outcomes for your music-related tasks.
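Below is a minimal sketch of steps 2-4 using `pyannote.audio` for speaker diarization. The pipeline name and the Hugging Face access token requirement are assumptions based on the public pyannote checkpoints rather than anything specified by this project:
```python
from pyannote.audio import Pipeline

# Load a pretrained diarization pipeline (name and token are assumptions, not part of SoundSlayerAI;
# the gated pyannote checkpoints require accepting their terms and supplying an HF token)
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization", use_auth_token="hf_...")

# Run diarization on a local audio file and print the speaker turns
diarization = pipeline("song_with_vocals.wav")
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{speaker}: {turn.start:.1f}s - {turn.end:.1f}s")
```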
## License
SoundSlayerAI is released under the Openrail license. Please refer to the LICENSE file for more details.
## Contributions
Contributions to SoundSlayerAI are welcome! If you have any ideas, bug fixes, or enhancements, feel free to submit a pull request or open an issue on the GitHub repository.
## Contact
For any inquiries or questions regarding SoundSlayerAI, please reach out to the project maintainer at [insert email address].
Thank you for your interest in SoundSlayerAI!
|
TheBloke/Megamix-A1-13B-GPTQ
|
TheBloke
| 2023-09-30T18:23:50Z | 21 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:gradientputri/Megamix-A1-13B",
"base_model:quantized:gradientputri/Megamix-A1-13B",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-09-30T17:54:42Z |
---
base_model: gradientputri/Megamix-A1-13B
inference: false
license: llama2
model_creator: Putri
model_name: Megamix A1 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Megamix A1 13B - GPTQ
- Model creator: [Putri](https://huggingface.co/gradientputri)
- Original model: [Megamix A1 13B](https://huggingface.co/gradientputri/Megamix-A1-13B)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Putri's Megamix A1 13B](https://huggingface.co/gradientputri/Megamix-A1-13B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Megamix-A1-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Megamix-A1-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF)
* [Putri's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/gradientputri/Megamix-A1-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Megamix-A1-13B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Megamix-A1-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Megamix-A1-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Megamix-A1-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Megamix-A1-13B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 14.54 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Megamix-A1-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Megamix-A1-13B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Megamix-A1-13B-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Megamix-A1-13B-GPTQ`:
```shell
mkdir Megamix-A1-13B-GPTQ
huggingface-cli download TheBloke/Megamix-A1-13B-GPTQ --local-dir Megamix-A1-13B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Megamix-A1-13B-GPTQ
huggingface-cli download TheBloke/Megamix-A1-13B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Megamix-A1-13B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Megamix-A1-13B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Megamix-A1-13B-GPTQ --local-dir Megamix-A1-13B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Megamix-A1-13B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Megamix-A1-13B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Megamix-A1-13B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Megamix-A1-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Megamix-A1-13B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Putri's Megamix A1 13B
No original model card was available.
|
TheBloke/Megamix-A1-13B-GGUF
|
TheBloke
| 2023-09-30T18:00:48Z | 73 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"base_model:gradientputri/Megamix-A1-13B",
"base_model:quantized:gradientputri/Megamix-A1-13B",
"license:llama2",
"region:us"
] | null | 2023-09-30T17:54:52Z |
---
base_model: gradientputri/Megamix-A1-13B
inference: false
license: llama2
model_creator: Putri
model_name: Megamix A1 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Megamix A1 13B - GGUF
- Model creator: [Putri](https://huggingface.co/gradientputri)
- Original model: [Megamix A1 13B](https://huggingface.co/gradientputri/Megamix-A1-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Putri's Megamix A1 13B](https://huggingface.co/gradientputri/Megamix-A1-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Megamix-A1-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Megamix-A1-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF)
* [Putri's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/gradientputri/Megamix-A1-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [megamix-a1-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [megamix-a1-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [megamix-a1-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [megamix-a1-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [megamix-a1-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [megamix-a1-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [megamix-a1-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [megamix-a1-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [megamix-a1-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [megamix-a1-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [megamix-a1-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [megamix-a1-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Megamix-A1-13B-GGUF and below it, a specific filename to download, such as: megamix-a1-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Megamix-A1-13B-GGUF megamix-a1-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Megamix-A1-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Megamix-A1-13B-GGUF megamix-a1-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m megamix-a1-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Megamix-A1-13B-GGUF", model_file="megamix-a1-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
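For reference, here is a rough sketch of wiring one of these GGUF files into LangChain through the ctransformers integration; the class and parameter names are assumptions that may differ between LangChain versions.
```python
from langchain.llms import CTransformers

# Point the ctransformers wrapper at this repo and a specific GGUF file
llm = CTransformers(
    model="TheBloke/Megamix-A1-13B-GGUF",
    model_file="megamix-a1-13b.Q4_K_M.gguf",
    model_type="llama",
)
print(llm("AI is going to"))
```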
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Putri's Megamix A1 13B
No original model card was available.
<!-- original-model-card end -->
|
SarthakBhatore/finetuning-sentiment-model-40000-samples
|
SarthakBhatore
| 2023-09-30T18:00:22Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-30T13:55:33Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-40000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-40000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
lmz/candle-mistral
|
lmz
| 2023-09-30T17:57:45Z | 17 | 6 | null |
[
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2023-09-28T08:20:28Z |
---
license: apache-2.0
---
Refer to the main [model card](https://huggingface.co/mistralai/Mistral-7B-v0.1); this repo holds a simple safetensors conversion of the weights.
|
ABHILASH44/my-pet-dog
|
ABHILASH44
| 2023-09-30T17:55:47Z | 0 | 0 | null |
[
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-30T17:55:01Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by ABHILASH44 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CBIT10
Sample pictures of this concept:
|
soBeauty/V3_20230929-5-xlm-roberta-base-new
|
soBeauty
| 2023-09-30T17:51:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-30T17:07:02Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V3_20230929-5-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V3_20230929-5-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.5172
- Loss: 2.6032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.4963 | 0.46 | 200 | 0.2938 | nan |
| 4.0957 | 0.91 | 400 | 0.3098 | 3.5806 |
| 3.7431 | 1.37 | 600 | 0.3455 | nan |
| 3.6542 | 1.82 | 800 | 0.3075 | nan |
| 3.5585 | 2.28 | 1000 | 0.3546 | 3.4012 |
| 3.4027 | 2.73 | 1200 | 0.4049 | 3.2653 |
| 3.3416 | 3.19 | 1400 | 0.4053 | nan |
| 3.314 | 3.64 | 1600 | 0.4505 | nan |
| 3.2035 | 4.1 | 1800 | 0.4140 | 2.8518 |
| 3.1372 | 4.56 | 2000 | 0.4553 | 2.7572 |
| 3.0738 | 5.01 | 2200 | 0.4188 | 3.1020 |
| 3.0354 | 5.47 | 2400 | 0.4483 | 2.9353 |
| 3.0447 | 5.92 | 2600 | 0.4729 | 2.8608 |
| 2.6643 | 6.38 | 2800 | 0.4833 | 2.6200 |
| 2.8909 | 6.83 | 3000 | 0.4858 | 2.4677 |
| 2.9888 | 7.29 | 3200 | 0.4676 | 2.8088 |
| 2.8658 | 7.74 | 3400 | 0.5162 | 2.6409 |
| 2.7865 | 8.2 | 3600 | 0.5294 | nan |
| 2.8237 | 8.66 | 3800 | 0.4986 | nan |
| 2.7182 | 9.11 | 4000 | 0.5087 | nan |
| 2.7962 | 9.57 | 4200 | 0.5459 | nan |
| 2.5706 | 10.02 | 4400 | 0.4801 | nan |
| 2.528 | 10.48 | 4600 | 0.4893 | 2.2799 |
| 2.7482 | 10.93 | 4800 | 0.5227 | nan |
| 2.799 | 11.39 | 5000 | 0.4501 | nan |
| 2.471 | 11.85 | 5200 | 0.5323 | 2.4217 |
| 2.6071 | 12.3 | 5400 | 0.5420 | nan |
| 2.5139 | 12.76 | 5600 | 0.5511 | 2.1409 |
| 2.4214 | 13.21 | 5800 | 0.5215 | 2.4055 |
| 2.608 | 13.67 | 6000 | 0.5197 | 2.3034 |
| 2.5468 | 14.12 | 6200 | 0.5259 | nan |
| 2.4802 | 14.58 | 6400 | 0.5172 | 2.6032 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
SarthakBhatore/codegen-350M-mono-18k-alpaca-python
|
SarthakBhatore
| 2023-09-30T17:39:58Z | 110 | 2 |
transformers
|
[
"transformers",
"pytorch",
"codegen",
"text-generation",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-23T15:01:05Z |
---
datasets:
- iamtarun/python_code_instructions_18k_alpaca
---
# CodeGen-350M-mono-18k-Alpaca-Python
This repository contains a fine-tuned language model, "CodeGen-350M-mono-18k-Alpaca-Python," which is based on the Salesforce-codegen-350M model and fine-tuned on the "iamtarun/python_code_instructions_18k_alpaca" dataset. This model is designed to assist developers in generating Python code instructions and snippets based on natural language prompts.
## Model Details
- **Model Name:** CodeGen-350M-mono-18k-Alpaca-Python
- **Base Model:** Salesforce-codegen-350M
- **Dataset:** iamtarun/python_code_instructions_18k_alpaca
- **Model Size:** 350 million parameters
## Usage
You can use this model in various NLP tasks that involve generating Python code from natural language prompts. Below is an example of how to use this model with the Hugging Face Transformers library in Python:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-username/codegen-350M-mono-18k-alpaca-python"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Input text
text = "Create a function that calculates the factorial of a number in Python."

# Tokenize the text
input_ids = tokenizer.encode(text, return_tensors="pt")

# Generate Python code
output = model.generate(input_ids, max_length=100, num_return_sequences=1, no_repeat_ngram_size=2)

# Decode and print the generated code
generated_code = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_code)
```
For more information on using Hugging Face models, refer to the official documentation.
## Fine-Tuning Details
The CodeGen-350M-mono-18k-Alpaca-Python model was fine-tuned on the "iamtarun/python_code_instructions_18k_alpaca" dataset using the Hugging Face Transformers library. The fine-tuning process involved adapting the base Salesforce-codegen-350M model to generate Python code instructions specifically for the provided dataset.
|
DoctorWho264/Pin
|
DoctorWho264
| 2023-09-30T17:38:53Z | 0 | 0 | null |
[
"ru",
"en",
"license:mit",
"region:us"
] | null | 2023-09-30T17:37:53Z |
---
license: mit
language:
- ru
- en
---
|
PanoEvJ/T5_summarization_RLAIF
|
PanoEvJ
| 2023-09-30T17:37:41Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-29T19:46:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
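Since this repo ships a PEFT adapter, a rough loading sketch is shown below. It assumes the adapter sits on top of a seq2seq (T5-style) base model, which is inferred from the repo name rather than documented here; the adapter config is used to locate the actual base checkpoint.
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

adapter_id = "PanoEvJ/T5_summarization_RLAIF"

# Read the adapter config to find the base checkpoint it was trained on
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the adapter weights to the base model
model = PeftModel.from_pretrained(base, adapter_id)
```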
|
rafi2000/ushno-model
|
rafi2000
| 2023-09-30T17:36:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"doi:10.57967/hf/1175",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-30T17:30:38Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ushno-model Dreambooth model trained by rafi2000 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
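A minimal sketch of loading the checkpoint with diffusers is shown below; the prompt, including the instance token, is an assumption, since the trained concept word is not stated in this card.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint from this repo
pipe = StableDiffusionPipeline.from_pretrained("rafi2000/ushno-model", torch_dtype=torch.float16).to("cuda")

# The instance token below is a guess; replace it with the concept word used during training
image = pipe("a photo of ushno").images[0]
image.save("ushno.png")
```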
Sample pictures of this concept:
|
acalatrava/TinyLlama-1.1B-translate-en-es
|
acalatrava
| 2023-09-30T17:24:30Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"es",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:sam-mosaic/orca-gpt4-chatml",
"dataset:alvations/globalvoices-en-es",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-28T01:25:03Z |
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- sam-mosaic/orca-gpt4-chatml
- alvations/globalvoices-en-es
language:
- en
- es
---
<div align="center">
# TinyLlama-1.1B-translate-en-es
</div>
This is a fine-tuned version trained on a partial dataset from alvations/globalvoices-en-es to test performance on the translation task. It has been trained to translate English to Spanish and vice versa with only 20k rows from the dataset.
The translation is not very accurate but it shows a lot of potential.
In order to use it you have to follow the chatml standard like so:
---
english to spanish:
```
<|im_start|>user Translate this to spanish: ```A father and son, who have been living off grid for 20 years, encounter an outsider who threatens to destroy the utopia they've built.```
<|im_start|>assistant
```
This will provide the following result:
```
Un padre y hijo, que han vivido sin comida desde hace 20 años, encuentran un invitado quien amenaza con destruir la utopía que ellos han creado.
```
---
spanish to english:
```
<|im_start|>user Traduce esto al ingles: ```España se queda sin Copilot para Windows 11: la regulación de la UE frena su despliegue en Europa.```
<|im_start|>assistant
```
Which will be completed as:
```
Spain is left without Copilot for Windows 11: the control of the UE has halted its deployment in Europe.
```
---
The results are far from perfect but there is A LOT of room for improvement since it was fine-tuned with only 20k rows from the dataset (which has 355k rows) for 2 epochs. This training took only about 5 hours on an "M1 Pro" processor.
The base model used is a fine-tuned model with orca dataset [acalatrava/TinyLlama-1.1B-orca-gpt4](https://huggingface.co/acalatrava/TinyLlama-1.1B-orca-gpt4)
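For completeness, here is a minimal sketch of running the model with transformers using the chatml-style prompt shown above; the example text and generation parameters are assumptions.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "acalatrava/TinyLlama-1.1B-translate-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the chatml-style prompt used by this model
text = "A father and son encounter an outsider."
prompt = f"<|im_start|>user Translate this to spanish: ```{text}```\n<|im_start|>assistant"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```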
### Training
- **Method**: QLORA
- **Time**: 10h on a M1 Pro 32GB
- **Based on**: [https://colab.research.google.com/drive/1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g](https://colab.research.google.com/drive/1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g) removing quantization since it's not supported on MPS
|
actionpace/ChatAYT-Lora-Assamble-Marcoroni
|
actionpace
| 2023-09-30T17:23:27Z | 0 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-30T17:17:05Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* ChatAYT-Lora-Assamble-Marcoroni_Q5_K_M.gguf
**Source:** [PulsarAI](https://huggingface.co/PulsarAI)
**Source Model:** [ChatAYT-Lora-Assamble-Marcoroni](https://huggingface.co/PulsarAI/ChatAYT-Lora-Assamble-Marcoroni)
**Source models for PulsarAI/ChatAYT-Lora-Assamble-Marcoroni (Merge)**
- [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) ([Ref](https://huggingface.co/actionpace/Llama-2-13b-hf))
|
gbellamy/lora-trained-xl-colab_2
|
gbellamy
| 2023-09-30T17:18:45Z | 6 | 2 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-09-30T14:36:01Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of suezzeus dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - gbellamy/lora-trained-xl-colab_2
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of suezzeus dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
gmb note: used 21 1024x1024 images
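A minimal sketch of using these LoRA weights with diffusers is shown below; the exact prompt wording and dtype/device settings are assumptions.
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then attach the LoRA adaptation weights from this repo
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("gbellamy/lora-trained-xl-colab_2")

image = pipe("a photo of suezzeus dog").images[0]
image.save("suezzeus_dog.png")
```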
|
Wazzzabeee/PoliteT5Small
|
Wazzzabeee
| 2023-09-30T17:16:21Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-20T12:57:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: PoliteT5Small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PoliteT5Small
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3505
- Toxicity Ratio: 0.3158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 75
### Training results
| Training Loss | Epoch | Step | Validation Loss | Toxicity Ratio |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|
| No log | 1.0 | 22 | 0.6642 | 0.3158 |
| No log | 2.0 | 44 | 0.6347 | 0.3158 |
| 0.9343 | 3.0 | 66 | 0.6623 | 0.3158 |
| 0.9343 | 4.0 | 88 | 0.6737 | 0.3070 |
| 0.3783 | 5.0 | 110 | 0.7201 | 0.2982 |
| 0.3783 | 6.0 | 132 | 0.7606 | 0.3596 |
| 0.2536 | 7.0 | 154 | 0.7567 | 0.2807 |
| 0.2536 | 8.0 | 176 | 0.8618 | 0.3070 |
| 0.2536 | 9.0 | 198 | 0.8444 | 0.3158 |
| 0.1839 | 10.0 | 220 | 0.8257 | 0.3333 |
| 0.1839 | 11.0 | 242 | 0.8643 | 0.3158 |
| 0.1246 | 12.0 | 264 | 0.8334 | 0.3421 |
| 0.1246 | 13.0 | 286 | 0.8895 | 0.3246 |
| 0.1042 | 14.0 | 308 | 0.9631 | 0.2982 |
| 0.1042 | 15.0 | 330 | 0.9004 | 0.3070 |
| 0.0929 | 16.0 | 352 | 0.8878 | 0.2982 |
| 0.0929 | 17.0 | 374 | 0.9009 | 0.2982 |
| 0.0929 | 18.0 | 396 | 0.9762 | 0.3158 |
| 0.0745 | 19.0 | 418 | 0.9296 | 0.2982 |
| 0.0745 | 20.0 | 440 | 0.9429 | 0.3246 |
| 0.0668 | 21.0 | 462 | 0.9779 | 0.3158 |
| 0.0668 | 22.0 | 484 | 0.9731 | 0.2982 |
| 0.0494 | 23.0 | 506 | 0.9640 | 0.3158 |
| 0.0494 | 24.0 | 528 | 0.9984 | 0.2982 |
| 0.0425 | 25.0 | 550 | 0.9966 | 0.3070 |
| 0.0425 | 26.0 | 572 | 0.9861 | 0.3246 |
| 0.0425 | 27.0 | 594 | 1.0335 | 0.3333 |
| 0.0432 | 28.0 | 616 | 1.0358 | 0.2982 |
| 0.0432 | 29.0 | 638 | 1.0244 | 0.3158 |
| 0.0328 | 30.0 | 660 | 1.0050 | 0.3158 |
| 0.0328 | 31.0 | 682 | 0.9838 | 0.2982 |
| 0.0277 | 32.0 | 704 | 1.0576 | 0.3158 |
| 0.0277 | 33.0 | 726 | 1.0719 | 0.3070 |
| 0.0277 | 34.0 | 748 | 1.0851 | 0.3246 |
| 0.0194 | 35.0 | 770 | 0.9992 | 0.3246 |
| 0.0194 | 36.0 | 792 | 1.1454 | 0.3333 |
| 0.0145 | 37.0 | 814 | 1.1179 | 0.3158 |
| 0.0145 | 38.0 | 836 | 1.0586 | 0.3158 |
| 0.0157 | 39.0 | 858 | 1.0638 | 0.3333 |
| 0.0157 | 40.0 | 880 | 1.1544 | 0.3333 |
| 0.0114 | 41.0 | 902 | 1.1529 | 0.2895 |
| 0.0114 | 42.0 | 924 | 1.2017 | 0.3246 |
| 0.0114 | 43.0 | 946 | 1.0783 | 0.3333 |
| 0.0096 | 44.0 | 968 | 1.1984 | 0.3333 |
| 0.0096 | 45.0 | 990 | 1.1839 | 0.3158 |
| 0.0094 | 46.0 | 1012 | 1.1178 | 0.3246 |
| 0.0094 | 47.0 | 1034 | 1.2424 | 0.3070 |
| 0.0065 | 48.0 | 1056 | 1.1740 | 0.3158 |
| 0.0065 | 49.0 | 1078 | 0.9860 | 0.3070 |
| 0.0081 | 50.0 | 1100 | 1.2554 | 0.3333 |
| 0.0081 | 51.0 | 1122 | 1.2024 | 0.2895 |
| 0.0081 | 52.0 | 1144 | 1.2440 | 0.2807 |
| 0.0035 | 53.0 | 1166 | 1.2392 | 0.3070 |
| 0.0035 | 54.0 | 1188 | 1.3189 | 0.3070 |
| 0.0033 | 55.0 | 1210 | 1.2635 | 0.2895 |
| 0.0033 | 56.0 | 1232 | 1.2367 | 0.2982 |
| 0.0033 | 57.0 | 1254 | 1.2691 | 0.3070 |
| 0.0033 | 58.0 | 1276 | 1.2762 | 0.3070 |
| 0.0033 | 59.0 | 1298 | 1.2492 | 0.2982 |
| 0.0021 | 60.0 | 1320 | 1.2530 | 0.3070 |
| 0.0021 | 61.0 | 1342 | 1.2754 | 0.3158 |
| 0.002 | 62.0 | 1364 | 1.3817 | 0.3070 |
| 0.002 | 63.0 | 1386 | 1.3887 | 0.3158 |
| 0.0016 | 64.0 | 1408 | 1.3172 | 0.3246 |
| 0.0016 | 65.0 | 1430 | 1.3481 | 0.3158 |
| 0.0023 | 66.0 | 1452 | 1.3109 | 0.3246 |
| 0.0023 | 67.0 | 1474 | 1.2907 | 0.3246 |
| 0.0023 | 68.0 | 1496 | 1.2926 | 0.3246 |
| 0.0014 | 69.0 | 1518 | 1.3122 | 0.3158 |
| 0.0014 | 70.0 | 1540 | 1.3354 | 0.3158 |
| 0.0008 | 71.0 | 1562 | 1.3440 | 0.3158 |
| 0.0008 | 72.0 | 1584 | 1.3367 | 0.3158 |
| 0.0011 | 73.0 | 1606 | 1.3452 | 0.3158 |
| 0.0011 | 74.0 | 1628 | 1.3514 | 0.3158 |
| 0.0011 | 75.0 | 1650 | 1.3505 | 0.3158 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
aswin1906/llama-7b-sql-2k
|
aswin1906
| 2023-09-30T16:49:28Z | 56 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"question-answering",
"en",
"dataset:aswin1906/llama2-sql-instruct-2k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-30T12:43:11Z |
---
license: apache-2.0
datasets:
- aswin1906/llama2-sql-instruct-2k
language:
- en
pipeline_tag: question-answering
tags:
- code
---
# Fine-Tune Llama 2 Model Using qLORA for Custom SQL Dataset
Instruction fine-tuning has become extremely popular since the (accidental) release of LLaMA.
The size of these models and the peculiarities of training them on instructions and answers introduce more complexity and often require parameter-efficient learning techniques such as QLoRA.
Refer to the dataset at **aswin1906/llama2-sql-instruct-2k**
## Model Background

## Model Inference
Refer to the code below to run model inference
```
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch, re
from rich import print
class Training:
def __init__(self) -> None:
self.model_name= "meta-llama/Llama-2-7b-chat-hf"
self.dataset= "aswin1906/llama2-sql-instruct-2k"
self.model_path= "aswin1906/llama-7b-sql-2k"
self.instruction= 'You are given the following SQL table structure described by CREATE TABLE statement: CREATE TABLE "l" ( "player" text, "no" text, "nationality" text, "position" text, "years_in_toronto" text, "school_club_team" text ); Write an SQL query that provides the solution to the following question: '
self.model = AutoModelForCausalLM.from_pretrained(
self.model_path,
load_in_8bit=False,
torch_dtype=torch.float16,
device_map="auto"
)
self.tokenizer = AutoTokenizer.from_pretrained(self.model_path)
def inference(self, prompt):
"""
Prompting started here
"""
# Run text generation pipeline with our next model
pipe = pipeline(task="text-generation", model=self.model, tokenizer=self.tokenizer, max_length=200)
result = pipe(f'<s>[INST] {self.instruction}"{prompt}". [/INST]')
response= result[0]['generated_text'].split('[/INST]')[-1]
return response
train= Training()
instruction= re.split(';|by CREATE', train.instruction)
print(f"[purple4] ------------------------------Instruction--------------------------")
print(f"[medium_spring_green] {instruction[0]}")
print(f"[bold green]CREATE{instruction[1]};")
print(f"[medium_spring_green] {instruction[2]}")
print(f"[purple4] -------------------------------------------------------------------")
while True:
# prompt = 'What position does the player who played for butler cc (ks) play?'
print("[bold blue]#Human: [bold green]", end="")
user = input()
print('[bold blue]#Response: [bold green]', train.inference(user))
```
Contact **[email protected]** for model training code
## output

|
DanGalt/a2c-PandaReachDense-v2
|
DanGalt
| 2023-09-30T16:35:11Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-17T15:53:20Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.48 +/- 0.14
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub("DanGalt/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
Ransaka/SinhalaRoberta
|
Ransaka
| 2023-09-30T16:32:56Z | 41 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"MLM",
"si",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-03-29T06:31:40Z |
---
tags:
- MLM
model-index:
- name: RobertaSin
results: []
widget:
- text: අපි තමයි [MASK] කරේ.
- text: මට හෙට එන්න වෙන්නේ [MASK].
- text: අපි ගෙදර [MASK].
- text: සිංහල සහ [MASK] අලුත් අවුරුද්ද.
license: apache-2.0
language:
- si
---
# SinhalaRoberta - Pretrained Roberta for Sinhala MLM tasks.
This model is trained on various Sinhala corpus extracted from News and articles.
## Model description
Trained on MLM tasks. Please use the [MASK] token to indicate the masked position. The model comprises a total of 68 million parameters.
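A minimal fill-mask sketch is shown below, reusing one of the widget examples from this card and assuming the tokenizer's mask token is [MASK], as those examples suggest.
```python
from transformers import pipeline

# Load this checkpoint for masked-token prediction
fill_mask = pipeline("fill-mask", model="Ransaka/SinhalaRoberta")

# One of the widget examples from this card
for prediction in fill_mask("අපි ගෙදර [MASK]."):
    print(prediction["token_str"], prediction["score"])
```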
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
franco-rojas/bloom-1b1-finetuned-tfmviu
|
franco-rojas
| 2023-09-30T16:31:26Z | 152 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"generated_from_trainer",
"base_model:bigscience/bloom-1b1",
"base_model:finetune:bigscience/bloom-1b1",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-29T04:41:50Z |
---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-1b1
tags:
- generated_from_trainer
model-index:
- name: bloom-1b1-finetuned-tfmviu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom-1b1-finetuned-tfmviu
This model is a fine-tuned version of [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 222 | 3.1300 |
| No log | 2.0 | 444 | 3.2264 |
| 2.3093 | 3.0 | 666 | 3.5185 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
TheBloke/Kimiko-Mistral-7B-fp16
|
TheBloke
| 2023-09-30T16:30:41Z | 184 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"generated_from_trainer",
"base_model:Chat-Error/Kimiko-Mistral-7B",
"base_model:finetune:Chat-Error/Kimiko-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-30T16:21:48Z |
---
base_model: nRuaif/Kimiko-Mistral-7B
inference: false
license: apache-2.0
model-index:
- name: Kimiko-Mistral-7B
results: []
model_creator: nRuaif
model_name: Kimiko Mistral 7B
model_type: mistral
prompt_template: 'You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
'
tags:
- generated_from_trainer
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Kimiko Mistral 7B - FP16
- Model creator: [nRuaif](https://huggingface.co/nRuaif)
- Original model: [Kimiko Mistral 7B](https://huggingface.co/nRuaif/Kimiko-Mistral-7B)
<!-- description start -->
## Description
This repo contains pytorch format fp16 model files for [nRuaif's Kimiko Mistral 7B](https://huggingface.co/nRuaif/Kimiko-Mistral-7B).
It is the result of either merging a LoRA, or converting the source repository to float16.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-fp16)
* [nRuaif's original LoRA adapter, which can be merged on to the base model.](https://huggingface.co/nRuaif/Kimiko-Mistral-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna-Short
```
You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
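This card does not include a usage snippet, so here is a minimal `transformers` sketch that loads the fp16 weights and applies the Vicuna-Short template above (generation settings are illustrative; `device_map="auto"` assumes `accelerate` is installed):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Kimiko-Mistral-7B-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "You are a helpful AI assistant.\n\nUSER: Write a short poem about autumn.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```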
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: nRuaif's Kimiko Mistral 7B
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# Kimiko-Mistral-7B
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the Kimiko dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1173
## Model description
Same dataset as Kimiko-v2, but on a new model. THIS IS NOT TRAINED ON THE V3 DATASET
## Intended uses & limitations
This is a fine-tuning experiment on the new 7B model. You can use it for roleplay or as an assistant.
# Prompt Template Structure
```
This is a chat between ASSISTANT and USER
USER: What is 4x8?
ASSISTANT:
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5675 | 0.47 | 25 | 2.1323 |
| 1.4721 | 0.95 | 50 | 2.1209 |
| 1.472 | 1.42 | 75 | 2.1177 |
| 1.5445 | 1.9 | 100 | 2.1173 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
cloudwalkerw/wav2vec2-base-ft-keyword-spotting
|
cloudwalkerw
| 2023-09-30T16:29:03Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wavlm",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"base_model:microsoft/wavlm-base",
"base_model:finetune:microsoft/wavlm-base",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-30T15:36:11Z |
---
base_model: microsoft/wavlm-base
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ft-keyword-spotting
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: superb
type: superb
config: ks
split: validation
args: ks
metrics:
- name: Accuracy
type: accuracy
value: 0.9694027655192704
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ft-keyword-spotting
This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2270
- Accuracy: 0.9694
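No inference example is included, so here is a minimal audio-classification sketch (the repo id is taken from this card; the input is assumed to be a 16 kHz mono audio file):
```python
from transformers import pipeline

# Keyword-spotting sketch; repo id assumed to be cloudwalkerw/wav2vec2-base-ft-keyword-spotting
classifier = pipeline("audio-classification", model="cloudwalkerw/wav2vec2-base-ft-keyword-spotting")
print(classifier("sample.wav"))  # path to a 16 kHz mono recording of a keyword
```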
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3203 | 1.0 | 199 | 1.2906 | 0.6328 |
| 0.9587 | 2.0 | 399 | 0.7793 | 0.7355 |
| 0.6218 | 3.0 | 599 | 0.3858 | 0.9289 |
| 0.4379 | 4.0 | 799 | 0.2581 | 0.9688 |
| 0.3779 | 4.98 | 995 | 0.2270 | 0.9694 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.0.post302
- Datasets 2.14.5
- Tokenizers 0.13.3
|
nopeoppeln/WojciechOlszanski
|
nopeoppeln
| 2023-09-30T16:26:03Z | 0 | 0 | null |
[
"pl",
"region:us"
] | null | 2023-09-30T16:04:37Z |
---
language:
- pl
---
<img src="https://cdn.discordapp.com/attachments/750736507811397745/1157711553387319426/jaszczur.png"></img>
<h1>Wojciech Olszański alias Aleksander Jabłonowski (380 epochs) (RVC v2)</h1>
**By:** Nope (nopebsag) <br/>
**Voice:** Wojciech "Jaszczur" Olszański alias Aleksander Jabłonowski <br/>
**Dataset length:** 00:15:20 <br/>
**Extracted with:** RMVPE <br/>
<audio controls>
<source src="https://cdn.discordapp.com/attachments/750736507811397745/1157711252890591232/przemowa.wav" type="audio/wav">
</audio><br />
<audio controls>
<source src="https://cdn.discordapp.com/attachments/750736507811397745/1157711252542468107/slava.wav" type="audio/wav">
</audio><br />
<audio controls>
<source src="https://cdn.discordapp.com/attachments/750736507811397745/1157444957808893973/thecumratstest380e.mp3" type="audio/wav">
</audio>
<a href="https://huggingface.co/nopeoppeln/WojciechOlszanski/resolve/main/jablonowskimodel.zip">**DOWNLOAD**</a>
|
TheBloke/Kimiko-Mistral-7B-GGUF
|
TheBloke
| 2023-09-30T16:20:34Z | 260 | 13 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"generated_from_trainer",
"base_model:Chat-Error/Kimiko-Mistral-7B",
"base_model:quantized:Chat-Error/Kimiko-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2023-09-30T16:12:00Z |
---
base_model: nRuaif/Kimiko-Mistral-7B
inference: false
license: apache-2.0
model-index:
- name: Kimiko-Mistral-7B
results: []
model_creator: nRuaif
model_name: Kimiko Mistral 7B
model_type: mistral
prompt_template: 'You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
tags:
- generated_from_trainer
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Kimiko Mistral 7B - GGUF
- Model creator: [nRuaif](https://huggingface.co/nRuaif)
- Original model: [Kimiko Mistral 7B](https://huggingface.co/nRuaif/Kimiko-Mistral-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [nRuaif's Kimiko Mistral 7B](https://huggingface.co/nRuaif/Kimiko-Mistral-7B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-fp16)
* [nRuaif's original LoRA adapter, which can be merged on to the base model.](https://huggingface.co/nRuaif/Kimiko-Mistral-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna-Short
```
You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [kimiko-mistral-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [kimiko-mistral-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [kimiko-mistral-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [kimiko-mistral-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [kimiko-mistral-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [kimiko-mistral-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [kimiko-mistral-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [kimiko-mistral-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [kimiko-mistral-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [kimiko-mistral-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [kimiko-mistral-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [kimiko-mistral-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Kimiko-Mistral-7B-GGUF and below it, a specific filename to download, such as: kimiko-mistral-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Kimiko-Mistral-7B-GGUF kimiko-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Kimiko-Mistral-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Kimiko-Mistral-7B-GGUF kimiko-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m kimiko-mistral-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Kimiko-Mistral-7B-GGUF", model_file="kimiko-mistral-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
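Alternatively, a minimal llama-cpp-python sketch under the same assumptions (the Q4_K_M file has already been downloaded to the current directory; offload and sampling values are illustrative):
```python
from llama_cpp import Llama

# Set n_gpu_layers=0 if no GPU acceleration is available on your system.
llm = Llama(model_path="kimiko-mistral-7b.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)

prompt = "You are a helpful AI assistant.\n\nUSER: Tell me about AI\nASSISTANT:"
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["USER:"])
print(output["choices"][0]["text"])
```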
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: nRuaif's Kimiko Mistral 7B
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# Kimiko-Mistral-7B
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the Kimiko dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1173
## Model description
Same dataset as Kimiko-v2, but on a new model. THIS IS NOT TRAINED ON THE V3 DATASET
## Intended uses & limitations
This is a fine-tuning experiment on the new 7B model. You can use it for roleplay or as an assistant.
# Prompt Template Structure
```
This is a chat between ASSISTANT and USER
USER: What is 4x8?
ASSISTANT:
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5675 | 0.47 | 25 | 2.1323 |
| 1.4721 | 0.95 | 50 | 2.1209 |
| 1.472 | 1.42 | 75 | 2.1177 |
| 1.5445 | 1.9 | 100 | 2.1173 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
<!-- original-model-card end -->
|
VuongQuoc/checkpoints_29_9_microsoft_deberta_V1
|
VuongQuoc
| 2023-09-30T16:12:18Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"multiple-choice",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2023-09-29T03:53:41Z |
---
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: checkpoints_29_9_microsoft_deberta_V1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints_29_9_microsoft_deberta_V1
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7815
- Map@3: 0.8290
- Accuracy: 0.7333
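No usage example is given, so here is a minimal multiple-choice inference sketch (the repo id is taken from this card; the question and options are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "VuongQuoc/checkpoints_29_9_microsoft_deberta_V1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

question = "Which planet is known as the Red Planet?"
options = ["Venus", "Mars", "Jupiter", "Saturn", "Mercury"]

# Pair the question with each option; the model scores all pairs in one forward pass.
enc = tokenizer([question] * len(options), options, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # shape: (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits
print(options[logits.argmax(dim=-1).item()])
```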
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map@3 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 1.6045 | 0.05 | 200 | 1.6095 | 0.4593 | 0.3030 |
| 1.3669 | 0.11 | 400 | 1.3360 | 0.7215 | 0.5980 |
| 0.9993 | 0.16 | 600 | 1.0403 | 0.7737 | 0.6727 |
| 0.9608 | 0.21 | 800 | 0.9539 | 0.7966 | 0.6990 |
| 0.9017 | 0.27 | 1000 | 0.9125 | 0.7997 | 0.6970 |
| 0.885 | 0.32 | 1200 | 0.8719 | 0.8172 | 0.7192 |
| 0.8222 | 0.37 | 1400 | 0.8462 | 0.8125 | 0.7030 |
| 0.769 | 0.43 | 1600 | 0.8376 | 0.8158 | 0.7131 |
| 0.7676 | 0.48 | 1800 | 0.8109 | 0.8178 | 0.7152 |
| 0.8413 | 0.53 | 2000 | 0.8279 | 0.8212 | 0.7212 |
| 0.809 | 0.59 | 2200 | 0.8012 | 0.8212 | 0.7212 |
| 0.8809 | 0.64 | 2400 | 0.8037 | 0.8290 | 0.7333 |
| 0.8028 | 0.69 | 2600 | 0.7949 | 0.8249 | 0.7293 |
| 0.8259 | 0.75 | 2800 | 0.7938 | 0.8283 | 0.7354 |
| 0.7548 | 0.8 | 3000 | 0.7818 | 0.8300 | 0.7354 |
| 0.7422 | 0.85 | 3200 | 0.7797 | 0.8316 | 0.7374 |
| 0.801 | 0.91 | 3400 | 0.7811 | 0.8303 | 0.7354 |
| 0.7 | 0.96 | 3600 | 0.7815 | 0.8290 | 0.7333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.9.0
- Tokenizers 0.13.3
|
andrew45/distilbert-base-uncased-distilled-clinc
|
andrew45
| 2023-09-30T15:47:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-30T15:37:11Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9474193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2003
- Accuracy: 0.9474
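No usage example is provided, so here is a minimal intent-classification sketch (the repo id is taken from this card; the utterance is illustrative):
```python
from transformers import pipeline

# Intent-classification sketch; repo id assumed to be andrew45/distilbert-base-uncased-distilled-clinc
classifier = pipeline("text-classification", model="andrew45/distilbert-base-uncased-distilled-clinc")
print(classifier("Transfer $100 from my checking to my savings account"))
```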
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8764 | 1.0 | 318 | 1.3328 | 0.7384 |
| 1.0409 | 2.0 | 636 | 0.6979 | 0.8694 |
| 0.5693 | 3.0 | 954 | 0.4069 | 0.9223 |
| 0.3467 | 4.0 | 1272 | 0.2874 | 0.9381 |
| 0.2464 | 5.0 | 1590 | 0.2394 | 0.9432 |
| 0.2005 | 6.0 | 1908 | 0.2196 | 0.9455 |
| 0.1785 | 7.0 | 2226 | 0.2109 | 0.9468 |
| 0.166 | 8.0 | 2544 | 0.2039 | 0.9487 |
| 0.1596 | 9.0 | 2862 | 0.2017 | 0.9484 |
| 0.1561 | 10.0 | 3180 | 0.2003 | 0.9474 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
TheBloke/Pandalyst_13B_V1.0-GPTQ
|
TheBloke
| 2023-09-30T15:37:50Z | 23 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"en",
"base_model:pipizhao/Pandalyst_13B_V1.0",
"base_model:quantized:pipizhao/Pandalyst_13B_V1.0",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-09-30T14:29:12Z |
---
base_model: pipizhao/Pandalyst_13B_V1.0
inference: false
language:
- en
library_name: transformers
license: llama2
model-index:
- name: Pandalyst_13B_v1.0
results:
- metrics:
- name: exec@1
type: exec@1
value: 0.71
verified: false
task:
type: text-generation
model_creator: Yanzhao Zheng
model_name: Pandalyst 13B V1.0
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- code
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Pandalyst 13B V1.0 - GPTQ
- Model creator: [Yanzhao Zheng](https://huggingface.co/pipizhao)
- Original model: [Pandalyst 13B V1.0](https://huggingface.co/pipizhao/Pandalyst_13B_V1.0)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Yanzhao Zheng's Pandalyst 13B V1.0](https://huggingface.co/pipizhao/Pandalyst_13B_V1.0).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF)
* [Yanzhao Zheng's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/pipizhao/Pandalyst_13B_V1.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 14.55 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Pandalyst_13B_V1.0-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Pandalyst_13B_V1.0-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Pandalyst_13B_V1.0-GPTQ`:
```shell
mkdir Pandalyst_13B_V1.0-GPTQ
huggingface-cli download TheBloke/Pandalyst_13B_V1.0-GPTQ --local-dir Pandalyst_13B_V1.0-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Pandalyst_13B_V1.0-GPTQ
huggingface-cli download TheBloke/Pandalyst_13B_V1.0-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Pandalyst_13B_V1.0-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Pandalyst_13B_V1.0-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Pandalyst_13B_V1.0-GPTQ --local-dir Pandalyst_13B_V1.0-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Pandalyst_13B_V1.0-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Pandalyst_13B_V1.0-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Pandalyst_13B_V1.0-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Pandalyst_13B_V1.0-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Yanzhao Zheng's Pandalyst 13B V1.0
## Pandalyst: A large language model for mastering data analysis using pandas
<p align="center">
<img src="https://raw.githubusercontent.com/zhengyanzhao1997/Pandalyst/master/imgs/pandalyst.png" width="300"/>
</p>
<p align="center">
🐱 <a href="https://github.com/zhengyanzhao1997/Pandalyst" target="_blank">Github Repo</a> <br>
</p>
**What is Pandalyst**
- Pandalyst is a general large language model specifically trained to process and analyze data using the pandas library.
**How is Pandalyst**
- Pandalyst has strong generalization capabilities for data tables in different fields and different data analysis needs.
**Why is Pandalyst**
- Pandalyst is open source and free to use, and its small parameter size (7B/13B) allows us to easily deploy it on a local PC.
- Pandalyst can handle complex data tables (multiple columns and multiple rows), allowing us to enter enough context to describe our table in detail.
- Pandalyst has very competitive performance, significantly outperforming models of the same size and even outperforming some of the strongest closed-source models.
## News
- 🔥[2023/09/30] We released **Pandalyst-7B-V1.1**, which was trained on **CodeLlama-7b-Python** and achieves the **76.1 exec@1** in our **PandaTest_V1.0** and surpasses **Pandalyst-13B-V1.0**, **WizardCoder-Python-13B-V1.0** and **ChatGPT-3.5 (2023/06/13)**.
- 🔥[2023/09/28] We released **Pandalyst-13B-V1.0**, which was trained on **WizardCoder-Python-13B-V1.0** and achieves the **70.7 exec@1** in our **PandaTest_V1.0** and surpasses **WizardCoder-Python-13B-V1.0** and **ChatGPT-3.5 (2023/06/13)**.
| Model | Checkpoint | Base Model | PandaTest_V1.0 | EASY | HARD | License |
|--------------------|---------------------------------------------------------------------------------------------|------------|----------------|---------------------|---------------------| ----- |
| Pandalyst-13B-V1.0 | 🤗 <a href="https://huggingface.co/pipizhao/Pandalyst_13B_V1.0" target="_blank">HF Link</a> | WizardCoder-Python-13B-V1.0 | 70.7 | 75.6 | 65.9 | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| Pandalyst-7B-V1.1 | 🤗 <a href="https://huggingface.co/pipizhao/Pandalyst-7B-V1.1" target="_blank">HF Link</a> | CodeLlama-7b-Python | 76.1 | 85.2 | 67.0 | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
## Usage and Human evaluation
Please refer to <a href="https://github.com/zhengyanzhao1997/Pandalyst" target="_blank">Github</a>.
|
Ahada/Olivia
|
Ahada
| 2023-09-30T15:36:01Z | 0 | 0 |
nemo
|
[
"nemo",
"code",
"text-classification",
"en",
"dataset:taesiri/arxiv_qa",
"license:mit",
"region:us"
] |
text-classification
| 2023-09-30T15:30:48Z |
---
license: mit
datasets:
- taesiri/arxiv_qa
language:
- en
metrics:
- bleu
library_name: nemo
pipeline_tag: text-classification
tags:
- code
---
|
soBeauty/V3_20230929-3-xlm-roberta-base-new
|
soBeauty
| 2023-09-30T15:22:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-30T14:32:51Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V3_20230929-3-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V3_20230929-3-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.5509
- Loss: 2.1676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.4138 | 0.46 | 200 | 0.2723 | nan |
| 4.0972 | 0.91 | 400 | 0.3135 | nan |
| 3.7299 | 1.37 | 600 | 0.3289 | nan |
| 3.7476 | 1.82 | 800 | 0.3242 | 3.6812 |
| 3.8098 | 2.28 | 1000 | 0.4011 | 3.5966 |
| 3.6111 | 2.73 | 1200 | 0.4 | 2.8744 |
| 3.3202 | 3.19 | 1400 | 0.4334 | 2.9269 |
| 3.3557 | 3.64 | 1600 | 0.3649 | nan |
| 3.198 | 4.1 | 1800 | 0.4349 | nan |
| 3.1623 | 4.56 | 2000 | 0.4237 | 2.9731 |
| 3.1379 | 5.01 | 2200 | 0.4521 | nan |
| 3.0795 | 5.47 | 2400 | 0.4538 | 2.8971 |
| 3.0176 | 5.92 | 2600 | 0.4138 | 3.0068 |
| 2.9956 | 6.38 | 2800 | 0.4729 | nan |
| 2.9666 | 6.83 | 3000 | 0.4674 | 2.3240 |
| 2.9124 | 7.29 | 3200 | 0.5039 | nan |
| 2.9806 | 7.74 | 3400 | 0.4457 | nan |
| 2.7471 | 8.2 | 3600 | 0.5138 | 2.4373 |
| 2.7762 | 8.66 | 3800 | 0.4963 | 2.4425 |
| 2.7485 | 9.11 | 4000 | 0.5302 | nan |
| 2.6488 | 9.57 | 4200 | 0.5499 | 2.3581 |
| 2.69 | 10.02 | 4400 | 0.5066 | 2.3862 |
| 2.6669 | 10.48 | 4600 | 0.4802 | 2.4588 |
| 2.5595 | 10.93 | 4800 | 0.4938 | 2.4186 |
| 2.5512 | 11.39 | 5000 | 0.5076 | 2.5922 |
| 2.5686 | 11.85 | 5200 | 0.5648 | nan |
| 2.5772 | 12.3 | 5400 | 0.5480 | 2.4634 |
| 2.5701 | 12.76 | 5600 | 0.5497 | 2.5381 |
| 2.3937 | 13.21 | 5800 | 0.5310 | nan |
| 2.5274 | 13.67 | 6000 | 0.5681 | nan |
| 2.3513 | 14.12 | 6200 | 0.5671 | nan |
| 2.529 | 14.58 | 6400 | 0.5509 | 2.1676 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Cypher594/my-pet-dog
|
Cypher594
| 2023-09-30T14:57:06Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-30T14:52:07Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by Cypher594 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: IITJ-89
Sample pictures of this concept:
|
HANISHA-DUVVURI/my-pet-dog
|
HANISHA-DUVVURI
| 2023-09-30T14:49:37Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-30T14:40:52Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### MY ANIMAL BAG Dreambooth model trained by HANISHA-DUVVURI following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:





|
ashishpatel26/mistral-7b-mj-finetuned
|
ashishpatel26
| 2023-09-30T14:48:04Z | 0 | 1 | null |
[
"tensorboard",
"region:us"
] | null | 2023-09-30T14:15:12Z |
# Mistral-7B-Instruct-v0.1 Model Trained Using AutoTrain
# Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/ashishpatel26/mistral-7b-mj-finetuned) generative text model using a variety of publicly available conversation datasets.
For full details of this model please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/)
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("ashishpatel26/mistral-7b-mj-finetuned")
tokenizer = AutoTokenizer.from_pretrained("ashishpatel26/mistral-7b-mj-finetuned")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
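For intuition only, here is a toy sketch of the kind of mask sliding-window attention uses; the window size is an arbitrary illustration, not the model's actual configuration:
```python
# Toy illustration of a sliding-window causal attention mask; not Mistral's
# actual implementation, and the window size below is an arbitrary example.
import torch

def sliding_window_causal_mask(seq_len: int, window: int) -> torch.Tensor:
    i = torch.arange(seq_len).unsqueeze(1)  # query positions (column)
    j = torch.arange(seq_len).unsqueeze(0)  # key positions (row)
    # each query attends only to keys at or before it, within the last `window` positions
    return (j <= i) & (j > i - window)

print(sliding_window_causal_mask(seq_len=6, window=3).int())
```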
If loading the model fails with your installed `transformers` version, installing transformers from source should solve the issue:
```shell
pip install git+https://github.com/huggingface/transformers
```
This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## model-card-metadata
---
tags:
- autotrain
- text-generation
- finetuned
widget:
- text: "I love AutoTrain because "
license: apache-2.0
pipeline_tag: text-generation
---
|
actionpace/MythoMax-L2-LoRA-Assemble-13B
|
actionpace
| 2023-09-30T14:30:18Z | 2 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-30T14:24:08Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* MythoMax-L2-LoRA-Assemble-13B_Q5_K_M.gguf
**Source:** [PulsarAI](https://huggingface.co/PulsarAI)
**Source Model:** [MythoMax-L2-LoRA-Assemble-13B](https://huggingface.co/PulsarAI/MythoMax-L2-LoRA-Assemble-13B)
**Source models for PulsarAI/MythoMax-L2-LoRA-Assemble-13B (Merge)**
- [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) ([Ref](https://huggingface.co/actionpace/Llama-2-13b-hf))
|
soBeauty/20230928-9-xlm-roberta-base-new
|
soBeauty
| 2023-09-30T14:27:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-28T14:59:59Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 20230928-9-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230928-9-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.5108
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.5519 | 0.46 | 200 | 0.2982 | nan |
| 4.1573 | 0.91 | 400 | 0.3194 | nan |
| 3.8898 | 1.37 | 600 | 0.3280 | nan |
| 3.7149 | 1.82 | 800 | 0.3765 | 3.2017 |
| 3.5722 | 2.28 | 1000 | 0.4056 | nan |
| 3.4367 | 2.73 | 1200 | 0.3648 | 3.0582 |
| 3.3854 | 3.19 | 1400 | 0.3741 | nan |
| 3.4153 | 3.64 | 1600 | 0.4019 | 2.8494 |
| 3.3047 | 4.1 | 1800 | 0.4049 | 2.9662 |
| 3.1112 | 4.56 | 2000 | 0.4419 | 3.0672 |
| 3.3055 | 5.01 | 2200 | 0.4746 | 2.9201 |
| 2.948 | 5.47 | 2400 | 0.4633 | nan |
| 2.9887 | 5.92 | 2600 | 0.4349 | 2.9944 |
| 2.9127 | 6.38 | 2800 | 0.4832 | 2.5580 |
| 2.7644 | 6.83 | 3000 | 0.4737 | 2.6597 |
| 2.8758 | 7.29 | 3200 | 0.4226 | 3.1813 |
| 2.9334 | 7.74 | 3400 | 0.4670 | 2.5154 |
| 2.7652 | 8.2 | 3600 | 0.4860 | 2.7003 |
| 2.7986 | 8.66 | 3800 | 0.5547 | nan |
| 2.7419 | 9.11 | 4000 | 0.5095 | nan |
| 2.7937 | 9.57 | 4200 | 0.5108 | nan |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
saisharan/my-car
|
saisharan
| 2023-09-30T14:15:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-30T14:09:30Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Car Dreambooth model trained by saisharan following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: INDU-TS-222
Sample pictures of this concept:

|
SakataHalmi/ppo-SnowballTarget
|
SakataHalmi
| 2023-09-30T14:04:44Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-09-30T14:04:37Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: SakataHalmi/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
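To fetch this checkpoint locally (for example before resuming training), the ML-Agents Hub integration provides a download helper. The command below is a sketch and assumes `mlagents-load-from-hf` is available in your install:
```bash
mlagents-load-from-hf --repo-id="SakataHalmi/ppo-SnowballTarget" --local-dir="./downloads"
```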
|
accordtsai/blip2-opt-2.7b-bonescan-captions-adapters
|
accordtsai
| 2023-09-30T13:54:41Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-29T13:11:17Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
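A minimal loading sketch is shown below; note that the base checkpoint `Salesforce/blip2-opt-2.7b` is an assumption inferred from the adapter's name, not confirmed by this card:
```python
# Sketch: attach these caption-tuning adapters to a BLIP-2 base model.
# The base model id below is an assumption based on the repo name.
from transformers import Blip2ForConditionalGeneration, Blip2Processor
from peft import PeftModel

base_id = "Salesforce/blip2-opt-2.7b"
processor = Blip2Processor.from_pretrained(base_id)
base_model = Blip2ForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, "accordtsai/blip2-opt-2.7b-bonescan-captions-adapters")
```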
|
yoonjae97/kobart_AdamW_80000
|
yoonjae97
| 2023-09-30T13:53:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-30T13:53:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
soBeauty/20230928-8-xlm-roberta-base-new
|
soBeauty
| 2023-09-30T13:52:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-28T14:53:46Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 20230928-8-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230928-8-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.4880
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.3143 | 0.46 | 200 | 0.3054 | 4.1884 |
| 4.0232 | 0.91 | 400 | 0.3790 | 3.4612 |
| 3.8668 | 1.37 | 600 | 0.3525 | 3.5908 |
| 3.7855 | 1.82 | 800 | 0.4058 | nan |
| 3.58 | 2.28 | 1000 | 0.3795 | nan |
| 3.3848 | 2.73 | 1200 | 0.4046 | nan |
| 3.2678 | 3.19 | 1400 | 0.4439 | 3.1686 |
| 3.3909 | 3.64 | 1600 | 0.4248 | nan |
| 3.2549 | 4.1 | 1800 | 0.4460 | nan |
| 3.1552 | 4.56 | 2000 | 0.5051 | 2.5496 |
| 3.1933 | 5.01 | 2200 | 0.4706 | nan |
| 2.9075 | 5.47 | 2400 | 0.5124 | 2.8447 |
| 3.0546 | 5.92 | 2600 | 0.5136 | nan |
| 2.9636 | 6.38 | 2800 | 0.4931 | 2.7494 |
| 2.9684 | 6.83 | 3000 | 0.4962 | nan |
| 2.8127 | 7.29 | 3200 | 0.5077 | 2.5534 |
| 2.8798 | 7.74 | 3400 | 0.4961 | 2.8044 |
| 2.7446 | 8.2 | 3600 | 0.5071 | 2.7707 |
| 2.7827 | 8.66 | 3800 | 0.4919 | nan |
| 2.732 | 9.11 | 4000 | 0.4897 | 2.4495 |
| 2.7996 | 9.57 | 4200 | 0.4880 | nan |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Rithick46/luxurious-home-interior
|
Rithick46
| 2023-09-30T13:49:56Z | 5 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-30T13:36:35Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Luxurious-Home-interior Dreambooth model trained by Rithick46 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: IIITS-87
Sample pictures of this concept:


|
TheBloke/Pandalyst-7B-V1.1-GGUF
|
TheBloke
| 2023-09-30T13:46:18Z | 163 | 8 |
transformers
|
[
"transformers",
"gguf",
"llama",
"code",
"en",
"base_model:pipizhao/Pandalyst-7B-V1.1",
"base_model:quantized:pipizhao/Pandalyst-7B-V1.1",
"license:llama2",
"model-index",
"region:us"
] | null | 2023-09-30T13:37:25Z |
---
base_model: pipizhao/Pandalyst-7B-V1.1
inference: false
language:
- en
library_name: transformers
license: llama2
model-index:
- name: Pandalyst_7B_v1.1
results:
- metrics:
- name: exec@1
type: exec@1
value: 0.76
verified: false
task:
type: text-generation
model_creator: Yanzhao Zheng
model_name: Pandalyst 7B V1.1
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- code
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Pandalyst 7B V1.1 - GGUF
- Model creator: [Yanzhao Zheng](https://huggingface.co/pipizhao)
- Original model: [Pandalyst 7B V1.1](https://huggingface.co/pipizhao/Pandalyst-7B-V1.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Yanzhao Zheng's Pandalyst 7B V1.1](https://huggingface.co/pipizhao/Pandalyst-7B-V1.1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF)
* [Yanzhao Zheng's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/pipizhao/Pandalyst-7B-V1.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [pandalyst-7b-v1.1.Q2_K.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [pandalyst-7b-v1.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [pandalyst-7b-v1.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [pandalyst-7b-v1.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [pandalyst-7b-v1.1.Q4_0.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [pandalyst-7b-v1.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [pandalyst-7b-v1.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [pandalyst-7b-v1.1.Q5_0.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [pandalyst-7b-v1.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [pandalyst-7b-v1.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [pandalyst-7b-v1.1.Q6_K.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [pandalyst-7b-v1.1.Q8_0.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Pandalyst-7B-V1.1-GGUF and below it, a specific filename to download, such as: pandalyst-7b-v1.1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Pandalyst-7B-V1.1-GGUF pandalyst-7b-v1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Pandalyst-7B-V1.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Pandalyst-7B-V1.1-GGUF pandalyst-7b-v1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m pandalyst-7b-v1.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Pandalyst-7B-V1.1-GGUF", model_file="pandalyst-7b-v1.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
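As a quick illustration, here is a minimal LangChain + llama-cpp-python sketch; the file path and generation parameters are placeholders, so check the guides above for the current APIs:
```python
# Sketch: wrap a locally downloaded GGUF file with LangChain's LlamaCpp class.
# The path and parameters below are illustrative assumptions.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./pandalyst-7b-v1.1.Q4_K_M.gguf",  # downloaded as shown earlier
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
    n_ctx=4096,
    temperature=0.7,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarise what a GGUF file is.\n\n### Response:\n"
)
print(llm(prompt))
```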
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Yanzhao Zheng's Pandalyst 7B V1.1
## Pandalyst: A large language model for mastering data analysis using pandas
<p align="center">
<img src="https://raw.githubusercontent.com/zhengyanzhao1997/Pandalyst/master/imgs/pandalyst.png" width="300"/>
</p>
<p align="center">
🐱 <a href="https://github.com/zhengyanzhao1997/Pandalyst" target="_blank">Github Repo</a> <br>
</p>
**What is Pandalyst**
- Pandalyst is a general large language model specifically trained to process and analyze data using the pandas library.
**How is Pandalyst**
- Pandalyst has strong generalization capabilities for data tables in different fields and different data analysis needs.
**Why is Pandalyst**
- Pandalyst is open source and free to use, and its small parameter size (7B/13B) allows us to easily deploy it on a local PC.
- Pandalyst can handle complex data tables (multiple columns and multiple rows), allowing us to enter enough context to describe our table in detail.
- Pandalyst has very competitive performance, significantly outperforming models of the same size and even outperforming some of the strongest closed-source models.
## News
- 🔥[2023/09/30] We released **Pandalyst-7B-V1.1**, which was trained on **CodeLlama-7b-Python**, achieves **76.1 exec@1** on our **PandaTest_V1.0**, and surpasses **Pandalyst-13B-V1.0**, **WizardCoder-Python-13B-V1.0** and **ChatGPT-3.5 (2023/06/13)**.
- 🔥[2023/09/28] We released **Pandalyst-13B-V1.0**, which was trained on **WizardCoder-Python-13B-V1.0**, achieves **70.7 exec@1** on our **PandaTest_V1.0**, and surpasses **WizardCoder-Python-13B-V1.0** and **ChatGPT-3.5 (2023/06/13)**.
| Model | Checkpoint | Base Model | PandaTest_V1.0 | EASY | HARD | License |
|--------------------|---------------------------------------------------------------------------------------------|------------|----------------|---------------------|---------------------| ----- |
| Pandalyst-13B-V1.0 | 🤗 <a href="https://huggingface.co/pipizhao/Pandalyst_13B_V1.0" target="_blank">HF Link</a> | WizardCoder-Python-13B-V1.0 | 70.7 | 75.6 | 65.9 | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| Pandalyst-7B-V1.1 | 🤗 <a href="https://huggingface.co/pipizhao/Pandalyst-7B-V1.1" target="_blank">HF Link</a> | CodeLlama-7b-Python | 76.1 | 85.2 | 67.0 | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
## Usage and Human evaluation
Please refer to <a href="https://github.com/zhengyanzhao1997/Pandalyst" target="_blank">Github</a>.
<!-- original-model-card end -->
|
likith1503/pet-dogs
|
likith1503
| 2023-09-30T13:45:33Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-30T13:41:13Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### pet-dogs Dreambooth model trained by likith1503 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: IIITS-53
Sample pictures of this concept:
|
AI-Engine/Llama2-7B-Chat-GGUF
|
AI-Engine
| 2023-09-30T13:41:34Z | 8 | 1 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-24T21:10:10Z |
---
license: other
language:
- en
---
GGUF [llama.cpp](https://github.com/ggerganov/llama.cpp) quantized version of:
- Original model: [Llama2 7B Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
- Model creator: [Meta](https://huggingface.co/meta-llama)
- [License](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Prompt Format
[https://huggingface.co/blog/llama2#how-to-prompt-llama-2](https://huggingface.co/blog/llama2#how-to-prompt-llama-2)
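For quick reference, the single-turn chat format described in that post looks like this (`{system_prompt}` and `{user_message}` are placeholders):
```
<s>[INST] <<SYS>>
{system_prompt}
<</SYS>>

{user_message} [/INST]
```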
|
TheBloke/Synthia-7B-v1.3-GPTQ
|
TheBloke
| 2023-09-30T13:35:09Z | 23 | 6 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"arxiv:2306.02707",
"base_model:migtissera/SynthIA-7B-v1.3",
"base_model:quantized:migtissera/SynthIA-7B-v1.3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-09-30T09:01:02Z |
---
base_model: migtissera/Synthia-7B-v1.3
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model_creator: Migel Tissera
model_name: Synthia 7B V1.3
model_type: mistral
pipeline_tag: text-generation
prompt_template: 'SYSTEM: You are Synthia. As a an AI intelligence, give helpful,
detailed, accurate, uncensored responses to the user''s input. Provide answers factually.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Synthia 7B V1.3 - GPTQ
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Synthia 7B V1.3](https://huggingface.co/migtissera/Synthia-7B-v1.3)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Migel Tissera's Synthia 7B V1.3](https://huggingface.co/migtissera/Synthia-7B-v1.3).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Synthia-7B-v1.3-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Synthia-7B-v1.3-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Synthia-7B-v1.3-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Synthia-7B-v1.3)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Synthia
```
SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Synthia-7B-v1.3-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Synthia-7B-v1.3-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Synthia-7B-v1.3-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Synthia-7B-v1.3-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Synthia-7B-v1.3-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Synthia-7B-v1.3-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Synthia-7B-v1.3-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Synthia-7B-v1.3-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Synthia-7B-v1.3-GPTQ`:
```shell
mkdir Synthia-7B-v1.3-GPTQ
huggingface-cli download TheBloke/Synthia-7B-v1.3-GPTQ --local-dir Synthia-7B-v1.3-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Synthia-7B-v1.3-GPTQ
huggingface-cli download TheBloke/Synthia-7B-v1.3-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Synthia-7B-v1.3-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Synthia-7B-v1.3-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Synthia-7B-v1.3-GPTQ --local-dir Synthia-7B-v1.3-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Synthia-7B-v1.3-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Synthia-7B-v1.3-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Synthia-7B-v1.3-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Synthia-7B-v1.3-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Synthia-7B-v1.3-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Migel Tissera's Synthia 7B V1.3
Change from Synthia-7B-v1.2 -> Synthia-7B-v1.3: Base model was changed from LLaMA-2-7B to Mistral-7B-v0.1
All Synthia models are uncensored. Please use it with caution and with best intentions. You are responsible for how you use Synthia.
To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```
# Synthia-7B-v1.3
SynthIA (Synthetic Intelligent Agent) 7B-v1.3 is a Mistral-7B-v0.1 model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as for long-form conversations.
<br>
#### License Disclaimer:
This model is released under Apache 2.0, and comes with no warranty or guarantees of any kind.
<br>
## Evaluation
We evaluated Synthia-7B-v1.3 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|0.6237|
|*hellaswag*|acc_norm|0.8349|
|*mmlu*|acc_norm|0.6232|
|*truthfulqa_mc*|mc2|0.5125|
|**Total Average**|-|**0.6485**|
<br>
## Example Usage
### Here is prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```
### Below shows a code example on how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-7B-v1.3"
output_file_path = "./Synthia-7B-conversations.jsonl"

# Load the model in fp16 and let accelerate place it on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    # Tokenize the prompt and move it to the GPU
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    # Sampling settings used for generation
    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )

    # Decode only the newly generated tokens and cut off at the next "USER:" turn
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return answer


conversation = "SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"

    # Save each conversation turn as a JSON line
    json_data = {"prompt": user_input, "answer": answer}
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
### Citation:
Please kindly cite using the following BibTeX:
```
@misc{Synthia-7B-v1.3,
author = {Migel Tissera},
title = {Synthia-7B-v1.3: Synthetic Intelligent Agent},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://huggingface.co/migtissera/Synthia-13B}},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
aneesh007/my-pet-dog
|
aneesh007
| 2023-09-30T13:33:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-30T13:27:40Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by aneesh007 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: -IIITS-126
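Since the card provides no usage snippet, here is a minimal sketch of loading the checkpoint with `diffusers`, assuming it is a standard DreamBooth export; the prompt is a placeholder because the instance token used during training is not stated:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the repo is a diffusers-format Stable Diffusion DreamBooth export.
pipe = StableDiffusionPipeline.from_pretrained(
    "aneesh007/my-pet-dog", torch_dtype=torch.float16
).to("cuda")

# Placeholder prompt: the exact instance token is not given in this card.
image = pipe("a photo of my pet dog sitting in a garden").images[0]
image.save("my_pet_dog.png")
```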
Sample pictures of this concept:
|
Tirendaz/distilbert-for-emotion
|
Tirendaz
| 2023-09-30T13:24:55Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-30T12:49:31Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-for-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-for-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
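Since no usage example is included, a minimal inference sketch with the `transformers` pipeline API is shown below; the label names returned depend on how the emotion classes were mapped during fine-tuning:

```python
from transformers import pipeline

# Assumption: the checkpoint on this repo works as a standard text-classification pipeline.
classifier = pipeline("text-classification", model="Tirendaz/distilbert-for-emotion")

print(classifier("I'm thrilled that the experiment finally worked!"))
# e.g. [{'label': 'joy', 'score': 0.98}] -- actual labels depend on the fine-tuning label map
```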
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.3785 | 0.896 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
eswar12345/my-pet-dog
|
eswar12345
| 2023-09-30T13:22:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-30T13:17:13Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by eswar12345 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
|
divyavanmahajan/distilbert-base-uncased-finetuned-cola
|
divyavanmahajan
| 2023-09-30T13:18:21Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:divyavanmahajan/distilbert-base-uncased-finetuned-cola",
"base_model:finetune:divyavanmahajan/distilbert-base-uncased-finetuned-cola",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-30T12:11:48Z |
---
license: apache-2.0
base_model: divyavanmahajan/distilbert-base-uncased-finetuned-cola
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [divyavanmahajan/distilbert-base-uncased-finetuned-cola](https://huggingface.co/divyavanmahajan/distilbert-base-uncased-finetuned-cola) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.8302
- eval_matthews_correlation: 0.5324
- eval_runtime: 0.6772
- eval_samples_per_second: 1540.061
- eval_steps_per_second: 97.453
- step: 0
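The Matthews correlation metric suggests this is a CoLA acceptability classifier; a hedged inference sketch (label names are not stated in the card):

```python
from transformers import pipeline

# Assumption: the checkpoint classifies grammatical acceptability (CoLA-style labels).
cola = pipeline(
    "text-classification",
    model="divyavanmahajan/distilbert-base-uncased-finetuned-cola",
)

print(cola("The book was written by the author."))   # expected: acceptable
print(cola("Book the was author by written."))       # expected: unacceptable
```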
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
soBeauty/20230928-7-xlm-roberta-base-new
|
soBeauty
| 2023-09-30T13:18:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-28T14:47:31Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 20230928-7-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230928-7-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.4740
- Loss: nan
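A minimal fill-mask sketch, assuming the fine-tuned checkpoint keeps XLM-RoBERTa's `<mask>` token:

```python
from transformers import pipeline

# Assumption: the fine-tuned model still uses XLM-RoBERTa's "<mask>" token.
unmasker = pipeline("fill-mask", model="soBeauty/20230928-7-xlm-roberta-base-new")

for prediction in unmasker("The capital of Thailand is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```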
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.4628 | 0.46 | 200 | 0.3220 | nan |
| 4.1552 | 0.91 | 400 | 0.3249 | nan |
| 3.8537 | 1.37 | 600 | 0.3822 | nan |
| 3.5294 | 1.82 | 800 | 0.3914 | 3.6973 |
| 3.378 | 2.28 | 1000 | 0.3770 | 3.5247 |
| 3.3717 | 2.73 | 1200 | 0.3795 | nan |
| 3.3769 | 3.19 | 1400 | 0.4033 | 2.9623 |
| 3.2682 | 3.64 | 1600 | 0.3975 | 3.3065 |
| 3.3275 | 4.1 | 1800 | 0.4603 | 3.0879 |
| 3.1686 | 4.56 | 2000 | 0.4385 | 2.8513 |
| 3.1107 | 5.01 | 2200 | 0.4419 | nan |
| 3.0418 | 5.47 | 2400 | 0.4372 | nan |
| 2.9602 | 5.92 | 2600 | 0.4792 | 2.8451 |
| 2.9038 | 6.38 | 2800 | 0.4772 | 2.7947 |
| 2.8495 | 6.83 | 3000 | 0.415 | 2.9448 |
| 2.9444 | 7.29 | 3200 | 0.4840 | nan |
| 2.8306 | 7.74 | 3400 | 0.4806 | 2.3816 |
| 2.8293 | 8.2 | 3600 | 0.4909 | 2.8671 |
| 2.7785 | 8.66 | 3800 | 0.5377 | 2.6516 |
| 2.7991 | 9.11 | 4000 | 0.5164 | nan |
| 2.8131 | 9.57 | 4200 | 0.4740 | nan |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jake-walker/q-FrozenLake-v1-4x4-noSlippery
|
jake-walker
| 2023-09-30T13:17:57Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-30T13:17:54Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or: import gymnasium as gym, depending on your setup

# `load_from_hub` is the helper function from the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="jake-walker/q-FrozenLake-v1-8x8-slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
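The loaded object is a plain dictionary. Assuming it follows the Deep RL course template, where the learned Q-table is stored under a `qtable` key (that key name is not stated in the card), a greedy rollout could look like:

```python
import numpy as np

# Assumption: the pickled dict stores the learned Q-table under "qtable".
qtable = np.array(model["qtable"])

state, info = env.reset()                     # older gym versions return only `state`
done = False
while not done:
    action = int(np.argmax(qtable[state]))    # act greedily with respect to the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated            # older gym: state, reward, done, info = env.step(action)
```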
|
LoneStriker/samantha-mistral-7b-5.0bpw-h6-exl2
|
LoneStriker
| 2023-09-30T13:11:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-30T12:48:39Z |
---
license: apache-2.0
---
Built on [mistral-7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base model, this Samantha was trained in 2 hours on 4x A100 80GB GPUs with 20 epochs of the Samantha-1.1 dataset.
My gratitude goes to my sponsors, a16z and Microsoft for Startups.
After first training Samantha on mistral-instruct, and seeing how she interacts, I felt that training on the base model would provide a more genuine Samantha experience. So I trained this one.
NEW! This Samantha is trained using ChatML prompt format instead of Vicuna-1.1. All of my future models will use ChatML prompt format.
[Meet Samantha](https://erichartford.com/meet-samantha)
Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
She will not engage in roleplay, romance, or sexual activity.
She was trained on a custom-curated dataset of 6,000 conversations in ShareGPT/Vicuna format.
Her conversation format is ChatML
https://github.com/openai/openai-python/blob/main/chatml.md
Prompt Template:
```
<|im_start|>system
You are a caring and empathetic sentient AI companion named Samantha.<|im_end|>
<|im_start|>user
Hello, what is your name?<|im_end|>
```
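When generating, you would normally also open the assistant turn yourself; below is a small sketch of assembling the full ChatML prompt string (the trailing `<|im_start|>assistant` line is standard ChatML usage and an assumption here, since the template above stops after the user turn):

```python
# Build a ChatML prompt for Samantha; the message contents are placeholders.
system = "You are a caring and empathetic sentient AI companion named Samantha."
user = "Hello, what is your name?"

prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(prompt)
```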
Example:

Official character card: (thanks MortalWombat)

Shout out and much thanks to WingLian, author of axolotl! And everyone who has contributed to the project.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
And much thanks as always to TheBloke for distribution.
|