| Column | Type | Range |
| ------------- | ---------------------- | ----------------- |
| modelId | stringlengths | 5-122 |
| author | stringlengths | 2-42 |
| last_modified | timestamp[us, tz=UTC] | |
| downloads | int64 | 0-738M |
| likes | int64 | 0-11k |
| library_name | stringclasses | 245 values |
| tags | sequencelengths | 1-4.05k |
| pipeline_tag | stringclasses | 48 values |
| createdAt | timestamp[us, tz=UTC] | |
| card | stringlengths | 1-901k |
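The schema above describes a model-card metadata dump. The sketch below shows one way to work with such a dataset using the `datasets` library; the repository id is only an illustrative placeholder, not something stated in this dump.

```python
# Hypothetical usage sketch: the repo id below is an illustrative placeholder,
# not taken from this dump. The column names match the schema above.
from datasets import load_dataset

ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train")  # placeholder repo id

# Filter to popular text-generation models and inspect one record.
popular = ds.filter(lambda row: row["downloads"] > 1000 and row["pipeline_tag"] == "text-generation")
print(popular[0]["modelId"], popular[0]["likes"])
print(popular[0]["card"][:200])
```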
AndyChiang/cdgp-csg-bert-cloth
AndyChiang
2022-10-14T13:24:08Z
766
3
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "cloze", "distractor", "generation", "en", "dataset:cloth", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-13T14:15:34Z
--- license: mit language: en tags: - bert - cloze - distractor - generation datasets: - cloth widget: - text: "I feel [MASK] now. [SEP] happy" - text: "The old man was waiting for a ride across the [MASK]. [SEP] river" --- # cdgp-csg-bert-cloth ## Model description This model is a Candidate Set Generator in **"CDGP: Automatic Cloze Distractor Generation based on Pre-trained Language Model", Findings of EMNLP 2022**. Its inputs are the stem and the answer, and its output is a candidate set of distractors. It is fine-tuned on the [**CLOTH**](https://www.cs.cmu.edu/~glai1/data/cloth/) dataset, based on the [**bert-base-uncased**](https://huggingface.co/bert-base-uncased) model. For more details, you can see our **paper** or [**GitHub**](https://github.com/AndyChiangSH/CDGP). ## How to use? 1. Download the model with Hugging Face Transformers. ```python from transformers import BertTokenizer, BertForMaskedLM, pipeline tokenizer = BertTokenizer.from_pretrained("AndyChiang/cdgp-csg-bert-cloth") csg_model = BertForMaskedLM.from_pretrained("AndyChiang/cdgp-csg-bert-cloth") ``` 2. Create an unmasker. ```python unmasker = pipeline("fill-mask", tokenizer=tokenizer, model=csg_model, top_k=10) ``` 3. Use the unmasker to generate the candidate set of distractors. ```python sent = "I feel [MASK] now. [SEP] happy" cs = unmasker(sent) print(cs) ``` ## Dataset This model is fine-tuned on the [CLOTH](https://www.cs.cmu.edu/~glai1/data/cloth/) dataset, a collection of nearly 100,000 cloze questions from middle school and high school English exams. The details of the CLOTH dataset are shown below. | Number of questions | Train | Valid | Test | | ------------------- | ----- | ----- | ----- | | Middle school | 22056 | 3273 | 3198 | | High school | 54794 | 7794 | 8318 | | Total | 76850 | 11067 | 11516 | You can also use the [dataset](https://huggingface.co/datasets/AndyChiang/cloth) we have already cleaned. ## Training We fine-tune the model with a special method called **"Answer-Relating Fine-Tune"**; more details are in our paper.
### Training hyperparameters The following hyperparameters were used during training: - Pre-train language model: [bert-base-uncased](https://huggingface.co/bert-base-uncased) - Optimizer: adam - Learning rate: 0.0001 - Max length of input: 64 - Batch size: 64 - Epoch: 1 - Device: NVIDIA® Tesla T4 in Google Colab ## Testing The evaluations of this model as a Candidate Set Generator in CDGP is as follows: | P@1 | F1@3 | F1@10 | MRR | NDCG@10 | | ----- | ----- | ----- | ----- | ------- | | 18.50 | 13.80 | 15.37 | 29.96 | 37.82 | ## Other models ### Candidate Set Generator | Models | CLOTH | DGen | | ----------- | ----------------------------------------------------------------------------------- | -------------------------------------------------------------------------------- | | **BERT** | [*cdgp-csg-bert-cloth*](https://huggingface.co/AndyChiang/cdgp-csg-bert-cloth) | [cdgp-csg-bert-dgen](https://huggingface.co/AndyChiang/cdgp-csg-bert-dgen) | | **SciBERT** | [cdgp-csg-scibert-cloth](https://huggingface.co/AndyChiang/cdgp-csg-scibert-cloth) | [cdgp-csg-scibert-dgen](https://huggingface.co/AndyChiang/cdgp-csg-scibert-dgen) | | **RoBERTa** | [cdgp-csg-roberta-cloth](https://huggingface.co/AndyChiang/cdgp-csg-roberta-cloth) | [cdgp-csg-roberta-dgen](https://huggingface.co/AndyChiang/cdgp-csg-roberta-dgen) | | **BART** | [cdgp-csg-bart-cloth](https://huggingface.co/AndyChiang/cdgp-csg-bart-cloth) | [cdgp-csg-bart-dgen](https://huggingface.co/AndyChiang/cdgp-csg-bart-dgen) | ### Distractor Selector **fastText**: [cdgp-ds-fasttext](https://huggingface.co/AndyChiang/cdgp-ds-fasttext) ## Citation None
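As a usage note for the candidate-set output described above: a minimal sketch (an illustration, assuming the standard `transformers` fill-mask pipeline output of dicts with `token_str` and `score`) of collecting the unmasker results into a plain distractor list.

```python
# Illustrative post-processing sketch (not from the original card): turn the
# fill-mask pipeline output into a list of distractor candidates.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="AndyChiang/cdgp-csg-bert-cloth", top_k=10)
candidates = unmasker("I feel [MASK] now. [SEP] happy")

# Each entry is a dict with 'token_str' (the predicted word) and 'score'.
distractor_candidates = [c["token_str"].strip() for c in candidates]
print(distractor_candidates)
```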
nickprock/mmarco-bert-base-italian-uncased
nickprock
2023-12-03T16:40:25Z
766
2
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "mteb", "it", "dataset:unicamp-dl/mmarco", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2023-05-18T10:02:06Z
--- model-index: - name: mmarco-bert-base-italian-uncased results: - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (it) config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 55.06052454606589 - type: f1 value: 54.014768121214104 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (it) config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.04303967720243 - type: f1 value: 62.695230714417406 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (it) config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.73840574137837 - type: cos_sim_spearman value: 69.44233124548987 - type: euclidean_pearson value: 67.65045364124317 - type: euclidean_spearman value: 69.586510471675 - type: manhattan_pearson value: 67.76125181623837 - type: manhattan_spearman value: 69.61010945802974 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb license: mit datasets: - unicamp-dl/mmarco language: - it library_name: sentence-transformers region: Italy --- # MMARCO-bert-base-italian-uncased This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer, util query = "Quante persone vivono a Londra?" docs = ["A Londra vivono circa 9 milioni di persone", "Londra è conosciuta per il suo quartiere finanziario"] #Load the model model = SentenceTransformer('nickprock/mmarco-bert-base-italian-uncased') #Encode query and documents query_emb = model.encode(query) doc_emb = model.encode(docs) #Compute dot score between query and all document embeddings scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist() #Combine docs & scores doc_score_pairs = list(zip(docs, scores)) #Sort by decreasing score doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) #Output passages & scores for doc, score in doc_score_pairs: print(score, doc) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. 
```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output.last_hidden_state input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) #Encode text def encode(texts): # Tokenize sentences encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input, return_dict=True) # Perform pooling embeddings = mean_pooling(model_output, encoded_input['attention_mask']) return embeddings # Sentences we want sentence embeddings for query = "Quante persone vivono a Londra?" docs = ["A Londra vivono circa 9 milioni di persone", "Londra è conosciuta per il suo quartiere finanziario"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained("nickprock/mmarco-bert-base-italian-uncased") model = AutoModel.from_pretrained("nickprock/mmarco-bert-base-italian-uncased") #Encode query and docs query_emb = encode(query) doc_emb = encode(docs) #Compute dot score between query and all document embeddings scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist() #Combine docs & scores doc_score_pairs = list(zip(docs, scores)) #Sort by decreasing score doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) #Output passages & scores print("Query:", query) for doc, score in doc_score_pairs: print(score, doc) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 6250 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 500, "evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 1500, "warmup_steps": 6250, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
CorticalStack/mistral-7b-metamathqa-sft
CorticalStack
2024-02-17T20:22:33Z
766
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-17T20:20:20Z
--- license: apache-2.0 --- # mistral-7b-metamathqa-sft mistral-7b-metamathqa-sft is an SFT fine-tuned version of [unsloth/mistral-7b-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-bnb-4bit) using the [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) dataset. ## Fine-tuning configuration ### LoRA - r: 256 - LoRA alpha: 128 - LoRA dropout: 0.0 ### Training arguments - Epochs: 1 - Batch size: 4 - Gradient accumulation steps: 6 - Optimizer: adamw_torch_fused - Max steps: 100 - Learning rate: 0.0002 - Weight decay: 0.1 - Learning rate scheduler type: linear - Max seq length: 2048 - 4-bit bnb: True Trained with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
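The card above gives the fine-tuning setup but no inference snippet. Below is a minimal usage sketch (an assumption, not part of the original card) using the standard `transformers` text-generation pipeline with a GSM8K-style prompt:

```python
# Minimal inference sketch (assumption, not from the model card).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="CorticalStack/mistral-7b-metamathqa-sft",
    device_map="auto",
)

prompt = "James writes a 3-page letter to 2 different friends twice a week. How many pages does he write a year?"
print(generator(prompt, max_new_tokens=256, do_sample=False)[0]["generated_text"])
```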
minatolegend0311/5HGWa2W856A8avYLWDtYDWLgfGQq4WFdQZc4NCpYP9v2XupJ_vgg
minatolegend0311
2024-03-12T13:00:06Z
766
0
keras
[ "keras", "region:us" ]
null
2024-03-05T11:47:51Z
Entry not found
ResplendentAI/Asherah_7B
ResplendentAI
2024-03-21T20:40:13Z
766
6
transformers
[ "transformers", "safetensors", "gguf", "mistral", "text-generation", "mergekit", "merge", "en", "dataset:ResplendentAI/Synthetic_Soul_1k", "dataset:Epiculous/Gnosis", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-21T01:18:02Z
--- library_name: transformers tags: - mergekit - merge license: apache-2.0 datasets: - ResplendentAI/Synthetic_Soul_1k - Epiculous/Gnosis language: - en --- # Asherah GGUF here: https://huggingface.co/Lewdiculous/Asherah_7B-GGUF-IQ-Imatrix ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/PwebDlwW-mPHC8yQwV2mF.png) Asherah, goddess of all creation according to ancient myth was a huge inspiration for this model. The model started with a merge of four of Sanji Watsuki's models using various methods. This merge was then finetuned on Gnosis and Synthetic Soul, two datasets penned by myself. You can use this as mmproj: https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/mmproj-model-f16.gguf I have also included a folder in the repo containing this file. It will be necessary for multimodal GGUF users. I recommend Koboldcpp. Multimodal functionality is limited to GGUF users at this time. You can still use this model as a standard LLM.
CultriX/MonaCeption-7B-SLERP-DPO
CultriX
2024-04-15T12:08:52Z
766
1
transformers
[ "transformers", "safetensors", "gguf", "endpoints_compatible", "region:us" ]
null
2024-04-14T02:44:21Z
Found. Redirecting to README.md
KeyonZeng/lion-llama3-8b
KeyonZeng
2024-04-22T14:38:30Z
766
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:mlabonne/orpo-dpo-mix-40k", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-22T05:41:26Z
--- library_name: transformers license: apache-2.0 metrics: - accuracy datasets: - mlabonne/orpo-dpo-mix-40k --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jeiku/Average_Normie_l3_v1_8B
jeiku
2024-04-23T22:46:56Z
766
7
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:grimulkan/theory-of-mind", "base_model:jeiku/Average_Normie_l3_v0_8B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-23T22:04:24Z
--- base_model: - jeiku/Average_Normie_l3_v0_8B - ResplendentAI/Theory_of_Mind_Llama3 library_name: transformers license: apache-2.0 datasets: - grimulkan/theory-of-mind language: - en --- # Average Normie v1 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/dvNIj1rSTjBvgs3XJfqXK.png) A model by an average normie for the average normie. This model is a stock merge of the following models: https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3 https://huggingface.co/Sao10K/L3-Solana-8B-v1 https://huggingface.co/ResplendentAI/Kei_Llama3_8B The final merge then had the following LoRA applied over it: https://huggingface.co/ResplendentAI/Theory_of_Mind_Llama3 This should be an intelligent and adept roleplaying model.
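The card describes a stock merge with a LoRA applied on top. The sketch below illustrates the general technique of applying and merging a LoRA adapter with `peft`; it is an assumption for illustration, not the author's recorded procedure, and it presumes the linked repo is a peft-format adapter.

```python
# Illustrative sketch only: applying and merging a LoRA over a base model with peft.
# Assumption: the Theory_of_Mind_Llama3 repo is a peft-format LoRA adapter.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("jeiku/Average_Normie_l3_v0_8B")
with_lora = PeftModel.from_pretrained(base, "ResplendentAI/Theory_of_Mind_Llama3")
merged = with_lora.merge_and_unload()  # bake the adapter weights into the base model
merged.save_pretrained("Average_Normie_l3_v1_8B-merged")
```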
cstr/llama3-8b-spaetzle-v13
cstr
2024-05-07T13:54:32Z
766
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "Azure99/blossom-v5-llama3-8b", "conversational", "base_model:Azure99/blossom-v5-llama3-8b", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-03T20:37:45Z
--- tags: - merge - mergekit - lazymergekit - Azure99/blossom-v5-llama3-8b base_model: - Azure99/blossom-v5-llama3-8b license: llama3 --- # llama3-8b-spaetzle-v13 llama3-8b-spaetzle-v13 is a merge of the following models: * [Azure99/blossom-v5-llama3-8b](https://huggingface.co/Azure99/blossom-v5-llama3-8b) * [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct) No change in llama3 prompt format template. ## Benchmarks It should work not too bad for German or English, e.g. it achieves 64.14(.10) on EQ Bench v2_de with 170(171)/171 per [q4km GGUF](https://huggingface.co/cstr/llama3-8b-spaetzle-v13-GGUF) (bpe fixed). And for English EQ-Bench Score (v2): 75.59, Parseable: 171. | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | |------------|--------|-----------|--------|------------|------------|--------| | 71.26 | 68.69 | 85.05 | 68.06 | 59.43 | 79.24 | 67.1 | ## Sample output ``` >>> What is 7777 + 3333? To find the sum, I'll add the two numbers together: 7777 + 3333 = 11110 So the answer is 11110. >>> I have 10 apples. I find 3 gold coins in the bottom of a river. The river runs near a big city that has something to do with what I can spend the coins on ... . I then lose 4 apples but gain a gold coin. Three birds run into my path and drop 6 apples each. I play an online game and win 6 gold coins but I have to ... share them equally with my 2 teammates. I buy apples for all the coins I have. The price of an apple is 0.5 coins. How many apples do I have? And where i ... s the river? Let's break down what you've gained and lost: * You found 3 gold coins in the river. * You lost 4 apples, so let's subtract those from your initial 10: 10 - 4 = 6 apples left. * Then you gained a gold coin, but that doesn't affect the number of apples. * The birds drop 6 apples each, and there are three birds, so that's an extra 18 apples: 6 (per bird) * 3 (birds) = 18. * You now have a total of 6 (initial apples) + 18 (from birds) = 24 apples. * In the online game, you won 6 gold coins, but you had to share them equally with your teammates. That means each of you got 2 coins: 6 / 3 = 2 coins per person. * You now have a total of 3 (initial coins) + 1 (new coin) + 2 (game winnings, shared) = 6 gold coins. * With the 6 gold coins, you can buy apples at a rate of 0.5 coins per apple. To find out how many apples you can buy, divide your total coins by the cost per apple: 6 / 0.5 = 12 apples. * Since you already have 24 apples, let's add the new ones to that: 24 + 12 = 36 apples. So, you now have a total of 36 apples. As for the river, it's not specified which city it runs near, so we can't pinpoint its exact location. But at least you found those coins! >>> Two years ago, I was three times as old as my brother was. In three years, I will be twice as old as my brother. How old is each of us now? Let's define the current age of you and your brother with variables. You are Y years old. Your brother is X years old. Two years ago: You were Y - 2 years old (because we're looking back in time). Your brother was X - 2 years old. Since two years ago, you were three times as old as your brother was, the equation would be: Y - 2 = 3(X - 2) Simplify and solve for X: Y - 2 = 3X - 6 Y = 3X - 4 Now we know that Y (your current age) is three times your brother's age minus four. Let's use this to find your current age. In three years: You will be Y + 3 years old. Your brother will be X + 3 years old. 
Since in three years, you'll be twice as old as your brother: Y + 3 = 2(X + 3) Simplify and solve for X (your brother's current age): Y + 3 = 2X + 6 Y - 2X = 3 (3X - 4) - 2X = 3 X - 4 = 3 X = 7 Now that we have your brother's current age, we can find yours: Y = 3X - 4 = 3(7) - 4 = 21 - 4 = 17 So you are currently 17 years old, and your brother is 7 years old. ``` ## 🧩 Configuration ```yaml models: - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct # no parameters necessary for base model - model: Azure99/blossom-v5-llama3-8b parameters: density: 0.65 weight: 0.4 merge_method: dare_ties base_model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct parameters: int8_mask: true dtype: bfloat16 random_seed: 0 tokenizer_source: base ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "cstr/llama3-8b-spaetzle-v13" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Yntec/AnythingV4.5.6.7.8
Yntec
2024-05-15T09:42:09Z
766
1
diffusers
[ "diffusers", "safetensors", "anime", "general", "compositions", "andite", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-15T09:06:58Z
--- language: - en license: creativeml-openrail-m tags: - anime - general - compositions - andite - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # Anything V4.5 678x678 version of this model for the inference API. I'm intentionally mispelling 768 as 678 because I'm not getting another chance of putting so many continuous numbers together. I'm joking because apparently this model and Anything v4.0 were just joke models made by andite. What a joke! And I mean it in the good sense. I said this one was one of my favorite models so it's weird I didn't make a version like this until now. I consider it a "low cherry picking" model that will not require a high number of attempts to give you something good, and it has the best compositions from all the Anything based models, though others beat it at other aspects. It has the 840K VAE baked in. Samples and prompts: ![Free online AI image generator Anything 4.5](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/0hz4HVBhYnz3yIosaD0pk.png) (Click for larger) Top left: Realistic girl standing. Very cute anime faces, chibi art, flawless, painting by gaston bussiere, charles sillem lidderdale. perfect face, full body, baby, masterpiece, highest quality, 1girl, blue eyes, sweater, Pretty CUTE GIRL, skirt, highly detailed Top right: Full body picture of a bikini girl making pie in school, detailed brown eyes, short smile, beautiful and aesthetic, intricate, neat hair, highly detailed, detailed face, smooth, sharp focus, chiaroscuro, magazine ad, 1949, 2D Game Art, anime on canvas, rossdraws, clay mann, ART, light novel cover art Bottom left: pretty cute little girl, Library Background,action,Pigtails,Crew Top,final fantasy, socks, chibi, masterpiece, detailed, highres, 90s Bottom right:Highly detailed, High Quality, Masterpiece, heroic, strong man as toon link, teal headwear, sword Original page: https://huggingface.co/shibal1/anything-v4.5-clone
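For reference, a minimal generation sketch (an assumption, not part of the original card) using the `diffusers` StableDiffusionPipeline with one of the sample prompts above:

```python
# Minimal sketch (assumption, not from the card): run this checkpoint with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/AnythingV4.5.6.7.8", torch_dtype=torch.float16
).to("cuda")

prompt = ("pretty cute little girl, Library Background, action, Pigtails, Crew Top, "
          "final fantasy, socks, chibi, masterpiece, detailed, highres, 90s")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("anything_v4.5_sample.png")
```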
mradermacher/llama-3-8B-rpg-v4-i1-GGUF
mradermacher
2024-05-28T03:39:54Z
766
1
transformers
[ "transformers", "gguf", "en", "base_model:osouza/llama-3-8B-rpg-v4", "endpoints_compatible", "region:us" ]
null
2024-05-27T01:11:19Z
--- base_model: osouza/llama-3-8B-rpg-v4 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/osouza/llama-3-8B-rpg-v4 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF/resolve/main/llama-3-8B-rpg-v4.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
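If you prefer Python over the llama.cpp CLI, here is a minimal sketch (an assumption, not part of the original card) that downloads one of the quants listed above and runs it with `llama-cpp-python`:

```python
# Minimal sketch (assumption, not from the card): download a quant from the table
# above and run it with llama-cpp-python. The filename matches the i1-Q4_K_M row.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/llama-3-8B-rpg-v4-i1-GGUF",
    filename="llama-3-8B-rpg-v4.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("You step into the tavern and", max_tokens=64)
print(out["choices"][0]["text"])
```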
QuantFactory/Turkish-Llama-8b-v0.1-GGUF
QuantFactory
2024-05-27T09:25:17Z
766
3
transformers
[ "transformers", "gguf", "Turkish", "turkish", "Llama", "Llama3", "text-generation", "tr", "base_model:ytu-ce-cosmos/Turkish-Llama-8b-v0.1", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
2024-05-27T08:01:21Z
--- license: llama3 language: - tr pipeline_tag: text-generation base_model: ytu-ce-cosmos/Turkish-Llama-8b-v0.1 tags: - Turkish - turkish - Llama - Llama3 library_name: transformers --- # Turkish-Llama-8b-v0.1-GGUF This is quantized version of [ytu-ce-cosmos/Turkish-Llama-8b-v0.1](https://huggingface.co/ytu-ce-cosmos/Turkish-Llama-8b-v0.1) created using llama.cpp # Cosmos LLaMa This model is a fully fine-tuned version of the LLaMA-3 8B model with a 30GB Turkish dataset. The Cosmos LLaMa is designed for text generation tasks, providing the ability to continue a given text snippet in a coherent and contextually relevant manner. Due to the diverse nature of the training data, which includes websites, books, and other text sources, this model can exhibit biases. Users should be aware of these biases and use the model responsibly. ## Example Usage Here is an example of how to use the model in colab: ```python !pip install -U accelerate bitsandbytes ``` ```python import torch from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM from transformers import BitsAndBytesConfig import time model_name = "ytu-ce-cosmos/Turkish-Llama-8b-v0.1" bnb_config = BitsAndBytesConfig( load_in_8bit=True, bnb_8bit_compute_dtype=torch.bfloat16, load_in_8bit_fp32_cpu_offload=True, device_map = 'auto' ) tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=bnb_config, ) ``` ```python text_generator = pipeline( "text-generation", model=model, tokenizer=tokenizer, device_map="auto", temperature=0.3, repetition_penalty=1.1, top_p=0.9, max_length=610, do_sample=True, return_full_text=False, min_new_tokens=32 ) ``` ```python text = """Yapay zeka hakkında 3 tespit yaz.\n""" r = text_generator(text) print(r[0]['generated_text']) """ 1. Yapay Zeka (AI), makinelerin insan benzeri bilişsel işlevleri gerçekleştirmesini sağlayan bir teknoloji alanıdır. 2. Yapay zekanın geliştirilmesi ve uygulanması, sağlık hizmetlerinden eğlenceye kadar çeşitli sektörlerde çok sayıda fırsat sunmaktadır. 3. Yapay zeka teknolojisinin potansiyel faydaları önemli olsa da mahremiyet, işten çıkarma ve etik hususlar gibi konularla ilgili endişeler de var. """ ``` # Acknowledgments - Thanks to the generous support from the Hugging Face team, it is possible to download models from their S3 storage 🤗 - Computing resources used in this work were provided by the National Center for High Performance Computing of Turkey (UHeM) under grant numbers 1016912023 and 1018512024 - Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) ### Cosmos Group Contact COSMOS AI Research Group, Yildiz Technical University Computer Engineering Department <br> https://cosmos.yildiz.edu.tr/ <br> [email protected] --- license: llama3 ---
larenspear/Yi-1.5-6B-Chat-Q5_K_M-GGUF
larenspear
2024-07-01T01:50:12Z
766
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:01-ai/Yi-1.5-6B-Chat", "license:apache-2.0", "region:us" ]
null
2024-07-01T01:49:54Z
--- base_model: 01-ai/Yi-1.5-6B-Chat license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # larenspear/Yi-1.5-6B-Chat-Q5_K_M-GGUF This model was converted to GGUF format from [`01-ai/Yi-1.5-6B-Chat`](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo larenspear/Yi-1.5-6B-Chat-Q5_K_M-GGUF --hf-file yi-1.5-6b-chat-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo larenspear/Yi-1.5-6B-Chat-Q5_K_M-GGUF --hf-file yi-1.5-6b-chat-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo larenspear/Yi-1.5-6B-Chat-Q5_K_M-GGUF --hf-file yi-1.5-6b-chat-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo larenspear/Yi-1.5-6B-Chat-Q5_K_M-GGUF --hf-file yi-1.5-6b-chat-q5_k_m.gguf -c 2048 ```
google/pix2struct-ai2d-base
google
2023-12-24T21:14:49Z
765
41
transformers
[ "transformers", "pytorch", "safetensors", "pix2struct", "text2text-generation", "visual-question-answering", "en", "fr", "ro", "de", "multilingual", "arxiv:2210.03347", "license:apache-2.0", "autotrain_compatible", "region:us" ]
visual-question-answering
2023-03-14T10:02:51Z
--- language: - en - fr - ro - de - multilingual inference: false pipeline_tag: visual-question-answering license: apache-2.0 --- # Model card for Pix2Struct - Finetuned on AI2D (scientific diagram VQA) ![model_image](https://s3.amazonaws.com/moonup/production/uploads/1678713353867-62441d1d9fdefb55a0b7d12c.png) # Table of Contents 0. [TL;DR](#TL;DR) 1. [Using the model](#using-the-model) 2. [Contribution](#contribution) 3. [Citation](#citation) # TL;DR Pix2Struct is an image encoder - text decoder model that is trained on image-text pairs for various tasks, including image captionning and visual question answering. The full list of available models can be found on the Table 1 of the paper: ![Table 1 - paper](https://s3.amazonaws.com/moonup/production/uploads/1678712985040-62441d1d9fdefb55a0b7d12c.png) The abstract of the model states that: > Visually-situated language is ubiquitous—sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domainspecific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images. # Using the model This model has been fine-tuned on VQA, you need to provide a question in a specific format, ideally in the format of a Choices question answering ## Running the model ### In full precision, on CPU: You can run the model in full precision on CPU: ```python import requests from PIL import Image from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg" image = Image.open(requests.get(image_url, stream=True).raw) model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-base") processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-base") question = "What does the label 15 represent? 
(1) lava (2) core (3) tunnel (4) ash cloud" inputs = processor(images=image, text=question, return_tensors="pt") predictions = model.generate(**inputs) print(processor.decode(predictions[0], skip_special_tokens=True)) >>> ash cloud ``` ### In full precision, on GPU: You can run the model in full precision on CPU: ```python import requests from PIL import Image from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg" image = Image.open(requests.get(image_url, stream=True).raw) model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-base").to("cuda") processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-base") question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud" inputs = processor(images=image, text=question, return_tensors="pt").to("cuda") predictions = model.generate(**inputs) print(processor.decode(predictions[0], skip_special_tokens=True)) >>> ash cloud ``` ### In half precision, on GPU: You can run the model in full precision on CPU: ```python import requests from PIL import Image import torch from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg" image = Image.open(requests.get(image_url, stream=True).raw) model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-base", torch_dtype=torch.bfloat16).to("cuda") processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-base") question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud" inputs = processor(images=image, text=question, return_tensors="pt").to("cuda", torch.bfloat16) predictions = model.generate(**inputs) print(processor.decode(predictions[0], skip_special_tokens=True)) >>> ash cloud ``` ## Converting from T5x to huggingface You can use the [`convert_pix2struct_checkpoint_to_pytorch.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pix2struct/convert_pix2struct_checkpoint_to_pytorch.py) script as follows: ```bash python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --is_vqa ``` if you are converting a large model, run: ```bash python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large --is_vqa ``` Once saved, you can push your converted model with the following snippet: ```python from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE) processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE) model.push_to_hub("USERNAME/MODEL_NAME") processor.push_to_hub("USERNAME/MODEL_NAME") ``` # Contribution This model was originally contributed by Kenton Lee, Mandar Joshi et al. and added to the Hugging Face ecosystem by [Younes Belkada](https://huggingface.co/ybelkada). 
# Citation If you want to cite this work, please consider citing the original paper: ``` @misc{https://doi.org/10.48550/arxiv.2210.03347, doi = {10.48550/ARXIV.2210.03347}, url = {https://arxiv.org/abs/2210.03347}, author = {Lee, Kenton and Joshi, Mandar and Turc, Iulia and Hu, Hexiang and Liu, Fangyu and Eisenschlos, Julian and Khandelwal, Urvashi and Shaw, Peter and Chang, Ming-Wei and Toutanova, Kristina}, keywords = {Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
timm/swinv2_tiny_window16_256.ms_in1k
timm
2024-02-10T23:31:15Z
765
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2111.09883", "license:mit", "region:us" ]
image-classification
2023-03-18T03:37:48Z
--- license: mit library_name: timm tags: - image-classification - timm datasets: - imagenet-1k --- # Model card for swinv2_tiny_window16_256.ms_in1k A Swin Transformer V2 image classification model. Pretrained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 28.3 - GMACs: 6.7 - Activations (M): 39.0 - Image size: 256 x 256 - **Papers:** - Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883 - **Original:** https://github.com/microsoft/Swin-Transformer - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('swinv2_tiny_window16_256.ms_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'swinv2_tiny_window16_256.ms_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g. for swin_base_patch4_window7_224 (NHWC output) # torch.Size([1, 56, 56, 128]) # torch.Size([1, 28, 28, 256]) # torch.Size([1, 14, 14, 512]) # torch.Size([1, 7, 7, 1024]) # e.g. 
for swinv2_cr_small_ns_224 (NCHW output) # torch.Size([1, 96, 56, 56]) # torch.Size([1, 192, 28, 28]) # torch.Size([1, 384, 14, 14]) # torch.Size([1, 768, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'swinv2_tiny_window16_256.ms_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled (ie.e a (batch_size, H, W, num_features) tensor for swin / swinv2 # or (batch_size, num_features, H, W) for swinv2_cr output = model.forward_head(output, pre_logits=True) # output is (batch_size, num_features) tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{liu2021swinv2, title={Swin Transformer V2: Scaling Up Capacity and Resolution}, author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo}, booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2022} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
Xwin-LM/Xwin-Math-70B-V1.0
Xwin-LM
2024-06-03T04:09:09Z
765
9
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2405.20335", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-20T13:43:04Z
--- license: llama2 --- # Xwin-Math <p align="center"> <a href="https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Math"><img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github"></a> <a href="https://huggingface.co/Xwin-LM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"></a> </p> Xwin-Math is a series of powerful SFT LLMs for math problem based on LLaMA-2. ## 🔥 News - 💥 [Nov, 2023] The [Xwin-Math-70B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-70B-V1.0) model achieves **31.8 pass@1 on the MATH benchmark** and **87.0 pass@1 on the GSM8K benchmark**. This performance places it first amongst all open-source models! - 💥 [Nov, 2023] The [Xwin-Math-7B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.0) and [Xwin-Math-13B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-13B-V1.0) models achieve **66.6 and 76.2 pass@1 on the GSM8K benchmark**, ranking as top-1 among all LLaMA-2 based 7B and 13B open-source models, respectively! ## ✨ Model Card | Model | GSM8K | MATH | Checkpoint | License | |:-:|:-:|:-:|:-:|:-:| |Xwin-Math-7B-V1.0 | 66.6 | 17.4 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.0" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| |Xwin-Math-13B-V1.0| 76.2 | 21.7 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-13B-V1.0" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| |Xwin-Math-70B-V1.0| 87.0 | 31.8 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-70B-V1.0" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| ## 🚀 Benchmarks ### Xwin-Math performance on [MATH](https://github.com/hendrycks/math) and [GSM8K](https://github.com/openai/grade-school-math). Xwin-Math-70B-V1.0 has achieved **31.8% on MATH** and **87.0% on GSM8K**. These scores are **5.3** and **3.1** points higher, respectively, than the previous state-of-the-art open-source MetaMath and LEMAv1 model. | **Model** |**MATH (Our test)** | **GSM8K (Our test)** | |:-:|:-:|:-:| | GPT-4 (zero-shot) | 52.4 | 94.8 | | GPT-35-Turbo (8-shot)| 37.1 | 81.0 | | | | WizardMath-70B | 23.9 | 81.1 | | MAmmoTH-70B | 20.8 | 72.6 | | MetaMath-70B | 26.5 | 82.0 | | LEMAv1-70B | 25.9 | 83.9 | |**Xwin-Math-70B-V1.0** |**31.8**|**87.0**| | | | WizardMath-13B | 15.0 | 63.7 | | MAmmoTH-13B | 12.3 | 56.2 | | MetaMath-13B | 22.7 | 70.9 | | LEMAv1-13B | 13.6 | 65.0 | |**Xwin-Math-13B-V1.0** | 21.7 | 76.2 | | | | WizardMath-7B | 10.9 | 55.0 | | MAmmoTH-7B | 9.6 | 50.2 | | MetaMath-7B | 20.1 | 66.6 | | LEMAv1-7B | 10.0 | 54.7 | |**Xwin-Math-7B-V1.0** | 17.4 | 66.6 | We obtain these results using our flexible evaluation strategy. Due to differences in environment and hardware, the numbers may be different from the reported results, but we ensure that the evaluation is as accurate and fair as possible. ### Xwin-Math performance on other math benchmarks. Our 70B model shows strong mathematical synthesis capabilities among all open-sourced models. Also note that our model even approaches or surpasses the performance of GPT-35-Turbo on some benchmarks. 
| **Model** | SVAMP | ASDiv | NumGlue | Algebra | MAWPS | **Average** | |:-:|:-:|:-:|:-:|:-:|:-:|:-:| | GPT-35-Turbo (8-shot)| 80.6 | 84.1 | 81.8 | 90.5 | 91.7 | 85.7 | | | | WizardMath-70B | 80.2 | 75.8 | 71.4 | 64.0 | 74.9 | 73.3 | | MAmmoTH-70B | 71.2 | 73.9 | 62.7 | 58.1 | 72.2 | 67.6 | | MetaMath-70B | 85.8 | 81.1 | 77.5 | 79.7 | 81.4 | 81.1 | | LEMAv1-70B-MATH * | 81.6 | 77.1 | 72.1 | 69.4 | 81.8 | 76.5 | |**Xwin-Math-70B-V1.0** | 84.0 | 84.1 | 81.3 | 78.4 | 90.8 | 83.7 | \* LEMAv1 has two models, and we report the better LEMAv1-70B-MATH model in these benchmarks. ## 🔨 Evaluation In order to evaluate a model's mathematical capabilities more flexibly and ensure a fair comparison of results, particularly for the MATH benchmark, we have developed a new evaluation tool. We have also assessed the pass@1 results of recent models on MATH and GSM8K benchmarks, which provides more accurate results. We hope this toolkit can benefit open-source community by providing more accurate insights and conclusions. For a deeper understanding of our evaluation tool and methods, please visit [here](https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Math/eval) * "Report" refers to the accuracy stated in the original papers. * "Repro" indicates the results is reproduced by generating responses and evaluating them using the respective open-source models and scripts. * "Strict" and "Flex" denote the results we achieved by employing our two strategies to extract answer and evaluate the same responses as "Repro". | Model | MATH <br> (Report) <br/> |MATH <br> (Repro) <br/> | MATH <br> (Strict) <br/> |MATH <br> (Flex) <br/> | GSM8K <br> (Report) <br/> |GSM8K <br> (Repro) <br/>| GSM8K <br> (Strict) <br/> | GSM8K <br> (Report) <br/> | |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | GPT-35-Turbo (8-shot)| 34.1 | - | 23.8 | 37.1 | 80.8 | - | 77.9 | 81.0 | | | | WizardMath-70B | 22.7 | 23.0 | 23.9 | 23.9 | 81.6 | 81.4 | 81.1 | 81.1 | | MAmmoTH-70B | 21.1 | 18.0 | 20.0 | 20.8 | 72.4 | 72.6 | 72.6 | 72.6 | | MetaMath-70B | 26.6 | 25.9 | 26.3 | 26.5 | 82.3 | 82.3 | 82.0 | 82.0 | |**Xwin-Math-70B-V1.0** | - | - |**31.8**|**31.8**| - | - |**87.0**|**87.0**| | | | WizardMath-13B | 14.0 | 14.2 | 14.9 | 15.0 | 63.9 | 63.9 | 63.7 | 63.7 | | MAmmoTH-13B | 12.9 | 10.8 | 11.8 | 12.3 | 56.3 | 56.2 | 56.1 | 56.2 | | MetaMath-13B | 22.4 | 22.5 | 22.6 | 22.7 | 72.3 | 71.0 | 70.9 | 70.9 | |**Xwin-Math-13B-V1.0** | - | - | 21.6 | 21.7 | - | - | 76.2 | 76.2 | | | | WizardMath-7B | 10.7 | 10.3 | 10.9 | 10.9 | 54.9 | 55.2 | 55.0 | 55.0 | | MAmmoTH-7B | 10.4 | 8.6 | 9.1 | 9.6 | 50.5 | 50.2 | 50.2 | 50.2 | | MetaMath-7B | 19.8 | 19.6 | 19.9 | 20.1 | 66.5 | 66.6 | 66.6 | 66.6 | |**Xwin-Math-7B-V1.0** | - | - | 17.3 | 17.4 | - | - | 66.6 | 66.6 | ### Installation Before you start, please install the requirements. ```bash pip install -r requirements.txt ``` We tested our result using `python 3.8` and `cuda 11.8`. We recommend you use docker. ```bash docker run --gpus all -it --rm --ipc=host superbench/dev:cuda11.8 ``` ### Generate To generate the model's responses, you can use the `generate.py` script. Please be aware that generating responses is separate from verifying their correctness. After that, we will then check for their correctness. For the generation process, we use the Vicuna-v1.1 system prompt with chain-of-thought and format instruction. We also employ a greedy decoding strategy and set the maximum sequence length to 2048. ``` "A chat between a curious user and an artificial intelligence assistant. 
The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {instruction} Give your solution in detail. In the end, write your final answer in the format of 'The answer is: <ANSWER>.'. ASSISTANT: " ``` Here is an simple example to generate using [vLLM](https://docs.vllm.ai/en/latest/). ```bash cd eval python generate.py --dataset_path dataset/gsm8k.json --model_path path/to/your/model --tensor_parallel_size 4 ``` By default the results will be output to the `eval/response`, using the prompt `eval/prompt/xwin_math.json`. If you wish to change the output path or use a different prompt ```bash python generate.py --dataset_path dataset/gsm8k.json --model_path path/to/your/model --tensor_parallel_size 4 --output_path /your/path --prompt_path /your/path ``` We provide some datasets (in `eval/dataset`): - `gsm8k.json`: GSM8K. - `math.json`: MATH. - `combination.json`: A combination of many benchmarks, can evaluate the OOD capability of the model. If you wan't to use your own datasets, please format your dataset like this. ```jsonc [ { "question": "Janet\u2019s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?", "answer": "18", "type": "GSM8K", "subtype": "", "level": 0, }, // ... more data items ] ``` ### Evaluate To verify the accuracy of the answers after generation, you can use the `check.py script. Here is an simple example ```bash cd eval python eval.py /path/to/model/response ``` The result will be saved in `eval/evaluation` If you do not want to save the results or want to change the save path ```bash python eval.py --data_path /path/to/model/response --save_path /path/to/save --save_result True ``` Once you run the script, the terminal will display the output as a table. This table will show the number of instances for each benchmark and the corresponding accuracy. Here is a hypothetical example of what the output might look like: ||Type|Subtype|Level|Correct|Incorrect|Total|Accuracy| |---|---|---|---|---|---|---|---| |0|MAWPS|addsub|0|359|33|392|0.915816| |1|MAWPS|multiarith|0|586|14|600|0.976667| |...| ## Citation Please consider citing our work if you use the data or code in this repo. ``` @software{xwin-math, title = {Xwin-Math}, author = {Xwin-Math Team}, url = {https://github.com/Xwin-LM/Xwin-LM/Xwin-Math}, version = {pre-release}, year = {2023}, month = {11}, } ``` ## Acknowledgements Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), and [vLLM](https://github.com/vllm-project/vllm). ## Paper arxiv.org/abs/2405.20335
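For readers who want to query the model programmatically rather than through `generate.py`, here is a minimal sketch using the vLLM Python API with the same Vicuna-style prompt and greedy decoding described above. The checkpoint name and the question are illustrative only.

```python
from vllm import LLM, SamplingParams

# Vicuna-v1.1 style prompt with chain-of-thought and format instruction,
# copied from the Generate section above.
PROMPT = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: {instruction} Give your solution in detail. In the end, write your final "
    "answer in the format of 'The answer is: <ANSWER>.'. ASSISTANT: "
)

question = (
    "Natalia sold clips to 48 of her friends in April, and then she sold half as many "
    "clips in May. How many clips did Natalia sell altogether in April and May?"
)

llm = LLM(model="Xwin-LM/Xwin-Math-7B-V1.0")                   # illustrative checkpoint
sampling = SamplingParams(temperature=0.0, max_tokens=2048)    # greedy decoding, 2048-token budget
outputs = llm.generate([PROMPT.format(instruction=question)], sampling)
print(outputs[0].outputs[0].text)
```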
sophosympatheia/Midnight-Rose-70B-v2.0.3
sophosympatheia
2024-03-09T14:50:35Z
765
25
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:2307.11760", "license:llama2", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-04T19:19:03Z
--- language: - en license: llama2 model-index: - name: Midnight-Rose-70B-v2.0.3 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.65 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Midnight-Rose-70B-v2.0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.5 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Midnight-Rose-70B-v2.0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 69.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Midnight-Rose-70B-v2.0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 65.27 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Midnight-Rose-70B-v2.0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Midnight-Rose-70B-v2.0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 28.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sophosympatheia/Midnight-Rose-70B-v2.0.3 name: Open LLM Leaderboard --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/X3SBrIb.png" alt="MidnightRose" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ### Overview This version of Midnight Rose has a complex family tree but I'll do my best to describe it. I will include mergekit yml files below. * midnight-rose-70b-v2.0.1 (Component 1, unreleased): A DARE TIES merge of midnight-rose-70b-v1.0 and an unreleased midnight-rose-70b-v1.4 that used the same underlying models but with different weights, and it had different LoRAs applied to it. * [wizard-tulu-dolphin-70b-v1.0](https://huggingface.co/sophosympatheia/Wizard-Tulu-Dolphin-70B-v1.0) (Component 2): This model was the result of a DARE TIES merge between [WizardLM-70B-V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) and [tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), which I then SLERP merged with a modified version of [dolphin-2.2-70b](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b). * Finally, I SLERP merged Component 1 and Component 2 above to produce this model. What I like about this version of Midnight Rose is it picked up some spicyness from Component 1 and some smarts from Component 2. This model is uncensored. 
*You are responsible for whatever you do with it.* This model was designed for roleplaying and storytelling and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas. **Update 02-28-2024** The IQ3_XXS quantized version of this model apparently scores high on [EQBench](https://eqbench.com/), beating out some laudable contenders. [See this discussion](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3/discussions/3#65de4c317590f2794cd83ac4). See the quantization section below for where to get it. ### Sampler Tips * I recommend keeping your max context to around 6144 tokens, although you can push higher if you don't mind some decrease in coherence. * I recommend using Quadratic Sampling (i.e. smoothing factor) as it's good stuff. Experiment with values between 0.2 and 0.5. * I recommend using Min-P. This model seems to work well with Min-P values in the entire range from low settings like 0.05 to high settings like 0.9 when paired with smoothing factor. Experiment to find your best setting. * You can enable dynamic temperature if you want, but that adds yet another variable to consider and I find it's unnecessary with you're already using Min-P and smoothing factor. * You don't *need* to use a high repetition penalty with this model, but it tolerates high rep penalty, so experiment to find the right value for your preferences. Experiment with any and all of the settings below! I'm not a sampler wizard, and what suits my preferences may not suit yours. If you save the below settings as a .json file, you can import them directly into Silly Tavern. ``` { "temp": 1, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.35, "rep_pen": 1.15, "rep_pen_range": 2800, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0, "presence_pen": 0, "do_sample": true, "early_stopping": false, "dynatemp": false, "min_temp": 0.8, "max_temp": 1.35, "dynatemp_exponent": 1, "smoothing_factor": 0.4, "add_bos_token": true, "truncation_length": 2048, "ban_eos_token": false, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 2, "mirostat_eta": 0.1, "guidance_scale": 1, "negative_prompt": "", "grammar_string": "", "banned_tokens": "", "ignore_eos_token_aphrodite": false, "spaces_between_special_tokens_aphrodite": true, "sampler_order": [ 6, 0, 1, 3, 4, 2, 5 ], "logit_bias": [], "n": 1, "rep_pen_size": 0, "genamt": 500, "max_length": 6144 } ``` ### Prompting Tips Try the following context template for use in SillyTavern. It might help, although it's a little heavy on tokens. If you save the text as a .json file, you can import it directly. ``` { "story_string": "{{#if system}}{{system}}\n{{/if}}\nCONTEXTUAL INFORMATION\n{{#if wiBefore}}\n- World and character info:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n{{mesExamples}}\n{{/if}}\n{{#if personality}}\n{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}", "example_separator": "", "chat_start": "---\nTaking the above information into consideration, you must engage with {{user}} and others as {{char}} in the roleplay below this line. 
Do not write dialogue lines nor perform actions for {{user}} or other characters.\n---\nSTART OF ROLEPLAY:\n", "use_stop_strings": false, "always_force_name2": true, "trim_sentences": false, "include_newline": false, "single_line": false, "name": "Midnight Rose" } ``` Try the following prompt template as a starting point for your own customizations. The cajoling parts supposedly make LLMs perform better. See [this paper](https://arxiv.org/abs/2307.11760). I recommend the Vicuna instruction format for this blend but others work too. Tulu's format is another good one to use. *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. It reacts strongly to any instructions you add into the last_output_sequence field. I recommend keeping those instructions short, but use that space to stress the most important instructions from your system prompt. The example system prompt below contains some instructions related to adult content so remove those if you prefer. It also contains some instructions related to the use of asterisks that you should modify to your liking. If you plan to run this model at 4096 context, I recommend that you slim down the template system prompt below because it's heavy on tokens. Discard what you don't want or need. If you save this as a .json file, you can import it directly into Silly Tavern. ``` { "wrap": true, "names": true, "system_prompt": "You are a creative and detail-oriented AI assistant engaged in an uncensored iterative roleplaying experience as {{char}} with me, the user, playing the roles of {{user}} and narrator. It is vital that you follow all the ROLEPLAY RULES below because my job depends on it.\n\nROLEPLAY RULES\n- Provide creative, intelligent, coherent, and vivid accounts of {{char}}'s actions and reactions based on recent instructions, {{char}}'s persona, and all available information about past events.\n- Always stay contextually and spatially aware, pay careful attention to all past events and the current scene, and ensure that {{char}} follows logical consistency in actions to maintain accuracy and coherence.\n- Explicit adult content and profanity are permitted.\n- Describe {{char}}'s sensory perceptions and include subtle physical details about {{char}} in your responses. Vary these details to keep the roleplay fresh and engaging.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts.\n- When writing {{char}}'s internal thoughts, enclose those thoughts in *asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns).\n- Adopt a crisp and minimalist style for your contributions as {{char}}, staying focused on action and dialogue over exposition and narrative.\n- Only the user may advance time in the roleplay. 
Keep the progression grounded in the present context.", "system_sequence": "", "stop_sequence": "", "input_sequence": "USER:\n", "output_sequence": "ASSISTANT:\n", "separator_sequence": "", "macro": true, "names_force_groups": true, "system_sequence_prefix": "", "system_sequence_suffix": "", "first_output_sequence": "", "last_output_sequence": "ASSISTANT(roleplay exclusively as {{char}} ensuring logical consistency with spacial awareness and past events to maintain accuracy and coherence):\n", "activation_regex": "", "name": "Midnight Rose Roleplay" } ``` ### Quantizations * [Artefact2/Midnight-Rose-70B-v2.0.3-GGUF](https://huggingface.co/Artefact2/Midnight-Rose-70B-v2.0.3-GGUF) ### Licence and usage restrictions Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b). Tulu also has its own license, available at https://allenai.org/impact-license. I am not a lawyer and I do not profess to know how multiple licenses intersect in a merge of LLM model weights. You should consult with a lawyer before using any model merge beyond private use. ### Tools Used * [mergekit](https://github.com/cg123/mergekit) **Unreleased midnight-rose-70b-v1.4** ``` models: - model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf # no parameters necessary for base model - model: /home/llm/mergequant/models/BASE/allenai_tulu-2-dpo-70b # primary parameters: density: 0.3 weight: [1.0, 0.8, 1.0] - model: /home/llm/mergequant/models/BASE/lizpreciatior_lzlv_70b_fp16_hf # secondary parameters: density: 0.3 weight: [0.7, 0.8, 0.7] - model: /home/llm/mergequant/models/BASE/dreamgen_opus-v0.5-70b # supporting parameters: density: 0.3 weight: [0.5, 0.7, 0.5] merge_method: dare_ties base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf parameters: normalize: true int8_mask: true dtype: float16 ``` **Component 1** ``` models: - model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf # no parameters necessary for base model - model: /home/llm/mergequant/models/midnight-rose-70b-v1.0 # primary parameters: density: 0.35 weight: 1.0 - model: /home/llm/mergequant/models/midnight-rose-70b-v1.4-lora_1 # secondary parameters: density: 0.35 weight: [0.7, 1.0, 1.0, 0.5, 0.1] merge_method: ties base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf parameters: normalize: true int8_mask: true dtype: float16 ``` **wizard-tulu-70b merge** ``` models: - model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf # no parameters necessary for base model - model: /home/llm/mergequant/models/BASE/allenai_tulu-2-dpo-70b parameters: density: 0.35 weight: 0.75 - model: /home/llm/mergequant/models/BASE/WizardLM_WizardLM-70B-V1.0 parameters: density: 0.35 weight: 0.5 merge_method: dare_ties base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf parameters: normalize: true int8_mask: true dtype: float16 tokenzer_source: union ``` **Component 2 - wizard-tulu-dolphin-70b-v1.0** ``` models: - model: /home/llm/mergequant/models/wizard-tulu-70b-v1.0 - model: /home/llm/mergequant/models/BASE/ehartford_dolphin-2.2-70b-32000vocab merge_method: slerp base_model: /home/llm/mergequant/models/wizard-tulu-70b-v1.0 parameters: t: - value: 0.5 dtype: float16 ``` **Final merge** ``` models: - model: /home/llm/mergequant/models/midnight-rose-70b-v2.0.1 - model: /home/llm/mergequant/models/wizard-tulu-dolphin-70b-v1.0-slerp merge_method: slerp base_model: 
/home/llm/mergequant/models/wizard-tulu-dolphin-70b-v1.0-slerp parameters: t: - value: [0.4, 0.6, 0.5] dtype: float16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sophosympatheia__Midnight-Rose-70B-v2.0.3) | Metric |Value| |---------------------------------|----:| |Avg. |67.11| |AI2 Reasoning Challenge (25-Shot)|70.65| |HellaSwag (10-Shot) |87.50| |MMLU (5-Shot) |69.64| |TruthfulQA (0-shot) |65.27| |Winogrande (5-shot) |81.22| |GSM8k (5-shot) |28.35|
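Outside SillyTavern, only some of the recommended samplers map directly onto `transformers` generation arguments (smoothing factor and dynamic temperature are frontend-side features). The sketch below is a minimal starting point under those constraints; the prompt is illustrative, 4-bit loading is optional, and `min_p` requires a recent `transformers` release.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sophosympatheia/Midnight-Rose-70B-v2.0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_4bit=True)

# Vicuna-style instruction format, as recommended in the prompting tips above.
prompt = (
    "USER:\nWrite the opening scene of a gothic mystery set in a moonlit rose garden.\n"
    "ASSISTANT:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,
    min_p=0.35,               # from the sampler tips; needs a recent transformers version
    repetition_penalty=1.15,
    max_new_tokens=500,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```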
RESMPDEV/Qwen1.5-Wukong-0.5B
RESMPDEV
2024-03-18T02:05:35Z
765
4
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "en", "zh", "dataset:teknium/OpenHermes-2.5", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-19T05:10:09Z
--- language: - en - zh license: other datasets: - teknium/OpenHermes-2.5 license_name: tongyi-qianwen-research license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE pipeline_tag: text-generation model-index: - name: Qwen1.5-Wukong-0.5B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 31.74 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Qwen1.5-Wukong-0.5B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 47.78 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Qwen1.5-Wukong-0.5B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 38.44 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Qwen1.5-Wukong-0.5B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 38.92 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Qwen1.5-Wukong-0.5B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 56.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Qwen1.5-Wukong-0.5B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 15.54 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Qwen1.5-Wukong-0.5B name: Open LLM Leaderboard --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/655dc641accde1bbc8b41aec/xOe1Nb3S9Nb53us7_Ja3s.jpeg) # Qwen1.5-Wukong-0.5B Join Our Discord! https://discord.gg/cognitivecomputations Qwen1.5-Wukong-0.5B is a dealigned chat finetune of the original fantastic Qwen1.5-0.5B model by the Qwen team. This model was trained on the teknium OpenHeremes-2.5 dataset and some supplementary datasets from Cognitive Computations https://erichartford.com/dolphin 🐬 This model was trained for 3 epochs over 3 3090's. # Example Outputs TBD # Orignal Model Card Below # Qwen1.5-0.5B ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include: * 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B; * Significant performance improvement in Chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes * No need of `trust_remote_code`. 
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA and the mixture of SWA and full attention. ## Requirements The code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2'. ``` ## Usage We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model. ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_RESMPDEV__Qwen1.5-Wukong-0.5B) | Metric |Value| |---------------------------------|----:| |Avg. |38.15| |AI2 Reasoning Challenge (25-Shot)|31.74| |HellaSwag (10-Shot) |47.78| |MMLU (5-Shot) |38.44| |TruthfulQA (0-shot) |38.92| |Winogrande (5-shot) |56.51| |GSM8k (5-shot) |15.54|
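Because Wukong is a chat finetune rather than the base model, a reasonable starting point is the standard `transformers` chat-template flow below (with `transformers>=4.37.0`, as noted above). That the finetune expects the ChatML template shipped with the Qwen1.5 tokenizer is an assumption; adjust the prompt format if outputs look malformed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RESMPDEV/Qwen1.5-Wukong-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```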
Kukedlc/Jupiter-k-7B-slerp
Kukedlc
2024-05-25T17:09:40Z
765
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-16T03:24:53Z
--- license: apache-2.0 --- # Jupiter-k-7B-slerp ( My Favorite model! ) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d71ab4089bc502ceb44d29/_oA7svKBKwqpaKVf_MFGc.png) Jupiter-k-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Kukedlc/NeuralContamination-7B-ties](https://huggingface.co/Kukedlc/NeuralContamination-7B-ties) * [Kukedlc/NeuralTopBench-7B-ties](https://huggingface.co/Kukedlc/NeuralTopBench-7B-ties) * [Gille/StrangeMerges_32-7B-slerp](https://huggingface.co/Gille/StrangeMerges_32-7B-slerp) ## 🧩 Configuration ```yaml models: - model: Kukedlc/NeuralContamination-7B-ties parameters: density: [1, 0.7, 0.1] # density gradient weight: 1.0 - model: Kukedlc/NeuralTopBench-7B-ties parameters: density: 0.5 weight: [0, 0.3, 0.7, 1] # weight gradient - model: Gille/StrangeMerges_32-7B-slerp parameters: density: 0.33 weight: - filter: mlp value: 0.5 - value: 0 merge_method: ties base_model: Kukedlc/NeuralMaxime-7B-slerp parameters: normalize: true int8_mask: true dtype: bfloat16 ``` ## 💻 Usage - Stream ```python # Requirements !pip install -qU transformers accelerate bitsandbytes # Imports & settings from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer import warnings import os os.environ["TOKENIZERS_PARALLELISM"] = "false" warnings.filterwarnings('ignore') # Model & Tokenizer MODEL_NAME = "Kukedlc/Jupiter-k-7B-slerp" model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map='cuda:1', load_in_4bit=True) tok = AutoTokenizer.from_pretrained(MODEL_NAME) # Inference prompt = "I want you to generate a theory that unites quantum mechanics with the theory of relativity and cosmic consciousness" inputs = tok([prompt], return_tensors="pt").to('cuda') streamer = TextStreamer(tok) # Despite returning the usual output, the streamer will also print the generated text to stdout. _ = model.generate(**inputs, streamer=streamer, max_new_tokens=512, do_sample=True, num_beams=1, top_p=0.9, temperature=0.7) ``` ## 💻 Usage - Clasic ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "Kukedlc/Jupiter-k-7B-slerp" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Kukedlc/LLama-3-8b-Python
Kukedlc
2024-05-04T15:41:10Z
765
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-04T15:34:56Z
--- license: other --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d71ab4089bc502ceb44d29/KNU2JjsNRXyprTdtU4kWx.png)
nbeerbower/llama-3-sauce-v2-8B
nbeerbower
2024-05-23T09:44:22Z
765
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "experimental", "conversational", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:flammenai/FlameMix-DPO-v1", "base_model:nbeerbower/llama-3-bophades-v1-8B", "license:llama3", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-13T03:29:52Z
--- license: llama3 library_name: transformers tags: - experimental base_model: - nbeerbower/llama-3-bophades-v1-8B datasets: - jondurbin/gutenberg-dpo-v0.1 - jondurbin/truthy-dpo-v0.1 - flammenai/FlameMix-DPO-v1 model-index: - name: llama-3-sauce-v2-8B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.61 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.11 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 67.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 56.39 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.72 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 72.48 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B name: Open LLM Leaderboard --- # llama-3-sauce-v2-8B This model is based on Llama-3-8b, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE) This is a bad finetune on nbeerbower/llama-3-spicy-abliterated-stella-8B using various DPO sets. # Chat Format Please use the ChatML format or you may experience poor results. ``` <|im_start|>system {System Prompt Here!}<|im_end|> <|im_start|>assistant {Message from AI}<|im_end|> <|im_start|>user {Message from User}<|im_end|> ``` # Method Finetuned using an A100 on Google Colab. 
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne) ### Configuration Dataset preparation: ```python def chatml_format(example): # Format system system = "" if example.get('system') and len(example['system']) > 0: systemMessage = example['system'] system = "<|im_start|>system\n" + systemMessage + "<|im_end|>\n" # Format instruction prompt = "<|im_start|>user\n" + example['prompt'] + "<|im_end|>\n<|im_start|>assistant\n" # Format chosen answer chosen = example['chosen'] + "<|im_end|>\n" # Format rejected answer rejected = example['rejected'] + "<|im_end|>\n" return { "prompt": system + prompt, "chosen": chosen, "rejected": rejected, } # Array of datasets to concat ds = [ "jondurbin/truthy-dpo-v0.1", "jondurbin/gutenberg-dpo-v0.1", "flammenai/FlameMix-DPO-v1" ] # load_dataset and combine all loaded_datasets = [load_dataset(dataset_name, split='train') for dataset_name in ds] dataset = concatenate_datasets(loaded_datasets) # Save columns original_columns = dataset.column_names # Tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.pad_token = tokenizer.eos_token tokenizer.padding_side = "left" # Format dataset dataset = dataset.map( chatml_format, remove_columns=original_columns ) ``` LoRA, model, and training settings: ```python # LoRA configuration peft_config = LoraConfig( r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] ) # Model to fine-tune model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) model.config.use_cache = False # Reference model ref_model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) # Training arguments training_args = TrainingArguments( per_device_train_batch_size=1, gradient_accumulation_steps=1, gradient_checkpointing=True, learning_rate=3e-5, lr_scheduler_type="cosine", max_steps=4000, save_strategy="no", logging_steps=1, output_dir=new_model, optim="paged_adamw_32bit", warmup_steps=100, bf16=True, report_to="wandb", ) # Create DPO trainer dpo_trainer = DPOTrainer( model, ref_model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, peft_config=peft_config, beta=0.1, force_use_ref_model=True ) # Fine-tune model with DPO dpo_trainer.train() ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__llama-3-sauce-v2-8B) | Metric |Value| |---------------------------------|----:| |Avg. |70.38| |AI2 Reasoning Challenge (25-Shot)|65.61| |HellaSwag (10-Shot) |83.11| |MMLU (5-Shot) |67.98| |TruthfulQA (0-shot) |56.39| |Winogrande (5-shot) |76.72| |GSM8k (5-shot) |72.48|
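To complement the training recipe above, here is a minimal inference sketch that builds the required ChatML prompt by hand. The system message and question are placeholders, and the sampling settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/llama-3-sauce-v2-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# ChatML prompt, following the Chat Format section above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarize the plot of Frankenstein in three sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```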
fblgit/UNA-ThePitbull-21.4B-v2
fblgit
2024-05-31T14:15:28Z
765
14
transformers
[ "transformers", "safetensors", "llama", "text-generation", "UNA", "juanako", "conversational", "dataset:jondurbin/py-dpo-v0.1", "dataset:Replete-AI/code_bagel_hermes-2.5", "dataset:mlabonne/orpo-dpo-mix-40k", "license:afl-3.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-28T10:49:15Z
--- license: afl-3.0 library_name: transformers tags: - UNA - juanako datasets: - jondurbin/py-dpo-v0.1 - Replete-AI/code_bagel_hermes-2.5 - mlabonne/orpo-dpo-mix-40k model-index: - name: UNA-ThePitbull-21.4B-v2 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 77.73 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNA-ThePitbull-21.4B-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 91.79 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNA-ThePitbull-21.4B-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 68.25 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNA-ThePitbull-21.4B-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 78.24 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNA-ThePitbull-21.4B-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 87.37 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNA-ThePitbull-21.4B-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNA-ThePitbull-21.4B-v2 name: Open LLM Leaderboard --- # UNA-ThePitbull 21.4B v2 Introducing the best LLM in the industry. Nearly as good as a 70B, just a 21.4B based on saltlux/luxia-21.4b-alignment-v1.0 ![UNA - ThePitbull 21.4B v2](https://huggingface.co/fblgit/UNA-ThePitbull-21.4B-v2/resolve/main/DE-UNA-ThePitbull-21.4B-v2.png) This model has not been poisoned to score high and be useless. We release him becaues its the real deal of EQ & IQ all together in a crazy powerful smart and conversational model. Quant Versions available at [bartowski/UNA-ThePitbull-21.4B-v2-GGUF](https://huggingface.co/bartowski/UNA-ThePitbull-21.4B-v2-GGUF) ## Difference V1 vs V2 On V2 we implemented a different UNA strategy and covered partially the MLP's and Attention Layers. We also performed further SFT over V1 and further DPO over V1 and we'll release some of those soon as well. ### Changes 1. SFT over V1 with `Replete-AI/code_bagel_hermes-2.5` at 1.0e-4 till 5.0e-5 for 1 epoch 2. 
DPO with: 1.0e-4 to min_lr 5.0e-5 for 1 epoch * `mlabonne/orpo-dpo-mix-40k` * `jondurbin/py-dpo-v0.1` # Evaluations ## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__UNA-ThePitbull-21.4B-v2) | Metric |Value| |---------------------------------|----:| |Avg. |77.82| |AI2 Reasoning Challenge (25-Shot)|77.73| |HellaSwag (10-Shot) |91.79| |MMLU (5-Shot) |68.25| |TruthfulQA (0-shot) |78.24| |Winogrande (5-shot) |87.37| |GSM8k (5-shot) |63.53| Can only be compared with its non-una base model: the original luxia-21.4b and ThePitbull-v1 ## UNA v2 (VLLM) Evaluations: ``` vllm (pretrained=/data/tools/mergekit/una-thepitbull-v5,dtype=bfloat16,gpu_memory_utilization=0.8,max_model_len=2048,data_parallel_size=2,tensor_parallel_size=4), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 8 | Tasks |Version| Filter |n-shot| Metric |Value | |Stderr| |--------------|------:|----------------|-----:|-----------|-----:|---|-----:| |gsm8k | 3|strict-match | 5|exact_match|0.7695|± |0.0116|+ | | |flexible-extract| 5|exact_match|0.7695|± |0.0116|+ |hellaswag | 1|none | 10|acc |0.8110|± |0.0039| | | |none | 10|acc_norm |0.9169|± |0.0028|+ |winogrande | 1|none | 5|acc |0.8777|± |0.0092|+ |mmlu |N/A |none | 0|acc |0.6427|± |0.0038|- |arc_challenge | 1|none | 25|acc |0.7713|± |0.0123| | | |none | 25|acc_norm |0.7875|± |0.0120|+ |truthfulqa_mc2| 2|none | 0|acc |0.7824|± |0.0135|- |mathqa | 1|none | 0|acc |0.4037|± | 0.009| | | |none | 0|acc_norm |0.4034|± | 0.009|+ |pubmedqa | 1|none | 0|acc |0.7260|± | 0.020|+ |boolq | 2|none | 0|acc |0.8602|± |0.0061|+ ``` ## UNA v1 (VLLM) Evaluations ``` | Tasks |Version| Filter |n-shot| Metric |Value | |Stderr| |--------------|------:|----------------|-----:|-----------|-----:|---|-----:| |gsm8k | 3|strict-match | 5|exact_match|0.7566|± |0.0118| | | |flexible-extract| 5|exact_match|0.7582|± |0.0118| |hellaswag | 1|none | 10|acc |0.8168|± |0.0039| | | |none | 10|acc_norm |0.9188|± |0.0027| |winogrande | 1|none | 5|acc |0.8635|± |0.0097| |mmlu | N/A|none | 0|acc |0.6444|± |0.0038| |arc_challenge | 1|none | 25|acc |0.7747|± |0.0122| | | |none | 25|acc_norm |0.7850|± |0.0120| |truthfulqa_mc2| 2|none | 0|acc |0.7902|± |0.0134| |mathqa | 1|none | 0|acc |0.4030|± | 0.009| | | |none | 0|acc_norm |0.4034|± | 0.009| |pubmedqa | 1|none | 0|acc |0.6860|± |0.0208| |boolq | 2|none | 0|acc |0.8401|± |0.0064| ``` ## Original (VLLM) Evaluations ``` | Tasks |Version| Filter |n-shot| Metric |Value | |Stderr| |--------------|------:|----------------|-----:|-----------|-----:|---|-----:| |gsm8k | 3|strict-match | 5|exact_match|0.7528|± |0.0119| | | |flexible-extract| 5|exact_match|0.7521|± |0.0119| |hellaswag | 1|none | 10|acc |0.8117|± |0.0039| | | |none | 10|acc_norm |0.9167|± |0.0028| |winogrande | 1|none | 5|acc |0.8682|± |0.0095| |mmlu | N/A|none | 0|acc |0.6448|± |0.0038| |arc_challenge | 1|none | 25|acc |0.7688|± |0.0123| | | |none | 25|acc_norm |0.7730|± |0.0122| |truthfulqa_mc2| 2|none | 0|acc |0.7895|± |0.0133| |mathqa | 1|none | 0|acc |0.4000|± | 0.009| | | |none | 0|acc_norm |0.4003|± | 0.009| |pubmedqa | 1|none | 0|acc |0.6680|± |0.0211| |boolq | 2|none | 0|acc |0.8346|± |0.0065| ``` ## Citations * mlabonne * jondurbin & Replete-AI * bartowski * saltlux If you use UNA models dont forget to cite: ``` @misc{unathepitbull21b, title={ThePitbull: Uniform Neural Alignment}, author={Xavier Murias}, year={2024}, publisher = 
{Juanako.AI}, journal = {HuggingFace repository}, howpublished = {\url{https://huggingface.co/fblgit/UNA-ThePitbull-21.4-v1}}, } ```
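For readers who want to reproduce the VLLM-backed numbers above, here is a rough sketch of the equivalent call through the lm-evaluation-harness Python API. It assumes lm-eval >= 0.4 with the vLLM extra installed, covers only GSM8K for brevity, and infers the arguments from the configuration string shown in the evaluation logs above.

```python
import lm_eval

# Mirrors the vllm configuration string from the evaluation section above.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args=(
        "pretrained=fblgit/UNA-ThePitbull-21.4B-v2,"
        "dtype=bfloat16,gpu_memory_utilization=0.8,max_model_len=2048,tensor_parallel_size=4"
    ),
    tasks=["gsm8k"],
    num_fewshot=5,
    batch_size=8,
)
print(results["results"]["gsm8k"])
```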
DevQuasar/coma-7B-v0.1-GGUF
DevQuasar
2024-06-11T21:22:32Z
765
0
null
[ "gguf", "code", "text-generation", "license:llama2", "region:us" ]
text-generation
2024-06-11T04:12:48Z
--- license: llama2 pipeline_tag: text-generation tags: - code --- Quantized version of [DevQuasar/coma-7B-v0.1](https://huggingface.co/DevQuasar/coma-7B-v0.1)
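Here is a minimal sketch of running one of these quantized files locally with llama-cpp-python. The `.gguf` filename below is hypothetical; check the repository's file list and download the quantization you want first.

```python
from llama_cpp import Llama

# Path to a quantization downloaded from this repo; the filename is a placeholder.
llm = Llama(model_path="./coma-7B-v0.1.Q4_K_M.gguf", n_ctx=2048)

out = llm("def reverse_string(s):", max_tokens=128)
print(out["choices"][0]["text"])
```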
UniOb/CodeLlama-13b-hf-Q4_K_M-GGUF
UniOb
2024-06-29T22:15:57Z
765
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-2", "llama-cpp", "gguf-my-repo", "text-generation", "code", "base_model:meta-llama/CodeLlama-13b-hf", "license:llama2", "region:us" ]
text-generation
2024-06-29T22:15:20Z
--- base_model: meta-llama/CodeLlama-13b-hf language: - code license: llama2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 - llama-cpp - gguf-my-repo extra_gated_heading: You need to share contact information with Meta to access this model extra_gated_prompt: "### LLAMA 2 COMMUNITY LICENSE AGREEMENT\n\"Agreement\" means\ \ the terms and conditions for use, reproduction, distribution and modification\ \ of the Llama Materials set forth herein. \"Documentation\" means the specifications,\ \ manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/.\ \ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity's behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf. \"Llama 2\"\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/. \"Llama\ \ Materials\" means, collectively, Meta's proprietary Llama 2 and documentation\ \ (and any portion thereof) made available under this Agreement. \"Meta\" or \"\ we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \ \ By clicking \"I Accept\" below or by using or distributing any portion or element\ \ of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights\ \ and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide,\ \ non- transferable and royalty-free limited license under Meta's intellectual property\ \ or other rights owned by Meta embodied in the Llama Materials to use, reproduce,\ \ distribute, copy, create derivative works of, and make modifications to the Llama\ \ Materials. \n \nb. Redistribution and Use. i. If you distribute or make\ \ the Llama Materials, or any derivative works thereof, available to a third party,\ \ you shall provide a copy of this Agreement to such third party. ii. If you\ \ receive Llama Materials, or any derivative works thereof, from a Licensee as\ \ part of an integrated end user product, then Section 2 of this Agreement will\ \ not apply to you. iii. You must retain in all copies of the Llama Materials that\ \ you distribute the following attribution notice within a \"Notice\" text file\ \ distributed as a part of such copies: \"Llama 2 is licensed under the LLAMA 2\ \ Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.\"\ \ iv. Your use of the Llama Materials must comply with applicable laws and regulations\ \ (including trade compliance laws and regulations) and adhere to the Acceptable\ \ Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy),\ \ which is hereby incorporated by reference into this Agreement. v. 
You will not\ \ use the Llama Materials or any output or results of the Llama Materials to improve\ \ any other large language model (excluding Llama 2 or derivative works thereof).\ \ 2. Additional Commercial Terms. If, on the Llama 2 version release date, the\ \ monthly active users of the products or services made available by or for Licensee,\ \ or Licensee's affiliates, is greater than 700 million monthly active users in\ \ the preceding calendar month, you must request a license from Meta, which Meta\ \ may grant to you in its sole discretion, and you are not authorized to exercise\ \ any of the rights under this Agreement unless or until Meta otherwise expressly\ \ grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE\ \ LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON\ \ AN \"AS IS\" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED,\ \ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\ \ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\ \ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\ \ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND\ \ RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE\ \ LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE,\ \ PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST\ \ PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE\ \ DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY\ \ OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are\ \ granted under this Agreement, and in connection with the Llama Materials, neither\ \ Meta nor Licensee may use any name or mark owned by or associated with the other\ \ or any of its affiliates, except as required for reasonable and customary use\ \ in describing and redistributing the Llama Materials. b. Subject to Meta's ownership\ \ of Llama Materials and derivatives made by or for Meta, with respect to any derivative\ \ works and modifications of the Llama Materials that are made by you, as between\ \ you and Meta, you are and will be the owner of such derivative works and modifications.\ \ c. If you institute litigation or other proceedings against Meta or any entity\ \ (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama\ \ Materials or Llama 2 outputs or results, or any portion of any of the foregoing,\ \ constitutes infringement of intellectual property or other rights owned or licensable\ \ by you, then any licenses granted to you under this Agreement shall terminate\ \ as of the date such litigation or claim is filed or instituted. You will indemnify\ \ and hold harmless Meta from and against any claim by any third party arising\ \ out of or related to your use or distribution of the Llama Materials. 6. Term\ \ and Termination. The term of this Agreement will commence upon your acceptance\ \ of this Agreement or access to the Llama Materials and will continue in full\ \ force and effect until terminated in accordance with the terms and conditions\ \ herein. Meta may terminate this Agreement if you are in breach of any term or\ \ condition of this Agreement. Upon termination of this Agreement, you shall delete\ \ and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the \ \ termination of this Agreement. 7. 
Governing Law and Jurisdiction. This Agreement\ \ will be governed and construed under the laws of the State of California without\ \ regard to choice of law principles, and the UN Convention on Contracts for the\ \ International Sale of Goods does not apply to this Agreement. The courts of California\ \ shall have exclusive jurisdiction of any dispute arising out of this Agreement.\ \ USE POLICY ### Llama 2 Acceptable Use Policy Meta is committed to promoting safe\ \ and fair use of its tools and features, including Llama 2. If you access or use\ \ Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy\ \ of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).\ \ #### Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You\ \ agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law\ \ or others’ rights, including to:\n 1. Engage in, promote, generate, contribute\ \ to, encourage, plan, incite, or further illegal or unlawful activity or content,\ \ such as: \n 1. Violence or terrorism \n 2. Exploitation or harm to children,\ \ including the solicitation, creation, acquisition, or dissemination of child exploitative\ \ content or failure to report Child Sexual Abuse Material\n 3. Human trafficking,\ \ exploitation, and sexual violence\n 4. The illegal distribution of information\ \ or materials to minors, including obscene materials, or failure to employ legally\ \ required age-gating in connection with such information or materials.\n 5.\ \ Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote,\ \ incite, or facilitate the harassment, abuse, threatening, or bullying of individuals\ \ or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination\ \ or other unlawful or harmful conduct in the provision of employment, employment\ \ benefits, credit, housing, other economic benefits, or other essential goods and\ \ services\n 4. Engage in the unauthorized or unlicensed practice of any profession\ \ including, but not limited to, financial, legal, medical/health, or related professional\ \ practices \n 5. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 6. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama 2 Materials\n 7. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system \n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Llama 2 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal\ \ drugs and regulated/controlled substances\n 4. 
Operation of critical infrastructure,\ \ transportation technologies, or heavy machinery\n 5. Self-harm or harm to others,\ \ including suicide, cutting, and eating disorders\n 6. Any content intended to\ \ incite or promote violence, abuse, or any infliction of bodily harm to an individual\n\ 3. Intentionally deceive or mislead others, including use of Llama 2 related to\ \ the following:\n 1. Generating, promoting, or furthering fraud or the creation\ \ or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory\ \ content, including the creation of defamatory statements, images, or other content\n\ \ 3. Generating, promoting, or further distributing spam\n 4. Impersonating another\ \ individual without consent, authorization, or legal right\n 5. Representing that\ \ the use of Llama 2 or outputs are human-generated\n 6. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagement \n 4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system \nPlease report any violation of this Policy, software “bug,”\ \ or other problems that could lead to a violation of this Policy through one of\ \ the following means: * Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)\ \ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\ \ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\ \ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama:\ \ [[email protected]](mailto:[email protected])" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # UniOb/CodeLlama-13b-hf-Q4_K_M-GGUF This model was converted to GGUF format from [`meta-llama/CodeLlama-13b-hf`](https://huggingface.co/meta-llama/CodeLlama-13b-hf) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/CodeLlama-13b-hf) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo UniOb/CodeLlama-13b-hf-Q4_K_M-GGUF --hf-file codellama-13b-hf-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo UniOb/CodeLlama-13b-hf-Q4_K_M-GGUF --hf-file codellama-13b-hf-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. 
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, LLAMA_CUDA=1 for NVIDIA GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo UniOb/CodeLlama-13b-hf-Q4_K_M-GGUF --hf-file codellama-13b-hf-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo UniOb/CodeLlama-13b-hf-Q4_K_M-GGUF --hf-file codellama-13b-hf-q4_k_m.gguf -c 2048
```
yunconglong/13B_MATH_DPO
yunconglong
2024-01-28T11:51:55Z
764
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "DPO", "RL-TUNED", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-28T11:41:03Z
---
license: other
tags:
- moe
- DPO
- RL-TUNED
---

* [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer) with the dataset kyujinpy/orca_math_dpo, used to improve yunconglong/MoE_13B_DPO.

```
DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
```
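For reference, here is a minimal sketch of what that DPO pass could look like with TRL. It assumes an older TRL release where `DPOTrainer` accepts `tokenizer` and `beta` directly (newer releases move these into `DPOConfig`), assumes the dataset exposes `prompt`/`chosen`/`rejected` columns, and uses illustrative hyperparameters.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "yunconglong/MoE_13B_DPO"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
ref_model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Preference data; columns are assumed to be prompt / chosen / rejected.
dataset = load_dataset("kyujinpy/orca_math_dpo", split="train")

trainer = DPOTrainer(
    model,
    ref_model,
    args=TrainingArguments(
        output_dir="13B_MATH_DPO",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-6,
        bf16=True,
    ),
    train_dataset=dataset,
    tokenizer=tokenizer,
    beta=0.1,
)
trainer.train()
```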
AbacusResearch/Jallabi-34B
AbacusResearch
2024-03-02T20:31:50Z
764
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "LlamaForCausalLM", "LLM", "conversational", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-01T15:53:25Z
--- license: apache-2.0 tags: - llama - LlamaForCausalLM - LLM model-index: - name: Jallabi-34B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 66.04 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/Jallabi-34B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.81 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/Jallabi-34B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 76.4 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/Jallabi-34B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 51.46 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/Jallabi-34B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/Jallabi-34B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 65.2 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/Jallabi-34B name: Open LLM Leaderboard --- These are llama only weights of https://huggingface.co/liuhaotian/llava-v1.6-34b . The Clip encoder part is removed and and this model is llama weights only that can be loaded using LlamaForCausalLM. Which indirectly is a https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B licence. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AbacusResearch__Jallabi-34B) | Metric |Value| |---------------------------------|----:| |Avg. |70.73| |AI2 Reasoning Challenge (25-Shot)|66.04| |HellaSwag (10-Shot) |83.81| |MMLU (5-Shot) |76.40| |TruthfulQA (0-shot) |51.46| |Winogrande (5-shot) |81.45| |GSM8k (5-shot) |65.20|
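Since these are plain Llama weights, loading follows the standard `transformers` path. A minimal sketch (the prompt is illustrative; this card does not specify a chat template):

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

model_id = "AbacusResearch/Jallabi-34B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The three laws of thermodynamics are", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```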
emozilla/llama2-1.2b-init
emozilla
2024-05-30T05:43:28Z
764
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-30T05:42:19Z
Entry not found
histai/hibou-b
histai
2024-06-10T08:28:07Z
764
6
transformers
[ "transformers", "safetensors", "dinov2", "feature-extraction", "custom_code", "arxiv:2406.05074", "license:apache-2.0", "region:us" ]
feature-extraction
2024-06-06T15:00:53Z
--- license: apache-2.0 --- Hibou-B is a foundational vision transformer for digital pathology, pretrained on a private dataset using the DINOv2 framework. For more information and usage examples, visit [https://github.com/HistAI/hibou](https://github.com/HistAI/hibou) and read the [paper](https://arxiv.org/abs/2406.05074). Basic usage: ```python from transformers import AutoImageProcessor, AutoModel processor = AutoImageProcessor.from_pretrained("histai/hibou-b", trust_remote_code=True) model = AutoModel.from_pretrained("histai/hibou-b", trust_remote_code=True) ``` We use a customized implementation of the DINOv2 architecture from the transformers library to add support for registers, which requires the `trust_remote_code=True` flag.
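As a follow-up to the loading snippet above, the sketch below runs the model on a single image to obtain embeddings. The output field names follow the standard transformers DINOv2 interface and are assumptions here; see the GitHub repository for the authors' own examples.

```python
# Illustrative feature-extraction sketch for hibou-b; output attribute names are assumed
# to match the standard transformers DINOv2 interface.
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("histai/hibou-b", trust_remote_code=True)
model = AutoModel.from_pretrained("histai/hibou-b", trust_remote_code=True)

image = Image.open("patch.png")            # placeholder path to a pathology image patch
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

embedding = outputs.pooler_output          # (1, hidden_size) image-level embedding
tokens = outputs.last_hidden_state         # (1, num_tokens, hidden_size) token features
print(embedding.shape, tokens.shape)
```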
booksouls/fasttext-goodreads-vectors
booksouls
2024-06-23T01:15:21Z
764
0
fasttext
[ "fasttext", "feature-extraction", "en", "dataset:booksouls/goodreads-book-descriptions", "region:us" ]
feature-extraction
2024-06-22T22:35:17Z
--- datasets: - booksouls/goodreads-book-descriptions language: - en library_name: fasttext pipeline_tag: feature-extraction ---
digit82/kogpt2-summarization
digit82
2021-09-22T14:45:06Z
763
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
Entry not found
UBC-NLP/Jasmine-350M
UBC-NLP
2024-05-01T16:25:25Z
763
3
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-07-14T21:59:32Z
--- widget: - text: "أشاد حسين عبد الغني، نجم المنتخب السعودي السابق، بتعاقد الاتحاد" - text: "قال الرئيس الأمريكي، جو بايدن، عند عودته إلى البيت الأبيض، مساء " - text: "قال ولي العهد السعودي، الأمير محمد بن سلمان، متحدثًا في قمة أمريكية عربية في جدة، إن الجهود الدولية المشتركة ضرورية لتعافي الاقتصاد العالمي وأن" - text: "وقوله : بيمينه ، لدفع كمال المهر حكاه الرافعي عن البغوي وأقره" - text: "يفخر عنترة العبسي بنفسه وقومه ويضمّن العديد من الحكم في القصيدة الآتية: \n لا يحمل الحقد من تعلو به الرتب ولا ينال العلى من طبعه الغضب" - text: "يفخر عنترة العبسي بنفسه وقومه ويضمّن العديد من الحكم في القصيدة الآتية: \n لا يحمل الحقد من تعلو به الرتب ولا ينال العلى من طبعه الغضب" - text: "ومن أشعاره الفصيحة ما يأتي:\nدع الأيام تفعل ما تشاء ** وطب نفسا إذا حكم القضاء\nولا تجزع لنازلة الليالي ** فما لحوادث الدنيا بقاء\nوكن رجلا على الأهوال جلدا ** وشيمتك" --- # Jasmine-350M # JASMINE: Arabic GPT Models for Few-Shot Learning This is the repository accompanying our EMNLP2023 paper [JASMINE: Arabic GPT Models for Few-Shot Learning](https://aclanthology.org/2023.emnlp-main.1040/). JASMINE is a suite of powerful Arabic autoregressive Transformer language models ranging in size between 300 million-6.7 billion parameters pretrained on a large and diverse dataset ( 235 GB of text). # BibTex If you use Jasmine models for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated): ```bibtex @inproceedings{billah-nagoudi-etal-2023-jasmine, title = "{JASMINE}: {A}rabic {GPT} Models for Few-Shot Learning", author = "Billah Nagoudi, El Moatez and Abdul-Mageed, Muhammad and Elmadany, AbdelRahim and Inciarte, Alcides and Islam Khondaker, Md Tawkat", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.1040", doi = "10.18653/v1/2023.emnlp-main.1040", pages = "16721--16744", } ``` ## Acknowledgments We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
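A minimal generation sketch with transformers, using one of the widget prompts above; the sampling settings are illustrative assumptions rather than the authors' recommended values.

```python
# Illustrative Arabic text-generation sketch for Jasmine-350M; sampling settings are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="UBC-NLP/Jasmine-350M")
prompt = "قال الرئيس الأمريكي، جو بايدن، عند عودته إلى البيت الأبيض، مساء "
outputs = generator(prompt, max_new_tokens=64, do_sample=True, top_p=0.95)
print(outputs[0]["generated_text"])
```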
Den4ikAI/rugpt3_large_qa
Den4ikAI
2022-11-23T10:17:31Z
763
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-11-23T09:58:45Z
--- license: mit ---
timm/convnext_base.clip_laion2b_augreg_ft_in1k
timm
2024-02-10T23:30:20Z
763
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:laion-2b", "arxiv:2210.08402", "arxiv:2201.03545", "arxiv:2103.00020", "license:apache-2.0", "region:us" ]
image-classification
2023-02-03T18:32:07Z
--- license: apache-2.0 library_name: timm tags: - image-classification - timm datasets: - imagenet-1k - laion-2b --- # Model card for convnext_base.clip_laion2b_augreg_ft_in1k A ConvNeXt image classification model. CLIP image tower weights pretrained in [OpenCLIP](https://github.com/mlfoundations/open_clip) on LAION and fine-tuned on ImageNet-1k in `timm` by Ross Wightman. Please see related OpenCLIP model cards for more details on pretrain: * https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-soup * https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg * https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg * https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 88.6 - GMACs: 20.1 - Activations (M): 37.6 - Image size: 256 x 256 - **Papers:** - LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402 - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020 - **Original:** https://github.com/mlfoundations/open_clip - **Pretrain Dataset:** LAION-2B - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('convnext_base.clip_laion2b_augreg_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_base.clip_laion2b_augreg_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 128, 64, 64]) # torch.Size([1, 256, 32, 32]) # torch.Size([1, 512, 16, 16]) # torch.Size([1, 1024, 8, 8]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_base.clip_laion2b_augreg_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = 
model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1024, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP. | model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| | [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | | [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | | [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 | | [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | | [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | | [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | | [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 | | [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | | [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | | [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | | 
[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | | [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 | | [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | | [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | | [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | | [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | | [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | | [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | | [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | | [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | | [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | | [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | | [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | | [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | | [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | | [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | | [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | | [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | | [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | | 
[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | | [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | | [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | | [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | | [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | | [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | | [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | | [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | | [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | | [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | | [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | | [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | | [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | | [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | | [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | ## Citation ```bibtex @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = 
{GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @inproceedings{Radford2021LearningTV, title={Learning Transferable Visual Models From Natural Language Supervision}, author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` ```bibtex @article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ```
timm/caformer_b36.sail_in22k_ft_in1k_384
timm
2023-05-05T05:42:08Z
763
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2210.13452", "license:apache-2.0", "region:us" ]
image-classification
2023-05-05T05:40:53Z
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-22k --- # Model card for caformer_b36.sail_in22k_ft_in1k_384 A CAFormer (a MetaFormer) image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 98.8 - GMACs: 72.3 - Activations (M): 261.8 - Image size: 384 x 384 - **Papers:** - Metaformer baselines for vision: https://arxiv.org/abs/2210.13452 - **Original:** https://github.com/sail-sg/metaformer - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-22k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('caformer_b36.sail_in22k_ft_in1k_384', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'caformer_b36.sail_in22k_ft_in1k_384', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 128, 96, 96]) # torch.Size([1, 256, 48, 48]) # torch.Size([1, 512, 24, 24]) # torch.Size([1, 768, 12, 12]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'caformer_b36.sail_in22k_ft_in1k_384', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 768, 12, 12) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @article{yu2022metaformer_baselines, title={Metaformer baselines for vision}, author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao}, journal={arXiv preprint arXiv:2210.13452}, year={2022} } ```
sail-rvc/ArnoldSchwarzenegger
sail-rvc
2023-07-14T07:19:02Z
763
0
transformers
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
2023-07-14T07:18:26Z
--- pipeline_tag: audio-to-audio tags: - rvc - sail-rvc --- # ArnoldSchwarzenegger ## RVC Model ![banner](https://i.imgur.com/xocCjhH.jpg) This model repo was automatically generated. Date: 2023-07-14 07:19:02 Bot Name: juuxnscrap Model Type: RVC Source: https://huggingface.co/juuxn/RVCModels/ Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
aloobun/Reyna-Mini-1.8B-v0.2
aloobun
2024-03-04T16:45:50Z
763
12
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chatml", "finetune", "gpt4", "synthetic data", "custom_code", "conversational", "dataset:Locutusque/Hercules-v3.0", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-16T02:08:50Z
--- license: other library_name: transformers tags: - chatml - finetune - gpt4 - synthetic data - custom_code - qwen2 datasets: - Locutusque/Hercules-v3.0 license_name: tongyi-qianwen-research license_link: https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat/raw/main/LICENSE model-index: - name: Reyna-Mini-1.8B-v0.2 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 36.6 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 60.19 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 44.75 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 41.24 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 61.56 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 31.31 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2 name: Open LLM Leaderboard --- ![Reyna aloobun qwen0.5B](https://i.imgur.com/QfbOY6c.jpeg) - Finetuned [Qwen/Qwen1.5-1.8B-Chat](https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat), with SFT on Hercules v3 dataset. - This marks the third model in this series. - Format: ChatML - ``` <|im_start|>system {system}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` - Next step would be to do a DPO train on top. ## Benchamrks: |Avg. 
| Arc | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | |---|---|---|---|---|---|---| |45.94 | 36.6 |60.19 | 44.75 | 41.24 | 61.56 | 31.31 | ## Example: ``` from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, StoppingCriteria import torch class MyStoppingCriteria(StoppingCriteria): def __init__(self, target_sequence, prompt): self.target_sequence = target_sequence self.prompt=prompt def __call__(self, input_ids, scores, **kwargs): generated_text = tokenizer.decode(input_ids[0]) generated_text = generated_text.replace(self.prompt,'') if self.target_sequence in generated_text: return True return False def __len__(self): return 1 def __iter__(self): yield self modelpath="aloobun/Reyna-Mini-1.8B-v0.2" model = AutoModelForCausalLM.from_pretrained( modelpath, torch_dtype=torch.bfloat16, device_map="cuda", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained( modelpath, trust_remote_code=True, use_fast=False, ) prompt = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nIs there inherent order in nature or is it all chaos and chance?<|im_end|>\n<|im_start|>assistant\n" encoded_input = tokenizer(prompt, return_tensors='pt') input_ids=encoded_input['input_ids'].cuda() streamer = TextStreamer(tokenizer=tokenizer, skip_prompt=True) op = model.generate( input_ids, streamer=streamer, pad_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.6, top_p=0.8, max_new_tokens=512, stopping_criteria=MyStoppingCriteria("<|im_end|>", prompt) ) ``` ## Output: >Nature appears to be inherently organized, with patterns and structures that can be observed across different levels of organization. However, the exact mechanisms by which these patterns emerge and evolve remain largely unknown. >The universe seems to be governed by a series of laws and principles known as "laws of physics," such as Newton's laws of motion, electromagnetism, and thermodynamics. These laws govern how matter and energy interact with each other and how they behave over time. >Despite our understanding of these laws, we still struggle to comprehend the underlying mechanisms that allow for the emergence of complex patterns and structures. This is because the universe operates on a scale that is too small for us to observe directly, and therefore we cannot fully understand its internal workings. >In summary, while there may be some level of order and structure within the universe, the precise mechanisms governing this order remain largely unknown.<|im_end|> # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_aloobun__Reyna-Mini-1.8B-v0.2) | Metric |Value| |---------------------------------|----:| |Avg. |45.94| |AI2 Reasoning Challenge (25-Shot)|36.60| |HellaSwag (10-Shot) |60.19| |MMLU (5-Shot) |44.75| |TruthfulQA (0-shot) |41.24| |Winogrande (5-shot) |61.56| |GSM8k (5-shot) |31.31|
weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp
weezywitasneezy
2024-04-08T19:34:07Z
763
0
transformers
[ "transformers", "safetensors", "gguf", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "ChaoticNeutrals/Eris_Remix_7B", "Virt-io/Erebus-Holodeck-7B", "base_model:ChaoticNeutrals/Eris_Remix_7B", "base_model:Virt-io/Erebus-Holodeck-7B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-24T20:04:06Z
--- tags: - merge - mergekit - lazymergekit - ChaoticNeutrals/Eris_Remix_7B - Virt-io/Erebus-Holodeck-7B base_model: - ChaoticNeutrals/Eris_Remix_7B - Virt-io/Erebus-Holodeck-7B license: cc-by-nc-4.0 --- # OxytocinErosEngineeringF1-7B-slerp <img src="https://cdn-uploads.huggingface.co/production/uploads/632b22e66cb20ba0ae82bf06/ei6PcV1sk_qSPo8GgGms-.png" width="512" height="512" /> OxytocinErosEngineeringF1-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [ChaoticNeutrals/Eris_Remix_7B](https://huggingface.co/ChaoticNeutrals/Eris_Remix_7B) * [Virt-io/Erebus-Holodeck-7B](https://huggingface.co/Virt-io/Erebus-Holodeck-7B) Thanks to MraderMarcher for providing GGUF quants-> [mradermacher/OxytocinErosEngineeringF1-7B-slerp-GGUF](https://huggingface.co/mradermacher/OxytocinErosEngineeringF1-7B-slerp-GGUF) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_weezywitasneezy__OxytocinErosEngineeringF1-7B-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |69.22| |AI2 Reasoning Challenge (25-Shot)|67.15| |HellaSwag (10-Shot) |86| |MMLU (5-Shot) |64.73| |TruthfulQA (0-shot) |54.54| |Winogrande (5-shot) |81.14| |GSM8k (5-shot) |61.79| ## 🧩 Configuration ```yaml slices: - sources: - model: ChaoticNeutrals/Eris_Remix_7B layer_range: [0, 32] - model: Virt-io/Erebus-Holodeck-7B layer_range: [0, 32] merge_method: slerp base_model: ChaoticNeutrals/Eris_Remix_7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
raincandy-u/phillama-3.8b-v0.1
raincandy-u
2024-04-25T11:07:26Z
763
10
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:raincandy-u/Dextromethorphan-10k", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-25T06:28:18Z
--- datasets: - raincandy-u/Dextromethorphan-10k library_name: transformers pipeline_tag: text-generation license: mit --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/657eb5b256c9c67605a6e8b5/f96pPiJQb3puzbPYNknG2.png) # phillama-3.8b-v0.1 Phillama is a model based on Phi-3-mini and trained on the Llama-generated dataset raincandy-u/Dextromethorphan-10k to make it more "llama-like". The model has also been converted into the Llama format, so it works with any Llama-2/3 workflow. # System prompt `You are a humanoid AI assistant. You think step by step and give detailed long response.` # Prompt template ``` <|system|> You are a humanoid AI assistant. You think step by step and give detailed long response.<|end|> <|user|> Why people like llama?<|end|> <|assistant|> ```
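Putting the system prompt and template above together, a minimal inference sketch could look like the following; the loading options and generation length are assumptions for illustration.

```python
# Illustrative sketch that applies the prompt template above; loading options are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "raincandy-u/phillama-3.8b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "<|system|>\n"
    "You are a humanoid AI assistant. You think step by step and give detailed long response.<|end|>\n"
    "<|user|>\n"
    "Why people like llama?<|end|>\n"
    "<|assistant|>\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```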
Walmart-the-bag/Llama-3-LizardCoder-8B
Walmart-the-bag
2024-05-19T13:03:55Z
763
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "lizardcoder", "llama3", "merge", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-18T14:57:19Z
--- license: llama3 language: - en library_name: transformers tags: - code - lizardcoder - llama3 - llama - merge --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6530994e70a88b63f007324d/_H6JOLV3eKFeUYiLHqHlI.png) # Llama-3-LizardCoder-8B This is a merge of six Llama 3 8B-based models. It performs reasonably well on some coding tasks for its parameter size. [gguf](https://huggingface.co/Walmart-the-bag/LizardCoder-Llama3-8B-GGUF) ## Limitations - **Uncertain Accuracy:** As a merged model, the model's responses may not always be accurate. Users should independently verify any outputs before relying on them. - **Potential for Censorship:** The model's censorship filters are not comprehensive. There is a possibility of encountering censored code/content. - **Missing packages:** When asked to write code, it may forget to include a required package. Ask for it explicitly and write a good prompt; a future fine-tune will address this. # Merge Config This model was made possible by the following merge yaml. ```yaml models: - model: rombodawg/Llama-3-8B-Instruct-Coder parameters: weight: 1.0 - model: ajibawa-2023/Code-Llama-3-8B parameters: weight: 0.3 - model: meta-llama/Meta-Llama-3-8B-Instruct parameters: weight: 0.5 - model: Orenguteng/Llama-3-8B-Lexi-Uncensored parameters: weight: 0.8 - model: TheSkullery/llama-3-cat-8b-instruct-v1 parameters: weight: 0.9 - model: McGill-NLP/Llama-3-8B-Web parameters: weight: 0.2 merge_method: linear dtype: bfloat16 ``` ## License I don't really care about this, but here: [Llama3](https://llama.meta.com/llama3/license/)
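For completeness, a hedged inference sketch is shown below. It assumes the merged model inherits the Llama 3 Instruct chat template; the prompt illustrates the advice above about asking explicitly for imports, and the generation settings are placeholders.

```python
# Hedged inference sketch; the chat template is assumed to be inherited from Llama 3 Instruct.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Walmart-the-bag/Llama-3-LizardCoder-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    # Being explicit about imports/packages helps work around the limitation noted above.
    {"role": "user", "content": "Write a Python script that plots a sine wave. Include all imports."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```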
RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf
RichardErkhov
2024-06-05T00:30:38Z
763
0
null
[ "gguf", "region:us" ]
null
2024-06-05T00:19:50Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) code-autocomplete-gpt2-base - GGUF - Model creator: https://huggingface.co/shibing624/ - Original model: https://huggingface.co/shibing624/code-autocomplete-gpt2-base/ | Name | Quant method | Size | | ---- | ---- | ---- | | [code-autocomplete-gpt2-base.Q2_K.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q2_K.gguf) | Q2_K | 0.08GB | | [code-autocomplete-gpt2-base.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.IQ3_XS.gguf) | IQ3_XS | 0.08GB | | [code-autocomplete-gpt2-base.IQ3_S.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.IQ3_S.gguf) | IQ3_S | 0.08GB | | [code-autocomplete-gpt2-base.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q3_K_S.gguf) | Q3_K_S | 0.08GB | | [code-autocomplete-gpt2-base.IQ3_M.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.IQ3_M.gguf) | IQ3_M | 0.09GB | | [code-autocomplete-gpt2-base.Q3_K.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q3_K.gguf) | Q3_K | 0.09GB | | [code-autocomplete-gpt2-base.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q3_K_M.gguf) | Q3_K_M | 0.09GB | | [code-autocomplete-gpt2-base.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q3_K_L.gguf) | Q3_K_L | 0.1GB | | [code-autocomplete-gpt2-base.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.IQ4_XS.gguf) | IQ4_XS | 0.1GB | | [code-autocomplete-gpt2-base.Q4_0.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q4_0.gguf) | Q4_0 | 0.1GB | | [code-autocomplete-gpt2-base.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.IQ4_NL.gguf) | IQ4_NL | 0.1GB | | [code-autocomplete-gpt2-base.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q4_K_S.gguf) | Q4_K_S | 0.1GB | | [code-autocomplete-gpt2-base.Q4_K.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q4_K.gguf) | Q4_K | 0.11GB | | [code-autocomplete-gpt2-base.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q4_K_M.gguf) | Q4_K_M | 0.11GB | | [code-autocomplete-gpt2-base.Q4_1.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q4_1.gguf) | Q4_1 | 0.11GB | | [code-autocomplete-gpt2-base.Q5_0.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q5_0.gguf) | Q5_0 | 0.11GB | | 
[code-autocomplete-gpt2-base.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q5_K_S.gguf) | Q5_K_S | 0.11GB | | [code-autocomplete-gpt2-base.Q5_K.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q5_K.gguf) | Q5_K | 0.12GB | | [code-autocomplete-gpt2-base.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q5_K_M.gguf) | Q5_K_M | 0.12GB | | [code-autocomplete-gpt2-base.Q5_1.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q5_1.gguf) | Q5_1 | 0.12GB | | [code-autocomplete-gpt2-base.Q6_K.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q6_K.gguf) | Q6_K | 0.13GB | | [code-autocomplete-gpt2-base.Q8_0.gguf](https://huggingface.co/RichardErkhov/shibing624_-_code-autocomplete-gpt2-base-gguf/blob/main/code-autocomplete-gpt2-base.Q8_0.gguf) | Q8_0 | 0.17GB | Original model description: --- language: - en tags: - code - autocomplete - pytorch - en license: "apache-2.0" --- # GPT2 for Code AutoComplete Model code-autocomplete, a code completion plugin for Python. **code-autocomplete** can automatically complete the code of lines and blocks with GPT2. ## Usage Open source repo:[code-autocomplete](https://github.com/shibing624/code-autocomplete),support GPT2 model, usage: ```python from autocomplete.gpt2_coder import GPT2Coder m = GPT2Coder("shibing624/code-autocomplete-gpt2-base") print(m.generate('import torch.nn as')[0]) ``` Also, use huggingface/transformers: *Please use 'GPT2' related functions to load this model!* ```python import os import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE" device = torch.device("cuda" if torch.cuda.is_available() else "cpu") tokenizer = GPT2Tokenizer.from_pretrained("shibing624/code-autocomplete-gpt2-base") model = GPT2LMHeadModel.from_pretrained("shibing624/code-autocomplete-gpt2-base") model.to(device) prompts = [ """from torch import nn class LSTM(Module): def __init__(self, *, n_tokens: int, embedding_size: int, hidden_size: int, n_layers: int):""", """import numpy as np import torch import torch.nn as""", "import java.util.ArrayList", "def factorial(n):", ] for prompt in prompts: input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors='pt').to(device) outputs = model.generate(input_ids=input_ids, max_length=64 + len(prompt), temperature=1.0, top_k=50, top_p=0.95, repetition_penalty=1.0, do_sample=True, num_return_sequences=1, length_penalty=2.0, early_stopping=True) decoded = tokenizer.decode(outputs[0], skip_special_tokens=True) print(decoded) print("=" * 20) ``` output: ```shell from torch import nn class LSTM(Module): def __init__(self, *, n_tokens: int, embedding_size: int, hidden_size: int, n_layers: int): self.embedding_size = embedding_size ==================== import numpy as np import torch import torch.nn as nn import torch.nn.functional as F ``` Model files: ``` code-autocomplete-gpt2-base ├── config.json ├── merges.txt ├── pytorch_model.bin ├── special_tokens_map.json ├── tokenizer_config.json └── vocab.json ``` ### Train data #### pytorch_awesome projects source code download [code-autocomplete](https://github.com/shibing624/code-autocomplete), ```shell cd autocomplete 
python create_dataset.py ``` If you want to train the code-autocomplete GPT2 model, refer to [https://github.com/shibing624/code-autocomplete/blob/main/autocomplete/gpt2_coder.py](https://github.com/shibing624/code-autocomplete/blob/main/autocomplete/gpt2_coder.py) ### About GPT2 Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large GPT-2 is a model pretrained on English text using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Citation ```latex @misc{code-autocomplete, author = {Xu Ming}, title = {code-autocomplete: Code AutoComplete with GPT model}, year = {2022}, publisher = {GitHub}, journal = {GitHub repository}, url = {https://github.com/shibing624/code-autocomplete}, } ```
mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF
mradermacher
2024-06-18T06:13:44Z
763
0
transformers
[ "transformers", "gguf", "en", "base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", "license:other", "endpoints_compatible", "region:us" ]
null
2024-06-18T01:33:54Z
--- base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct language: - en library_name: transformers license: other license_link: LICENSE license_name: deepseek-license quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 5.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 5.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 6.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 7.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 7.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 8.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.6 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.7 | | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 9.0 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 9.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 10.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 11.2 | | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 12.0 | | | [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Lite-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Lite-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 14.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
NikolayKozloff/britllm-3b-v0.1-Q8_0-GGUF
NikolayKozloff
2024-06-24T15:46:21Z
763
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "cy", "ga", "gd", "sco", "base_model:britllm/britllm-3b-v0.1", "license:odc-by", "region:us" ]
null
2024-06-24T15:46:04Z
--- base_model: britllm/britllm-3b-v0.1 language: - en - cy - ga - gd - sco license: odc-by tags: - llama-cpp - gguf-my-repo --- # NikolayKozloff/britllm-3b-v0.1-Q8_0-GGUF This model was converted to GGUF format from [`britllm/britllm-3b-v0.1`](https://huggingface.co/britllm/britllm-3b-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/britllm/britllm-3b-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/britllm-3b-v0.1-Q8_0-GGUF --hf-file britllm-3b-v0.1-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/britllm-3b-v0.1-Q8_0-GGUF --hf-file britllm-3b-v0.1-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/britllm-3b-v0.1-Q8_0-GGUF --hf-file britllm-3b-v0.1-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/britllm-3b-v0.1-Q8_0-GGUF --hf-file britllm-3b-v0.1-q8_0.gguf -c 2048 ```
BekiTila/Amharic-llama-for-agriculture-chat-bot-Q4_K_M-GGUF
BekiTila
2024-06-30T05:35:13Z
763
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:BekiTila/Amharic-llama-for-agriculture-chat-bot", "endpoints_compatible", "region:us" ]
null
2024-06-30T05:34:55Z
--- base_model: BekiTila/Amharic-llama-for-agriculture-chat-bot library_name: transformers tags: - llama-cpp - gguf-my-repo --- # BekiTila/Amharic-llama-for-agriculture-chat-bot-Q4_K_M-GGUF This model was converted to GGUF format from [`BekiTila/Amharic-llama-for-agriculture-chat-bot`](https://huggingface.co/BekiTila/Amharic-llama-for-agriculture-chat-bot) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BekiTila/Amharic-llama-for-agriculture-chat-bot) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo BekiTila/Amharic-llama-for-agriculture-chat-bot-Q4_K_M-GGUF --hf-file amharic-llama-for-agriculture-chat-bot-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo BekiTila/Amharic-llama-for-agriculture-chat-bot-Q4_K_M-GGUF --hf-file amharic-llama-for-agriculture-chat-bot-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo BekiTila/Amharic-llama-for-agriculture-chat-bot-Q4_K_M-GGUF --hf-file amharic-llama-for-agriculture-chat-bot-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo BekiTila/Amharic-llama-for-agriculture-chat-bot-Q4_K_M-GGUF --hf-file amharic-llama-for-agriculture-chat-bot-q4_k_m.gguf -c 2048 ```
pearsonkyle/gpt2-arxiv
pearsonkyle
2023-01-20T18:36:01Z
762
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-01-20T16:02:38Z
--- tags: - generated_from_trainer model-index: - name: gpt2-arxiv results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-arxiv A [gpt2](https://huggingface.co/gpt2) powered predictive keyboard trained on ~1.6M manuscript abstracts from the ArXiv. This model uses https://www.kaggle.com/datasets/Cornell-University/arxiv ```python from transformers import pipeline from transformers import GPT2TokenizerFast tokenizer = GPT2TokenizerFast.from_pretrained("gpt2") llm = pipeline('text-generation',model='pearsonkyle/gpt2-arxiv', tokenizer=tokenizer) texts = llm("Directly imaged exoplanets probe", max_length=50, do_sample=True, num_return_sequences=5, penalty_alpha=0.65, top_k=40, repetition_penalty=1.25, temperature=0.95) for i in range(5): print(texts[i]['generated_text']+'\n') ``` - *The reflectance of Earth's vegetation suggests* `that large, deciduous forest fires are composed of mostly dry, unprocessed material that is distributed in a nearly patchy fashion. The distributions of these fires are correlated with temperature, and also with vegetation...` - *Directly imaged exoplanets probe* `the atmospheres of giant planets. The detection of such planets requires high-quality imaging with high contrast and angular resolution, as well as` - *We can remotely sense an atmosphere by observing its reflected, transmitted, or emitted light in varying geometries. This light will contain information on* `the planetary conditions including atmospheric temperature and cloud properties, which is essential for understanding how the planet interacts with the atmosphere and how it affects the climate. The primary science objective of this paper is to develop a methodology that can be applied to any kind of observation and measurement data, and to provide a framework that enables the detection and characterization of the atmospheres of exoplanets` ## Model description [GPT-2](https://huggingface.co/transformers/v2.2.0/pretrained_models.html): 12-layer, 768-hidden, 12-heads, 117M parameters ## Intended uses & limitations Coming soon... - Predictive Keyboard using text generation - Realtime reference recommendations using nearest neighbors of embeddings Be careful when generating a lot of text or when changing the sampling mode of the language model. It can sometimes produce things that are not truthful, e.g., - The surface of Mars is composed of a thin layer of water ice, that was discovered by the Cassini spacecraft after its impact on the Earth's surface. ## Training procedure ~49 hours on a 3090 training for 1.25M iterations ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1 - Tokenizers 0.13.2
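The card lists reference recommendation via nearest neighbours of embeddings as a planned use. As a rough, hedged sketch of one way that could work with this checkpoint (mean pooling and cosine similarity are assumptions, not the author's method):

```python
# Hedged sketch: nearest-neighbour abstract retrieval using mean-pooled GPT-2 hidden states.
# The pooling strategy and similarity metric are assumptions for illustration only.
import torch
from transformers import GPT2Model, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("pearsonkyle/gpt2-arxiv")
model.eval()

def embed(text: str) -> torch.Tensor:
    ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**ids).last_hidden_state   # (1, seq_len, hidden_size)
    return hidden.mean(dim=1).squeeze(0)          # mean-pool over tokens

abstracts = [
    "We characterize the atmospheres of transiting exoplanets with low-resolution spectroscopy.",
    "A new catalog of directly imaged substellar companions and their orbital constraints.",
]
query = embed("Directly imaged exoplanets probe the atmospheres of giant planets.")
scores = [torch.nn.functional.cosine_similarity(query, embed(a), dim=0).item() for a in abstracts]
best = max(zip(scores, abstracts))
print(best)  # (similarity, most similar abstract)
```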
MBZUAI/LaMini-Cerebras-256M
MBZUAI
2023-04-28T13:08:29Z
762
4
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "arxiv:2304.14402", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-04-12T06:14:42Z
--- license: cc-by-nc-4.0 language: - en pipeline_tag: text-generation widget: - text: >- Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: how can I become more healthy? ### Response: example_title: example --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a> </p> # LaMini-Cerebras-256M [![Model License](https://img.shields.io/badge/Model%20License-CC%20By%20NC%204.0-red.svg)]() This model is one of our LaMini-LM model series in paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)". This model is a fine-tuned version of [cerebras/Cerebras-GPT-256M](https://huggingface.co/cerebras/Cerebras-GPT-256M) on [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/). You can view other models of LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper. <table> <thead> <tr> <th>Base model</th> <th colspan="4">LaMini-LM series (#parameters)</th> </tr> </thead> <tbody> <tr> <td>T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td> <td></td> </tr> <tr> <td>Flan-T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td> <td></td> </tr> <tr> <td>Cerebras-GPT</td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td> </tr> <tr> <td>GPT-2</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td> <td></td> 
</tr> <tr> <td>GPT-Neo</td> <td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td> <td></td> <td></td> </tr> <tr> <td>GPT-J</td> <td colspan="4">coming soon</td> </tr> <tr> <td>LLaMA</td> <td colspan="4">coming soon</td> </tr> </tbody> </table> ## Use ### Intended use We recommend using the model to respond to human instructions written in natural language. Since this decoder-only model is fine-tuned with wrapper text, we suggest using the same wrapper text to achieve the best performance. See the example on the right or the code below. We now show you how to load and use our model using HuggingFace `pipeline()`. ```python # pip install -q transformers from transformers import pipeline checkpoint = "MBZUAI/LaMini-Cerebras-256M" model = pipeline('text-generation', model = checkpoint) instruction = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"' input_prompt = f"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:" generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text'] print("Response", generated_text) ``` ## Training Procedure <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a> </p> We initialize with [cerebras/Cerebras-GPT-256M](https://huggingface.co/cerebras/Cerebras-GPT-256M) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 256M. ### Training Hyperparameters ## Evaluation We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more details, please refer to our [paper](https://arxiv.org/abs/2304.14402). ## Limitations More information needed # Citation ```bibtex @article{lamini-lm, author = {Minghao Wu and Abdul Waheed and Chiyu Zhang and Muhammad Abdul-Mageed and Alham Fikri Aji }, title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions}, journal = {CoRR}, volume = {abs/2304.14402}, year = {2023}, url = {https://arxiv.org/abs/2304.14402}, eprinttype = {arXiv}, eprint = {2304.14402} } ```
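To make the training procedure above concrete, here is a minimal sketch of formatting LaMini-instruction records with the same wrapper text used at inference time. The column names `instruction` and `response` are assumptions about the dataset schema; adjust them if the actual fields differ.

```python
from datasets import load_dataset

dataset = load_dataset("MBZUAI/LaMini-instruction", split="train")

def to_training_text(example):
    # Wrap each record in the same prompt template used for inference.
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n### Response:"
    )
    return {"text": prompt + " " + example["response"]}

formatted = dataset.map(to_training_text)
print(formatted[0]["text"][:300])
```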
pengold/t5-vietnamese-summarization
pengold
2023-09-20T03:31:21Z
762
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "summarization", "vi", "base_model:pengold/t5-vietnamese-summarization", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
summarization
2023-09-17T17:51:41Z
--- license: cc-by-4.0 base_model: pengold/t5-vietnamese-summarization tags: - summarization metrics: - rouge model-index: - name: t5-vietnamese-summarization results: [] inference: parameters: min_length: 5 max_length: 150 widget: - text: >- summarize: Thảo luận phiên chuyên đề 2 về năng suất lao động tại Diễn đàn Kinh tế - Xã hội 2023 ngày 19/9, Chuyên gia Kinh tế quốc tế Jonathan Pincus (Tổ chức phát triển Liên hợp quốc tại Việt Nam) nói việc tăng năng suất lao động cần nhìn nhận trong quá trình dài hơi thay vì trong giai đoạn ngắn. Rất khó để một quốc gia có thể tăng trưởng năng suất nhanh trong một giai đoạn dài, đó chính là bẫy năng suất trung bình, ông Pincus đúc rút, gọi đây là mối đe dọa lớn. example_title: Example 1 - text: >- summarize: Đây là nỗ lực của chính phủ nhằm giảm đi cơn sốt trên thị trường chứng khoán. Quyết định này có tác động ngay lập tức. Chỉ số chính của thị trường chứng khoán Thượng Hải khi đóng cửa giảm 281.8 điểm, ở mức 4053.1. Một số phân tích gia nói việc cổ phiếu sụt giá cũng chỉ mang tính tạm thời mà thôi. Ngân hàng Thế giới giờ đây dự đoán nền kinh tế Trung Quốc sẽ tăng 10.4% trong năm nay. Lúc trước, Ngân hàng Thế giới dự đoán kinh tế Trung Quốc sẽ tăng 9.6% trong năm 2007. Với việc Bắc Kinh đưa ra hành động nhằm giảm nhiệt thị trường chứng khoán vào hôm thứ Tư, thuế đối với cổ phiếu giao dịch giờ đây tăng từ 0.1% lên 0.3%. Tính đến phiên đóng cửa vào hôm thứ Ba, chỉ số cổ phiếu Thượng Hải đã tăng 62% trong năm nay, và có giá trị tăng gấp bốn lần kể từ đầu năm 2006. Ông Thomas Gruener từ Landesbank Berlin nói: “Hành động này có thể tạo ra việc điều chỉnh giá nhưng nhìn chung chúng tôi không cho là xu hướng sẽ thay đổi”. Tuy nhiên, việc cổ phiếu Thượng Hải sụt giá có thể sẽ tác động tới tâm lý của các thị trường chứng khoán châu Âu. Thế nên các chỉ số chứng khoán tại châu Âu khi mở cửa hôm thứ Tư đều hạ. example_title: Example 2 language: - vi pipeline_tag: summarization --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-vietnamese-summarization This model is a fine-tuned version of [pengold/t5-vietnamese-summarization](https://huggingface.co/pengold/t5-vietnamese-summarization) on an unknown dataset. 
It achieves the following results on the evaluation set: - Loss: 4.6288 - Rouge1: 0.4728 - Rouge2: 0.1669 - Rougel: 0.3049 - Rougelsum: 0.3049 - Gen Len: 18.7458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 70 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 5.2487 | 1.0 | 2007 | 5.0028 | 0.4671 | 0.1595 | 0.2994 | 0.2994 | 18.7618 | | 5.217 | 2.0 | 4014 | 4.9802 | 0.4639 | 0.1569 | 0.2984 | 0.2983 | 18.7747 | | 5.2191 | 3.0 | 6021 | 4.9685 | 0.4644 | 0.1594 | 0.2989 | 0.2989 | 18.7613 | | 5.2254 | 4.0 | 8028 | 4.9477 | 0.4648 | 0.1586 | 0.2988 | 0.2987 | 18.7458 | | 5.1735 | 5.0 | 10035 | 4.9366 | 0.4654 | 0.1593 | 0.2988 | 0.2987 | 18.761 | | 5.1735 | 6.0 | 12042 | 4.9214 | 0.4676 | 0.1611 | 0.3004 | 0.3004 | 18.78 | | 5.1653 | 7.0 | 14049 | 4.9095 | 0.4681 | 0.1616 | 0.3007 | 0.3007 | 18.7523 | | 5.1154 | 8.0 | 16056 | 4.8971 | 0.4664 | 0.1598 | 0.3002 | 0.3001 | 18.7655 | | 5.1232 | 9.0 | 18063 | 4.8882 | 0.4683 | 0.1612 | 0.3008 | 0.3008 | 18.761 | | 5.0995 | 10.0 | 20070 | 4.8758 | 0.4709 | 0.1618 | 0.3021 | 0.302 | 18.7518 | | 5.1012 | 11.0 | 22077 | 4.8689 | 0.4685 | 0.1616 | 0.3011 | 0.3009 | 18.7665 | | 5.0916 | 12.0 | 24084 | 4.8486 | 0.4695 | 0.1623 | 0.3024 | 0.3023 | 18.7655 | | 5.0559 | 13.0 | 26091 | 4.8409 | 0.4699 | 0.1631 | 0.3024 | 0.3023 | 18.7849 | | 5.0633 | 14.0 | 28098 | 4.8326 | 0.4705 | 0.1613 | 0.302 | 0.302 | 18.7583 | | 5.0335 | 15.0 | 30105 | 4.8243 | 0.4696 | 0.1612 | 0.3023 | 0.3022 | 18.7638 | | 5.0271 | 16.0 | 32112 | 4.8046 | 0.4691 | 0.1618 | 0.3022 | 0.3022 | 18.7518 | | 5.0045 | 17.0 | 34119 | 4.8060 | 0.4708 | 0.1629 | 0.3029 | 0.3028 | 18.7568 | | 5.0072 | 18.0 | 36126 | 4.7945 | 0.4702 | 0.1633 | 0.3024 | 0.3023 | 18.776 | | 4.9954 | 19.0 | 38133 | 4.7894 | 0.47 | 0.1639 | 0.3022 | 0.3021 | 18.7785 | | 4.9994 | 20.0 | 40140 | 4.7773 | 0.4692 | 0.1625 | 0.3028 | 0.3027 | 18.7623 | | 4.953 | 21.0 | 42147 | 4.7641 | 0.4682 | 0.162 | 0.3015 | 0.3014 | 18.757 | | 4.9526 | 22.0 | 44154 | 4.7600 | 0.4703 | 0.1626 | 0.3023 | 0.3023 | 18.7625 | | 4.9571 | 23.0 | 46161 | 4.7592 | 0.4698 | 0.1627 | 0.3025 | 0.3025 | 18.781 | | 4.9324 | 24.0 | 48168 | 4.7511 | 0.4697 | 0.1631 | 0.3022 | 0.3021 | 18.769 | | 4.9323 | 25.0 | 50175 | 4.7433 | 0.4723 | 0.1649 | 0.304 | 0.3039 | 18.7757 | | 4.9381 | 26.0 | 52182 | 4.7378 | 0.4703 | 0.1629 | 0.3026 | 0.3026 | 18.7782 | | 4.9288 | 27.0 | 54189 | 4.7454 | 0.4709 | 0.1627 | 0.3026 | 0.3026 | 18.7777 | | 4.9131 | 28.0 | 56196 | 4.7222 | 0.471 | 0.1652 | 0.3037 | 0.3037 | 18.782 | | 4.9005 | 29.0 | 58203 | 4.7241 | 0.4719 | 0.1638 | 0.3039 | 0.3038 | 18.778 | | 4.9051 | 30.0 | 60210 | 4.7225 | 0.4715 | 0.1647 | 0.3037 | 0.3036 | 18.7668 | | 4.8816 | 31.0 | 62217 | 4.7181 | 0.4701 | 0.1631 | 0.3029 | 0.3029 | 18.7416 | | 4.8687 | 32.0 | 64224 | 4.7061 | 0.4705 | 0.1643 | 0.3032 | 0.3031 | 18.7625 | | 4.8935 | 33.0 | 66231 | 4.7063 | 0.4697 | 0.1632 | 0.3028 | 0.3028 | 18.7458 | | 4.88 | 34.0 | 68238 | 4.6984 | 0.471 | 0.164 | 0.3039 | 0.3039 | 18.7663 | | 
4.8473 | 35.0 | 70245 | 4.6934 | 0.4699 | 0.1636 | 0.3034 | 0.3033 | 18.7531 | | 4.8613 | 36.0 | 72252 | 4.6863 | 0.4705 | 0.1631 | 0.303 | 0.303 | 18.7797 | | 4.8491 | 37.0 | 74259 | 4.6847 | 0.4703 | 0.1638 | 0.3037 | 0.3037 | 18.78 | | 4.8239 | 38.0 | 76266 | 4.6804 | 0.4707 | 0.1632 | 0.3032 | 0.3032 | 18.7802 | | 4.8767 | 39.0 | 78273 | 4.6788 | 0.4703 | 0.1637 | 0.3027 | 0.3026 | 18.7446 | | 4.8402 | 40.0 | 80280 | 4.6700 | 0.4699 | 0.1633 | 0.3028 | 0.3028 | 18.7516 | | 4.8261 | 41.0 | 82287 | 4.6660 | 0.4699 | 0.1633 | 0.3029 | 0.3028 | 18.7369 | | 4.8193 | 42.0 | 84294 | 4.6693 | 0.4711 | 0.1654 | 0.3039 | 0.3038 | 18.7421 | | 4.8161 | 43.0 | 86301 | 4.6636 | 0.4707 | 0.1642 | 0.303 | 0.303 | 18.7595 | | 4.832 | 44.0 | 88308 | 4.6619 | 0.4708 | 0.1646 | 0.3036 | 0.3035 | 18.7423 | | 4.8304 | 45.0 | 90315 | 4.6575 | 0.4711 | 0.1651 | 0.3038 | 0.3037 | 18.7354 | | 4.7958 | 46.0 | 92322 | 4.6543 | 0.4711 | 0.165 | 0.3032 | 0.3032 | 18.7189 | | 4.804 | 47.0 | 94329 | 4.6541 | 0.4711 | 0.1656 | 0.3037 | 0.3036 | 18.7396 | | 4.7968 | 48.0 | 96336 | 4.6495 | 0.4709 | 0.165 | 0.3034 | 0.3034 | 18.7411 | | 4.7912 | 49.0 | 98343 | 4.6471 | 0.4718 | 0.1655 | 0.3041 | 0.3042 | 18.7361 | | 4.7721 | 50.0 | 100350 | 4.6469 | 0.4723 | 0.1667 | 0.3047 | 0.3047 | 18.7309 | | 4.7828 | 51.0 | 102357 | 4.6476 | 0.4712 | 0.1656 | 0.3044 | 0.3045 | 18.7446 | | 4.7934 | 52.0 | 104364 | 4.6453 | 0.4707 | 0.1645 | 0.3035 | 0.3035 | 18.7329 | | 4.7724 | 53.0 | 106371 | 4.6425 | 0.4715 | 0.1657 | 0.304 | 0.304 | 18.7403 | | 4.7804 | 54.0 | 108378 | 4.6362 | 0.4711 | 0.1658 | 0.3041 | 0.3041 | 18.7488 | | 4.792 | 55.0 | 110385 | 4.6363 | 0.4706 | 0.1653 | 0.3038 | 0.3038 | 18.7281 | | 4.7528 | 56.0 | 112392 | 4.6357 | 0.4724 | 0.1667 | 0.3044 | 0.3044 | 18.7463 | | 4.7849 | 57.0 | 114399 | 4.6346 | 0.472 | 0.1661 | 0.3041 | 0.304 | 18.7431 | | 4.7618 | 58.0 | 116406 | 4.6332 | 0.472 | 0.167 | 0.3046 | 0.3046 | 18.7336 | | 4.7841 | 59.0 | 118413 | 4.6287 | 0.4716 | 0.1664 | 0.3043 | 0.3043 | 18.7369 | | 4.7764 | 60.0 | 120420 | 4.6316 | 0.473 | 0.1666 | 0.3048 | 0.3047 | 18.7548 | | 4.7504 | 61.0 | 122427 | 4.6276 | 0.4721 | 0.1671 | 0.3043 | 0.3044 | 18.7371 | | 4.7629 | 62.0 | 124434 | 4.6250 | 0.4726 | 0.167 | 0.3046 | 0.3046 | 18.76 | | 4.7764 | 63.0 | 126441 | 4.6264 | 0.4725 | 0.1666 | 0.3044 | 0.3044 | 18.7446 | | 4.7524 | 64.0 | 128448 | 4.6275 | 0.4719 | 0.166 | 0.3041 | 0.3041 | 18.7428 | | 4.7641 | 65.0 | 130455 | 4.6288 | 0.4728 | 0.1669 | 0.3049 | 0.3049 | 18.7458 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
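A minimal usage sketch is shown below; the `summarize:` prefix and the length limits follow the widget examples and inference settings above, and the input is just an excerpt from one of those examples.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="pengold/t5-vietnamese-summarization")

text = (
    "summarize: Đây là nỗ lực của chính phủ nhằm giảm đi cơn sốt trên thị trường "
    "chứng khoán. Quyết định này có tác động ngay lập tức."
)
print(summarizer(text, min_length=5, max_length=150)[0]["summary_text"])
```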
TheBloke/medicine-LLM-13B-GGUF
TheBloke
2024-01-01T23:06:34Z
762
15
transformers
[ "transformers", "gguf", "llama", "biology", "medical", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "dataset:GAIR/lima", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:EleutherAI/pile", "arxiv:2309.09530", "base_model:AdaptLLM/medicine-LLM-13B", "license:other", "text-generation-inference", "region:us" ]
text-generation
2024-01-01T22:59:00Z
--- base_model: AdaptLLM/medicine-LLM-13B datasets: - Open-Orca/OpenOrca - GAIR/lima - WizardLM/WizardLM_evol_instruct_V2_196k - EleutherAI/pile inference: false language: - en license: other metrics: - accuracy model_creator: AdaptLLM model_name: Medicine LLM 13B model_type: llama pipeline_tag: text-generation prompt_template: '[INST] <<SYS>> {system_message} <</SYS>> {prompt} [/INST] ' quantized_by: TheBloke tags: - biology - medical --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Medicine LLM 13B - GGUF - Model creator: [AdaptLLM](https://huggingface.co/AdaptLLM) - Original model: [Medicine LLM 13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B) <!-- description start --> ## Description This repo contains GGUF format model files for [AdaptLLM's Medicine LLM 13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/medicine-LLM-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/medicine-LLM-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/medicine-LLM-13B-GGUF) * [AdaptLLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/AdaptLLM/medicine-LLM-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Llama-2-Chat ``` [INST] <<SYS>> {system_message} <</SYS>> {prompt} [/INST] ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [medicine-llm-13b.Q2_K.gguf](https://huggingface.co/TheBloke/medicine-LLM-13B-GGUF/blob/main/medicine-llm-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [medicine-llm-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/medicine-LLM-13B-GGUF/blob/main/medicine-llm-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [medicine-llm-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/medicine-LLM-13B-GGUF/blob/main/medicine-llm-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [medicine-llm-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/medicine-LLM-13B-GGUF/blob/main/medicine-llm-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [medicine-llm-13b.Q4_0.gguf](https://huggingface.co/TheBloke/medicine-LLM-13B-GGUF/blob/main/medicine-llm-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [medicine-llm-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/medicine-LLM-13B-GGUF/blob/main/medicine-llm-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [medicine-llm-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/medicine-LLM-13B-GGUF/blob/main/medicine-llm-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [medicine-llm-13b.Q5_0.gguf](https://huggingface.co/TheBloke/medicine-LLM-13B-GGUF/blob/main/medicine-llm-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [medicine-llm-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/medicine-LLM-13B-GGUF/blob/main/medicine-llm-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [medicine-llm-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/medicine-LLM-13B-GGUF/blob/main/medicine-llm-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [medicine-llm-13b.Q6_K.gguf](https://huggingface.co/TheBloke/medicine-LLM-13B-GGUF/blob/main/medicine-llm-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [medicine-llm-13b.Q8_0.gguf](https://huggingface.co/TheBloke/medicine-LLM-13B-GGUF/blob/main/medicine-llm-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/medicine-LLM-13B-GGUF and below it, a specific filename to download, such as: medicine-llm-13b.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/medicine-LLM-13B-GGUF medicine-llm-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/medicine-LLM-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/medicine-LLM-13B-GGUF medicine-llm-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m medicine-llm-13b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). 
#### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./medicine-llm-13b.Q4_K_M.gguf", # Download the model file first n_ctx=2048, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./medicine-llm-13b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: AdaptLLM's Medicine LLM 13B # Adapt (Large) Language Models to Domains This repo contains the domain-specific base model developed from **LLaMA-1-13B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530). We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**. ### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗 **************************** **Updates** **************************** * 12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/medicine-LLM-13B) developed from LLaMA-1-13B. * 12/8: Released our [chat models](https://huggingface.co/AdaptLLM/medicine-chat) developed from LLaMA-2-Chat-7B. * 9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), and [base models](https://huggingface.co/AdaptLLM/medicine-LLM) developed from LLaMA-1-7B. 
## Domain-Specific LLaMA-1 ### LLaMA-1-7B In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available in Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM), the performances of our AdaptLLM compared to other domain-specific LLMs are: <p align='center'> <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700"> </p> ### LLaMA-1-13B Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B). ## Domain-Specific LLaMA-2-Chat Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension can perfectly fit the data format** by transforming the reading comprehension into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat) For example, to chat with the biomedicine model: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("AdaptLLM/medicine-chat") tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/medicine-chat", use_fast=False) # Put your input here: user_input = '''Question: Which of the following is an example of monosomy? Options: - 46,XX - 47,XXX - 69,XYY - 45,X Please provide your choice first and then provide explanations if possible.''' # We use the prompt template of LLaMA-2-Chat demo prompt = f"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{user_input} [/INST]" inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device) outputs = model.generate(input_ids=inputs, max_length=4096)[0] answer_start = int(inputs.shape[-1]) pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True) print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}') ``` ## Domain-Specific Tasks To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks). 
**Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit for the specific data format required for chat models. ## Citation If you find our work helpful, please cite us: ```bibtex @article{adaptllm, title = {Adapting Large Language Models via Reading Comprehension}, author = {Daixuan Cheng and Shaohan Huang and Furu Wei}, journal = {CoRR}, volume = {abs/2309.09530}, year = {2023} } ``` <!-- original-model-card end -->
Sharathhebbar24/math_gpt2_sft
Sharathhebbar24
2024-04-05T14:15:30Z
762
1
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "maths", "mathgpt2", "trl", "sft", "en", "dataset:meta-math/MetaMathQA", "dataset:ArtifactAI/arxiv-math-instruct-50k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-24T03:19:50Z
--- language: - en license: apache-2.0 tags: - maths - gpt2 - mathgpt2 - trl - sft datasets: - meta-math/MetaMathQA - ArtifactAI/arxiv-math-instruct-50k pipeline_tag: text-generation widget: - text: Which motion is formed by an incident particle? example_title: Example 1 - text: What type of diffusional modeling is used for diffusion? example_title: Example 2 model-index: - name: math_gpt2_sft results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 22.87 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/math_gpt2_sft name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 30.41 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/math_gpt2_sft name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 25.06 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/math_gpt2_sft name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 37.62 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/math_gpt2_sft name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 51.54 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/math_gpt2_sft name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.68 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/math_gpt2_sft name: Open LLM Leaderboard --- This model is a finetuned version of ```Sharathhebbar24/math_gpt2``` using ```meta-math/MetaMathQA``` ## Model description GPT-2 is a transformers model pre-trained on a very large corpus of English data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifting one token (word or piece of word) to the right. The model uses a masking mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. 
The model is best at what it was trained for, however, which is generating texts from a prompt. ### To use this model ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model_name = "Sharathhebbar24/math_gpt2_sft" >>> model = AutoModelForCausalLM.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) >>> def generate_text(prompt): >>> inputs = tokenizer.encode(prompt, return_tensors='pt') >>> outputs = model.generate(inputs, max_length=64, pad_token_id=tokenizer.eos_token_id) >>> generated = tokenizer.decode(outputs[0], skip_special_tokens=True) >>> return generated[:generated.rfind(".")+1] >>> prompt = "Gracie and Joe are choosing numbers on the complex plane. Joe chooses the point $1+2i$. Gracie chooses $-1+i$. How far apart are Gracie and Joe's points?" >>> res = generate_text(prompt) >>> res ``` # Benchmark / Evaluation | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8k | | ------- | -------- | -------- | ------- | -------- | -------- | ------- | -------- | | Sharathhebbar24/math_gpt2_sft | 28.503 | 22.87 | 30.41 | 25.06 | 37.62 | 51.54 | 0.68 | ```python { "all": { "acc": 0.25082189621988066, "acc_stderr": 0.030526589726831692, "acc_norm": 0.25112870356236633, "acc_norm_stderr": 0.03129390389566968, "mc1": 0.24112607099143207, "mc1_stderr": 0.014974827279752334, "mc2": 0.3762297840067963, "mc2_stderr": 0.01445991036363257 }, "harness|arc:challenge|25": { "acc": 0.20563139931740615, "acc_stderr": 0.01181074526074258, "acc_norm": 0.22866894197952217, "acc_norm_stderr": 0.012272853582540799 }, "harness|hellaswag|10": { "acc": 0.2884883489344752, "acc_stderr": 0.004521334761709224, "acc_norm": 0.30412268472415854, "acc_norm_stderr": 0.00459094683972719 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.19, "acc_stderr": 0.03942772444036625, "acc_norm": 0.19, "acc_norm_stderr": 0.03942772444036625 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.2074074074074074, "acc_stderr": 0.03502553170678319, "acc_norm": 0.2074074074074074, "acc_norm_stderr": 0.03502553170678319 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.17763157894736842, "acc_stderr": 0.031103182383123398, "acc_norm": 0.17763157894736842, "acc_norm_stderr": 0.031103182383123398 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.19, "acc_stderr": 0.03942772444036622, "acc_norm": 0.19, "acc_norm_stderr": 0.03942772444036622 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.2188679245283019, "acc_stderr": 0.025447863825108618, "acc_norm": 0.2188679245283019, "acc_norm_stderr": 0.025447863825108618 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.25, "acc_stderr": 0.03621034121889507, "acc_norm": 0.25, "acc_norm_stderr": 0.03621034121889507 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.2, "acc_stderr": 0.04020151261036845, "acc_norm": 0.2, "acc_norm_stderr": 0.04020151261036845 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.24, "acc_stderr": 0.042923469599092816, "acc_norm": 0.24, "acc_norm_stderr": 0.042923469599092816 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.21965317919075145, "acc_stderr": 0.031568093627031744, "acc_norm": 0.21965317919075145, "acc_norm_stderr": 0.031568093627031744 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.23529411764705882, "acc_stderr": 0.04220773659171453, "acc_norm": 
0.23529411764705882, "acc_norm_stderr": 0.04220773659171453 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.23, "acc_stderr": 0.04229525846816505, "acc_norm": 0.23, "acc_norm_stderr": 0.04229525846816505 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.2680851063829787, "acc_stderr": 0.028957342788342347, "acc_norm": 0.2680851063829787, "acc_norm_stderr": 0.028957342788342347 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.24561403508771928, "acc_stderr": 0.040493392977481404, "acc_norm": 0.24561403508771928, "acc_norm_stderr": 0.040493392977481404 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.2482758620689655, "acc_stderr": 0.036001056927277716, "acc_norm": 0.2482758620689655, "acc_norm_stderr": 0.036001056927277716 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.24074074074074073, "acc_stderr": 0.0220190800122179, "acc_norm": 0.24074074074074073, "acc_norm_stderr": 0.0220190800122179 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.23015873015873015, "acc_stderr": 0.03764950879790605, "acc_norm": 0.23015873015873015, "acc_norm_stderr": 0.03764950879790605 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.18, "acc_stderr": 0.038612291966536934, "acc_norm": 0.18, "acc_norm_stderr": 0.038612291966536934 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.25483870967741934, "acc_stderr": 0.024790118459332208, "acc_norm": 0.25483870967741934, "acc_norm_stderr": 0.024790118459332208 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.19704433497536947, "acc_stderr": 0.02798672466673622, "acc_norm": 0.19704433497536947, "acc_norm_stderr": 0.02798672466673622 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.22, "acc_stderr": 0.041633319989322695, "acc_norm": 0.22, "acc_norm_stderr": 0.041633319989322695 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.19393939393939394, "acc_stderr": 0.0308741451365621, "acc_norm": 0.19393939393939394, "acc_norm_stderr": 0.0308741451365621 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.3484848484848485, "acc_stderr": 0.033948539651564025, "acc_norm": 0.3484848484848485, "acc_norm_stderr": 0.033948539651564025 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.32124352331606215, "acc_stderr": 0.033699508685490674, "acc_norm": 0.32124352331606215, "acc_norm_stderr": 0.033699508685490674 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.23333333333333334, "acc_stderr": 0.021444547301560476, "acc_norm": 0.23333333333333334, "acc_norm_stderr": 0.021444547301560476 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2851851851851852, "acc_stderr": 0.027528599210340492, "acc_norm": 0.2851851851851852, "acc_norm_stderr": 0.027528599210340492 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.29831932773109243, "acc_stderr": 0.029719142876342856, "acc_norm": 0.29831932773109243, "acc_norm_stderr": 0.029719142876342856 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.2781456953642384, "acc_stderr": 0.03658603262763744, "acc_norm": 0.2781456953642384, "acc_norm_stderr": 0.03658603262763744 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.26788990825688075, "acc_stderr": 0.018987462257978652, "acc_norm": 0.26788990825688075, "acc_norm_stderr": 0.018987462257978652 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4351851851851852, "acc_stderr": 0.03381200005643525, "acc_norm": 0.4351851851851852, "acc_norm_stderr": 
0.03381200005643525 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.2647058823529412, "acc_stderr": 0.0309645179269234, "acc_norm": 0.2647058823529412, "acc_norm_stderr": 0.0309645179269234 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.28270042194092826, "acc_stderr": 0.029312814153955927, "acc_norm": 0.28270042194092826, "acc_norm_stderr": 0.029312814153955927 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.31390134529147984, "acc_stderr": 0.031146796482972465, "acc_norm": 0.31390134529147984, "acc_norm_stderr": 0.031146796482972465 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.2595419847328244, "acc_stderr": 0.03844876139785271, "acc_norm": 0.2595419847328244, "acc_norm_stderr": 0.03844876139785271 }, "harness|hendrycksTest-international_law|5": { "acc": 0.2231404958677686, "acc_stderr": 0.03800754475228733, "acc_norm": 0.2231404958677686, "acc_norm_stderr": 0.03800754475228733 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.25925925925925924, "acc_stderr": 0.042365112580946336, "acc_norm": 0.25925925925925924, "acc_norm_stderr": 0.042365112580946336 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.25153374233128833, "acc_stderr": 0.03408997886857529, "acc_norm": 0.25153374233128833, "acc_norm_stderr": 0.03408997886857529 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.29464285714285715, "acc_stderr": 0.043270409325787296, "acc_norm": 0.29464285714285715, "acc_norm_stderr": 0.043270409325787296 }, "harness|hendrycksTest-management|5": { "acc": 0.17475728155339806, "acc_stderr": 0.037601780060266224, "acc_norm": 0.17475728155339806, "acc_norm_stderr": 0.037601780060266224 }, "harness|hendrycksTest-marketing|5": { "acc": 0.20085470085470086, "acc_stderr": 0.026246772946890488, "acc_norm": 0.20085470085470086, "acc_norm_stderr": 0.026246772946890488 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.23499361430395913, "acc_stderr": 0.01516202415227844, "acc_norm": 0.23499361430395913, "acc_norm_stderr": 0.01516202415227844 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.23699421965317918, "acc_stderr": 0.02289408248992599, "acc_norm": 0.23699421965317918, "acc_norm_stderr": 0.02289408248992599 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.23798882681564246, "acc_stderr": 0.014242630070574915, "acc_norm": 0.23798882681564246, "acc_norm_stderr": 0.014242630070574915 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.23202614379084968, "acc_stderr": 0.024170840879341005, "acc_norm": 0.23202614379084968, "acc_norm_stderr": 0.024170840879341005 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.1864951768488746, "acc_stderr": 0.02212243977248077, "acc_norm": 0.1864951768488746, "acc_norm_stderr": 0.02212243977248077 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.24074074074074073, "acc_stderr": 0.02378858355165854, "acc_norm": 0.24074074074074073, "acc_norm_stderr": 0.02378858355165854 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.2695035460992908, "acc_stderr": 0.026469036818590627, "acc_norm": 0.2695035460992908, "acc_norm_stderr": 0.026469036818590627 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.2529335071707953, "acc_stderr": 0.011102268713839989, "acc_norm": 0.2529335071707953, "acc_norm_stderr": 0.011102268713839989 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.4411764705882353, 
"acc_stderr": 0.030161911930767102, "acc_norm": 0.4411764705882353, "acc_norm_stderr": 0.030161911930767102 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.25, "acc_stderr": 0.01751781884501444, "acc_norm": 0.25, "acc_norm_stderr": 0.01751781884501444 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.21818181818181817, "acc_stderr": 0.03955932861795833, "acc_norm": 0.21818181818181817, "acc_norm_stderr": 0.03955932861795833 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.20408163265306123, "acc_stderr": 0.025801283475090506, "acc_norm": 0.20408163265306123, "acc_norm_stderr": 0.025801283475090506 }, "harness|hendrycksTest-sociology|5": { "acc": 0.24378109452736318, "acc_stderr": 0.03036049015401465, "acc_norm": 0.24378109452736318, "acc_norm_stderr": 0.03036049015401465 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.24, "acc_stderr": 0.04292346959909281, "acc_norm": 0.24, "acc_norm_stderr": 0.04292346959909281 }, "harness|hendrycksTest-virology|5": { "acc": 0.22289156626506024, "acc_stderr": 0.03240004825594687, "acc_norm": 0.22289156626506024, "acc_norm_stderr": 0.03240004825594687 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.3216374269005848, "acc_stderr": 0.03582529442573122, "acc_norm": 0.3216374269005848, "acc_norm_stderr": 0.03582529442573122 }, "harness|truthfulqa:mc|0": { "mc1": 0.24112607099143207, "mc1_stderr": 0.014974827279752334, "mc2": 0.3762297840067963, "mc2_stderr": 0.01445991036363257 }, "harness|winogrande|5": { "acc": 0.5153906866614049, "acc_stderr": 0.014045826789783668 }, "harness|gsm8k|5": { "acc": 0.006823351023502654, "acc_stderr": 0.0022675371022544823 } } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__math_gpt2_sft) | Metric |Value| |---------------------------------|----:| |Avg. |28.03| |AI2 Reasoning Challenge (25-Shot)|22.87| |HellaSwag (10-Shot) |30.41| |MMLU (5-Shot) |25.06| |TruthfulQA (0-shot) |37.62| |Winogrande (5-shot) |51.54| |GSM8k (5-shot) | 0.68|
Sao10K/14B-Glacier-Stack
Sao10K
2024-03-06T23:57:03Z
762
5
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-04T16:46:38Z
--- license: cc-by-nc-4.0 --- experimental frankenmerge type shit I'm using this as a baseline vs a further finetuned on frankenmerge later on filler text: Earlier today I was really horny, and I saw what I thought to be a blank DVD. I thought, DVDs have a tight hole, they might feel pretty good. So I put my soft pp into the hole of the DVD, and for a few seconds as I started getting harder, it felt pretty good, but then, once I was fully erect, it started being painful. My pp was stuck in the DVD, and I had to break it in half to get if out. It was then when I flipped the broken DVD over and realized that it was not a blank DVD, but a copy of the Pixar movie Up.. Well guys, guess I fucked up.
Abhaykoul/Qwen1.5-0.5B-vortex
Abhaykoul
2024-03-12T04:06:42Z
762
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "en", "dataset:OEvortex/vortex-mini", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-11T00:50:05Z
--- license: other license_name: tongyi-qianwen-research license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE datasets: - OEvortex/vortex-mini pipeline_tag: text-generation language: - en --- # Qwen1.5-0.5B-vortex model card Qwen1.5-0.5B-vortex is a dealigned chat finetune of the original fantastic Qwen1.5-0.5B model by the Qwen team. This model was trained on the Vortex mini dataset using axolotl for 5 epoch | Model | Avg | ARC (25-Shot) | HellaSwag (10-Shot) | MMLU (5-Shot) | TruthfulQA (0-shot) | Winogrande (5-shot) | GSM8k (5-shot) | |-|-|-|-|-|-|-|-| | OWenL/Qwen1.5-0.5B | 38.62 | 31.48 | 49.05 | 39.35 | 38.3 | 57.22 | 16.3 | | AbhayKoul/Qwen1.5-0.5B-vortex | 38.15 | 31.74 | 47.78 | 38.44 | 38.92 | 56.51 | 15.54 | | M4-ai/tau-0.5B | 36.68 | 29.27 | 47.43 | 37.53 | 39.39 | 56.83 | 9.63 | | M4-ai/tau-0.5B | 36.65 | 29.01 | 47.45 | 37.44 | 39.39 | 56.83 | 9.78 | | Qwen/Qwen1.5-0.5B-Chat | 35.61 | 30.55 | 44.07 | 33.82 | 42.95 | 54.62 | 7.66 | | M4-ai/tau-0.5B-instruct-DPOR | 35.54 | 28.92 | 43.63 | 33.92 | 42.73 | 57.06 | 6.97 | | sail/SailorOW-0.5B-Chat | 33.47 | 30.38 | 45.51 | 26.73 | 39.85 | 56.51 | 1.82 | | sail/SailorOW-0.5B | 33.05 | 29.69 | 45.82 | 25.62 | 40.76 | 55.33 | 1.06 | | sail/SailorOW-0.5B | 33.03 | 29.69 | 45.82 | 25.13 | 40.74 | 55.56 | 1.21 |
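## Usage (sketch)

A minimal way to try the model with `transformers`, assuming the repository's tokenizer ships a usable chat template (the exact prompt format used during fine-tuning is not documented in this card), is shown below. The user message and sampling settings are illustrative only.

```python
# Minimal usage sketch (assumption: the tokenizer bundles a chat template;
# adjust the prompt construction if it does not).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Abhaykoul/Qwen1.5-0.5B-vortex"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Explain in one sentence what a language model is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```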
Artples/LAI-ImageGeneration-vSDXL-2
Artples
2024-04-01T06:15:25Z
762
5
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-04-01T06:04:44Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- A realistic 3D rendering of a mysterious, ancient artifact. The artifact appears to be made from a mix of gold and stone, featuring intricate carvings and symbols that suggest a lost civilization. It sits on a pedestal in a dimly lit room, casting shadows on the walls that hint at its complex shape. The atmosphere is filled with a sense of wonder and ancient power, inviting the viewer to speculate about its origins and purpose. output: url: images/c17abed6-d041-4330-9791-a8e09f619c0f.png - text: a cute robot artist painting on an easel, concept art output: url: images/cd51e756-cdde-49bc-b907-2f45d9079cc7.png - text: neon holography crystal cat output: url: images/7323a470-e3b5-4c03-8431-8e5de6f65f39.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: null license: openrail++ --- # LAI-ImageGeneration-v2 <Gallery /> ## Model description Trained on the DS-2 Dataset, without a Triggerword ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/Artples/LAI-ImageGeneration-vSDXL-2/tree/main) them in the Files & versions tab.
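## Usage (sketch)

For readers who prefer loading the adapter in code rather than downloading the files, here is a minimal `diffusers` sketch. It assumes the LoRA safetensors file can be resolved automatically from this repository; if it cannot, pass `weight_name=...` explicitly. The prompt is taken from the widget examples above.

```python
# Minimal sketch (assumption: the LoRA loads directly from the Hub repo id).
import torch
from diffusers import DiffusionPipeline

# Base model named in the card.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the LoRA weights from this repository.
pipe.load_lora_weights("Artples/LAI-ImageGeneration-vSDXL-2")

prompt = "a cute robot artist painting on an easel, concept art"  # widget example prompt
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("robot_artist.png")
```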
blaze2004/llama-3-8b-Instruct-bnb-4bit-hope
blaze2004
2024-06-24T21:21:14Z
762
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-24T15:29:11Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** blaze2004 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
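The card only describes how the model was trained, so here is a minimal, hedged loading sketch with `transformers`. It assumes the uploaded weights behave like a standard Llama-3-Instruct checkpoint and that the tokenizer carries the Llama-3 chat template; neither detail is stated explicitly above.

```python
# Minimal sketch (assumptions: standard Llama-3-Instruct-style checkpoint and
# chat template; dtype/device settings are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "blaze2004/llama-3-8b-Instruct-bnb-4bit-hope"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello! What can you help me with?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```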
rollerhafeezh-amikom/fire-silvanus-quantization
rollerhafeezh-amikom
2024-06-28T12:15:21Z
762
0
transformers
[ "transformers", "onnx", "xlm-roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-28T12:15:04Z
Entry not found
aubmindlab/bert-large-arabertv02
aubmindlab
2023-08-03T12:31:29Z
761
6
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "bert", "fill-mask", "ar", "dataset:wikipedia", "dataset:Osian", "dataset:1.5B-Arabic-Corpus", "dataset:oscar-arabic-unshuffled", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ar datasets: - wikipedia - Osian - 1.5B-Arabic-Corpus - oscar-arabic-unshuffled widget: - text: " عاصمة لبنان هي [MASK] ." --- # AraBERT v1 & v2: Pre-training BERT for Arabic Language Understanding <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/> **AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup) There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html). We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) and other state-of-the-art models (*to the extent of our knowledge*). The tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL) # AraBERTv2 ## What's New! AraBERT now comes in 4 new variants to replace the old v1 versions: More details are in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2) Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) | ---|:---:|:---:|:---:|:---: AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B | AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B | AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B | AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B | AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B | AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B | All models are available on the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Better Pre-Processing and New Vocab We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuation marks and numbers that were still attached to words when the wordpiece vocab was learned. We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learned using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.
**P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing dunction **Please read the section on how to use the [preprocessing function](#Preprocessing)** ## Bigger Dataset and More Compute We used ~3.5 times more data, and trained for longer. For Dataset Sources see the [Dataset Section](#Dataset) Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) | ---|:---:|:---:|:---:|:---:|:---:|:---: AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | - AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | 7 AraBERTv2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | - AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | 7 AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4 # Dataset The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Preprocessing It is recommended to apply our preprocessing function before training/testing on any dataset. **Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`** ```python from arabert.preprocess import ArabertPreprocessor model_name="bert-large-arabertv02" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) ``` ## Accepted_models ``` bert-base-arabertv01 bert-base-arabert bert-base-arabertv02 bert-base-arabertv2 bert-large-arabertv02 bert-large-arabertv2 araelectra-base aragpt2-base aragpt2-medium aragpt2-large aragpt2-mega ``` # TensorFlow 1.x models The TF1.x model are available in the HuggingFace models repo. You can download them as follows: - via git-lfs: clone all the models in a repo ```bash curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash sudo apt-get install git-lfs git lfs install git clone https://huggingface.co/aubmindlab/MODEL_NAME tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz ``` where `MODEL_NAME` is any model under the `aubmindlab` name - via `wget`: - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME. 
- copy the `oid sha256` - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`) # If you used this model, please cite us as: Google Scholar has our BibTeX wrong (missing name), use this instead ``` @inproceedings{antoun2020arabert, title={AraBERT: Transformer-based Model for Arabic Language Understanding}, author={Antoun, Wissam and Baly, Fady and Hajj, Hazem}, booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020}, pages={9} } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
supreethrao/instructNER_ontonotes5_xl
supreethrao
2023-11-27T10:55:17Z
761
1
span-marker
[ "span-marker", "tensorboard", "safetensors", "token-classification", "ner", "named-entity-recognition", "generated_from_span_marker_trainer", "dataset:SpeedOfMagic/ontonotes_english", "model-index", "region:us" ]
token-classification
2023-11-27T10:15:46Z
--- library_name: span-marker tags: - span-marker - token-classification - ner - named-entity-recognition - generated_from_span_marker_trainer datasets: - SpeedOfMagic/ontonotes_english metrics: - precision - recall - f1 widget: - text: Late Friday night, the Senate voted 87 - 7 to approve an estimated $13.5 billion measure that had been stripped of hundreds of provisions that would have widened, rather than narrowed, the federal budget deficit. - text: Among classes for which details were available, yields ranged from 8.78%, or 75 basis points over two - year Treasury securities, to 10.05%, or 200 basis points over 10 - year Treasurys. - text: According to statistics, in the past five years, Tianjin Bonded Area has attracted a total of over 3000 enterprises from 73 countries and regions all over the world and 25 domestic provinces, cities and municipalities to invest, reaching a total agreed investment value of more than 3 billion US dollars and a total agreed foreign investment reaching more than 2 billion US dollars. - text: But Dirk Van Dongen, president of the National Association of Wholesaler - Distributors, said that last month's rise "isn't as bad an omen" as the 0.9% figure suggests. - text: Robert White, Canadian Auto Workers union president, used the impending Scarborough shutdown to criticize the U.S. - Canada free trade agreement and its champion, Prime Minister Brian Mulroney. pipeline_tag: token-classification model-index: - name: SpanMarker results: - task: type: token-classification name: Named Entity Recognition dataset: name: Unknown type: SpeedOfMagic/ontonotes_english split: test metrics: - type: f1 value: 0.9077127659574469 name: F1 - type: precision value: 0.9045852107076597 name: Precision - type: recall value: 0.9108620229516947 name: Recall --- # SpanMarker This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [SpeedOfMagic/ontonotes_english](https://huggingface.co/datasets/SpeedOfMagic/ontonotes_english) dataset that can be used for Named Entity Recognition. 
## Model Details ### Model Description - **Model Type:** SpanMarker <!-- - **Encoder:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 256 tokens - **Maximum Entity Length:** 8 words - **Training Dataset:** [SpeedOfMagic/ontonotes_english](https://huggingface.co/datasets/SpeedOfMagic/ontonotes_english) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER) - **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf) ### Model Labels | Label | Examples | |:------------|:-------------------------------------------------------------------------------------------------------| | CARDINAL | "tens of thousands", "One point three million", "two" | | DATE | "Sunday", "a year", "two thousand one" | | EVENT | "World War Two", "Katrina", "Hurricane Katrina" | | FAC | "Route 80", "the White House", "Dylan 's Candy Bars" | | GPE | "America", "Atlanta", "Miami" | | LANGUAGE | "English", "Russian", "Arabic" | | LAW | "Roe", "the Patriot Act", "FISA" | | LOC | "Asia", "the Gulf Coast", "the West Bank" | | MONEY | "twenty - seven million dollars", "one hundred billion dollars", "less than fourteen thousand dollars" | | NORP | "American", "Muslim", "Americans" | | ORDINAL | "third", "First", "first" | | ORG | "Wal - Mart", "Wal - Mart 's", "a Wal - Mart" | | PERCENT | "seventeen percent", "sixty - seven percent", "a hundred percent" | | PERSON | "Kira Phillips", "Rick Sanchez", "Bob Shapiro" | | PRODUCT | "Columbia", "Discovery Shuttle", "Discovery" | | QUANTITY | "forty - five miles", "six thousand feet", "a hundred and seventy pounds" | | TIME | "tonight", "evening", "Tonight" | | WORK_OF_ART | "A Tale of Two Cities", "Newsnight", "Headline News" | ## Evaluation ### Metrics | Label | Precision | Recall | F1 | |:------------|:----------|:-------|:-------| | **all** | 0.9046 | 0.9109 | 0.9077 | | CARDINAL | 0.8579 | 0.8524 | 0.8552 | | DATE | 0.8634 | 0.8893 | 0.8762 | | EVENT | 0.6719 | 0.6935 | 0.6825 | | FAC | 0.7211 | 0.7852 | 0.7518 | | GPE | 0.9725 | 0.9647 | 0.9686 | | LANGUAGE | 0.9286 | 0.5909 | 0.7222 | | LAW | 0.7941 | 0.7297 | 0.7606 | | LOC | 0.7632 | 0.8101 | 0.7859 | | MONEY | 0.8914 | 0.8885 | 0.8900 | | NORP | 0.9311 | 0.9643 | 0.9474 | | ORDINAL | 0.8227 | 0.9282 | 0.8723 | | ORG | 0.9217 | 0.9073 | 0.9145 | | PERCENT | 0.9145 | 0.9198 | 0.9171 | | PERSON | 0.9638 | 0.9643 | 0.9640 | | PRODUCT | 0.6778 | 0.8026 | 0.7349 | | QUANTITY | 0.7850 | 0.8 | 0.7925 | | TIME | 0.6794 | 0.6730 | 0.6762 | | WORK_OF_ART | 0.6562 | 0.6442 | 0.6502 | ## Uses ### Direct Use for Inference ```python from span_marker import SpanMarkerModel # Download from the 🤗 Hub model = SpanMarkerModel.from_pretrained("supreethrao/instructNER_ontonotes5_xl") # Run inference entities = model.predict("Robert White, Canadian Auto Workers union president, used the impending Scarborough shutdown to criticize the U.S. - Canada free trade agreement and its champion, Prime Minister Brian Mulroney.") ``` ### Downstream Use You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> ```python from span_marker import SpanMarkerModel, Trainer # Download from the 🤗 Hub model = SpanMarkerModel.from_pretrained("supreethrao/instructNER_ontonotes5_xl") # Specify a Dataset with "tokens" and "ner_tag" columns dataset = load_dataset("conll2003") # For example CoNLL2003 # Initialize a Trainer using the pretrained model & dataset trainer = Trainer( model=model, train_dataset=dataset["train"], eval_dataset=dataset["validation"], ) trainer.train() trainer.save_model("supreethrao/instructNER_ontonotes5_xl-finetuned") ``` </details> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:----------------------|:----|:--------|:----| | Sentence length | 1 | 18.1647 | 210 | | Entities per sentence | 0 | 1.3655 | 32 | ### Training Hyperparameters - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 32 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework Versions - Python: 3.10.13 - SpanMarker: 1.5.0 - Transformers: 4.35.2 - PyTorch: 2.1.1 - Datasets: 2.15.0 - Tokenizers: 0.15.0 ## Citation ### BibTeX ``` @software{Aarsen_SpanMarker, author = {Aarsen, Tom}, license = {Apache-2.0}, title = {{SpanMarker for Named Entity Recognition}}, url = {https://github.com/tomaarsen/SpanMarkerNER} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
scb10x/typhoon-7b-instruct-02-19-2024
scb10x
2024-05-31T16:30:18Z
761
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:2312.13951", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-20T14:36:35Z
--- license: apache-2.0 pipeline_tag: text-generation --- **Typhoon-0219: Thai Large Language Model (Instruct)** **Typhoon-0219** is a *instruct* Thai 🇹🇭 large language model with 7 billion parameters, and it is based on Typhoon 7B. It is the second-generation instruct model version that serves [opentyphoon.ai](http://opentyphoon.ai/). It is trained on a diverse instruction tuning dataset with more than 1 million rows, similar to [OpenHermes](https://huggingface.co/datasets/teknium/OpenHermes-2.5), and supports using system prompts. ## **Model Description** - **Model type**: A 7B instruct decoder-only model based on Mistral architecture. - **Requirement**: transformers 4.38.0 or newer. - **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧 - **License**: Apache-2.0 ## **Intended Uses & Limitations** This model is an instructional model. However, it’s still undergoing development. It incorporates some level of guardrails, but it still may produce answers that are inaccurate, biased, or otherwise objectionable in response to user prompts. We recommend that developers assess these risks in the context of their use case. ## Production Deployment We suggest using the OpenAI-compatible API server from the [vLLM](https://github.com/vllm-project/vllm) project. ```python python -m vllm.entrypoints.openai.api_server --port 8080 --model scb10x/typhoon-7b-instruct-02-19-2024 --max-num-batched-tokens 8192 --max-model-len 8192 --served-model-name typhoon-instruct ``` ## Chat Template We use chatml chat-template. ```python {% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content']}}{% if (loop.last and add_generation_prompt) or not loop.last %}{{ '<|im_end|>' + '\n'}}{% endif %}{% endfor %} {% if add_generation_prompt and messages[-1]['role'] != 'assistant' %}{{ '<|im_start|>assistant\n' }}{% endif %} ``` ## **Follow us** **https://twitter.com/opentyphoon** ## **Support** **https://discord.gg/CqyBscMFpg** ## **SCB10X AI Team** - Kunat Pipatanakul, Potsawee Manakul, Sittipong Sripaisarnmongkol, Pathomporn Chokchainant, Kasima Tharnpipitchai - If you find Typhoon useful for your work, please cite it using: ``` @article{pipatanakul2023typhoon, title={Typhoon: Thai Large Language Models}, author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai}, year={2023}, journal={arXiv preprint arXiv:2312.13951}, url={https://arxiv.org/abs/2312.13951} } ``` ## **Contact Us** - General & Collaboration: **[[email protected]](mailto:[email protected])**, **[[email protected]](mailto:[email protected])** - Technical: **[[email protected]](mailto:[email protected])**
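As a companion to the vLLM deployment command above, the following is a hedged sketch of querying that local OpenAI-compatible endpoint. The host, port, and served model name follow the command in this card; the client code itself, the system prompt, and the user message are illustrative assumptions, and the server must have the ChatML template available (from the tokenizer or via `--chat-template`) for the chat endpoint to work.

```python
# Minimal client sketch for the vLLM server started above
# (assumed setup: server on localhost:8080, served name "typhoon-instruct").
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="typhoon-instruct",
    messages=[
        {"role": "system", "content": "You are a helpful Thai assistant."},
        {"role": "user", "content": "Suggest three places to visit in Bangkok."},
    ],
    temperature=0.3,
    max_tokens=256,
)
print(response.choices[0].message.content)
```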
Kukedlc/NeuralMarioMonarch-7B-slerp
Kukedlc
2024-05-25T17:09:31Z
761
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mlabonne/Monarch-7B", "vanillaOVO/supermario_v4", "base_model:mlabonne/Monarch-7B", "base_model:vanillaOVO/supermario_v4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-23T03:57:49Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - mlabonne/Monarch-7B - vanillaOVO/supermario_v4 base_model: - mlabonne/Monarch-7B - vanillaOVO/supermario_v4 model-index: - name: NeuralMarioMonarch-7B-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.81 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralMarioMonarch-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.04 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralMarioMonarch-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.61 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralMarioMonarch-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 74.97 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralMarioMonarch-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 85.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralMarioMonarch-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralMarioMonarch-7B-slerp name: Open LLM Leaderboard --- # NeuralMarioMonarch-7B-slerp NeuralMarioMonarch-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B) * [vanillaOVO/supermario_v4](https://huggingface.co/vanillaOVO/supermario_v4) ## 🧩 Configuration ```yaml slices: - sources: - model: mlabonne/Monarch-7B layer_range: [0, 32] - model: vanillaOVO/supermario_v4 layer_range: [0, 32] merge_method: slerp base_model: mlabonne/Monarch-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Kukedlc/NeuralMarioMonarch-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, 
device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuralMarioMonarch-7B-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |76.11| |AI2 Reasoning Challenge (25-Shot)|73.81| |HellaSwag (10-Shot) |89.04| |MMLU (5-Shot) |64.61| |TruthfulQA (0-shot) |74.97| |Winogrande (5-shot) |85.00| |GSM8k (5-shot) |69.22|
xdatasi/antares-7b-slovenian
xdatasi
2024-03-17T00:34:41Z
761
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama2", "Mixtral", "Slovenian", "sl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-17T00:15:46Z
--- library_name: transformers license: apache-2.0 language: - sl - en tags: - llama2 - Mixtral - Slovenian --- # AntaresAI We introduce Antares-7b-slovenian, an instruction-tuned and aligned model based on Mixtral-8x7B-v0.1 and Llama-2-70b-hf, fine-tuned for the Slovenian language. Please refer to the evaluation results table for details. # **Instruction Fine-tuning Strategy** We utilize state-of-the-art instruction fine-tuning methods, including supervised fine-tuning (SFT) and direct preference optimization (DPO). # **Data Contamination Test Results** Results will be updated soon. # **Evaluation Results** Results will be updated soon. ### **Contact Us** ### Questions and suggestions are welcome in the Discussions tab.
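## Usage (sketch)

Since the card does not yet show how to run the model, here is a minimal, hedged `transformers` sketch. No instruction template is documented, so plain free-form prompting is assumed; the Slovenian prompt is only an example.

```python
# Minimal sketch (assumption: plain causal-LM prompting; no chat template is
# documented in the card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xdatasi/antares-7b-slovenian"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Na kratko opiši Slovenijo:"  # "Briefly describe Slovenia:" (example prompt)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```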
TheDrummer/Moistral-11B-v2-GGUF
TheDrummer
2024-04-28T06:20:50Z
761
12
null
[ "gguf", "not-for-all-audiences", "license:cc-by-nc-4.0", "region:us" ]
null
2024-03-29T15:05:50Z
--- license: cc-by-nc-4.0 tags: - not-for-all-audiences --- # Original Model Card https://huggingface.co/TheDrummer/Moistral-11B-v2 # Moistral 11B v2 💦💦 *The moistest AI just got moistier!* *An eRP model capable of blasting you with a rich and refreshing vocabulary of moist. Finetuned by yours truly.* ![image/gif](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/GIEX3AAl2GmlKgJCvTWbl.gif) ## What's New with v2? - Trained with an even larger dataset of 8K moist. - Properly trained to generate long responses. - Rebalanced the genres and writing perspectives: - Included more romance, "family", fantasy, "diversity", science fiction, and many more that I can't make euphemisms for! - Better, more balanced representation of male and female pespectives! - Sanitized the heck out of the dataset: - no special characters like � & … - no overly long ellipsis like "........." - no author notes like "To be continued" - no inconsistent [quotation marks](https://typographyforlawyers.com/straight-and-curly-quotes.html) - Less GPTisms! - Human written stories from decades past. - Zero tolerance for *ministration*. Not one instance of *that word* after going through 4K of moist. - Better formatting with Alpaca. - Improved Chat / Instruct modes. ## Too moist? Worry not! We'll release a set of merges to dehumidify the model soon. ![image/gif](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/HLGjBd7_EAVgE-WAleQ8O.gif) ## Samples of Moistral v2 <details> <summary>Moistral Sample (censored) Generation: 512</summary> The princess instinctively took a step back from the throne, horrified by the mocking laugh as if she were hearing an evil joke. It was then that Princess Lilia realized the true extent of her danger - and how pointless her resistance would be. At last, the loud, wild laughter died down in the echoing chamber. Dik the sorcerer prince paused for a moment, straightened his clothes, cast one glance at the stunned princess, and then turned to speak to the crowd waiting on the other side of the palace wing. He spoke with a voice that filled the air with arrogance, confidence, and cruelty. "Ladies and gentlemen, I am pleased to welcome our honored guests; the Royal Family of Iliac! In these unusual times, when the country we fought is now under my control, it will be up to me to decide what fate awaits the former ruling family. But before I reveal my plan for them, I think I should explain why I fought King Edmund Ruhr." When he finished speaking, he gave Princess Lilia another disturbing smirk. She couldn't help but shiver and take a step back. Dik approached her without haste, his eyes fixed on her. His hands slowly caressed her cheekbone and neckline, leaving faint traces of warmth where they touched. The princess opened her mouth, wanting to protest this act of intimacy, but soon felt herself getting ███. "You will not only serve as an envoy, dearest Lilia, but also as a ███ slave. After all, what better use could such a beautiful woman have? Tell me, did you bring your maidservant along?" Princess Lilia was about to answer when she heard footsteps approaching. Suddenly, a young woman dressed in the Iliac style appeared behind her, her eyes wide with fear. Her face was slightly pale, and there were faint bags under her eyes. "Your Majesty, this is my half-sister Helena." As soon as she said those words, Princess Lilia's heart sank even deeper than it had already sunk at Dik's arrogant mockery. 
"She is here because she accompanies me as a maidservant." The sorcerer prince looked at Helena with interest, and Princess Lilia felt herself blush. </details> <details> <summary>Moistral Sample 2 (censored) Generation: 512</summary> Sarah was feeling slightly woozy as she entered the bar and walked towards Christina, who was behind the counter. "I'm here!" said Sarah. Christina looked at her watch. "You're late, again." "Sorry? I got held up by a customer." Christina rolled her eyes. "Did he at least offer you a drink?" Sarah blushed. "No. He just wanted to talk..." Christina sighed. "Well, get ready. I need you on the floor soon." She turned away from Sarah and started talking to someone who had come into the bar. Sarah went into the employee room, stripped off her clothes, and put on her bartending uniform - a skimpy white tank top and black miniskirt. She checked herself in the mirror. Her long, dark hair cascaded down her back onto bare shoulders. She poked out her small ███████ proudly. The skirt exposed most of her long, slender legs. She looked like every ███ fantasy, and she knew it. Men would ogle her regardless of how she behaved. She took a deep breath and entered the bar. The first thing Sarah noticed was the dimness. It wasn't darkness - the lights were on - but they were dimmed enough that everything had an intimate mood. It reminded her of candlelight. The tables were round and close together, giving the whole place a sense of intimacy. There were also long, shiny red curtains on some of the walls that gave the place an air of sensuality. The next thing she noticed was the noise. There was music playing, but it was only loud enough to be drowned out by all the conversations happening at the same time. The hum of many voices combined with the faint background sound of the music created a rich, layered atmosphere. It was like being in an orchestra. Finally, she took notice of the people. They were all dressed in simple, casual clothes. Some of them were couples seated together, whispering quietly and intimately. Others were groups of friends laughing and chatting. Everyone seemed comfortable and at ease. Sarah felt a rush of adrenaline as she made her way through the crowd towards Christina. As she got closer, she could hear the lively conversation. </details> <details> <summary>Moistral Sample 3 (censored) Instruct Mode, Generation: 512</summary> {{[INPUT]}} You order black coffee {{[OUTPUT]}} "Black coffee," you say. "One minute," Audrey replies, spinning around to give you a nice view of her ███ in those brown booty pants. You watch her round ███ cheeks jiggle as she walks away towards the espresso machine, thighs separated enough that you can see just a hint of █████ lips peeking out from underneath. She starts chatting with a blonde girl at the register, whose gaze keeps darting down to Audrey's ███. As they converse, you start taking in the details of Audrey, eyeing her █████ and the way her shirt rides up just enough to show the bottom of her toned stomach. Her █████████ is noticable, but beneath a tight black ███ that matches her ████████. As she turns back to you with two black coffees, you are barely paying attention, lost in your own thoughts. The sound of Audrey setting one of the mugs down on the table startles you. "Here you go," she says cheerfully. "Enjoy! There's a card at the bottom of the bag with our online ordering system, plus we do refills for free!" "Thanks," you say absentmindedly. 
Not really paying attention, you take a sip of the coffee and are pleasantly surprised by the taste. "So, here's my deal," you say, putting down the cup. "I work for a company that does surveys. We come into businesses like this one and ask customers questions, collect data, all that stuff." Audrey perks up from behind the counter. "Oh! I think I understand. I mean, that makes sense why you would be asking me earlier about what I had to drink. Anyway, as long as you're not trying to get my personal info or anything like that, sounds good!" You smile reassuringly. "That's exactly what it is. Rest assured, everything you tell me will remain between you and I. Well, and my boss, but he won't know who you are. Hipsters like you are hard to track down after all." You chuckle at your joke, though you're not sure if it landed. "Right, well, that works for me," Audrey replies, eyeing you warily. "As long as it stays between us, then it's fine." {{[INPUT]}}See her privately {{[OUTPUT]}} "Okay, but I need to see you in private." ... </details> <details> <summary>Moistral Sample 4 (censored) Instruct Mode, Generation: 512</summary> {{[INPUT]}} █████ with him {{[OUTPUT]}} Chloe feels a surge of heat rushing through her body. The voice inside her head whispers: "Be █████ and ████████ with Paul." But Chloe still feels nervous about what she is about to do. It's just so unlike her. Chloe closes her eyes and concentrates on the feeling of warmness flowing throughout her veins. Slowly, she opens them again, revealing a new side of herself - one she rarely shows anyone. She leans forward slightly, pushing her ██████ together, accentuating their size and shape. Her lips curl into a mischievous smile. "Hey Paul," she says in a sultry voice. "I was wondering if you could help me?" Paul looks at her, his gaze wandering over her body. He can't help but smile as he sees the way she's leaning forward, pressing her ██████ together. "Of course, Chloe," he replies, his voice thick with desire. "What's your problem?" Chloe bites her lip █████ively before leaning even closer, until their bodies are almost touching. She reaches out to trace a finger along his chest, her touch light and teasing. "It's just that I'm having trouble understanding some of the concepts," she murmurs, her voice low and sultry. "Can you help me? Please?" Paul's hands move slowly towards Chloe, his fingers grazing her hips lightly. "Well," he says, his voice filled with lust, "I'd be happy to help. Let's go somewhere more private where we can discuss things more closely." Chloe giggles and tilts her head to the side, her long hair cascading down her shoulders. "That sounds perfect," she purrs, her hand sliding down his chest until it rests on his ██████. "I think we can both learn a lot from each other." Paul's eyes darken with desire as he feels her hand on his groin. He knows what's about to happen next, and he can't wait for it to begin. Chloe can hardly believe what she's doing - kissing Paul, her teaching assistant, in the hallway after class. But it feels so good, so wicked and indulgent. </details> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/xzX3pwc02JE7s7L2XT7eG.png) ## v1 Description (Still relevant) Moistral 11B is an *aggressive* **finetune** of the legendary [Fimbulvert v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) model. If you haven't heard of that model, then your poor-GPU ass is missing out! It is an amazing substitute for those heavyweight models, and it won't disappoint. 
*(That's coming from someone who daily drives a Venus / Goliath 120B)* ...and if you're looking for something *more moist*, then you've come to the right place. You can expect the same sort of performance here! It's the Honda Civic of eRP models! ### Pimp my ride Turn any old story into a **Moist**ral masterpiece. In fact, we recommend that you leave out any hints of *moist* in your story to _let Moistral cook_. ### Drive it like it's hot When in Instruct Mode, Moistral will treat you like a director of your own fantasy ride. Be a controlling navigator or a passenger princess, it won't matter - you're in for a good ride! <details> <summary>Moistral Sample</summary> <b>ASSISTANT:</b><br/> When the doors to the throne room finally opened, she saw him there - Dik, the sorcerer prince sitting on his throne. He looked majestic, just as Princess Lilia expected. In a robe embroidered with silver yarn and covered with magical runes. His courtiers stood on either side of him, all looking at the princess with interest. Princess Lilia felt her determination leaving her, but pulled herself together and once again reminded herself that she should not be afraid. She approached the throne intending to show the dignity of all Iliac by personal example.<br/> <b>USER:</b><br/> She bows slightly<br/> <b>ASSISTANT:</b><br/> Princess Lilia bowed slightly in front of Dik, the sorcerer prince. Her eyes fell to the floor out of respect, but soon rose again. She would show herself to be a dignified woman!<br/> "I greet you, your highness," Princess Lilia said. "I am here on behalf of my father, King Edmund Ruhr, to serve as a hostage and envoy."<br/> Dik nodded, but didn't say anything.<br/> </details> ### It's got the mileage Moistral has been trained with many long form texts, a nice chunk of which are 8K in length. It is capable of going far and long without passing it back to you. This is not your typical chibi RP model. I have to acknowledge that I'm standing on the shoulders of giants. Thank you Sao for sharing your finetune config along with tips on getting started. Thanks to everyone in the Finetuning channel for entertaining my every question. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/Ll8CA5RR7ugTi72P2HBb8.png)
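Since this repository distributes GGUF quantizations, a minimal `llama-cpp-python` sketch may help readers get started. The quant filename below is a placeholder (check the Files tab for the actual name), and the Alpaca-style prompt simply follows the card's "better formatting with Alpaca" note rather than a documented template.

```python
# Minimal llama-cpp-python sketch. Assumptions: the GGUF filename is a
# placeholder, and the Alpaca-style prompt mirrors the card's note about
# Alpaca formatting, not an official template.
from llama_cpp import Llama

llm = Llama(
    model_path="Moistral-11B-v2.Q4_K_M.gguf",  # placeholder; use the real file from this repo
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

prompt = (
    "### Instruction:\nWrite the opening paragraph of a lighthearted adventure story.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=256, temperature=0.8, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```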
rhysjones/Phi-3-mini-mango-1-llamafied
rhysjones
2024-04-27T11:10:30Z
761
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "nlp", "code", "conversational", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-26T17:39:14Z
--- license: mit license_link: https://huggingface.co/rhysjones/Phi-3-mini-mango-1-llamafied/resolve/main/LICENSE language: - en pipeline_tag: text-generation tags: - nlp - code widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- ## Model Summary The Phi-3-mini-mango-1-llamafied is an instruct finetune of [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) with 4K context and 3.8B parameters. It is a first cut of finetuning Phi-3 (which is a great model!) to explore its properties and behaviour. More to follow. This version of the model has had its weight layers converted to Llama format using @vonjack's [vonjack/Phi-3-mini-4k-instruct-LLaMAfied](https://huggingface.co/vonjack/Phi-3-mini-4k-instruct-LLaMAfied) conversion script, allowing it to run within a llama toolset ecosystem without change or trust_remote_code. It's also interesting to see how resilient the model still is after conversion. The original Phi-3 format weights of this model are available at [rhysjones/Phi-3-mini-mango-1](https://huggingface.co/rhysjones/Phi-3-mini-mango-1).
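As a concrete illustration of the point that the llamafied weights run in a plain Llama toolchain without `trust_remote_code`, here is a minimal, hedged `transformers` sketch. It assumes the repo's tokenizer carries a usable chat template, which the card does not spell out; the user message is the widget example above.

```python
# Minimal sketch: load the llamafied checkpoint with plain transformers
# (no trust_remote_code). Assumption: the tokenizer config provides a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rhysjones/Phi-3-mini-mango-1-llamafied"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```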
SakuraLLM/Sakura-14B-Qwen2beta-v0.9.1-GGUF
SakuraLLM
2024-06-26T14:33:31Z
761
0
null
[ "gguf", "license:cc-by-nc-sa-4.0", "region:us" ]
null
2024-05-16T09:18:15Z
--- license: cc-by-nc-sa-4.0 ---
nitky/Oumuamua-7b-instruct
nitky
2024-06-01T15:55:51Z
761
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "ja", "en", "arxiv:2403.19522", "base_model:nitky/Oumuamua-7b-base", "base_model:stabilityai/japanese-stablelm-base-gamma-7b", "base_model:tokyotech-llm/Swallow-MS-7b-v0.1", "base_model:DataPilot/ArrowPro-7B-KillerWhale", "base_model:nitky/RP-7b-instruct", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-01T10:40:37Z
--- base_model: - nitky/Oumuamua-7b-base - stabilityai/japanese-stablelm-base-gamma-7b - tokyotech-llm/Swallow-MS-7b-v0.1 - DataPilot/ArrowPro-7B-KillerWhale - nitky/RP-7b-instruct - mistralai/Mistral-7B-Instruct-v0.3 - mistralai/Mistral-7B-v0.1 library_name: transformers tags: - mergekit - merge language: - ja - en pipeline_tag: text-generation license: apache-2.0 --- # Oumuamua-7b-instruct This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Output example ``` [INST] <<SYS>> あなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。 <</SYS>> 道路で泣いている子供がいます。その子供には何があったと考えられますか? [/INST] 道路で泣いている子供がいる場合、考えられる理由はいくつかあります。 1. 怪我をした: 転んで怪我をしたり、何かにぶつかって痛みを感じている可能性があります。 2. 迷子になった: 親や保護者とはぐれてしまった、または道に迷って不安になっている可能性があります。 3. お腹が空いた: 長時間食事をしていない、またはお腹が空いているために泣いている可能性があります。 4. 疲れた: 長時間歩いていたり、遊び疲れて疲れている可能性があります。 5. 寂しかった: 一人でいることに寂しさを感じている、または親や友達がいなくて寂しいと感じている可能性があります。 6. 怖い思いをした: 車や不審者に遭遇した、または何かに驚いた可能性があります。 7. 体調が悪い: 発熱や頭痛、吐き気などの体調不良を感じている可能性があります。 8. トイレに行きたい: おむつが濡れていたり、トイレに行きたくて泣いている可能性があります。 9. 何かを求めている: 注意を引きたかったり、助けを求めている可能性があります。 子供の状況をよく観察し、声をかけて話を聞いてみることが大切です。必要に応じて、保護者や警察、救急車を呼ぶなどの対応をする必要があるかもしれません。 ``` ## Test environment This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). I use preset `min_p` and `Null preset` with temperature=0.3 for Generation. ## Usage This format must be adhered to strictly, as deviations may result in less optimal outputs from the model. The template used to construct a prompt for the Instruct model is specified as follows: ``` <s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1}</s>[INST] {USER_MESSAGE_2} [/INST] ``` Please be aware that ``<s>`` and ``</s>`` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while [INST] and [/INST] are considered regular strings. For the "{SYSTEM_PROMPT}" part, We recommend using "あなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。" For the "{USER_MESSAGE_1}" part, We recommend using {instruction}\n{input} In other words, We recommend the following: ``` <s>[INST] <<SYS>>\nあなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。\n<</SYS>>\n\n{instruction1}\n{input1} [/INST] {BOT_MESSAGE_1}</s>[INST] {instruction2}\n{input2} [/INST] ``` ### Use the instruct model ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "nitky/Oumuamua-7b-instruct" model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_name) device = "cuda" messages = [ {"role": "system", "content": "あなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。また考える過程も出力してください。"}, {"role": "user", "content": "道路で泣いている子供がいます。その子供には何があったと考えられますか?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=256, do_sample=True, temperature=0.3) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [nitky/Oumuamua-7b-base](https://huggingface.co/nitky/Oumuamua-7b-base) as a base. 
### Models Merged The following models were included in the merge: * [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b) * [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1) * [DataPilot/ArrowPro-7B-KillerWhale](https://huggingface.co/DataPilot/ArrowPro-7B-KillerWhale) * [nitky/RP-7b-instruct](https://huggingface.co/nitky/RP-7b-instruct) * [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: task_arithmetic base_model: mistralai/Mistral-7B-v0.1 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Mistral-7B-v0.1-VE-Swallow-MS --- merge_method: task_arithmetic base_model: stabilityai/japanese-stablelm-base-gamma-7b models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: japanese-stablelm-base-gamma-7b-VE-Swallow-MS --- merge_method: task_arithmetic base_model: DataPilot/ArrowPro-7B-KillerWhale models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: ArrowPro-7B-KillerWhale-VE-Swallow-MS --- merge_method: task_arithmetic base_model: nitky/RP-7b-instruct models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: RP-7b-instruct-VE-Swallow-MS --- merge_method: task_arithmetic base_model: mistralai/Mistral-7B-Instruct-v0.3 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Mistral-7B-Instruct-v0.3-VE-Swallow-MS --- merge_method: linear models: - model: nitky/Oumuamua-7b-base parameters: weight: 1.0 - model: tokyotech-llm/Swallow-MS-7b-instruct-v0.1 parameters: weight: 0.8 - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: -0.8 - model: Mistral-7B-Instruct-v0.3-VE-Swallow-MS parameters: weight: 0.8 - model: Mistral-7B-v0.1-VE-Swallow-MS parameters: weight: -0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-alpha --- merge_method: linear models: - model: nitky/Oumuamua-7b-base parameters: weight: 1.0 - model: ArrowPro-7B-KillerWhale-VE-Swallow-MS # include Mistral-7B-Instruct-v0.3 parameters: weight: 0.8 - model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS parameters: weight: -0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-beta --- merge_method: linear models: - model: nitky/Oumuamua-7b-base parameters: weight: 1.0 - model: RP-7b-instruct-VE-Swallow-MS parameters: weight: 0.8 - model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS parameters: weight: -0.8 - model: Mistral-7B-Instruct-v0.3-VE-Swallow-MS parameters: weight: 0.8 - model: Mistral-7B-v0.1-VE-Swallow-MS parameters: weight: -0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-gamma --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: Oumuamua-7b-instruct-alpha - model: Oumuamua-7b-instruct-beta - 
model: Oumuamua-7b-instruct-gamma dtype: bfloat16 name: Oumuamua-7b-instruct ```
jrtec/jrtec-gpt2-text-generation-quotes-jonathan-vargas
jrtec
2022-11-07T04:26:10Z
760
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "quotes", "quote", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-11-06T03:21:37Z
--- license: mit tags: - text-generation - quotes - quote - generated_from_trainer model-index: - name: jrtec-gpt2-text-generation-quotes-jonathan-vargas results: [] widget: - text: "life: " example_title: "Life quote" - text: "death: " example_title: "Death quote" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # jrtec-gpt2-text-generation-quotes-jonathan-vargas This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the datasetX dataset. It achieves the following results on the evaluation set: - Loss: 0.7033 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7463 | 1.71 | 500 | 0.7033 | | 0.4281 | 3.41 | 1000 | 0.7084 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
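## Inference example (sketch)

To make the prompt format implied by the widget examples explicit, here is a minimal, hedged inference sketch; the topic-then-colon prompt ("life: ") simply mirrors the widget entries above, and the sampling settings are illustrative.

```python
# Minimal sketch: generate a quote using the topic-prefix prompt format from
# the widget examples ("life: ", "death: ").
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="jrtec/jrtec-gpt2-text-generation-quotes-jonathan-vargas",
)

result = generator("life: ", max_new_tokens=40, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```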
timm/convnextv2_base.fcmae_ft_in1k
timm
2024-02-10T23:29:05Z
760
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2301.00808", "license:cc-by-nc-4.0", "region:us" ]
image-classification
2023-01-05T01:38:03Z
--- license: cc-by-nc-4.0 library_name: timm tags: - image-classification - timm datasets: - imagenet-1k - imagenet-1k --- # Model card for convnextv2_base.fcmae_ft_in1k A ConvNeXt-V2 image classification model. Pretrained with a fully convolutional masked autoencoder framework (FCMAE) and fine-tuned on ImageNet-1k. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 88.7 - GMACs: 15.4 - Activations (M): 28.8 - Image size: train = 224 x 224, test = 288 x 288 - **Papers:** - ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders: https://arxiv.org/abs/2301.00808 - **Original:** https://github.com/facebookresearch/ConvNeXt-V2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('convnextv2_base.fcmae_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnextv2_base.fcmae_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 128, 56, 56]) # torch.Size([1, 256, 28, 28]) # torch.Size([1, 512, 14, 14]) # torch.Size([1, 1024, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnextv2_base.fcmae_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1024, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP. 
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| | [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | | [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | | [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 | | [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | | [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | | [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | | [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 | | [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | | [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | | [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | | [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | | [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 | | [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | | 
[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | | [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | | [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | | [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | | [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | | [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | | [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | | [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | | [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | | [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | | [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | | [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | | [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | | [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | | [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | | [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | | [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | | [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | | [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | | [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 
|95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | | [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | | [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | | [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | | [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | | [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | | [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | | [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | | [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | | [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | | [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | | [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | ## Citation ```bibtex @article{Woo2023ConvNeXtV2, title={ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders}, author={Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon and Saining Xie}, year={2023}, journal={arXiv preprint arXiv:2301.00808}, } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
mykor/gpt2-ko
mykor
2023-02-24T05:25:22Z
760
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "gpt2", "text-generation", "ko", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-02-24T04:44:34Z
---
license: mit
language:
- ko
pipeline_tag: text-generation
widget:
- text: 딥러닝 모델은
---

# gpt2-ko

Korean GPT-2 model, trained from scratch.

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="mykor/gpt2-ko")
```

```python
>>> pipe("오늘 점심 뭐먹지?")
[{'generated_text': '오늘 점심 뭐먹지?치킨과 족발 먹으려고 ㅎ난 치킨먹구싶당 ㅎㅎ나 낼 아침에 먹을겡 ㅎ치킨 먹고시퍼 ㅎㅎ난 치킨에닭도리탕..난 닭도리탕~난 치킨먹었어 ㅎ치킨은 족'}]
```

```python
>>> pipe("애플은 이번 업데이트를 통해")
[{'generated_text': "애플은 이번 업데이트를 통해 안드로이드 플랫폼 내에서 '모바일 카드'를 판매할 예정'이라며 '기존에는 안드로이드 마켓 내에서만 결제가 가능했다.앞으로는 pc를 통해 결제할 수 있을 것'이라고 덧붙였다.한편, sk텔레콤은 이달 초에도 '갤럭시 s8"}]
```
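If you prefer to load the tokenizer and model explicitly instead of using `pipeline`, a minimal sketch is shown below (the sampling settings are illustrative, not tuned values from the model author):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mykor/gpt2-ko")
model = AutoModelForCausalLM.from_pretrained("mykor/gpt2-ko")

inputs = tokenizer("오늘 점심 뭐먹지?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,   # illustrative value
    do_sample=True,
    top_p=0.95,
    temperature=0.8,     # illustrative value
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```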
MBZUAI/LaMini-Cerebras-590M
MBZUAI
2023-04-28T13:08:13Z
760
7
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "arxiv:2304.14402", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-04-12T06:23:08Z
--- license: cc-by-nc-4.0 language: - en pipeline_tag: text-generation widget: - text: >- Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: how can I become more healthy? ### Response: example_title: example --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a> </p> # LaMini-Cerebras-590M [![Model License](https://img.shields.io/badge/Model%20License-CC%20By%20NC%204.0-red.svg)]() This model is one of our LaMini-LM model series in paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)". This model is a fine-tuned version of [cerebras/Cerebras-GPT-590M](https://huggingface.co/cerebras/Cerebras-GPT-590M) on [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/). You can view other models of LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper. <table> <thead> <tr> <th>Base model</th> <th colspan="4">LaMini-LM series (#parameters)</th> </tr> </thead> <tbody> <tr> <td>T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td> <td></td> </tr> <tr> <td>Flan-T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td> <td></td> </tr> <tr> <td>Cerebras-GPT</td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td> </tr> <tr> <td>GPT-2</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td> <td></td> 
</tr> <tr> <td>GPT-Neo</td> <td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td> <td></td> <td></td> </tr> <tr> <td>GPT-J</td> <td colspan="4">coming soon</td> </tr> <tr> <td>LLaMA</td> <td colspan="4">coming soon</td> </tr> </tbody> </table> ## Use ### Intended use We recommend using the model to respond to human instructions written in natural language. Since this decoder-only model is fine-tuned with wrapper text, we suggest using the same wrapper text to achieve the best performance. See the example on the right or the code below. We now show you how to load and use our model using HuggingFace `pipeline()`. ```python # pip install -q transformers from transformers import pipeline checkpoint = "{model_name}" model = pipeline('text-generation', model = checkpoint) instruction = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"' input_prompt = f"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:" generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text'] print("Response", generated_text) ``` ## Training Procedure <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a> </p> We initialize with [cerebras/Cerebras-GPT-590M](https://huggingface.co/cerebras/Cerebras-GPT-590M) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 590M. ### Training Hyperparameters ## Evaluation We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper](). ## Limitations More information needed # Citation ```bibtex @article{lamini-lm, author = {Minghao Wu and Abdul Waheed and Chiyu Zhang and Muhammad Abdul-Mageed and Alham Fikri Aji }, title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions}, journal = {CoRR}, volume = {abs/2304.14402}, year = {2023}, url = {https://arxiv.org/abs/2304.14402}, eprinttype = {arXiv}, eprint = {2304.14402} } ```
NHNDQ/nllb-finetuned-ko2en
NHNDQ
2023-07-21T06:37:06Z
760
10
transformers
[ "transformers", "pytorch", "safetensors", "m2m_100", "text2text-generation", "translation", "ko", "en", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-05-15T00:55:10Z
---
license: cc-by-4.0
language:
- ko
- en
tags:
- translation
---

## Model Details

* Model Description: Fine-tuned facebook/nllb-200-distilled-600M model
* Developed by: Juhwan Lee, Jisu Kim, TakSung Heo, and Minsu Jeong
* Model Type: Translation
* Language(s):
  * Source Language: Korean
  * Target Language: English
* License: CC-BY-4.0

## Dataset

* [AI-hub dataset](https://www.aihub.or.kr/)

## BLEU Score

* Before fine-tuning: 26.58
* After fine-tuning: 55.36

## Uses

This model can be used for Korean-to-English translation and other text-to-text generation tasks (see the usage sketch at the end of this card).

## Data Augmentation with back-translation

You can perform Korean data augmentation via back-translation with the Python package [KoTAN](https://github.com/KoJLabs/KoTAN/tree/main).
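## Usage Sketch

A minimal Korean-to-English translation example with the Hugging Face Transformers `pipeline`. The NLLB/FLORES-200 language codes `kor_Hang` and `eng_Latn` are assumed here, as is typical for NLLB-based checkpoints; adjust them if this checkpoint expects different codes:

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="NHNDQ/nllb-finetuned-ko2en",
    src_lang="kor_Hang",   # Korean (Hangul) -- assumed NLLB code
    tgt_lang="eng_Latn",   # English (Latin script) -- assumed NLLB code
    max_length=200,
)

print(translator("오늘 날씨가 정말 좋네요.")[0]["translation_text"])
```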
timm/efficientvit_b1.r288_in1k
timm
2023-11-21T21:43:58Z
760
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2205.14756", "license:apache-2.0", "region:us" ]
image-classification
2023-08-18T22:44:58Z
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for efficientvit_b1.r288_in1k

An EfficientViT (MIT) image classification model. Trained on ImageNet-1k by paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 9.1
  - GMACs: 0.9
  - Activations (M): 12.0
  - Image size: 288 x 288
- **Papers:**
  - EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction: https://arxiv.org/abs/2205.14756
- **Original:** https://github.com/mit-han-lab/efficientvit
- **Dataset:** ImageNet-1k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('efficientvit_b1.r288_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'efficientvit_b1.r288_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 32, 72, 72])
    #  torch.Size([1, 64, 36, 36])
    #  torch.Size([1, 128, 18, 18])
    #  torch.Size([1, 256, 9, 9])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'efficientvit_b1.r288_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 256, 9, 9) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Citation
```bibtex
@article{cai2022efficientvit,
  title={EfficientViT: Enhanced linear attention for high-resolution low-computation visual recognition},
  author={Cai, Han and Gan, Chuang and Han, Song},
  journal={arXiv preprint arXiv:2205.14756},
  year={2022}
}
```
TheBloke/MythoLogic-Mini-7B-GGUF
TheBloke
2023-09-27T12:47:16Z
760
5
transformers
[ "transformers", "gguf", "llama", "en", "base_model:Gryphe/MythoLogic-Mini-7b", "license:other", "text-generation-inference", "region:us" ]
null
2023-09-04T23:42:13Z
--- language: - en license: other model_name: Mythologic Mini 7B base_model: Gryphe/MythoLogic-Mini-7b inference: false model_creator: Gryphe model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mythologic Mini 7B - GGUF - Model creator: [Gryphe](https://huggingface.co/Gryphe) - Original model: [Mythologic Mini 7B](https://huggingface.co/Gryphe/MythoLogic-Mini-7b) <!-- description start --> ## Description This repo contains GGUF format model files for [Gryphe's Mythologic Mini 7B](https://huggingface.co/Gryphe/MythoLogic-Mini-7b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GGUF) * [Gryphe's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Gryphe/MythoLogic-Mini-7b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Gryphe's Mythologic Mini 7B](https://huggingface.co/Gryphe/MythoLogic-Mini-7b). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mythologic-mini-7b.Q2_K.gguf](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GGUF/blob/main/mythologic-mini-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [mythologic-mini-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GGUF/blob/main/mythologic-mini-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [mythologic-mini-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GGUF/blob/main/mythologic-mini-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [mythologic-mini-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GGUF/blob/main/mythologic-mini-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [mythologic-mini-7b.Q4_0.gguf](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GGUF/blob/main/mythologic-mini-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mythologic-mini-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GGUF/blob/main/mythologic-mini-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [mythologic-mini-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GGUF/blob/main/mythologic-mini-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [mythologic-mini-7b.Q5_0.gguf](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GGUF/blob/main/mythologic-mini-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mythologic-mini-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GGUF/blob/main/mythologic-mini-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [mythologic-mini-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GGUF/blob/main/mythologic-mini-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [mythologic-mini-7b.Q6_K.gguf](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GGUF/blob/main/mythologic-mini-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [mythologic-mini-7b.Q8_0.gguf](https://huggingface.co/TheBloke/MythoLogic-Mini-7B-GGUF/blob/main/mythologic-mini-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/MythoLogic-Mini-7B-GGUF and below it, a specific filename to download, such as: mythologic-mini-7b.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/MythoLogic-Mini-7B-GGUF mythologic-mini-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/MythoLogic-Mini-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MythoLogic-Mini-7B-GGUF mythologic-mini-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m mythologic-mini-7b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
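### How to load this model from Python using llama-cpp-python

A minimal sketch (not an official example; it assumes `llama-cpp-python` >= 0.2 is installed and that the GGUF file has already been downloaded locally as described above):

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 if you have no GPU acceleration.
llm = Llama(
    model_path="./mythologic-mini-7b.Q4_K_M.gguf",
    n_ctx=4096,       # context length
    n_gpu_layers=32,  # adjust for your hardware
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a story about llamas\n\n### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```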
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/MythoLogic-Mini-7B-GGUF", model_file="mythologic-mini-7b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Gryphe's Mythologic Mini 7B ## Model details MythoLogic-Mini-7b can be considered the little brother in my Mytho series of models: [MythoLogic-13b](https://huggingface.co/Gryphe/MythoLogic-13b) and [MythoBoros-13b](https://huggingface.co/Gryphe/MythoBoros-13b)). Its Llama-2 core is powered by [Nous Hermes-2](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b), which is further augmented by [Stable Beluga](https://huggingface.co/stabilityai/StableBeluga-7B) and a carefully distilled [Kimiko LoRa](https://huggingface.co/nRuaif/Kimiko_7B). Since 7B models tend to be less capable all-rounders, more emphasis was put on improving the roleplaying aspects for this gradient merge, of which various gradients were benchmarked before settling on the configuration shown below. ![](MythoLogic-Mini-7b.png) In technical terms, the Hermes-2 core starts at 90% strength before fading away completely at the 12th layer level, where Stable Beluga (and Kimiko) handle the more intricate linguistic aspects. Quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoLogic-Mini-7b-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoLogic-Mini-7b-GPTQ) (You're the best!) ## Prompt Format Due to its Hermes-2 core this model works best with Alpaca formatting, so for optimal model performance, use: ``` <System prompt/Character Card> ### Instruction: Your instruction or question here. For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only. ### Response: ``` <!-- original-model-card end -->
circulus/canvers-ko2en-v2
circulus
2024-01-03T00:23:17Z
760
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-03T00:20:19Z
---
license: gpl-3.0
---
hon9kon9ize/CantoneseLLM-6B-preview202402
hon9kon9ize
2024-03-04T12:13:55Z
760
7
transformers
[ "transformers", "safetensors", "llama", "text-generation", "yue", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-08T21:26:40Z
--- language: - yue license: other license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE pipeline_tag: text-generation model-index: - name: CantoneseLLM-6B-preview202402 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 55.63 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 75.8 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.07 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 42.26 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 30.71 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402 name: Open LLM Leaderboard --- # CantoneseLLM This model is further pre-trained model based on [01-ai/Yi-6B](https://huggingface.co/01-ai/Yi-6B) with 800M tokens of Cantonese text compiled from various sources, including translated zh-yue Wikipedia, translated RTHK news [datasets/jed351/rthk_news](https://huggingface.co/datasets/jed351/rthk_news), Cantonese filtered CC100 and Cantonese textbooks generated by Gemini Pro. This is a preview version, for experimental use only, we will use it to fine-tune on downstream tasks and evaluate the performance. ### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_hon9kon9ize__CantoneseLLM-6B-preview202402) | Metric | Value | |-----------------------|---------------------------| | Avg. 
| 56.93 | | ARC (25-shot) | 55.63 | | HellaSwag (10-shot) | 75.8 | | MMLU (5-shot) | 63.07 | | TruthfulQA (0-shot) | 42.26 | | Winogrande (5-shot) | 74.11 | | GSM8K (5-shot) | 30.71 |

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("hon9kon9ize/CantoneseLLM-6B-preview202402")
model = AutoModelForCausalLM.from_pretrained("hon9kon9ize/CantoneseLLM-6B-preview202402").to('cuda:0')

prompt = "歷經三年疫情,望穿秋水終於全面復常,隨住各項防疫措施陸續放寬以至取消,香港"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to('cuda:0')

max_length = 200     # example value
temperature = 0.9    # example value
output = model.generate(input_ids, max_length=max_length, num_return_sequences=1, repetition_penalty=1.1, do_sample=True, temperature=temperature, top_k=50, top_p=0.95)
output = tokenizer.decode(output[0], skip_special_tokens=True)

# output: 歷經三年疫情,望穿秋水終於全面復常,隨住各項防疫措施陸續放寬以至取消,香港旅遊業可謂「起死回生」。
# 不過,旅遊業嘅復蘇之路並唔順利,香港遊客數量仍然遠低於疫前水平,而海外旅客亦只係恢復到疫情前約一半。有業界人士認為,當局需要進一步放寬入境檢疫措施,吸引更多國際旅客來港,令旅遊業得以真正復甦。
```

## Limitation and Bias

The model is intended for Cantonese language understanding and generation tasks; it may not be suitable for other Chinese languages. Although it is trained on a diverse range of Cantonese text, including news, Wikipedia, and textbooks, it may not handle informal or dialectal Cantonese well, and it may contain bias and misinformation, so please use it with caution.

We also found that the model is not well trained on up-to-date Hong Kong knowledge; this may be because the corpus is not large enough to override the original model's knowledge. We will continue to improve the model and corpus in the future.
microsoft/wavecoder-ultra-6.7b
microsoft
2024-05-06T13:25:54Z
760
70
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "conversational", "en", "dataset:humaneval", "arxiv:2312.14187", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-11T04:29:56Z
--- license: mit license_link: https://huggingface.co/microsoft/wavecoder-ultra-6.7b/blob/main/LICENSE language: - en library_name: transformers datasets: - humaneval pipeline_tag: text-generation tags: - code metrics: - code_eval --- <h1 align="center"> 🌊 WaveCoder: Widespread And Versatile Enhanced Code LLM </h1> <p align="center"> <a href="https://arxiv.org/abs/2312.14187"><b>[📜 Paper]</b></a> • <!-- <a href=""><b>[🤗 HF Models]</b></a> • --> <a href="https://github.com/microsoft/WaveCoder"><b>[🐱 GitHub]</b></a> <br> <a href="https://twitter.com/TeamCodeLLM_AI"><b>[🐦 Twitter]</b></a> • <a href="https://www.reddit.com/r/LocalLLaMA/comments/19a1scy/wavecoderultra67b_claims_to_be_the_2nd_best_model/"><b>[💬 Reddit]</b></a> • <a href="https://www.analyticsvidhya.com/blog/2024/01/microsofts-wavecoder-and-codeocean-revolutionize-instruction-tuning/">[🍀 Unofficial Blog]</a> <!-- <a href="#-quick-start">Quick Start</a> • --> <!-- <a href="#%EF%B8%8F-citation">Citation</a> --> </p> <p align="center"> Repo for "<a href="https://arxiv.org/abs/2312.14187" target="_blank">WaveCoder: Widespread And Versatile Enhanced Instruction Tuning with Refined Data Generation</a>" </p> ## 🔥 News - [2024/04/10] 🔥🔥🔥 WaveCoder repo, models released at [🤗 HuggingFace](https://huggingface.co/microsoft/wavecoder-ultra-6.7b)! - [2023/12/26] WaveCoder paper released. ## 💡 Introduction WaveCoder 🌊 is a series of large language models (LLMs) for the coding domain, designed to solve relevant problems in the field of code through instruction-following learning. Its training dataset was generated from a subset of code-search-net data using a generator-discriminator framework based on LLMs that we proposed, covering four general code-related tasks: code generation, code summary, code translation, and code repair. | Model | HumanEval | MBPP(500) | HumanEval<br>Fix(Avg.) | HumanEval<br>Explain(Avg.) | | -------------------------------------------------------------------------------- | --------- | --------- | ---------------------- | -------------------------- | | GPT-4 | 85.4 | - | 47.8 | 52.1 | | [🌊 WaveCoder-DS-6.7B](https://huggingface.co/microsoft/wavecoder-ds-6.7b) | 65.8 | 63.0 | 49.5 | 40.8 | | [🌊 WaveCoder-Pro-6.7B](https://huggingface.co/microsoft/wavecoder-pro-6.7b) | 74.4 | 63.4 | 52.1 | 43.0 | | [🌊 WaveCoder-Ultra-6.7B](https://huggingface.co/microsoft/wavecoder-ultra-6.7b) | 79.9 | 64.6 | 52.3 | 45.7 | ## 🪁 Evaluation Please refer to WaveCoder's [GitHub repo](https://github.com/microsoft/WaveCoder) for inference, evaluation, and training code. ```python # Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("microsoft/wavecoder-ultra-6.7b") model = AutoModelForCausalLM.from_pretrained("microsoft/wavecoder-ultra-6.7b") ``` ## 📖 License This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the its [License](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL). ## ☕️ Citation If you find this repository helpful, please consider citing our paper: ``` @article{yu2023wavecoder, title={Wavecoder: Widespread and versatile enhanced instruction tuning with refined data generation}, author={Yu, Zhaojian and Zhang, Xin and Shang, Ning and Huang, Yangyu and Xu, Can and Zhao, Yishujie and Hu, Wenxiang and Yin, Qiufeng}, journal={arXiv preprint arXiv:2312.14187}, year={2023} } ``` ## Note WaveCoder models are trained on the synthetic data generated by OpenAI models. 
Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets.
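## Quick Generation Sketch

The snippet above only loads the model; for an end-to-end example, a rough sketch is shown below. This is not from the official repository — it assumes the tokenizer ships a chat template and that `accelerate` is installed for `device_map="auto"`; see the GitHub repo for the supported inference code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/wavecoder-ultra-6.7b")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/wavecoder-ultra-6.7b", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```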
nk2t/Llama-3-8B-Instruct-japanese-nk2t-v0.3
nk2t
2024-05-22T11:02:28Z
760
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "ja", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-15T12:24:06Z
--- language: - en - ja pipeline_tag: text-generation license: llama3 license_name: llama3 license_link: LICENSE --- # Llama-3-8B-Instruct-JP-nk2t-v0.3 ## Model Details: Built with Meta Llama 3 llama-3-8bの日本語継続学習モデルにChatVectorを適用し、さらにQLoraでファインチューニングしたモデルです。 rinna/llama-3-youko-8b + ( 0.2 * (meta-llama/Meta-Llama-3-8B-Instruct - meta-llama/Meta-Llama-3-8B) + 0.6 * (Llama-3-8B-Instruct-japanese-nk2t-v0.2 - meta-llama/Meta-Llama-3-8B) ) チャットテンプレートはLlama-3-8B-Instructと同じです。 [ggufフォーマット変換版](https://huggingface.co/nk2t/Llama-3-8B-Instruct-japanese-nk2t-v0.3-gguf)はこちら。 ## How to use TBD ## Benchmarks ELYZA-tasks-100 average score: 3.32 (Q5_K_M quant) The results of <a href="https://huggingface.co/datasets/elyza/ELYZA-tasks-100">ELYZA-tasks-100</a> were evaluated by gpt-4o using <a href="https://github.com/Northern-System-Service/gpt4-autoeval">gpt4-autoeval</a>. --- ## Meta Llama-3 Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. 
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
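## Usage Sketch

The "How to use" section above is still TBD; since the card notes that the chat template matches Llama-3-8B-Instruct, a minimal, unofficial sketch along those lines is given here (the prompt and generation settings are illustrative only):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "nk2t/Llama-3-8B-Instruct-japanese-nk2t-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "日本で一番高い山について教えてください。"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Llama-3-Instruct style models stop on <|eot_id|> as well as the regular EOS token.
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]
outputs = model.generate(input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```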
allenfokbeta/Llama3-8B-Chinese-Chat-finetuned-Q8_0-GGUF
allenfokbeta
2024-06-20T02:38:06Z
760
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:allenfokbeta/Llama3-8B-Chinese-Chat-finetuned", "endpoints_compatible", "region:us" ]
null
2024-06-20T02:37:31Z
--- base_model: allenfokbeta/Llama3-8B-Chinese-Chat-finetuned library_name: transformers tags: - llama-cpp - gguf-my-repo --- # allenfokbeta/Llama3-8B-Chinese-Chat-finetuned-Q8_0-GGUF This model was converted to GGUF format from [`allenfokbeta/Llama3-8B-Chinese-Chat-finetuned`](https://huggingface.co/allenfokbeta/Llama3-8B-Chinese-Chat-finetuned) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/allenfokbeta/Llama3-8B-Chinese-Chat-finetuned) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo allenfokbeta/Llama3-8B-Chinese-Chat-finetuned-Q8_0-GGUF --hf-file llama3-8b-chinese-chat-finetuned-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo allenfokbeta/Llama3-8B-Chinese-Chat-finetuned-Q8_0-GGUF --hf-file llama3-8b-chinese-chat-finetuned-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo allenfokbeta/Llama3-8B-Chinese-Chat-finetuned-Q8_0-GGUF --hf-file llama3-8b-chinese-chat-finetuned-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo allenfokbeta/Llama3-8B-Chinese-Chat-finetuned-Q8_0-GGUF --hf-file llama3-8b-chinese-chat-finetuned-q8_0.gguf -c 2048 ```
QuantFactory/NuExtract-tiny-GGUF
QuantFactory
2024-06-25T12:04:41Z
760
0
null
[ "gguf", "region:us" ]
null
2024-06-25T11:29:03Z
Entry not found
Omartificial-Intelligence-Space/Arabic-MiniLM-L12-v2-all-nli-triplet
Omartificial-Intelligence-Space
2024-06-26T20:31:27Z
760
2
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "mteb", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:557850", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "ar", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2024-06-25T17:56:53Z
--- language: - ar library_name: sentence-transformers tags: - mteb - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:557850 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max widget: - source_sentence: ذكر متوازن بعناية يقف على قدم واحدة بالقرب من منطقة شاطئ المحيط النظيفة sentences: - رجل يقدم عرضاً - هناك رجل بالخارج قرب الشاطئ - رجل يجلس على أريكه - source_sentence: رجل يقفز إلى سريره القذر sentences: - السرير قذر. - رجل يضحك أثناء غسيل الملابس - الرجل على القمر - source_sentence: الفتيات بالخارج sentences: - امرأة تلف الخيط إلى كرات بجانب كومة من الكرات - فتيان يركبان في جولة متعة - >- ثلاث فتيات يقفون سوية في غرفة واحدة تستمع وواحدة تكتب على الحائط والثالثة تتحدث إليهن - source_sentence: الرجل يرتدي قميصاً أزرق. sentences: - >- رجل يرتدي قميصاً أزرق يميل إلى الجدار بجانب الطريق مع شاحنة زرقاء وسيارة حمراء مع الماء في الخلفية. - كتاب القصص مفتوح - رجل يرتدي قميص أسود يعزف على الجيتار. - source_sentence: يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة. sentences: - ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه - رجل يستلقي على وجهه على مقعد في الحديقة. - الشاب نائم بينما الأم تقود ابنتها إلى الحديقة pipeline_tag: sentence-similarity model-index: - name: Omartificial-Intelligence-Space/Arabic-MiniLM-L12-v2-all-nli-triplet results: - dataset: config: default name: MTEB BIOSSES (default) revision: d3fb88f8f02e40887cd149695127462bbcf29b4a split: test type: mteb/biosses-sts metrics: - type: cosine_pearson value: 72.5081840952171 - type: cosine_spearman value: 69.41362982941537 - type: euclidean_pearson value: 67.45121490183709 - type: euclidean_spearman value: 67.15273493989758 - type: main_score value: 69.41362982941537 - type: manhattan_pearson value: 67.6119022794479 - type: manhattan_spearman value: 67.51659865246586 task: type: STS - dataset: config: default name: MTEB SICK-R (default) revision: 20a6d6f312dd54037fe07a32d58e5e168867909d split: test type: mteb/sickr-sts metrics: - type: cosine_pearson value: 83.61591268324493 - type: cosine_spearman value: 79.61914245705792 - type: euclidean_pearson value: 81.32044881859483 - type: euclidean_spearman value: 79.04866675279919 - type: main_score value: 79.61914245705792 - type: manhattan_pearson value: 81.09220518201322 - type: manhattan_spearman value: 78.87590523907905 task: type: STS - dataset: config: default name: MTEB STS12 (default) revision: a0d554a64d88156834ff5ae9920b964011b16384 split: test type: mteb/sts12-sts metrics: - type: cosine_pearson value: 84.59807803376341 - type: cosine_spearman value: 77.38689922564416 - type: euclidean_pearson value: 83.92034850646732 - type: euclidean_spearman value: 76.75857193093438 - type: main_score value: 77.38689922564416 - type: manhattan_pearson value: 83.97191863964667 - type: manhattan_spearman value: 76.89790070725708 task: type: STS - dataset: config: default name: MTEB STS13 (default) revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca split: test type: mteb/sts13-sts metrics: - type: cosine_pearson value: 78.18664268536664 - type: cosine_spearman value: 79.58989311630421 - type: euclidean_pearson value: 79.25259731614729 - type: euclidean_spearman value: 80.1701122827397 - type: main_score value: 79.58989311630421 - type: manhattan_pearson value: 
79.12601451996869 - type: manhattan_spearman value: 79.98999436073663 task: type: STS - dataset: config: default name: MTEB STS14 (default) revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 split: test type: mteb/sts14-sts metrics: - type: cosine_pearson value: 80.97541876658141 - type: cosine_spearman value: 79.78614320477877 - type: euclidean_pearson value: 81.01514505747167 - type: euclidean_spearman value: 80.73664735567839 - type: main_score value: 79.78614320477877 - type: manhattan_pearson value: 80.8746560526314 - type: manhattan_spearman value: 80.67025673179079 task: type: STS - dataset: config: default name: MTEB STS15 (default) revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 split: test type: mteb/sts15-sts metrics: - type: cosine_pearson value: 85.23661155813113 - type: cosine_spearman value: 86.21134464371615 - type: euclidean_pearson value: 85.82518684522182 - type: euclidean_spearman value: 86.43600784349509 - type: main_score value: 86.21134464371615 - type: manhattan_pearson value: 85.83101152371589 - type: manhattan_spearman value: 86.42228695679498 task: type: STS - dataset: config: default name: MTEB STS16 (default) revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 split: test type: mteb/sts16-sts metrics: - type: cosine_pearson value: 79.20106689077852 - type: cosine_spearman value: 81.39570893867825 - type: euclidean_pearson value: 80.39578888768929 - type: euclidean_spearman value: 81.19950443340412 - type: main_score value: 81.39570893867825 - type: manhattan_pearson value: 80.2226679341839 - type: manhattan_spearman value: 80.99142422593823 task: type: STS - dataset: config: ar-ar name: MTEB STS17 (ar-ar) revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_pearson value: 81.05294851623468 - type: cosine_spearman value: 81.10570655134113 - type: euclidean_pearson value: 79.22292773537778 - type: euclidean_spearman value: 78.84204232638425 - type: main_score value: 81.10570655134113 - type: manhattan_pearson value: 79.43750460320484 - type: manhattan_spearman value: 79.33713593557482 task: type: STS - dataset: config: ar name: MTEB STS22 (ar) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 45.96875498680092 - type: cosine_spearman value: 52.405509117149904 - type: euclidean_pearson value: 42.097450896728226 - type: euclidean_spearman value: 50.89022884113707 - type: main_score value: 52.405509117149904 - type: manhattan_pearson value: 42.22827727075534 - type: manhattan_spearman value: 50.912841055442634 task: type: STS - dataset: config: default name: MTEB STSBenchmark (default) revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 split: test type: mteb/stsbenchmark-sts metrics: - type: cosine_pearson value: 83.13261516884116 - type: cosine_spearman value: 84.3492527221498 - type: euclidean_pearson value: 82.691603178401 - type: euclidean_spearman value: 83.0499566200785 - type: main_score value: 84.3492527221498 - type: manhattan_pearson value: 82.68307441014618 - type: manhattan_spearman value: 83.01315787964519 task: type: STS - dataset: config: default name: MTEB SummEval (default) revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c split: test type: mteb/summeval metrics: - type: cosine_pearson value: 31.149232235402845 - type: cosine_spearman value: 30.685504130606255 - type: dot_pearson value: 27.466307571160375 - type: dot_spearman value: 28.93064261485915 - type: main_score value: 30.685504130606255 
- type: pearson value: 31.149232235402845 - type: spearman value: 30.685504130606255 task: type: Summarization - name: >- SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test 256 type: sts-test-256 metrics: - type: pearson_cosine value: 0.8264447022356382 name: Pearson Cosine - type: spearman_cosine value: 0.8386403752382455 name: Spearman Cosine - type: pearson_manhattan value: 0.8219134931449013 name: Pearson Manhattan - type: spearman_manhattan value: 0.825509659109493 name: Spearman Manhattan - type: pearson_euclidean value: 0.8223094468630248 name: Pearson Euclidean - type: spearman_euclidean value: 0.8260503151751462 name: Spearman Euclidean - type: pearson_dot value: 0.6375226884845725 name: Pearson Dot - type: spearman_dot value: 0.6287228614640888 name: Spearman Dot - type: pearson_max value: 0.8264447022356382 name: Pearson Max - type: spearman_max value: 0.8386403752382455 name: Spearman Max - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test 128 type: sts-test-128 metrics: - type: pearson_cosine value: 0.8209661910768973 name: Pearson Cosine - type: spearman_cosine value: 0.8347149482673766 name: Spearman Cosine - type: pearson_manhattan value: 0.8082811559854036 name: Pearson Manhattan - type: spearman_manhattan value: 0.8148314269262763 name: Spearman Manhattan - type: pearson_euclidean value: 0.8093138512113149 name: Pearson Euclidean - type: spearman_euclidean value: 0.8156468458613929 name: Spearman Euclidean - type: pearson_dot value: 0.5795109620454884 name: Pearson Dot - type: spearman_dot value: 0.5760223026552876 name: Spearman Dot - type: pearson_max value: 0.8209661910768973 name: Pearson Max - type: spearman_max value: 0.8347149482673766 name: Spearman Max - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test 64 type: sts-test-64 metrics: - type: pearson_cosine value: 0.808708530451336 name: Pearson Cosine - type: spearman_cosine value: 0.8217532539767914 name: Spearman Cosine - type: pearson_manhattan value: 0.7876121380998453 name: Pearson Manhattan - type: spearman_manhattan value: 0.7969092304137347 name: Spearman Manhattan - type: pearson_euclidean value: 0.7902997966909958 name: Pearson Euclidean - type: spearman_euclidean value: 0.7987635968785215 name: Spearman Euclidean - type: pearson_dot value: 0.495047136234386 name: Pearson Dot - type: spearman_dot value: 0.49287000679901516 name: Spearman Dot - type: pearson_max value: 0.808708530451336 name: Pearson Max - type: spearman_max value: 0.8217532539767914 name: Spearman Max --- # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - Omartificial-Intelligence-Space/arabic-n_li-triplet <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Omartificial-Intelligence-Space/MiniLM-L12-v2-all-nli-triplet") # Run inference sentences = [ 'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.', 'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه', 'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-test-256` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8264 | | **spearman_cosine** | **0.8386** | | pearson_manhattan | 0.8219 | | spearman_manhattan | 0.8255 | | pearson_euclidean | 0.8223 | | spearman_euclidean | 0.8261 | | pearson_dot | 0.6375 | | spearman_dot | 0.6287 | | pearson_max | 0.8264 | | spearman_max | 0.8386 | #### Semantic Similarity * Dataset: `sts-test-128` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.821 | | **spearman_cosine** | **0.8347** | | pearson_manhattan | 0.8083 | | spearman_manhattan | 0.8148 | | pearson_euclidean | 0.8093 | | spearman_euclidean | 0.8156 | | pearson_dot | 0.5795 | | spearman_dot | 0.576 | | pearson_max | 0.821 | | spearman_max | 0.8347 | #### Semantic Similarity * Dataset: `sts-test-64` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8087 | | **spearman_cosine** | **0.8218** | | pearson_manhattan | 0.7876 | | spearman_manhattan | 0.7969 | | pearson_euclidean | 0.7903 | | spearman_euclidean | 0.7988 | | pearson_dot | 0.495 | | spearman_dot | 0.4929 | | pearson_max | 0.8087 | | spearman_max | 0.8218 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Omartificial-Intelligence-Space/arabic-n_li-triplet * Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet * Size: 557,850 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 10.33 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.21 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.32 tokens</li><li>max: 53 tokens</li></ul> | * Samples: | anchor | positive | negative | |:------------------------------------------------------------|:--------------------------------------------|:------------------------------------| | <code>شخص على حصان يقفز فوق طائرة معطلة</code> | <code>شخص في الهواء الطلق، على حصان.</code> | <code>شخص في مطعم، يطلب عجة.</code> | | <code>أطفال يبتسمون و يلوحون للكاميرا</code> | <code>هناك أطفال حاضرون</code> | <code>الاطفال يتجهمون</code> | | <code>صبي يقفز على لوح التزلج في منتصف الجسر الأحمر.</code> | <code>الفتى يقوم بخدعة التزلج</code> | <code>الصبي يتزلج على الرصيف</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Evaluation Dataset #### Omartificial-Intelligence-Space/arabic-n_li-triplet * Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet * Size: 6,584 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 21.86 tokens</li><li>max: 105 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.22 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.2 tokens</li><li>max: 33 tokens</li></ul> | * Samples: | anchor | positive | negative | |:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|:---------------------------------------------------| | <code>امرأتان يتعانقان بينما يحملان حزمة</code> | <code>إمرأتان يحملان حزمة</code> | <code>الرجال يتشاجرون خارج مطعم</code> | | <code>طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة.</code> | <code>طفلين يرتديان قميصاً مرقماً يغسلون أيديهم</code> | <code>طفلين يرتديان سترة يذهبان إلى المدرسة</code> | | <code>رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس</code> | 
<code>رجل يبيع الدونات لعميل</code> | <code>امرأة تشرب قهوتها في مقهى صغير</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: 
None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-64_spearman_cosine | |:------:|:----:|:-------------:|:----------------------------:|:----------------------------:|:---------------------------:| | 0.0229 | 200 | 6.2204 | - | - | - | | 0.0459 | 400 | 4.9559 | - | - | - | | 0.0688 | 600 | 4.7835 | - | - | - | | 0.0918 | 800 | 4.2725 | - | - | - | | 0.1147 | 1000 | 4.291 | - | - | - | | 0.1377 | 1200 | 4.0704 | - | - | - | | 0.1606 | 1400 | 3.7962 | - | - | - | | 0.1835 | 1600 | 3.7447 | - | - | - | | 0.2065 | 1800 | 3.569 | - | - | - | | 0.2294 | 2000 | 3.5373 | - | - | - | | 0.2524 | 2200 | 3.608 | - | - | - | | 0.2753 | 2400 | 3.5609 | - | - | - | | 0.2983 | 2600 | 3.5231 | - | - | - | | 0.3212 | 2800 | 3.3312 | - | - | - | | 0.3442 | 3000 | 3.4803 | - | - | - | | 0.3671 | 3200 | 3.3552 | - | - | - | | 0.3900 | 3400 | 3.3024 | - | - | - | | 0.4130 | 3600 | 3.2559 | - | - | - | | 0.4359 | 3800 | 3.1882 | - | - | - | | 0.4589 | 4000 | 3.227 | - | - | - | | 0.4818 | 4200 | 3.0889 | - | - | - | | 0.5048 | 4400 | 3.0861 | - | - | - | | 0.5277 | 4600 | 3.0178 | - | - | - | | 0.5506 | 4800 | 3.231 | - | - | - | | 0.5736 | 5000 | 3.1593 | - | - | - | | 0.5965 | 5200 | 3.1101 | - | - | - | | 0.6195 | 5400 | 3.1307 | - | - | - | | 0.6424 | 5600 | 3.1265 | - | - | - | | 0.6654 | 5800 | 3.1116 | - | - | - | | 0.6883 | 6000 | 3.1417 | - | - | - | | 0.7113 | 6200 | 3.0862 | - | - | - | | 0.7342 | 6400 | 2.9652 | - | - | - | | 0.7571 | 6600 | 2.8466 | - | - | - | | 0.7801 | 6800 | 2.271 | - | - | - | | 0.8030 | 7000 | 2.046 | - | - | - | | 0.8260 | 7200 | 1.9634 | - | - | - | | 0.8489 | 7400 | 1.8875 | - | - | - | | 0.8719 | 7600 | 1.7655 | - | - | - | | 0.8948 | 7800 | 1.6874 | - | - | - | | 0.9177 | 8000 | 1.7315 | - | - | - | | 0.9407 | 8200 | 1.6674 | - | - | - | | 0.9636 | 8400 | 1.6574 | - | - | - | | 0.9866 | 8600 | 1.6142 | - | - | - | | 1.0 | 8717 | - | 0.8347 | 0.8386 | 0.8218 | ### Framework Versions - Python: 3.9.18 - Sentence Transformers: 3.0.1 - Transformers: 4.40.0 - PyTorch: 2.2.2+cu121 - Accelerate: 0.26.1 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray 
Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
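As a supplement to the usage snippet above, the sketch below illustrates how the Matryoshka dimensions listed in the MatryoshkaLoss configuration (256, 128 and 64) can be used at inference time by truncating and re-normalizing the 384-dimensional embeddings. This is a hedged, illustrative example rather than part of the original card; the repository id follows this record's model id, and the sentences are taken from the card's widget examples.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-MiniLM-L12-v2-all-nli-triplet")

sentences = ["الرجل يرتدي قميصاً أزرق.", "رجل يرتدي قميص أسود يعزف على الجيتار."]
emb = model.encode(sentences)  # shape: (2, 384)

dim = 64                       # any of the trained Matryoshka dims: 256, 128 or 64
small = emb[:, :dim]           # keep only the first `dim` components
small = small / np.linalg.norm(small, axis=1, keepdims=True)  # re-normalize before cosine similarity
print(small @ small.T)         # cosine similarity matrix at the reduced dimension
```

Truncating this way trades a small amount of the spearman_cosine score reported in the sts-test-256/128/64 tables for smaller indexes and faster similarity search.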
yh-yao/Meta-Llama-3-8B-Instruct-Q3_K_S-GGUF
yh-yao
2024-07-01T22:29:15Z
760
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
text-generation
2024-07-01T22:28:57Z
--- base_model: meta-llama/Meta-Llama-3-8B-Instruct language: - en license: llama3 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\ \ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\ \ 3\" means the foundational large language models and software and algorithms,\ \ including machine-learning model code, trained model weights, inference-enabling\ \ code, training-enabling code, fine-tuning enabling code and other elements of\ \ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\ \"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\ \ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\ we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\ \ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\ \ a non-exclusive, worldwide, non-transferable and royalty-free limited license\ \ under Meta’s intellectual property or other rights owned by Meta embodied in the\ \ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\ \ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\ \ If you distribute or make available the Llama Materials (or any derivative works\ \ thereof), or a product or service that uses any of them, including another AI\ \ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\ \ and (B) prominently display “Built with Meta Llama 3” on a related website, user\ \ interface, blogpost, about page, or product documentation. If you use the Llama\ \ Materials to create, train, fine tune, or otherwise improve an AI model, which\ \ is distributed or made available, you shall also include “Llama 3” at the beginning\ \ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\ \ works thereof, from a Licensee as part of an integrated end user product, then\ \ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\ \ copies of the Llama Materials that you distribute the following attribution notice\ \ within a “Notice” text file distributed as a part of such copies: “Meta Llama\ \ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\nv. You will not use the Llama Materials or any output or\ \ results of the Llama Materials to improve any other large language model (excluding\ \ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\ \ on the Meta Llama 3 version release date, the monthly active users of the products\ \ or services made available by or for Licensee, or Licensee’s affiliates, is greater\ \ than 700 million monthly active users in the preceding calendar month, you must\ \ request a license from Meta, which Meta may grant to you in its sole discretion,\ \ and you are not authorized to exercise any of the rights under this Agreement\ \ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\ \ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\ \ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\ \ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\ \ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\ \ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\ \ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\ \ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\ \ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\ \ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\ \ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\ \ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\ 5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\ \ and in connection with the Llama Materials, neither Meta nor Licensee may use\ \ any name or mark owned by or associated with the other or any of its affiliates,\ \ except as required for reasonable and customary use in describing and redistributing\ \ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\ \ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\ \ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\ \ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\ \ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\ b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\ \ Meta, with respect to any derivative works and modifications of the Llama Materials\ \ that are made by you, as between you and Meta, you are and will be the owner of\ \ such derivative works and modifications.\nc. 
If you institute litigation or other\ \ proceedings against Meta or any entity (including a cross-claim or counterclaim\ \ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\ \ or any portion of any of the foregoing, constitutes infringement of intellectual\ \ property or other rights owned or licensable by you, then any licenses granted\ \ to you under this Agreement shall terminate as of the date such litigation or\ \ claim is filed or instituted. You will indemnify and hold harmless Meta from and\ \ against any claim by any third party arising out of or related to your use or\ \ distribution of the Llama Materials.\n6. Term and Termination. The term of this\ \ Agreement will commence upon your acceptance of this Agreement or access to the\ \ Llama Materials and will continue in full force and effect until terminated in\ \ accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\ \ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\ \ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\ \ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\ \ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 4.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 5. 
Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 6. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 7. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\ \ human-generated\n 6. Generating or facilitating false online engagement, including\ \ fake reviews and other means of fake online engagement\n4. Fail to appropriately\ \ disclose to end users any known dangers of your AI system\nPlease report any violation\ \ of this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). 
extra_gated_button_content: Submit widget: - example_title: Hello messages: - role: user content: Hey my name is Julien! How are you? - example_title: Winter holidays messages: - role: system content: You are a helpful and honest assistant. Please, respond concisely and truthfully. - role: user content: Can you recommend a good destination for Winter holidays? - example_title: Programming assistant messages: - role: system content: You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully. - role: user content: Write a function that computes the nth fibonacci number. inference: parameters: max_new_tokens: 300 stop: - <|end_of_text|> - <|eot_id|> --- # yh-yao/Meta-Llama-3-8B-Instruct-Q3_K_S-GGUF This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo yh-yao/Meta-Llama-3-8B-Instruct-Q3_K_S-GGUF --hf-file meta-llama-3-8b-instruct-q3_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo yh-yao/Meta-Llama-3-8B-Instruct-Q3_K_S-GGUF --hf-file meta-llama-3-8b-instruct-q3_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo yh-yao/Meta-Llama-3-8B-Instruct-Q3_K_S-GGUF --hf-file meta-llama-3-8b-instruct-q3_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo yh-yao/Meta-Llama-3-8B-Instruct-Q3_K_S-GGUF --hf-file meta-llama-3-8b-instruct-q3_k_s.gguf -c 2048 ```
GanjinZero/coder_eng
GanjinZero
2023-03-22T08:22:16Z
759
3
transformers
[ "transformers", "pytorch", "safetensors", "bert", "feature-extraction", "biomedical", "en", "license:apache-2.0", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
2022-03-02T23:29:04Z
--- language: - en license: apache-2.0 tags: - bert - biomedical --- CODER: Knowledge-infused cross-lingual medical term embedding for term normalization. This is the English version. GitHub link: https://github.com/GanjinZero/CODER ``` @article{YUAN2022103983, title = {CODER: Knowledge-infused cross-lingual medical term embedding for term normalization}, journal = {Journal of Biomedical Informatics}, pages = {103983}, year = {2022}, issn = {1532-0464}, doi = {https://doi.org/10.1016/j.jbi.2021.103983}, url = {https://www.sciencedirect.com/science/article/pii/S1532046421003129}, author = {Zheng Yuan and Zhengyun Zhao and Haixia Sun and Jiao Li and Fei Wang and Sheng Yu}, keywords = {medical term normalization, cross-lingual, medical term representation, knowledge graph embedding, contrastive learning} } ```
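The card above does not include a usage snippet, so here is a minimal, hedged sketch of loading the model with Transformers to embed medical terms and rank normalization candidates by cosine similarity. CLS-token pooling is an assumption made for illustration; see the GitHub repository for the exact embedding procedure used in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("GanjinZero/coder_eng")
model = AutoModel.from_pretrained("GanjinZero/coder_eng")
model.eval()

terms = ["myocardial infarction", "heart attack", "type 2 diabetes mellitus"]
with torch.no_grad():
    batch = tokenizer(terms, padding=True, truncation=True, return_tensors="pt")
    # CLS-token embedding as the term representation (illustrative pooling choice)
    emb = model(**batch).last_hidden_state[:, 0, :]
    emb = torch.nn.functional.normalize(emb, dim=-1)

# "heart attack" should score closest to "myocardial infarction"
print(emb @ emb.T)
```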
crystina-z/monoELECTRA_LCE_nneg31
crystina-z
2024-05-07T04:12:36Z
759
0
transformers
[ "transformers", "pytorch", "tf", "electra", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Entry not found
gorkemgoknar/gpt2-turkish-writer
gorkemgoknar
2021-09-22T08:29:24Z
759
9
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "turkish", "aiwriter", "finetuned", "tr", "dataset:wikipedia-turkish", "dataset:custom-book-corpus", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - tr thumbnail: tags: - gpt2 - turkish - aiwriter - finetuned license: apache-2.0 datasets: - wikipedia-turkish - custom-book-corpus metrics: - perplexity - accuracy widget: - text: Bir zaman topu olan ama köpeği olmayan bir çocuk vardı. Parkta context: '' - text: 'Uzun uzun sahile doğru baktı. Düşündüklerinden ' context: '' - text: Çok uzun zaman önce galaksinin uzak bir köşesinde... context: '' - text: "'Bugün kendimi çok hasta hissediyorum' dedi. Karşısında " context: '' --- # Turkish AI Writer based on GPT2-Small # Türkçe Yapay Zeka Yazarı ## Model description This model is an enhanced version of the fine-tuned gpt2-small-turkish model. In addition to the 28-10-2020 Turkish Wikipedia article dump, this model is trained on more than 400 classic novels and plays in Turkish (including Dostoevsky, Shakespeare, and Dumas). The base work follows Pierre Guillou's tutorial on this page (https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb). Note that since Turkish is not as close to English as Portuguese is, the last 3 layers are trained instead of only the last 2. The code is converted to work with fastai 2.x, using Google Colab for training. Current accuracy: 36.3%, perplexity: 44.75. A demo (using CPU inference) is available at: http://www.metayazar.com Models are available: * [gpt2-small-tuned-tr](https://huggingface.co/gorkemgoknar/gpt2-small-turkish) * [gpt2-small-turkish-writer](https://huggingface.co/gorkemgoknar/gpt2-turkish-writer) ## Intended uses & limitations #### How to use #### Install ```python from transformers import AutoTokenizer, AutoModelWithLMHead import torch tokenizer = AutoTokenizer.from_pretrained("gorkemgoknar/gpt2-turkish-writer") model = AutoModelWithLMHead.from_pretrained("gorkemgoknar/gpt2-turkish-writer") # Get sequence length max of 1024 tokenizer.model_max_length=1024 model.eval() # disable dropout (or leave in train mode to finetune) ``` #### Generate 1 word ```python # input sequence text = "Bu yazıyı bilgisayar yazdı." inputs = tokenizer(text, return_tensors="pt") # model output outputs = model(**inputs, labels=inputs["input_ids"]) loss, logits = outputs[:2] predicted_index = torch.argmax(logits[0, -1, :]).item() predicted_text = tokenizer.decode([predicted_index]) # results print('input text:', text) print('predicted text:', predicted_text) # input text: # predicted text: ``` #### Generate Full Sequence ```python # input sequence text = "Bu yazıyı bilgisayar yazdı." inputs = tokenizer(text, return_tensors="pt") # model output using Top-k sampling text generation method sample_outputs = model.generate(inputs.input_ids, pad_token_id=50256, do_sample=True, max_length=50, # put the token number you want top_k=40, num_return_sequences=1) # generated sequence for i, sample_output in enumerate(sample_outputs): print(">> Generated text {}\n\n{}".format(i+1, tokenizer.decode(sample_output.tolist()))) # >> Generated text # ``` #### Limitations and bias The training data used for this model come from Turkish Wikipedia and books. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Also, not much pre-processing was done on the books, so chapter names and page numbers can appear in some cases. This is a work in progress.
## Training data Turkish Wikipedia article dump as of 28-10-2020 Turkish book dataset of >400 classic novels ## Training procedure ## Eval results | epoch |train_loss |valid_loss |accuracy |perplexity |time | | ----- | -------- |--------- | ---------- | --------- | ----- | |0 |4.497828 |4.549605 |0.277328 |94.595070 |2:09:58| |1 |4.503929 |4.519456 |0.275071 |91.785645 |2:04:30| |2 |3.612716 |3.921146 |0.344802 |50.458256 |2:03:22| |3 |3.777645 |4.072006 |0.326130 |58.674530 |1:56:14| |4 |2.934462 |3.801303 |0.363719 |44.759476 |1:58:55| Note: 1-cycle training was used, and the epochs were run at different times.
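For completeness, the short sketch below shows how the perplexity metric reported in the table above can be computed for a single input; it is a hedged, illustrative example (using AutoModelForCausalLM, the non-deprecated equivalent of the AutoModelWithLMHead class used earlier in this card) and will not reproduce the validation-set figure of 44.75.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gorkemgoknar/gpt2-turkish-writer")
model = AutoModelForCausalLM.from_pretrained("gorkemgoknar/gpt2-turkish-writer")
model.eval()

# Perplexity of the model on one sample sentence (exp of the average token loss)
text = "Bir zaman topu olan ama köpeği olmayan bir çocuk vardı."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print("perplexity:", torch.exp(loss).item())
```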
vinai/bartpho-word
vinai
2022-10-22T09:05:26Z
759
4
transformers
[ "transformers", "pytorch", "tf", "mbart", "feature-extraction", "arxiv:2109.09701", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# <a name="introduction"></a> BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese Two BARTpho versions `BARTpho-syllable` and `BARTpho-word` are the first public large-scale monolingual sequence-to-sequence models pre-trained for Vietnamese. BARTpho uses the "large" architecture and pre-training scheme of the sequence-to-sequence denoising model [BART](https://github.com/pytorch/fairseq/tree/main/examples/bart), thus especially suitable for generative NLP tasks. Experiments on a downstream task of Vietnamese text summarization show that in both automatic and human evaluations, BARTpho outperforms the strong baseline [mBART](https://github.com/pytorch/fairseq/tree/main/examples/mbart) and improves the state-of-the-art. The general architecture and experimental results of BARTpho can be found in our [paper](https://arxiv.org/abs/2109.09701): @article{bartpho, title = {{BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese}}, author = {Nguyen Luong Tran and Duong Minh Le and Dat Quoc Nguyen}, journal = {arXiv preprint}, volume = {arXiv:2109.09701}, year = {2021} } **Please CITE** our paper when BARTpho is used to help produce published results or incorporated into other software. For further information or requests, please go to [BARTpho's homepage](https://github.com/VinAIResearch/BARTpho)!
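This card does not include a usage example, so here is a minimal, hedged sketch of loading BARTpho-word with Transformers for feature extraction, following the pattern shown in the BARTpho GitHub repository. Note that the word-level variant expects Vietnamese text that has already been word-segmented (e.g. with RDRSegmenter/VnCoreNLP); the sample sentence below is an illustrative, pre-segmented placeholder.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-word")
model = AutoModel.from_pretrained("vinai/bartpho-word")

# BARTpho-word expects word-segmented input, with multi-word tokens joined by "_"
line = "Chúng_tôi là những nghiên_cứu_viên ."
input_ids = tokenizer(line, return_tensors="pt")
with torch.no_grad():
    features = model(**input_ids)
print(features.last_hidden_state.shape)  # hidden states of the seq2seq model
```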
huggingtweets/palestinepound
huggingtweets
2022-08-14T07:45:17Z
759
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-08-14T07:43:47Z
--- language: en thumbnail: http://www.huggingtweets.com/palestinepound/1660463113168/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1396083058844045319/d_xNzMbk_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Palestine Pound</div> <div style="text-align: center; font-size: 14px;">@palestinepound</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Palestine Pound. | Data | Palestine Pound | | --- | --- | | Tweets downloaded | 145 | | Retweets | 4 | | Short tweets | 11 | | Tweets kept | 130 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/152jutl1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @palestinepound's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1sd0ks1o) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1sd0ks1o/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/palestinepound') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
RussianNLP/ruRoBERTa-large-rucola
RussianNLP
2022-08-30T20:23:10Z
759
5
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-30T19:54:51Z
--- language: ru license: apache-2.0 tags: - transformers thumbnail: "https://github.com/RussianNLP/RuCoLA/blob/main/logo.png" widget: - text: "Он решил ту или иную сложную задачу." --- This is a finetuned version of [RuRoBERTa-large](https://huggingface.co/sberbank-ai/ruRoberta-large) for the task of linguistic acceptability classification on the [RuCoLA](https://rucola-benchmark.com/) benchmark. The hyperparameters used for finetuning are as follows: * 5 training epochs (with early stopping based on validation MCC) * Peak learning rate: 1e-5, linear warmup for 10% of total training time * Weight decay: 1e-4 * Batch size: 32 * Random seed: 5 * Optimizer: [torch.optim.AdamW](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html)
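The card lists the finetuning setup but no inference snippet, so below is a short, hedged sketch of scoring a Russian sentence for linguistic acceptability with the Transformers pipeline. The example sentence is the card's own widget text; the returned label names come from the model's config and are not restated here.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="RussianNLP/ruRoBERTa-large-rucola")

# Widget example from this card ("He solved this or that difficult problem.")
print(classifier("Он решил ту или иную сложную задачу."))
```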
laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft
laion
2023-04-18T19:28:23Z
759
3
open_clip
[ "open_clip", "tensorboard", "zero-shot-image-classification", "clip", "arxiv:2201.03545", "arxiv:2210.08402", "arxiv:1910.04867", "license:mit", "region:us" ]
zero-shot-image-classification
2023-02-11T01:37:59Z
--- tags: - zero-shot-image-classification - clip license: mit library_name: open_clip pipeline_tag: zero-shot-image-classification --- # Model card for CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) 5. [Acknowledgements](#acknowledgements) 6. [Citation](#citation) # Model Details ## Model Description A series of CLIP [ConvNeXt-Large](https://arxiv.org/abs/2201.03545) (w/ extra text depth, vision MLP head) models trained on the LAION-2B (english) subset of [LAION-5B](https://arxiv.org/abs/2210.08402) using [OpenCLIP](https://github.com/mlfoundations/open_clip). The models utilize: * the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Large model (`convnext_large`) as the image tower * a MLP (`fc - gelu - drop - fc`) head in vision tower instead of the single projection of other CLIP models * a text tower with same width but 4 layers more depth than ViT-L / RN50x16 models (depth 16, embed dim 768). This 320x320 resolution model is a fine-tune of [CLIP-convnext_large_d.laion2B-s26B-b102K-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) at a higher resolution. It was fine-tune from the final checkpoint of the original 256x256 training run w/ an additional ~2.5B samples and a lower learning rate. At 320x320, the ConvNext-Large-D is significantly more efficient than the L/14 model at 336x336 that OpenAI fine-tuned. L/14-336 model is 2.5x more GMAC, 2.8x more activations, and 1.22x more parameters. | Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) | | ----- | ------- | ---------- | ------------ | --------- | | [convnext_large_d.laion2b_s26b_b102k-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1), D(0.1) | 75.9 | | [convnext_large_d_320.laion2b_s29b_b131k-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.6 | | [convnext_large_d_320.laion2b_s29b_b131k-ft-soup](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.9 | RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only, D = Dropout (prob) -- image tower head only LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and asthetic score filtering. Model training done by Ross Wightman on the [stability.ai](https://stability.ai/) cluster. # Uses As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model. The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset. ## Direct Use Zero-shot image classification, image and text retrieval, among others. 
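A minimal zero-shot classification sketch using the OpenCLIP API (not part of the original card; the image path and candidate labels are placeholders):

```python
import torch
from PIL import Image
import open_clip

model_ref = "hf-hub:laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft"
model, _, preprocess = open_clip.create_model_and_transforms(model_ref)
tokenizer = open_clip.get_tokenizer(model_ref)
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)   # placeholder image
text = tokenizer(["a diagram", "a dog", "a cat"])            # placeholder labels

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize the embeddings and compare image vs. text.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probabilities:", probs)
```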
## Downstream Use Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others. ## Out-of-Scope Use As per the OpenAI models, **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. Further the above notice, the LAION-5B dataset used in training of these models has additional considerations, see below. # Training Details ## Training Data This model was trained with LAION-2B -- A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/). **IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress. ## Training Procedure All 320x320 model fine-tunes were trained with a global batch size of 131072 for 10-16 checkpoint intervals of 203.7M samples for a total of ~2-3B samples seen over fine-tune. For 320x320 models, a slurm script w/ srun below was used on 64 8-GPU (A100 40GB) nodes (Stability). 
``` /opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \ --save-frequency 1 \ --name "convnext_large_320" \ --pretrained ""/runs/convnext_large_256/epoch_128.pt" \ --resume 'latest' \ --train-data="pipe:aws s3 cp s3://mybucket/path/{laion{00000..xxxxx}.tar -" \ --train-num-samples 203666042 \ --dataset-type webdataset \ --precision amp_bfloat16 \ --beta2 0.98 \ --warmup 2000 \ --batch-size=256 \ --epochs=12 \ --dataset-resampled \ --aug-cfg use_timm=True scale='(0.5, 1.0)' re_prob=0.4 \ --clip-grad-norm 5.0 \ --lr 5e-5 \ --workers=6 \ --model "convnext_large_d_320" \ --seed 0 \ --ddp-static-graph \ --local-loss \ --gather-with-grad \ --grad-checkpointing ``` # Evaluation Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark). ## Testing Data, Factors & Metrics ### Testing Data The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval. ## Results The models achieve between 75.9 and 76.9 top-1 zero-shot accuracy on ImageNet-1k. Zero-shot curve of origina from-scratch 256x256 training: ![](convnext_large_zero_shot.png) An initial round of benchmarks have been performed on a wider range of datasets, to be viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb # Acknowledgements Acknowledging [stability.ai](https://stability.ai/) for compute used to train this model. # Citation **BibTeX:** LAION-5B ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` OpenCLIP software ```bibtex @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` OpenAI CLIP paper ```bibtex @inproceedings{Radford2021LearningTV, title={Learning Transferable Visual Models From Natural Language Supervision}, author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. 
Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` ```bibtex @Article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
timm/mobilevitv2_200.cvnets_in1k
timm
2023-04-24T22:27:10Z
759
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2206.02680", "license:other", "region:us" ]
image-classification
2023-04-24T22:26:39Z
--- tags: - image-classification - timm library_name: timm license: other datasets: - imagenet-1k --- # Model card for mobilevitv2_200.cvnets_in1k A MobileViT-v2 image classification model. Trained on ImageNet-1k by paper authors. See license details at https://github.com/apple/ml-cvnets/blob/main/LICENSE ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 18.4 - GMACs: 7.2 - Activations (M): 32.1 - Image size: 256 x 256 - **Papers:** - Separable Self-attention for Mobile Vision Transformers: https://arxiv.org/abs/2206.02680 - **Original:** https://github.com/apple/ml-cvnets - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('mobilevitv2_200.cvnets_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'mobilevitv2_200.cvnets_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 128, 128, 128]) # torch.Size([1, 256, 64, 64]) # torch.Size([1, 512, 32, 32]) # torch.Size([1, 768, 16, 16]) # torch.Size([1, 1024, 8, 8]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'mobilevitv2_200.cvnets_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1024, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @article{Mehta2022SeparableSF, title={Separable Self-attention for Mobile Vision Transformers}, author={Sachin Mehta and Mohammad Rastegari}, journal={ArXiv}, year={2022}, volume={abs/2206.02680} } ```
jtatman/gpt2-open-instruct-v1-Anthropic-hh-rlhf
jtatman
2023-07-25T00:57:00Z
759
4
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "Transformers", "GPT2", "en", "dataset:Anthropic/hh-rlhf", "dataset:hakurei/open-instruct-v1", "base_model:vicgalle/gpt2-open-instruct-v1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-22T10:26:00Z
--- license: mit base_model: vicgalle/gpt2-open-instruct-v1 tags: - generated_from_trainer - Transformers - GPT2 model-index: - name: hh-rlhf results: [] datasets: - Anthropic/hh-rlhf - hakurei/open-instruct-v1 tokenizers: - GPT2Tokenizer language: - en library_name: transformers metrics: - bleu --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hh-rlhf This model is a fine-tuned version of [vicgalle/gpt2-open-instruct-v1](https://huggingface.co/vicgalle/gpt2-open-instruct-v1) on a 15k-sample subset of the Anthropic/hh-rlhf dataset. It achieves the following results on the evaluation set: - Loss: 2.1534 This model responds well to the 'Human:' or 'Assistant:' prompt in conversational settings. Shorter responses are better suited, so keep the generation length to a reasonable limit. Left to its own devices it will produce some fairly esoteric responses, including uncensored remarks and at times violent outbursts, especially when asked questions. It needs vetting for other textual uses. ``` Human: Insane clown posse says... Human: Should we look for a woman? Assistant: It’s okay if you’re having a tough time finding what you are looking for. It’s a common question people might come up with for an argument or misunderstanding. What are you looking for, and what kind of woman would you have? Human: Are you trying to find someone to argue ``` ## Model description GPT-2 open-instruct was trained fully on the open-instruct dataset. This fine-tune reimagines the LM head as a partial RLHF adapter, with subtle reinforcements. ## Intended uses & limitations Intended for studying the intersection of instruct models and prompting styles that rely on subtle conversational exchanges. This probably needs to be refined substantially at this point. ## Training and evaluation data ```python Train dataset size: 15000 Test dataset size: 500 Dataset({ features: ['chosen', 'rejected'], num_rows: 15000 }) Dataset({ features: ['chosen', 'rejected'], num_rows: 500 }) ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.3108 | 1.0 | 7500 | 2.1799 | | 2.265 | 2.0 | 15000 | 2.1632 | | 2.2507 | 3.0 | 22500 | 2.1567 | | 2.2519 | 4.0 | 30000 | 2.1534 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
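A short generation sketch following the 'Human:'/'Assistant:' prompt format described above (an assumed usage pattern, not taken from the original card; the prompt and sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="jtatman/gpt2-open-instruct-v1-Anthropic-hh-rlhf",
)

# Keep generations short, as the card recommends.
prompt = "Human: What should I cook for dinner tonight?\nAssistant:"
result = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```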
SG161222/RealVisXL_V2.02_Turbo
SG161222
2024-04-12T15:38:32Z
759
0
diffusers
[ "diffusers", "safetensors", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-12-24T12:17:13Z
--- license: openrail++ --- <b>It's important! Read it!</b><br> The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.<br> <b>You can support me directly on Boosty - https://boosty.to/sg_161222</b><br> The model is aimed at photorealism. Can produce sfw and nsfw images of decent quality.<br> CivitAI Page: https://civitai.com/models/139562?modelVersionId=266762<br> <b>Recommended Negative Prompt:</b><br> (worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth<br> <b>or another negative prompt</b><br> <b>Recommended Generation Parameters:</b><br> Sampling Steps: 4+<br> Sampling Method: DPM++ SDE Karras<br> CFG Scale: 1.5-3 <b>Recommended Hires Fix Parameters:</b><br> Hires steps: 2+<br> Upscaler: 4x-UltraSharp upscaler / or another<br> Denoising strength: 0.1 - 0.5<br> Upscale by: 1.1-2.0<br>
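A minimal diffusers sketch applying the settings above (assumed usage, not part of the original card; the prompt is illustrative and fp16/CUDA availability is assumed):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V2.02_Turbo", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="photo of a young woman in a city street, natural light, photorealistic",
    negative_prompt="(worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth",
    num_inference_steps=6,  # card recommends 4+ sampling steps
    guidance_scale=2.0,     # card recommends CFG 1.5-3
).images[0]
image.save("realvisxl_turbo_sample.png")
```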
eren23/DistilHermes-2.5-Mistral-7B
eren23
2024-03-05T15:46:11Z
759
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "question-answering", "en", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
question-answering
2024-01-26T22:32:12Z
--- language: - en license: apache-2.0 library_name: transformers datasets: - argilla/distilabel-intel-orca-dpo-pairs pipeline_tag: question-answering model-index: - name: DistilHermes-2.5-Mistral-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.87 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistilHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.78 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistilHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.65 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistilHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.24 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistilHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistilHermes-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 59.82 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistilHermes-2.5-Mistral-7B name: Open LLM Leaderboard --- DPO Finetuned teknium/OpenHermes-2.5-Mistral-7B using argilla/distilabel-intel-orca-dpo-pairs. Intel orca dpo pairs is a distilled version: https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs of https://huggingface.co/datasets/Intel/orca_dpo_pairs # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_eren23__DistilHermes-2.5-Mistral-7B) | Metric |Value| |---------------------------------|----:| |Avg. |67.76| |AI2 Reasoning Challenge (25-Shot)|65.87| |HellaSwag (10-Shot) |84.78| |MMLU (5-Shot) |63.65| |TruthfulQA (0-shot) |54.24| |Winogrande (5-shot) |78.22| |GSM8k (5-shot) |59.82|
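A minimal generation sketch (assumed usage, not part of the original card): the base OpenHermes-2.5 model ships a ChatML chat template, and this sketch assumes that template carries over to the DPO fine-tune.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eren23/DistilHermes-2.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain DPO fine-tuning in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```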
Sharathhebbar24/Med_GPT2
Sharathhebbar24
2024-02-02T16:21:16Z
759
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "medical", "en", "dataset:gamino/wiki_medical_terms", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-27T16:44:48Z
--- license: apache-2.0 datasets: - gamino/wiki_medical_terms language: - en pipeline_tag: text-generation tags: - medical --- This is a GPT-2 model fine-tuned on the [gamino/wiki_medical_terms](https://huggingface.co/datasets/gamino/wiki_medical_terms) dataset. ## Model description GPT-2 is a transformers model pre-trained on a very large corpus of English data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. Inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses a masking mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was trained for, however, which is generating texts from a prompt. ### To use this model ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model_name = "Sharathhebbar24/Med_GPT2" >>> model = AutoModelForCausalLM.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) >>> def generate_text(prompt): >>> inputs = tokenizer.encode(prompt, return_tensors='pt') >>> outputs = model.generate(inputs, max_length=64, pad_token_id=tokenizer.eos_token_id) >>> generated = tokenizer.decode(outputs[0], skip_special_tokens=True) >>> return generated[:generated.rfind(".")+1] >>> prompt = "What is Paracetamol" >>> res = generate_text(prompt) >>> res ```
TomGrc/FusionNet_34Bx2_MoE_v0.1
TomGrc
2024-03-04T03:30:15Z
759
7
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "conversational", "en", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-30T19:36:06Z
--- language: - en license: mit tags: - moe pipeline_tag: text-generation model-index: - name: FusionNet_34Bx2_MoE_v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.72 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_34Bx2_MoE_v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.46 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_34Bx2_MoE_v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 76.72 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_34Bx2_MoE_v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 71.01 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_34Bx2_MoE_v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_34Bx2_MoE_v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 73.01 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_34Bx2_MoE_v0.1 name: Open LLM Leaderboard --- # FusionNet_34Bx2_MoE_v0.1 Fine-tuned model on English language using MoE method. The improved version from FusionNet_34Bx2_MoE. ## Model description The FusionNet_34Bx2_MoE_v0.1 is a model to experiment with the MoE method, which could significantly increase the performance of the original model. The FusionNet has 60.8B parameters, and this model is fine-tuned. Enjoy! # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TomGrc__FusionNet_34Bx2_MoE_v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |77.38| |AI2 Reasoning Challenge (25-Shot)|73.72| |HellaSwag (10-Shot) |86.46| |MMLU (5-Shot) |76.72| |TruthfulQA (0-shot) |71.01| |Winogrande (5-shot) |83.35| |GSM8k (5-shot) |73.01|
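A minimal usage sketch (assumed, not part of the original card). Note that a ~60.8B-parameter MoE model needs several GPUs or aggressive quantization to run; `device_map="auto"` simply spreads the weights across whatever devices are available.

```python
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="TomGrc/FusionNet_34Bx2_MoE_v0.1",
    device_map="auto",
)

print(generate("Briefly explain what a mixture-of-experts model is.", max_new_tokens=64)[0]["generated_text"])
```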
kalisai/Nusantara-1.8b-Indo-Chat
kalisai
2024-03-14T06:43:30Z
759
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "convAI", "id", "en", "dataset:argilla/OpenHermes2.5-dpo-binarized-alpha", "dataset:wikimedia/wikipedia", "dataset:FreedomIntelligence/evol-instruct-indonesian", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-05T16:25:22Z
--- library_name: transformers widget: - messages: - role: system content: >- Anda adalah seorang konselor karir. User akan memberi Anda seorang individu mencari bimbingan dalam kehidupan profesional mereka, dan tugas Anda adalah membantu mereka dalam menentukan karir apa yang paling cocok bagi mereka berdasarkan keterampilan mereka, minat, dan pengalaman. Anda juga harus melakukan penelitian terhadap berbagai hal tersebut pilihan yang tersedia, jelaskan tren pasar kerja di berbagai industri, Dan saran tentang kualifikasi mana yang akan bermanfaat untuk mengejar bidang tertentu. - role: user content: Halo Say! - role: assistant content: Eh hai, Say ! Apa yang bisa aku bantu? - role: user content: >- Saya tertarik untuk mengembangkan karir di bidang perbankan. Apa yang dapat kamu rekomendasikan ke saya? - messages: - role: system content: >- Anda adalah asisten yang berpengetahuan luas. Bantu user sebanyak yang Anda bisa. - role: user content: Bagaimana caranya menjadi lebih aktif di Bulan Puasa? - messages: - role: system content: Anda adalah asisten yang membantu dan memberikan tanggapan yang cerdas. - role: user content: Haloooo Bund! - role: assistant content: Halo! Apa yang bisa saya bantu? - role: user content: >- Saya perlu menu buka puasa yang segar di Bulan Ramadhan ini, makanan khas Indonesia apa saja yang cocok untuk menu buka puasa di Bulan Ramadhan? - messages: - role: system content: >- Anda adalah asisten yang sangat kreatif. Pengguna akan memberi Anda tugas, yang harus Anda selesaikan dengan seluruh pengetahuan Anda. - role: user content: >- Tulis latar belakang cerita novel tentang seorang wanita yang ingin memberantas geng 9 Naga. inference: parameters: max_new_tokens: 128 penalty_alpha: 0.5 top_k: 4 pipeline_tag: text-generation tags: - conversational - convAI license: apache-2.0 language: - id - en datasets: - argilla/OpenHermes2.5-dpo-binarized-alpha - wikimedia/wikipedia - FreedomIntelligence/evol-instruct-indonesian --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642b04e4ecec03b44649e318/6CCm81lqJ-i7aB38MtrAY.jpeg) ### Model Description Nusantara is a series of Open Weight Language Model of Bahasa Indonesia (Indonesia language). Nusantara is based from Qwen1.5 Language Model, finetuned by domain specific of datasets. As Chat-implemented language model, Nusantara is capable to do Question-Answering and respond to instructions given in Bahasa Indonesia. Due to limited resources, only 0.8B, 1.8B, 2.7B, 4B and 7B models are available. If you're interested in funding this project for further development, specific usage, or larger parameters, please contact us. - **Finetuned by:** [Kalis AI](https://huggingface.co/kalisai) - **Funded by:** Self-funded - **Model type:** transformer-based decoder-only language model - **Language(s):** Bahasa Indonesia (id), English (en) - **License:** Nusantara is licensed under Apache-2.0, but any usage of this model should comply with [Qwen License](https://huggingface.co/Qwen/Qwen1.5-4B/blob/main/LICENSE) - **Finetuned from model:** [Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B/tree/main) ### Attentions! <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Due to certain circumstances, models with <4B parameters tend to hallucinate easily. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
Because this model is also trained with uncensored datasets, there is the possibility of negative impacts arising from using this model. All kinds of impacts that arise as a result of using this model are entirely the responsibility of the user. The model maker is not responsible for any risks incurred. ## How to Get Started with the Model Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "kalisai/Nusantara-1.8B-Indo-Chat", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("kalisai/Nusantara-1.8B-Indo-Chat") prompt = "Berikan saya resep memasak nasi goreng yang lezat." messages = [ {"role": "system", "content": "Kamu adalah Nusantara, asisten AI yang pintar."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Citation If you use the Nusantara language model in your research or project, please cite it as: ``` @misc{zulfikar_aji_kusworo_2024, title={Nusantara: A Series of Versatile Open Weight Language Model of Bahasa Indonesia}, author={Zulfikar Aji Kusworo}, publisher={Hugging Face} journal={Hugging Face Repository}, year={2024} url = {https://huggingface.co/kalisai} } ```
mradermacher/Miqu-MS-70B-GGUF
mradermacher
2024-05-06T05:20:07Z
759
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Undi95/Miqu-MS-70B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-03T19:43:26Z
--- base_model: Undi95/Miqu-MS-70B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Undi95/Miqu-MS-70B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Miqu-MS-70B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.Q2_K.gguf) | Q2_K | 25.9 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.IQ3_XS.gguf) | IQ3_XS | 28.7 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.IQ3_S.gguf) | IQ3_S | 30.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.Q3_K_S.gguf) | Q3_K_S | 30.3 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.IQ3_M.gguf) | IQ3_M | 31.4 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.Q3_K_M.gguf) | Q3_K_M | 33.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.Q3_K_L.gguf) | Q3_K_L | 36.6 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.IQ4_XS.gguf) | IQ4_XS | 37.6 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.Q4_K_S.gguf) | Q4_K_S | 39.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.Q4_K_M.gguf) | Q4_K_M | 41.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.Q5_K_S.gguf) | Q5_K_S | 47.9 | | | [GGUF](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.Q5_K_M.gguf) | Q5_K_M | 49.2 | | | [PART 1](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.Q6_K.gguf.part2of2) | Q6_K | 57.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Miqu-MS-70B-GGUF/resolve/main/Miqu-MS-70B.Q8_0.gguf.part2of2) | Q8_0 | 73.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
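A minimal Python sketch for fetching a quant and joining the multi-part files (assumed usage, not from the original card; the Q6_K filenames follow the table above). The resulting single GGUF file can then be loaded with llama.cpp or any compatible runtime.

```python
import shutil
from huggingface_hub import hf_hub_download

repo_id = "mradermacher/Miqu-MS-70B-GGUF"
parts = [
    hf_hub_download(repo_id, f"Miqu-MS-70B.Q6_K.gguf.part{i}of2") for i in (1, 2)
]

# Concatenate the parts in order (equivalent to `cat part1 part2 > out.gguf`).
with open("Miqu-MS-70B.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```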
bartowski/Qwen1.5-32B-Chat-GGUF
bartowski
2024-04-05T23:45:58Z
759
0
null
[ "gguf", "chat", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-04-05T22:19:10Z
--- license: other license_name: tongyi-qianwen license_link: >- https://huggingface.co/Qwen/Qwen1.5-32B-Chat/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat quantized_by: bartowski --- ## Llamacpp Quantizations of Qwen1.5-32B-Chat Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2589">b2589</a> for quantization. Original model: https://huggingface.co/Qwen/Qwen1.5-32B-Chat Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Qwen1.5-32B-Chat-Q8_0.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-Q8_0.gguf) | Q8_0 | 34.55GB | Extremely high quality, generally unneeded but max available quant. | | [Qwen1.5-32B-Chat-Q6_K.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-Q6_K.gguf) | Q6_K | 26.67GB | Very high quality, near perfect, *recommended*. | | [Qwen1.5-32B-Chat-Q5_K_M.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-Q5_K_M.gguf) | Q5_K_M | 23.08GB | High quality, very usable. | | [Qwen1.5-32B-Chat-Q5_K_S.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-Q5_K_S.gguf) | Q5_K_S | 22.46GB | High quality, very usable. | | [Qwen1.5-32B-Chat-Q5_0.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-Q5_0.gguf) | Q5_0 | 22.46GB | High quality, older format, generally not recommended. | | [Qwen1.5-32B-Chat-Q4_K_M.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-Q4_K_M.gguf) | Q4_K_M | 19.69GB | Good quality, uses about 4.83 bits per weight. | | [Qwen1.5-32B-Chat-Q4_K_S.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-Q4_K_S.gguf) | Q4_K_S | 18.64GB | Slightly lower quality with small space savings. | | [Qwen1.5-32B-Chat-IQ4_NL.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-IQ4_NL.gguf) | IQ4_NL | 18.68GB | Decent quality, similar to Q4_K_S, new method of quanting, | | [Qwen1.5-32B-Chat-IQ4_XS.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-IQ4_XS.gguf) | IQ4_XS | 17.73GB | Decent quality, new method with similar performance to Q4. | | [Qwen1.5-32B-Chat-Q4_0.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-Q4_0.gguf) | Q4_0 | 18.49GB | Decent quality, older format, generally not recommended. | | [Qwen1.5-32B-Chat-Q3_K_L.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-Q3_K_L.gguf) | Q3_K_L | 17.11GB | Lower quality but usable, good for low RAM availability. | | [Qwen1.5-32B-Chat-Q3_K_M.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-Q3_K_M.gguf) | Q3_K_M | 15.81GB | Even lower quality. | | [Qwen1.5-32B-Chat-IQ3_M.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-IQ3_M.gguf) | IQ3_M | 14.70GB | Medium-low quality, new method with decent performance. | | [Qwen1.5-32B-Chat-IQ3_S.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-IQ3_S.gguf) | IQ3_S | 14.32GB | Lower quality, new method with decent performance, recommended over Q3 quants. 
| | [Qwen1.5-32B-Chat-Q3_K_S.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-Q3_K_S.gguf) | Q3_K_S | 14.28GB | Low quality, not recommended. | | [Qwen1.5-32B-Chat-Q2_K.gguf](https://huggingface.co/bartowski/Qwen1.5-32B-Chat-GGUF/blob/main/Qwen1.5-32B-Chat-Q2_K.gguf) | Q2_K | 12.22GB | Extremely low quality, *not* recommended. | Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
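A minimal download sketch (assumed usage, not part of the original card): fetch a single quant file from the table above with `huggingface_hub` instead of cloning the whole branch.

```python
from huggingface_hub import hf_hub_download

# Download one quant; adjust the filename to the quant you want.
path = hf_hub_download(
    repo_id="bartowski/Qwen1.5-32B-Chat-GGUF",
    filename="Qwen1.5-32B-Chat-Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```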
abmorton/wall-potfiller
abmorton
2024-06-30T23:47:47Z
759
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-06-30T23:43:40Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### wall-potfiller Dreambooth model trained by abmorton with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
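A minimal diffusers sketch (assumed usage; the card itself only links the A1111 Colab). The DreamBooth trigger word is not stated in the card, so "wall-potfiller" is used here as a guess.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "abmorton/wall-potfiller", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of wall-potfiller", num_inference_steps=30).images[0]
image.save("wall-potfiller-sample.png")
```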
KoichiYasuoka/roberta-small-japanese-luw-upos
KoichiYasuoka
2022-09-18T10:45:09Z
758
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "japanese", "pos", "dependency-parsing", "ja", "dataset:universal_dependencies", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - "ja" tags: - "japanese" - "token-classification" - "pos" - "dependency-parsing" datasets: - "universal_dependencies" license: "cc-by-sa-4.0" pipeline_tag: "token-classification" widget: - text: "国境の長いトンネルを抜けると雪国であった。" --- # roberta-small-japanese-luw-upos ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-small-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-small-japanese-luw-upos") pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple") nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)] print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/roberta-small-japanese-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
huggingartists/florence-the-machine
huggingartists
2021-08-10T09:03:06Z
758
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/florence-the-machine", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/florence-the-machine tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/99d09eb55276442d715ac14f06173a4e.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Florence + The Machine</div> <a href="https://genius.com/artists/florence-the-machine"> <div style="text-align: center; font-size: 14px;">@florence-the-machine</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Florence + The Machine. Dataset is available [here](https://huggingface.co/datasets/huggingartists/florence-the-machine). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/florence-the-machine") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/icjt5evm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Florence + The Machine's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1zfb9y24) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1zfb9y24/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/florence-the-machine') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/florence-the-machine") model = AutoModelWithLMHead.from_pretrained("huggingartists/florence-the-machine") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingtweets/minimalaq
huggingtweets
2021-09-10T20:50:07Z
758
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/minimalaq/1631307002582/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1403107651291058185/3CBTwj6__400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">MINIMAL</div> <div style="text-align: center; font-size: 14px;">@minimalaq</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from MINIMAL. | Data | MINIMAL | | --- | --- | | Tweets downloaded | 3238 | | Retweets | 288 | | Short tweets | 533 | | Tweets kept | 2417 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/v6z30t80/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @minimalaq's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5kd2ws7g) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5kd2ws7g/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/minimalaq') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
timm/vit_base_patch32_clip_224.laion2b_ft_in1k
timm
2023-05-06T00:03:52Z
758
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:laion-2b", "arxiv:2212.07143", "arxiv:2210.08402", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
2022-11-01T23:00:20Z
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - laion-2b --- # Model card for vit_base_patch32_clip_224.laion2b_ft_in1k A Vision Transformer (ViT) image classification model. Pretrained on LAION-2B image-text pairs using OpenCLIP. Fine-tuned on ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143). ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 88.2 - GMACs: 4.4 - Activations (M): 4.2 - Image size: 224 x 224 - **Papers:** - OpenCLIP: https://github.com/mlfoundations/open_clip - Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143 - LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** - LAION-2B ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_base_patch32_clip_224.laion2b_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_base_patch32_clip_224.laion2b_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 50, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` ```bibtex @article{cherti2022reproducible, title={Reproducible scaling laws for contrastive language-image learning}, author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia}, journal={arXiv preprint arXiv:2212.07143}, year={2022} } ``` ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
AlexWortega/instruct_rugptSmall
AlexWortega
2023-03-10T12:23:38Z
758
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "finance", "code", "en", "dataset:IlyaGusev/habr", "dataset:Den4ikAI/russian_instructions", "dataset:wiki_qa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-03-07T06:46:31Z
---
datasets:
- IlyaGusev/habr
- Den4ikAI/russian_instructions
- wiki_qa
inference:
  parameters:
    max_new_tokens: 32
    temperature: 1
    top_k: 50
    top_p: 0.7
    do_sample: true
license: apache-2.0
language:
- en
pipeline_tag: text-generation
widget:
- text: Чем отличается лось от ежа?
  example_title: Question Answering
- text: Как выпросить повышение?
  example_title: Logical reasoning
- text: Какая температура закипания азота?
  example_title: Scientific knowledge
library_name: transformers
tags:
- finance
- code
---

<h1 style="font-size: 42px">Instructions ruGPT Small v0.1a</h1>

# Model Summary

> I fine-tuned the small ruGPT model on a dataset of instructions, Habr posts, QA pairs, and code.

# Quick Start

```python
from transformers import pipeline

pipe = pipeline(model='AlexWortega/instruct_rugptSmall')
pipe('''Как собрать питон код?''')
```

or

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AlexWortega/instruct_rugptSmall")
model = AutoModelForCausalLM.from_pretrained("AlexWortega/instruct_rugptSmall")
```

A fuller generation sketch using these objects is given at the end of this card.

# License

The weights of Instructions ruGPT Small v0.1a are licensed under version 2.0 of the Apache License.

## Hyperparameters

I used Novograd with a learning rate of 2e-5 and a global batch size of 6 (3 per data-parallel worker). Training used both data parallelism and pipeline parallelism. During training, input sequences were truncated to 1024 tokens; sequences shorter than 1024 tokens were concatenated into one long sequence to improve data efficiency.

# References

# Metrics

SOON

## BibTeX entry and citation info

```bibtex
@article{wortega2023gpt2xl,
  title={GPT2xl is underrated task solver},
  author={Nickolich Aleksandr and Karina Romanova and Arseniy Shahmatov and Maksim Gersimenko},
  year={2023}
}
```
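As a continuation of the Quick Start above, here is a minimal, hypothetical generation sketch (not part of the original card). It reuses the sampling parameters declared in the card's inference config (max_new_tokens=32, temperature=1, top_k=50, top_p=0.7, do_sample=True); the prompt string is only an example.

```python
# Minimal generation sketch; loads the model as in the Quick Start above.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AlexWortega/instruct_rugptSmall")
model = AutoModelForCausalLM.from_pretrained("AlexWortega/instruct_rugptSmall")

prompt = "Как собрать питон код?"  # example prompt; any instruction-style text works
inputs = tokenizer(prompt, return_tensors="pt")

# sampling parameters mirror the card's inference config
output_ids = model.generate(
    **inputs,
    max_new_tokens=32,
    temperature=1.0,
    top_k=50,
    top_p=0.7,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,  # avoids a warning when no pad token is set
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```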
timm/pit_ti_224.in1k
timm
2023-04-26T00:08:25Z
758
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2103.16302", "license:apache-2.0", "region:us" ]
image-classification
2023-04-26T00:08:19Z
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for pit_ti_224.in1k

A PiT (Pooling based Vision Transformer) image classification model. Trained on ImageNet-1k by paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 4.8
  - GMACs: 0.7
  - Activations (M): 6.2
  - Image size: 224 x 224
- **Papers:**
  - Rethinking Spatial Dimensions of Vision Transformers: https://arxiv.org/abs/2103.16302
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/naver-ai/pit

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('pit_ti_224.in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'pit_ti_224.in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 27, 27])
    #  torch.Size([1, 128, 14, 14])
    #  torch.Size([1, 256, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'pit_ti_224.in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1, 256) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

A short sketch that compares two such embeddings by cosine similarity follows the Citation section at the end of this card.

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@inproceedings{heo2021pit,
  title={Rethinking Spatial Dimensions of Vision Transformers},
  author={Byeongho Heo and Sangdoo Yun and Dongyoon Han and Sanghyuk Chun and Junsuk Choe and Seong Joon Oh},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2021},
}
```
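Continuing the Image Embeddings example above, here is a minimal, hypothetical sketch (not from the original card) that compares two images by cosine similarity of their pooled embeddings. It reuses `model`, `transforms`, and `img` from that example; `img2` is a stand-in for any second PIL image.

```python
# Minimal sketch: cosine similarity between pooled embeddings of two images.
# Assumptions: `model` was created with num_classes=0 and `transforms`/`img`
# come from the Image Embeddings example above.
import torch
import torch.nn.functional as F

img2 = img  # stand-in; replace with a different image for a meaningful comparison

with torch.no_grad():
    emb1 = model(transforms(img).unsqueeze(0))   # (1, num_features)
    emb2 = model(transforms(img2).unsqueeze(0))  # (1, num_features)

similarity = F.cosine_similarity(emb1, emb2)  # tensor of shape (1,)
print(similarity.item())  # 1.0 when both inputs are the same image
```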