| Field | Type | Range |
| --- | --- | --- |
| pipeline_tag | string (categorical) | 48 distinct values |
| library_name | string (categorical) | 198 distinct values |
| text | string | 1 to 900k characters |
| metadata | string | 2 to 438k characters |
| id | string | 5 to 122 characters |
| last_modified | always null | n/a |
| tags | list | 1 to 1.84k items |
| sha | always null | n/a |
| created_at | string | 25 characters (fixed) |
| arxiv | list | 0 to 201 items |
| languages | list | 0 to 1.83k items |
| tags_str | string | 17 to 9.34k characters |
| text_str | string | 0 to 389k characters |
| text_lists | list | 0 to 722 items |
| processed_texts | list | 1 to 723 items |
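For orientation, a minimal sketch of iterating over rows with this schema using the `datasets` library; the repo id `user/model-cards-dump` is a hypothetical placeholder, not the actual dataset name:

```python
from datasets import load_dataset

# "user/model-cards-dump" is a hypothetical placeholder for this dataset's repo id.
ds = load_dataset("user/model-cards-dump", split="train")

row = ds[0]
print(row["pipeline_tag"], row["library_name"], row["id"])
print(len(row["tags"]), "tags;", len(row["text"]), "characters of card text")
```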
text-to-image
diffusers
# AutoTrain SDXL LoRA DreamBooth - rfhuang/maui-no-vest-800

<Gallery />

## Model description

These are rfhuang/maui-no-vest-800 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: None.

## Trigger words

You should use "A photo of a dog named Maui in random situations, taken from a smartphone camera" to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format. [Download](rfhuang/maui-no-vest-800/tree/main) them in the Files & versions tab.
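The card ships no usage snippet; below is a minimal sketch of how DreamBooth LoRA weights like these are typically attached to the SDXL base pipeline with diffusers. The prompt is the card's trigger phrase; the GPU device and output file name are assumptions:

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model the LoRA was trained against.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA adapter weights from this repo.
pipe.load_lora_weights("rfhuang/maui-no-vest-800")

# The instance prompt from the card acts as the trigger phrase.
prompt = "A photo of a dog named Maui in random situations, taken from a smartphone camera"
image = pipe(prompt=prompt).images[0]
image.save("maui.png")
```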
{"license": "openrail++", "tags": ["autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "A photo of a dog named Maui in random situations, taken from a smartphone camera"}
rfhuang/maui-no-vest-800
null
[ "diffusers", "autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-14T20:51:01+00:00
[]
[]
TAGS #diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# AutoTrain SDXL LoRA DreamBooth - rfhuang/maui-no-vest-800 <Gallery /> ## Model description These are rfhuang/maui-no-vest-800 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use A photo of a dog named Maui in random situations, taken from a smartphone camera to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
[ "# AutoTrain SDXL LoRA DreamBooth - rfhuang/maui-no-vest-800\n\n<Gallery />", "## Model description\n\nThese are rfhuang/maui-no-vest-800 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.", "## Trigger words\n\nYou should use A photo of a dog named Maui in random situations, taken from a smartphone camera to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
[ "TAGS\n#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# AutoTrain SDXL LoRA DreamBooth - rfhuang/maui-no-vest-800\n\n<Gallery />", "## Model description\n\nThese are rfhuang/maui-no-vest-800 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.", "## Trigger words\n\nYou should use A photo of a dog named Maui in random situations, taken from a smartphone camera to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
zero-shot-classification
transformers
Multilingual mDeBERTa base model fine-tuned on the Text_emotions dataset. Dataset link: https://www.kaggle.com/datasets/nelgiriyewithana/emotions/data Fine-tuned for classifying text into sadness (0), joy (1), love (2), anger (3), fear (4), and surprise (5) emotions.
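No usage example accompanies the card. Although the repo is tagged zero-shot-classification (inherited from the NLI base model), the description says it is fine-tuned for six fixed emotion classes, so a plain text-classification pipeline should apply; whether the checkpoint's config already maps label indices to emotion names is an assumption:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ordinary text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="VivekMalipatel23/mDeBERTa-v3-base-text-emotion-classification",
)

# Index-to-emotion mapping as documented on the card.
emotions = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}

result = classifier("I can't believe how well this turned out!")
print(result)  # e.g. [{'label': '...', 'score': 0.97}]
```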
{"language": ["multilingual", "zh", "ja", "ar", "ko", "de", "fr", "es", "pt", "hi", "id", "it", "tr", "ru", "bn", "ur", "mr", "ta", "vi", "fa", "pl", "uk", "nl", "sv", "he", "sw", "ps"], "license": "mit", "library_name": "transformers", "tags": ["text-classification", "zero-shot-classification", "nli", "pytorch"], "pipeline_tag": "zero-shot-classification", "base_model": "MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7"}
VivekMalipatel23/mDeBERTa-v3-base-text-emotion-classification
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "zero-shot-classification", "nli", "multilingual", "zh", "ja", "ar", "ko", "de", "fr", "es", "pt", "hi", "id", "it", "tr", "ru", "bn", "ur", "mr", "ta", "vi", "fa", "pl", "uk", "nl", "sv", "he", "sw", "ps", "base_model:MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T20:52:03+00:00
[]
[ "multilingual", "zh", "ja", "ar", "ko", "de", "fr", "es", "pt", "hi", "id", "it", "tr", "ru", "bn", "ur", "mr", "ta", "vi", "fa", "pl", "uk", "nl", "sv", "he", "sw", "ps" ]
TAGS #transformers #pytorch #deberta-v2 #text-classification #zero-shot-classification #nli #multilingual #zh #ja #ar #ko #de #fr #es #pt #hi #id #it #tr #ru #bn #ur #mr #ta #vi #fa #pl #uk #nl #sv #he #sw #ps #base_model-MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7 #license-mit #autotrain_compatible #endpoints_compatible #region-us
Multilingual mDeBERTa base model fine-tuned on the Text_emotions dataset. Dataset link: URL Fine-tuned for classifying text into sadness (0), joy (1), love (2), anger (3), fear (4), and surprise (5) emotions.
[]
[ "TAGS\n#transformers #pytorch #deberta-v2 #text-classification #zero-shot-classification #nli #multilingual #zh #ja #ar #ko #de #fr #es #pt #hi #id #it #tr #ru #bn #ur #mr #ta #vi #fa #pl #uk #nl #sv #he #sw #ps #base_model-MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

dolly-v2-7b - bnb 8bits
- Model creator: https://huggingface.co/databricks/
- Original model: https://huggingface.co/databricks/dolly-v2-7b/

Original model description:
---
license: mit
language:
- en
library_name: transformers
inference: false
datasets:
- databricks/databricks-dolly-15k
---

# dolly-v2-7b Model Card

## Summary

Databricks' `dolly-v2-7b`, an instruction-following large language model trained on the Databricks machine learning platform, is licensed for commercial use. Based on `pythia-6.9b`, Dolly is trained on ~15k instruction/response fine-tuning records [`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. `dolly-v2-7b` is not a state-of-the-art model, but does exhibit surprisingly high-quality instruction-following behavior not characteristic of the foundation model on which it is based.

Dolly v2 is also available in these other model sizes:

* [dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b), a 12 billion parameter model based on `pythia-12b`
* [dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b), a 2.8 billion parameter model based on `pythia-2.8b`

Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on running inference for various GPU configurations.

**Owner**: Databricks, Inc.

## Model Overview

`dolly-v2-7b` is a 6.9 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from [EleutherAI's](https://www.eleuther.ai/) [Pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b) and fine-tuned on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA).

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed. In a Databricks notebook you could run:

```python
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```

The instruction-following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline` found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required. Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported, in order to reduce memory usage. It does not appear to impact output quality, and it is fine to remove it if there is sufficient memory.
```python
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-7b", torch_dtype=torch.bfloat16,
                         trust_remote_code=True, device_map="auto")
```

You can then use the pipeline to answer instructions:

```python
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```

Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:

```python
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-7b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-7b", device_map="auto",
                                             torch_dtype=torch.bfloat16)

generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```

### LangChain Usage

To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned and the default for the pipeline is to return only the new text.

```python
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-7b", torch_dtype=torch.bfloat16,
                         trust_remote_code=True, device_map="auto", return_full_text=True)
```

You can create a prompt that either has only an instruction or has an instruction with context:

```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline

# template for an instruction with no input
prompt = PromptTemplate(
    input_variables=["instruction"],
    template="{instruction}")

# template for an instruction with input
prompt_with_context = PromptTemplate(
    input_variables=["instruction", "context"],
    template="{instruction}\n\nInput:\n{context}")

hf_pipeline = HuggingFacePipeline(pipeline=generate_text)

llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```

Example predicting using a simple instruction:

```python
print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip())
```

Example predicting using an instruction with context:

```python
context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman,
and Founding Father who served as the first president of the United States from 1789 to 1797."""

print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
```

## Known Limitations

### Performance Limitations

**`dolly-v2-7b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses.

The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, `dolly-v2-7b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that `dolly-v2-7b` does not have some capabilities, such as well-formatted letter writing, present in the original model.

### Dataset Limitations

Like all language models, `dolly-v2-7b` reflects the content and limitations of its training corpuses.

- **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations.
- **`databricks-dolly-15k`**: The training data on which `dolly-v2-7b` is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.

Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations.

### Benchmark Metrics

Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness); model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-7b` is not state of the art, and in fact underperforms `dolly-v1-6b` in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine-tuning datasets, but a robust statement as to the sources of these variations requires further study.
| model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean |
| --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------|
| EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 |
| EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 |
| databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 |
| EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 |
| EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 |
| databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 |
| databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 |
| databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 |
| EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 |

# Citation

```
@online{DatabricksBlog2023DollyV2,
    author    = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
    title     = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
    year      = {2023},
    url       = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
    urldate   = {2023-06-30}
}
```

# Happy Hacking!
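The quantized repo itself documents no loading code; below is a minimal sketch of loading this pre-serialized bitsandbytes 8-bit checkpoint directly. It assumes `bitsandbytes` and `accelerate` are installed and that the quantization config was saved with the weights, as is standard for such exports; the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The 8-bit quantization config ships with the checkpoint, so
# from_pretrained should restore it without extra arguments.
model_id = "RichardErkhov/databricks_-_dolly-v2-7b-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain nuclear fission briefly.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```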
{}
RichardErkhov/databricks_-_dolly-v2-7b-8bits
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-14T20:53:21+00:00
[]
[]
TAGS #transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models dolly-v2-7b - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: mit language: * en library\_name: transformers inference: false datasets: * databricks/databricks-dolly-15k --- dolly-v2-7b Model Card ====================== Summary ------- Databricks' 'dolly-v2-7b', an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. Based on 'pythia-6.9b', Dolly is trained on ~15k instruction/response fine tuning records 'databricks-dolly-15k' generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. 'dolly-v2-7b' is not a state-of-the-art model, but does exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. Dolly v2 is also available in these other models sizes: * dolly-v2-12b, a 12 billion parameter based on 'pythia-12b' * dolly-v2-3b, a 2.8 billion parameter based on 'pythia-2.8b' Please refer to the dolly GitHub repo for tips on running inference for various GPU configurations. Owner: Databricks, Inc. Model Overview -------------- 'dolly-v2-7b' is a 6.9 billion parameter causal language model created by Databricks that is derived from EleutherAI's Pythia-6.9b and fine-tuned on a ~15K record instruction corpus generated by Databricks employees and released under a permissive license (CC-BY-SA) Usage ----- To use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' and 'accelerate' libraries installed. In a Databricks notebook you could run: The instruction following pipeline can be loaded using the 'pipeline' function as shown below. This loads a custom 'InstructionTextGenerationPipeline' found in the model repo here, which is why 'trust\_remote\_code=True' is required. Including 'torch\_dtype=torch.bfloat16' is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality. It is also fine to remove it if there is sufficient memory. You can then use the pipeline to answer instructions: Alternatively, if you prefer to not use 'trust\_remote\_code=True' you can download instruct\_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ### LangChain Usage To use the pipeline with LangChain, you must set 'return\_full\_text=True', as LangChain expects the full text to be returned and the default for the pipeline is to only return the new text. You can create a prompt that either has only an instruction or has an instruction with context: Example predicting using a simple instruction: Example predicting using an instruction with context: Known Limitations ----------------- ### Performance Limitations 'dolly-v2-7b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses. The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. 
In particular, 'dolly-v2-7b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that 'dolly-v2-7b' does not have some capabilities, such as well-formatted letter writing, present in the original model. ### Dataset Limitations Like all language models, 'dolly-v2-7b' reflects the content and limitations of its training corpuses. * The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations. * 'databricks-dolly-15k': The training data on which 'dolly-v2-7b' is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large. Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations. ### Benchmark Metrics Below you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness; model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that 'dolly-v2-7b' is not state of the art, and in fact underperforms 'dolly-v1-6b' in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine tuning datasets, but a robust statement as to the sources of these variations requires further study. Happy Hacking! ==============
[ "### LangChain Usage\n\n\nTo use the pipeline with LangChain, you must set 'return\\_full\\_text=True', as LangChain expects the full text to be returned\nand the default for the pipeline is to only return the new text.\n\n\nYou can create a prompt that either has only an instruction or has an instruction with context:\n\n\nExample predicting using a simple instruction:\n\n\nExample predicting using an instruction with context:\n\n\nKnown Limitations\n-----------------", "### Performance Limitations\n\n\n'dolly-v2-7b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform\ncompetitively with more modern model architectures or models subject to larger pretraining corpuses.\n\n\nThe Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. \n\nIn particular, 'dolly-v2-7b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,\ndates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.\nMoreover, we find that 'dolly-v2-7b' does not have some capabilities, such as well-formatted letter writing, present in the original model.", "### Dataset Limitations\n\n\nLike all language models, 'dolly-v2-7b' reflects the content and limitations of its training corpuses.\n\n\n* The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,\nit contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly\nin the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit\nassociations.\n* 'databricks-dolly-15k': The training data on which 'dolly-v2-7b' is instruction tuned represents natural language instructions generated\nby Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages\nfor instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or\npersonally identifying information about non-public figures, but it may contain typos and factual errors.\nThe dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects\nthe interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.\n\n\nDatabricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that\nmaximize the potential of all individuals and organizations.", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness;\nmodel results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that 'dolly-v2-7b' is not state of the art,\nand in fact underperforms 'dolly-v1-6b' in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine tuning datasets,\nbut a robust statement as to the sources of these variations requires further study.\n\n\n\nHappy Hacking!\n==============" ]
[ "TAGS\n#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "### LangChain Usage\n\n\nTo use the pipeline with LangChain, you must set 'return\\_full\\_text=True', as LangChain expects the full text to be returned\nand the default for the pipeline is to only return the new text.\n\n\nYou can create a prompt that either has only an instruction or has an instruction with context:\n\n\nExample predicting using a simple instruction:\n\n\nExample predicting using an instruction with context:\n\n\nKnown Limitations\n-----------------", "### Performance Limitations\n\n\n'dolly-v2-7b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform\ncompetitively with more modern model architectures or models subject to larger pretraining corpuses.\n\n\nThe Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. \n\nIn particular, 'dolly-v2-7b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,\ndates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.\nMoreover, we find that 'dolly-v2-7b' does not have some capabilities, such as well-formatted letter writing, present in the original model.", "### Dataset Limitations\n\n\nLike all language models, 'dolly-v2-7b' reflects the content and limitations of its training corpuses.\n\n\n* The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,\nit contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly\nin the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit\nassociations.\n* 'databricks-dolly-15k': The training data on which 'dolly-v2-7b' is instruction tuned represents natural language instructions generated\nby Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages\nfor instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or\npersonally identifying information about non-public figures, but it may contain typos and factual errors.\nThe dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects\nthe interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.\n\n\nDatabricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that\nmaximize the potential of all individuals and organizations.", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness;\nmodel results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that 'dolly-v2-7b' is not state of the art,\nand in fact underperforms 'dolly-v1-6b' in some evaluation benchmarks. 
We believe this owes to the composition and size of the underlying fine tuning datasets,\nbut a robust statement as to the sources of these variations requires further study.\n\n\n\nHappy Hacking!\n==============" ]
automatic-speech-recognition
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
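The template's getting-started section is empty, but the repo tags identify a wav2vec2 CTC checkpoint for speech recognition, so standard ASR pipeline usage should apply. A minimal, assumed sketch; the audio file name and the 16 kHz mono input format are assumptions, not from the card:

```python
from transformers import pipeline

# Generic ASR inference for a wav2vec2 checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="SpideyDLK/wav2vec2-large-xls-r-300m-sinhala-aug-data-with-original-split-part2",
)

# "sample.wav" is a hypothetical 16 kHz mono recording.
print(asr("sample.wav")["text"])
```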
{"library_name": "transformers", "tags": []}
SpideyDLK/wav2vec2-large-xls-r-300m-sinhala-aug-data-with-original-split-part2
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-14T20:57:33+00:00
[ "1910.09700" ]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# feizhe/vit-base-patch16-224-in21k-pheno-run5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:

- Train Loss: 0.0782
- Train Accuracy: 0.9985
- Train Top-3-accuracy: 1.0
- Validation Loss: 1.4406
- Validation Accuracy: 0.5731
- Validation Top-3-accuracy: 0.9298
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1615, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16

### Training results

| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.7826 | 0.7716 | 0.9705 | 1.1364 | 0.5965 | 0.9532 | 0 |
| 0.1564 | 0.9891 | 1.0 | 1.3742 | 0.5731 | 0.9181 | 1 |
| 0.0782 | 0.9985 | 1.0 | 1.4406 | 0.5731 | 0.9298 | 2 |

### Framework versions

- Transformers 4.31.0
- TensorFlow 2.10.0
- Datasets 2.18.0
- Tokenizers 0.13.3
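The card carries no inference snippet; below is a minimal sketch of image-classification inference with the TF weights this repo ships. The input file `leaf.jpg` is a hypothetical example, and the choice of the TF auto classes follows the repo's `tf` tag:

```python
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

model_id = "feizhe/vit-base-patch16-224-in21k-pheno-run5"
processor = AutoImageProcessor.from_pretrained(model_id)
model = TFAutoModelForImageClassification.from_pretrained(model_id)

# "leaf.jpg" is a hypothetical input image.
image = Image.open("leaf.jpg")
inputs = processor(images=image, return_tensors="tf")

logits = model(**inputs).logits
pred = int(logits.numpy().argmax(axis=-1)[0])
print(model.config.id2label.get(pred, str(pred)))
```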
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "feizhe/vit-base-patch16-224-in21k-pheno-run5", "results": []}]}
feizhe/vit-base-patch16-224-in21k-pheno-run5
null
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:02:58+00:00
[]
[]
TAGS #transformers #tf #vit #image-classification #generated_from_keras_callback #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
feizhe/vit-base-patch16-224-in21k-pheno-run5 ============================================ This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.0782 * Train Accuracy: 0.9985 * Train Top-3-accuracy: 1.0 * Validation Loss: 1.4406 * Validation Accuracy: 0.5731 * Validation Top-3-accuracy: 0.9298 * Epoch: 2 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'inner\_optimizer': {'class\_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning\_rate': {'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 1615, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01}}, 'dynamic': True, 'initial\_scale': 32768.0, 'dynamic\_growth\_steps': 2000} * training\_precision: mixed\_float16 ### Training results ### Framework versions * Transformers 4.31.0 * TensorFlow 2.10.0 * Datasets 2.18.0 * Tokenizers 0.13.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'inner\\_optimizer': {'class\\_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 1615, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}}, 'dynamic': True, 'initial\\_scale': 32768.0, 'dynamic\\_growth\\_steps': 2000}\n* training\\_precision: mixed\\_float16", "### Training results", "### Framework versions\n\n\n* Transformers 4.31.0\n* TensorFlow 2.10.0\n* Datasets 2.18.0\n* Tokenizers 0.13.3" ]
[ "TAGS\n#transformers #tf #vit #image-classification #generated_from_keras_callback #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'inner\\_optimizer': {'class\\_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 1615, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}}, 'dynamic': True, 'initial\\_scale': 32768.0, 'dynamic\\_growth\\_steps': 2000}\n* training\\_precision: mixed\\_float16", "### Training results", "### Framework versions\n\n\n* Transformers 4.31.0\n* TensorFlow 2.10.0\n* Datasets 2.18.0\n* Tokenizers 0.13.3" ]
text-generation
transformers
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

dolly-v2-3b - bnb 4bits
- Model creator: https://huggingface.co/databricks/
- Original model: https://huggingface.co/databricks/dolly-v2-3b/

Original model description:
---
license: mit
language:
- en
library_name: transformers
inference: false
datasets:
- databricks/databricks-dolly-15k
---

# dolly-v2-3b Model Card

## Summary

Databricks' `dolly-v2-3b`, an instruction-following large language model trained on the Databricks machine learning platform, is licensed for commercial use. Based on `pythia-2.8b`, Dolly is trained on ~15k instruction/response fine-tuning records [`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. `dolly-v2-3b` is not a state-of-the-art model, but does exhibit surprisingly high-quality instruction-following behavior not characteristic of the foundation model on which it is based.

Dolly v2 is also available in these larger model sizes:

* [dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b), a 12 billion parameter model based on `pythia-12b`
* [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter model based on `pythia-6.9b`

Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on running inference for various GPU configurations.

**Owner**: Databricks, Inc.

## Model Overview

`dolly-v2-3b` is a 2.8 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from [EleutherAI's](https://www.eleuther.ai/) [Pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) and fine-tuned on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA).

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed. In a Databricks notebook you could run:

```python
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```

The instruction-following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline` found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required. Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported, in order to reduce memory usage. It does not appear to impact output quality, and it is fine to remove it if there is sufficient memory.
```python
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16,
                         trust_remote_code=True, device_map="auto")
```

You can then use the pipeline to answer instructions:

```python
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```

Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:

```python
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-3b", device_map="auto",
                                             torch_dtype=torch.bfloat16)

generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```

### LangChain Usage

To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned and the default for the pipeline is to return only the new text.

```python
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16,
                         trust_remote_code=True, device_map="auto", return_full_text=True)
```

You can create a prompt that either has only an instruction or has an instruction with context:

```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline

# template for an instruction with no input
prompt = PromptTemplate(
    input_variables=["instruction"],
    template="{instruction}")

# template for an instruction with input
prompt_with_context = PromptTemplate(
    input_variables=["instruction", "context"],
    template="{instruction}\n\nInput:\n{context}")

hf_pipeline = HuggingFacePipeline(pipeline=generate_text)

llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```

Example predicting using a simple instruction:

```python
print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip())
```

Example predicting using an instruction with context:

```python
context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman,
and Founding Father who served as the first president of the United States from 1789 to 1797."""

print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
```

## Known Limitations

### Performance Limitations

**`dolly-v2-3b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses.

The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, `dolly-v2-3b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that `dolly-v2-3b` does not have some capabilities, such as well-formatted letter writing, present in the original model.

### Dataset Limitations

Like all language models, `dolly-v2-3b` reflects the content and limitations of its training corpuses.

- **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations.
- **`databricks-dolly-15k`**: The training data on which `dolly-v2-3b` is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.

Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations.

### Benchmark Metrics

Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness); model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-3b` is not state of the art. It underperforms `dolly-v1-6b` in the evaluation benchmarks, which is not surprising considering it has half the number of parameters.
| model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean |
| --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------|
| EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 |
| EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 |
| databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 |
| EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 |
| EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 |
| databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 |
| databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 |
| databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 |
| EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 |

# Citation

```
@online{DatabricksBlog2023DollyV2,
    author    = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
    title     = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
    year      = {2023},
    url       = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
    urldate   = {2023-06-30}
}
```

# Happy Hacking!
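As a complement to loading a pre-quantized export like this one, a hedged sketch of producing an equivalent 4-bit model on the fly from the original checkpoint with a bitsandbytes config; the NF4 settings shown are illustrative defaults, not taken from this repo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative NF4 quantization settings; requires bitsandbytes and accelerate.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dolly-v2-3b",
    quantization_config=bnb_config,
    device_map="auto",
)
```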
{}
RichardErkhov/databricks_-_dolly-v2-3b-4bits
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-14T21:03:03+00:00
[]
[]
TAGS #transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models dolly-v2-3b - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: mit language: * en library\_name: transformers inference: false datasets: * databricks/databricks-dolly-15k --- dolly-v2-3b Model Card ====================== Summary ------- Databricks' 'dolly-v2-3b', an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. Based on 'pythia-2.8b', Dolly is trained on ~15k instruction/response fine tuning records 'databricks-dolly-15k' generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. 'dolly-v2-3b' is not a state-of-the-art model, but does exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. Dolly v2 is also available in these larger models sizes: * dolly-v2-12b, a 12 billion parameter based on 'pythia-12b' * dolly-v2-7b, a 6.9 billion parameter based on 'pythia-6.9b' Please refer to the dolly GitHub repo for tips on running inference for various GPU configurations. Owner: Databricks, Inc. Model Overview -------------- 'dolly-v2-3b' is a 2.8 billion parameter causal language model created by Databricks that is derived from EleutherAI's Pythia-2.8b and fine-tuned on a ~15K record instruction corpus generated by Databricks employees and released under a permissive license (CC-BY-SA) Usage ----- To use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' and 'accelerate' libraries installed. In a Databricks notebook you could run: The instruction following pipeline can be loaded using the 'pipeline' function as shown below. This loads a custom 'InstructionTextGenerationPipeline' found in the model repo here, which is why 'trust\_remote\_code=True' is required. Including 'torch\_dtype=torch.bfloat16' is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality. It is also fine to remove it if there is sufficient memory. You can then use the pipeline to answer instructions: Alternatively, if you prefer to not use 'trust\_remote\_code=True' you can download instruct\_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ### LangChain Usage To use the pipeline with LangChain, you must set 'return\_full\_text=True', as LangChain expects the full text to be returned and the default for the pipeline is to only return the new text. You can create a prompt that either has only an instruction or has an instruction with context: Example predicting using a simple instruction: Example predicting using an instruction with context: Known Limitations ----------------- ### Performance Limitations 'dolly-v2-3b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses. The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. 
In particular, 'dolly-v2-3b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that 'dolly-v2-3b' does not have some capabilities, such as well-formatted letter writing, present in the original model. ### Dataset Limitations Like all language models, 'dolly-v2-3b' reflects the content and limitations of its training corpuses. * The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations. * 'databricks-dolly-15k': The training data on which 'dolly-v2-3b' is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large. Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations. ### Benchmark Metrics Below you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness; model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that 'dolly-v2-3b' is not state of the art. It underperforms 'dolly-v1-6b' in the evaluation benchmarks, which is not surprising considering it has half the number of parameters. Happy Hacking! ==============
[ "### LangChain Usage\n\n\nTo use the pipeline with LangChain, you must set 'return\\_full\\_text=True', as LangChain expects the full text to be returned\nand the default for the pipeline is to only return the new text.\n\n\nYou can create a prompt that either has only an instruction or has an instruction with context:\n\n\nExample predicting using a simple instruction:\n\n\nExample predicting using an instruction with context:\n\n\nKnown Limitations\n-----------------", "### Performance Limitations\n\n\n'dolly-v2-3b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform\ncompetitively with more modern model architectures or models subject to larger pretraining corpuses.\n\n\nThe Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. \n\nIn particular, 'dolly-v2-3b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,\ndates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.\nMoreover, we find that 'dolly-v2-3b' does not have some capabilities, such as well-formatted letter writing, present in the original model.", "### Dataset Limitations\n\n\nLike all language models, 'dolly-v2-3b' reflects the content and limitations of its training corpuses.\n\n\n* The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,\nit contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly\nin the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit\nassociations.\n* 'databricks-dolly-15k': The training data on which 'dolly-v2-3b' is instruction tuned represents natural language instructions generated\nby Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages\nfor instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or\npersonally identifying information about non-public figures, but it may contain typos and factual errors.\nThe dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects\nthe interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.\n\n\nDatabricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that\nmaximize the potential of all individuals and organizations.", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness;\nmodel results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that 'dolly-v2-3b' is not state of the art.\nIt underperforms 'dolly-v1-6b' in the evaluation benchmarks, which is not surprising considering it has half the number of parameters.\n\n\n\nHappy Hacking!\n==============" ]
[ "TAGS\n#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "### LangChain Usage\n\n\nTo use the pipeline with LangChain, you must set 'return\\_full\\_text=True', as LangChain expects the full text to be returned\nand the default for the pipeline is to only return the new text.\n\n\nYou can create a prompt that either has only an instruction or has an instruction with context:\n\n\nExample predicting using a simple instruction:\n\n\nExample predicting using an instruction with context:\n\n\nKnown Limitations\n-----------------", "### Performance Limitations\n\n\n'dolly-v2-3b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform\ncompetitively with more modern model architectures or models subject to larger pretraining corpuses.\n\n\nThe Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. \n\nIn particular, 'dolly-v2-3b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,\ndates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.\nMoreover, we find that 'dolly-v2-3b' does not have some capabilities, such as well-formatted letter writing, present in the original model.", "### Dataset Limitations\n\n\nLike all language models, 'dolly-v2-3b' reflects the content and limitations of its training corpuses.\n\n\n* The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,\nit contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly\nin the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit\nassociations.\n* 'databricks-dolly-15k': The training data on which 'dolly-v2-3b' is instruction tuned represents natural language instructions generated\nby Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages\nfor instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or\npersonally identifying information about non-public figures, but it may contain typos and factual errors.\nThe dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects\nthe interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.\n\n\nDatabricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that\nmaximize the potential of all individuals and organizations.", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness;\nmodel results are sorted by geometric mean to produce an intelligible ordering. 
As outlined above, these results demonstrate that 'dolly-v2-3b' is not state of the art.\nIt underperforms 'dolly-v1-6b' in the evaluation benchmarks, which is not surprising considering it has half the number of parameters.\n\n\n\nHappy Hacking!\n==============" ]
text-generation
transformers
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


dolly-v2-3b - bnb 8bits
- Model creator: https://huggingface.co/databricks/
- Original model: https://huggingface.co/databricks/dolly-v2-3b/


Original model description:
---
license: mit
language:
- en
library_name: transformers
inference: false
datasets:
- databricks/databricks-dolly-15k
---

# dolly-v2-3b Model Card
## Summary

Databricks' `dolly-v2-3b` is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. Based on `pythia-2.8b`, Dolly is trained on ~15k instruction/response fine-tuning records [`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. `dolly-v2-3b` is not a state-of-the-art model, but does exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based.

Dolly v2 is also available in these larger model sizes:

* [dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b), a 12 billion parameter model based on `pythia-12b`
* [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter model based on `pythia-6.9b`

Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on running inference for various GPU configurations.

**Owner**: Databricks, Inc.

## Model Overview
`dolly-v2-3b` is a 2.8 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from [EleutherAI's](https://www.eleuther.ai/) [Pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) and fine-tuned on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA).

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed. In a Databricks notebook you could run:

```python
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```

The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline` found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required. Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality. It is also fine to remove it if there is sufficient memory.
```python
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```

You can then use the pipeline to answer instructions:

```python
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```

Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:

```python
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-3b", device_map="auto", torch_dtype=torch.bfloat16)

generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```

### LangChain Usage

To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned and the default for the pipeline is to only return the new text.

```python
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", return_full_text=True)
```

You can create a prompt that either has only an instruction or has an instruction with context:

```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline

# template for an instruction with no input
prompt = PromptTemplate(
    input_variables=["instruction"],
    template="{instruction}")

# template for an instruction with input
prompt_with_context = PromptTemplate(
    input_variables=["instruction", "context"],
    template="{instruction}\n\nInput:\n{context}")

hf_pipeline = HuggingFacePipeline(pipeline=generate_text)

llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```

Example predicting using a simple instruction:

```python
print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip())
```

Example predicting using an instruction with context:

```python
context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman, and Founding Father who served as the first president of the United States from 1789 to 1797."""

print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
```

## Known Limitations

### Performance Limitations

**`dolly-v2-3b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses.

The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, `dolly-v2-3b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that `dolly-v2-3b` does not have some capabilities, such as well-formatted letter writing, present in the original model.

### Dataset Limitations
Like all language models, `dolly-v2-3b` reflects the content and limitations of its training corpuses.

- **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations.

- **`databricks-dolly-15k`**: The training data on which `dolly-v2-3b` is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.

Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations.

### Benchmark Metrics

Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness); model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-3b` is not state of the art. It underperforms `dolly-v1-6b` in the evaluation benchmarks, which is not surprising considering it has half the number of parameters.
| model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean | | --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------| | EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 | | EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 | | databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 | | EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 | | EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 | | databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 | | databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 | | databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 | | EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 | # Citation ``` @online{DatabricksBlog2023DollyV2, author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin}, title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM}, year = {2023}, url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}, urldate = {2023-06-30} } ``` # Happy Hacking!
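As a closing note for this 8-bit export: if you prefer to reproduce an equivalent load from the original weights rather than pull this pre-quantized repo, a minimal bitsandbytes sketch (assuming the `bitsandbytes` and `accelerate` packages are installed) might look like this.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # int8 weights via bitsandbytes
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dolly-v2-3b",
    quantization_config=bnb_config,
    device_map="auto",
)

# Alternatively, load this repo's pre-quantized weights directly, assuming the
# 8-bit config is stored in the checkpoint:
# model = AutoModelForCausalLM.from_pretrained(
#     "RichardErkhov/databricks_-_dolly-v2-3b-8bits", device_map="auto")
```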
{}
RichardErkhov/databricks_-_dolly-v2-3b-8bits
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-14T21:04:43+00:00
[]
[]
TAGS #transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models dolly-v2-3b - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: mit language: * en library\_name: transformers inference: false datasets: * databricks/databricks-dolly-15k --- dolly-v2-3b Model Card ====================== Summary ------- Databricks' 'dolly-v2-3b', an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. Based on 'pythia-2.8b', Dolly is trained on ~15k instruction/response fine tuning records 'databricks-dolly-15k' generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. 'dolly-v2-3b' is not a state-of-the-art model, but does exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. Dolly v2 is also available in these larger models sizes: * dolly-v2-12b, a 12 billion parameter based on 'pythia-12b' * dolly-v2-7b, a 6.9 billion parameter based on 'pythia-6.9b' Please refer to the dolly GitHub repo for tips on running inference for various GPU configurations. Owner: Databricks, Inc. Model Overview -------------- 'dolly-v2-3b' is a 2.8 billion parameter causal language model created by Databricks that is derived from EleutherAI's Pythia-2.8b and fine-tuned on a ~15K record instruction corpus generated by Databricks employees and released under a permissive license (CC-BY-SA) Usage ----- To use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' and 'accelerate' libraries installed. In a Databricks notebook you could run: The instruction following pipeline can be loaded using the 'pipeline' function as shown below. This loads a custom 'InstructionTextGenerationPipeline' found in the model repo here, which is why 'trust\_remote\_code=True' is required. Including 'torch\_dtype=torch.bfloat16' is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality. It is also fine to remove it if there is sufficient memory. You can then use the pipeline to answer instructions: Alternatively, if you prefer to not use 'trust\_remote\_code=True' you can download instruct\_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ### LangChain Usage To use the pipeline with LangChain, you must set 'return\_full\_text=True', as LangChain expects the full text to be returned and the default for the pipeline is to only return the new text. You can create a prompt that either has only an instruction or has an instruction with context: Example predicting using a simple instruction: Example predicting using an instruction with context: Known Limitations ----------------- ### Performance Limitations 'dolly-v2-3b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses. The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. 
In particular, 'dolly-v2-3b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that 'dolly-v2-3b' does not have some capabilities, such as well-formatted letter writing, present in the original model. ### Dataset Limitations Like all language models, 'dolly-v2-3b' reflects the content and limitations of its training corpuses. * The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations. * 'databricks-dolly-15k': The training data on which 'dolly-v2-3b' is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large. Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations. ### Benchmark Metrics Below you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness; model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that 'dolly-v2-3b' is not state of the art. It underperforms 'dolly-v1-6b' in the evaluation benchmarks, which is not surprising considering it has half the number of parameters. Happy Hacking! ==============
[ "### LangChain Usage\n\n\nTo use the pipeline with LangChain, you must set 'return\\_full\\_text=True', as LangChain expects the full text to be returned\nand the default for the pipeline is to only return the new text.\n\n\nYou can create a prompt that either has only an instruction or has an instruction with context:\n\n\nExample predicting using a simple instruction:\n\n\nExample predicting using an instruction with context:\n\n\nKnown Limitations\n-----------------", "### Performance Limitations\n\n\n'dolly-v2-3b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform\ncompetitively with more modern model architectures or models subject to larger pretraining corpuses.\n\n\nThe Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. \n\nIn particular, 'dolly-v2-3b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,\ndates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.\nMoreover, we find that 'dolly-v2-3b' does not have some capabilities, such as well-formatted letter writing, present in the original model.", "### Dataset Limitations\n\n\nLike all language models, 'dolly-v2-3b' reflects the content and limitations of its training corpuses.\n\n\n* The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,\nit contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly\nin the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit\nassociations.\n* 'databricks-dolly-15k': The training data on which 'dolly-v2-3b' is instruction tuned represents natural language instructions generated\nby Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages\nfor instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or\npersonally identifying information about non-public figures, but it may contain typos and factual errors.\nThe dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects\nthe interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.\n\n\nDatabricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that\nmaximize the potential of all individuals and organizations.", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness;\nmodel results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that 'dolly-v2-3b' is not state of the art.\nIt underperforms 'dolly-v1-6b' in the evaluation benchmarks, which is not surprising considering it has half the number of parameters.\n\n\n\nHappy Hacking!\n==============" ]
[ "TAGS\n#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "### LangChain Usage\n\n\nTo use the pipeline with LangChain, you must set 'return\\_full\\_text=True', as LangChain expects the full text to be returned\nand the default for the pipeline is to only return the new text.\n\n\nYou can create a prompt that either has only an instruction or has an instruction with context:\n\n\nExample predicting using a simple instruction:\n\n\nExample predicting using an instruction with context:\n\n\nKnown Limitations\n-----------------", "### Performance Limitations\n\n\n'dolly-v2-3b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform\ncompetitively with more modern model architectures or models subject to larger pretraining corpuses.\n\n\nThe Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. \n\nIn particular, 'dolly-v2-3b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,\ndates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.\nMoreover, we find that 'dolly-v2-3b' does not have some capabilities, such as well-formatted letter writing, present in the original model.", "### Dataset Limitations\n\n\nLike all language models, 'dolly-v2-3b' reflects the content and limitations of its training corpuses.\n\n\n* The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,\nit contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly\nin the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit\nassociations.\n* 'databricks-dolly-15k': The training data on which 'dolly-v2-3b' is instruction tuned represents natural language instructions generated\nby Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages\nfor instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or\npersonally identifying information about non-public figures, but it may contain typos and factual errors.\nThe dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects\nthe interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.\n\n\nDatabricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that\nmaximize the potential of all individuals and organizations.", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness;\nmodel results are sorted by geometric mean to produce an intelligible ordering. 
As outlined above, these results demonstrate that 'dolly-v2-3b' is not state of the art.\nIt underperforms 'dolly-v1-6b' in the evaluation benchmarks, which is not surprising considering it has half the number of parameters.\n\n\n\nHappy Hacking!\n==============" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased_regression_finetuned_ptt This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8809 - Mse: 1.8809 - Mae: 1.0160 - Rmse: 1.3715 - Mape: inf - R Squared: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 206 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mse | Mae | Rmse | Mape | R Squared | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:----:|:---------:| | 1.9028 | 1.0 | 2062 | 1.8809 | 1.8809 | 1.0147 | 1.3715 | inf | 0.0000 | | 1.9381 | 2.0 | 4124 | 1.8831 | 1.8831 | 1.0177 | 1.3723 | inf | -0.0011 | | 1.8691 | 3.0 | 6186 | 1.8809 | 1.8809 | 1.0160 | 1.3715 | inf | 0.0000 | | 1.7741 | 4.0 | 8248 | 1.8809 | 1.8809 | 1.0153 | 1.3715 | inf | 0.0000 | | 1.6734 | 5.0 | 10310 | 1.8809 | 1.8809 | 1.0143 | 1.3715 | inf | 0.0000 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
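Since the card gives no usage snippet, a minimal inference sketch may help. It assumes the checkpoint exposes a single-output regression head (`num_labels=1`), which matches the MSE/MAE/RMSE metrics reported above; the sample text is a placeholder.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Mou11209203/distilbert-base-multilingual-cased_regression_finetuned_ptt"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Sample text to score.", return_tensors="pt")
with torch.no_grad():
    # For a regression head, logits has shape (batch, 1); squeeze to a scalar.
    score = model(**inputs).logits.squeeze(-1)
print(score.item())
```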
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilbert-base-multilingual-cased", "model-index": [{"name": "distilbert-base-multilingual-cased_regression_finetuned_ptt", "results": []}]}
Mou11209203/distilbert-base-multilingual-cased_regression_finetuned_ptt
null
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:08:28+00:00
[]
[]
TAGS #transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-multilingual-cased\_regression\_finetuned\_ptt ============================================================== This model is a fine-tuned version of distilbert/distilbert-base-multilingual-cased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.8809 * Mse: 1.8809 * Mae: 1.0160 * Rmse: 1.3715 * Mape: inf * R Squared: 0.0000 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 206 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 206\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 206\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
mlx
# GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-3.0-mlx This quantized low-bit model was converted to MLX format from [`GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-3.0`](). Refer to the [original model card](https://huggingface.co/GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-3.0) for more details on the model. ## Use with mlx ```bash pip install gbx-lm ``` ```python from gbx_lm import load, generate model, tokenizer = load("GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-3.0-mlx") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"license": "apache-2.0", "tags": ["mlx"]}
GreenBitAI/Llama-2-7B-Chat-layer-mix-bpw-3.0-mlx
null
[ "mlx", "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2024-04-14T21:10:17+00:00
[]
[]
TAGS #mlx #safetensors #llama #license-apache-2.0 #region-us
# GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-3.0-mlx This quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-3.0'](). Refer to the original model card for more details on the model. ## Use with mlx
[ "# GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #llama #license-apache-2.0 #region-us \n", "# GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MLMA-Lab8-FinetunedBioGPT This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1455 - Precision: 0.4444 - Recall: 0.5388 - F1: 0.4871 - Accuracy: 0.9572 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3222 | 1.0 | 679 | 0.1763 | 0.3318 | 0.4574 | 0.3846 | 0.9472 | | 0.1664 | 2.0 | 1358 | 0.1612 | 0.3817 | 0.5044 | 0.4346 | 0.9522 | | 0.096 | 3.0 | 2037 | 0.1455 | 0.4444 | 0.5388 | 0.4871 | 0.9572 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
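The card omits an inference example, so a minimal sketch follows. The entity label set is not documented here, and the assumption that this is a biomedical NER head (suggested by the BioGPT base and token-level precision/recall metrics) is mine; the sample sentence is a placeholder.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="kthammana/MLMA-Lab8-FinetunedBioGPT",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entity spans
)
print(ner("The patient developed neutropenia after receiving methotrexate."))
```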
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "microsoft/biogpt", "model-index": [{"name": "MLMA-Lab8-FinetunedBioGPT", "results": []}]}
kthammana/MLMA-Lab8-FinetunedBioGPT
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "token-classification", "generated_from_trainer", "base_model:microsoft/biogpt", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T21:10:17+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
MLMA-Lab8-FinetunedBioGPT ========================= This model is a fine-tuned version of microsoft/biogpt on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1455 * Precision: 0.4444 * Recall: 0.5388 * F1: 0.4871 * Accuracy: 0.9572 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
# YiffyEstopianMaid 13B - GGUF
- Model creator: [Katy Vetteriano](https://huggingface.co/KatyTheCutie)
- Original model: [YiffyEstopianMaid 13B](https://huggingface.co/KatyMergeTesting/YiffyEstopianMaid-13B)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Katy Vetteriano's YiffyEstopianMaid 13B](https://huggingface.co/KatyMergeTesting/YiffyEstopianMaid-13B).

These files were quantized using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

<!-- description end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

<!-- prompt-template end -->

<!-- recommended-settings start -->
## Recommended settings

- Default preset if using SillyTavern
- Temperature: 0.7
- Min-P: 0.3
- Amount to generate: 256
- Top P: 1
- Repetition penalty: 1.10

<!-- recommended-settings end -->

<!-- licensing start -->
## Licensing

As this model merge is based on Llama 2, it is subject to Meta's LLAMA 2 Community License terms. The appropriate license files are therefore included.

Models that were released under the Apache 2.0 license have also been used in the creation of this model merge. Due to Apache 2.0's permissive relicensing terms, the merge inherits the LLAMA 2 Community License and is not dual licensed. The Apache 2.0 license requires that attribution is included at the point of relicensing. This has been done by listing the models in the [Notice file](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/Notice) alongside the LLAMA 2 Community License notice.

<!-- licensing end -->

<!-- quantization_methods start -->
## Explanation of quantization methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

As a quick sanity check on the Q4_K figure (assuming the standard ggml layout): each 256-weight super-block stores 256 × 4 bits of quants, 96 bits of packed 6-bit scales and mins, and two fp16 super-block scales, i.e. (1024 + 96 + 32) / 256 = 4.5 bpw.

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- quantization_methods end --> <!-- provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [yiffyestopianmaid-13b.Q2_K.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q2_K.gguf) | Q2_K | 2 | 4.85 GB| 7.35 GB | significant quality loss - not recommended for most purposes | | [yiffyestopianmaid-13b.Q3_K_S.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [yiffyestopianmaid-13b.Q3_K_M.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [yiffyestopianmaid-13b.Q3_K_L.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [yiffyestopianmaid-13b.Q4_0.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [yiffyestopianmaid-13b.Q4_K_S.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB| 9.92 GB | small, greater quality loss | | [yiffyestopianmaid-13b.Q4_K_M.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [yiffyestopianmaid-13b.Q5_0.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [yiffyestopianmaid-13b.Q5_K_S.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [yiffyestopianmaid-13b.Q5_K_M.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [yiffyestopianmaid-13b.Q6_K.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [yiffyestopianmaid-13b.Q8_0.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- provided-files end --> <!-- how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantization formats are provided, and most users only want to pick and download a single file. ### In `text-generation-webui` Under Download Model, you can enter the model repo: boxomcfoxo/YiffyEstopianMaid-13B-GGUF and below it, a specific filename to download, such as: yiffyestopianmaid-13b.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download boxomcfoxo/YiffyEstopianMaid-13B-GGUF yiffyestopianmaid-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download boxomcfoxo/YiffyEstopianMaid-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download boxomcfoxo/YiffyEstopianMaid-13B-GGUF yiffyestopianmaid-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- how-to-download end -->

<!-- how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m yiffyestopianmaid-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python

# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python

# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python

# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./yiffyestopianmaid-13b.Q4_K_M.gguf",  # Download the model file first
  n_ctx=4096,             # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,            # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35         # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./yiffyestopianmaid-13b.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- how-to-run end -->
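As a quick end-to-end illustration of the LangChain route, here is a minimal sketch using the `LlamaCpp` wrapper. Treat it as an assumption-laden example rather than official usage: the import path reflects recent `langchain-community` releases, and the model path, layer count, and sampling settings simply reuse the values shown elsewhere on this card.

```python
# Minimal LangChain + llama-cpp-python sketch (illustrative; not from the original card).
# Assumes the Q4_K_M file has already been downloaded to the current directory
# and that langchain-community and llama-cpp-python are installed.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./yiffyestopianmaid-13b.Q4_K_M.gguf",
    n_ctx=4096,        # max sequence length
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
    repeat_penalty=1.1,
)

# Alpaca-style prompt, matching the template recommended for this model
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a story about llamas.\n\n### Response:\n"
)
print(llm.invoke(prompt))
```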
{"language": ["en"], "license": "llama2", "tags": ["llama-cpp", "gguf-my-repo", "roleplay", "text-generation-inference"], "model_name": "YiffyEstopianMaid 13B", "base_model": "KatyMergeTesting/YiffyEstopianMaid-13B", "inference": false, "model_creator": "Katy Vetteriano", "model_type": "llama", "prompt_template": "Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ", "quantized_by": "boxomcfoxo"}
boxomcfoxo/YiffyEstopianMaid-13B-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "roleplay", "text-generation-inference", "en", "base_model:KatyMergeTesting/YiffyEstopianMaid-13B", "license:llama2", "region:us" ]
null
2024-04-14T21:11:34+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #roleplay #text-generation-inference #en #base_model-KatyMergeTesting/YiffyEstopianMaid-13B #license-llama2 #region-us
YiffyEstopianMaid 13B - GGUF ============================ * Model creator: Katy Vetteriano * Original model: YiffyEstopianMaid 13B Description ----------- This repo contains GGUF format model files for Katy Vetteriano's YiffyEstopianMaid 13B. These files were quantized using URL via URL's GGUF-my-repo space. Prompt template: Alpaca ----------------------- Recommended settings -------------------- * Default preset if using SillyTavern * Temperature: 0.7 * Min-P: 0.3 * Amount to generate: 256 * Top P: 1 * Repetition penalty: 1.10 Licensing --------- As this model merge is based on Llama 2, it is subject to Meta's LLAMA 2 Community License terms. The appropriate license files are therefore included. Models that were released under the Apache 2.0 license have also been used in the creation of this model merge. Due to Apache 2.0's permissive relicensing terms, the merge inherits the LLAMA 2 Community License and is not dual licensed. The Apache 2.0 license requires that attribution is included at the point of relicensing. This has been done by listing the models in the Notice file alongside the LLAMA 2 Community License notice. Explanation of quantization methods ----------------------------------- Click to see details The new methods available are: * GGML\_TYPE\_Q2\_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML\_TYPE\_Q3\_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML\_TYPE\_Q4\_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML\_TYPE\_Q5\_K - "type-1" 5-bit quantization. Same super-block structure as GGML\_TYPE\_Q4\_K resulting in 5.5 bpw * GGML\_TYPE\_Q6\_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. Provided files -------------- Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. How to download GGUF files -------------------------- Note for manual downloaders: You almost never want to clone the entire repo! Multiple different quantization formats are provided, and most users only want to pick and download a single file. ### In 'text-generation-webui' Under Download Model, you can enter the model repo: boxomcfoxo/YiffyEstopianMaid-13B-GGUF and below it, a specific filename to download, such as: yiffyestopianmaid-13b.Q4\_K\_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the 'huggingface-hub' Python library: Then you can download any individual model file to the current directory, at high speed, with a command like this: More advanced huggingface-cli download usage (click to read) You can also download multiple files at once with a pattern: For more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI. 
To accelerate downloads on fast connections (1Gbit/s or higher), install 'hf\_transfer': And set environment variable 'HF\_HUB\_ENABLE\_HF\_TRANSFER' to '1': Windows Command Line users: You can set the environment variable by running 'set HF\_HUB\_ENABLE\_HF\_TRANSFER=1' before the download command. Example 'URL' command --------------------- Make sure you are using 'URL' from commit d0cee0d or later. Change '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change '-c 4096' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the '-p ' argument with '-i -ins' For other parameters and how to use them, please refer to the URL documentation How to run in 'text-generation-webui' ------------------------------------- Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL. How to run from Python code --------------------------- You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: llama-cpp-python docs. #### First install the package Run one of the following commands, according to your system: #### Simple llama-cpp-python example code How to use with LangChain ------------------------- Here are guides on using llama-cpp-python and ctransformers with LangChain: * LangChain + llama-cpp-python * LangChain + ctransformers
[ "### In 'text-generation-webui'\n\n\nUnder Download Model, you can enter the model repo: boxomcfoxo/YiffyEstopianMaid-13B-GGUF and below it, a specific filename to download, such as: yiffyestopianmaid-13b.Q4\\_K\\_M.gguf.\n\n\nThen click Download.", "### On the command line, including multiple files at once\n\n\nI recommend using the 'huggingface-hub' Python library:\n\n\nThen you can download any individual model file to the current directory, at high speed, with a command like this:\n\n\n\nMore advanced huggingface-cli download usage (click to read)\nYou can also download multiple files at once with a pattern:\n\n\nFor more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.\n\n\nTo accelerate downloads on fast connections (1Gbit/s or higher), install 'hf\\_transfer':\n\n\nAnd set environment variable 'HF\\_HUB\\_ENABLE\\_HF\\_TRANSFER' to '1':\n\n\nWindows Command Line users: You can set the environment variable by running 'set HF\\_HUB\\_ENABLE\\_HF\\_TRANSFER=1' before the download command.\n\n\n\nExample 'URL' command\n---------------------\n\n\nMake sure you are using 'URL' from commit d0cee0d or later.\n\n\nChange '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.\n\n\nChange '-c 4096' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.\n\n\nIf you want to have a chat-style conversation, replace the '-p ' argument with '-i -ins'\n\n\nFor other parameters and how to use them, please refer to the URL documentation\n\n\nHow to run in 'text-generation-webui'\n-------------------------------------\n\n\nFurther instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.\n\n\nHow to run from Python code\n---------------------------\n\n\nYou can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.", "### How to load this model in Python code, using llama-cpp-python\n\n\nFor full documentation, please see: llama-cpp-python docs.", "#### First install the package\n\n\nRun one of the following commands, according to your system:", "#### Simple llama-cpp-python example code\n\n\nHow to use with LangChain\n-------------------------\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers" ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #roleplay #text-generation-inference #en #base_model-KatyMergeTesting/YiffyEstopianMaid-13B #license-llama2 #region-us \n", "### In 'text-generation-webui'\n\n\nUnder Download Model, you can enter the model repo: boxomcfoxo/YiffyEstopianMaid-13B-GGUF and below it, a specific filename to download, such as: yiffyestopianmaid-13b.Q4\\_K\\_M.gguf.\n\n\nThen click Download.", "### On the command line, including multiple files at once\n\n\nI recommend using the 'huggingface-hub' Python library:\n\n\nThen you can download any individual model file to the current directory, at high speed, with a command like this:\n\n\n\nMore advanced huggingface-cli download usage (click to read)\nYou can also download multiple files at once with a pattern:\n\n\nFor more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.\n\n\nTo accelerate downloads on fast connections (1Gbit/s or higher), install 'hf\\_transfer':\n\n\nAnd set environment variable 'HF\\_HUB\\_ENABLE\\_HF\\_TRANSFER' to '1':\n\n\nWindows Command Line users: You can set the environment variable by running 'set HF\\_HUB\\_ENABLE\\_HF\\_TRANSFER=1' before the download command.\n\n\n\nExample 'URL' command\n---------------------\n\n\nMake sure you are using 'URL' from commit d0cee0d or later.\n\n\nChange '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.\n\n\nChange '-c 4096' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.\n\n\nIf you want to have a chat-style conversation, replace the '-p ' argument with '-i -ins'\n\n\nFor other parameters and how to use them, please refer to the URL documentation\n\n\nHow to run in 'text-generation-webui'\n-------------------------------------\n\n\nFurther instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.\n\n\nHow to run from Python code\n---------------------------\n\n\nYou can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.", "### How to load this model in Python code, using llama-cpp-python\n\n\nFor full documentation, please see: llama-cpp-python docs.", "#### First install the package\n\n\nRun one of the following commands, according to your system:", "#### Simple llama-cpp-python example code\n\n\nHow to use with LangChain\n-------------------------\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers" ]
text-generation
transformers
# Spaetzle-v60-7b

This is a progressive merge (mostly dare-ties, but also slerp) intended as a suitable compromise for English and German local tasks.
The performance looks OK so far: e.g., on EQ-Bench (v2_de) it scores 65.08 (Parseable: 171.0).

Spaetzle-v60-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [abideen/AlphaMonarch-dora](https://huggingface.co/abideen/AlphaMonarch-dora)
* [cstr/Spaetzle-v58-7b](https://huggingface.co/cstr/Spaetzle-v58-7b)

## 🧩 Configuration

```yaml
models:
  - model: cstr/Spaetzle-v58-7b
    # no parameters necessary for base model
  - model: abideen/AlphaMonarch-dora
    parameters:
      density: 0.60
      weight: 0.30
merge_method: dare_ties
base_model: cstr/Spaetzle-v58-7b
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "cstr/Spaetzle-v60-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
{"tags": ["merge", "mergekit", "lazymergekit", "abideen/AlphaMonarch-dora"], "base_model": ["abideen/AlphaMonarch-dora"]}
cstr/Spaetzle-v60-7b
null
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "abideen/AlphaMonarch-dora", "conversational", "base_model:abideen/AlphaMonarch-dora", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T21:12:09+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #abideen/AlphaMonarch-dora #conversational #base_model-abideen/AlphaMonarch-dora #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Spaetzle-v60-7b This is progressive (mostly dare-ties, but also slerp) merge with the intention of suitable compromise for English and German local tasks. The performance looks ok so far: e.g. we get in EQ-Bench: Score (v2_de): 65.08 (Parseable: 171.0). Spaetzle-v60-7b is a merge of the following models using LazyMergekit: * abideen/AlphaMonarch-dora * cstr/Spaetzle-v58-7b ## Configuration ## Usage
[ "# Spaetzle-v60-7b\n\nThis is progressive (mostly dare-ties, but also slerp) merge with the intention of suitable compromise for English and German local tasks.\nThe performance looks ok so far: e.g. we get in EQ-Bench: Score (v2_de): 65.08 (Parseable: 171.0).\n\nSpaetzle-v60-7b is a merge of the following models using LazyMergekit:\n* abideen/AlphaMonarch-dora\n* cstr/Spaetzle-v58-7b", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #abideen/AlphaMonarch-dora #conversational #base_model-abideen/AlphaMonarch-dora #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Spaetzle-v60-7b\n\nThis is progressive (mostly dare-ties, but also slerp) merge with the intention of suitable compromise for English and German local tasks.\nThe performance looks ok so far: e.g. we get in EQ-Bench: Score (v2_de): 65.08 (Parseable: 171.0).\n\nSpaetzle-v60-7b is a merge of the following models using LazyMergekit:\n* abideen/AlphaMonarch-dora\n* cstr/Spaetzle-v58-7b", "## Configuration", "## Usage" ]
text-generation
transformers
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


dolly-v2-12b - bnb 4bits
- Model creator: https://huggingface.co/databricks/
- Original model: https://huggingface.co/databricks/dolly-v2-12b/


Original model description:
---
license: mit
language:
- en
library_name: transformers
inference: false
datasets:
- databricks/databricks-dolly-15k
---
# dolly-v2-12b Model Card
## Summary

Databricks' `dolly-v2-12b`, an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. Based on `pythia-12b`, Dolly is trained on ~15k instruction/response fine tuning records [`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. `dolly-v2-12b` is not a state-of-the-art model, but does exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based.

Dolly v2 is also available in these smaller model sizes:

* [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter model based on `pythia-6.9b`
* [dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b), a 2.8 billion parameter model based on `pythia-2.8b`

Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on running inference for various GPU configurations.

**Owner**: Databricks, Inc.

## Model Overview
`dolly-v2-12b` is a 12 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from [EleutherAI's](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA)

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed. In a Databricks notebook you could run:

```python
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```

The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline` found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required. Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality. It is also fine to remove it if there is sufficient memory.
```python
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```

You can then use the pipeline to answer instructions:

```python
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```

Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:

```python
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b", device_map="auto", torch_dtype=torch.bfloat16)

generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```

### LangChain Usage

To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned and the default for the pipeline is to only return the new text.

```python
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16,
                         trust_remote_code=True, device_map="auto", return_full_text=True)
```

You can create a prompt that either has only an instruction or has an instruction with context:

```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline

# template for an instruction with no input
prompt = PromptTemplate(
    input_variables=["instruction"],
    template="{instruction}")

# template for an instruction with input
prompt_with_context = PromptTemplate(
    input_variables=["instruction", "context"],
    template="{instruction}\n\nInput:\n{context}")

hf_pipeline = HuggingFacePipeline(pipeline=generate_text)

llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```

Example predicting using a simple instruction:

```python
print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip())
```

Example predicting using an instruction with context:

```python
context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman,
and Founding Father who served as the first president of the United States from 1789 to 1797."""

print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
```

## Known Limitations

### Performance Limitations
**`dolly-v2-12b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses.

The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, `dolly-v2-12b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that `dolly-v2-12b` does not have some capabilities, such as well-formatted letter writing, present in the original model.

### Dataset Limitations
Like all language models, `dolly-v2-12b` reflects the content and limitations of its training corpuses.

- **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations.

- **`databricks-dolly-15k`**: The training data on which `dolly-v2-12b` is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.

Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations.

### Benchmark Metrics

Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness); model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-12b` is not state of the art, and in fact underperforms `dolly-v1-6b` in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine tuning datasets, but a robust statement as to the sources of these variations requires further study.
| model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean | | --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------| | EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 | | EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 | | databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 | | EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 | | EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 | | databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 | | databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 | | databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 | | EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 | # Citation ``` @online{DatabricksBlog2023DollyV2, author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin}, title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM}, year = {2023}, url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}, urldate = {2023-06-30} } ``` # Happy Hacking!
{}
RichardErkhov/databricks_-_dolly-v2-12b-4bits
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-14T21:13:27+00:00
[]
[]
TAGS #transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models dolly-v2-12b - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: mit language: * en library\_name: transformers inference: false datasets: * databricks/databricks-dolly-15k --- dolly-v2-12b Model Card ======================= Summary ------- Databricks' 'dolly-v2-12b', an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. Based on 'pythia-12b', Dolly is trained on ~15k instruction/response fine tuning records 'databricks-dolly-15k' generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. 'dolly-v2-12b' is not a state-of-the-art model, but does exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. Dolly v2 is also available in these smaller models sizes: * dolly-v2-7b, a 6.9 billion parameter based on 'pythia-6.9b' * dolly-v2-3b, a 2.8 billion parameter based on 'pythia-2.8b' Please refer to the dolly GitHub repo for tips on running inference for various GPU configurations. Owner: Databricks, Inc. Model Overview -------------- 'dolly-v2-12b' is a 12 billion parameter causal language model created by Databricks that is derived from EleutherAI's Pythia-12b and fine-tuned on a ~15K record instruction corpus generated by Databricks employees and released under a permissive license (CC-BY-SA) Usage ----- To use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' and 'accelerate' libraries installed. In a Databricks notebook you could run: The instruction following pipeline can be loaded using the 'pipeline' function as shown below. This loads a custom 'InstructionTextGenerationPipeline' found in the model repo here, which is why 'trust\_remote\_code=True' is required. Including 'torch\_dtype=torch.bfloat16' is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality. It is also fine to remove it if there is sufficient memory. You can then use the pipeline to answer instructions: Alternatively, if you prefer to not use 'trust\_remote\_code=True' you can download instruct\_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ### LangChain Usage To use the pipeline with LangChain, you must set 'return\_full\_text=True', as LangChain expects the full text to be returned and the default for the pipeline is to only return the new text. You can create a prompt that either has only an instruction or has an instruction with context: Example predicting using a simple instruction: Example predicting using an instruction with context: Known Limitations ----------------- ### Performance Limitations 'dolly-v2-12b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses. The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. 
In particular, 'dolly-v2-12b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that 'dolly-v2-12b' does not have some capabilities, such as well-formatted letter writing, present in the original model. ### Dataset Limitations Like all language models, 'dolly-v2-12b' reflects the content and limitations of its training corpuses. * The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations. * 'databricks-dolly-15k': The training data on which 'dolly-v2-12b' is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large. Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations. ### Benchmark Metrics Below you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness; model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that 'dolly-v2-12b' is not state of the art, and in fact underperforms 'dolly-v1-6b' in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine tuning datasets, but a robust statement as to the sources of these variations requires further study. Happy Hacking! ==============
[ "### LangChain Usage\n\n\nTo use the pipeline with LangChain, you must set 'return\\_full\\_text=True', as LangChain expects the full text to be returned\nand the default for the pipeline is to only return the new text.\n\n\nYou can create a prompt that either has only an instruction or has an instruction with context:\n\n\nExample predicting using a simple instruction:\n\n\nExample predicting using an instruction with context:\n\n\nKnown Limitations\n-----------------", "### Performance Limitations\n\n\n'dolly-v2-12b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform\ncompetitively with more modern model architectures or models subject to larger pretraining corpuses.\n\n\nThe Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. \n\nIn particular, 'dolly-v2-12b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,\ndates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.\nMoreover, we find that 'dolly-v2-12b' does not have some capabilities, such as well-formatted letter writing, present in the original model.", "### Dataset Limitations\n\n\nLike all language models, 'dolly-v2-12b' reflects the content and limitations of its training corpuses.\n\n\n* The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,\nit contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly\nin the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit\nassociations.\n* 'databricks-dolly-15k': The training data on which 'dolly-v2-12b' is instruction tuned represents natural language instructions generated\nby Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages\nfor instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or\npersonally identifying information about non-public figures, but it may contain typos and factual errors.\nThe dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects\nthe interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.\n\n\nDatabricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that\nmaximize the potential of all individuals and organizations.", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness;\nmodel results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that 'dolly-v2-12b' is not state of the art,\nand in fact underperforms 'dolly-v1-6b' in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine tuning datasets,\nbut a robust statement as to the sources of these variations requires further study.\n\n\n\nHappy Hacking!\n==============" ]
[ "TAGS\n#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "### LangChain Usage\n\n\nTo use the pipeline with LangChain, you must set 'return\\_full\\_text=True', as LangChain expects the full text to be returned\nand the default for the pipeline is to only return the new text.\n\n\nYou can create a prompt that either has only an instruction or has an instruction with context:\n\n\nExample predicting using a simple instruction:\n\n\nExample predicting using an instruction with context:\n\n\nKnown Limitations\n-----------------", "### Performance Limitations\n\n\n'dolly-v2-12b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform\ncompetitively with more modern model architectures or models subject to larger pretraining corpuses.\n\n\nThe Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. \n\nIn particular, 'dolly-v2-12b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,\ndates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.\nMoreover, we find that 'dolly-v2-12b' does not have some capabilities, such as well-formatted letter writing, present in the original model.", "### Dataset Limitations\n\n\nLike all language models, 'dolly-v2-12b' reflects the content and limitations of its training corpuses.\n\n\n* The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,\nit contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly\nin the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit\nassociations.\n* 'databricks-dolly-15k': The training data on which 'dolly-v2-12b' is instruction tuned represents natural language instructions generated\nby Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages\nfor instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or\npersonally identifying information about non-public figures, but it may contain typos and factual errors.\nThe dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects\nthe interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.\n\n\nDatabricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that\nmaximize the potential of all individuals and organizations.", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness;\nmodel results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that 'dolly-v2-12b' is not state of the art,\nand in fact underperforms 'dolly-v1-6b' in some evaluation benchmarks. 
We believe this owes to the composition and size of the underlying fine tuning datasets,\nbut a robust statement as to the sources of these variations requires further study.\n\n\n\nHappy Hacking!\n==============" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [beowolx/MistralHermes-CodePro-7B-v1](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1) * [EmbeddedLLM/Medusa2-Mistral-7B-Instruct-v0.2](https://huggingface.co/EmbeddedLLM/Medusa2-Mistral-7B-Instruct-v0.2) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: beowolx/MistralHermes-CodePro-7B-v1 layer_range: [0, 32] - model: EmbeddedLLM/Medusa2-Mistral-7B-Instruct-v0.2 layer_range: [0, 32] merge_method: slerp base_model: beowolx/MistralHermes-CodePro-7B-v1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
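The card stops at the merge configuration, so as a hedged illustration here is one way the resulting model could be loaded for inference with `transformers`. The chat-template call is an assumption (both parent models are Mistral-instruct derivatives, but the merged tokenizer may not ship a template); fall back to a plain string prompt if it does not.

```python
# Illustrative inference sketch for the merged model (not documented on the card itself).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "K00B404/Merged_Beowolx-CodePro_Medusa2-7B-Mistral-I-v0-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```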
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["beowolx/MistralHermes-CodePro-7B-v1", "EmbeddedLLM/Medusa2-Mistral-7B-Instruct-v0.2"]}
K00B404/Merged_Beowolx-CodePro_Medusa2-7B-Mistral-I-v0-2
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:beowolx/MistralHermes-CodePro-7B-v1", "base_model:EmbeddedLLM/Medusa2-Mistral-7B-Instruct-v0.2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T21:14:24+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-beowolx/MistralHermes-CodePro-7B-v1 #base_model-EmbeddedLLM/Medusa2-Mistral-7B-Instruct-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * beowolx/MistralHermes-CodePro-7B-v1 * EmbeddedLLM/Medusa2-Mistral-7B-Instruct-v0.2 ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* beowolx/MistralHermes-CodePro-7B-v1\n* EmbeddedLLM/Medusa2-Mistral-7B-Instruct-v0.2", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-beowolx/MistralHermes-CodePro-7B-v1 #base_model-EmbeddedLLM/Medusa2-Mistral-7B-Instruct-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* beowolx/MistralHermes-CodePro-7B-v1\n* EmbeddedLLM/Medusa2-Mistral-7B-Instruct-v0.2", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased_regression_finetuned_news_all This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8565 - Mse: 0.8565 - Mae: 0.5566 - Rmse: 0.9255 - Mape: inf - R Squared: 0.5011 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 2175 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mse | Mae | Rmse | Mape | R Squared | |:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:----:|:---------:| | 0.9354 | 1.0 | 21755 | 0.9298 | 0.9298 | 0.5985 | 0.9643 | inf | 0.4584 | | 0.8432 | 2.0 | 43510 | 0.8988 | 0.8988 | 0.5756 | 0.9481 | inf | 0.4764 | | 0.8033 | 3.0 | 65265 | 0.8810 | 0.8810 | 0.5685 | 0.9386 | inf | 0.4868 | | 0.8119 | 4.0 | 87020 | 0.8778 | 0.8778 | 0.5623 | 0.9369 | inf | 0.4887 | | 0.7401 | 5.0 | 108775 | 0.8565 | 0.8565 | 0.5566 | 0.9255 | inf | 0.5011 | | 0.6964 | 6.0 | 130530 | 0.8877 | 0.8877 | 0.5587 | 0.9422 | inf | 0.4829 | | 0.6213 | 7.0 | 152285 | 0.8918 | 0.8918 | 0.5607 | 0.9444 | inf | 0.4805 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
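Since the usage sections above are empty, here is a minimal, assumption-based inference sketch: the card reports MSE/MAE/R² metrics, so the model presumably carries a single-logit regression head, and the raw logit is read directly rather than passed through a softmax.

```python
# Illustrative regression-inference sketch (the card itself documents no usage).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Mou11209203/distilbert-base-multilingual-cased_regression_finetuned_news_all"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("Example news text to score.", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression output
print(score)
```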
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilbert-base-multilingual-cased", "model-index": [{"name": "distilbert-base-multilingual-cased_regression_finetuned_news_all", "results": []}]}
Mou11209203/distilbert-base-multilingual-cased_regression_finetuned_news_all
null
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:16:14+00:00
[]
[]
TAGS #transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-multilingual-cased\_regression\_finetuned\_news\_all ==================================================================== This model is a fine-tuned version of distilbert/distilbert-base-multilingual-cased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.8565 * Mse: 0.8565 * Mae: 0.5566 * Rmse: 0.9255 * Mape: inf * R Squared: 0.5011 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 2175 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 2175\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-multilingual-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 2175\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_finetuned This model is a fine-tuned version of [hishab/titulm-1b-enbn-v1](https://huggingface.co/hishab/titulm-1b-enbn-v1) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 2.1066 - eval_runtime: 18.7031 - eval_samples_per_second: 5.347 - eval_steps_per_second: 2.673 - epoch: 0.0 - step: 100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 0.03 - training_steps: 10000 ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
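Because the usage sections above are empty, here is a minimal, assumption-based loading sketch: PEFT adapters are applied on top of the base model they were trained from, which the card lists as `hishab/titulm-1b-enbn-v1`.

```python
# Illustrative PEFT adapter loading sketch (the card itself documents no usage).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "hishab/titulm-1b-enbn-v1"
adapter_id = "Virus-Proton/mistral_finetuned"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter

inputs = tokenizer("Hello,", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```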
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "hishab/titulm-1b-enbn-v1", "model-index": [{"name": "mistral_finetuned", "results": []}]}
Virus-Proton/mistral_finetuned
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:hishab/titulm-1b-enbn-v1", "license:apache-2.0", "region:us" ]
null
2024-04-14T21:18:38+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-hishab/titulm-1b-enbn-v1 #license-apache-2.0 #region-us
# mistral_finetuned This model is a fine-tuned version of hishab/titulm-1b-enbn-v1 on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 2.1066 - eval_runtime: 18.7031 - eval_samples_per_second: 5.347 - eval_steps_per_second: 2.673 - epoch: 0.0 - step: 100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 0.03 - training_steps: 10000 ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# mistral_finetuned\n\nThis model is a fine-tuned version of hishab/titulm-1b-enbn-v1 on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 2.1066\n- eval_runtime: 18.7031\n- eval_samples_per_second: 5.347\n- eval_steps_per_second: 2.673\n- epoch: 0.0\n- step: 100", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 0.03\n- training_steps: 10000", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-hishab/titulm-1b-enbn-v1 #license-apache-2.0 #region-us \n", "# mistral_finetuned\n\nThis model is a fine-tuned version of hishab/titulm-1b-enbn-v1 on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 2.1066\n- eval_runtime: 18.7031\n- eval_samples_per_second: 5.347\n- eval_steps_per_second: 2.673\n- epoch: 0.0\n- step: 100", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 0.03\n- training_steps: 10000", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


dolly-v1-6b - bnb 4bits
- Model creator: https://huggingface.co/databricks/
- Original model: https://huggingface.co/databricks/dolly-v1-6b/


Original model description:
---
license: cc-by-nc-4.0
datasets:
- tatsu-lab/alpaca
language:
- en
library_name: transformers
inference: false
---
# dolly-v1-6b Model Card
## Dolly v2 Is Out!

Please try Dolly v2 instead:
- https://huggingface.co/databricks/dolly-v2-12b
- https://huggingface.co/databricks/dolly-v2-7b
- https://huggingface.co/databricks/dolly-v2-3b

## Summary

Databricks’ `dolly-v1-6b`, a large language model ([blog post](https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html)) trained on the Databricks machine learning platform, demonstrates that a two-year-old [open source model](https://huggingface.co/EleutherAI/gpt-j-6B) can, when subjected to just 30 minutes of fine tuning on a focused corpus of 50k records ([Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html)), exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. We believe this finding is important because it demonstrates that the ability to create powerful artificial intelligence technologies is vastly more accessible than previously realized.

Databricks is committed to ensuring that every organization and individual benefits from the transformative power of artificial intelligence. The Dolly model family represents our first steps along this journey, and we’re excited to share this technology with the world.

**Owner**: Databricks, Inc.

## Model Overview
`dolly-v1-6b` is a 6 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from [EleutherAI’s](https://www.eleuther.ai/) [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B) (released June 2021) and fine-tuned on a ~52K record instruction corpus ([Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html)) (CC-NC-BY-4.0) consisting of question/answer pairs generated using the techniques outlined in the [Self-Instruct](https://arxiv.org/abs/2212.10560) paper. The [original version](https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html) of Dolly was trained using [deepspeed](https://github.com/microsoft/DeepSpeed) [ZeRO 3](https://github.com/microsoft/DeepSpeed/blob/master/docs/code-docs/source/zero3.rst) on the [Databricks Machine Learning Platform](https://www.databricks.com/product/machine-learning) in just 30 minutes (1 epoch) using a single [NDasrA100_v4](https://learn.microsoft.com/en-us/azure/virtual-machines/nda100-v4-series) machine with 8x A100 40GB GPUs.

The most recent `dolly-v1-6b` checkpoint was trained for 10 epochs on the same hardware.

Like its base model, `dolly-v1-6b` has six billion parameters consisting of 28 transformer layers with 16 attention heads each. It employs [Rotary Position Embedding](https://arxiv.org/abs/2104.09864) (RoPE) and shares the same tokenizer as GPT-3. GPT-J was trained on [The Pile](https://huggingface.co/datasets/the_pile), a 400B token dataset of diverse documents designed primarily for text generation tasks.
## Known Limitations **`dolly-v1-6b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpora. **It is designed for academic or research purposes, and to encourage model and engineering experimentation.** The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. In particular, `dolly-v1-6b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. ## Training Data, Bias & Objectionable Content Like all language models, `dolly-v1-6b` reflects the content and limitations of its training corpora. - **The Pile**: GPT-J’s pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations. - **Stanford Alpaca**: The instruction tuning corpus for dolly-6b can be assumed to share many of the same limitations. In addition, it is known to contain factual inaccuracies, semantic and syntactic irregularities, nonsensical responses, and incorrect mathematical calculations, among other data shortcomings. The model outputs will reflect these limitations. Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations. ## Intended Uses **`dolly-v1-6b` is intended exclusively for research purposes. We do not recommend using `dolly-v1-6b` in high-risk applications (e.g., educational or vocational training, product safety components, or other uses that may impact the well-being of individuals.)** `dolly-v1-6b` is differentiated from its predecessor, GPT-J, in the extent to which it exhibits coherent responses to natural language questions and instruction prompts. `dolly-v1-6b` appears to exhibit reasonable (but categorically not state-of-the-art) behavior across a range of capabilities, including many of those outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper. We include evocative examples below, but invite the reader to observe that they contain factual inaccuracies or semantic misfires that illustrate some of the model’s shortcomings. ### Brainstorming **What are considerations I should keep in mind when planning a backcountry backpacking trip?** When planning a backcountry backpacking trip, there are several considerations to keep in mind. First, make sure you are prepared for any potential weather conditions, including extreme temperatures and lightning storms. Second, consider the route you will take and the terrain that you will be traversing. Third, make sure that you have enough food, water and gear to stay safe and comfortable during your backcountry journey. Fourth, plan for potential wildlife encounters and make sure you have the appropriate safety measures in place. 
Finally, have an emergency plan in place in case of a medical emergency. ### Classification **Label each of the following as either a scientific concept or a product: Nikon D750, quantum entanglement, CRISPR, and a Macbook Pro.** Nikon D750: Product Quantum entanglement: Scientific Concept CRISPR: Product Macbook Pro: Product ### Extraction **Extract all the people and places from the following passage: Basquiat was born on December 22, 1960, in Park Slope, Brooklyn, New York City, the second of four children to Matilde Basquiat (née Andrades, 1934–2008) and Gérard Basquiat (1930–2013). He had an older brother, Max, who died shortly before his birth, and two younger sisters, Lisane (b. 1964) and Jeanine (b. 1967). His father was born in Port-au-Prince, Haiti and his mother was born in Brooklyn to Puerto Rican parents. He was raised Catholic.** People: Basquiat, Matilde Basquiat, Gérard Basquiat, Max Basquiat, Lisane Basquiat, Jeanine Basquiat. Place: Park Slope, Brooklyn, New York City. ### Generation **Write a press release declaring the completion of Atlantis II, a facility designed for long-term human habitation at the bottom of the ocean. Be sure to mention some of its advanced technological features.** Atlantis II, a new facility designed for long-term human habitation at the bottom of the ocean, is now complete! With advanced technological features, it will be a safe and comfortable living environment for generations to come. From self-sustaining ecosystems to advanced robotics, the facility offers a range of opportunities for exploration, discovery and exploration. Atlantis II will be a home for a wide range of species and will serve as a hub for ocean research, exploration and exploration, bringing together a diverse team of scientists, engineers, biologists, and divers to uncover the mysteries of the deep. ### Summarization **Give me a one line summary of this: Coffee is one of the most widely consumed beverages in the world. It has primarily consumed due to its stimulant effect and unique taste since the ancient times. Afterwards, its consumption has been historically associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer and thus it has also consumed due to health benefits. It contains many bioactive compounds such as caffeine, chlorogenic acids and diterpenoid alcohols which have so far been associated with many potential health benefits. For example, caffeine reduces risk of developing neurodegenerative disease and chlorogenic acids (CGA) and diterpene alcohols have many health benefits such as antioxidant and chemo-preventive. Coffee also have harmful effects. For example, diterpenoid alcohols increases serum homocysteine and cholesterol levels and thus it has adverse effects on cardiovascular system. Overall, the study that supports the health benefits of coffee is increasing. But, it is thought-provoking that the association with health benefits of coffee consumption and frequency at different levels in each study. For this reason, we aimed to examine the health effect of the coffee and how much consumption is to investigate whether it meets the claimed health benefits.** Coffee is one of the most widely consumed beverages in the world and has historically been associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer. 
However, there are mixed results in the literature and further research is required to determine the health benefits of coffee consumption. ### Rephrasing **Give me a different way to say the following to a 4 year old: "Son, this is the last time I'm going to tell you. Go to bed!"** "Goodnight, Son. Sleep tight and I'll see you in the morning!" ### Closed QA **I'm going to give you a passage from the book Neuromancer and I'd like you to answer the following question: What is the tool that allows Case to access the matrix? Case was twenty-four. At twenty-two, he'd been a cowboy, a rustler, one of the best in the Sprawl. He'd been trained by the best, by McCoy Pauley and Bobby Quine, legends in the biz. He'd operated on an almost permanent adrenaline high, a byproduct of youth and proficiency, jacked into a custom cyberspace deck that projected his disembodied consciousness into the consensual hallucination that was the matrix.** The tool that allows Case to access the matrix is his cyberspace deck, which allows him to manipulate and interact with the virtual world around him. It enables him to access information from various sources and interact with the people in the matrix, and it is one of the main weapons of Case's cyberworld. ### Open QA **What is the literary style of Jorge Luis Borges?** Jorge Luis Borges's literary style is often described as magical realism. He often creates stories that combine fictional characters with real life elements and use the juxtaposition of these elements to create a sense of surrealism. He often uses allegory and symbolism to explore complex philosophical ideas and explore questions of human perception and understanding. His writing is characterized by a playful and inventive use of language, with characters often being metaphors or symbols for larger, more abstract concepts. ## Usage ### Load Model & Tokenizer ```python import numpy as np from transformers import ( AutoModelForCausalLM, AutoTokenizer, PreTrainedModel, PreTrainedTokenizer ) tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v1-6b", padding_side="left") model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v1-6b", device_map="auto", trust_remote_code=True) ``` ### Generate Text ```python PROMPT_FORMAT = """Below is an instruction that describes a task. Write a response that appropriately completes the request. 
### Instruction: {instruction} ### Response: """ def generate_response(instruction: str, *, model: PreTrainedModel, tokenizer: PreTrainedTokenizer, do_sample: bool = True, max_new_tokens: int = 256, top_p: float = 0.92, top_k: int = 0, **kwargs) -> str: input_ids = tokenizer(PROMPT_FORMAT.format(instruction=instruction), return_tensors="pt").input_ids.to("cuda") # each of these is encoded to a single token response_key_token_id = tokenizer.encode("### Response:")[0] end_key_token_id = tokenizer.encode("### End")[0] gen_tokens = model.generate(input_ids, pad_token_id=tokenizer.pad_token_id, eos_token_id=end_key_token_id, do_sample=do_sample, max_new_tokens=max_new_tokens, top_p=top_p, top_k=top_k, **kwargs)[0].cpu() # find where the response begins response_positions = np.where(gen_tokens == response_key_token_id)[0] if len(response_positions) > 0: response_pos = response_positions[0] # find where the response ends end_pos = None end_positions = np.where(gen_tokens == end_key_token_id)[0] if len(end_positions) > 0: end_pos = end_positions[0] return tokenizer.decode(gen_tokens[response_pos + 1 : end_pos]).strip() return None # Sample similar to: "Excited to announce the release of Dolly, a powerful new language model from Databricks! #AI #Databricks" generate_response("Write a tweet announcing Dolly, a large language model from Databricks.", model=model, tokenizer=tokenizer) ``` ### Benchmark Metrics Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness); model results are sorted by geometric mean to produce an intelligible ordering. These results demonstrate that Dolly is not state of the art, as we describe above, but also point to an interesting observation. Namely, Dolly is only marginally better than (and in the case of Winogrande worse than) its base model, GPT-J-6B. Despite this fact, the qualitative behavior of Dolly is materially different from the underlying model ([try it yourself](https://huggingface.co/EleutherAI/gpt-j-6B) on Hugging Face!), which points to meaningful limitations of the existing evaluation benchmarks for measuring the quality of generative models. | model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | | --------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | ---------| | cerebras/Cerebras-GPT-13B | 0.36 | 0.598906 | 0.607735 | 0.593109 | 0.325939 | 0.749728 | 0.611621 | | EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | | dolly-v1-6b (1 epoch) | 0.428 | 0.608586 | 0.633781 | 0.650568 | 0.377133 | 0.761697 | 0.69633 | | dolly-v1-6b (10 epochs) | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | | EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | # Citation ``` @online{DatabricksBlog2023DollyV1, author = {Mike Conover and Matt Hayes and Ankit Mathur and Xiangrui Meng and Jianwei Xie and Jun Wan and Ali Ghodsi and Patrick Wendell and Matei Zaharia}, title = {Hello Dolly: Democratizing the magic of ChatGPT with open models}, year = {2023}, url = {https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html}, urldate = {2023-06-30} } ``` # Happy Hacking!
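Since this repository ships a bnb 4-bit quantization of the card above, a minimal loading sketch may be useful. It is an assumption-laden sketch, not part of the original card: it assumes the pre-quantized weights in `RichardErkhov/databricks_-_dolly-v1-6b-4bits` load directly through the standard AutoClasses with `bitsandbytes` installed, as serialized bnb checkpoints typically do.

```python
# Hedged sketch: loading the pre-quantized 4-bit checkpoint from this repo.
# Assumes bitsandbytes and accelerate are installed and a CUDA GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/databricks_-_dolly-v1-6b-4bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id, padding_side="left")
# For an already-quantized checkpoint the quantization config is stored with
# the weights, so no extra BitsAndBytesConfig should be needed here.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Prompt format mirrors the card's PROMPT_FORMAT shown above.
prompt = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n### Instruction:\n"
    "Explain what quantization does to a language model.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.92)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```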
{}
RichardErkhov/databricks_-_dolly-v1-6b-4bits
null
[ "transformers", "safetensors", "gptj", "text-generation", "arxiv:2212.10560", "arxiv:2104.09864", "arxiv:2203.02155", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-14T21:18:38+00:00
[ "2212.10560", "2104.09864", "2203.02155" ]
[]
TAGS #transformers #safetensors #gptj #text-generation #arxiv-2212.10560 #arxiv-2104.09864 #arxiv-2203.02155 #autotrain_compatible #endpoints_compatible #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models dolly-v1-6b - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: cc-by-nc-4.0 datasets: * tatsu-lab/alpaca language: * en library\_name: transformers inference: false --- dolly-v1-6b Model Card ====================== Dolly v2 Is Out! ---------------- Please try Dolly v2 instead: * URL * URL * URL Summary ------- Databricks’ 'dolly-v1-6b', a large language model (blog post) trained on the Databricks machine learning platform, demonstrates that a two-year-old open source model can, when subjected to just 30 minutes of fine-tuning on a focused corpus of 50k records (Stanford Alpaca), exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. We believe this finding is important because it demonstrates that the ability to create powerful artificial intelligence technologies is vastly more accessible than previously realized. Databricks is committed to ensuring that every organization and individual benefits from the transformative power of artificial intelligence. The Dolly model family represents our first steps along this journey, and we’re excited to share this technology with the world. Owner: Databricks, Inc. Model Overview -------------- 'dolly-v1-6b' is a 6 billion parameter causal language model created by Databricks that is derived from EleutherAI’s GPT-J (released June 2021) and fine-tuned on a ~52K record instruction corpus (Stanford Alpaca) (CC-NC-BY-4.0) consisting of question/answer pairs generated using the techniques outlined in the Self-Instruct paper. The original version of Dolly was trained using deepspeed ZeRO 3 on the Databricks Machine Learning Platform in just 30 minutes (1 epoch) using a single NDasrA100\_v4 machine with 8x A100 40GB GPUs. The most recent 'dolly-v1-6b' checkpoint was trained for 10 epochs on the same hardware. Like its base model, 'dolly-v1-6b' has six billion parameters consisting of 28 transformer layers with 16 attention heads each. It employs Rotary Position Embedding (RoPE) and shares the same tokenizer as GPT-3. GPT-J was trained on The Pile, a 400B token dataset of diverse documents designed primarily for text generation tasks. Known Limitations ----------------- 'dolly-v1-6b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpora. It is designed for academic or research purposes, and to encourage model and engineering experimentation. The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. In particular, 'dolly-v1-6b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Training Data, Bias & Objectionable Content ------------------------------------------- Like all language models, 'dolly-v1-6b' reflects the content and limitations of its training corpora. 
* The Pile: GPT-J’s pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations. * Stanford Alpaca: The instruction tuning corpus for dolly-6b can be assumed to share many of the limitations. In addition, it is known to contain factual inaccuracies, semantic and syntactic irregularities, nonsensical responses, and incorrect mathematical calculations, among other data shortcomings. The model outputs will reflect these limitations. Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations. Intended Uses ------------- 'dolly-v1-6b' is intended exclusively for research purposes. We do not recommend using 'dolly-v1-6b' in high-risk applications (e.g., educational or vocational training, product safety components, or other uses that may impact the well-being of individuals.) 'dolly-v1-6b' is differentiated from its predecessor, GPT-J, in the extent to which it exhibits coherent responses to natural language questions and instruction prompts. 'dolly-v1-6b' appears to exhibit reasonable (but categorically not state-of-the-art) behavior across a range of capabilities, including many of those outlined in the InstructGPT paper. We include evocative examples below, but invite the reader to observe that they contain factual inaccuracies or semantic misfires that illustrate some of the model’s shortcomings. ### Brainstorming What are considerations I should keep in mind when planning a backcountry backpacking trip? When planning a backcountry backpacking trip, there are several considerations to keep in mind. First, make sure you are prepared for any potential weather conditions, including extreme temperatures and lightning storms. Second, consider the route you will take and the terrain that you will be traversing. Third, make sure that you have enough food, water and gear to stay safe and comfortable during your backcountry journey. Fourth, plan for potential wildlife encounters and make sure you have the appropriate safety measures in place. Finally, have an emergency plan in place in case of a medical emergency. ### Classification Label each of the following as either a scientific concept or a product: Nikon D750, quantum entanglement, CRISPR, and a Macbook Pro. Nikon D750: Product Quantum entanglement: Scientific Concept CRISPR: Product Macbook Pro: Product ### Extraction Extract all the people and places from the following passage: Basquiat was born on December 22, 1960, in Park Slope, Brooklyn, New York City, the second of four children to Matilde Basquiat (née Andrades, 1934–2008) and Gérard Basquiat (1930–2013). He had an older brother, Max, who died shortly before his birth, and two younger sisters, Lisane (b. 1964) and Jeanine (b. 1967). His father was born in Port-au-Prince, Haiti and his mother was born in Brooklyn to Puerto Rican parents. He was raised Catholic. People: Basquiat, Matilde Basquiat, Gérard Basquiat, Max Basquiat, Lisane Basquiat, Jeanine Basquiat. Place: Park Slope, Brooklyn, New York City. 
### Generation Write a press release declaring the completion of Atlantis II, a facility designed for long-term human habitation at the bottom of the ocean. Be sure to mention some of its advanced technological features. Atlantis II, a new facility designed for long-term human habitation at the bottom of the ocean, is now complete! With advanced technological features, it will be a safe and comfortable living environment for generations to come. From self-sustaining ecosystems to advanced robotics, the facility offers a range of opportunities for exploration, discovery and exploration. Atlantis II will be a home for a wide range of species and will serve as a hub for ocean research, exploration and exploration, bringing together a diverse team of scientists, engineers, biologists, and divers to uncover the mysteries of the deep. ### Summarization Give me a one line summary of this: Coffee is one of the most widely consumed beverages in the world. It has primarily consumed due to its stimulant effect and unique taste since the ancient times. Afterwards, its consumption has been historically associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer and thus it has also consumed due to health benefits. It contains many bioactive compounds such as caffeine, chlorogenic acids and diterpenoid alcohols which have so far been associated with many potential health benefits. For example, caffeine reduces risk of developing neurodegenerative disease and chlorogenic acids (CGA) and diterpene alcohols have many health benefits such as antioxidant and chemo-preventive. Coffee also have harmful effects. For example, diterpenoid alcohols increases serum homocysteine and cholesterol levels and thus it has adverse effects on cardiovascular system. Overall, the study that supports the health benefits of coffee is increasing. But, it is thought-provoking that the association with health benefits of coffee consumption and frequency at different levels in each study. For this reason, we aimed to examine the health effect of the coffee and how much consumption is to investigate whether it meets the claimed health benefits. Coffee is one of the most widely consumed beverages in the world and has historically been associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer. However, there are mixed results in the literature and further research is required to determine the health benefits of coffee consumption. ### Rephrasing Give me a different way to say the following to a 4 year old: "Son, this is the last time I'm going to tell you. Go to bed!" "Goodnight, Son. Sleep tight and I'll see you in the morning!" ### Closed QA I'm going to give you a passage from the book Neuromancer and I'd like you to answer the following question: What is the tool that allows Case to access the matrix? Case was twenty-four. At twenty-two, he'd been a cowboy, a rustler, one of the best in the Sprawl. He'd been trained by the best, by McCoy Pauley and Bobby Quine, legends in the biz. He'd operated on an almost permanent adrenaline high, a byproduct of youth and proficiency, jacked into a custom cyberspace deck that projected his disembodied consciousness into the consensual hallucination that was the matrix. The tool that allows Case to access the matrix is his cyberspace deck, which allows him to manipulate and interact with the virtual world around him. 
It enables him to access information from various sources and interact with the people in the matrix, and it is one of the main weapons of Case's cyberworld. ### Open QA What is the literary style of Jorge Luis Borges? Jorge Luis Borges's literary style is often described as magical realism. He often creates stories that combine fictional characters with real life elements and use the juxtaposition of these elements to create a sense of surrealism. He often uses allegory and symbolism to explore complex philosophical ideas and explore questions of human perception and understanding. His writing is characterized by a playful and inventive use of language, with characters often being metaphors or symbols for larger, more abstract concepts. Usage ----- ### Load Model & Tokenizer ### Generate Text ### Benchmark Metrics Below you'll find various models' benchmark performance on the EleutherAI LLM Evaluation Harness; model results are sorted by geometric mean to produce an intelligible ordering. These results demonstrate that Dolly is not state of the art, as we describe above, but also point to an interesting observation. Namely, Dolly is only marginally better than (and in the case of Winogrande worse than) its base model, GPT-J-6B. Despite this fact, the qualitative behavior of Dolly is materially different from the underlying model (try it yourself on Hugging Face!), which points to meaningful limitations of the existing evaluation benchmarks for measuring the quality of generative models. Happy Hacking! ==============
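As a side note on the "sorted by geometric mean" ordering mentioned above, the computation is a one-liner; the sketch below feeds it the dolly-v1-6b (10 epochs) row from the benchmark table earlier in this card. Only `numpy` is assumed.

```python
# Geometric mean over the seven task accuracies, as used to order the
# benchmark table above (values: dolly-v1-6b, 10 epochs).
import numpy as np

scores = [0.41, 0.62963, 0.643252, 0.676758, 0.384812, 0.773667, 0.687768]
gmean = float(np.exp(np.mean(np.log(scores))))
print(f"geometric mean: {gmean:.6f}")  # ~0.5834, matching the reported ordering
```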
[ "### Brainstorming\n\n\nWhat are considerations I should keep in mind when planning a backcountry backpacking trip?\n\n\nWhen planning a backcountry backpacking trip, there are several considerations to keep in mind. First, make sure you are prepared for any potential weather conditions, including extreme temperatures and lightning storms. Second, consider the route you will take and the terrain that you will be traversing. Third, make sure that you have enough food, water and gear to stay safe and comfortable during your backcountry journey. Fourth, plan for potential wildlife encounters and make sure you have the appropriate safety measures in place. Finally, have an emergency plan in place in case of a medical emergency.", "### Classification\n\n\nLabel each of the following as either a scientific concept or a product: Nikon D750, quantum entanglement, CRISPR, and a Macbook Pro.\n\n\nNikon D750: Product\n\n\nQuantum entanglement: Scientific Concept\n\n\nCRISPR: Product\n\n\nMacbook Pro: Product", "### Extraction\n\n\nExtract all the people and places from the following passage:\nBasquiat was born on December 22, 1960, in Park Slope, Brooklyn, New York City, the second of four children to Matilde Basquiat (née Andrades, 1934–2008) and Gérard Basquiat (1930–2013). He had an older brother, Max, who died shortly before his birth, and two younger sisters, Lisane (b. 1964) and Jeanine (b. 1967). His father was born in Port-au-Prince, Haiti and his mother was born in Brooklyn to Puerto Rican parents. He was raised Catholic.\n\n\nPeople: Basquiat, Matilde Basquiat, Gérard Basquiat, Max Basquiat, Lisane Basquiat, Jeanine Basquiat.\n\n\nPlace: Park Slope, Brooklyn, New York City.", "### Generation\n\n\nWrite a press release declaring the completion of Atlantis II, a facility designed for long-term human habitation at the bottom of the ocean. Be sure to mention some of its advanced technological features.\n\n\nAtlantis II, a new facility designed for long-term human habitation at the bottom of the ocean, is now complete! With advanced technological features, it will be a safe and comfortable living environment for generations to come. From self-sustaining ecosystems to advanced robotics, the facility offers a range of opportunities for exploration, discovery and exploration.\n\n\nAtlantis II will be a home for a wide range of species and will serve as a hub for ocean research, exploration and exploration, bringing together a diverse team of scientists, engineers, biologists, and divers to uncover the mysteries of the deep.", "### Summarization\n\n\nGive me a one line summary of this:\nCoffee is one of the most widely consumed beverages in the world. It has primarily consumed due to its stimulant effect and unique taste since the ancient times. Afterwards, its consumption has been historically associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer and thus it has also consumed due to health benefits. It contains many bioactive compounds such as caffeine, chlorogenic acids and diterpenoid alcohols which have so far been associated with many potential health benefits. For example, caffeine reduces risk of developing neurodegenerative disease and chlorogenic acids (CGA) and diterpene alcohols have many health benefits such as antioxidant and chemo-preventive. Coffee also have harmful effects. 
For example, diterpenoid alcohols increases serum homocysteine and cholesterol levels and thus it has adverse effects on cardiovascular system. Overall, the study that supports the health benefits of coffee is increasing. But, it is thought-provoking that the association with health benefits of coffee consumption and frequency at different levels in each study. For this reason, we aimed to examine the health effect of the coffee and how much consumption is to investigate whether it meets the claimed health benefits.\n\n\nCoffee is one of the most widely consumed beverages in the world and has historically been associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer. However, there are mixed results in the literature and further research is required to determine the health benefits of coffee consumption.", "### Rephrasing\n\n\nGive me a different way to say the following to a 4 year old: \"Son, this is the last time I'm going to tell you. Go to bed!\"\n\n\n\"Goodnight, Son. Sleep tight and I'll see you in the morning!\"", "### Closed QA\n\n\nI'm going to give you a passage from the book Neuromancer and I'd like you to answer the following question: What is the tool that allows Case to access the matrix?\nCase was twenty-four. At twenty-two, he'd been a cowboy, a rustler, one of the best in the Sprawl. He'd been trained by the best, by McCoy Pauley and Bobby Quine, legends in the biz. He'd operated on an almost permanent adrenaline high, a byproduct of youth and proficiency, jacked into a custom cyberspace deck that projected his disembodied consciousness into the consensual hallucination that was the matrix.\n\n\nThe tool that allows Case to access the matrix is his cyberspace deck, which allows him to manipulate and interact with the virtual world around him. It enables him to access information from various sources and interact with the people in the matrix, and it is one of the main weapons of Case's cyberworld.", "### Open QA\n\n\nWhat is the literary style of Jorge Luis Borges?\n\n\nJorge Luis Borges's literary style is often described as magical realism. He often creates stories that combine fictional characters with real life elements and use the juxtaposition of these elements to create a sense of surrealism. He often uses allegory and symbolism to explore complex philosophical ideas and explore questions of human perception and understanding. His writing is characterized by a playful and inventive use of language, with characters often being metaphors or symbols for larger, more abstract concepts.\n\n\nUsage\n-----", "### Load Model & Tokenizer", "### Generate Text", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness\nmodel results are sorted by geometric mean to produce an intelligible ordering. These results demonstrate that Dolly is not state of the art, as we describe\nabove, but also point to an interesting observation. Namely, Dolly is only marginally better (and in the case of Winogrande worse) and its basemodel GPT-J-6B. \n\nDespite this fact, the qualitative behavior of Dolly is materially different from the underlying model (try it yourself on Hugging Face!),\nwhich points to meaningful limitations of the existing evaluation benchmarks for measuring the quality of generative models.\n\n\n\nHappy Hacking!\n==============" ]
[ "TAGS\n#transformers #safetensors #gptj #text-generation #arxiv-2212.10560 #arxiv-2104.09864 #arxiv-2203.02155 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n", "### Brainstorming\n\n\nWhat are considerations I should keep in mind when planning a backcountry backpacking trip?\n\n\nWhen planning a backcountry backpacking trip, there are several considerations to keep in mind. First, make sure you are prepared for any potential weather conditions, including extreme temperatures and lightning storms. Second, consider the route you will take and the terrain that you will be traversing. Third, make sure that you have enough food, water and gear to stay safe and comfortable during your backcountry journey. Fourth, plan for potential wildlife encounters and make sure you have the appropriate safety measures in place. Finally, have an emergency plan in place in case of a medical emergency.", "### Classification\n\n\nLabel each of the following as either a scientific concept or a product: Nikon D750, quantum entanglement, CRISPR, and a Macbook Pro.\n\n\nNikon D750: Product\n\n\nQuantum entanglement: Scientific Concept\n\n\nCRISPR: Product\n\n\nMacbook Pro: Product", "### Extraction\n\n\nExtract all the people and places from the following passage:\nBasquiat was born on December 22, 1960, in Park Slope, Brooklyn, New York City, the second of four children to Matilde Basquiat (née Andrades, 1934–2008) and Gérard Basquiat (1930–2013). He had an older brother, Max, who died shortly before his birth, and two younger sisters, Lisane (b. 1964) and Jeanine (b. 1967). His father was born in Port-au-Prince, Haiti and his mother was born in Brooklyn to Puerto Rican parents. He was raised Catholic.\n\n\nPeople: Basquiat, Matilde Basquiat, Gérard Basquiat, Max Basquiat, Lisane Basquiat, Jeanine Basquiat.\n\n\nPlace: Park Slope, Brooklyn, New York City.", "### Generation\n\n\nWrite a press release declaring the completion of Atlantis II, a facility designed for long-term human habitation at the bottom of the ocean. Be sure to mention some of its advanced technological features.\n\n\nAtlantis II, a new facility designed for long-term human habitation at the bottom of the ocean, is now complete! With advanced technological features, it will be a safe and comfortable living environment for generations to come. From self-sustaining ecosystems to advanced robotics, the facility offers a range of opportunities for exploration, discovery and exploration.\n\n\nAtlantis II will be a home for a wide range of species and will serve as a hub for ocean research, exploration and exploration, bringing together a diverse team of scientists, engineers, biologists, and divers to uncover the mysteries of the deep.", "### Summarization\n\n\nGive me a one line summary of this:\nCoffee is one of the most widely consumed beverages in the world. It has primarily consumed due to its stimulant effect and unique taste since the ancient times. Afterwards, its consumption has been historically associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer and thus it has also consumed due to health benefits. It contains many bioactive compounds such as caffeine, chlorogenic acids and diterpenoid alcohols which have so far been associated with many potential health benefits. 
For example, caffeine reduces risk of developing neurodegenerative disease and chlorogenic acids (CGA) and diterpene alcohols have many health benefits such as antioxidant and chemo-preventive. Coffee also have harmful effects. For example, diterpenoid alcohols increases serum homocysteine and cholesterol levels and thus it has adverse effects on cardiovascular system. Overall, the study that supports the health benefits of coffee is increasing. But, it is thought-provoking that the association with health benefits of coffee consumption and frequency at different levels in each study. For this reason, we aimed to examine the health effect of the coffee and how much consumption is to investigate whether it meets the claimed health benefits.\n\n\nCoffee is one of the most widely consumed beverages in the world and has historically been associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer. However, there are mixed results in the literature and further research is required to determine the health benefits of coffee consumption.", "### Rephrasing\n\n\nGive me a different way to say the following to a 4 year old: \"Son, this is the last time I'm going to tell you. Go to bed!\"\n\n\n\"Goodnight, Son. Sleep tight and I'll see you in the morning!\"", "### Closed QA\n\n\nI'm going to give you a passage from the book Neuromancer and I'd like you to answer the following question: What is the tool that allows Case to access the matrix?\nCase was twenty-four. At twenty-two, he'd been a cowboy, a rustler, one of the best in the Sprawl. He'd been trained by the best, by McCoy Pauley and Bobby Quine, legends in the biz. He'd operated on an almost permanent adrenaline high, a byproduct of youth and proficiency, jacked into a custom cyberspace deck that projected his disembodied consciousness into the consensual hallucination that was the matrix.\n\n\nThe tool that allows Case to access the matrix is his cyberspace deck, which allows him to manipulate and interact with the virtual world around him. It enables him to access information from various sources and interact with the people in the matrix, and it is one of the main weapons of Case's cyberworld.", "### Open QA\n\n\nWhat is the literary style of Jorge Luis Borges?\n\n\nJorge Luis Borges's literary style is often described as magical realism. He often creates stories that combine fictional characters with real life elements and use the juxtaposition of these elements to create a sense of surrealism. He often uses allegory and symbolism to explore complex philosophical ideas and explore questions of human perception and understanding. His writing is characterized by a playful and inventive use of language, with characters often being metaphors or symbols for larger, more abstract concepts.\n\n\nUsage\n-----", "### Load Model & Tokenizer", "### Generate Text", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness\nmodel results are sorted by geometric mean to produce an intelligible ordering. These results demonstrate that Dolly is not state of the art, as we describe\nabove, but also point to an interesting observation. Namely, Dolly is only marginally better (and in the case of Winogrande worse) and its basemodel GPT-J-6B. 
\n\nDespite this fact, the qualitative behavior of Dolly is materially different from the underlying model (try it yourself on Hugging Face!),\nwhich points to meaningful limitations of the existing evaluation benchmarks for measuring the quality of generative models.\n\n\n\nHappy Hacking!\n==============" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
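The card above leaves its getting-started snippet as "More Information Needed"; a minimal, non-authoritative sketch is simply the generic `transformers` loading pattern. Everything here is an assumption drawn from this entry's own tags (a `stablelm` text-generation checkpoint at `Grayx/sad_pepe_9` loadable through the standard AutoClasses); nothing comes from the model authors.

```python
# Hedged sketch only: the card provides no usage details, so this is the
# generic transformers loading pattern, assuming the checkpoint loads
# through the standard AutoClasses as the repo tags suggest.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Grayx/sad_pepe_9"  # repo id from this entry

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```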
{"library_name": "transformers", "tags": []}
Grayx/sad_pepe_9
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:19:44+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) dolly-v2-12b - bnb 8bits - Model creator: https://huggingface.co/databricks/ - Original model: https://huggingface.co/databricks/dolly-v2-12b/ Original model description: --- license: mit language: - en library_name: transformers inference: false datasets: - databricks/databricks-dolly-15k --- # dolly-v2-12b Model Card ## Summary Databricks' `dolly-v2-12b` is an instruction-following large language model trained on the Databricks machine learning platform and licensed for commercial use. Based on `pythia-12b`, Dolly is trained on ~15k instruction/response fine-tuning records [`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. `dolly-v2-12b` is not a state-of-the-art model, but does exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. Dolly v2 is also available in these smaller model sizes: * [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter model based on `pythia-6.9b` * [dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b), a 2.8 billion parameter model based on `pythia-2.8b` Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on running inference for various GPU configurations. **Owner**: Databricks, Inc. ## Model Overview `dolly-v2-12b` is a 12 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from [EleutherAI's](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA). ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed. In a Databricks notebook you could run: ```python %pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2" ``` The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline` found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required. Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality. It is also fine to remove it if there is sufficient memory. 
```python import torch from transformers import pipeline generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") ``` You can then use the pipeline to answer instructions: ```python res = generate_text("Explain to me the difference between nuclear fission and fusion.") print(res[0]["generated_text"]) ``` Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ```python import torch from instruct_pipeline import InstructionTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b", padding_side="left") model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b", device_map="auto", torch_dtype=torch.bfloat16) generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer) ``` ### LangChain Usage To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned and the default for the pipeline is to only return the new text. ```python import torch from transformers import pipeline generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", return_full_text=True) ``` You can create a prompt that either has only an instruction or has an instruction with context: ```python from langchain import PromptTemplate, LLMChain from langchain.llms import HuggingFacePipeline # template for an instruction with no input prompt = PromptTemplate( input_variables=["instruction"], template="{instruction}") # template for an instruction with input prompt_with_context = PromptTemplate( input_variables=["instruction", "context"], template="{instruction}\n\nInput:\n{context}") hf_pipeline = HuggingFacePipeline(pipeline=generate_text) llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt) llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context) ``` Example predicting using a simple instruction: ```python print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip()) ``` Example predicting using an instruction with context: ```python context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman, and Founding Father who served as the first president of the United States from 1789 to 1797.""" print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip()) ``` ## Known Limitations ### Performance Limitations **`dolly-v2-12b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpora. The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. 
In particular, `dolly-v2-12b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that `dolly-v2-12b` does not have some capabilities, such as well-formatted letter writing, present in the original model. ### Dataset Limitations Like all language models, `dolly-v2-12b` reflects the content and limitations of its training corpora. - **The Pile**: Pythia's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations. - **`databricks-dolly-15k`**: The training data on which `dolly-v2-12b` is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large. Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations. ### Benchmark Metrics Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness); model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-12b` is not state of the art, and in fact underperforms `dolly-v1-6b` in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine-tuning datasets, but a robust statement as to the sources of these variations requires further study. 
| model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean | | --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------| | EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 | | EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 | | databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 | | EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 | | EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 | | databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 | | databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 | | databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 | | EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 | # Citation ``` @online{DatabricksBlog2023DollyV2, author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin}, title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM}, year = {2023}, url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}, urldate = {2023-06-30} } ``` # Happy Hacking!
{}
RichardErkhov/databricks_-_dolly-v2-12b-8bits
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-14T21:20:15+00:00
[]
[]
TAGS #transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov.

Github
Discord
Request more models

dolly-v2-12b - bnb 8bits

* Model creator: URL
* Original model: URL

Original model description:
---------------------------

license: mit
language:

* en
library\_name: transformers
inference: false
datasets:

* databricks/databricks-dolly-15k

---

dolly-v2-12b Model Card
=======================

Summary
-------

Databricks' 'dolly-v2-12b', an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. Based on 'pythia-12b', Dolly is trained on ~15k instruction/response fine tuning records 'databricks-dolly-15k' generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. 'dolly-v2-12b' is not a state-of-the-art model, but does exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based.

Dolly v2 is also available in these smaller model sizes:

* dolly-v2-7b, a 6.9 billion parameter model based on 'pythia-6.9b'
* dolly-v2-3b, a 2.8 billion parameter model based on 'pythia-2.8b'

Please refer to the dolly GitHub repo for tips on running inference for various GPU configurations.

Owner: Databricks, Inc.

Model Overview
--------------

'dolly-v2-12b' is a 12 billion parameter causal language model created by Databricks that is derived from EleutherAI's Pythia-12b and fine-tuned on a ~15K record instruction corpus generated by Databricks employees and released under a permissive license (CC-BY-SA).

Usage
-----

To use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' and 'accelerate' libraries installed. In a Databricks notebook you could run:

The instruction following pipeline can be loaded using the 'pipeline' function as shown below. This loads a custom 'InstructionTextGenerationPipeline' found in the model repo here, which is why 'trust\_remote\_code=True' is required. Including 'torch\_dtype=torch.bfloat16' is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality. It is also fine to remove it if there is sufficient memory.

You can then use the pipeline to answer instructions:

Alternatively, if you prefer not to use 'trust\_remote\_code=True' you can download instruct\_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:

### LangChain Usage

To use the pipeline with LangChain, you must set 'return\_full\_text=True', as LangChain expects the full text to be returned and the default for the pipeline is to only return the new text.

You can create a prompt that either has only an instruction or has an instruction with context:

Example predicting using a simple instruction:

Example predicting using an instruction with context:

Known Limitations
-----------------

### Performance Limitations

'dolly-v2-12b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses.

The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, 'dolly-v2-12b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that 'dolly-v2-12b' does not have some capabilities, such as well-formatted letter writing, present in the original model.

### Dataset Limitations

Like all language models, 'dolly-v2-12b' reflects the content and limitations of its training corpuses.

* The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations.
* 'databricks-dolly-15k': The training data on which 'dolly-v2-12b' is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.

Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations.

### Benchmark Metrics

Below you'll find various models' benchmark performance on the EleutherAI LLM Evaluation Harness; model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that 'dolly-v2-12b' is not state of the art, and in fact underperforms 'dolly-v1-6b' in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine tuning datasets, but a robust statement as to the sources of these variations requires further study.

Happy Hacking!
==============
[ "### LangChain Usage\n\n\nTo use the pipeline with LangChain, you must set 'return\\_full\\_text=True', as LangChain expects the full text to be returned\nand the default for the pipeline is to only return the new text.\n\n\nYou can create a prompt that either has only an instruction or has an instruction with context:\n\n\nExample predicting using a simple instruction:\n\n\nExample predicting using an instruction with context:\n\n\nKnown Limitations\n-----------------", "### Performance Limitations\n\n\n'dolly-v2-12b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform\ncompetitively with more modern model architectures or models subject to larger pretraining corpuses.\n\n\nThe Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. \n\nIn particular, 'dolly-v2-12b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,\ndates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.\nMoreover, we find that 'dolly-v2-12b' does not have some capabilities, such as well-formatted letter writing, present in the original model.", "### Dataset Limitations\n\n\nLike all language models, 'dolly-v2-12b' reflects the content and limitations of its training corpuses.\n\n\n* The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,\nit contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly\nin the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit\nassociations.\n* 'databricks-dolly-15k': The training data on which 'dolly-v2-12b' is instruction tuned represents natural language instructions generated\nby Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages\nfor instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or\npersonally identifying information about non-public figures, but it may contain typos and factual errors.\nThe dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects\nthe interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.\n\n\nDatabricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that\nmaximize the potential of all individuals and organizations.", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness;\nmodel results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that 'dolly-v2-12b' is not state of the art,\nand in fact underperforms 'dolly-v1-6b' in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine tuning datasets,\nbut a robust statement as to the sources of these variations requires further study.\n\n\n\nHappy Hacking!\n==============" ]
[ "TAGS\n#transformers #safetensors #gpt_neox #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "### LangChain Usage\n\n\nTo use the pipeline with LangChain, you must set 'return\\_full\\_text=True', as LangChain expects the full text to be returned\nand the default for the pipeline is to only return the new text.\n\n\nYou can create a prompt that either has only an instruction or has an instruction with context:\n\n\nExample predicting using a simple instruction:\n\n\nExample predicting using an instruction with context:\n\n\nKnown Limitations\n-----------------", "### Performance Limitations\n\n\n'dolly-v2-12b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform\ncompetitively with more modern model architectures or models subject to larger pretraining corpuses.\n\n\nThe Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. \n\nIn particular, 'dolly-v2-12b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,\ndates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.\nMoreover, we find that 'dolly-v2-12b' does not have some capabilities, such as well-formatted letter writing, present in the original model.", "### Dataset Limitations\n\n\nLike all language models, 'dolly-v2-12b' reflects the content and limitations of its training corpuses.\n\n\n* The Pile: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,\nit contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly\nin the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit\nassociations.\n* 'databricks-dolly-15k': The training data on which 'dolly-v2-12b' is instruction tuned represents natural language instructions generated\nby Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages\nfor instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or\npersonally identifying information about non-public figures, but it may contain typos and factual errors.\nThe dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects\nthe interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.\n\n\nDatabricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that\nmaximize the potential of all individuals and organizations.", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness;\nmodel results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that 'dolly-v2-12b' is not state of the art,\nand in fact underperforms 'dolly-v1-6b' in some evaluation benchmarks. 
We believe this owes to the composition and size of the underlying fine tuning datasets,\nbut a robust statement as to the sources of these variations requires further study.\n\n\n\nHappy Hacking!\n==============" ]
null
mlx
# GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.5-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.5`](https://huggingface.co/GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.5).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.5) for more details on the model.
## Use with mlx

```bash
pip install gbx-lm
```

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.5-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
{"license": "apache-2.0", "tags": ["mlx"]}
GreenBitAI/Llama-2-7B-Chat-layer-mix-bpw-2.5-mlx
null
[ "mlx", "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2024-04-14T21:20:27+00:00
[]
[]
TAGS #mlx #safetensors #llama #license-apache-2.0 #region-us
# GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.5-mlx This quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.5'](). Refer to the original model card for more details on the model. ## Use with mlx
[ "# GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.5-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.5']().\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #llama #license-apache-2.0 #region-us \n", "# GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.5-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.5']().\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
null
mlx
# GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.2-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.2`](https://huggingface.co/GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.2).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.2) for more details on the model.
## Use with mlx

```bash
pip install gbx-lm
```

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.2-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
{"license": "apache-2.0", "tags": ["mlx"]}
GreenBitAI/Llama-2-7B-Chat-layer-mix-bpw-2.2-mlx
null
[ "mlx", "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2024-04-14T21:20:27+00:00
[]
[]
TAGS #mlx #safetensors #llama #license-apache-2.0 #region-us
# GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.2-mlx This quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.2'](). Refer to the original model card for more details on the model. ## Use with mlx
[ "# GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.2-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.2']().\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #llama #license-apache-2.0 #region-us \n", "# GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.2-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Llama-2-7B-chat-layer-mix-bpw-2.2']().\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/fangloveskari/ORCA_LLaMA_70B_QLoRA <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF/resolve/main/ORCA_LLaMA_70B_QLoRA.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
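To complement the Usage section above: the multi-part files in the table (the Q6_K and Q8_0 rows) are plain byte-level splits of a single GGUF file, so the parts only need to be concatenated in order before use, as the linked READMEs describe. A minimal sketch (file names taken from the Q6_K row; adjust for other quants):

```python
import shutil

# The split parts must be joined in the order of their "partXofY" suffixes.
parts = [
    "ORCA_LLaMA_70B_QLoRA.Q6_K.gguf.part1of2",
    "ORCA_LLaMA_70B_QLoRA.Q6_K.gguf.part2of2",
]

# Stream the parts into one GGUF file without loading them fully into memory.
with open("ORCA_LLaMA_70B_QLoRA.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```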
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "llama2", "library_name": "transformers", "base_model": "fangloveskari/ORCA_LLaMA_70B_QLoRA", "quantized_by": "mradermacher"}
mradermacher/ORCA_LLaMA_70B_QLoRA-GGUF
null
[ "transformers", "gguf", "en", "base_model:fangloveskari/ORCA_LLaMA_70B_QLoRA", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:20:52+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-fangloveskari/ORCA_LLaMA_70B_QLoRA #license-llama2 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-fangloveskari/ORCA_LLaMA_70B_QLoRA #license-llama2 #endpoints_compatible #region-us \n" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [K00B404/Merged_Beowolx-CodePro_Medusa2-7B-Mistral-I-v0-2](https://huggingface.co/K00B404/Merged_Beowolx-CodePro_Medusa2-7B-Mistral-I-v0-2) * [EmbeddedLLM/Mistral-7B-Merge-14-v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1 layer_range: [0, 32] - model: K00B404/Merged_Beowolx-CodePro_Medusa2-7B-Mistral-I-v0-2 layer_range: [0, 32] merge_method: slerp base_model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
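To reproduce a merge like this, the YAML above can be saved to a file and passed to mergekit's `mergekit-yaml` entry point. A minimal sketch, assuming `mergekit` is installed (e.g. `pip install mergekit`) and the configuration is saved as `config.yaml`; exact flags can vary between mergekit versions:

```python
import subprocess

# Run mergekit's CLI on the configuration shown above; the merged model
# is written to the given output directory.
subprocess.run(
    ["mergekit-yaml", "config.yaml", "./merged-model"],
    check=True,  # raise CalledProcessError if the merge fails
)
```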
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["K00B404/Merged_Beowolx-CodePro_Medusa2-7B-Mistral-I-v0-2", "EmbeddedLLM/Mistral-7B-Merge-14-v0.1"]}
K00B404/Merged_Beowolx-CodePro_Medusa2-14X-7B-Mistral-I-v0-2
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:K00B404/Merged_Beowolx-CodePro_Medusa2-7B-Mistral-I-v0-2", "base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0.1", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-14T21:21:09+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-K00B404/Merged_Beowolx-CodePro_Medusa2-7B-Mistral-I-v0-2 #base_model-EmbeddedLLM/Mistral-7B-Merge-14-v0.1 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * K00B404/Merged_Beowolx-CodePro_Medusa2-7B-Mistral-I-v0-2 * EmbeddedLLM/Mistral-7B-Merge-14-v0.1 ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* K00B404/Merged_Beowolx-CodePro_Medusa2-7B-Mistral-I-v0-2\n* EmbeddedLLM/Mistral-7B-Merge-14-v0.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-K00B404/Merged_Beowolx-CodePro_Medusa2-7B-Mistral-I-v0-2 #base_model-EmbeddedLLM/Mistral-7B-Merge-14-v0.1 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* K00B404/Merged_Beowolx-CodePro_Medusa2-7B-Mistral-I-v0-2\n* EmbeddedLLM/Mistral-7B-Merge-14-v0.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
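Since the "How to Get Started with the Model" section above is still a placeholder, here is a generic, hedged sketch inferred only from the repository tags (`wav2vec2`, `automatic-speech-recognition`); the audio path is purely illustrative and nothing here is confirmed by the card itself:

```python
from transformers import pipeline

# Hypothetical usage inferred from the repo tags; not an official example.
asr = pipeline(
    "automatic-speech-recognition",
    model="SpideyDLK/wav2vec2-large-xls-r-300m-sinhala-original-split-part3",
)

# "sample.wav" is a placeholder path to a local audio recording.
print(asr("sample.wav")["text"])
```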
{"library_name": "transformers", "tags": []}
SpideyDLK/wav2vec2-large-xls-r-300m-sinhala-original-split-part3
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:22:24+00:00
[ "1910.09700" ]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


dolly-v1-6b - bnb 8bits
- Model creator: https://huggingface.co/databricks/
- Original model: https://huggingface.co/databricks/dolly-v1-6b/


Original model description:
---
license: cc-by-nc-4.0
datasets:
- tatsu-lab/alpaca
language:
- en
library_name: transformers
inference: false
---
# dolly-v1-6b Model Card
## Dolly v2 Is Out!

Please try Dolly v2 instead:
- https://huggingface.co/databricks/dolly-v2-12b
- https://huggingface.co/databricks/dolly-v2-7b
- https://huggingface.co/databricks/dolly-v2-3b

## Summary

Databricks’ `dolly-v1-6b`, a large language model ([blog post](https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html)) trained on the Databricks machine learning platform, demonstrates that a two-year-old [open source model](https://huggingface.co/EleutherAI/gpt-j-6B) can, when subjected to just 30 minutes of fine tuning on a focused corpus of 50k records ([Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html)), exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. We believe this finding is important because it demonstrates that the ability to create powerful artificial intelligence technologies is vastly more accessible than previously realized.

Databricks is committed to ensuring that every organization and individual benefits from the transformative power of artificial intelligence. The Dolly model family represents our first steps along this journey, and we’re excited to share this technology with the world.

**Owner**: Databricks, Inc.

## Model Overview
`dolly-v1-6b` is a 6 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from [EleutherAI’s](https://www.eleuther.ai/) [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B) (released June 2021) and fine-tuned on a ~52K record instruction corpus ([Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html)) (CC-BY-NC-4.0) consisting of question/answer pairs generated using the techniques outlined in the [Self-Instruct](https://arxiv.org/abs/2212.10560) paper.

The [original version](https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html) of Dolly was trained using [deepspeed](https://github.com/microsoft/DeepSpeed) [ZeRO 3](https://github.com/microsoft/DeepSpeed/blob/master/docs/code-docs/source/zero3.rst) on the [Databricks Machine Learning Platform](https://www.databricks.com/product/machine-learning) in just 30 minutes (1 epoch) using a single [NDasrA100_v4](https://learn.microsoft.com/en-us/azure/virtual-machines/nda100-v4-series) machine with 8x A100 40GB GPUs. The most recent `dolly-v1-6b` checkpoint was trained for 10 epochs on the same hardware.

Like its base model, `dolly-v1-6b` has six billion parameters consisting of 28 transformer layers with 16 attention heads each. It employs [Rotary Position Embedding](https://arxiv.org/abs/2104.09864) (RoPE) and shares the same tokenizer as GPT-3. GPT-J was trained on [The Pile](https://huggingface.co/datasets/the_pile), a 400B token dataset of diverse documents designed primarily for text generation tasks.
## Known Limitations
**`dolly-v1-6b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses. **It is designed for academic or research purposes, and to encourage model and engineering experimentation.**

The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. In particular, `dolly-v1-6b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.

## Training Data, Bias & Objectionable Content
Like all language models, `dolly-v1-6b` reflects the content and limitations of its training corpuses.

- **The Pile**: GPT-J’s pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations.

- **Stanford Alpaca**: The instruction tuning corpus for `dolly-v1-6b` can be assumed to share many of the same limitations. In addition, it is known to contain factual inaccuracies, semantic and syntactic irregularities, nonsensical responses, and incorrect mathematical calculations, among other data shortcomings. The model outputs will reflect these limitations.

Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations.

## Intended Uses
**`dolly-v1-6b` is intended exclusively for research purposes. We do not recommend using `dolly-v1-6b` in high-risk applications (e.g., educational or vocational training, product safety components, or other uses that may impact the well-being of individuals.)**

`dolly-v1-6b` is differentiated from its predecessor, GPT-J, in the extent to which it exhibits coherent responses to natural language questions and instruction prompts. `dolly-v1-6b` appears to exhibit reasonable (but categorically not state-of-the-art) behavior across a range of capabilities, including many of those outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper.

We include evocative examples below, but invite the reader to observe that they contain factual inaccuracies or semantic misfires that illustrate some of the model’s shortcomings.

### Brainstorming

**What are considerations I should keep in mind when planning a backcountry backpacking trip?**

When planning a backcountry backpacking trip, there are several considerations to keep in mind. First, make sure you are prepared for any potential weather conditions, including extreme temperatures and lightning storms. Second, consider the route you will take and the terrain that you will be traversing. Third, make sure that you have enough food, water and gear to stay safe and comfortable during your backcountry journey. Fourth, plan for potential wildlife encounters and make sure you have the appropriate safety measures in place.
Finally, have an emergency plan in place in case of a medical emergency. ### Classification **Label each of the following as either a scientific concept or a product: Nikon D750, quantum entanglement, CRISPR, and a Macbook Pro.** Nikon D750: Product Quantum entanglement: Scientific Concept CRISPR: Product Macbook Pro: Product ### Extraction **Extract all the people and places from the following passage: Basquiat was born on December 22, 1960, in Park Slope, Brooklyn, New York City, the second of four children to Matilde Basquiat (née Andrades, 1934–2008) and Gérard Basquiat (1930–2013). He had an older brother, Max, who died shortly before his birth, and two younger sisters, Lisane (b. 1964) and Jeanine (b. 1967). His father was born in Port-au-Prince, Haiti and his mother was born in Brooklyn to Puerto Rican parents. He was raised Catholic.** People: Basquiat, Matilde Basquiat, Gérard Basquiat, Max Basquiat, Lisane Basquiat, Jeanine Basquiat. Place: Park Slope, Brooklyn, New York City. ### Generation **Write a press release declaring the completion of Atlantis II, a facility designed for long-term human habitation at the bottom of the ocean. Be sure to mention some of its advanced technological features.** Atlantis II, a new facility designed for long-term human habitation at the bottom of the ocean, is now complete! With advanced technological features, it will be a safe and comfortable living environment for generations to come. From self-sustaining ecosystems to advanced robotics, the facility offers a range of opportunities for exploration, discovery and exploration. Atlantis II will be a home for a wide range of species and will serve as a hub for ocean research, exploration and exploration, bringing together a diverse team of scientists, engineers, biologists, and divers to uncover the mysteries of the deep. ### Summarization **Give me a one line summary of this: Coffee is one of the most widely consumed beverages in the world. It has primarily consumed due to its stimulant effect and unique taste since the ancient times. Afterwards, its consumption has been historically associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer and thus it has also consumed due to health benefits. It contains many bioactive compounds such as caffeine, chlorogenic acids and diterpenoid alcohols which have so far been associated with many potential health benefits. For example, caffeine reduces risk of developing neurodegenerative disease and chlorogenic acids (CGA) and diterpene alcohols have many health benefits such as antioxidant and chemo-preventive. Coffee also have harmful effects. For example, diterpenoid alcohols increases serum homocysteine and cholesterol levels and thus it has adverse effects on cardiovascular system. Overall, the study that supports the health benefits of coffee is increasing. But, it is thought-provoking that the association with health benefits of coffee consumption and frequency at different levels in each study. For this reason, we aimed to examine the health effect of the coffee and how much consumption is to investigate whether it meets the claimed health benefits.** Coffee is one of the most widely consumed beverages in the world and has historically been associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer. 
However, there are mixed results in the literature and further research is required to determine the health benefits of coffee consumption. ### Rephrasing **Give me a different way to say the following to a 4 year old: "Son, this is the last time I'm going to tell you. Go to bed!"** "Goodnight, Son. Sleep tight and I'll see you in the morning!" ### Closed QA **I'm going to give you a passage from the book Neuromancer and I'd like you to answer the following question: What is the tool that allows Case to access the matrix? Case was twenty-four. At twenty-two, he'd been a cowboy, a rustler, one of the best in the Sprawl. He'd been trained by the best, by McCoy Pauley and Bobby Quine, legends in the biz. He'd operated on an almost permanent adrenaline high, a byproduct of youth and proficiency, jacked into a custom cyberspace deck that projected his disembodied consciousness into the consensual hallucination that was the matrix.** The tool that allows Case to access the matrix is his cyberspace deck, which allows him to manipulate and interact with the virtual world around him. It enables him to access information from various sources and interact with the people in the matrix, and it is one of the main weapons of Case's cyberworld. ### Open QA **What is the literary style of Jorge Luis Borges?** Jorge Luis Borges's literary style is often described as magical realism. He often creates stories that combine fictional characters with real life elements and use the juxtaposition of these elements to create a sense of surrealism. He often uses allegory and symbolism to explore complex philosophical ideas and explore questions of human perception and understanding. His writing is characterized by a playful and inventive use of language, with characters often being metaphors or symbols for larger, more abstract concepts. ## Usage ### Load Model & Tokenizer ```python import numpy as np from transformers import ( AutoModelForCausalLM, AutoTokenizer, PreTrainedModel, PreTrainedTokenizer ) tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v1-6b", padding_side="left") model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v1-6b", device_map="auto", trust_remote_code=True) ``` ### Generate Text ```python PROMPT_FORMAT = """Below is an instruction that describes a task. Write a response that appropriately completes the request. 
### Instruction:
{instruction}

### Response:
"""


def generate_response(instruction: str, *, model: PreTrainedModel, tokenizer: PreTrainedTokenizer,
                      do_sample: bool = True, max_new_tokens: int = 256, top_p: float = 0.92, top_k: int = 0, **kwargs) -> str:
    input_ids = tokenizer(PROMPT_FORMAT.format(instruction=instruction), return_tensors="pt").input_ids.to("cuda")

    # each of these is encoded to a single token
    response_key_token_id = tokenizer.encode("### Response:")[0]
    end_key_token_id = tokenizer.encode("### End")[0]

    gen_tokens = model.generate(input_ids, pad_token_id=tokenizer.pad_token_id, eos_token_id=end_key_token_id,
                                do_sample=do_sample, max_new_tokens=max_new_tokens, top_p=top_p, top_k=top_k, **kwargs)[0].cpu()

    # find where the response begins; fall through (return None) if the response key was never generated
    response_positions = np.where(gen_tokens == response_key_token_id)[0]

    if len(response_positions) > 0:
        response_pos = response_positions[0]

        # find where the response ends
        end_pos = None
        end_positions = np.where(gen_tokens == end_key_token_id)[0]
        if len(end_positions) > 0:
            end_pos = end_positions[0]

        return tokenizer.decode(gen_tokens[response_pos + 1 : end_pos]).strip()

    return None

# Sample similar to: "Excited to announce the release of Dolly, a powerful new language model from Databricks! #AI #Databricks"
generate_response("Write a tweet announcing Dolly, a large language model from Databricks.", model=model, tokenizer=tokenizer)
```

### Benchmark Metrics

Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness); model results are sorted by geometric mean to produce an intelligible ordering. These results demonstrate that Dolly is not state of the art, as we describe above, but also point to an interesting observation. Namely, Dolly is only marginally better (and in the case of Winogrande worse) than its base model, GPT-J-6B. Despite this fact, the qualitative behavior of Dolly is materially different from the underlying model ([try it yourself](https://huggingface.co/EleutherAI/gpt-j-6B) on Hugging Face!), which points to meaningful limitations of the existing evaluation benchmarks for measuring the quality of generative models.

| model                       | openbookqa   | arc_easy   | winogrande   | hellaswag   | arc_challenge   | piqa     | boolq    |
| --------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | ---------|
| cerebras/Cerebras-GPT-13B   | 0.36         | 0.598906   | 0.607735     | 0.593109    | 0.325939        | 0.749728 | 0.611621 |
| EleutherAI/gpt-j-6B         | 0.382        | 0.621633   | 0.651144     | 0.662617    | 0.363481        | 0.761153 | 0.655963 |
| dolly-v1-6b (1 epoch)       | 0.428        | 0.608586   | 0.633781     | 0.650568    | 0.377133        | 0.761697 | 0.69633  |
| dolly-v1-6b (10 epochs)     | 0.41         | 0.62963    | 0.643252     | 0.676758    | 0.384812        | 0.773667 | 0.687768 |
| EleutherAI/gpt-neox-20b     | 0.402        | 0.683923   | 0.656669     | 0.7142      | 0.408703        | 0.784004 | 0.695413 |

# Citation

```
@online{DatabricksBlog2023DollyV1,
    author    = {Mike Conover and Matt Hayes and Ankit Mathur and Xiangrui Meng and Jianwei Xie and Jun Wan and Ali Ghodsi and Patrick Wendell and Matei Zaharia},
    title     = {Hello Dolly: Democratizing the magic of ChatGPT with open models},
    year      = {2023},
    url       = {https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html},
    urldate   = {2023-06-30}
}
```

# Happy Hacking!
{}
RichardErkhov/databricks_-_dolly-v1-6b-8bits
null
[ "transformers", "safetensors", "gptj", "text-generation", "arxiv:2212.10560", "arxiv:2104.09864", "arxiv:2203.02155", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-04-14T21:23:02+00:00
[ "2212.10560", "2104.09864", "2203.02155" ]
[]
TAGS #transformers #safetensors #gptj #text-generation #arxiv-2212.10560 #arxiv-2104.09864 #arxiv-2203.02155 #autotrain_compatible #endpoints_compatible #8-bit #region-us
Quantization made by Richard Erkhov.

Github
Discord
Request more models

dolly-v1-6b - bnb 8bits

* Model creator: URL
* Original model: URL

Original model description:
---------------------------

license: cc-by-nc-4.0
datasets:

* tatsu-lab/alpaca
language:

* en
library\_name: transformers
inference: false

---

dolly-v1-6b Model Card
======================

Dolly v2 Is Out!
----------------

Please try Dolly v2 instead:

* URL
* URL
* URL

Summary
-------

Databricks’ 'dolly-v1-6b', a large language model (blog post) trained on the Databricks machine learning platform, demonstrates that a two-year-old open source model can, when subjected to just 30 minutes of fine tuning on a focused corpus of 50k records (Stanford Alpaca), exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. We believe this finding is important because it demonstrates that the ability to create powerful artificial intelligence technologies is vastly more accessible than previously realized.

Databricks is committed to ensuring that every organization and individual benefits from the transformative power of artificial intelligence. The Dolly model family represents our first steps along this journey, and we’re excited to share this technology with the world.

Owner: Databricks, Inc.

Model Overview
--------------

'dolly-v1-6b' is a 6 billion parameter causal language model created by Databricks that is derived from EleutherAI’s GPT-J (released June 2021) and fine-tuned on a ~52K record instruction corpus (Stanford Alpaca) (CC-BY-NC-4.0) consisting of question/answer pairs generated using the techniques outlined in the Self-Instruct paper. The original version of Dolly was trained using deepspeed ZeRO 3 on the Databricks Machine Learning Platform in just 30 minutes (1 epoch) using a single NDasrA100\_v4 machine with 8x A100 40GB GPUs. The most recent 'dolly-v1-6b' checkpoint was trained for 10 epochs on the same hardware.

Like its base model, 'dolly-v1-6b' has six billion parameters consisting of 28 transformer layers with 16 attention heads each. It employs Rotary Position Embedding (RoPE) and shares the same tokenizer as GPT-3. GPT-J was trained on The Pile, a 400B token dataset of diverse documents designed primarily for text generation tasks.

Known Limitations
-----------------

'dolly-v1-6b' is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses. It is designed for academic or research purposes, and to encourage model and engineering experimentation.

The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. In particular, 'dolly-v1-6b' struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.

Training Data, Bias & Objectionable Content
-------------------------------------------

Like all language models, 'dolly-v1-6b' reflects the content and limitations of its training corpuses.
* The Pile: GPT-J’s pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations.
* Stanford Alpaca: The instruction tuning corpus for dolly-6b can be assumed to share many of these limitations. In addition, it is known to contain factual inaccuracies, semantic and syntactic irregularities, nonsensical responses, and incorrect mathematical calculations, among other data shortcomings. The model outputs will reflect these limitations.

Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations.

Intended Uses
-------------

'dolly-v1-6b' is intended exclusively for research purposes. We do not recommend using 'dolly-v1-6b' in high-risk applications (e.g., educational or vocational training, product safety components, or other uses that may impact the well-being of individuals).

'dolly-v1-6b' is differentiated from its predecessor, GPT-J, in the extent to which it exhibits coherent responses to natural language questions and instruction prompts. 'dolly-v1-6b' appears to exhibit reasonable (but categorically not state-of-the-art) behavior across a range of capabilities, including many of those outlined in the InstructGPT paper.

We include evocative examples below, but invite the reader to observe that they contain factual inaccuracies or semantic misfires that illustrate some of the model’s shortcomings.

### Brainstorming

What are considerations I should keep in mind when planning a backcountry backpacking trip?

When planning a backcountry backpacking trip, there are several considerations to keep in mind. First, make sure you are prepared for any potential weather conditions, including extreme temperatures and lightning storms. Second, consider the route you will take and the terrain that you will be traversing. Third, make sure that you have enough food, water and gear to stay safe and comfortable during your backcountry journey. Fourth, plan for potential wildlife encounters and make sure you have the appropriate safety measures in place. Finally, have an emergency plan in place in case of a medical emergency.

### Classification

Label each of the following as either a scientific concept or a product: Nikon D750, quantum entanglement, CRISPR, and a Macbook Pro.

Nikon D750: Product
Quantum entanglement: Scientific Concept
CRISPR: Product
Macbook Pro: Product

### Extraction

Extract all the people and places from the following passage: Basquiat was born on December 22, 1960, in Park Slope, Brooklyn, New York City, the second of four children to Matilde Basquiat (née Andrades, 1934–2008) and Gérard Basquiat (1930–2013). He had an older brother, Max, who died shortly before his birth, and two younger sisters, Lisane (b. 1964) and Jeanine (b. 1967). His father was born in Port-au-Prince, Haiti and his mother was born in Brooklyn to Puerto Rican parents. He was raised Catholic.

People: Basquiat, Matilde Basquiat, Gérard Basquiat, Max Basquiat, Lisane Basquiat, Jeanine Basquiat.
Place: Park Slope, Brooklyn, New York City.
### Generation

Write a press release declaring the completion of Atlantis II, a facility designed for long-term human habitation at the bottom of the ocean. Be sure to mention some of its advanced technological features.

Atlantis II, a new facility designed for long-term human habitation at the bottom of the ocean, is now complete! With advanced technological features, it will be a safe and comfortable living environment for generations to come. From self-sustaining ecosystems to advanced robotics, the facility offers a range of opportunities for exploration, discovery and exploration. Atlantis II will be a home for a wide range of species and will serve as a hub for ocean research, exploration and exploration, bringing together a diverse team of scientists, engineers, biologists, and divers to uncover the mysteries of the deep.

### Summarization

Give me a one line summary of this: Coffee is one of the most widely consumed beverages in the world. It has primarily consumed due to its stimulant effect and unique taste since the ancient times. Afterwards, its consumption has been historically associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer and thus it has also consumed due to health benefits. It contains many bioactive compounds such as caffeine, chlorogenic acids and diterpenoid alcohols which have so far been associated with many potential health benefits. For example, caffeine reduces risk of developing neurodegenerative disease and chlorogenic acids (CGA) and diterpene alcohols have many health benefits such as antioxidant and chemo-preventive. Coffee also have harmful effects. For example, diterpenoid alcohols increases serum homocysteine and cholesterol levels and thus it has adverse effects on cardiovascular system. Overall, the study that supports the health benefits of coffee is increasing. But, it is thought-provoking that the association with health benefits of coffee consumption and frequency at different levels in each study. For this reason, we aimed to examine the health effect of the coffee and how much consumption is to investigate whether it meets the claimed health benefits.

Coffee is one of the most widely consumed beverages in the world and has historically been associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer. However, there are mixed results in the literature and further research is required to determine the health benefits of coffee consumption.

### Rephrasing

Give me a different way to say the following to a 4 year old: "Son, this is the last time I'm going to tell you. Go to bed!"

"Goodnight, Son. Sleep tight and I'll see you in the morning!"

### Closed QA

I'm going to give you a passage from the book Neuromancer and I'd like you to answer the following question: What is the tool that allows Case to access the matrix? Case was twenty-four. At twenty-two, he'd been a cowboy, a rustler, one of the best in the Sprawl. He'd been trained by the best, by McCoy Pauley and Bobby Quine, legends in the biz. He'd operated on an almost permanent adrenaline high, a byproduct of youth and proficiency, jacked into a custom cyberspace deck that projected his disembodied consciousness into the consensual hallucination that was the matrix.

The tool that allows Case to access the matrix is his cyberspace deck, which allows him to manipulate and interact with the virtual world around him.
It enables him to access information from various sources and interact with the people in the matrix, and it is one of the main weapons of Case's cyberworld.

### Open QA

What is the literary style of Jorge Luis Borges?

Jorge Luis Borges's literary style is often described as magical realism. He often creates stories that combine fictional characters with real life elements and use the juxtaposition of these elements to create a sense of surrealism. He often uses allegory and symbolism to explore complex philosophical ideas and explore questions of human perception and understanding. His writing is characterized by a playful and inventive use of language, with characters often being metaphors or symbols for larger, more abstract concepts.

Usage
-----

### Load Model & Tokenizer

### Generate Text

### Benchmark Metrics

Below you'll find various models' benchmark performance on the EleutherAI LLM Evaluation Harness; model results are sorted by geometric mean to produce an intelligible ordering. These results demonstrate that Dolly is not state of the art, as we describe above, but also point to an interesting observation. Namely, Dolly is only marginally better than (and in the case of Winogrande worse than) its base model, GPT-J-6B. Despite this fact, the qualitative behavior of Dolly is materially different from the underlying model (try it yourself on Hugging Face!), which points to meaningful limitations of the existing evaluation benchmarks for measuring the quality of generative models.

Happy Hacking!
==============
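The Load Model & Tokenizer and Generate Text sections above lost their code blocks in extraction. A minimal reconstruction, assuming the standard `transformers` causal-LM API for a GPT-J-style checkpoint; the prompt and generation settings are illustrative, and the original `databricks/dolly-v1-6b` id is used for concreteness (substitute this repo's id to pick up the bitsandbytes 8-bit weights instead):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The original repo id is used for concreteness; point this at the quantized
# repo to load the bnb 8-bit weights instead.
model_id = "databricks/dolly-v1-6b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.float16
)

prompt = "Explain the difference between nuclear fission and fusion."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```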
[ "### Brainstorming\n\n\nWhat are considerations I should keep in mind when planning a backcountry backpacking trip?\n\n\nWhen planning a backcountry backpacking trip, there are several considerations to keep in mind. First, make sure you are prepared for any potential weather conditions, including extreme temperatures and lightning storms. Second, consider the route you will take and the terrain that you will be traversing. Third, make sure that you have enough food, water and gear to stay safe and comfortable during your backcountry journey. Fourth, plan for potential wildlife encounters and make sure you have the appropriate safety measures in place. Finally, have an emergency plan in place in case of a medical emergency.", "### Classification\n\n\nLabel each of the following as either a scientific concept or a product: Nikon D750, quantum entanglement, CRISPR, and a Macbook Pro.\n\n\nNikon D750: Product\n\n\nQuantum entanglement: Scientific Concept\n\n\nCRISPR: Product\n\n\nMacbook Pro: Product", "### Extraction\n\n\nExtract all the people and places from the following passage:\nBasquiat was born on December 22, 1960, in Park Slope, Brooklyn, New York City, the second of four children to Matilde Basquiat (née Andrades, 1934–2008) and Gérard Basquiat (1930–2013). He had an older brother, Max, who died shortly before his birth, and two younger sisters, Lisane (b. 1964) and Jeanine (b. 1967). His father was born in Port-au-Prince, Haiti and his mother was born in Brooklyn to Puerto Rican parents. He was raised Catholic.\n\n\nPeople: Basquiat, Matilde Basquiat, Gérard Basquiat, Max Basquiat, Lisane Basquiat, Jeanine Basquiat.\n\n\nPlace: Park Slope, Brooklyn, New York City.", "### Generation\n\n\nWrite a press release declaring the completion of Atlantis II, a facility designed for long-term human habitation at the bottom of the ocean. Be sure to mention some of its advanced technological features.\n\n\nAtlantis II, a new facility designed for long-term human habitation at the bottom of the ocean, is now complete! With advanced technological features, it will be a safe and comfortable living environment for generations to come. From self-sustaining ecosystems to advanced robotics, the facility offers a range of opportunities for exploration, discovery and exploration.\n\n\nAtlantis II will be a home for a wide range of species and will serve as a hub for ocean research, exploration and exploration, bringing together a diverse team of scientists, engineers, biologists, and divers to uncover the mysteries of the deep.", "### Summarization\n\n\nGive me a one line summary of this:\nCoffee is one of the most widely consumed beverages in the world. It has primarily consumed due to its stimulant effect and unique taste since the ancient times. Afterwards, its consumption has been historically associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer and thus it has also consumed due to health benefits. It contains many bioactive compounds such as caffeine, chlorogenic acids and diterpenoid alcohols which have so far been associated with many potential health benefits. For example, caffeine reduces risk of developing neurodegenerative disease and chlorogenic acids (CGA) and diterpene alcohols have many health benefits such as antioxidant and chemo-preventive. Coffee also have harmful effects. 
For example, diterpenoid alcohols increases serum homocysteine and cholesterol levels and thus it has adverse effects on cardiovascular system. Overall, the study that supports the health benefits of coffee is increasing. But, it is thought-provoking that the association with health benefits of coffee consumption and frequency at different levels in each study. For this reason, we aimed to examine the health effect of the coffee and how much consumption is to investigate whether it meets the claimed health benefits.\n\n\nCoffee is one of the most widely consumed beverages in the world and has historically been associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer. However, there are mixed results in the literature and further research is required to determine the health benefits of coffee consumption.", "### Rephrasing\n\n\nGive me a different way to say the following to a 4 year old: \"Son, this is the last time I'm going to tell you. Go to bed!\"\n\n\n\"Goodnight, Son. Sleep tight and I'll see you in the morning!\"", "### Closed QA\n\n\nI'm going to give you a passage from the book Neuromancer and I'd like you to answer the following question: What is the tool that allows Case to access the matrix?\nCase was twenty-four. At twenty-two, he'd been a cowboy, a rustler, one of the best in the Sprawl. He'd been trained by the best, by McCoy Pauley and Bobby Quine, legends in the biz. He'd operated on an almost permanent adrenaline high, a byproduct of youth and proficiency, jacked into a custom cyberspace deck that projected his disembodied consciousness into the consensual hallucination that was the matrix.\n\n\nThe tool that allows Case to access the matrix is his cyberspace deck, which allows him to manipulate and interact with the virtual world around him. It enables him to access information from various sources and interact with the people in the matrix, and it is one of the main weapons of Case's cyberworld.", "### Open QA\n\n\nWhat is the literary style of Jorge Luis Borges?\n\n\nJorge Luis Borges's literary style is often described as magical realism. He often creates stories that combine fictional characters with real life elements and use the juxtaposition of these elements to create a sense of surrealism. He often uses allegory and symbolism to explore complex philosophical ideas and explore questions of human perception and understanding. His writing is characterized by a playful and inventive use of language, with characters often being metaphors or symbols for larger, more abstract concepts.\n\n\nUsage\n-----", "### Load Model & Tokenizer", "### Generate Text", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness\nmodel results are sorted by geometric mean to produce an intelligible ordering. These results demonstrate that Dolly is not state of the art, as we describe\nabove, but also point to an interesting observation. Namely, Dolly is only marginally better (and in the case of Winogrande worse) and its basemodel GPT-J-6B. \n\nDespite this fact, the qualitative behavior of Dolly is materially different from the underlying model (try it yourself on Hugging Face!),\nwhich points to meaningful limitations of the existing evaluation benchmarks for measuring the quality of generative models.\n\n\n\nHappy Hacking!\n==============" ]
[ "TAGS\n#transformers #safetensors #gptj #text-generation #arxiv-2212.10560 #arxiv-2104.09864 #arxiv-2203.02155 #autotrain_compatible #endpoints_compatible #8-bit #region-us \n", "### Brainstorming\n\n\nWhat are considerations I should keep in mind when planning a backcountry backpacking trip?\n\n\nWhen planning a backcountry backpacking trip, there are several considerations to keep in mind. First, make sure you are prepared for any potential weather conditions, including extreme temperatures and lightning storms. Second, consider the route you will take and the terrain that you will be traversing. Third, make sure that you have enough food, water and gear to stay safe and comfortable during your backcountry journey. Fourth, plan for potential wildlife encounters and make sure you have the appropriate safety measures in place. Finally, have an emergency plan in place in case of a medical emergency.", "### Classification\n\n\nLabel each of the following as either a scientific concept or a product: Nikon D750, quantum entanglement, CRISPR, and a Macbook Pro.\n\n\nNikon D750: Product\n\n\nQuantum entanglement: Scientific Concept\n\n\nCRISPR: Product\n\n\nMacbook Pro: Product", "### Extraction\n\n\nExtract all the people and places from the following passage:\nBasquiat was born on December 22, 1960, in Park Slope, Brooklyn, New York City, the second of four children to Matilde Basquiat (née Andrades, 1934–2008) and Gérard Basquiat (1930–2013). He had an older brother, Max, who died shortly before his birth, and two younger sisters, Lisane (b. 1964) and Jeanine (b. 1967). His father was born in Port-au-Prince, Haiti and his mother was born in Brooklyn to Puerto Rican parents. He was raised Catholic.\n\n\nPeople: Basquiat, Matilde Basquiat, Gérard Basquiat, Max Basquiat, Lisane Basquiat, Jeanine Basquiat.\n\n\nPlace: Park Slope, Brooklyn, New York City.", "### Generation\n\n\nWrite a press release declaring the completion of Atlantis II, a facility designed for long-term human habitation at the bottom of the ocean. Be sure to mention some of its advanced technological features.\n\n\nAtlantis II, a new facility designed for long-term human habitation at the bottom of the ocean, is now complete! With advanced technological features, it will be a safe and comfortable living environment for generations to come. From self-sustaining ecosystems to advanced robotics, the facility offers a range of opportunities for exploration, discovery and exploration.\n\n\nAtlantis II will be a home for a wide range of species and will serve as a hub for ocean research, exploration and exploration, bringing together a diverse team of scientists, engineers, biologists, and divers to uncover the mysteries of the deep.", "### Summarization\n\n\nGive me a one line summary of this:\nCoffee is one of the most widely consumed beverages in the world. It has primarily consumed due to its stimulant effect and unique taste since the ancient times. Afterwards, its consumption has been historically associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer and thus it has also consumed due to health benefits. It contains many bioactive compounds such as caffeine, chlorogenic acids and diterpenoid alcohols which have so far been associated with many potential health benefits. 
For example, caffeine reduces risk of developing neurodegenerative disease and chlorogenic acids (CGA) and diterpene alcohols have many health benefits such as antioxidant and chemo-preventive. Coffee also have harmful effects. For example, diterpenoid alcohols increases serum homocysteine and cholesterol levels and thus it has adverse effects on cardiovascular system. Overall, the study that supports the health benefits of coffee is increasing. But, it is thought-provoking that the association with health benefits of coffee consumption and frequency at different levels in each study. For this reason, we aimed to examine the health effect of the coffee and how much consumption is to investigate whether it meets the claimed health benefits.\n\n\nCoffee is one of the most widely consumed beverages in the world and has historically been associated with a lower risk of some diseases such as type 2 diabetes mellitus, obesity, cardiovascular disease and some type of cancer. However, there are mixed results in the literature and further research is required to determine the health benefits of coffee consumption.", "### Rephrasing\n\n\nGive me a different way to say the following to a 4 year old: \"Son, this is the last time I'm going to tell you. Go to bed!\"\n\n\n\"Goodnight, Son. Sleep tight and I'll see you in the morning!\"", "### Closed QA\n\n\nI'm going to give you a passage from the book Neuromancer and I'd like you to answer the following question: What is the tool that allows Case to access the matrix?\nCase was twenty-four. At twenty-two, he'd been a cowboy, a rustler, one of the best in the Sprawl. He'd been trained by the best, by McCoy Pauley and Bobby Quine, legends in the biz. He'd operated on an almost permanent adrenaline high, a byproduct of youth and proficiency, jacked into a custom cyberspace deck that projected his disembodied consciousness into the consensual hallucination that was the matrix.\n\n\nThe tool that allows Case to access the matrix is his cyberspace deck, which allows him to manipulate and interact with the virtual world around him. It enables him to access information from various sources and interact with the people in the matrix, and it is one of the main weapons of Case's cyberworld.", "### Open QA\n\n\nWhat is the literary style of Jorge Luis Borges?\n\n\nJorge Luis Borges's literary style is often described as magical realism. He often creates stories that combine fictional characters with real life elements and use the juxtaposition of these elements to create a sense of surrealism. He often uses allegory and symbolism to explore complex philosophical ideas and explore questions of human perception and understanding. His writing is characterized by a playful and inventive use of language, with characters often being metaphors or symbols for larger, more abstract concepts.\n\n\nUsage\n-----", "### Load Model & Tokenizer", "### Generate Text", "### Benchmark Metrics\n\n\nBelow you'll find various models benchmark performance on the EleutherAI LLM Evaluation Harness\nmodel results are sorted by geometric mean to produce an intelligible ordering. These results demonstrate that Dolly is not state of the art, as we describe\nabove, but also point to an interesting observation. Namely, Dolly is only marginally better (and in the case of Winogrande worse) and its basemodel GPT-J-6B. 
\n\nDespite this fact, the qualitative behavior of Dolly is materially different from the underlying model (try it yourself on Hugging Face!),\nwhich points to meaningful limitations of the existing evaluation benchmarks for measuring the quality of generative models.\n\n\n\nHappy Hacking!\n==============" ]
reinforcement-learning
stable-baselines3
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga seachus -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga seachus -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga seachus ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 100000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
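Beyond the RL Zoo scripts, the checkpoint can also be loaded directly with SB3. A minimal sketch (not part of the original card) using the `huggingface_sb3` helper; the `.zip` filename follows the usual RL Zoo naming convention and is an assumption here:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the SB3 checkpoint from the Hub; the filename is assumed to follow
# the RL Zoo convention of "<algo>-<env>.zip".
checkpoint = load_from_hub(
    repo_id="seachus/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```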
{"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "257.00 +/- 38.81", "name": "mean_reward", "verified": false}]}]}]}
seachus/dqn-SpaceInvadersNoFrameskip-v4
null
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-14T21:25:14+00:00
[]
[]
TAGS #stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# DQN Agent playing SpaceInvadersNoFrameskip-v4 This is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4 using the stable-baselines3 library and the RL Zoo. The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: URL SB3: URL SB3 Contrib: URL Install the RL Zoo (with SB3 and SB3-Contrib): If you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do: ## Training (with the RL Zoo) ## Hyperparameters # Environment Arguments
[ "# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.", "## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:", "## Training (with the RL Zoo)", "## Hyperparameters", "# Environment Arguments" ]
[ "TAGS\n#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.", "## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:", "## Training (with the RL Zoo)", "## Hyperparameters", "# Environment Arguments" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DS-6.7B-schema_0 This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1549 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0913 | 0.19 | 50 | 0.1589 | | 0.069 | 0.38 | 100 | 0.1580 | | 0.0667 | 0.57 | 150 | 0.1571 | | 0.063 | 0.76 | 200 | 0.1555 | | 0.1838 | 0.95 | 250 | 0.1549 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
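The card leaves usage unspecified; here is a minimal loading sketch, assuming this repo hosts a standard PEFT adapter for the listed base model (dtype and device placement are illustrative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-6.7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the fine-tuned adapter weights from this repo on top of the base model.
model = PeftModel.from_pretrained(base, "jdeklerk10/DS-6.7B-schema_0")
```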
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "deepseek-ai/deepseek-coder-6.7b-instruct", "model-index": [{"name": "DS-6.7B-schema_0", "results": []}]}
jdeklerk10/DS-6.7B-schema_0
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "license:other", "region:us" ]
null
2024-04-14T21:29:25+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #base_model-deepseek-ai/deepseek-coder-6.7b-instruct #license-other #region-us
DS-6.7B-schema\_0 ================= This model is a fine-tuned version of deepseek-ai/deepseek-coder-6.7b-instruct on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1549 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.01 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.0.dev0 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-deepseek-ai/deepseek-coder-6.7b-instruct #license-other #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# load_from_hub downloads the pickled model from the Hugging Face Hub
# (see the helper definition below).
model = load_from_hub(repo_id="APLunch/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
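The snippet above assumes a `load_from_hub` helper. A plausible definition, following the Hugging Face Deep RL course convention of storing the model as a pickled dict (the exact contents of `q-learning.pkl` are an assumption):

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict from the Hub and deserialize it.
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```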
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
APLunch/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-14T21:32:02+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing1 FrozenLake-v1 This is a trained model of a Q-Learning agent playing FrozenLake-v1 . ## Usage
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
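The auto-generated card leaves the quick-start empty. A minimal sketch, assuming the repo loads with the standard `transformers` causal-LM API (the tags indicate a StableLM text-generation model; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heyllm234/sc13"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```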
{"library_name": "transformers", "tags": []}
heyllm234/sc13
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:33:28+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistralv1_lora_r8_25e5_e3 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
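No usage snippet is given; here is a minimal sketch, assuming a standard LoRA adapter on the listed base model. `merge_and_unload()` folds the adapter into the base, which is presumably how the companion `*_merged` repo was produced:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "fangzhaoz/mistralv1_lora_r8_25e5_e3")

# Folding the LoRA weights into the base yields a plain transformers model.
merged = model.merge_and_unload()
```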
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistralv1_lora_r8_25e5_e3", "results": []}]}
fangzhaoz/mistralv1_lora_r8_25e5_e3
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-14T21:34:02+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
# mistralv1_lora_r8_25e5_e3 This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# mistralv1_lora_r8_25e5_e3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2.5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n", "# mistralv1_lora_r8_25e5_e3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2.5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
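Since the merged repo should be a plain `transformers` checkpoint, generation ought to work with the standard pipeline API; a minimal sketch (settings illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="fangzhaoz/mistralv1_lora_r8_25e5_e3_merged",
    device_map="auto",
)
print(generator("The quick brown fox", max_new_tokens=48)[0]["generated_text"])
```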
{"library_name": "transformers", "tags": []}
fangzhaoz/mistralv1_lora_r8_25e5_e3_merged
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T21:34:18+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
EdBerg/gpt-neo-20B
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:35:00+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
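The "How to Get Started" section of the card above is left as [More Information Needed]. As a minimal, hedged sketch only — the repo id is taken from this record, its tags mark it as a llama text-generation/conversational checkpoint, and none of the code comes from the authors — loading it with transformers might look like:

```python
# Hypothetical quick-start sketch; the card itself provides no code.
# Assumes "OwOOwO/dumbo-krillin" is a standard llama causal-LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OwOOwO/dumbo-krillin"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The "conversational" tag suggests a chat template may be defined;
# if the tokenizer ships without one, apply_chat_template will raise.
messages = [{"role": "user", "content": "Say hello in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```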
{"library_name": "transformers", "tags": []}
OwOOwO/dumbo-krillin
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T21:37:11+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

sparse_mistral_7b_refined_web_50p_2024-04-13 - bnb 4bits

- Model creator: https://huggingface.co/thrunlab/
- Original model: https://huggingface.co/thrunlab/sparse_mistral_7b_refined_web_50p_2024-04-13/

Original model description:
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: sparse_mistral_7b_refined_web_50p_2024-04-13
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sparse_mistral_7b_refined_web_50p_2024-04-13

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1985

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 0
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2350

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3391 | 0.01 | 25 | 2.4196 |
| 2.2711 | 0.02 | 50 | 2.3577 |
| 2.3054 | 0.02 | 75 | 2.3158 |
| 2.2795 | 0.03 | 100 | 2.2966 |
| 2.3175 | 0.04 | 125 | 2.2846 |
| 2.2388 | 0.05 | 150 | 2.2766 |
| 2.1679 | 0.06 | 175 | 2.2705 |
| 2.2996 | 0.06 | 200 | 2.2678 |
| 2.2788 | 0.07 | 225 | 2.2647 |
| 2.2448 | 0.08 | 250 | 2.2637 |
| 2.1837 | 0.09 | 275 | 2.2624 |
| 2.2089 | 0.1 | 300 | 2.2621 |
| 2.2686 | 0.1 | 325 | 2.2601 |
| 2.2254 | 0.11 | 350 | 2.2593 |
| 2.162 | 0.12 | 375 | 2.2590 |
| 2.2687 | 0.13 | 400 | 2.2563 |
| 2.2595 | 0.14 | 425 | 2.2571 |
| 2.186 | 0.14 | 450 | 2.2564 |
| 2.2689 | 0.15 | 475 | 2.2580 |
| 2.2472 | 0.16 | 500 | 2.2554 |
| 2.2005 | 0.17 | 525 | 2.2553 |
| 2.1983 | 0.18 | 550 | 2.2552 |
| 2.2388 | 0.18 | 575 | 2.2547 |
| 2.1443 | 0.19 | 600 | 2.2555 |
| 2.2198 | 0.2 | 625 | 2.2534 |
| 2.3008 | 0.21 | 650 | 2.2536 |
| 2.179 | 0.22 | 675 | 2.2521 |
| 2.2069 | 0.22 | 700 | 2.2531 |
| 2.1819 | 0.23 | 725 | 2.2526 |
| 2.1218 | 0.24 | 750 | 2.2536 |
| 2.1845 | 0.25 | 775 | 2.2515 |
| 2.2167 | 0.26 | 800 | 2.2510 |
| 2.2252 | 0.26 | 825 | 2.2520 |
| 2.1664 | 0.27 | 850 | 2.2519 |
| 2.1853 | 0.28 | 875 | 2.2530 |
| 2.1499 | 0.29 | 900 | 2.2513 |
| 2.2763 | 0.3 | 925 | 2.2517 |
| 2.2528 | 0.3 | 950 | 2.2518 |
| 2.2505 | 0.31 | 975 | 2.2500 |
| 2.1683 | 0.32 | 1000 | 2.2502 |
| 2.2177 | 0.33 | 1025 | 2.2501 |
| 2.238 | 0.34 | 1050 | 2.2516 |
| 2.193 | 0.34 | 1075 | 2.2507 |
| 2.2025 | 0.35 | 1100 | 2.2502 |
| 2.0944 | 0.36 | 1125 | 2.2512 |
| 2.2272 | 0.37 | 1150 | 2.2508 |
| 2.2264 | 0.38 | 1175 | 2.2500 |
| 2.1837 | 0.38 | 1200 | 2.2507 |
| 2.1444 | 0.39 | 1225 | 2.2489 |
| 2.2464 | 0.4 | 1250 | 2.2499 |
| 2.1388 | 0.41 | 1275 | 2.2508 |
| 2.193 | 0.42 | 1300 | 2.2492 |
| 2.2376 | 0.42 | 1325 | 2.2506 |
| 2.2212 | 0.43 | 1350 | 2.2478 |
| 2.2002 | 0.44 | 1375 | 2.2488 |
| 2.2729 | 0.45 | 1400 | 2.2484 |
| 2.2329 | 0.46 | 1425 | 2.2473 |
| 2.1919 | 0.46 | 1450 | 2.2481 |
| 2.2102 | 0.47 | 1475 | 2.2475 |
| 2.1466 | 0.48 | 1500 | 2.2473 |
| 2.1819 | 0.49 | 1525 | 2.2478 |
| 2.2558 | 0.5 | 1550 | 2.2468 |
| 2.2137 | 0.5 | 1575 | 2.2463 |
| 2.2288 | 0.51 | 1600 | 2.2466 |
| 2.1479 | 0.52 | 1625 | 2.2468 |
| 2.1726 | 0.53 | 1650 | 2.2471 |
| 2.1805 | 0.54 | 1675 | 2.2454 |
| 2.1505 | 0.54 | 1700 | 2.2470 |
| 2.1337 | 0.55 | 1725 | 2.2465 |
| 2.2413 | 0.56 | 1750 | 2.2460 |
| 2.152 | 0.57 | 1775 | 2.2478 |
| 2.2669 | 0.58 | 1800 | 2.2471 |
| 2.2925 | 0.58 | 1825 | 2.2465 |
| 2.222 | 0.59 | 1850 | 2.2457 |
| 2.1308 | 0.6 | 1875 | 2.2466 |
| 2.201 | 0.61 | 1900 | 2.2456 |
| 2.2247 | 0.62 | 1925 | 2.2460 |
| 2.2426 | 0.62 | 1950 | 2.2463 |
| 2.2312 | 0.63 | 1975 | 2.2465 |
| 2.2679 | 0.64 | 2000 | 2.2464 |
| 2.1928 | 0.65 | 2025 | 2.2463 |
| 2.2087 | 0.66 | 2050 | 2.2455 |
| 2.1792 | 0.66 | 2075 | 2.2470 |
| 2.252 | 0.67 | 2100 | 2.2468 |
| 2.2018 | 0.68 | 2125 | 2.2456 |
| 2.2006 | 0.69 | 2150 | 2.2451 |
| 2.2076 | 0.7 | 2175 | 2.2449 |
| 2.2436 | 0.7 | 2200 | 2.2460 |
| 2.2156 | 0.71 | 2225 | 2.2477 |
| 2.1348 | 0.72 | 2250 | 2.2455 |
| 2.1338 | 0.73 | 2275 | 2.2450 |
| 2.2147 | 0.74 | 2300 | 2.2455 |
| 2.2766 | 0.74 | 2325 | 2.2444 |
| 2.204 | 0.75 | 2350 | 2.2458 |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
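The card above documents a bitsandbytes 4-bit quantization of the original checkpoint but gives no loading code. As a hedged sketch only — the repo ids come from this record, but the exact invocation is an assumption, not the quantizer's actual script — on-the-fly 4-bit quantization with transformers looks like this:

```python
# Illustrative only: bnb 4-bit quantization of the original checkpoint at load time.
# Repos that already ship pre-quantized bnb weights (like this one) usually load
# with a plain from_pretrained, since the quantization config is stored with them.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # compute dtype is our assumption
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # original model named in the card
    quantization_config=bnb_config,
    device_map="auto",
)
```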
{}
RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-4bits
null
[ "safetensors", "region:us" ]
null
2024-04-14T21:37:27+00:00
[]
[]
TAGS #safetensors #region-us
Quantization made by Richard Erkhov.

Github
Discord
Request more models

sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13 - bnb 4bits

* Model creator: URL
* Original model: URL

Original model description:
---------------------------

license: apache-2.0
base\_model: mistralai/Mistral-7B-v0.1
tags:

* generated\_from\_trainer

model-index:

* name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13
results: []

---

sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13
==================================================

This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.
It achieves the following results on the evaluation set:

* Loss: 2.1985

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 1e-05
* train\_batch\_size: 1
* eval\_batch\_size: 4
* seed: 0
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 32
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 2350

### Training results

### Framework versions

* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.15.0
* Tokenizers 0.15.0
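The hyperparameters listed above map naturally onto transformers `TrainingArguments`. A hedged reconstruction under stated assumptions — this is not the authors' training script, and the output path is hypothetical:

```python
# Hedged reconstruction of the reported configuration.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="sparse_mistral_7b_refined_web_50p",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=1,   # "train_batch_size: 1"
    per_device_eval_batch_size=4,    # "eval_batch_size: 4"
    gradient_accumulation_steps=8,
    lr_scheduler_type="linear",
    max_steps=2350,                  # "training_steps: 2350"
    seed=0,
)
# Across 4 GPUs: 1 * 8 * 4 = 32, the total_train_batch_size reported above.
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the transformers defaults.
```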
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 4\n* seed: 0\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2350", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
[ "TAGS\n#safetensors #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 4\n* seed: 0\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2350", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lingchensanwen/llama2-chat-generation-best-balanced
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:38:05+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

sparse_mistral_7b_refined_web_50p_2024-04-13 - bnb 8bits

- Model creator: https://huggingface.co/thrunlab/
- Original model: https://huggingface.co/thrunlab/sparse_mistral_7b_refined_web_50p_2024-04-13/

Original model description:
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: sparse_mistral_7b_refined_web_50p_2024-04-13
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sparse_mistral_7b_refined_web_50p_2024-04-13

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1985

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 0
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2350

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3391 | 0.01 | 25 | 2.4196 |
| 2.2711 | 0.02 | 50 | 2.3577 |
| 2.3054 | 0.02 | 75 | 2.3158 |
| 2.2795 | 0.03 | 100 | 2.2966 |
| 2.3175 | 0.04 | 125 | 2.2846 |
| 2.2388 | 0.05 | 150 | 2.2766 |
| 2.1679 | 0.06 | 175 | 2.2705 |
| 2.2996 | 0.06 | 200 | 2.2678 |
| 2.2788 | 0.07 | 225 | 2.2647 |
| 2.2448 | 0.08 | 250 | 2.2637 |
| 2.1837 | 0.09 | 275 | 2.2624 |
| 2.2089 | 0.1 | 300 | 2.2621 |
| 2.2686 | 0.1 | 325 | 2.2601 |
| 2.2254 | 0.11 | 350 | 2.2593 |
| 2.162 | 0.12 | 375 | 2.2590 |
| 2.2687 | 0.13 | 400 | 2.2563 |
| 2.2595 | 0.14 | 425 | 2.2571 |
| 2.186 | 0.14 | 450 | 2.2564 |
| 2.2689 | 0.15 | 475 | 2.2580 |
| 2.2472 | 0.16 | 500 | 2.2554 |
| 2.2005 | 0.17 | 525 | 2.2553 |
| 2.1983 | 0.18 | 550 | 2.2552 |
| 2.2388 | 0.18 | 575 | 2.2547 |
| 2.1443 | 0.19 | 600 | 2.2555 |
| 2.2198 | 0.2 | 625 | 2.2534 |
| 2.3008 | 0.21 | 650 | 2.2536 |
| 2.179 | 0.22 | 675 | 2.2521 |
| 2.2069 | 0.22 | 700 | 2.2531 |
| 2.1819 | 0.23 | 725 | 2.2526 |
| 2.1218 | 0.24 | 750 | 2.2536 |
| 2.1845 | 0.25 | 775 | 2.2515 |
| 2.2167 | 0.26 | 800 | 2.2510 |
| 2.2252 | 0.26 | 825 | 2.2520 |
| 2.1664 | 0.27 | 850 | 2.2519 |
| 2.1853 | 0.28 | 875 | 2.2530 |
| 2.1499 | 0.29 | 900 | 2.2513 |
| 2.2763 | 0.3 | 925 | 2.2517 |
| 2.2528 | 0.3 | 950 | 2.2518 |
| 2.2505 | 0.31 | 975 | 2.2500 |
| 2.1683 | 0.32 | 1000 | 2.2502 |
| 2.2177 | 0.33 | 1025 | 2.2501 |
| 2.238 | 0.34 | 1050 | 2.2516 |
| 2.193 | 0.34 | 1075 | 2.2507 |
| 2.2025 | 0.35 | 1100 | 2.2502 |
| 2.0944 | 0.36 | 1125 | 2.2512 |
| 2.2272 | 0.37 | 1150 | 2.2508 |
| 2.2264 | 0.38 | 1175 | 2.2500 |
| 2.1837 | 0.38 | 1200 | 2.2507 |
| 2.1444 | 0.39 | 1225 | 2.2489 |
| 2.2464 | 0.4 | 1250 | 2.2499 |
| 2.1388 | 0.41 | 1275 | 2.2508 |
| 2.193 | 0.42 | 1300 | 2.2492 |
| 2.2376 | 0.42 | 1325 | 2.2506 |
| 2.2212 | 0.43 | 1350 | 2.2478 |
| 2.2002 | 0.44 | 1375 | 2.2488 |
| 2.2729 | 0.45 | 1400 | 2.2484 |
| 2.2329 | 0.46 | 1425 | 2.2473 |
| 2.1919 | 0.46 | 1450 | 2.2481 |
| 2.2102 | 0.47 | 1475 | 2.2475 |
| 2.1466 | 0.48 | 1500 | 2.2473 |
| 2.1819 | 0.49 | 1525 | 2.2478 |
| 2.2558 | 0.5 | 1550 | 2.2468 |
| 2.2137 | 0.5 | 1575 | 2.2463 |
| 2.2288 | 0.51 | 1600 | 2.2466 |
| 2.1479 | 0.52 | 1625 | 2.2468 |
| 2.1726 | 0.53 | 1650 | 2.2471 |
| 2.1805 | 0.54 | 1675 | 2.2454 |
| 2.1505 | 0.54 | 1700 | 2.2470 |
| 2.1337 | 0.55 | 1725 | 2.2465 |
| 2.2413 | 0.56 | 1750 | 2.2460 |
| 2.152 | 0.57 | 1775 | 2.2478 |
| 2.2669 | 0.58 | 1800 | 2.2471 |
| 2.2925 | 0.58 | 1825 | 2.2465 |
| 2.222 | 0.59 | 1850 | 2.2457 |
| 2.1308 | 0.6 | 1875 | 2.2466 |
| 2.201 | 0.61 | 1900 | 2.2456 |
| 2.2247 | 0.62 | 1925 | 2.2460 |
| 2.2426 | 0.62 | 1950 | 2.2463 |
| 2.2312 | 0.63 | 1975 | 2.2465 |
| 2.2679 | 0.64 | 2000 | 2.2464 |
| 2.1928 | 0.65 | 2025 | 2.2463 |
| 2.2087 | 0.66 | 2050 | 2.2455 |
| 2.1792 | 0.66 | 2075 | 2.2470 |
| 2.252 | 0.67 | 2100 | 2.2468 |
| 2.2018 | 0.68 | 2125 | 2.2456 |
| 2.2006 | 0.69 | 2150 | 2.2451 |
| 2.2076 | 0.7 | 2175 | 2.2449 |
| 2.2436 | 0.7 | 2200 | 2.2460 |
| 2.2156 | 0.71 | 2225 | 2.2477 |
| 2.1348 | 0.72 | 2250 | 2.2455 |
| 2.1338 | 0.73 | 2275 | 2.2450 |
| 2.2147 | 0.74 | 2300 | 2.2455 |
| 2.2766 | 0.74 | 2325 | 2.2444 |
| 2.204 | 0.75 | 2350 | 2.2458 |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
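This variant is the same checkpoint quantized to 8 bits. Relative to the 4-bit sketch shown earlier for the sibling repo, the 8-bit analogue differs only in the flag — again an illustrative assumption, not the quantizer's actual command:

```python
# Illustrative 8-bit analogue of the earlier 4-bit sketch.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # original model named in the card
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```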
{}
RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-8bits
null
[ "safetensors", "region:us" ]
null
2024-04-14T21:38:18+00:00
[]
[]
TAGS #safetensors #region-us
Quantization made by Richard Erkhov.

Github
Discord
Request more models

sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13 - bnb 8bits

* Model creator: URL
* Original model: URL

Original model description:
---------------------------

license: apache-2.0
base\_model: mistralai/Mistral-7B-v0.1
tags:

* generated\_from\_trainer

model-index:

* name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13
results: []

---

sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13
==================================================

This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.
It achieves the following results on the evaluation set:

* Loss: 2.1985

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 1e-05
* train\_batch\_size: 1
* eval\_batch\_size: 4
* seed: 0
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 32
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 2350

### Training results

### Framework versions

* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.15.0
* Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 4\n* seed: 0\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2350", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
[ "TAGS\n#safetensors #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 4\n* seed: 0\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2350", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) sparse_mistral_7b_refined_web_50p_2024-04-13 - GGUF - Model creator: https://huggingface.co/thrunlab/ - Original model: https://huggingface.co/thrunlab/sparse_mistral_7b_refined_web_50p_2024-04-13/ | Name | Quant method | Size | | ---- | ---- | ---- | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q2_K.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q2_K.gguf) | Q2_K | 2.53GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.IQ3_S.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.IQ3_S.gguf) | IQ3_S | 2.96GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.IQ3_M.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.IQ3_M.gguf) | IQ3_M | 3.06GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q3_K.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q3_K.gguf) | Q3_K | 3.28GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q4_0.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q4_0.gguf) | Q4_0 | 3.83GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | 
[sparse_mistral_7b_refined_web_50p_2024-04-13.Q4_K.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q4_K.gguf) | Q4_K | 4.07GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q4_1.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q4_1.gguf) | Q4_1 | 4.24GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q5_0.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q5_0.gguf) | Q5_0 | 4.65GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q5_K.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q5_K.gguf) | Q5_K | 4.78GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q5_1.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q5_1.gguf) | Q5_1 | 5.07GB | | [sparse_mistral_7b_refined_web_50p_2024-04-13.Q6_K.gguf](https://huggingface.co/RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf/blob/main/sparse_mistral_7b_refined_web_50p_2024-04-13.Q6_K.gguf) | Q6_K | 5.53GB | Original model description: --- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - generated_from_trainer model-index: - name: sparse_mistral_7b_refined_web_50p_2024-04-13 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sparse_mistral_7b_refined_web_50p_2024-04-13 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.1985 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 4 - seed: 0 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2350 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3391 | 0.01 | 25 | 2.4196 | | 2.2711 | 0.02 | 50 | 2.3577 | | 2.3054 | 0.02 | 75 | 2.3158 | | 2.2795 | 0.03 | 100 | 2.2966 | | 2.3175 | 0.04 | 125 | 2.2846 | | 2.2388 | 0.05 | 150 | 2.2766 | | 2.1679 | 0.06 | 175 | 2.2705 | | 2.2996 | 0.06 | 200 | 2.2678 | | 2.2788 | 0.07 | 225 | 2.2647 | | 2.2448 | 0.08 | 250 | 2.2637 | | 2.1837 | 0.09 | 275 | 2.2624 | | 2.2089 | 0.1 | 300 | 2.2621 | | 2.2686 | 0.1 | 325 | 2.2601 | | 2.2254 | 0.11 | 350 | 2.2593 | | 2.162 | 0.12 | 375 | 2.2590 | | 2.2687 | 0.13 | 400 | 2.2563 | | 2.2595 | 0.14 | 425 | 2.2571 | | 2.186 | 0.14 | 450 | 2.2564 | | 2.2689 | 0.15 | 475 | 2.2580 | | 2.2472 | 0.16 | 500 | 2.2554 | | 2.2005 | 0.17 | 525 | 2.2553 | | 2.1983 | 0.18 | 550 | 2.2552 | | 2.2388 | 0.18 | 575 | 2.2547 | | 2.1443 | 0.19 | 600 | 2.2555 | | 2.2198 | 0.2 | 625 | 2.2534 | | 2.3008 | 0.21 | 650 | 2.2536 | | 2.179 | 0.22 | 675 | 2.2521 | | 2.2069 | 0.22 | 700 | 2.2531 | | 2.1819 | 0.23 | 725 | 2.2526 | | 2.1218 | 0.24 | 750 | 2.2536 | | 2.1845 | 0.25 | 775 | 2.2515 | | 2.2167 | 0.26 | 800 | 2.2510 | | 2.2252 | 0.26 | 825 | 2.2520 | | 2.1664 | 0.27 | 850 | 2.2519 | | 2.1853 | 0.28 | 875 | 2.2530 | | 2.1499 | 0.29 | 900 | 2.2513 | | 2.2763 | 0.3 | 925 | 2.2517 | | 2.2528 | 0.3 | 950 | 2.2518 | | 2.2505 | 0.31 | 975 | 2.2500 | | 2.1683 | 0.32 | 1000 | 2.2502 | | 2.2177 | 0.33 | 1025 | 2.2501 | | 2.238 | 0.34 | 1050 | 2.2516 | | 2.193 | 0.34 | 1075 | 2.2507 | | 2.2025 | 0.35 | 1100 | 2.2502 | | 2.0944 | 0.36 | 1125 | 2.2512 | | 2.2272 | 0.37 | 1150 | 2.2508 | | 2.2264 | 0.38 | 1175 | 2.2500 | | 2.1837 | 0.38 | 1200 | 2.2507 | | 2.1444 | 0.39 | 1225 | 2.2489 | | 2.2464 | 0.4 | 1250 | 2.2499 | | 2.1388 | 0.41 | 1275 | 2.2508 | | 2.193 | 0.42 | 1300 | 2.2492 | | 2.2376 | 0.42 | 1325 | 2.2506 | | 2.2212 | 0.43 | 1350 | 2.2478 | | 2.2002 | 0.44 | 1375 | 2.2488 | | 2.2729 | 0.45 | 1400 | 2.2484 | | 2.2329 | 0.46 | 1425 | 2.2473 | | 2.1919 | 0.46 | 1450 | 2.2481 | | 2.2102 | 0.47 | 1475 | 2.2475 | | 2.1466 | 0.48 | 1500 | 2.2473 | | 2.1819 | 0.49 | 1525 | 2.2478 | | 2.2558 | 0.5 | 1550 | 2.2468 | | 2.2137 | 0.5 | 1575 | 2.2463 | | 2.2288 | 0.51 | 1600 | 2.2466 | | 2.1479 | 0.52 | 1625 | 2.2468 | | 2.1726 | 0.53 | 1650 | 2.2471 | | 2.1805 | 0.54 | 1675 | 2.2454 | | 2.1505 | 0.54 | 1700 | 2.2470 | | 2.1337 | 0.55 | 1725 | 2.2465 | | 2.2413 | 0.56 | 1750 | 2.2460 | | 2.152 | 0.57 | 1775 | 2.2478 | | 2.2669 | 0.58 | 1800 | 2.2471 | | 2.2925 | 0.58 | 1825 | 2.2465 | | 2.222 | 0.59 | 1850 | 2.2457 | | 2.1308 | 0.6 | 1875 | 2.2466 | | 2.201 | 0.61 | 1900 | 2.2456 | | 2.2247 | 0.62 | 1925 | 2.2460 | | 2.2426 | 0.62 | 1950 | 2.2463 | | 2.2312 | 0.63 | 1975 | 2.2465 | | 2.2679 | 0.64 | 2000 | 2.2464 | | 2.1928 | 0.65 | 2025 | 2.2463 | | 2.2087 | 0.66 | 2050 | 2.2455 | | 2.1792 | 0.66 | 2075 | 2.2470 | | 2.252 | 
0.67 | 2100 | 2.2468 | | 2.2018 | 0.68 | 2125 | 2.2456 | | 2.2006 | 0.69 | 2150 | 2.2451 | | 2.2076 | 0.7 | 2175 | 2.2449 | | 2.2436 | 0.7 | 2200 | 2.2460 | | 2.2156 | 0.71 | 2225 | 2.2477 | | 2.1348 | 0.72 | 2250 | 2.2455 | | 2.1338 | 0.73 | 2275 | 2.2450 | | 2.2147 | 0.74 | 2300 | 2.2455 | | 2.2766 | 0.74 | 2325 | 2.2444 | | 2.204 | 0.75 | 2350 | 2.2458 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
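For readers who want to reproduce the recipe above, here is a minimal sketch mapping the listed hyperparameters onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, and the 4-GPU distributed launch is assumed to be handled by the launcher (e.g. `torchrun`), so only per-device values appear here.

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
# output_dir is a placeholder; 4 GPUs x batch 1 x grad-accum 8 = total train batch 32.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="sparse_mistral_7b_refined_web_50p",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=4,
    seed=0,
    gradient_accumulation_steps=8,
    lr_scheduler_type="linear",
    max_steps=2350,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```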
{}
RichardErkhov/thrunlab_-_sparse_mistral_7b_refined_web_50p_2024-04-13-gguf
null
[ "gguf", "region:us" ]
null
2024-04-14T21:39:49+00:00
[]
[]
TAGS #gguf #region-us
Quantization made by Richard Erkhov. Github Discord Request more models sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13 - GGUF * Model creator: URL * Original model: URL Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q2\_K.gguf, Quant method: Q2\_K, Size: 2.53GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 2.81GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 2.96GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 2.95GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 3.06GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q3\_K.gguf, Quant method: Q3\_K, Size: 3.28GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 3.28GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 3.56GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 3.67GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q4\_0.gguf, Quant method: Q4\_0, Size: 3.83GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 3.87GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 3.86GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q4\_K.gguf, Quant method: Q4\_K, Size: 4.07GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 4.07GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q4\_1.gguf, Quant method: Q4\_1, Size: 4.24GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q5\_0.gguf, Quant method: Q5\_0, Size: 4.65GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 4.65GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q5\_K.gguf, Quant method: Q5\_K, Size: 4.78GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 4.78GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q5\_1.gguf, Quant method: Q5\_1, Size: 5.07GB Name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13.Q6\_K.gguf, Quant method: Q6\_K, Size: 5.53GB Original model description: --------------------------- license: apache-2.0 base\_model: mistralai/Mistral-7B-v0.1 tags: * generated\_from\_trainer model-index: * name: sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13 results: [] --- sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-13 ================================================== This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset. 
It achieves the following results on the evaluation set: * Loss: 2.1985 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 1 * eval\_batch\_size: 4 * seed: 0 * distributed\_type: multi-GPU * num\_devices: 4 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 32 * total\_eval\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 2350 ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.15.0 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 4\n* seed: 0\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2350", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
[ "TAGS\n#gguf #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 4\n* seed: 0\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2350", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="izaznov/taxi_3_Q_learning", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
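After loading, a minimal greedy-rollout sketch follows; it assumes the pickled dict exposes a `qtable` array under that key (as in the Hugging Face Deep RL course models, an assumption worth checking) and that `env` follows gymnasium's five-value `step` API. `load_from_hub` in the snippet above is the course's own helper, not a library import.

```python
# Hedged sketch: run the greedy policy from the loaded Q-table.
# Assumptions: model["qtable"] exists; env is a gymnasium Taxi-v3 instance.
import numpy as np

state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the highest-value action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```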
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi_3_Q_learning", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
izaznov/taxi_3_Q_learning
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-14T21:41:12+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3. ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
null
null
# #Roleplay #Writing #Creative This repository hosts GGUF-IQ-Imatrix quants for [ResplendentAI/Aura_7B](https://huggingface.co/ResplendentAI/Aura_7B). "Please read all the model information at the bottom of the card." ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/HxOf1b4n4EyADoNIl2fOW.png) **What does "Imatrix" mean?** <details><summary> ⇲ Click here to expand/hide more information about this topic. </summary> It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse. [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) For imatrix data generation, kalomaze's `groups_merged.txt` with additional roleplay chats was used, you can find it [here](https://huggingface.co/Lewdiculous/Nyanade_Stunna-Maid-7B-GGUF-IQ-Imatrix/blob/main/imatrix-with-rp-ex.txt). This was just to add a bit more diversity to the data with the intended use case in mind. </details><br> # Quantization information: <details><summary> ⇲ Click here to expand/hide more information about this topic. </summary> ```python quantization_options = [ "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS" ] ``` **Steps performed:** ``` Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants) ``` *Using the latest llama.cpp at the time.* </details><br> # Original model information: Aura is an advanced sentience simulation trained on my own philosophical writings. It has been tested with various character cards and it worked with all of them. This model may not be overly intelligent, but it aims to be an entertaining companion. I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperature. I will say though that the prose of this model is distinct from the GPT 3.5/4 variant, and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise. If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs. This model responds best to ChatML for multiturn conversations.
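As a rough illustration of the "Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)" steps above, here is a hedged sketch driving the llama.cpp tools from Python; the script and binary names, paths, and output filenames are assumptions that vary between llama.cpp versions, so treat this as an outline rather than the card author's exact commands.

```python
# Hedged sketch of the quantization pipeline described above; tool names are assumptions.
import subprocess

# Base -> GGUF(F16)
subprocess.run(["python", "convert.py", "Aura_7B", "--outtype", "f16",
                "--outfile", "aura-7b-f16.gguf"], check=True)
# Imatrix-Data(F16), using the calibration file mentioned in the card
subprocess.run(["./imatrix", "-m", "aura-7b-f16.gguf",
                "-f", "imatrix-with-rp-ex.txt", "-o", "imatrix.dat"], check=True)
# GGUF(Imatrix-Quants), for two of the quantization_options listed above
for qtype in ["Q4_K_M", "IQ3_M"]:
    subprocess.run(["./quantize", "--imatrix", "imatrix.dat",
                    "aura-7b-f16.gguf", f"aura-7b.{qtype}.gguf", qtype], check=True)
```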
{"tags": ["gguf", "quantized", "roleplay", "multimodal", "vision", "llava", "sillytavern", "merge", "mistral", "conversational"], "inference": false}
Lewdiculous/Aura_7B-GGUF-IQ-Imatrix
null
[ "gguf", "quantized", "roleplay", "multimodal", "vision", "llava", "sillytavern", "merge", "mistral", "conversational", "region:us" ]
null
2024-04-14T21:44:39+00:00
[]
[]
TAGS #gguf #quantized #roleplay #multimodal #vision #llava #sillytavern #merge #mistral #conversational #region-us
# #Roleplay #Writing #Creative This repository hosts GGUF-IQ-Imatrix quants for ResplendentAI/Aura_7B. "Please read all the model information at the bottom of the card." !image/png What does "Imatrix" mean? <details><summary> ⇲ Click here to expand/hide more information about this topic. </summary> It stands for Importance Matrix, a technique used to improve the quality of quantized models. The Imatrix is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse. [[1]](URL [[2]](URL For imatrix data generation, kalomaze's 'groups_merged.txt' with additional roleplay chats was used, you can find it here. This was just to add a bit more diversity to the data with the intended use case in mind. </details><br> # Quantization information: <details><summary> ⇲ Click here to expand/hide more information about this topic. </summary> Steps performed: *Using the latest URL at the time.* </details><br> # Original model information: Aura is an advanced sentience simulation trained on my own philosophical writings. It has been tested with various character cards and it worked with all of them. This model may not be overly intelligent, but it aims to be an entertaining companion. I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperature. I will say though that the prose of this model is distinct from the GPT 3.5/4 variant, and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise. If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs. This model responds best to ChatML for multiturn conversations.
[ "# #Roleplay #Writing #Creative\n\nThis repository hosts GGUF-IQ-Imatrix quants for ResplendentAI/Aura_7B.\n\n\"Please read all the model information at the bottom of the card.\"\n\n!image/png\n\nWhat does \"Imatrix\" mean?\n\n<details><summary>\n⇲ Click here to expand/hide more information about this topic.\n</summary>\n \nIt stands for Importance Matrix, a technique used to improve the quality of quantized models.\nThe Imatrix is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process.\nThe idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.\n[[1]](URL [[2]](URL\n\nFor imatrix data generation, kalomaze's 'groups_merged.txt' with additional roleplay chats was used, you can find it here. This was just to add a bit more diversity to the data with the intended use case in mind.\n \n</details><br>", "# Quantization information:\n\n\n<details><summary>\n⇲ Click here to expand/hide more information about this topic.\n</summary>\n\n\n\nSteps performed:\n\n\n*Using the latest URL at the time.*\n \n</details><br>", "# Original model information:\n\nAura is an advanced sentience simulation trained on my own philosophical writings. It has been tested with various character cards and it worked with all of them. This model may not be overly intelligent, but it aims to be an entertaining companion.\n\nI recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperature. I will say though that the prose of this model is distinct from the GPT 3.5/4 variant, and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.\n\nIf you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.\n\nThis model responds best to ChatML for multiturn conversations." ]
[ "TAGS\n#gguf #quantized #roleplay #multimodal #vision #llava #sillytavern #merge #mistral #conversational #region-us \n", "# #Roleplay #Writing #Creative\n\nThis repository hosts GGUF-IQ-Imatrix quants for ResplendentAI/Aura_7B.\n\n\"Please read all the model information at the bottom of the card.\"\n\n!image/png\n\nWhat does \"Imatrix\" mean?\n\n<details><summary>\n⇲ Click here to expand/hide more information about this topic.\n</summary>\n \nIt stands for Importance Matrix, a technique used to improve the quality of quantized models.\nThe Imatrix is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process.\nThe idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.\n[[1]](URL [[2]](URL\n\nFor imatrix data generation, kalomaze's 'groups_merged.txt' with additional roleplay chats was used, you can find it here. This was just to add a bit more diversity to the data with the intended use case in mind.\n \n</details><br>", "# Quantization information:\n\n\n<details><summary>\n⇲ Click here to expand/hide more information about this topic.\n</summary>\n\n\n\nSteps performed:\n\n\n*Using the latest URL at the time.*\n \n</details><br>", "# Original model information:\n\nAura is an advanced sentience simulation trained on my own philosophical writings. It has been tested with various character cards and it worked with all of them. This model may not be overly intelligent, but it aims to be an entertaining companion.\n\nI recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperature. I will say though that the prose of this model is distinct from the GPT 3.5/4 variant, and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.\n\nIf you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.\n\nThis model responds best to ChatML for multiturn conversations." ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lingchensanwen/mistral-ins-generation-best-balanced
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:45:18+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF/resolve/main/sheep-duck-llama-2-70b-v1.1.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## 
FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
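For the multi-part quants above (Q6_K and Q8_0), here is a minimal sketch of the byte-level concatenation that the linked README describes; part order matters, and the filenames are the ones listed in the table.

```python
# Hedged sketch: reassemble a split GGUF by concatenating its parts in order.
parts = [
    "sheep-duck-llama-2-70b-v1.1.Q6_K.gguf.part1of2",
    "sheep-duck-llama-2-70b-v1.1.Q6_K.gguf.part2of2",
]
with open("sheep-duck-llama-2-70b-v1.1.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            while chunk := f.read(1 << 20):  # stream 1 MiB at a time
                out.write(chunk)
```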
{"language": ["en"], "license": "llama2", "library_name": "transformers", "tags": ["Riiid", "llama-2", "sheep-duck-llama-2"], "base_model": "Riiid/sheep-duck-llama-2-70b-v1.1", "quantized_by": "mradermacher"}
mradermacher/sheep-duck-llama-2-70b-v1.1-GGUF
null
[ "transformers", "gguf", "Riiid", "llama-2", "sheep-duck-llama-2", "en", "base_model:Riiid/sheep-duck-llama-2-70b-v1.1", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:46:13+00:00
[]
[ "en" ]
TAGS #transformers #gguf #Riiid #llama-2 #sheep-duck-llama-2 #en #base_model-Riiid/sheep-duck-llama-2-70b-v1.1 #license-llama2 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #Riiid #llama-2 #sheep-duck-llama-2 #en #base_model-Riiid/sheep-duck-llama-2-70b-v1.1 #license-llama2 #endpoints_compatible #region-us \n" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Chilean Spanish Small This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the stinoco/es_clinical_assistance dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.18.0 - Tokenizers 0.15.2
{"language": ["es"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["stinoco/es_clinical_assistance"], "base_model": "openai/whisper-tiny", "model-index": [{"name": "Whisper Chilean Spanish Small", "results": []}]}
stinoco/whisper-tiny-clinical-assistance
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "es", "dataset:stinoco/es_clinical_assistance", "base_model:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:47:16+00:00
[]
[ "es" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #es #dataset-stinoco/es_clinical_assistance #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us
# Whisper Chilean Spanish Small This model is a fine-tuned version of openai/whisper-tiny on the stinoco/es_clinical_assistance dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# Whisper Chilean Spanish Small\n\nThis model is a fine-tuned version of openai/whisper-tiny on the stinoco/es_clinical_assistance dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 10", "### Training results", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.0.0+cu117\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #es #dataset-stinoco/es_clinical_assistance #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us \n", "# Whisper Chilean Spanish Small\n\nThis model is a fine-tuned version of openai/whisper-tiny on the stinoco/es_clinical_assistance dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 10", "### Training results", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.0.0+cu117\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lleticiasilvaa/TinyLlama-1.1B-Chat-v1.0-checkpoint-25
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T21:47:35+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # log_sage_reward_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the hdfs_rlhf_log_summary_dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.4242 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.6936 | 0.8 | | No log | 2.0 | 3 | 0.6931 | 0.8 | | No log | 3.0 | 5 | 0.6928 | 1.0 | | No log | 4.0 | 6 | 0.6927 | 1.0 | | No log | 5.0 | 8 | 0.6923 | 1.0 | | 0.2849 | 6.0 | 10 | 0.6915 | 1.0 | | 0.2849 | 7.0 | 11 | 0.6908 | 1.0 | | 0.2849 | 8.0 | 13 | 0.6889 | 1.0 | | 0.2849 | 9.0 | 15 | 0.6838 | 1.0 | | 0.2849 | 10.0 | 16 | 0.6788 | 1.0 | | 0.2849 | 11.0 | 18 | 0.6633 | 1.0 | | 0.2669 | 12.0 | 20 | 0.6464 | 1.0 | | 0.2669 | 13.0 | 21 | 0.6422 | 1.0 | | 0.2669 | 14.0 | 23 | 0.6312 | 1.0 | | 0.2669 | 15.0 | 25 | 0.5991 | 1.0 | | 0.2669 | 16.0 | 26 | 0.5796 | 1.0 | | 0.2669 | 17.0 | 27 | 0.5571 | 1.0 | | 0.2669 | 18.0 | 29 | 0.5255 | 1.0 | | 0.2252 | 19.0 | 31 | 0.5055 | 1.0 | | 0.2252 | 20.0 | 32 | 0.4967 | 1.0 | | 0.2252 | 21.0 | 34 | 0.4841 | 1.0 | | 0.2252 | 22.0 | 36 | 0.4742 | 1.0 | | 0.2252 | 23.0 | 37 | 0.4700 | 1.0 | | 0.2252 | 24.0 | 39 | 0.4633 | 1.0 | | 0.1245 | 25.0 | 41 | 0.4573 | 1.0 | | 0.1245 | 26.0 | 42 | 0.4547 | 1.0 | | 0.1245 | 27.0 | 44 | 0.4501 | 1.0 | | 0.1245 | 28.0 | 46 | 0.4462 | 1.0 | | 0.1245 | 29.0 | 47 | 0.4444 | 1.0 | | 0.1245 | 30.0 | 49 | 0.4415 | 1.0 | | 0.0996 | 31.0 | 51 | 0.4390 | 1.0 | | 0.0996 | 32.0 | 52 | 0.4378 | 1.0 | | 0.0996 | 33.0 | 53 | 0.4368 | 1.0 | | 0.0996 | 34.0 | 55 | 0.4349 | 1.0 | | 0.0996 | 35.0 | 57 | 0.4333 | 1.0 | | 0.0996 | 36.0 | 58 | 0.4326 | 1.0 | | 0.0862 | 37.0 | 60 | 0.4315 | 1.0 | | 0.0862 | 38.0 | 62 | 0.4306 | 1.0 | | 0.0862 | 39.0 | 63 | 0.4301 | 1.0 | | 0.0862 | 40.0 | 65 | 0.4294 | 1.0 | | 0.0862 | 41.0 | 67 | 0.4288 | 1.0 | | 0.0862 | 42.0 | 68 | 0.4285 | 1.0 | | 0.0765 | 43.0 | 70 | 0.4281 | 1.0 | | 0.0765 | 44.0 | 72 | 0.4276 | 1.0 | | 0.0765 | 45.0 | 73 | 0.4272 | 1.0 | | 0.0765 | 46.0 | 75 | 0.4265 | 1.0 | | 0.0765 | 47.0 | 77 | 0.4261 | 1.0 | | 0.0765 | 48.0 | 78 | 0.4259 | 1.0 | | 0.0765 | 49.0 | 79 | 0.4257 | 1.0 | | 0.0783 | 50.0 | 81 | 0.4253 | 1.0 | | 0.0783 | 51.0 | 83 | 0.4250 | 1.0 | | 0.0783 | 52.0 | 84 | 0.4249 | 1.0 | | 0.0783 | 53.0 | 86 | 0.4247 | 1.0 | | 0.0783 | 54.0 | 88 | 0.4246 | 1.0 | | 0.0783 | 55.0 | 89 | 0.4245 | 1.0 | | 0.0652 | 56.0 | 91 | 0.4244 | 1.0 | | 0.0652 | 57.0 | 93 | 0.4243 | 1.0 | | 0.0652 | 58.0 | 94 | 0.4243 | 1.0 | | 0.0652 | 59.0 | 96 | 0.4242 | 1.0 | | 0.0652 | 60.0 | 98 | 0.4242 | 1.0 | | 0.0652 | 61.0 | 99 | 0.4242 | 1.0 | | 0.0655 | 61.09 | 100 | 0.4242 | 1.0 | ### Framework versions - Transformers 4.39.0 - Pytorch 2.2.1+cu121 - 
Datasets 2.18.0 - Tokenizers 0.15.2
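A hedged sketch of using the trained reward model at inference time, assuming the usual TRL reward-trainer convention of a single scalar logit (`num_labels=1`); the example summary string is a placeholder.

```python
# Hedged sketch: score a candidate log summary; assumes a single-logit reward head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "IrwinD/log_sage_reward_model"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tok("Placeholder HDFS log summary to score.", return_tensors="pt", truncation=True)
with torch.no_grad():
    reward = model(**inputs).logits[0, 0].item()
print(reward)
```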
{"license": "apache-2.0", "tags": ["trl", "reward-trainer", "generated_from_trainer"], "datasets": ["hdfs_rlhf_log_summary_dataset"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "log_sage_reward_model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "hdfs_rlhf_log_summary_dataset", "type": "hdfs_rlhf_log_summary_dataset", "config": "default", "split": "None", "args": "default"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]}
IrwinD/log_sage_reward_model
null
[ "transformers", "safetensors", "distilbert", "text-classification", "trl", "reward-trainer", "generated_from_trainer", "dataset:hdfs_rlhf_log_summary_dataset", "base_model:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:48:23+00:00
[]
[]
TAGS #transformers #safetensors #distilbert #text-classification #trl #reward-trainer #generated_from_trainer #dataset-hdfs_rlhf_log_summary_dataset #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
log\_sage\_reward\_model ======================== This model is a fine-tuned version of distilbert/distilbert-base-uncased on the hdfs\_rlhf\_log\_summary\_dataset dataset. It achieves the following results on the evaluation set: * Loss: 0.4242 * Accuracy: 1.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1.41e-05 * train\_batch\_size: 4 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 100 ### Training results ### Framework versions * Transformers 4.39.0 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1.41e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #distilbert #text-classification #trl #reward-trainer #generated_from_trainer #dataset-hdfs_rlhf_log_summary_dataset #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1.41e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DS-6.7B-schema This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1412 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5484 | 0.19 | 50 | 0.1685 | | 0.1696 | 0.38 | 100 | 0.1607 | | 0.1252 | 0.57 | 150 | 0.1452 | | 0.1023 | 0.76 | 200 | 0.1409 | | 1.6109 | 0.95 | 250 | 0.1412 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "deepseek-ai/deepseek-coder-6.7b-instruct", "model-index": [{"name": "DS-6.7B-schema", "results": []}]}
jdeklerk10/DS-6.7B-schema
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "license:other", "region:us" ]
null
2024-04-14T21:49:28+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #base_model-deepseek-ai/deepseek-coder-6.7b-instruct #license-other #region-us
DS-6.7B-schema ============== This model is a fine-tuned version of deepseek-ai/deepseek-coder-6.7b-instruct on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1412 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.01 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.0.dev0 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-deepseek-ai/deepseek-coder-6.7b-instruct #license-other #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Gordon119/TAT_TD-openai-whisper-large-v2-mix-with-zh-TAT-epoch3-total5epoch
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:50:53+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
zzttbrdd/sn6_10m
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T21:51:40+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> Builds on the sharded Falcon 7B Instruct model; the specific base model is linked in the Model Description section below. Trained using this [mental health dataset](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations). How to test the model:
```python
import os

import torch
from peft import PeftConfig, PeftModel
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig
)

os.environ["CUDA_VISIBLE_DEVICES"] = "0"

PEFT_MODEL = "akumar23/mental-falcon-7b"

# Load the base model in 4-bit NF4 quantization so it fits on a single GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

config = PeftConfig.from_pretrained(PEFT_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True
)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token

# Attach the fine-tuned LoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(model, PEFT_MODEL)

generation_config = model.generation_config
generation_config.max_new_tokens = 200
generation_config.do_sample = True  # enable sampling so temperature/top_p take effect
generation_config.temperature = 0.7
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id

device = "cuda:0"

prompt = """
<human>: how do i know if i am depressed
<assistant>:
""".strip()

encoding = tokenizer(prompt, return_tensors="pt").to(device)
with torch.inference_mode():
    outputs = model.generate(
        input_ids=encoding.input_ids,
        attention_mask=encoding.attention_mask,
        generation_config=generation_config
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [Aryan Kumar](https://www.linkedin.com/in/aryan-kumar-9640b4150); - **Model type:** Causal decoder-only; - **Language(s) (NLP):** English and French; - **License:** Apache 2.0; - **Finetuned from model:** [falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded). ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. 
--> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"license": "apache-2.0", "library_name": "transformers"}
akumar23/mental-falcon-7b
null
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-14T21:54:48+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #falcon #text-generation #custom_code #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID Builds on the sharded Falcon 7B Instruct model; the specific base model is linked in the model description section. Trained using this mental health dataset How to test the model ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: Aryan Kumar; - Model type: Causal decoder-only; - Language(s) (NLP): English and French; - License: Apache 2.0; - Finetuned from model: falcon-7b-instruct-sharded. ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID\n\n\n\nBuilds on the falcon 7b instruct shared model, the specific model is linked in the model description section.\n\nTrained using this mental health dataset\n\n\nHow to test the model", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Aryan Kumar;\n- Model type: Causal decoder-only;\n- Language(s) (NLP): English and French;\n- License: Apache 2.0;\n- Finetuned from model: falcon-7b-instruct-sharded.", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #falcon #text-generation #custom_code #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID\n\n\n\nBuilds on the falcon 7b instruct shared model, the specific model is linked in the model description section.\n\nTrained using this mental health dataset\n\n\nHow to test the model", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Aryan Kumar;\n- Model type: Causal decoder-only;\n- Language(s) (NLP): English and French;\n- License: Apache 2.0;\n- Finetuned from model: falcon-7b-instruct-sharded.", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
EdBerg/gpt-neo-2B
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-14T21:55:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-7b_ablated_model_layer_1_gate_only_healed This model was trained from scratch on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.9879 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 16 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9598 | 1.0 | 8715 | 0.9879 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1
{"tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "model-index": [{"name": "mistral-7b_ablated_model_layer_1_gate_only_healed", "results": []}]}
Ffohturk/mistral-7b_ablated_model_layer_1_gate_only_healed
null
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T21:58:12+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #trl #sft #generated_from_trainer #conversational #dataset-generator #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
mistral-7b\_ablated\_model\_layer\_1\_gate\_only\_healed ======================================================== This model was trained from scratch on the generator dataset. It achieves the following results on the evaluation set: * Loss: 0.9879 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 2 * total\_train\_batch\_size: 16 * total\_eval\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #generated_from_trainer #conversational #dataset-generator #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1" ]
null
null
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{}
Sikontil/Char.gi
null
[ "arxiv:1910.09700", "region:us" ]
null
2024-04-14T22:02:02+00:00
[ "1910.09700" ]
[]
TAGS #arxiv-1910.09700 #region-us
# Model Card for Model ID This modelcard aims to be a base template for new models. It has been generated using this raw template. ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#arxiv-1910.09700 #region-us \n", "# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
BrandonM001/bert-finetuned-ner-accelerate
null
[ "transformers", "safetensors", "bert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T22:07:02+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-finetuned-kwsylgen-64-simple_input_BARTlarge This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1785 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0641 | 0.18 | 500 | 0.2451 | | 0.2194 | 0.36 | 1000 | 0.2228 | | 0.1989 | 0.54 | 1500 | 0.2086 | | 0.1888 | 0.72 | 2000 | 0.2027 | | 0.177 | 0.9 | 2500 | 0.1976 | | 0.1703 | 1.08 | 3000 | 0.1933 | | 0.1647 | 1.26 | 3500 | 0.1928 | | 0.159 | 1.44 | 4000 | 0.1890 | | 0.1538 | 1.61 | 4500 | 0.1864 | | 0.151 | 1.79 | 5000 | 0.1857 | | 0.1471 | 1.97 | 5500 | 0.1828 | | 0.1436 | 2.15 | 6000 | 0.1814 | | 0.1435 | 2.33 | 6500 | 0.1806 | | 0.141 | 2.51 | 7000 | 0.1799 | | 0.1393 | 2.69 | 7500 | 0.1790 | | 0.1388 | 2.87 | 8000 | 0.1785 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
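The card lacks a usage snippet; as a hedged sketch, the checkpoint should load through the standard `transformers` text2text pipeline. The keyword-style input below is a guess from the model name ("kwsylgen" suggests keyword/syllable generation), not documented preprocessing.

```python
# Minimal sketch: inference with the fine-tuned BART checkpoint via the
# standard transformers text2text-generation pipeline. The input format is
# hypothetical, since the card does not document how training data was built.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="adamjweintraut/bart-finetuned-kwsylgen-64-simple_input_BARTlarge",
)

# Hypothetical keyword-style prompt; adjust to the format used during training.
print(generator("love | night | city", max_length=64)[0]["generated_text"])
```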
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/bart-large", "model-index": [{"name": "bart-finetuned-kwsylgen-64-simple_input_BARTlarge", "results": []}]}
adamjweintraut/bart-finetuned-kwsylgen-64-simple_input_BARTlarge
null
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T22:13:43+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bart-finetuned-kwsylgen-64-simple\_input\_BARTlarge =================================================== This model is a fine-tuned version of facebook/bart-large on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1785 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# NeuralStockFusion-7b

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/64d71ab4089bc502ceb44d29/5Ex2YG8H1oLXaS25gvZQs.webp)

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b) as a base.

### Models Merged

The following models were included in the merge:
* [Kukedlc/NeuralArjuna-7B-DT](https://huggingface.co/Kukedlc/NeuralArjuna-7B-DT)
* [Kukedlc/NeuralMaths-Experiment-7b](https://huggingface.co/Kukedlc/NeuralMaths-Experiment-7b)
* [Kukedlc/NeuralSynthesis-7B-v0.1](https://huggingface.co/Kukedlc/NeuralSynthesis-7B-v0.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Kukedlc/NeuralMaths-Experiment-7b
  - model: Kukedlc/NeuralArjuna-7B-DT
  - model: Kukedlc/NeuralSirKrishna-7b
  - model: Kukedlc/NeuralSynthesis-7B-v0.1
merge_method: model_stock
base_model: Kukedlc/NeuralSirKrishna-7b
dtype: bfloat16
```

# Model Inference:

Install the dependencies first (in a notebook: `!pip install -qU transformers accelerate bitsandbytes`), then run:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, BitsAndBytesConfig

# 4-bit NF4 quantization so the 7B merge fits comfortably on a single GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

MODEL_NAME = 'Kukedlc/NeuralStockFusion-7b'

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map='cuda:0', quantization_config=bnb_config)

inputs = tokenizer(["[INST] What is a large language model, in spanish \n[/INST]\n"], return_tensors="pt").to('cuda')
streamer = TextStreamer(tokenizer)

# Besides returning the usual output, the streamer also prints the generated text to stdout.
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=256, do_sample=True, temperature=0.7, repetition_penalty=1.4, top_p=0.9)
```
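To reproduce the merge from the YAML above, mergekit can also be driven from Python. This is a sketch under the assumption that mergekit's documented `run_merge` entry point and `MergeOptions` keep the signatures shown here; verify against your installed mergekit version.

```python
# Hedged sketch: re-running the model_stock merge from the YAML in this card.
# run_merge / MergeOptions / MergeConfiguration follow mergekit's README;
# exact signatures may differ across mergekit releases.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge-config.yaml") as f:  # the YAML shown in this card, saved locally
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./NeuralStockFusion-7b",  # illustrative output path
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```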
{"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Kukedlc/NeuralSirKrishna-7b", "Kukedlc/NeuralArjuna-7B-DT", "Kukedlc/NeuralMaths-Experiment-7b", "Kukedlc/NeuralSynthesis-7B-v0.1"]}
Kukedlc/NeuralStockFusion-7b
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2403.19522", "base_model:Kukedlc/NeuralSirKrishna-7b", "base_model:Kukedlc/NeuralArjuna-7B-DT", "base_model:Kukedlc/NeuralMaths-Experiment-7b", "base_model:Kukedlc/NeuralSynthesis-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T22:18:13+00:00
[ "2403.19522" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-Kukedlc/NeuralSirKrishna-7b #base_model-Kukedlc/NeuralArjuna-7B-DT #base_model-Kukedlc/NeuralMaths-Experiment-7b #base_model-Kukedlc/NeuralSynthesis-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# NeuralStockFusion-7b !image/webp # merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the Model Stock merge method using Kukedlc/NeuralSirKrishna-7b as a base. ### Models Merged The following models were included in the merge: * Kukedlc/NeuralArjuna-7B-DT * Kukedlc/NeuralMaths-Experiment-7b * Kukedlc/NeuralSynthesis-7B-v0.1 ### Configuration The following YAML configuration was used to produce this model: # Model Inference:
[ "# NeuralStockFusion-7b\n\n!image/webp", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the Model Stock merge method using Kukedlc/NeuralSirKrishna-7b as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* Kukedlc/NeuralArjuna-7B-DT\n* Kukedlc/NeuralMaths-Experiment-7b\n* Kukedlc/NeuralSynthesis-7B-v0.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:", "# Model Inference:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-Kukedlc/NeuralSirKrishna-7b #base_model-Kukedlc/NeuralArjuna-7B-DT #base_model-Kukedlc/NeuralMaths-Experiment-7b #base_model-Kukedlc/NeuralSynthesis-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# NeuralStockFusion-7b\n\n!image/webp", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the Model Stock merge method using Kukedlc/NeuralSirKrishna-7b as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* Kukedlc/NeuralArjuna-7B-DT\n* Kukedlc/NeuralMaths-Experiment-7b\n* Kukedlc/NeuralSynthesis-7B-v0.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:", "# Model Inference:" ]
text-generation
transformers
# mixtral-8x22b-instruct-oh - EXL2 7.0bpw This is a 7.0bpw EXL2 quant of [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh) Details about the model can be found at the above model page. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. _TODO_ ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="mixtral-8x22b-instruct-oh" BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" if [ -d "$MODEL_DIR" ]; then output=$(python test_inference.py -m "$MODEL_DIR" -gs 44,48 -ed data/wikitext/wikitext-2-v1.parquet) score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" fi done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="mixtral-8x22b-instruct-oh" # Define variables MODEL_DIR="models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(2.25) BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
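For loading outside Text Generation WebUI, a minimal Python sketch along these lines should work with exllamav2 0.0.18. The class and method names below follow exllamav2's bundled examples but are assumptions to verify against your installed version; the model path is local and illustrative.

```python
# Hedged sketch: direct inference with the exllamav2 Python API (~v0.0.18).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "models/mixtral-8x22b-instruct-oh_exl2_7.0bpw"  # illustrative path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the cache as layers load
model.load_autosplit(cache)               # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

print(generator.generate_simple("Explain EXL2 quantization in one sentence.", settings, 128))
```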
{"language": ["en"], "license": "apache-2.0", "tags": ["exl2"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "mistral-community/Mixtral-8x22B-v0.1"}
Dracones/mixtral-8x22b-instruct-oh_exl2_7.0bpw
null
[ "transformers", "safetensors", "mixtral", "text-generation", "exl2", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "7-bit", "region:us" ]
null
2024-04-14T22:19:33+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mixtral #text-generation #exl2 #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #7-bit #region-us
# mixtral-8x22b-instruct-oh - EXL2 7.0bpw This is a 7.0bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh Details about the model can be found at the above model page. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. _TODO_ ### Perplexity Script This was the script used for perplexity testing. ## Quant Details This is the script used for quantization.
[ "# mixtral-8x22b-instruct-oh - EXL2 7.0bpw\n\nThis is a 7.0bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh\n\nDetails about the model can be found at the above model page.", "## EXL2 Version\n\nThese quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.\n\nIf you have problems loading these models, please update Text Generation WebUI to the latest version.", "## Perplexity Scoring\n\nBelow are the perplexity scores for the EXL2 models. A lower score is better. \n\n_TODO_", "### Perplexity Script\n\nThis was the script used for perplexity testing.", "## Quant Details\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #exl2 #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #7-bit #region-us \n", "# mixtral-8x22b-instruct-oh - EXL2 7.0bpw\n\nThis is a 7.0bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh\n\nDetails about the model can be found at the above model page.", "## EXL2 Version\n\nThese quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.\n\nIf you have problems loading these models, please update Text Generation WebUI to the latest version.", "## Perplexity Scoring\n\nBelow are the perplexity scores for the EXL2 models. A lower score is better. \n\n_TODO_", "### Perplexity Script\n\nThis was the script used for perplexity testing.", "## Quant Details\n\nThis is the script used for quantization." ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
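The usage section above is empty; given the repo's metadata (a BERT token-classification model), a minimal hedged sketch with the standard `transformers` NER pipeline could look like this. The aggregation strategy and example sentence are illustrative, since the card documents neither the label set nor the training data.

```python
# Hedged sketch: token classification (NER) with the standard transformers pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="BrandonM001/bert-finetuned-ner-accelerate",
    aggregation_strategy="simple",  # merges sub-word tokens into entity spans
)

print(ner("Hugging Face is based in New York City."))
```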
{"library_name": "transformers", "tags": []}
uday91/llama-2-7b-customer-support-llm
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-14T22:20:05+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ruBert-base-sberquad-0.001-filtered This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
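The card's usage section is empty; as a hedged sketch, the adapter can presumably be attached to the base checkpoint with peft's standard `PeftModel` API. The task head is not stated on the card, so `AutoModelForQuestionAnswering` below (suggested only by the "sberquad" name) is an assumption.

```python
# Hedged sketch: attach the PEFT adapter to the ruBert base model.
# AutoModelForQuestionAnswering is inferred from the "sberquad" name; the card
# does not state the task, so swap in the appropriate Auto class if needed.
from peft import PeftModel
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

base = AutoModelForQuestionAnswering.from_pretrained("ai-forever/ruBert-base")
model = PeftModel.from_pretrained(base, "Shalazary/ruBert-base-sberquad-0.001-filtered")
tokenizer = AutoTokenizer.from_pretrained("ai-forever/ruBert-base")
```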
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.001-filtered", "results": []}]}
Shalazary/ruBert-base-sberquad-0.001-filtered
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:ai-forever/ruBert-base", "license:apache-2.0", "region:us" ]
null
2024-04-14T22:20:59+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
# ruBert-base-sberquad-0.001-filtered This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# ruBert-base-sberquad-0.001-filtered\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n", "# ruBert-base-sberquad-0.001-filtered\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-to-image
diffusers
# Tattoo World <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Arczisan/tattoo-world/tree/main) them in the Files & versions tab.
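Beyond manual download, the weights can presumably be loaded directly with diffusers. A minimal sketch, assuming the standard `load_lora_weights` API and the SD 1.5 base named in the metadata; the prompt is illustrative, built from the widget example.

```python
# Minimal sketch: apply the LoRA on top of the SD 1.5 base with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# May need weight_name="..." if the repo holds several safetensors files.
pipe.load_lora_weights("Arczisan/tattoo-world")

image = pipe("masterpiece, best quality, 1woman, tattoos, beach background").images[0]
image.save("tattoo_world_sample.png")
```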
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "UNICODE\u0000\u0000<\u0000l\u0000o\u0000r\u0000a\u0000:\u0000T\u0000a\u0000t\u0000t\u0000o\u0000o\u0000W\u0000o\u0000r\u0000l\u0000d\u0000:\u00001\u0000>\u0000 \u0000(\u0000T\u0000a\u0000t\u0000t\u0000o\u0000o\u0000W\u0000o\u0000r\u0000l\u0000d\u0000:\u00000\u0000.\u00009\u0000)\u0000 \u0000 \u0000 \u0000m\u0000a\u0000s\u0000t\u0000e\u0000r\u0000p\u0000i\u0000e\u0000c\u0000e\u0000,\u0000 \u0000b\u0000e\u0000s\u0000t\u0000 \u0000q\u0000u\u0000a\u0000l\u0000i\u0000t\u0000y\u0000,\u0000s\u0000h\u0000c\u0000h\u0000,\u0000(\u00001\u0000w\u0000o\u0000m\u0000a\u0000n\u0000:\u00001\u0000.\u00004\u0000)\u0000 \u0000(\u0000b\u0000l\u0000u\u0000e\u0000 \u0000b\u0000i\u0000k\u0000i\u0000n\u0000i\u0000 \u0000:\u00001\u0000.\u00003\u0000)\u0000 \u0000,\u0000s\u0000o\u0000l\u0000o\u0000,\u0000b\u0000e\u0000a\u0000u\u0000t\u0000i\u0000f\u0000u\u0000l\u0000,\u0000a\u0000t\u0000t\u0000r\u0000a\u0000c\u0000t\u0000i\u0000v\u0000e\u0000,\u0000c\u0000u\u0000t\u0000e\u0000 \u0000<\u0000l\u0000o\u0000r\u0000a\u0000:\u0000m\u0000o\u0000r\u0000e\u0000_\u0000d\u0000e\u0000t\u0000a\u0000i\u0000l\u0000s\u0000:\u0000.\u00006\u0000>\u0000 \u0000(\u0000b\u0000e\u0000a\u0000c\u0000h\u0000 \u0000b\u0000a\u0000c\u0000k\u0000g\u0000r\u0000o\u0000u\u0000n\u0000d\u0000:\u00001\u0000.\u00005\u0000)\u0000 \u0000", "output": {"url": "images/00051-2622726513.jpeg"}}], "base_model": "runwayml/stable-diffusion-v1-5"}
Arczisan/tattoo-world
null
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "region:us" ]
null
2024-04-14T22:25:27+00:00
[]
[]
TAGS #diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-runwayml/stable-diffusion-v1-5 #region-us
# Tattoo World <Gallery /> ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
[ "# Tattoo World\n\n<Gallery />", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
[ "TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-runwayml/stable-diffusion-v1-5 #region-us \n", "# Tattoo World\n\n<Gallery />", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) * [Equall/Saul-Base](https://huggingface.co/Equall/Saul-Base) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Equall/Saul-Base layer_range: [0, 32] - model: HuggingFaceH4/zephyr-7b-beta layer_range: [0, 32] merge_method: slerp base_model: HuggingFaceH4/zephyr-7b-beta parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
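The card stops at the merge config; since the base is a zephyr chat model, a minimal inference sketch (assuming the merged repo inherited zephyr-7b-beta's chat template, which the card does not confirm) might look like:

```python
# Hedged sketch: chat inference with the merged model. Repo id is from this
# card's header; the chat template is assumed to come from zephyr-7b-beta.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mergekit-community/mergekit-slerp-fodinzo"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what a SLERP merge does."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```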
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["HuggingFaceH4/zephyr-7b-beta", "Equall/Saul-Base"]}
mergekit-community/mergekit-slerp-fodinzo
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:Equall/Saul-Base", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T22:26:07+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-HuggingFaceH4/zephyr-7b-beta #base_model-Equall/Saul-Base #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * HuggingFaceH4/zephyr-7b-beta * Equall/Saul-Base ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* HuggingFaceH4/zephyr-7b-beta\n* Equall/Saul-Base", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-HuggingFaceH4/zephyr-7b-beta #base_model-Equall/Saul-Base #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* HuggingFaceH4/zephyr-7b-beta\n* Equall/Saul-Base", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
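The card is an empty template; going only by the repo name (a Llama-2-7b-chat derivative), a hedged loading sketch might look like this. The causal-LM architecture is an assumption, the repo may hold an adapter rather than full weights, and Llama-2 derivatives can require gated-access approval.

```python
# Hedged sketch: load as a causal LM, inferred from the Llama-2 naming only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "EdBerg/quotes_Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "Give me a short motivational quote."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```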
{"library_name": "transformers", "tags": []}
EdBerg/quotes_Llama-2-7b-chat-hf
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-14T22:30:08+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# mlx-community/c4ai-command-r-plus-2bit This model was converted to MLX format from [`CohereForAI/c4ai-command-r-plus`](https://huggingface.co/CohereForAI/c4ai-command-r-plus) using mlx-lm version **0.9.0**. Refer to the [original model card](https://huggingface.co/CohereForAI/c4ai-command-r-plus) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/c4ai-command-r-plus-2bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
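Command R+ is a chat-tuned model, so a raw prompt like "hello" bypasses its chat template. Below is a minimal sketch that routes the prompt through the template first; it assumes the tokenizer returned by `load` delegates `apply_chat_template` to the underlying Hugging Face tokenizer, which recent mlx-lm versions do, so verify against your installed version.

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/c4ai-command-r-plus-2bit")

# Wrap the user turn in the model's chat template instead of passing raw text.
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```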
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["mlx"]}
mlx-community/c4ai-command-r-plus-2bit
null
[ "transformers", "safetensors", "cohere", "text-generation", "mlx", "conversational", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T22:37:43+00:00
[]
[ "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar" ]
TAGS #transformers #safetensors #cohere #text-generation #mlx #conversational #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# mlx-community/c4ai-command-r-plus-2bit This model was converted to MLX format from ['CohereForAI/c4ai-command-r-plus']() using mlx-lm version 0.9.0. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/c4ai-command-r-plus-2bit\nThis model was converted to MLX format from ['CohereForAI/c4ai-command-r-plus']() using mlx-lm version 0.9.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#transformers #safetensors #cohere #text-generation #mlx #conversational #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# mlx-community/c4ai-command-r-plus-2bit\nThis model was converted to MLX format from ['CohereForAI/c4ai-command-r-plus']() using mlx-lm version 0.9.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-generation
transformers
# mixtral-8x22b-instruct-oh - EXL2 6.0bpw This is a 6.0bpw EXL2 quant of [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh) Details about the model can be found at the above model page. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. _TODO_ ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="mixtral-8x22b-instruct-oh" BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" if [ -d "$MODEL_DIR" ]; then output=$(python test_inference.py -m "$MODEL_DIR" -gs 44,48 -ed data/wikitext/wikitext-2-v1.parquet) score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" fi done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="mixtral-8x22b-instruct-oh" # Define variables MODEL_DIR="models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(2.25) BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
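The scripts above cover measurement and conversion; for inference, below is a hedged sketch of loading one of these quants with the exllamav2 Python API. The class names follow the 0.0.18-era examples, while the model path and sampler values are illustrative, so adjust both for your setup.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Illustrative path to a local EXL2 quant directory.
config = ExLlamaV2Config()
config.model_dir = "models/mixtral-8x22b-instruct-oh_exl2_6.0bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the cache lazily...
model.load_autosplit(cache)               # ...then split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Hello, world", settings, num_tokens=64))
```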
{"language": ["en"], "license": "apache-2.0", "tags": ["exl2"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "mistral-community/Mixtral-8x22B-v0.1"}
Dracones/mixtral-8x22b-instruct-oh_exl2_6.0bpw
null
[ "transformers", "safetensors", "mixtral", "text-generation", "exl2", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "6-bit", "region:us" ]
null
2024-04-14T22:38:52+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mixtral #text-generation #exl2 #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us
# mixtral-8x22b-instruct-oh - EXL2 6.0bpw This is a 6.0bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh Details about the model can be found at the above model page. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. _TODO_ ### Perplexity Script This was the script used for perplexity testing. ## Quant Details This is the script used for quantization.
[ "# mixtral-8x22b-instruct-oh - EXL2 6.0bpw\n\nThis is a 6.0bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh\n\nDetails about the model can be found at the above model page.", "## EXL2 Version\n\nThese quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.\n\nIf you have problems loading these models, please update Text Generation WebUI to the latest version.", "## Perplexity Scoring\n\nBelow are the perplexity scores for the EXL2 models. A lower score is better. \n\n_TODO_", "### Perplexity Script\n\nThis was the script used for perplexity testing.", "## Quant Details\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #exl2 #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us \n", "# mixtral-8x22b-instruct-oh - EXL2 6.0bpw\n\nThis is a 6.0bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh\n\nDetails about the model can be found at the above model page.", "## EXL2 Version\n\nThese quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.\n\nIf you have problems loading these models, please update Text Generation WebUI to the latest version.", "## Perplexity Scoring\n\nBelow are the perplexity scores for the EXL2 models. A lower score is better. \n\n_TODO_", "### Perplexity Script\n\nThis was the script used for perplexity testing.", "## Quant Details\n\nThis is the script used for quantization." ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Tverous/llama-13b-ppo-final
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-14T22:39:44+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-sft-837k This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the Lichang-Chen/800k_ift dataset. It achieves the following results on the evaluation set: - Loss: 3.1087 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.1323 | 1.0 | 1179 | 3.1087 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
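The card stops at training details, so here is a minimal hedged sketch for sampling from the checkpoint with plain transformers. The prompt is illustrative, and since the card does not state which chat format the SFT data used, no chat template is applied.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lichang-Chen/zephyr-7b-sft-837k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt; adapt to whatever format the SFT data actually used.
inputs = tokenizer("Explain instruction tuning in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```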
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["Lichang-Chen/800k_ift"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "zephyr-7b-sft-837k", "results": []}]}
Lichang-Chen/zephyr-7b-sft-837k
null
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:Lichang-Chen/800k_ift", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T22:43:20+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-Lichang-Chen/800k_ift #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
zephyr-7b-sft-837k ================== This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the Lichang-Chen/800k\_ift dataset. It achieves the following results on the evaluation set: * Loss: 3.1087 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 8 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-Lichang-Chen/800k_ift #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
jhamel/rebel_fine_tune
null
[ "transformers", "safetensors", "bart", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-14T22:43:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bart #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bart #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-2024-03-12 This model is a fine-tuned version of [ericjiliangli/t5-small-news-summarization](https://huggingface.co/ericjiliangli/t5-small-news-summarization) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7445 - Rouge1: 30.158 - Rouge2: 15.0234 - Rougel: 25.9885 - Rougelsum: 26.1101 - Gen Len: 18.759 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:| | 1.9214 | 1.0 | 328 | 1.7445 | 30.158 | 15.0234 | 25.9885 | 26.1101 | 18.759 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
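The card omits a usage snippet; a minimal hedged sketch with the summarization pipeline follows (the input text and length limits are illustrative).

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization", model="liamvbetts/t5-small-finetuned-2024-03-12"
)

article = "Replace this with the news article you want summarized ..."
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```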
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "ericjiliangli/t5-small-news-summarization", "model-index": [{"name": "t5-small-finetuned-2024-03-12", "results": []}]}
liamvbetts/t5-small-finetuned-2024-03-12
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:ericjiliangli/t5-small-news-summarization", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T22:44:27+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-ericjiliangli/t5-small-news-summarization #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-2024-03-12 ============================= This model is a fine-tuned version of ericjiliangli/t5-small-news-summarization on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.7445 * Rouge1: 30.158 * Rouge2: 15.0234 * Rougel: 25.9885 * Rougelsum: 26.1101 * Gen Len: 18.759 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 4e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-ericjiliangli/t5-small-news-summarization #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CNEC_2_0_ext_Czert-B-base-cased This model is a fine-tuned version of [UWB-AIR/Czert-B-base-cased](https://huggingface.co/UWB-AIR/Czert-B-base-cased) on the cnec dataset. It achieves the following results on the evaluation set: - Loss: 0.2280 - Precision: 0.8360 - Recall: 0.8675 - F1: 0.8514 - Accuracy: 0.9643 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1689 | 4.46 | 500 | 0.1507 | 0.8355 | 0.8571 | 0.8462 | 0.9640 | | 0.0245 | 8.93 | 1000 | 0.1823 | 0.8361 | 0.8660 | 0.8508 | 0.9645 | | 0.009 | 13.39 | 1500 | 0.2081 | 0.8438 | 0.8630 | 0.8533 | 0.9641 | | 0.0047 | 17.86 | 2000 | 0.2268 | 0.8338 | 0.8640 | 0.8486 | 0.9640 | | 0.003 | 22.32 | 2500 | 0.2280 | 0.8360 | 0.8675 | 0.8514 | 0.9643 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
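For reference, a minimal hedged sketch of running the checkpoint as a Czech NER tagger via the token-classification pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="stulcrad/CNEC_2_0_ext_Czert-B-base-cased",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)
print(ner("Václav Havel se narodil v Praze."))  # illustrative Czech sentence
```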
{"tags": ["generated_from_trainer"], "datasets": ["cnec"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "UWB-AIR/Czert-B-base-cased", "model-index": [{"name": "CNEC_2_0_ext_Czert-B-base-cased", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "cnec", "type": "cnec", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "precision", "value": 0.8359636537541846, "name": "Precision"}, {"type": "recall", "value": 0.8674937965260546, "name": "Recall"}, {"type": "f1", "value": 0.8514369215781783, "name": "F1"}, {"type": "accuracy", "value": 0.964302472477892, "name": "Accuracy"}]}]}]}
stulcrad/CNEC_2_0_ext_Czert-B-base-cased
null
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:cnec", "base_model:UWB-AIR/Czert-B-base-cased", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T22:46:03+00:00
[]
[]
TAGS #transformers #safetensors #bert #token-classification #generated_from_trainer #dataset-cnec #base_model-UWB-AIR/Czert-B-base-cased #model-index #autotrain_compatible #endpoints_compatible #region-us
CNEC\_2\_0\_ext\_Czert-B-base-cased =================================== This model is a fine-tuned version of UWB-AIR/Czert-B-base-cased on the cnec dataset. It achieves the following results on the evaluation set: * Loss: 0.2280 * Precision: 0.8360 * Recall: 0.8675 * F1: 0.8514 * Accuracy: 0.9643 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 25 ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.16.1 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 25", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #safetensors #bert #token-classification #generated_from_trainer #dataset-cnec #base_model-UWB-AIR/Czert-B-base-cased #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 25", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
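The quick-start section above is left as a placeholder; based on this repo's text-classification tags, a minimal hedged sketch follows. The input sentence is illustrative and the label taxonomy is undocumented in the card.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MaiiaCompsolutions/industry_classifier_full_descr_3rd_level",
)
# Illustrative input; the model's industry label set is not documented here.
print(classifier("We design and manufacture industrial robotic arms."))
```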
{"library_name": "transformers", "tags": []}
MaiiaCompsolutions/industry_classifier_full_descr_3rd_level
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T22:49:39+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bioGPT_finetuned_ncbi This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1880 - Precision: 0.4637 - Recall: 0.5448 - F1: 0.5010 - Accuracy: 0.9476 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1735 | 1.0 | 1358 | 0.2050 | 0.3990 | 0.4094 | 0.4041 | 0.9415 | | 0.1086 | 2.0 | 2716 | 0.1800 | 0.4230 | 0.5354 | 0.4726 | 0.9463 | | 0.07 | 3.0 | 4074 | 0.1880 | 0.4637 | 0.5448 | 0.5010 | 0.9476 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
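The card above describes a token-classification (NER) fine-tune of BioGPT but includes no usage snippet; below is a minimal sketch of loading it with the transformers `pipeline` API, assuming the checkpoint id `nik548/bioGPT_finetuned_ncbi` listed in this record and an arbitrary biomedical example sentence.

```python
from transformers import pipeline

# Load the fine-tuned BioGPT token-classification head from the Hub.
ner = pipeline(
    "token-classification",
    model="nik548/bioGPT_finetuned_ncbi",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

# Placeholder sentence from the biomedical domain the NCBI dataset covers.
print(ner("Mutations in the BRCA1 gene are linked to hereditary breast cancer."))
```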
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "microsoft/biogpt", "model-index": [{"name": "bioGPT_finetuned_ncbi", "results": []}]}
nik548/bioGPT_finetuned_ncbi
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "token-classification", "generated_from_trainer", "base_model:microsoft/biogpt", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T22:51:17+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
bioGPT\_finetuned\_ncbi ======================= This model is a fine-tuned version of microsoft/biogpt on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1880 * Precision: 0.4637 * Recall: 0.5448 * F1: 0.5010 * Accuracy: 0.9476 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-2024-03-13 This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-12](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-12) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8462 - Rouge1: 30.3334 - Rouge2: 17.8246 - Rougel: 26.5826 - Rougelsum: 27.0835 - Gen Len: 18.619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.6474 | 1.0 | 332 | 1.8462 | 30.3334 | 17.8246 | 26.5826 | 27.0835 | 18.619 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
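The ROUGE scores above come from a summarization fine-tune; a minimal inference sketch using the transformers `pipeline` API with the repository id `liamvbetts/t5-small-finetuned-2024-03-12`'s successor from this record (the article text is a placeholder, and the generation lengths are chosen to roughly match the card's Gen Len of ~18 tokens):

```python
from transformers import pipeline

# T5 checkpoints are seq2seq models, so the summarization pipeline applies directly.
summarizer = pipeline("summarization", model="liamvbetts/t5-small-finetuned-2024-03-13")

article = (
    "The city council approved a new budget on Monday, raising funding "
    "for public transit and road repairs."
)  # placeholder input

print(summarizer(article, max_length=20, min_length=5)[0]["summary_text"])
```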
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "liamvbetts/t5-small-finetuned-2024-03-12", "model-index": [{"name": "t5-small-finetuned-2024-03-13", "results": []}]}
liamvbetts/t5-small-finetuned-2024-03-13
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:liamvbetts/t5-small-finetuned-2024-03-12", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T22:52:17+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-12 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-2024-03-13 ============================= This model is a fine-tuned version of liamvbetts/t5-small-finetuned-2024-03-12 on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.8462 * Rouge1: 30.3334 * Rouge2: 17.8246 * Rougel: 26.5826 * Rougelsum: 27.0835 * Gen Len: 18.619 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 4e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-12 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
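The template above leaves the quick-start section empty; here is a generic, hedged sketch for a causal LM hosted under the id in this record (`cilantro9246/whfwbmi`, tagged `stablelm`/`text-generation`), assuming it loads through the standard auto classes — the card itself gives no details.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cilantro9246/whfwbmi"  # id from this record; no other details are published
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```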
{"library_name": "transformers", "tags": []}
cilantro9246/whfwbmi
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T22:53:04+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-2024-03-14 This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-13](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-13) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6488 - Rouge1: 36.7711 - Rouge2: 23.7969 - Rougel: 33.074 - Rougelsum: 33.6007 - Gen Len: 18.814 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:| | 1.6749 | 1.0 | 341 | 1.6488 | 36.7711 | 23.7969 | 33.074 | 33.6007 | 18.814 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "liamvbetts/t5-small-finetuned-2024-03-13", "model-index": [{"name": "t5-small-finetuned-2024-03-14", "results": []}]}
liamvbetts/t5-small-finetuned-2024-03-14
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:liamvbetts/t5-small-finetuned-2024-03-13", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T22:55:26+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-13 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-2024-03-14 ============================= This model is a fine-tuned version of liamvbetts/t5-small-finetuned-2024-03-13 on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.6488 * Rouge1: 36.7711 * Rouge2: 23.7969 * Rougel: 33.074 * Rougelsum: 33.6007 * Gen Len: 18.814 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 4e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-13 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# mixtral-8x22b-instruct-oh - EXL2 5.5bpw This is a 5.5bpw EXL2 quant of [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh) Details about the model can be found at the above model page. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. _TODO_ ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="mixtral-8x22b-instruct-oh" BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" if [ -d "$MODEL_DIR" ]; then output=$(python test_inference.py -m "$MODEL_DIR" -gs 44,48 -ed data/wikitext/wikitext-2-v1.parquet) score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" fi done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="mixtral-8x22b-instruct-oh" # Define variables MODEL_DIR="models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(2.25) BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
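The card shows how the quant was produced but not how to load it outside Text Generation WebUI; below is a minimal Python sketch against the exllamav2 API of the 0.0.18 era noted above. Class and method names are an assumption and may differ in other versions; the model directory matches the quant script's output folder.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at the quantized weights produced by the script above.
config = ExLlamaV2Config()
config.model_dir = "models/mixtral-8x22b-instruct-oh_exl2_5.5bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # lazy cache lets load_autosplit size it per GPU
model.load_autosplit(cache)                # split layers across the available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
print(generator.generate_simple("Write a haiku about quantization.", settings, 64))
```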
{"language": ["en"], "license": "apache-2.0", "tags": ["exl2"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "mistral-community/Mixtral-8x22B-v0.1"}
Dracones/mixtral-8x22b-instruct-oh_exl2_5.5bpw
null
[ "transformers", "safetensors", "mixtral", "text-generation", "exl2", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T22:55:29+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mixtral #text-generation #exl2 #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# mixtral-8x22b-instruct-oh - EXL2 5.5bpw This is a 5.5bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh Details about the model can be found at the above model page. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. _TODO_ ### Perplexity Script This was the script used for perplexity testing. ## Quant Details This is the script used for quantization.
[ "# mixtral-8x22b-instruct-oh - EXL2 5.5bpw\n\nThis is a 5.5bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh\n\nDetails about the model can be found at the above model page.", "## EXL2 Version\n\nThese quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.\n\nIf you have problems loading these models, please update Text Generation WebUI to the latest version.", "## Perplexity Scoring\n\nBelow are the perplexity scores for the EXL2 models. A lower score is better. \n\n_TODO_", "### Perplexity Script\n\nThis was the script used for perplexity testing.", "## Quant Details\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #exl2 #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# mixtral-8x22b-instruct-oh - EXL2 5.5bpw\n\nThis is a 5.5bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh\n\nDetails about the model can be found at the above model page.", "## EXL2 Version\n\nThese quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.\n\nIf you have problems loading these models, please update Text Generation WebUI to the latest version.", "## Perplexity Scoring\n\nBelow are the perplexity scores for the EXL2 models. A lower score is better. \n\n_TODO_", "### Perplexity Script\n\nThis was the script used for perplexity testing.", "## Quant Details\n\nThis is the script used for quantization." ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-2024-03-15 This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-14](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-14) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5180 - Rouge1: 42.6379 - Rouge2: 30.7892 - Rougel: 39.2984 - Rougelsum: 39.671 - Gen Len: 18.9765 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.787 | 1.0 | 337 | 1.5180 | 42.6379 | 30.7892 | 39.2984 | 39.671 | 18.9765 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
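The hyperparameter lists in these cards are auto-generated from the trainer configuration; a hedged sketch of how they map onto transformers' `Seq2SeqTrainingArguments` (dataset loading and trainer wiring are omitted, and `fp16=True` stands in for "Native AMP"):

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the card: lr 4e-05, batch size 1, seed 42, linear schedule, 1 epoch, AMP.
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default.
args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-2024-03-15",
    learning_rate=4e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,                   # "mixed_precision_training: Native AMP"
    predict_with_generate=True,  # so ROUGE is computed on generated summaries
)
```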
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "liamvbetts/t5-small-finetuned-2024-03-14", "model-index": [{"name": "t5-small-finetuned-2024-03-15", "results": []}]}
liamvbetts/t5-small-finetuned-2024-03-15
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:liamvbetts/t5-small-finetuned-2024-03-14", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T22:57:58+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-14 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-2024-03-15 ============================= This model is a fine-tuned version of liamvbetts/t5-small-finetuned-2024-03-14 on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.5180 * Rouge1: 42.6379 * Rouge2: 30.7892 * Rougel: 39.2984 * Rougelsum: 39.671 * Gen Len: 18.9765 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 4e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-14 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-2024-03-16 This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-15](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-15) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9081 - Rouge1: 32.2806 - Rouge2: 18.3465 - Rougel: 27.9985 - Rougelsum: 28.4829 - Gen Len: 18.6506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.9126 | 1.0 | 330 | 1.9081 | 32.2806 | 18.3465 | 27.9985 | 28.4829 | 18.6506 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "liamvbetts/t5-small-finetuned-2024-03-15", "model-index": [{"name": "t5-small-finetuned-2024-03-16", "results": []}]}
liamvbetts/t5-small-finetuned-2024-03-16
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:liamvbetts/t5-small-finetuned-2024-03-15", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T23:00:27+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-15 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-2024-03-16 ============================= This model is a fine-tuned version of liamvbetts/t5-small-finetuned-2024-03-15 on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.9081 * Rouge1: 32.2806 * Rouge2: 18.3465 * Rougel: 27.9985 * Rougelsum: 28.4829 * Gen Len: 18.6506 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 4e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-15 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
Like the name says: command-r-plus 6.0bpw h8 exl2. I have some trouble getting it to break free from its rigid pre-trained JSON/function-calling format. YMMV.
{"license": "apache-2.0"}
bdambrosio/command-r-plus-6.0bpw-h8-exl2
null
[ "transformers", "safetensors", "cohere", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "6-bit", "region:us" ]
null
2024-04-14T23:01:20+00:00
[]
[]
TAGS #transformers #safetensors #cohere #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us
Like the name says: command-r-plus 6.0bpw h8 exl2. I have some trouble getting it to break free from its rigid pre-trained JSON/function-calling format. YMMV.
[]
[ "TAGS\n#transformers #safetensors #cohere #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us \n" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-2024-03-17 This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-16](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-16) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6870 - Rouge1: 36.9896 - Rouge2: 24.6597 - Rougel: 32.6752 - Rougelsum: 32.6582 - Gen Len: 18.8143 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.9296 | 1.0 | 276 | 1.6870 | 36.9896 | 24.6597 | 32.6752 | 32.6582 | 18.8143 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
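Each of these cards reports Rouge1/Rouge2/RougeL/RougeLsum; a minimal sketch of computing the same metrics with the `evaluate` library (the prediction and reference strings are placeholders):

```python
import evaluate

rouge = evaluate.load("rouge")

predictions = ["the cat sat on the mat"]          # placeholder model outputs
references = ["the cat was sitting on the mat"]   # placeholder gold summaries

# Returns rouge1 / rouge2 / rougeL / rougeLsum as scores in [0, 1];
# the cards report them scaled to percentages.
print(rouge.compute(predictions=predictions, references=references))
```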
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "liamvbetts/t5-small-finetuned-2024-03-16", "model-index": [{"name": "t5-small-finetuned-2024-03-17", "results": []}]}
liamvbetts/t5-small-finetuned-2024-03-17
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:liamvbetts/t5-small-finetuned-2024-03-16", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T23:02:49+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-16 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-2024-03-17 ============================= This model is a fine-tuned version of liamvbetts/t5-small-finetuned-2024-03-16 on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.6870 * Rouge1: 36.9896 * Rouge2: 24.6597 * Rougel: 32.6752 * Rougelsum: 32.6582 * Gen Len: 18.8143 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 4e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-16 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/uncensorie/stairolz-70b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/stairolz-70b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/stairolz-70b-GGUF/resolve/main/stairolz-70b.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
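The usage note above points to TheBloke's READMEs for concatenating multi-part files; below is a minimal Python sketch that joins the split Q6_K quant from the table and loads it with llama-cpp-python. The `Llama` call is an assumption about that library's API, and the context size is arbitrary.

```python
import shutil
from llama_cpp import Llama

# Join the two Q6_K parts listed in the table into a single GGUF file.
parts = ["stairolz-70b.Q6_K.gguf.part1of2", "stairolz-70b.Q6_K.gguf.part2of2"]
with open("stairolz-70b.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)

llm = Llama(model_path="stairolz-70b.Q6_K.gguf", n_ctx=4096)
print(llm("Q: What is a GGUF quant? A:", max_tokens=64)["choices"][0]["text"])
```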
{"language": ["en"], "license": "llama2", "library_name": "transformers", "base_model": "uncensorie/stairolz-70b", "quantized_by": "mradermacher"}
mradermacher/stairolz-70b-GGUF
null
[ "transformers", "gguf", "en", "base_model:uncensorie/stairolz-70b", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-14T23:03:23+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-uncensorie/stairolz-70b #license-llama2 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-uncensorie/stairolz-70b #license-llama2 #endpoints_compatible #region-us \n" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-2024-03-18 This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-17](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-17) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3412 - Rouge1: 42.2216 - Rouge2: 30.5944 - Rougel: 39.273 - Rougelsum: 39.1457 - Gen Len: 18.7386 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:| | 1.8771 | 1.0 | 349 | 1.3412 | 42.2216 | 30.5944 | 39.273 | 39.1457 | 18.7386 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "liamvbetts/t5-small-finetuned-2024-03-17", "model-index": [{"name": "t5-small-finetuned-2024-03-18", "results": []}]}
liamvbetts/t5-small-finetuned-2024-03-18
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:liamvbetts/t5-small-finetuned-2024-03-17", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T23:06:29+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-17 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-2024-03-18 ============================= This model is a fine-tuned version of liamvbetts/t5-small-finetuned-2024-03-17 on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.3412 * Rouge1: 42.2216 * Rouge2: 30.5944 * Rougel: 39.273 * Rougelsum: 39.1457 * Gen Len: 18.7386 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 4e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-17 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ajibawa-2023/scarlett-33b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/scarlett-33b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q2_K.gguf) | Q2_K | 12.1 | | | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.IQ3_XS.gguf) | IQ3_XS | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q3_K_S.gguf) | Q3_K_S | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.IQ3_M.gguf) | IQ3_M | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q3_K_L.gguf) | Q3_K_L | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.IQ4_XS.gguf) | IQ4_XS | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q5_K_S.gguf) | Q5_K_S | 22.5 | | | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q5_K_M.gguf) | Q5_K_M | 23.1 | | | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q6_K.gguf) | Q6_K | 26.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
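These cards link each quant file directly; a minimal sketch of fetching a single quant from the Hub with `huggingface_hub` instead of cloning the whole repository (the filename is taken from the Q4_K_M row in the table above):

```python
from huggingface_hub import hf_hub_download

# Download just the recommended Q4_K_M quant (~19.7 GB) rather than the full repo.
path = hf_hub_download(
    repo_id="mradermacher/scarlett-33b-GGUF",
    filename="scarlett-33b.Q4_K_M.gguf",
)
print(path)  # local cache path, ready to pass to a GGUF runtime such as llama.cpp
```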
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "library_name": "transformers", "base_model": "ajibawa-2023/scarlett-33b", "quantized_by": "mradermacher"}
mradermacher/scarlett-33b-GGUF
null
[ "transformers", "gguf", "en", "base_model:ajibawa-2023/scarlett-33b", "license:cc-by-nc-nd-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-14T23:09:06+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-ajibawa-2023/scarlett-33b #license-cc-by-nc-nd-4.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-ajibawa-2023/scarlett-33b #license-cc-by-nc-nd-4.0 #endpoints_compatible #region-us \n" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-2024-03-19 This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-18](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-18) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7824 - Rouge1: 34.996 - Rouge2: 23.0601 - Rougel: 32.7854 - Rougelsum: 33.1113 - Gen Len: 18.6667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:| | 1.7163 | 1.0 | 347 | 1.7824 | 34.996 | 23.0601 | 32.7854 | 33.1113 | 18.6667 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
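As a sketch, the hyperparameters listed above map onto the `transformers` Trainer API roughly as follows; only the values come from the card, the output directory name is a hypothetical placeholder, and the Adam betas/epsilon match the Trainer defaults:

```python
# Sketch: the card's hyperparameters expressed as Seq2SeqTrainingArguments.
# Only the values are from the card; output_dir is a hypothetical placeholder.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are already the Trainer defaults.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-2024-03-19",
    learning_rate=4e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed-precision training
)
```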
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "liamvbetts/t5-small-finetuned-2024-03-18", "model-index": [{"name": "t5-small-finetuned-2024-03-19", "results": []}]}
liamvbetts/t5-small-finetuned-2024-03-19
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:liamvbetts/t5-small-finetuned-2024-03-18", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T23:09:21+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-18 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-2024-03-19 ============================= This model is a fine-tuned version of liamvbetts/t5-small-finetuned-2024-03-18 on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.7824 * Rouge1: 34.996 * Rouge2: 23.0601 * Rougel: 32.7854 * Rougelsum: 33.1113 * Gen Len: 18.6667 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 4e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-18 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# aranea-tenebris-120b-v1.0
**aka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B**
Model merge for uncensored creative writing and rp

![image/png](https://huggingface.co/divinetaco/aranea-tenebris-120b-v1.0/resolve/main/aranea-tenebris.png)

A [mergekit](https://github.com/arcee-ai/mergekit) frankenmerge based on [Netrve/Miqu-PlayMaid-70B-v0.1](https://huggingface.co/Netrve/Miqu-PlayMaid-70B-v0.1) with interleaved layers of [ShinojiResearch/Senku-70B](https://huggingface.co/ShinojiResearch/Senku-70B).
This was the top-performing model from a second series of merge experiments to create a highly coherent creative writing and rp model.
Tests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges.

A number of different base models, interleave models and layer offsets were compared.
This model outperformed a number of other popular 70B+ models and merges in both creativity and coherency tests. It was (briefly) compared to Mixtral 8x22B running 2/3/4 experts.

- Usable context: ~32768
- Recommended prompt format: Alpaca
- Layers: 137

### Quantization

llama.cpp [imatrix.dat](./imatrix.dat)
exllamav2 [measurement.json](./measurement.json)

Will upload a few quants when bandwidth permits.

### Testing

Two different writing styles were considered for each testing scenario:
- Completions for 3rd person narration. No character role was assumed.
- Completions for 1st and 2nd person turn-based (out-of-order) rp. A character role was assumed by the model, but narration of minor characters and events was encouraged.

Tests assumed a mature audience, but a range of scenarios were constructed.
Thematic inconsistency or bias in character behaviour was penalized heavily.

Models showing the following were penalized during manual comparison:
- Consistently short responses.
- Laziness, or readily giving up on solving a character problem.
- Overly malleable, where characters could not hold opinions or beliefs.
- Passiveness or an inability to drive the narrative.
- Persistent repeats. Bad merges tend to latch onto and reuse specific keywords.
- Ignoring or missing obvious scenario solutions.
- Impersonating other major characters out of turn during rp tests.
- Failure to follow a character's description. This criterion is pretty broad, and could include things like character skills, refusals etc.
- Major inconsistencies in scenes or recall. Note - invention of thematically consistent detail was encouraged.

### Interesting observations from benchmarking

- 10 layer interleave stride with a 20 layer interleave width consistently outperformed alternative combinations for coherency.
- 8 layer interleave stride with a 16 layer interleave width consistently outperformed alternative combinations for creativity whilst remaining reasonably coherent.
- Regular stride intervals are not optimal. In particular, offsetting the first or last set of base model layers often improved metrics.
- Goliath-120B is still a good standard for coherency below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount of coherency could be sacrificed for notable creativity improvements.
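As a toy illustration of the interleaving idea described above (not this model's actual recipe — the stride, width, and offsets here are illustrative assumptions):

```python
# Toy sketch of interleave merging (NOT this model's actual recipe): base
# slices advance by `width`; a slice from the interleave model, offset by
# `stride`, is inserted between consecutive base slices.
def interleave_plan(n_layers: int, stride: int, width: int):
    plan = []
    start = 0
    while start < n_layers:
        end = min(start + width, n_layers)
        plan.append(("base", start, end))
        if end < n_layers:  # insert an offset slice from the second model
            plan.append(("interleave", start + stride,
                         min(start + stride + width, n_layers)))
        start += width
    return plan

plan = interleave_plan(n_layers=80, stride=10, width=20)
print(sum(e - s for _, s, e in plan), "layers")  # 140 with regular strides
```

With perfectly regular strides over two 80-layer donors this yields 140 layers; the 137-layer figure above implies the actual plan deviates from a regular pattern, consistent with the observation that irregular end offsets improved metrics.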
{"license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["not-for-all-audiences", "nsfw", "mergekit", "merge"], "base_model": ["Netrve/Miqu-PlayMaid-70B-v0.1", "ShinojiResearch/Senku-70B"]}
divinetaco/aranea-tenebris-120b-v1.0
null
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "nsfw", "mergekit", "merge", "conversational", "base_model:Netrve/Miqu-PlayMaid-70B-v0.1", "base_model:ShinojiResearch/Senku-70B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T23:09:25+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #mergekit #merge #conversational #base_model-Netrve/Miqu-PlayMaid-70B-v0.1 #base_model-ShinojiResearch/Senku-70B #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# aranea-tenebris-120b-v1.0
aka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B
Model merge for uncensored creative writing and rp

!image/png

A mergekit frankenmerge based on Netrve/Miqu-PlayMaid-70B-v0.1 with interleaved layers of ShinojiResearch/Senku-70B.
This was the top-performing model from a second series of merge experiments to create a highly coherent creative writing and rp model.
Tests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges.

A number of different base models, interleave models and layer offsets were compared.
This model outperformed a number of other popular 70B+ models and merges in both creativity and coherency tests. It was (briefly) compared to Mixtral 8x22B running 2/3/4 experts.

- Usable context: ~32768
- Recommended prompt format: Alpaca
- Layers: 137

### Quantization

URL URL
exllamav2 URL

Will upload a few quants when bandwidth permits.

### Testing

Two different writing styles were considered for each testing scenario:
- Completions for 3rd person narration. No character role was assumed.
- Completions for 1st and 2nd person turn-based (out-of-order) rp. A character role was assumed by the model, but narration of minor characters and events was encouraged.

Tests assumed a mature audience, but a range of scenarios were constructed.
Thematic inconsistency or bias in character behaviour was penalized heavily.

Models showing the following were penalized during manual comparison:
- Consistently short responses.
- Laziness, or readily giving up on solving a character problem.
- Overly malleable, where characters could not hold opinions or beliefs.
- Passiveness or an inability to drive the narrative.
- Persistent repeats. Bad merges tend to latch onto and reuse specific keywords.
- Ignoring or missing obvious scenario solutions.
- Impersonating other major characters out of turn during rp tests.
- Failure to follow a character's description. This criterion is pretty broad, and could include things like character skills, refusals etc.
- Major inconsistencies in scenes or recall. Note - invention of thematically consistent detail was encouraged.

### Interesting observations from benchmarking

- 10 layer interleave stride with a 20 layer interleave width consistently outperformed alternative combinations for coherency.
- 8 layer interleave stride with a 16 layer interleave width consistently outperformed alternative combinations for creativity whilst remaining reasonably coherent.
- Regular stride intervals are not optimal. In particular, offsetting the first or last set of base model layers often improved metrics.
- Goliath-120B is still a good standard for coherency below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount of coherency could be sacrificed for notable creativity improvements.
[ "# aranea-tenebris-120b-v1.0\naka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B \nModel merge for uncensored creative writing and rp \n\n!image/png\n\nA mergekit frankenmerge based on Netrve/Miqu-PlayMaid-70B-v0.1 with interleaved layers of ShinojiResearch/Senku-70B. \nThis was the top performing model from a second series of merge experiments to create a highly coherant creative writing and rp model. \nTests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges. \n \nA number of different base models, interleave models and layer offsets were compared. \nThis model outperformed a number of other popular 70B+ models and merges in both creativity and coherancy tests. It was (briefly) compared to Mixtral 8x22B running 2/3/4 experts. \n\n- Usable context: ~32768\n- Recommended prompt format: Alpaca\n- Layers: 137", "### Quantization\n \nURL URL \nexllamav2 URL \n\nWill upload a few quants when bandwidth permits.", "### Testing\n\nTwo different writing styles were considered for each testing scenario:\n- Completions for 3rd person narration. No character role was assumed.\n- Completions for 1st and 2nd person turn based (out-of-order) rp. A character role was assumed by the model, but narration of minor characters and events was encouraged.\n\nTests assumed a mature audience, but a range of scenarios were constructed. \nThematic inconsistancy or bias in character behaviour was penalized heavily. \n\nModels showing the following were penalized during manual comparison:\n- Consistently short responses.\n- Laziness or readily gave up on solving a character problem.\n- Overly malleable, where characters could not hold opinions or beliefs.\n- Passiveness or an inability to drive the narrative.\n- Persistent repeats. Bad merges tend to latch onto and reuse specific keywords.\n- Ignoring or missing obvious scenario solutions.\n- Impersonating other major characters out of turn during rp tests.\n- Faliure to follow a character's description. This criteria is pretty broad, and could include things like character skills, refusals etc.\n- Major inconsistencies in scenes or recall. Note - invention of thematically consistant detail was encouraged.", "### Interesting observations from benchmarking\n\n- 10 layer interleave stride with a 20 layer interleave width consistently outperformed alternative combinations for coherancy.\n- 8 layer interleave stride with a 16 layer interleave width consistantly outperformed alternative combinations for creativity whilst remaining reasonably coherant.\n- Regular stride intervals are not optimal. In particular offsetting the first or last set of base models offets often improved metrics.\n- Goliath-120B is still a good standard for coherancy below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount coherancy could be sacrificed for notable creativity improvements." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #mergekit #merge #conversational #base_model-Netrve/Miqu-PlayMaid-70B-v0.1 #base_model-ShinojiResearch/Senku-70B #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# aranea-tenebris-120b-v1.0\naka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B \nModel merge for uncensored creative writing and rp \n\n!image/png\n\nA mergekit frankenmerge based on Netrve/Miqu-PlayMaid-70B-v0.1 with interleaved layers of ShinojiResearch/Senku-70B. \nThis was the top performing model from a second series of merge experiments to create a highly coherant creative writing and rp model. \nTests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges. \n \nA number of different base models, interleave models and layer offsets were compared. \nThis model outperformed a number of other popular 70B+ models and merges in both creativity and coherancy tests. It was (briefly) compared to Mixtral 8x22B running 2/3/4 experts. \n\n- Usable context: ~32768\n- Recommended prompt format: Alpaca\n- Layers: 137", "### Quantization\n \nURL URL \nexllamav2 URL \n\nWill upload a few quants when bandwidth permits.", "### Testing\n\nTwo different writing styles were considered for each testing scenario:\n- Completions for 3rd person narration. No character role was assumed.\n- Completions for 1st and 2nd person turn based (out-of-order) rp. A character role was assumed by the model, but narration of minor characters and events was encouraged.\n\nTests assumed a mature audience, but a range of scenarios were constructed. \nThematic inconsistancy or bias in character behaviour was penalized heavily. \n\nModels showing the following were penalized during manual comparison:\n- Consistently short responses.\n- Laziness or readily gave up on solving a character problem.\n- Overly malleable, where characters could not hold opinions or beliefs.\n- Passiveness or an inability to drive the narrative.\n- Persistent repeats. Bad merges tend to latch onto and reuse specific keywords.\n- Ignoring or missing obvious scenario solutions.\n- Impersonating other major characters out of turn during rp tests.\n- Faliure to follow a character's description. This criteria is pretty broad, and could include things like character skills, refusals etc.\n- Major inconsistencies in scenes or recall. Note - invention of thematically consistant detail was encouraged.", "### Interesting observations from benchmarking\n\n- 10 layer interleave stride with a 20 layer interleave width consistently outperformed alternative combinations for coherancy.\n- 8 layer interleave stride with a 16 layer interleave width consistantly outperformed alternative combinations for creativity whilst remaining reasonably coherant.\n- Regular stride intervals are not optimal. In particular offsetting the first or last set of base models offets often improved metrics.\n- Goliath-120B is still a good standard for coherancy below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount coherancy could be sacrificed for notable creativity improvements." ]
text-generation
transformers
# mixtral-8x22b-instruct-oh - EXL2 5.0bpw This is a 5.0bpw EXL2 quant of [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh) Details about the model can be found at the above model page. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. _TODO_ ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="mixtral-8x22b-instruct-oh" BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" if [ -d "$MODEL_DIR" ]; then output=$(python test_inference.py -m "$MODEL_DIR" -gs 44,48 -ed data/wikitext/wikitext-2-v1.parquet) score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" fi done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="mixtral-8x22b-instruct-oh" # Define variables MODEL_DIR="models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(2.25) BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
{"language": ["en"], "license": "apache-2.0", "tags": ["exl2"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "mistral-community/Mixtral-8x22B-v0.1"}
Dracones/mixtral-8x22b-instruct-oh_exl2_5.0bpw
null
[ "transformers", "safetensors", "mixtral", "text-generation", "exl2", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "5-bit", "region:us" ]
null
2024-04-14T23:10:45+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mixtral #text-generation #exl2 #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us
# mixtral-8x22b-instruct-oh - EXL2 5.0bpw This is a 5.0bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh Details about the model can be found at the above model page. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. _TODO_ ### Perplexity Script This was the script used for perplexity testing. ## Quant Details This is the script used for quantization.
[ "# mixtral-8x22b-instruct-oh - EXL2 5.0bpw\n\nThis is a 5.0bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh\n\nDetails about the model can be found at the above model page.", "## EXL2 Version\n\nThese quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.\n\nIf you have problems loading these models, please update Text Generation WebUI to the latest version.", "## Perplexity Scoring\n\nBelow are the perplexity scores for the EXL2 models. A lower score is better. \n\n_TODO_", "### Perplexity Script\n\nThis was the script used for perplexity testing.", "## Quant Details\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #exl2 #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us \n", "# mixtral-8x22b-instruct-oh - EXL2 5.0bpw\n\nThis is a 5.0bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh\n\nDetails about the model can be found at the above model page.", "## EXL2 Version\n\nThese quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.\n\nIf you have problems loading these models, please update Text Generation WebUI to the latest version.", "## Perplexity Scoring\n\nBelow are the perplexity scores for the EXL2 models. A lower score is better. \n\n_TODO_", "### Perplexity Script\n\nThis was the script used for perplexity testing.", "## Quant Details\n\nThis is the script used for quantization." ]
text-classification
transformers
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.9300565719604492 f1_macro: 0.8222222222222223 f1_micro: 0.8333333333333334 f1_weighted: 0.8222222222222223 precision_macro: 0.8888888888888888 precision_micro: 0.8333333333333334 precision_weighted: 0.8888888888888888 recall_macro: 0.8333333333333334 recall_micro: 0.8333333333333334 recall_weighted: 0.8333333333333334 accuracy: 0.8333333333333334
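For reference, the macro/micro/weighted averages reported above correspond to scikit-learn's averaging modes; a sketch on made-up toy labels (not the model's actual data):

```python
# Sketch: the averaging modes behind the metrics above, shown with
# scikit-learn on made-up toy labels (not the model's actual data).
from sklearn.metrics import balanced_accuracy_score, f1_score

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]

print(f1_score(y_true, y_pred, average="macro"))     # unweighted mean per class
print(f1_score(y_true, y_pred, average="micro"))     # pooled TP/FP/FN counts
print(f1_score(y_true, y_pred, average="weighted"))  # weighted by class support
print(balanced_accuracy_score(y_true, y_pred))       # mean per-class recall
```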
{"tags": ["autotrain", "text-classification"], "datasets": ["autotrain-4hkjv-ypex0/autotrain-data"], "widget": [{"text": "I love AutoTrain"}]}
Aveo/autotrain-4hkjv-ypex0
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain", "dataset:autotrain-4hkjv-ypex0/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T23:13:15+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #text-classification #autotrain #dataset-autotrain-4hkjv-ypex0/autotrain-data #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.9300565719604492 f1_macro: 0.8222222222222223 f1_micro: 0.8333333333333334 f1_weighted: 0.8222222222222223 precision_macro: 0.8888888888888888 precision_micro: 0.8333333333333334 precision_weighted: 0.8888888888888888 recall_macro: 0.8333333333333334 recall_micro: 0.8333333333333334 recall_weighted: 0.8333333333333334 accuracy: 0.8333333333333334
[ "# Model Trained Using AutoTrain\n\n- Problem type: Text Classification", "## Validation Metrics\nloss: 0.9300565719604492\n\nf1_macro: 0.8222222222222223\n\nf1_micro: 0.8333333333333334\n\nf1_weighted: 0.8222222222222223\n\nprecision_macro: 0.8888888888888888\n\nprecision_micro: 0.8333333333333334\n\nprecision_weighted: 0.8888888888888888\n\nrecall_macro: 0.8333333333333334\n\nrecall_micro: 0.8333333333333334\n\nrecall_weighted: 0.8333333333333334\n\naccuracy: 0.8333333333333334" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #autotrain #dataset-autotrain-4hkjv-ypex0/autotrain-data #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\n- Problem type: Text Classification", "## Validation Metrics\nloss: 0.9300565719604492\n\nf1_macro: 0.8222222222222223\n\nf1_micro: 0.8333333333333334\n\nf1_weighted: 0.8222222222222223\n\nprecision_macro: 0.8888888888888888\n\nprecision_micro: 0.8333333333333334\n\nprecision_weighted: 0.8888888888888888\n\nrecall_macro: 0.8333333333333334\n\nrecall_micro: 0.8333333333333334\n\nrecall_weighted: 0.8333333333333334\n\naccuracy: 0.8333333333333334" ]
reinforcement-learning
sample-factory
An **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r hui168/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment (for VizDoom this is the `sf_examples.vizdoom` enjoy script):
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
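As an alternative sketch to the CLI download above, the same checkpoint can be fetched programmatically with `huggingface_hub` (the local directory name is an assumption):

```python
# Sketch: fetch the checkpoint with huggingface_hub instead of the
# sample-factory CLI. The local directory name is a hypothetical choice.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="hui168/rl_course_vizdoom_health_gathering_supreme",
    local_dir="train_dir/rl_course_vizdoom_health_gathering_supreme",
)
print("checkpoint files in:", local_dir)
```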
{"library_name": "sample-factory", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "sample-factory"], "model-index": [{"name": "APPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "doom_health_gathering_supreme", "type": "doom_health_gathering_supreme"}, "metrics": [{"type": "mean_reward", "value": "11.34 +/- 5.39", "name": "mean_reward", "verified": false}]}]}]}
hui168/rl_course_vizdoom_health_gathering_supreme
null
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-14T23:13:29+00:00
[]
[]
TAGS #sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
An APPO model trained on the doom_health_gathering_supreme environment.

This model was trained using Sample-Factory 2.0: URL

Documentation for how to use Sample-Factory can be found at URL

## Downloading the model

After installing Sample-Factory, download the model with:

## Using the model

To run the model after download, use the 'enjoy' script corresponding to this environment:

You can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.
See URL for more details

## Training with this model

To continue training with this model, use the 'train' script corresponding to this environment:

Note, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at.
[ "## Downloading the model\n\nAfter installing Sample-Factory, download the model with:", "## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details", "## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at." ]
[ "TAGS\n#sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "## Downloading the model\n\nAfter installing Sample-Factory, download the model with:", "## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details", "## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at." ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Kukedlc/NeuralStockFusion-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "Kukedlc/NeuralStockFusion-7b", "quantized_by": "mradermacher"}
mradermacher/NeuralStockFusion-7b-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Kukedlc/NeuralStockFusion-7b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-14T23:15:19+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mergekit #merge #en #base_model-Kukedlc/NeuralStockFusion-7b #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #mergekit #merge #en #base_model-Kukedlc/NeuralStockFusion-7b #license-apache-2.0 #endpoints_compatible #region-us \n" ]
text-classification
transformers
# Model Summary

This is a fact-checking model from our work:

📃 [**MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents**](https://arxiv.org/pdf/2404.10774.pdf) ([GitHub Repo](https://github.com/Liyan06/MiniCheck))

The model is based on Flan-T5-Large and predicts a binary label - 1 for supported and 0 for unsupported.
The model makes predictions at the *sentence level*. It takes as input a document and a sentence and determines
whether the sentence is supported by the document: **MiniCheck-Model(document, claim) -> {0, 1}**

MiniCheck-Flan-T5-Large is fine-tuned from `google/flan-t5-large` ([Chung et al., 2022](https://arxiv.org/pdf/2210.11416.pdf))
on the combination of 35K data:
- 21K ANLI data ([Nie et al., 2020](https://aclanthology.org/2020.acl-main.441.pdf))
- 14K synthetic data generated from scratch in a structured way (more details in the paper).

### Model Variants
We also have two other MiniCheck model variants:
- [lytang/MiniCheck-RoBERTa-Large](https://huggingface.co/lytang/MiniCheck-RoBERTa-Large)
- [lytang/MiniCheck-DeBERTa-v3-Large](https://huggingface.co/lytang/MiniCheck-DeBERTa-v3-Large)

### Model Performance

<p align="center">
    <img src="./cost-vs-bacc.png" width="360">
</p>

The performance of these models is evaluated on our newly collected benchmark (unseen by our models during training), [LLM-AggreFact](https://huggingface.co/datasets/lytang/LLM-AggreFact), from 10 recent human-annotated datasets on fact-checking and grounding LLM generations. Our most capable model MiniCheck-Flan-T5-Large outperforms all existing specialized fact-checkers of a similar scale by a large margin (4-10% absolute increase) and is on par with GPT-4, but 400x cheaper. See full results in our work.

Note: We only evaluated the performance of our models on real claims -- without any human intervention in any format, such as injecting certain error types into model-generated claims. Those edited claims do not reflect LLMs' actual behaviors.

# Model Usage Demo

Please first clone our [GitHub Repo](https://github.com/Liyan06/MiniCheck) and install the necessary packages from `requirements.txt`.

### Below is a simple use case

```python
from minicheck.minicheck import MiniCheck

doc = "A group of students gather in the school library to study for their upcoming final exams."
claim_1 = "The students are preparing for an examination."
claim_2 = "The students are on vacation."
# model_name can be one of ['roberta-large', 'deberta-v3-large', 'flan-t5-large']
scorer = MiniCheck(model_name='flan-t5-large', device=f'cuda:0', cache_dir='./ckpts')
pred_label, raw_prob, _, _ = scorer.score(docs=[doc, doc], claims=[claim_1, claim_2])

print(pred_label) # [1, 0]
print(raw_prob)   # [0.9805923700332642, 0.007121307775378227]
```

### Test on our [LLM-AggreFact](https://huggingface.co/datasets/lytang/LLM-AggreFact) Benchmark

```python
import pandas as pd
from datasets import load_dataset
from minicheck.minicheck import MiniCheck

# load 13K test data
df = pd.DataFrame(load_dataset("lytang/LLM-AggreFact")['test'])
docs = df.doc.values
claims = df.claim.values

scorer = MiniCheck(model_name='flan-t5-large', device=f'cuda:0', cache_dir='./ckpts')
pred_label, raw_prob, _, _ = scorer.score(docs=docs, claims=claims)  # ~ 20 mins, depending on hardware
```

To evaluate the result on the benchmark

```python
from sklearn.metrics import balanced_accuracy_score

df['preds'] = pred_label
result_df = pd.DataFrame(columns=['Dataset', 'BAcc'])

for dataset in df.dataset.unique():
    sub_df = df[df.dataset == dataset]
    bacc = balanced_accuracy_score(sub_df.label, sub_df.preds) * 100
    result_df.loc[len(result_df)] = [dataset, bacc]

result_df.loc[len(result_df)] = ['Average', result_df.BAcc.mean()]
result_df.round(1)
```

# Citation

```
@misc{tang2024minicheck,
      title={MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents},
      author={Liyan Tang and Philippe Laban and Greg Durrett},
      year={2024},
      eprint={2404.10774},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
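One way to extend the sentence-level scoring shown above to multi-sentence claims is to score each sentence separately and require every sentence to be supported; a sketch (the naive period-based splitter and the 0.5 threshold are assumptions, not part of the released API):

```python
# Sketch: aggregate sentence-level MiniCheck scores into a claim-level
# verdict. The period-based splitter and 0.5 threshold are naive assumptions.
from minicheck.minicheck import MiniCheck

doc = "A group of students gather in the school library to study for their upcoming final exams."
claim = "The students are preparing for an examination. They meet in the library."

sents = [s.strip() + "." for s in claim.split(".") if s.strip()]
scorer = MiniCheck(model_name='flan-t5-large', device='cuda:0', cache_dir='./ckpts')
_, probs, _, _ = scorer.score(docs=[doc] * len(sents), claims=sents)

print(probs, "supported" if min(probs) > 0.5 else "unsupported")
```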
{"language": ["en"], "pipeline_tag": "text-classification"}
lytang/MiniCheck-Flan-T5-Large
null
[ "transformers", "pytorch", "t5", "text2text-generation", "text-classification", "en", "arxiv:2404.10774", "arxiv:2210.11416", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T23:16:22+00:00
[ "2404.10774", "2210.11416" ]
[ "en" ]
TAGS #transformers #pytorch #t5 #text2text-generation #text-classification #en #arxiv-2404.10774 #arxiv-2210.11416 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Summary

This is a fact-checking model from our work:

 MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents (GitHub Repo)

The model is based on Flan-T5-Large and predicts a binary label - 1 for supported and 0 for unsupported.
The model makes predictions at the *sentence level*. It takes as input a document and a sentence and determines
whether the sentence is supported by the document: MiniCheck-Model(document, claim) -> {0, 1}

MiniCheck-Flan-T5-Large is fine-tuned from 'google/flan-t5-large' (Chung et al., 2022)
on the combination of 35K data:
- 21K ANLI data (Nie et al., 2020)
- 14K synthetic data generated from scratch in a structured way (more details in the paper).

### Model Variants
We also have two other MiniCheck model variants:
- lytang/MiniCheck-RoBERTa-Large
- lytang/MiniCheck-DeBERTa-v3-Large

### Model Performance

<p align="center">
 <img src="./URL" width="360">
</p>

The performance of these models is evaluated on our newly collected benchmark (unseen by our models during training), LLM-AggreFact,
from 10 recent human-annotated datasets on fact-checking and grounding LLM generations. Our most capable model MiniCheck-Flan-T5-Large outperforms all
existing specialized fact-checkers of a similar scale by a large margin (4-10% absolute increase) and is on par with GPT-4, but 400x cheaper. See full results in our work.

Note: We only evaluated the performance of our models on real claims -- without any human intervention in
any format, such as injecting certain error types into model-generated claims. Those edited claims do not reflect
LLMs' actual behaviors.

# Model Usage Demo

Please first clone our GitHub Repo and install the necessary packages from 'URL'.

### Below is a simple use case

### Test on our LLM-AggreFact Benchmark

To evaluate the result on the benchmark
[ "# Model Summary\n\nThis is a fact-checking model from our work:\n\n MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents (GitHub Repo)\n\nThe model is based on Flan-T5-Large that predicts a binary label - 1 for supported and 0 for unsupported. \nThe model is doing predictions on the *sentence-level*. It takes as input a document and a sentence and determine \nwhether the sentence is supported by the document: MiniCheck-Model(document, claim) -> {0, 1}\n\n\nMiniCheck-Flan-T5-Large is fine tuned from 'google/flan-t5-large' (Chung et al., 2022) \non the combination of 35K data:\n- 21K ANLI data (Nie et al., 2020)\n- 14K synthetic data generated from scratch in a structed way (more details in the paper).", "### Model Variants\nWe also have other two MiniCheck model variants:\n- lytang/MiniCheck-RoBERTa-Large\n- lytang/MiniCheck-DeBERTa-v3-Large", "### Model Performance\n\n<p align=\"center\">\n <img src=\"./URL\" width=\"360\">\n</p>\n\n\nThe performance of these models is evaluated on our new collected benchmark (unseen by our models during training), LLM-AggreFact, \nfrom 10 recent human annotated datasets on fact-checking and grounding LLM generations. Our most capable model MiniCheck-Flan-T5-Large outperform all\nexisiting specialized fact-checkers with a similar scale by a large margin (4-10% absolute increase) and is on par with GPT-4, but 400x cheaper. See full results in our work.\n\nNote: We only evaluated the performance of our models on real claims -- without any human intervention in \nany format, such as injecting certain error types into model-generated claims. Those edited claims do not reflect\nLLMs' actual behaviors.", "# Model Usage Demo\n\nPlease first clone our GitHub Repo and install necessary packages from 'URL'.", "### Below is a simple use case", "### Test on our LLM-AggreFact Benchmark\n\n\n\nTo evalaute the result on the benchmark" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #text-classification #en #arxiv-2404.10774 #arxiv-2210.11416 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Summary\n\nThis is a fact-checking model from our work:\n\n MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents (GitHub Repo)\n\nThe model is based on Flan-T5-Large that predicts a binary label - 1 for supported and 0 for unsupported. \nThe model is doing predictions on the *sentence-level*. It takes as input a document and a sentence and determine \nwhether the sentence is supported by the document: MiniCheck-Model(document, claim) -> {0, 1}\n\n\nMiniCheck-Flan-T5-Large is fine tuned from 'google/flan-t5-large' (Chung et al., 2022) \non the combination of 35K data:\n- 21K ANLI data (Nie et al., 2020)\n- 14K synthetic data generated from scratch in a structed way (more details in the paper).", "### Model Variants\nWe also have other two MiniCheck model variants:\n- lytang/MiniCheck-RoBERTa-Large\n- lytang/MiniCheck-DeBERTa-v3-Large", "### Model Performance\n\n<p align=\"center\">\n <img src=\"./URL\" width=\"360\">\n</p>\n\n\nThe performance of these models is evaluated on our new collected benchmark (unseen by our models during training), LLM-AggreFact, \nfrom 10 recent human annotated datasets on fact-checking and grounding LLM generations. Our most capable model MiniCheck-Flan-T5-Large outperform all\nexisiting specialized fact-checkers with a similar scale by a large margin (4-10% absolute increase) and is on par with GPT-4, but 400x cheaper. See full results in our work.\n\nNote: We only evaluated the performance of our models on real claims -- without any human intervention in \nany format, such as injecting certain error types into model-generated claims. Those edited claims do not reflect\nLLMs' actual behaviors.", "# Model Usage Demo\n\nPlease first clone our GitHub Repo and install necessary packages from 'URL'.", "### Below is a simple use case", "### Test on our LLM-AggreFact Benchmark\n\n\n\nTo evalaute the result on the benchmark" ]
text-generation
null
## Exllama v2 Quantizations of OpenCerebrum-2.0-7B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.18">turboderp's ExLlamaV2 v0.0.18</a> for quantization.

<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>

Each branch contains an individual bits-per-weight quantization, with the main one containing only the measurement.json for further conversions.

Original model: https://huggingface.co/Locutusque/OpenCerebrum-2.0-7B

## Prompt format

No chat template specified so ChatML is used. This may be incorrect, check original model card for details.

```
<|im_start|>system
{message}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/OpenCerebrum-2.0-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/OpenCerebrum-2.0-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/OpenCerebrum-2.0-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/OpenCerebrum-2.0-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/OpenCerebrum-2.0-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/OpenCerebrum-2.0-7B-exl2 OpenCerebrum-2.0-7B-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` (only useful if you only care about measurement.json) branch to a folder called `OpenCerebrum-2.0-7B-exl2`:

```shell
mkdir OpenCerebrum-2.0-7B-exl2
huggingface-cli download bartowski/OpenCerebrum-2.0-7B-exl2 --local-dir OpenCerebrum-2.0-7B-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir OpenCerebrum-2.0-7B-exl2-6_5
huggingface-cli download bartowski/OpenCerebrum-2.0-7B-exl2 --revision 6_5 --local-dir OpenCerebrum-2.0-7B-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't like _ in folders sometimes?):

```shell
mkdir OpenCerebrum-2.0-7B-exl2-6.5
huggingface-cli download bartowski/OpenCerebrum-2.0-7B-exl2 --revision 6_5 --local-dir OpenCerebrum-2.0-7B-exl2-6.5 --local-dir-use-symlinks False
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
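The VRAM column above is roughly weight storage plus context-dependent overhead; a back-of-envelope sketch (the ~1.5 GB overhead constant is a hand-wavy assumption fitted to the 4k column, not a measurement):

```python
# Back-of-envelope sketch of the table's VRAM column: weights take about
# params(B) * bpw / 8 GB, plus cache/runtime overhead that grows with
# context. The 1.5 GB overhead figure is a rough assumption, not measured.
def est_vram_gb(params_b: float, bpw: float, overhead_gb: float = 1.5) -> float:
    return params_b * bpw / 8 + overhead_gb

for bpw in (8.0, 6.5, 5.0, 4.25, 3.5):
    print(f"{bpw} bpw ~ {est_vram_gb(7.0, bpw):.1f} GB at 4k context (approx)")
```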
{"language": ["en"], "license": "apache-2.0", "tags": ["open-source", "code", "math", "chemistry", "biology", "text-generation", "question-answering"], "pipeline_tag": "text-generation", "quantized_by": "bartowski"}
bartowski/OpenCerebrum-2.0-7B-exl2
null
[ "open-source", "code", "math", "chemistry", "biology", "text-generation", "question-answering", "en", "license:apache-2.0", "region:us" ]
null
2024-04-14T23:20:20+00:00
[]
[ "en" ]
TAGS #open-source #code #math #chemistry #biology #text-generation #question-answering #en #license-apache-2.0 #region-us
Exllama v2 Quantizations of OpenCerebrum-2.0-7B
-----------------------------------------------

Using <a href="URL ExLlamaV2 v0.0.18 for quantization.

**The "main" branch only contains the URL, download one of the other branches for the model (see below)**

Each branch contains an individual bits-per-weight quantization, with the main one containing only the URL for further conversions.

Original model: URL

Prompt format
-------------

No chat template specified so ChatML is used. This may be incorrect, check original model card for details.

Download instructions
---------------------

With git:

With huggingface hub (credit to TheBloke for instructions):

To download the 'main' (only useful if you only care about URL) branch to a folder called 'OpenCerebrum-2.0-7B-exl2':

To download from a different branch, add the '--revision' parameter:

Linux:

Windows (which apparently doesn't like \_ in folders sometimes?):

Want to support my work? Visit my ko-fi page here: URL
[]
[ "TAGS\n#open-source #code #math #chemistry #biology #text-generation #question-answering #en #license-apache-2.0 #region-us \n" ]
text-generation
transformers
# Uploaded model - **Developed by:** cackerman - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-it-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-7b-it-bnb-4bit"}
cackerman/rewrites_gem7unsloth_4bit_ft_full_merged
null
[ "transformers", "safetensors", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/gemma-7b-it-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-04-14T23:21:30+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #gemma #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/gemma-7b-it-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #8-bit #region-us
# Uploaded model - Developed by: cackerman - License: apache-2.0 - Finetuned from model : unsloth/gemma-7b-it-bnb-4bit This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: cackerman\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/gemma-7b-it-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #8-bit #region-us \n", "# Uploaded model\n\n- Developed by: cackerman\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Sao10K/Euryale-Inverted-L2-70B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Euryale-Inverted-L2-70B-GGUF/resolve/main/Euryale-Inverted-L2-70B.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you 
want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
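For the split Q6_K and Q8_0 files in the table above, the parts simply need to be concatenated back into one file before loading. A minimal sketch of that step in Python (the part names are taken verbatim from the Q6_K row of the table; any GGUF-capable runtime can then load the merged file):

```python
# Merge a two-part GGUF download back into a single file.
# Part names come from the Q6_K row of the quant table above.
parts = [
    "Euryale-Inverted-L2-70B.Q6_K.gguf.part1of2",
    "Euryale-Inverted-L2-70B.Q6_K.gguf.part2of2",
]
with open("Euryale-Inverted-L2-70B.Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            while chunk := src.read(1 << 20):  # copy in 1 MiB chunks to keep memory use flat
                merged.write(chunk)
```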
{"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "base_model": "Sao10K/Euryale-Inverted-L2-70B", "quantized_by": "mradermacher"}
mradermacher/Euryale-Inverted-L2-70B-GGUF
null
[ "transformers", "gguf", "en", "base_model:Sao10K/Euryale-Inverted-L2-70B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-14T23:25:43+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-Sao10K/Euryale-Inverted-L2-70B #license-cc-by-nc-4.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-Sao10K/Euryale-Inverted-L2-70B #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n" ]
text-classification
transformers
# Model Summary This is a fact-checking model from our work: 📃 [**MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents**](https://arxiv.org/pdf/2404.10774.pdf) ([GitHub Repo](https://github.com/Liyan06/MiniCheck)) The model is based on DeBERTa-v3-Large and predicts a binary label - 1 for supported and 0 for unsupported. The model makes predictions at the *sentence level*: it takes as input a document and a sentence and determines whether the sentence is supported by the document: **MiniCheck-Model(document, claim) -> {0, 1}** MiniCheck-DeBERTa-v3-Large is fine-tuned from `microsoft/deberta-v3-large` ([He et al., 2023](https://arxiv.org/pdf/2111.09543.pdf)) on a combination of 35K examples: - 21K ANLI data ([Nie et al., 2020](https://aclanthology.org/2020.acl-main.441.pdf)) - 14K synthetic data generated from scratch in a structured way (more details in the paper). ### Model Variants We also have two other MiniCheck model variants: - [lytang/MiniCheck-Flan-T5-Large](https://huggingface.co/lytang/MiniCheck-Flan-T5-Large) - [lytang/MiniCheck-RoBERTa-Large](https://huggingface.co/lytang/MiniCheck-RoBERTa-Large) ### Model Performance <p align="center"> <img src="./cost-vs-bacc.png" width="360"> </p> The performance of these models is evaluated on our newly collected benchmark (unseen by our models during training), [LLM-AggreFact](https://huggingface.co/datasets/lytang/LLM-AggreFact), drawn from 10 recent human-annotated datasets on fact-checking and grounding LLM generations. MiniCheck-DeBERTa-v3-Large outperforms all existing specialized fact-checkers of a similar scale by a large margin, but is 2% worse than our best model MiniCheck-Flan-T5-Large, which is on par with GPT-4 but 400x cheaper. See full results in our work. Note: We only evaluated the performance of our models on real claims -- without any human intervention in any format, such as injecting certain error types into model-generated claims. Those edited claims do not reflect LLMs' actual behaviors. # Model Usage Demo Please first clone our [GitHub Repo](https://github.com/Liyan06/MiniCheck) and install the necessary packages from `requirements.txt`. ### Below is a simple use case ```python from minicheck.minicheck import MiniCheck doc = "A group of students gather in the school library to study for their upcoming final exams." claim_1 = "The students are preparing for an examination." claim_2 = "The students are on vacation." 
# model_name can be one of ['roberta-large', 'deberta-v3-large', 'flan-t5-large'] scorer = MiniCheck(model_name='deberta-v3-large', device='cuda:0', cache_dir='./ckpts') pred_label, raw_prob, _, _ = scorer.score(docs=[doc, doc], claims=[claim_1, claim_2]) print(pred_label) # [1, 0] print(raw_prob) # [0.9786180257797241, 0.01138285268098116] ``` ### Test on our [LLM-AggreFact](https://huggingface.co/datasets/lytang/LLM-AggreFact) Benchmark ```python import pandas as pd from datasets import load_dataset from minicheck.minicheck import MiniCheck # load the 13K test examples df = pd.DataFrame(load_dataset("lytang/LLM-AggreFact")['test']) docs = df.doc.values claims = df.claim.values scorer = MiniCheck(model_name='deberta-v3-large', device='cuda:0', cache_dir='./ckpts') pred_label, raw_prob, _, _ = scorer.score(docs=docs, claims=claims) # ~15 mins, depending on hardware ``` To evaluate the results on the benchmark: ```python from sklearn.metrics import balanced_accuracy_score df['preds'] = pred_label result_df = pd.DataFrame(columns=['Dataset', 'BAcc']) for dataset in df.dataset.unique(): sub_df = df[df.dataset == dataset] bacc = balanced_accuracy_score(sub_df.label, sub_df.preds) * 100 result_df.loc[len(result_df)] = [dataset, bacc] result_df.loc[len(result_df)] = ['Average', result_df.BAcc.mean()] result_df.round(1) ``` # Citation ``` @misc{tang2024minicheck, title={MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents}, author={Liyan Tang and Philippe Laban and Greg Durrett}, year={2024}, eprint={2404.10774}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": ["en"], "pipeline_tag": "text-classification"}
lytang/MiniCheck-DeBERTa-v3-Large
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "en", "arxiv:2404.10774", "arxiv:2111.09543", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T23:26:02+00:00
[ "2404.10774", "2111.09543" ]
[ "en" ]
TAGS #transformers #pytorch #deberta-v2 #text-classification #en #arxiv-2404.10774 #arxiv-2111.09543 #autotrain_compatible #endpoints_compatible #region-us
# Model Summary This is a fact-checking model from our work: MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents (GitHub Repo) The model is based on DeBERTa-v3-Large and predicts a binary label - 1 for supported and 0 for unsupported. The model makes predictions at the *sentence level*: it takes as input a document and a sentence and determines whether the sentence is supported by the document: MiniCheck-Model(document, claim) -> {0, 1} MiniCheck-DeBERTa-v3-Large is fine-tuned from 'microsoft/deberta-v3-large' (He et al., 2023) on a combination of 35K examples: - 21K ANLI data (Nie et al., 2020) - 14K synthetic data generated from scratch in a structured way (more details in the paper). ### Model Variants We also have two other MiniCheck model variants: - lytang/MiniCheck-Flan-T5-Large - lytang/MiniCheck-RoBERTa-Large ### Model Performance <p align="center"> <img src="./URL" width="360"> </p> The performance of these models is evaluated on our newly collected benchmark (unseen by our models during training), LLM-AggreFact, drawn from 10 recent human-annotated datasets on fact-checking and grounding LLM generations. MiniCheck-DeBERTa-v3-Large outperforms all existing specialized fact-checkers of a similar scale by a large margin, but is 2% worse than our best model MiniCheck-Flan-T5-Large, which is on par with GPT-4 but 400x cheaper. See full results in our work. Note: We only evaluated the performance of our models on real claims -- without any human intervention in any format, such as injecting certain error types into model-generated claims. Those edited claims do not reflect LLMs' actual behaviors. # Model Usage Demo Please first clone our GitHub Repo and install the necessary packages from 'URL'. ### Below is a simple use case ### Test on our LLM-AggreFact Benchmark To evaluate the results on the benchmark
[ "# Model Summary\n\nThis is a fact-checking model from our work:\n\n MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents (GitHub Repo)\n\nThe model is based on DeBERTa-v3-Large that predicts a binary label - 1 for supported and 0 for unsupported. \nThe model is doing predictions on the *sentence-level*. It takes as input a document and a sentence and determine \nwhether the sentence is supported by the document: MiniCheck-Model(document, claim) -> {0, 1}\n\n\nMiniCheck-DeBERTa-v3-Large is fine tuned from 'microsoft/deberta-v3-large' (He et al., 2023) \non the combination of 35K data:\n- 21K ANLI data (Nie et al., 2020)\n- 14K synthetic data generated from scratch in a structed way (more details in the paper).", "### Model Variants\nWe also have other two MiniCheck model variants:\n- lytang/MiniCheck-Flan-T5-Large\n- lytang/MiniCheck-RoBERTa-Large", "### Model Performance\n\n<p align=\"center\">\n <img src=\"./URL\" width=\"360\">\n</p>\n\nThe performance of these models is evaluated on our new collected benchmark (unseen by our models during training), LLM-AggreFact, \nfrom 10 recent human annotated datasets on fact-checking and grounding LLM generations. MiniCheck-DeBERTa-v3-Large outperform all\nexisiting specialized fact-checkers with a similar scale by a large margin but is 2% worse than our best model MiniCheck-Flan-T5-Large, which\nis on par with GPT-4 but 400x cheaper. See full results in our work.\n\nNote: We only evaluated the performance of our models on real claims -- without any human intervention in \nany format, such as injecting certain error types into model-generated claims. Those edited claims do not reflect\nLLMs' actual behaviors.", "# Model Usage Demo\n\nPlease first clone our GitHub Repo and install necessary packages from 'URL'.", "### Below is a simple use case", "### Test on our LLM-AggreFact Benchmark\n\n\n\nTo evalaute the result on the benchmark" ]
[ "TAGS\n#transformers #pytorch #deberta-v2 #text-classification #en #arxiv-2404.10774 #arxiv-2111.09543 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Summary\n\nThis is a fact-checking model from our work:\n\n MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents (GitHub Repo)\n\nThe model is based on DeBERTa-v3-Large that predicts a binary label - 1 for supported and 0 for unsupported. \nThe model is doing predictions on the *sentence-level*. It takes as input a document and a sentence and determine \nwhether the sentence is supported by the document: MiniCheck-Model(document, claim) -> {0, 1}\n\n\nMiniCheck-DeBERTa-v3-Large is fine tuned from 'microsoft/deberta-v3-large' (He et al., 2023) \non the combination of 35K data:\n- 21K ANLI data (Nie et al., 2020)\n- 14K synthetic data generated from scratch in a structed way (more details in the paper).", "### Model Variants\nWe also have other two MiniCheck model variants:\n- lytang/MiniCheck-Flan-T5-Large\n- lytang/MiniCheck-RoBERTa-Large", "### Model Performance\n\n<p align=\"center\">\n <img src=\"./URL\" width=\"360\">\n</p>\n\nThe performance of these models is evaluated on our new collected benchmark (unseen by our models during training), LLM-AggreFact, \nfrom 10 recent human annotated datasets on fact-checking and grounding LLM generations. MiniCheck-DeBERTa-v3-Large outperform all\nexisiting specialized fact-checkers with a similar scale by a large margin but is 2% worse than our best model MiniCheck-Flan-T5-Large, which\nis on par with GPT-4 but 400x cheaper. See full results in our work.\n\nNote: We only evaluated the performance of our models on real claims -- without any human intervention in \nany format, such as injecting certain error types into model-generated claims. Those edited claims do not reflect\nLLMs' actual behaviors.", "# Model Usage Demo\n\nPlease first clone our GitHub Repo and install necessary packages from 'URL'.", "### Below is a simple use case", "### Test on our LLM-AggreFact Benchmark\n\n\n\nTo evalaute the result on the benchmark" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
EdBerg/quotes_Llama-2-13b-chat-hf
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-14T23:26:13+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/uncensorie/stairolzlv-70b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/stairolzlv-70b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/stairolzlv-70b-GGUF/resolve/main/stairolzlv-70b.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
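To illustrate running one of the single-file quants above, here is a minimal sketch using the llama-cpp-python bindings. The binding itself and the context setting are assumptions not specified by this card; only the file name comes from the quant table above:

```python
from llama_cpp import Llama  # assumption: `pip install llama-cpp-python`

# Load the Q4_K_M quant from the table above; tune n_ctx (and GPU offload
# options, not shown) for your hardware.
llm = Llama(model_path="stairolzlv-70b.Q4_K_M.gguf", n_ctx=2048)

out = llm("Briefly explain what a GGUF quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```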
{"language": ["en"], "license": "llama2", "library_name": "transformers", "base_model": "uncensorie/stairolzlv-70b", "quantized_by": "mradermacher"}
mradermacher/stairolzlv-70b-GGUF
null
[ "transformers", "gguf", "en", "base_model:uncensorie/stairolzlv-70b", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-14T23:27:54+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-uncensorie/stairolzlv-70b #license-llama2 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-uncensorie/stairolzlv-70b #license-llama2 #endpoints_compatible #region-us \n" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-sft-wizardlm_evol_instruct_v2-full This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the ucla-cmllab/WizardLM_evol_instruct_V2_100k-chat-format dataset. It achieves the following results on the evaluation set: - Loss: 0.7234 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.735 | 1.0 | 781 | 0.7234 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
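As a usage sketch not included in the original card: assuming the tokenizer ships the chat template applied during SFT (as alignment-handbook runs typically do), the checkpoint can be queried with the standard Transformers API. The model id comes from this repository; the prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "andrewbai/tinyllama-sft-wizardlm_evol_instruct_v2-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Illustrative prompt; the chat template is assumed to be set in tokenizer_config.
messages = [{"role": "user", "content": "Explain instruction tuning in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```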
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["ucla-cmllab/WizardLM_evol_instruct_V2_100k-chat-format"], "base_model": "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "model-index": [{"name": "tinyllama-sft-wizardlm_evol_instruct_v2-full", "results": []}]}
andrewbai/tinyllama-sft-wizardlm_evol_instruct_v2-full
null
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:ucla-cmllab/WizardLM_evol_instruct_V2_100k-chat-format", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T23:28:00+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-ucla-cmllab/WizardLM_evol_instruct_V2_100k-chat-format #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
tinyllama-sft-wizardlm\_evol\_instruct\_v2-full =============================================== This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T on the ucla-cmllab/WizardLM\_evol\_instruct\_V2\_100k-chat-format dataset. It achieves the following results on the evaluation set: * Loss: 0.7234 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 4 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.2.2+cu121 * Datasets 2.14.6 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-ucla-cmllab/WizardLM_evol_instruct_V2_100k-chat-format #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-2024-03-20 This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-19](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-19) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9234 - Rouge1: 29.6968 - Rouge2: 15.5967 - Rougel: 25.7424 - Rougelsum: 25.9564 - Gen Len: 18.4459 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.0445 | 1.0 | 293 | 1.9234 | 29.6968 | 15.5967 | 25.7424 | 25.9564 | 18.4459 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
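A minimal inference sketch, not part of the original card (the model id comes from this repository; the input text and generation lengths are placeholders):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="liamvbetts/t5-small-finetuned-2024-03-20")

article = (
    "Placeholder news text: replace with the article you want to summarize. "
    "The card above reports ROUGE scores on a held-out evaluation set."
)
print(summarizer(article, max_length=48, min_length=8, do_sample=False)[0]["summary_text"])
```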
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "liamvbetts/t5-small-finetuned-2024-03-19", "model-index": [{"name": "t5-small-finetuned-2024-03-20", "results": []}]}
liamvbetts/t5-small-finetuned-2024-03-20
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:liamvbetts/t5-small-finetuned-2024-03-19", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T23:29:46+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-19 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-2024-03-20 ============================= This model is a fine-tuned version of liamvbetts/t5-small-finetuned-2024-03-19 on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.9234 * Rouge1: 29.6968 * Rouge2: 15.5967 * Rougel: 25.7424 * Rougelsum: 25.9564 * Gen Len: 18.4459 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 4e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-liamvbetts/t5-small-finetuned-2024-03-19 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> MoM: Mixture of Mixture This model is a first test combining the [Jamba](https://huggingface.co/ai21labs/Jamba-v0.1) architecture with bf16 linear layers, mixture of attention heads, and mixture of depth. The goal is to develop and test whether this kind of architecture can provide fast inference without too much quality loss. - **Model type:** Mixture of attention heads, mixture of depth, and mixture of experts with bf16 linear layers - **License:** Apache license 2.0 ### Model Sources [optional] - **Repository:** https://github.com/ostix360/optimized-LLM ## How to Get Started with the Model This model has a generation problem because of a softmax application in the MoD process. If you want to test this model, please look at this repo at this [commit](https://github.com/ostix360/optimized-LLM/tree/e223f9fa7bd136cfd836ceee522e1d98b97b08af) ## Training Details - **wandb**: [training detail](https://wandb.ai/ostix360/Mixture%20of%20mixture%20(mod,%20moah%20moe)/runs/c37qwolp) ### Training Data We use the first 100k examples of Locutusque/UltraTextbooks to train this model. ### Training Procedure We use 8-bit Adam with default beta and epsilon values. #### Preprocessing [optional] The data fits the model's max length, i.e. 512 tokens. #### Training Hyperparameters Please look at the wandb metadata or the train.py file in the repo to see the hyperparameters. ## Technical Specifications ### Compute Infrastructure #### Hardware - one 4070 Ti GPU #### Software - PyTorch, Transformers, etc.
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["moe", "moah", "mod"], "datasets": ["Locutusque/UltraTextbooks"]}
Ostixe360/MoMv3-bf16
null
[ "transformers", "safetensors", "text-generation", "moe", "moah", "mod", "en", "dataset:Locutusque/UltraTextbooks", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-14T23:30:03+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation #moe #moah #mod #en #dataset-Locutusque/UltraTextbooks #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description MoM: Mixture of Mixture This model is a first test combining the Jamba architecture with bf16 linear layers, mixture of attention heads, and mixture of depth. The goal is to develop and test whether this kind of architecture can provide fast inference without too much quality loss. - Model type: Mixture of attention heads, mixture of depth, and mixture of experts with bf16 linear layers - License: Apache license 2.0 ### Model Sources [optional] - Repository: URL ## How to Get Started with the Model This model has a generation problem because of a softmax application in the MoD process. If you want to test this model, please look at this repo at this commit ## Training Details - wandb: training detail/runs/c37qwolp) ### Training Data We use the first 100k examples of Locutusque/UltraTextbooks to train this model. ### Training Procedure We use 8-bit Adam with default beta and epsilon values. #### Preprocessing [optional] The data fits the model's max length, i.e. 512 tokens. #### Training Hyperparameters Please look at the wandb metadata or the URL file in the repo to see the hyperparameters. ## Technical Specifications ### Compute Infrastructure #### Hardware - one 4070 Ti GPU #### Software - PyTorch, Transformers, etc.
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nMoM: Mixture of Mixture\n\nThis Model is a first test to combine Jamba architecture with bf16 bits linear layers, mixture of attention head and mixture of depth.\n\nThe goal is to developpe and test if this kind of architectures have not too much quality loss for a fast inference.\n\n\n- Model type: Mixture of attention head mixture of depth and mixture of expert bf16 linear layers \n- License: Apache licence 2.0", "### Model Sources [optional]\n\n\n- Repository: URL", "## How to Get Started with the Model\n\nThis model has a generation problem because of a softmax application in the mod process\n\n\nIf you want to test this model please look at this repo at this commit", "## Training Details\n\n - wandb: training detail/runs/c37qwolp)", "### Training Data\n\nWe use the first 100k data of Locutusque/UltraTextbooks to train this model", "### Training Procedure\n\nWe use adam-8 bits with default betas and epsilon values", "#### Preprocessing [optional]\n\n\nThe data fit the model max length i.e. 512 tokens", "#### Training Hyperparameters\n\nPlease look at the wandb metadata to see the hyperparameters or the URL file in the repo", "## Technical Specifications", "### Compute Infrastructure", "#### Hardware\n\n- one 4070 ti GPU", "#### Software\n\n- pytorch, transformers etc" ]
[ "TAGS\n#transformers #safetensors #text-generation #moe #moah #mod #en #dataset-Locutusque/UltraTextbooks #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nMoM: Mixture of Mixture\n\nThis Model is a first test to combine Jamba architecture with bf16 bits linear layers, mixture of attention head and mixture of depth.\n\nThe goal is to developpe and test if this kind of architectures have not too much quality loss for a fast inference.\n\n\n- Model type: Mixture of attention head mixture of depth and mixture of expert bf16 linear layers \n- License: Apache licence 2.0", "### Model Sources [optional]\n\n\n- Repository: URL", "## How to Get Started with the Model\n\nThis model has a generation problem because of a softmax application in the mod process\n\n\nIf you want to test this model please look at this repo at this commit", "## Training Details\n\n - wandb: training detail/runs/c37qwolp)", "### Training Data\n\nWe use the first 100k data of Locutusque/UltraTextbooks to train this model", "### Training Procedure\n\nWe use adam-8 bits with default betas and epsilon values", "#### Preprocessing [optional]\n\n\nThe data fit the model max length i.e. 512 tokens", "#### Training Hyperparameters\n\nPlease look at the wandb metadata to see the hyperparameters or the URL file in the repo", "## Technical Specifications", "### Compute Infrastructure", "#### Hardware\n\n- one 4070 ti GPU", "#### Software\n\n- pytorch, transformers etc" ]
text-generation
transformers
# mixtral-8x22b-instruct-oh - EXL2 4.5bpw This is a 4.5bpw EXL2 quant of [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh) Details about the model can be found at the above model page. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. _TODO_ ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="mixtral-8x22b-instruct-oh" BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" if [ -d "$MODEL_DIR" ]; then output=$(python test_inference.py -m "$MODEL_DIR" -gs 44,48 -ed data/wikitext/wikitext-2-v1.parquet) score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" fi done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="mixtral-8x22b-instruct-oh" # Define variables MODEL_DIR="models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(2.25) BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
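For completeness, a loading sketch modeled on the upstream exllamav2 examples of the 0.0.18 era; the exact API is an assumption and may differ between library versions. The model directory follows the naming scheme of the quant script above:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Directory name follows the quant script above; adjust to where the quant lives.
config = ExLlamaV2Config()
config.model_dir = "models/mixtral-8x22b-instruct-oh_exl2_4.5bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache lets load_autosplit place layers
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()  # default sampling settings; tune as needed
print(generator.generate_simple("Hello, my name is", settings, 32))
```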
{"language": ["en"], "license": "apache-2.0", "tags": ["exl2"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "mistral-community/Mixtral-8x22B-v0.1"}
Dracones/mixtral-8x22b-instruct-oh_exl2_4.5bpw
null
[ "transformers", "safetensors", "mixtral", "text-generation", "exl2", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T23:30:54+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mixtral #text-generation #exl2 #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# mixtral-8x22b-instruct-oh - EXL2 4.5bpw This is a 4.5bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh Details about the model can be found at the above model page. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. _TODO_ ### Perplexity Script This was the script used for perplexity testing. ## Quant Details This is the script used for quantization.
[ "# mixtral-8x22b-instruct-oh - EXL2 4.5bpw\n\nThis is a 4.5bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh\n\nDetails about the model can be found at the above model page.", "## EXL2 Version\n\nThese quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.\n\nIf you have problems loading these models, please update Text Generation WebUI to the latest version.", "## Perplexity Scoring\n\nBelow are the perplexity scores for the EXL2 models. A lower score is better. \n\n_TODO_", "### Perplexity Script\n\nThis was the script used for perplexity testing.", "## Quant Details\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #exl2 #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# mixtral-8x22b-instruct-oh - EXL2 4.5bpw\n\nThis is a 4.5bpw EXL2 quant of fireworks-ai/mixtral-8x22b-instruct-oh\n\nDetails about the model can be found at the above model page.", "## EXL2 Version\n\nThese quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.\n\nIf you have problems loading these models, please update Text Generation WebUI to the latest version.", "## Perplexity Scoring\n\nBelow are the perplexity scores for the EXL2 models. A lower score is better. \n\n_TODO_", "### Perplexity Script\n\nThis was the script used for perplexity testing.", "## Quant Details\n\nThis is the script used for quantization." ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
relu-ntnu/bart-large-cnn_v1_trained_on_100
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-14T23:32:22+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
QLoRA fine-tune of [Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) on a combination of the Capybara and Airoboros datasets. It uses Mistral instruct formatting, like this: [INST] Describe quantum computing to a layperson. [/INST] Model details: - Trained with QLoRA on 4 4090s, using my own [qlora-pipe](https://github.com/tdrussell/qlora-pipe) training script - LoRA rank 64 - 4096 sequence length - 2 epochs You can find the LoRA adapter files [here](https://huggingface.co/tdrussell/Mixtral-8x22B-Capyboros-v1-lora). I have also uploaded a single quant (GGUF q4_k_s) [here](https://huggingface.co/tdrussell/Mixtral-8x22B-Capyboros-v1-GGUF-q4_k_s) if you want to try it without quantizing yourself or waiting for someone else to make all the quants. It fits in 96GB of VRAM with at least 16k of context.
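The card shows the prompt format but no runnable snippet, so here is a minimal, hypothetical sketch of that format with the standard transformers generation API. The dtype, device placement, and generation settings below are assumptions, not recommendations from the author, and loading an 8x22B model this way requires very substantial GPU memory or offloading (device_map="auto" needs the accelerate package).

```python
# Illustrative sketch of the Mistral instruct format described above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tdrussell/Mixtral-8x22B-Capyboros-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

# The [INST] ... [/INST] wrapper is exactly the formatting the card specifies.
prompt = "[INST] Describe quantum computing to a layperson. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```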
{"license": "apache-2.0", "datasets": ["ssmi153/Capybara-ShareGPT", "jondurbin/airoboros-3.2"]}
tdrussell/Mixtral-8x22B-Capyboros-v1
null
[ "transformers", "safetensors", "mixtral", "text-generation", "dataset:ssmi153/Capybara-ShareGPT", "dataset:jondurbin/airoboros-3.2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-14T23:34:41+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #dataset-ssmi153/Capybara-ShareGPT #dataset-jondurbin/airoboros-3.2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
QLoRA fine-tune of Mixtral-8x22B-v0.1 on a combination of the Capybara and Airoboros datasets. It uses Mistral instruct formatting, like this: [INST] Describe quantum computing to a layperson. [/INST] Model details: - Trained with QLoRA on 4 4090s, using my own qlora-pipe training script - LoRA rank 64 - 4096 sequence length - 2 epochs You can find the LoRA adapter files here. I have also uploaded a single quant (GGUF q4_k_s) here if you want to try it without quantizing yourself or waiting for someone else to make all the quants. It fits in 96GB of VRAM with at least 16k of context.
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #dataset-ssmi153/Capybara-ShareGPT #dataset-jondurbin/airoboros-3.2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
diffusers
# Terminus XL - Gamma (v2 preview) This is an in-progress checkpoint of [the "Gamma" model](/ptx0/terminus-xl-gamma-v1) from the Terminus XL series. It's updated randomly for evaluation as progress rolls on.
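No usage instructions are given on the card; the sketch below is speculative and assumes the checkpoint loads as a standard diffusers text-to-image pipeline (the repo tags indicate diffusers-format safetensors). The prompt, precision, and step count are placeholders.

```python
# Speculative usage sketch; pipeline class, dtype, and settings are assumptions.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "ptx0/terminus-xl-velocity-v2", torch_dtype=torch.float16
)
pipe.to("cuda")

image = pipe("a lighthouse on a cliff at dusk", num_inference_steps=30).images[0]
image.save("terminus_preview.png")
```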
{"license": "openrail++"}
ptx0/terminus-xl-velocity-v2
null
[ "diffusers", "safetensors", "license:openrail++", "region:us" ]
null
2024-04-14T23:35:06+00:00
[]
[]
TAGS #diffusers #safetensors #license-openrail++ #region-us
# Terminus XL - Gamma (v2 preview) This is an in-progress checkpoint of the "Gamma" model from the Terminus XL series. It's updated randomly for evaluation as progress rolls on.
[ "# Terminus XL - Gamma (v2 preview)\n\nThis is an in-progress checkpoint of the \"Gamma\" model from the Terminus XL series.\n\nIt's updated randomly for evaluation as progress rolls on." ]
[ "TAGS\n#diffusers #safetensors #license-openrail++ #region-us \n", "# Terminus XL - Gamma (v2 preview)\n\nThis is an in-progress checkpoint of the \"Gamma\" model from the Terminus XL series.\n\nIt's updated randomly for evaluation as progress rolls on." ]