Dataset columns: | Column | Type | Range | |:---|:---|:---| | pipeline_tag | string (48 classes) | | | library_name | string (205 classes) | | | text | string | 0 – 18.3M chars | | metadata | string | 2 – 1.07B chars | | id | string | 5 – 122 chars | | last_modified | null | | | tags | list | 1 – 1.84k items | | sha | null | | | created_at | string | 25 chars |
# NikolayKozloff/Orbita-v0.1-Q5_K_M-GGUF This model was converted to GGUF format from [`Orbina/Orbita-v0.1`](https://huggingface.co/Orbina/Orbita-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Orbina/Orbita-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo NikolayKozloff/Orbita-v0.1-Q5_K_M-GGUF --model orbita-v0.1.Q5_K_M.gguf -p "The meaning of life and the universe is" ``` Server: ```bash llama-server --hf-repo NikolayKozloff/Orbita-v0.1-Q5_K_M-GGUF --model orbita-v0.1.Q5_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m orbita-v0.1.Q5_K_M.gguf -n 128 ```
{"language": ["tr"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "model-index": [{"name": "Orbita-v0.1", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge TR", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc", "value": 30.15, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag TR", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc", "value": 37.95, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU TR", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 47.94, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA TR", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc", "value": 41.93, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande TR", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 54.42, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k TR", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 47.72, "name": "accuracy"}]}]}]}
NikolayKozloff/Orbita-v0.1-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "tr", "license:apache-2.0", "model-index", "region:us" ]
null
2024-04-25T03:14:18+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
gkMSDA/Llama-3-8b-FinChatGTP298_DJ30_Model_4
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:15:09+00:00
text-generation
transformers
{}
shrenikb/sparsegpt75sparsitymodel
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:15:32+00:00
text-generation
transformers
{}
OldBirdAZ/lma3-8b-inst-tok-Llama-2
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:19:13+00:00
null
null
{}
youngdicey/audiogen-focalloss-finetuing
null
[ "region:us" ]
null
2024-04-25T03:20:20+00:00
null
null
{}
ljcnju/Deepseek-7b-For-Clone-Detection-Lora-weights
null
[ "safetensors", "region:us" ]
null
2024-04-25T03:20:23+00:00
text-generation
transformers
{}
minionai/llama3-70b-3epoch-qlora-merged
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:22:00+00:00
null
null
{}
Alejdg/FirstOne
null
[ "region:us" ]
null
2024-04-25T03:22:37+00:00
text2text-generation
transformers
{}
paulh27/xsum_aligned_smallT5_iter2
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:23:13+00:00
text-generation
transformers
Quantizations of https://huggingface.co/bigscience/bloom-3b # From original readme ...
{"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "bigscience", "bloom-3b"], "inference": false, "pipeline_tag": "text-generation"}
duyntnet/bloom-3b-imatrix-GGUF
null
[ "transformers", "gguf", "imatrix", "bigscience", "bloom-3b", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-25T03:23:19+00:00
text-generation
transformers
# Keiana-L3-Test4.8-8B-4 # Keep in mind that it's not yet tested, and I'm unsure if it will work as planned. Keiana-L3-Test4.8-8B-4 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Kaoeiri/Keiana-L3-Test4.7-8B-3](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.7-8B-3) ## 🧩 Configuration ```yaml merge_method: task_arithmetic dtype: float16 base_model: jeiku/Average_Normie_l3_v1_8B models: - model: Kaoeiri/Keiana-L3-Test4.7-8B-3 parameters: weight: 1.0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Kaoeiri/Keiana-L3-Test4.8-8B-4" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.7-8B-3"], "base_model": ["Kaoeiri/Keiana-L3-Test4.7-8B-3"]}
Kaoeiri/Keiana-L3-Test4.8-8B-4
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "conversational", "base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:23:30+00:00
sentence-similarity
sentence-transformers
# all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model is easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model; then apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F # Mean pooling - take the attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which of a set of randomly sampled other sentences was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). 
We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as help from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to its model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for every possible sentence pair in the batch, then apply the cross-entropy loss by comparing with the true pairs (see the sketch after the dataset table below). #### Hyperparameters We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core). We used a learning-rate warm-up of 500 steps, the sequence length was limited to 128 tokens, and we used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`. #### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset with a weighted probability whose configuration is detailed in the `data_config.json` file. | Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | 
[paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395 | | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
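The in-batch contrastive objective described above can be sketched as follows. This is a hedged illustration, not the project's actual training code (`train_script.py` in the repository is authoritative); the scale factor of 20 is an assumed value commonly used for scaled cosine similarities, not confirmed for this run.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb: torch.Tensor,
                              positive_emb: torch.Tensor,
                              scale: float = 20.0) -> torch.Tensor:
    # Cosine similarity between every anchor and every candidate in the batch.
    anchor = F.normalize(anchor_emb, p=2, dim=1)
    positive = F.normalize(positive_emb, p=2, dim=1)
    scores = anchor @ positive.T * scale  # shape: (batch, batch)
    # For row i the true pair sits on the diagonal; every other column is an
    # in-batch negative, and cross-entropy pushes the true pair's score up.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```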
{"language": "en", "license": "apache-2.0", "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["s2orc", "flax-sentence-embeddings/stackexchange_xml", "ms_marco", "gooaq", "yahoo_answers_topics", "code_search_net", "search_qa", "eli5", "snli", "multi_nli", "wikihow", "natural_questions", "trivia_qa", "embedding-data/sentence-compression", "embedding-data/flickr30k-captions", "embedding-data/altlex", "embedding-data/simple-wiki", "embedding-data/QQP", "embedding-data/SPECTER", "embedding-data/PAQ_pairs", "embedding-data/WikiAnswers"], "pipeline_tag": "sentence-similarity"}
PIXMELT/all-MiniLM-L6-v2
null
[ "sentence-transformers", "pytorch", "tf", "rust", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:24:27+00:00
text-generation
transformers
# Chaos RP ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/u5p9kdbXT2QQA3iMU0vF1.png) A chaotic force beckons for you; will you heed her call? Built upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort. Enjoy!
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": ["ChaoticNeutrals/IQ_Test_l3_8B", "ResplendentAI/RP_Format_QuoteAsterisk_Llama3"]}
zaq-hack/Chaos_RP_l3_8B-bpw500-h6-exl2-rpcal
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "base_model:ChaoticNeutrals/IQ_Test_l3_8B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "5-bit", "region:us" ]
null
2024-04-25T03:26:02+00:00
null
null
{}
youngdicey/continuous-ar-experiment-testing
null
[ "region:us" ]
null
2024-04-25T03:28:14+00:00
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deit-base-patch16-224-finetuned-ind-17-imbalanced-aadhaarmask-14687 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3950 - Accuracy: 0.8310 - Recall: 0.8310 - F1: 0.8298 - Precision: 0.8360 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision | |:-------------:|:-------:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.8293 | 0.9974 | 293 | 0.7793 | 0.7680 | 0.7680 | 0.7403 | 0.7277 | | 0.5921 | 1.9983 | 587 | 0.5663 | 0.7940 | 0.7940 | 0.7843 | 0.7839 | | 0.4308 | 2.9991 | 881 | 0.4589 | 0.8208 | 0.8208 | 0.8161 | 0.8213 | | 0.3999 | 4.0 | 1175 | 0.4772 | 0.8263 | 0.8263 | 0.8216 | 0.8337 | | 0.4801 | 4.9974 | 1468 | 0.4258 | 0.8378 | 0.8378 | 0.8306 | 0.8463 | | 0.4201 | 5.9983 | 1762 | 0.4120 | 0.8246 | 0.8246 | 0.8213 | 0.8394 | | 0.3233 | 6.9991 | 2056 | 0.3989 | 0.8306 | 0.8306 | 0.8268 | 0.8445 | | 0.3954 | 8.0 | 2350 | 0.3794 | 0.8365 | 0.8365 | 0.8341 | 0.8383 | | 0.2835 | 8.9974 | 2643 | 0.4438 | 0.8318 | 0.8318 | 0.8278 | 0.8434 | | 0.2913 | 9.9983 | 2937 | 0.3799 | 0.8416 | 0.8416 | 0.8404 | 0.8451 | | 0.3261 | 10.9991 | 3231 | 0.3694 | 0.8297 | 0.8297 | 0.8272 | 0.8306 | | 0.3299 | 12.0 | 3525 | 0.3637 | 0.8442 | 0.8442 | 0.8425 | 0.8529 | | 0.3273 | 12.9974 | 3818 | 0.3649 | 0.8421 | 0.8421 | 0.8411 | 0.8482 | | 0.2596 | 13.9983 | 4112 | 0.4152 | 0.8259 | 0.8259 | 0.8213 | 0.8281 | | 0.2813 | 14.9991 | 4406 | 0.3578 | 0.8429 | 0.8429 | 0.8409 | 0.8491 | | 0.2406 | 16.0 | 4700 | 0.3813 | 0.8323 | 0.8323 | 0.8285 | 0.8362 | | 0.2263 | 16.9974 | 4993 | 0.3808 | 0.8318 | 0.8318 | 0.8275 | 0.8377 | | 0.3192 | 17.9983 | 5287 | 0.3625 | 0.8412 | 0.8412 | 0.8372 | 0.8484 | | 0.2003 | 18.9991 | 5581 | 0.3549 | 0.8438 | 0.8438 | 0.8430 | 0.8462 | | 0.2431 | 20.0 | 5875 | 0.3620 | 0.8425 | 0.8425 | 0.8408 | 0.8467 | | 0.2654 | 20.9974 | 6168 | 0.3865 | 0.8340 | 0.8340 | 0.8320 | 0.8338 | | 0.2989 | 21.9983 | 6462 | 0.3632 | 0.8463 | 0.8463 | 0.8449 | 0.8498 | | 0.2403 | 22.9991 | 6756 | 0.3824 | 0.8301 | 0.8301 | 0.8267 | 0.8304 | | 0.2393 | 24.0 | 7050 | 0.3607 | 0.8489 | 0.8489 | 0.8473 | 0.8519 | | 0.2305 | 24.9974 | 7343 | 0.3758 | 0.8365 | 0.8365 | 0.8350 | 0.8401 | | 0.2654 | 25.9983 | 7637 | 0.3652 | 0.8421 | 0.8421 | 0.8392 | 0.8415 | | 0.176 | 26.9991 | 7931 | 0.3929 | 0.8306 | 0.8306 | 0.8289 | 0.8385 | | 0.1893 | 28.0 | 8225 | 0.3794 | 0.8374 | 0.8374 | 0.8365 | 0.8404 | | 0.2652 | 28.9974 | 8518 | 0.3995 | 0.8387 | 0.8387 | 0.8372 | 0.8423 | | 0.2029 | 29.9983 | 8812 | 0.3981 | 0.8433 | 0.8433 | 0.8411 | 0.8430 | | 0.1799 | 30.9991 | 9106 | 0.3554 | 0.8352 | 0.8352 | 0.8340 | 0.8368 | | 0.2002 | 32.0 | 9400 | 
0.3618 | 0.8310 | 0.8310 | 0.8300 | 0.8322 | | 0.1525 | 32.9974 | 9693 | 0.3629 | 0.8348 | 0.8348 | 0.8343 | 0.8381 | | 0.1663 | 33.9983 | 9987 | 0.3664 | 0.8425 | 0.8425 | 0.8410 | 0.8427 | | 0.1728 | 34.9991 | 10281 | 0.3928 | 0.8429 | 0.8429 | 0.8415 | 0.8468 | | 0.2252 | 36.0 | 10575 | 0.3842 | 0.8421 | 0.8421 | 0.8420 | 0.8443 | | 0.1554 | 36.9974 | 10868 | 0.3889 | 0.8301 | 0.8301 | 0.8294 | 0.8349 | | 0.2179 | 37.9983 | 11162 | 0.3775 | 0.8399 | 0.8399 | 0.8389 | 0.8429 | | 0.1771 | 38.9991 | 11456 | 0.3906 | 0.8306 | 0.8306 | 0.8291 | 0.8324 | | 0.2167 | 40.0 | 11750 | 0.3870 | 0.8404 | 0.8404 | 0.8382 | 0.8456 | | 0.1563 | 40.9974 | 12043 | 0.3779 | 0.8284 | 0.8284 | 0.8277 | 0.8288 | | 0.1419 | 41.9983 | 12337 | 0.4049 | 0.8340 | 0.8340 | 0.8327 | 0.8360 | | 0.2083 | 42.9991 | 12631 | 0.3800 | 0.8421 | 0.8421 | 0.8410 | 0.8427 | | 0.2185 | 44.0 | 12925 | 0.3964 | 0.8433 | 0.8433 | 0.8422 | 0.8441 | | 0.1989 | 44.9974 | 13218 | 0.3870 | 0.8340 | 0.8340 | 0.8339 | 0.8357 | | 0.1731 | 45.9983 | 13512 | 0.4206 | 0.8340 | 0.8340 | 0.8335 | 0.8357 | | 0.1831 | 46.9991 | 13806 | 0.4027 | 0.8429 | 0.8429 | 0.8422 | 0.8439 | | 0.1471 | 48.0 | 14100 | 0.4016 | 0.8318 | 0.8318 | 0.8307 | 0.8320 | | 0.1879 | 48.9974 | 14393 | 0.3877 | 0.8438 | 0.8438 | 0.8441 | 0.8468 | | 0.1775 | 49.8723 | 14650 | 0.3984 | 0.8421 | 0.8421 | 0.8408 | 0.8428 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.0a0+81ea7a4 - Datasets 2.18.0 - Tokenizers 0.19.1
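As a hedged sketch, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows; the model, dataset loading, and metric functions are omitted, and `output_dir` is a placeholder, not the run's actual path.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deit-aadhaarmask",     # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,     # effective train batch size: 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```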
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy", "recall", "f1", "precision"], "base_model": "facebook/deit-base-patch16-224", "model-index": [{"name": "deit-base-patch16-224-finetuned-ind-17-imbalanced-aadhaarmask-14687", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8309919114516816, "name": "Accuracy"}, {"type": "recall", "value": 0.8309919114516816, "name": "Recall"}, {"type": "f1", "value": 0.8298114215031374, "name": "F1"}, {"type": "precision", "value": 0.8359531770567361, "name": "Precision"}]}]}]}
Kushagra07/deit-base-patch16-224-finetuned-ind-17-imbalanced-aadhaarmask-14687
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:28:20+00:00
null
null
{}
Breezy9900/RVC
null
[ "region:us" ]
null
2024-04-25T03:30:35+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ResplendentAI/SOVL_Llama3_8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
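For a quick start from Python, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (assumptions: both packages are installed; the Q4_K_M file from the table above is chosen, but any listed quant works the same way):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant file from the repo listed in the table above.
path = hf_hub_download(
    repo_id="mradermacher/SOVL_Llama3_8B-GGUF",
    filename="SOVL_Llama3_8B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```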
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "ResplendentAI/SOVL_Llama3_8B", "quantized_by": "mradermacher"}
mradermacher/SOVL_Llama3_8B-GGUF
null
[ "transformers", "gguf", "en", "base_model:ResplendentAI/SOVL_Llama3_8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:31:02+00:00
null
null
{"license": "apache-2.0"}
Sazzz/Sanz
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-25T03:31:10+00:00
null
null
Pre-trained models of Portrait4D and Portrait4D-v2: - genhead-ffhq512: GenHead trained on FFHQ dataset at 512x512 resolution - portrait4d-genhead512: Portrait4D trained with synthetic data of GenHead at 512x512 resolution - portrait4d-static-genhead512: Portrait4D without motion-related cross-attentions, trained with synthetic data of GenHead at 512x512 resolution - portrait4d-v2-vfhq512: Portrait4D-v2 finetuned from portrait4d-static-genhead512, trained on VFHQ dataset at 512x512 resolution
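A minimal sketch for fetching these checkpoints with `huggingface_hub` (an assumption: the files listed above sit at the repository root; their exact names and extensions are not stated in this card):

```python
from huggingface_hub import snapshot_download

# Downloads every file in the repo to the local HF cache and returns the path.
local_dir = snapshot_download(repo_id="bEijuuu/Portrait4D")
print(local_dir)
```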
{"license": "mit"}
bEijuuu/Portrait4D
null
[ "license:mit", "region:us" ]
null
2024-04-25T03:31:11+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m_ft_test5 This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"tags": ["generated_from_trainer"], "model-index": [{"name": "opt-350m_ft_test5", "results": []}]}
underactuated/opt-350m_ft_test5
null
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:32:59+00:00
null
null
{}
zhaochaofeng/LLM
null
[ "region:us" ]
null
2024-04-25T03:33:31+00:00
image-classification
transformers
{}
catpin/my_awesome_food_model
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:33:35+00:00
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
jxie/whisper-tiny-speechocean
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:34:05+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Textual inversion text2image fine-tuning - BKM1804/textual_inversion_cat These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
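Until the TODO above is filled in, here is a minimal sketch of loading these weights with `diffusers` (assumptions: the learned embedding loads via the standard textual-inversion loader, and `<cat-toy>` is a hypothetical placeholder token — the actual trained token is not stated in this card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Load the textual-inversion embedding from this repo.
pipe.load_textual_inversion("BKM1804/textual_inversion_cat")

# "<cat-toy>" is a stand-in; replace it with the token the embedding was trained on.
image = pipe("a photo of a <cat-toy> on a beach").images[0]
image.save("cat_toy.png")
```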
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "textual_inversion", "diffusers-training"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true}
BKM1804/textual_inversion_cat
null
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "diffusers-training", "base_model:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-25T03:34:11+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
dahye1/lg-gemma-ko-ver3
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:34:52+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model-2 This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the eli5_category dataset. It achieves the following results on the evaluation set: - Loss: 0.0009 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0386 | 1.0 | 1311 | 0.0046 | | 0.0173 | 2.0 | 2622 | 0.0025 | | 0.0049 | 3.0 | 3933 | 0.0009 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
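As a quick sanity check, the reported evaluation loss converts to perplexity via `exp(loss)`, assuming the loss is the usual mean token-level cross-entropy reported by the Trainer:

```python
import math

eval_loss = 0.0009
# exp(0.0009) ≈ 1.0009: near-perfect next-token prediction on this eval set.
print(math.exp(eval_loss))
```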
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "FacebookAI/roberta-base", "model-index": [{"name": "my_awesome_eli5_clm-model-2", "results": []}]}
liamvbetts/my_awesome_eli5_clm-model-2
null
[ "transformers", "tensorboard", "safetensors", "roberta", "text-generation", "generated_from_trainer", "dataset:eli5_category", "base_model:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:35:21+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-toxic2nontoxic-100-50-0.003
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:37:22+00:00
null
null
{}
johnday29/AndreaDay_LoRA
null
[ "region:us" ]
null
2024-04-25T03:38:31+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
chillies/llama-3-8b-vn-legal-chat
null
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:38:41+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-1-iteration-6e1-confidence-D1-D2_smi This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.6444 - Rewards/chosen: -2.4793 - Rewards/rejected: -2.9560 - Rewards/accuracies: 0.6667 - Rewards/margins: 0.4767 - Rewards/mix Margin: 0.1749 - Logps/rejected: -481.8095 - Logps/chosen: -453.2426 - Logits/rejected: -1.7012 - Logits/chosen: -1.7287 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.17.1 - Tokenizers 0.15.1
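The card above stops short of a usage snippet, so here is a minimal, untested sketch of loading the published checkpoint for inference with `transformers`; the prompt and generation settings are illustrative assumptions, not values from the training run:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vangard703/DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-1-iteration-6e1-confidence-D1-D2_smi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# The base is Mistral-7B-Instruct-v0.2, so its chat template should apply.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```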
{"license": "apache-2.0", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-1-iteration-6e1-confidence-D1-D2_smi", "results": []}]}
vangard703/DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-1-iteration-6e1-confidence-D1-D2_smi
null
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:39:07+00:00
text-generation
transformers
{}
arya123321/arya
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:39:17+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text_summarization_finetuned This model is a fine-tuned version of [Falconsai/text_summarization](https://huggingface.co/Falconsai/text_summarization) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.2709 - Rouge1: 0.0876 - Rouge2: 0.0826 - Rougel: 0.0876 - Rougelsum: 0.0876 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.3375 | 1.0 | 4000 | 0.2961 | 0.0876 | 0.0826 | 0.0876 | 0.0876 | 19.0 | | 0.3046 | 2.0 | 8000 | 0.2776 | 0.0876 | 0.0826 | 0.0876 | 0.0876 | 19.0 | | 0.2929 | 3.0 | 12000 | 0.2726 | 0.0876 | 0.0826 | 0.0876 | 0.0876 | 19.0 | | 0.2915 | 4.0 | 16000 | 0.2709 | 0.0876 | 0.0826 | 0.0876 | 0.0876 | 19.0 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
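Since the card lists metrics but no usage example, here is a minimal sketch of running the fine-tuned summarizer; the input text and generation lengths are assumptions:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="HARDYCHEN/text_summarization_finetuned")
article = "Hugging Face hosts models, datasets, and demos for the machine learning community, and its hub has become a standard place to share fine-tuned checkpoints."
print(summarizer(article, max_length=60, min_length=5, do_sample=False)[0]["summary_text"])
```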
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "Falconsai/text_summarization", "model-index": [{"name": "text_summarization_finetuned", "results": []}]}
HARDYCHEN/text_summarization_finetuned
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Falconsai/text_summarization", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:40:12+00:00
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [motherfucker0/zhun02](https://huggingface.co/motherfucker0/zhun02) * [motherfucker0/zhun01](https://huggingface.co/motherfucker0/zhun01) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: motherfucker0/zhun01 layer_range: [0, 30] - model: motherfucker0/zhun02 layer_range: [0, 30] merge_method: slerp base_model: motherfucker0/zhun02 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.05 dtype: bfloat16 ```
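To reproduce a merge like this, mergekit's CLI consumes exactly this kind of YAML; a hedged sketch (flag details can vary between mergekit versions):

```bash
pip install mergekit  # or install from the GitHub repository linked above
# Save the YAML configuration above as config.yaml, then:
mergekit-yaml config.yaml ./merged-model
```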
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["motherfucker0/zhun02", "motherfucker0/zhun01"]}
motherfucker0/zhen02
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:motherfucker0/zhun02", "base_model:motherfucker0/zhun01", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:40:35+00:00
null
null
{}
johnday29/AndreaDay1
null
[ "region:us" ]
null
2024-04-25T03:41:42+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
SamaahKhan/Phi-before-fine-tuning
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:44:28+00:00
null
null
{}
shubhamprshr/neglectedtailsvlm
null
[ "region:us" ]
null
2024-04-25T03:44:29+00:00
text-generation
transformers
{}
shrenikb/sparsegpt50sparsitymodel
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:44:48+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ppo_zephyr4 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
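The card records the PPO hyperparameters but not the training script; below is a rough sketch of how such a run is typically wired up with trl's classic PPO API (of the Transformers 4.40 era). The mapping of the card's fields onto `PPOConfig` is approximate, and everything here is an assumption rather than the author's code:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

base = "HuggingFaceH4/mistral-7b-sft-beta"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLMWithValueHead.from_pretrained(base)  # adds the value head PPO needs

config = PPOConfig(
    learning_rate=3e-6,              # matches the card
    batch_size=256,                  # total train batch size from the card
    mini_batch_size=1,               # per-device batch size from the card
    gradient_accumulation_steps=32,  # from the card
)
trainer = PPOTrainer(config=config, model=model, tokenizer=tokenizer)
```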
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "ppo_zephyr4", "results": []}]}
vwxyzjn/ppo_zephyr4
null
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:45:13+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
chohi/phi-3-finetuned2-med-text
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:45:33+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/cloudyu/Meta-Llama-3-70B-Instruct-DPO <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.IQ3_XS.gguf) | IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.IQ3_M.gguf) | IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q5_K_M.gguf) | Q5_K_M | 50.1 | | | [PART 1](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
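For the split Q6_K and Q8_0 files in the table above, the parts have to be concatenated into one GGUF before loading, as the linked READMEs describe; a sketch, assuming llama.cpp as the runtime:

```bash
# Join the two-part quant into a single file (part order matters).
cat Meta-Llama-3-70B-Instruct-DPO.Q6_K.gguf.part1of2 \
    Meta-Llama-3-70B-Instruct-DPO.Q6_K.gguf.part2of2 \
    > Meta-Llama-3-70B-Instruct-DPO.Q6_K.gguf

# Then run it with llama.cpp's CLI (flags are illustrative).
llama-cli -m Meta-Llama-3-70B-Instruct-DPO.Q6_K.gguf -p "Hello" -n 128
```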
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "cloudyu/Meta-Llama-3-70B-Instruct-DPO", "quantized_by": "mradermacher"}
mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF
null
[ "transformers", "gguf", "en", "base_model:cloudyu/Meta-Llama-3-70B-Instruct-DPO", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:45:43+00:00
null
null
{}
yingyu1997/temp
null
[ "region:us" ]
null
2024-04-25T03:46:53+00:00
null
null
# #llama-3 #experimental #work-in-progress GGUF-IQ-Imatrix quants for @jeiku's [ResplendentAI/SOVL_Llama3_8B](https://huggingface.co/ResplendentAI/SOVL_Llama3_8B). <br> Give them some love! > [!IMPORTANT] > **Updated!** > These quants have been redone with the fixes from [llama.cpp/pull/6920](https://github.com/ggerganov/llama.cpp/pull/6920) in mind. <br> > Use **KoboldCpp version 1.64** or higher. > [!NOTE] > **Well...!** <br> > Turns out it was not just a hallucination and this model actually is pretty cool so **give it a chance!** <br> > For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for up to 12288 context sizes. > [!WARNING] > **Use the provided presets.** <br> > Compatible SillyTavern presets [here (simple)](https://huggingface.co/Lewdiculous/Model-Requests/tree/main/data/presets/cope-llama-3-0.1) or [here (Virt's roleplay)](https://huggingface.co/Virt-io/SillyTavern-Presets). > Use the latest version of KoboldCpp. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/N_1D87adbMuMlSIQ5rI3_.png)
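Given the recommendation above (KoboldCpp 1.64+, the Q4_K_M-imat quant, up to 12288 context on 8GB VRAM), a minimal launch might look like the following; the quant filename and flag names are my assumptions about KoboldCpp's CLI, not taken from this card:

```bash
python koboldcpp.py --model SOVL_Llama3_8B-Q4_K_M-imat.gguf --contextsize 12288
```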
{"license": "apache-2.0"}
Lewdiculous/SOVL_Llama3_8B-GGUF-IQ-Imatrix
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-04-25T03:46:59+00:00
text-generation
transformers
# Keiana-L3-Test4.9-8B-5 # Keep in mind that it's not yet tested; I'm unsure whether it will work as planned. Keiana-L3-Test4.9-8B-5 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Kaoeiri/Keiana-L3-Test4.8-8B-4](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.8-8B-4) * [vicgalle/Roleplay-Llama-3-8B](https://huggingface.co/vicgalle/Roleplay-Llama-3-8B) ## 🧩 Configuration ```yaml merge_method: task_arithmetic dtype: float16 base_model: jeiku/Average_Normie_v2_l3_8B models: - model: Kaoeiri/Keiana-L3-Test4.8-8B-4 parameters: weight: 1.0 - model: vicgalle/Roleplay-Llama-3-8B parameters: weight: 1.0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Kaoeiri/Keiana-L3-Test4.9-8B-5" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.8-8B-4", "vicgalle/Roleplay-Llama-3-8B"], "base_model": ["Kaoeiri/Keiana-L3-Test4.8-8B-4", "vicgalle/Roleplay-Llama-3-8B"]}
Kaoeiri/Keiana-L3-Test4.9-8B-5
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.8-8B-4", "vicgalle/Roleplay-Llama-3-8B", "conversational", "base_model:Kaoeiri/Keiana-L3-Test4.8-8B-4", "base_model:vicgalle/Roleplay-Llama-3-8B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:47:13+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-0 This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
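No usage snippet is included, so here is a minimal, untested sketch of querying the resulting IMDB classifier; the label names it returns depend on the checkpoint's config, which this card does not show:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="AlignmentResearch/robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-0")
print(clf("A surprisingly moving film with a terrific lead performance."))
```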
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-0", "results": []}]}
AlignmentResearch/robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-0
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:47:21+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
{"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.8_Seed104
null
[ "peft", "arxiv:1910.09700", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
2024-04-25T03:47:31+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
{"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Aleatoric_tiny_0.8_Seed104
null
[ "peft", "arxiv:1910.09700", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
2024-04-25T03:47:35+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # breeze_7b_lora_completion_only This model is a fine-tuned version of [MediaTek-Research/Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) on the DandinPower/ZH-Reading-Comprehension-Breeze-Instruct dataset. It achieves the following results on the evaluation set: - Loss: 0.1312 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - total_eval_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 700 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.1607 | 0.3690 | 250 | 0.1479 | | 0.1451 | 0.7380 | 500 | 0.1773 | | 0.1714 | 1.1070 | 750 | 0.1823 | | 0.1601 | 1.4760 | 1000 | 0.2629 | | 0.098 | 1.8450 | 1250 | 0.1895 | | 0.0876 | 2.2140 | 1500 | 0.1383 | | 0.0371 | 2.5830 | 1750 | 0.1606 | | 0.0713 | 2.9520 | 2000 | 0.1312 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
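Because this repository holds a PEFT LoRA adapter rather than merged weights, loading normally goes through `peft`; a sketch (untested; quantization options omitted):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "DandinPower/breeze_7b_lora_completion_only"
# Pulls the Breeze base model recorded in the adapter config, then applies the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v1_0")
```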
{"language": ["zh"], "license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "nycu-112-2-deeplearning-hw2", "generated_from_trainer"], "datasets": ["DandinPower/ZH-Reading-Comprehension-Breeze-Instruct"], "base_model": "MediaTek-Research/Breeze-7B-Instruct-v1_0", "model-index": [{"name": "breeze_7b_lora_completion_only", "results": []}]}
DandinPower/breeze_7b_lora_completion_only
null
[ "peft", "safetensors", "trl", "sft", "nycu-112-2-deeplearning-hw2", "generated_from_trainer", "zh", "dataset:DandinPower/ZH-Reading-Comprehension-Breeze-Instruct", "base_model:MediaTek-Research/Breeze-7B-Instruct-v1_0", "license:apache-2.0", "region:us" ]
null
2024-04-25T03:48:32+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-base-patch4-window7-224-in22k-finetuned-lora-ISIC-2019 This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4229 - Accuracy: 0.9008 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.858 | 0.99 | 62 | 0.7349 | 0.7339 | | 0.7403 | 2.0 | 125 | 0.6364 | 0.7762 | | 0.675 | 2.99 | 187 | 0.5777 | 0.7999 | | 0.6309 | 4.0 | 250 | 0.5701 | 0.7875 | | 0.5734 | 4.99 | 312 | 0.5294 | 0.8016 | | 0.5338 | 6.0 | 375 | 0.5418 | 0.8010 | | 0.5104 | 6.99 | 437 | 0.5057 | 0.8179 | | 0.5091 | 8.0 | 500 | 0.5010 | 0.8207 | | 0.4678 | 8.99 | 562 | 0.4757 | 0.8247 | | 0.467 | 10.0 | 625 | 0.4579 | 0.8151 | | 0.4416 | 10.99 | 687 | 0.4650 | 0.8315 | | 0.4277 | 12.0 | 750 | 0.4405 | 0.8405 | | 0.4261 | 12.99 | 812 | 0.4414 | 0.8388 | | 0.4016 | 14.0 | 875 | 0.4392 | 0.8286 | | 0.3729 | 14.99 | 937 | 0.4471 | 0.8281 | | 0.3813 | 16.0 | 1000 | 0.4155 | 0.8433 | | 0.3454 | 16.99 | 1062 | 0.4322 | 0.8365 | | 0.3639 | 18.0 | 1125 | 0.4332 | 0.8360 | | 0.3393 | 18.99 | 1187 | 0.4190 | 0.8523 | | 0.3135 | 20.0 | 1250 | 0.4166 | 0.8534 | | 0.3094 | 20.99 | 1312 | 0.4005 | 0.8563 | | 0.3263 | 22.0 | 1375 | 0.4399 | 0.8495 | | 0.3009 | 22.99 | 1437 | 0.4122 | 0.8523 | | 0.2804 | 24.0 | 1500 | 0.4293 | 0.8563 | | 0.2516 | 24.99 | 1562 | 0.4289 | 0.8563 | | 0.2763 | 26.0 | 1625 | 0.4125 | 0.8647 | | 0.2707 | 26.99 | 1687 | 0.4231 | 0.8664 | | 0.2585 | 28.0 | 1750 | 0.4210 | 0.8596 | | 0.2317 | 28.99 | 1812 | 0.4296 | 0.8602 | | 0.2118 | 30.0 | 1875 | 0.4440 | 0.8636 | | 0.2224 | 30.99 | 1937 | 0.3928 | 0.8726 | | 0.2166 | 32.0 | 2000 | 0.4246 | 0.8602 | | 0.2038 | 32.99 | 2062 | 0.4146 | 0.8709 | | 0.2183 | 34.0 | 2125 | 0.4165 | 0.8698 | | 0.22 | 34.99 | 2187 | 0.4212 | 0.8766 | | 0.206 | 36.0 | 2250 | 0.4139 | 0.8726 | | 0.199 | 36.99 | 2312 | 0.3793 | 0.8833 | | 0.1926 | 38.0 | 2375 | 0.4127 | 0.8839 | | 0.1648 | 38.99 | 2437 | 0.4296 | 0.8822 | | 0.1578 | 40.0 | 2500 | 0.4132 | 0.8833 | | 0.181 | 40.99 | 2562 | 0.4217 | 0.8777 | | 0.1735 | 42.0 | 2625 | 0.4186 | 0.8715 | | 0.1603 | 42.99 | 2687 | 0.4117 | 0.8805 | | 0.1516 | 44.0 | 2750 | 0.4250 | 0.8816 | | 0.1733 | 44.99 | 2812 | 0.3914 | 0.8844 | | 0.164 | 46.0 | 2875 | 0.4369 | 0.8828 | | 0.1519 | 46.99 | 2937 | 0.4276 | 0.8771 | | 0.1534 | 48.0 | 3000 | 0.4421 | 0.8822 | | 0.158 | 48.99 | 3062 | 0.4240 | 0.8873 | | 0.1531 | 50.0 | 3125 | 0.4250 | 0.8794 | | 0.1286 | 50.99 | 3187 | 0.4228 | 0.8732 | | 0.1396 | 52.0 | 3250 | 0.4317 | 0.8782 | | 0.1436 | 52.99 | 3312 | 0.4361 | 0.8856 | | 0.1411 | 54.0 | 3375 | 0.4402 | 0.8850 | | 0.1312 | 54.99 | 3437 | 0.4327 | 0.8884 | | 0.1359 | 56.0 | 
3500 | 0.4144 | 0.8856 | | 0.1361 | 56.99 | 3562 | 0.4181 | 0.8867 | | 0.1272 | 58.0 | 3625 | 0.4204 | 0.8878 | | 0.1222 | 58.99 | 3687 | 0.4137 | 0.8884 | | 0.1272 | 60.0 | 3750 | 0.4317 | 0.8890 | | 0.1132 | 60.99 | 3812 | 0.4351 | 0.8918 | | 0.1239 | 62.0 | 3875 | 0.4348 | 0.8828 | | 0.1188 | 62.99 | 3937 | 0.4258 | 0.8861 | | 0.1203 | 64.0 | 4000 | 0.4318 | 0.8912 | | 0.1204 | 64.99 | 4062 | 0.4055 | 0.8952 | | 0.1053 | 66.0 | 4125 | 0.4222 | 0.8918 | | 0.1187 | 66.99 | 4187 | 0.4248 | 0.8946 | | 0.1129 | 68.0 | 4250 | 0.4302 | 0.8923 | | 0.1117 | 68.99 | 4312 | 0.4149 | 0.8968 | | 0.1194 | 70.0 | 4375 | 0.4160 | 0.8895 | | 0.1003 | 70.99 | 4437 | 0.4256 | 0.8946 | | 0.1088 | 72.0 | 4500 | 0.4356 | 0.8918 | | 0.11 | 72.99 | 4562 | 0.4277 | 0.8935 | | 0.1016 | 74.0 | 4625 | 0.4095 | 0.8952 | | 0.0906 | 74.99 | 4687 | 0.4262 | 0.8935 | | 0.0969 | 76.0 | 4750 | 0.4057 | 0.8940 | | 0.111 | 76.99 | 4812 | 0.4099 | 0.8997 | | 0.091 | 78.0 | 4875 | 0.4232 | 0.8963 | | 0.1013 | 78.99 | 4937 | 0.4311 | 0.8884 | | 0.119 | 80.0 | 5000 | 0.4302 | 0.8929 | | 0.0877 | 80.99 | 5062 | 0.4369 | 0.8923 | | 0.0926 | 82.0 | 5125 | 0.4353 | 0.8968 | | 0.0969 | 82.99 | 5187 | 0.4336 | 0.8952 | | 0.092 | 84.0 | 5250 | 0.4214 | 0.8935 | | 0.0914 | 84.99 | 5312 | 0.4403 | 0.8890 | | 0.0924 | 86.0 | 5375 | 0.4285 | 0.8929 | | 0.0964 | 86.99 | 5437 | 0.4207 | 0.8968 | | 0.0916 | 88.0 | 5500 | 0.4254 | 0.8946 | | 0.0962 | 88.99 | 5562 | 0.4249 | 0.8980 | | 0.0927 | 90.0 | 5625 | 0.4242 | 0.8935 | | 0.0993 | 90.99 | 5687 | 0.4230 | 0.8985 | | 0.0893 | 92.0 | 5750 | 0.4229 | 0.8980 | | 0.0878 | 92.99 | 5812 | 0.4215 | 0.8985 | | 0.0882 | 94.0 | 5875 | 0.4262 | 0.8980 | | 0.0854 | 94.99 | 5937 | 0.4256 | 0.8974 | | 0.0795 | 96.0 | 6000 | 0.4229 | 0.9008 | | 0.0931 | 96.99 | 6062 | 0.4218 | 0.8991 | | 0.0826 | 98.0 | 6125 | 0.4235 | 0.8985 | | 0.0926 | 98.99 | 6187 | 0.4237 | 0.8985 | | 0.0829 | 99.2 | 6200 | 0.4238 | 0.8985 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.2
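The tags suggest this repo stores a LoRA adapter for the Swin classifier rather than full weights; under that assumption, inference would attach the adapter to the base model with `peft`. A sketch (the image path is a placeholder, and `peft` only restores the fine-tuned classification head if it was saved with the adapter):

```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

base_id = "microsoft/swin-base-patch4-window7-224-in22k"
adapter_id = "TriDat/swin-base-patch4-window7-224-in22k-finetuned-lora-ISIC-2019"

processor = AutoImageProcessor.from_pretrained(base_id)
base = AutoModelForImageClassification.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # applies the LoRA weights

inputs = processor(Image.open("lesion.jpg"), return_tensors="pt")  # placeholder image
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1)
print(pred)
```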
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-base-patch4-window7-224-in22k", "model-index": [{"name": "swin-base-patch4-window7-224-in22k-finetuned-lora-ISIC-2019", "results": []}]}
TriDat/swin-base-patch4-window7-224-in22k-finetuned-lora-ISIC-2019
null
[ "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-base-patch4-window7-224-in22k", "license:apache-2.0", "region:us" ]
null
2024-04-25T03:48:50+00:00
text2text-generation
transformers
{}
paulh27/cnn_aligned_smallT5_iter1
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:49:14+00:00
unconditional-image-generation
diffusers
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) Describe your model here ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('kmpartner/ddpm-celebahq-finetuned-butterflies-2epochs') image = pipeline().images[0] image ```
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
kmpartner/ddpm-celebahq-finetuned-butterflies-2epochs
null
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
null
2024-04-25T03:50:13+00:00
null
null
{}
vincentyandex/llama3_8b_chsnovel_q8_0
null
[ "region:us" ]
null
2024-04-25T03:50:21+00:00
null
null
{"license": "llama2"}
Deepak1906/Routelearn
null
[ "license:llama2", "region:us" ]
null
2024-04-25T03:51:18+00:00
null
null
{}
nndang/checkpoint_wav2vec_synthetic_50_raw_50
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-04-25T03:51:53+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
zinoli/my-awesome-model
null
[ "transformers", "safetensors", "blip", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:52:05+00:00
text-generation
transformers
# Uploaded model - **Developed by:** Buncha - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
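Since the card highlights Unsloth, the natural way to load this checkpoint quickly is Unsloth's own loader; a sketch (the sequence length is an assumption):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Buncha/mistral-7b-instruct-v0.2-bnb-4bit-medical",
    max_seq_length=2048,  # assumption; pick what your prompts need
    load_in_4bit=True,    # the base checkpoint is a bnb 4-bit quant
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```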
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
Buncha/mistral-7b-instruct-v0.2-bnb-4bit-medical
null
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:53:55+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanemo-id-ta This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.1.2 - Datasets 2.19.1.dev0 - Tokenizers 0.19.1
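For readers who want to reproduce this setup, the listed hyperparameters map onto 🤗 `TrainingArguments` roughly as sketched below; `output_dir` and anything not listed in the card are placeholders.

```python
from transformers import TrainingArguments

# Mirror of the hyperparameters stated in the card above.
training_args = TrainingArguments(
    output_dir="spanemo-id-ta",       # placeholder, not taken from the card
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults,
    # so no extra arguments are needed for them.
)
```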
{"tags": ["generated_from_trainer"], "model-index": [{"name": "spanemo-id-ta", "results": []}]}
syafiqfaray/spanemo-id-ta
null
[ "transformers", "safetensors", "spanemo", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:55:22+00:00
null
transformers
# Uploaded model - **Developed by:** hanifsyarubany10 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-7b-bnb-4bit"}
hanifsyarubany10/gemma-7b-100epochs-Unsloth-FreedomIntelligence-indo-1e-3
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:55:26+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Etluther-Code-finetuned This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0612 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.066 | 1.0 | 2174 | 2.1421 | | 1.8291 | 2.0 | 4348 | 2.0786 | | 1.6615 | 3.0 | 6522 | 2.0612 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
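A minimal generation sketch for this checkpoint (not part of the original card); the code-completion prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "elinaparajuli/Etluther-Code-finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)  # Pythia (GPT-NeoX) causal LM

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```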
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "Etluther-Code-finetuned", "results": []}]}
elinaparajuli/Etluther-Code-finetuned
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:55:27+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
yentinglin/cosmo-1b-random-init
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:56:03+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
yentinglin/cosmo-8x220M-random-init
null
[ "transformers", "safetensors", "mixtral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:56:36+00:00
null
transformers
# Uploaded model - **Developed by:** hanifsyarubany10 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-7b-bnb-4bit"}
hanifsyarubany10/gemma-7b-50epochs-Unsloth-FreedomIntelligence-indo-2e-4
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T03:57:18+00:00
null
null
{}
Teemaa/Teema
null
[ "region:us" ]
null
2024-04-25T03:57:58+00:00
null
null
{}
johnday29/corgy_dog_LoRA
null
[ "region:us" ]
null
2024-04-25T03:58:03+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-GPT2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2332 - Accuracy: 0.39 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.13.3
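A hedged inference sketch (added here, not in the original card); the label names returned depend on the model's config and may be generic `LABEL_0`/`LABEL_1` identifiers.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="dhrubochowdhury5758778/finetuning-sentiment-model-GPT2",
)
# Example input; the class-to-sentiment mapping is not documented in the card.
print(classifier("The movie was surprisingly good."))
```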
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "finetuning-sentiment-model-GPT2", "results": []}]}
dhrubochowdhury5758778/finetuning-sentiment-model-GPT2
null
[ "transformers", "pytorch", "gpt2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:58:26+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-1 This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
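As an illustration of scoring one IMDB-style review with this classifier (a sketch, not from the card); the meaning of each output class is not documented, so any label interpretation is an assumption.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "AlignmentResearch/robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("A tense, well-acted thriller.", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # per-class probabilities; class semantics are undocumented
```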
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-1", "results": []}]}
AlignmentResearch/robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-1
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T03:58:35+00:00
null
null
{}
johnday29/Andrea_LoRA
null
[ "region:us" ]
null
2024-04-25T03:59:17+00:00
text-generation
null
# DavidAU/Meta-Llama-3-8B-Instruct-hf-Q8_0-GGUF This model was converted to GGUF format from [`Undi95/Meta-Llama-3-8B-Instruct-hf`](https://huggingface.co/Undi95/Meta-Llama-3-8B-Instruct-hf) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Undi95/Meta-Llama-3-8B-Instruct-hf) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Meta-Llama-3-8B-Instruct-hf-Q8_0-GGUF --model meta-llama-3-8b-instruct-hf.Q8_0.gguf -p "The meaning of life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Meta-Llama-3-8B-Instruct-hf-Q8_0-GGUF --model meta-llama-3-8b-instruct-hf.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m meta-llama-3-8b-instruct-hf.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
DavidAU/Meta-Llama-3-8B-Instruct-hf-Q8_0-GGUF
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-25T03:59:43+00:00
null
transformers
### Model Description This model was fine-tuned from microsoft/Phi-3-mini-4k-instruct-gguf. Using instruction tuning and parameter-efficient tuning on a smaller model, this fine-tune is intended to improve dialogue policy. The instruction dataset was generated from the MultiWOZ dataset.
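A hypothetical usage sketch (the card gives no code): it assumes the fine-tune preserved the Phi-3 instruct chat template and that the weights load through standard `transformers` classes; the hotel-booking prompt simply echoes the MultiWOZ domain.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SamaahKhan/Phi-after-fine-tuning"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# MultiWOZ-style dialogue turn, formatted with the tokenizer's chat template.
messages = [{"role": "user", "content": "Book me a cheap hotel in the city centre."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```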
{"library_name": "transformers", "tags": []}
SamaahKhan/Phi-after-fine-tuning
null
[ "transformers", "safetensors", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:01:01+00:00
null
transformers
# DavidAU/Meta-Llama-3-8B-Instruct-function-calling-Q8_0-GGUF This model was converted to GGUF format from [`Trelis/Meta-Llama-3-8B-Instruct-function-calling`](https://huggingface.co/Trelis/Meta-Llama-3-8B-Instruct-function-calling) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Trelis/Meta-Llama-3-8B-Instruct-function-calling) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Meta-Llama-3-8B-Instruct-function-calling-Q8_0-GGUF --model meta-llama-3-8b-instruct-function-calling.Q8_0.gguf -p "The meaning of life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Meta-Llama-3-8B-Instruct-function-calling-Q8_0-GGUF --model meta-llama-3-8b-instruct-function-calling.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m meta-llama-3-8b-instruct-function-calling.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "llama 3", "llama-cpp", "gguf-my-repo"], "datasets": ["Trelis/function_calling_v3"]}
DavidAU/Meta-Llama-3-8B-Instruct-function-calling-Q8_0-GGUF
null
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "llama 3", "llama-cpp", "gguf-my-repo", "en", "dataset:Trelis/function_calling_v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:01:19+00:00
null
null
{}
sharkeyboi/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
null
[ "region:us" ]
null
2024-04-25T04:02:36+00:00
null
null
{}
Arie0809/HCK-015-TEST
null
[ "region:us" ]
null
2024-04-25T04:03:24+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
yentinglin/cosmo-334M-random-init
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T04:03:33+00:00
null
null
{}
kshelat/test-roberta-reddit-finetune
null
[ "region:us" ]
null
2024-04-25T04:03:37+00:00
text-generation
transformers
{}
shrenikb/sparsegpt25sparsitymodel
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T04:04:01+00:00
text-generation
null
# DavidAU/Meta-Llama-3-8B-Instruct-64k-Q8_0-GGUF This model was converted to GGUF format from [`NurtureAI/Meta-Llama-3-8B-Instruct-64k`](https://huggingface.co/NurtureAI/Meta-Llama-3-8B-Instruct-64k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NurtureAI/Meta-Llama-3-8B-Instruct-64k) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Meta-Llama-3-8B-Instruct-64k-Q8_0-GGUF --model meta-llama-3-8b-instruct-64k.Q8_0.gguf -p "The meaning of life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Meta-Llama-3-8B-Instruct-64k-Q8_0-GGUF --model meta-llama-3-8b-instruct-64k.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m meta-llama-3-8b-instruct-64k.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
DavidAU/Meta-Llama-3-8B-Instruct-64k-Q8_0-GGUF
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-25T04:04:38+00:00
audio-classification
transformers
{}
sharkeyboi/ast-finetuned-audioset-10-10-0.4593-finetuned-frog_dataset
null
[ "transformers", "safetensors", "audio-spectrogram-transformer", "audio-classification", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:05:25+00:00
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [kalytm/nous-2](https://huggingface.co/kalytm/nous-2) * [coffie3/sx1011](https://huggingface.co/coffie3/sx1011) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: coffie3/sx1011 layer_range: [0, 24] - model: kalytm/nous-2 layer_range: [0, 24] merge_method: slerp base_model: kalytm/nous-2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
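A loading sketch for the merged checkpoint (added; not in the original card). `bfloat16` matches the `dtype` declared in the merge configuration above; the prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Sumail/Ame19"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```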
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["kalytm/nous-2", "coffie3/sx1011"]}
Sumail/Ame19
null
[ "transformers", "safetensors", "stablelm", "text-generation", "mergekit", "merge", "conversational", "base_model:kalytm/nous-2", "base_model:coffie3/sx1011", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:05:30+00:00
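The SLERP method named in the Sumail/Ame19 record above blends two checkpoints along the great-circle arc between their weight vectors instead of linearly. A minimal sketch of that interpolation in PyTorch; the tensors and the blend factor `t` are illustrative stand-ins, not values taken from the actual merge.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two parameter vectors.
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    blended = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return blended.reshape(a.shape).to(a.dtype)

# Illustrative blend of one layer's weights at t = 0.5.
w_a = torch.randn(24, 24)  # stand-in for a kalytm/nous-2 tensor
w_b = torch.randn(24, 24)  # stand-in for a coffie3/sx1011 tensor
merged = slerp(0.5, w_a, w_b)
```

The per-filter `t` schedules in the YAML above (`self_attn`, `mlp`) simply vary this blend factor by layer and module type.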
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # leagaleasy-llama-3-adapter This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "leagaleasy-llama-3-adapter", "results": []}]}
Nithin29/leagaleasy-llama-3-adapter
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-04-25T04:05:43+00:00
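The Nithin29/leagaleasy-llama-3-adapter record above ships only PEFT adapter weights, so they must be attached to the base checkpoint at load time. A minimal sketch, assuming access to the gated `meta-llama/Meta-Llama-3-8B-Instruct` base and a standard LoRA adapter layout; the prompt string is illustrative.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "Nithin29/leagaleasy-llama-3-adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the fine-tuned LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Summarize this clause in plain English:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```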
null
null
# T3qm7Experiment26-7B T3qm7Experiment26-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 - model: nlpguy/T3QM7 - model: yam-peleg/Experiment26-7B merge_method: model_stock base_model: mistralai/Mistral-7B-v0.1 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/T3qm7Experiment26-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
automerger/T3qm7Experiment26-7B
null
[ "merge", "mergekit", "lazymergekit", "automerger", "license:apache-2.0", "region:us" ]
null
2024-04-25T04:05:58+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-410m_mz-132_WordLength_n-its-10 This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-410m", "model-index": [{"name": "robust_llm_pythia-410m_mz-132_WordLength_n-its-10", "results": []}]}
AlignmentResearch/robust_llm_pythia-410m_mz-132_WordLength_n-its-10
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-410m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T04:06:24+00:00
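The robust_llm classifier above publishes no usage snippet; a minimal sketch through the standard `text-classification` pipeline follows. The label names it returns depend on how the classification head was configured, which the card does not state.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-410m_mz-132_WordLength_n-its-10",
)
print(clf("A short example sentence to classify."))
```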
image-to-text
transformers
<div align="center"> <img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/> [![Generic badge](https://img.shields.io/badge/GitHub-%20XTuner-black.svg)](https://github.com/InternLM/xtuner) </div> ## Model llava-phi-3-mini is a LLaVA model fine-tuned from [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) with [ShareGPT4V-PT](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) and [InternVL-SFT](https://github.com/OpenGVLab/InternVL/tree/main/internvl_chat#prepare-training-datasets) by [XTuner](https://github.com/InternLM/xtuner). **Note: This model is in HuggingFace LLaVA format.** Resources: - GitHub: [xtuner](https://github.com/InternLM/xtuner) - Official LLaVA format model: [xtuner/llava-phi-3-mini](https://huggingface.co/xtuner/llava-phi-3-mini) - GGUF LLaVA model: [xtuner/llava-phi-3-mini-gguf](https://huggingface.co/xtuner/llava-phi-3-mini-gguf) - XTuner LLaVA format model: [xtuner/llava-phi-3-mini-xtuner](https://huggingface.co/xtuner/llava-phi-3-mini-xtuner) ## Details | Model | Visual Encoder | Projector | Resolution | Pretraining Strategy | Fine-tuning Strategy | Pretrain Dataset | Fine-tune Dataset | Pretrain Epoch | Fine-tune Epoch | | :-------------------- | ------------------: | --------: | ---------: | ---------------------: | ------------------------: | ------------------------: | -----------------------: | -------------- | --------------- | | LLaVA-v1.5-7B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, Frozen ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) | 1 | 1 | | LLaVA-Llama-3-8B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) | 1 | 1 | | LLaVA-Llama-3-8B-v1.1 | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | ShareGPT4V-PT (1246K) | InternVL-SFT (1268K) | 1 | 1 | | **LLaVA-Phi-3-mini** | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, Full ViT | ShareGPT4V-PT (1246K) | InternVL-SFT (1268K) | 1 | 2 | ## Results <div align="center"> <img src="https://github.com/InternLM/xtuner/assets/36994684/78524f65-260d-4ae3-a687-03fc5a19dcbb" alt="Image" width="500" /> </div> | Model | MMBench Test (EN) | MMMU Val | SEED-IMG | AI2D Test | ScienceQA Test | HallusionBench aAcc | POPE | GQA | TextVQA | MME | MMStar | | :-------------------- | :---------------: | :-------: | :------: | :-------: | :------------: | :-----------------: | :--: | :--: | :-----: | :------: | :----: | | LLaVA-v1.5-7B | 66.5 | 35.3 | 60.5 | 54.8 | 70.4 | 44.9 | 85.9 | 62.0 | 58.2 | 1511/348 | 30.3 | | LLaVA-Llama-3-8B | 68.9 | 36.8 | 69.8 | 60.9 | 73.3 | 47.3 | 87.2 | 63.5 | 58.0 | 1506/295 | 38.2 | | LLaVA-Llama-3-8B-v1.1 | 72.3 | 37.1 | 70.1 | 70.0 | 72.9 | 47.7 | 86.4 | 62.6 | 59.0 | 1469/349 | 45.1 | | **LLaVA-Phi-3-mini** | 69.2 | 41.4 | 70.0 | 69.3 | 73.7 | 49.8 | 87.3 | 61.5 | 57.8 | 1477/313 | 43.7 | ## Quickstart ### Chat by `pipeline` ```python from transformers import pipeline from PIL import Image import requests model_id = "xtuner/llava-phi-3-mini-hf" pipe = pipeline("image-to-text", model=model_id, device=0) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg" image = Image.open(requests.get(url, stream=True).raw) prompt = "<|user|>\n<image>\nWhat does the label 15 represent? 
(1) lava (2) core (3) tunnel (4) ash cloud<|end|>\n<|assistant|>\n" outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200}) print(outputs) >>> [{'generated_text': '\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud (1) lava'}] ``` ### Chat by pure `transformers` ```python import requests from PIL import Image import torch from transformers import AutoProcessor, LlavaForConditionalGeneration model_id = "xtuner/llava-phi-3-mini-hf" prompt = "<|user|>\n<image>\nWhat are these?<|end|>\n<|assistant|>\n" image_file = "http://images.cocodataset.org/val2017/000000039769.jpg" model = LlavaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, ).to(0) processor = AutoProcessor.from_pretrained(model_id) raw_image = Image.open(requests.get(image_file, stream=True).raw) inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16) output = model.generate(**inputs, max_new_tokens=200, do_sample=False) print(processor.decode(output[0][2:], skip_special_tokens=True)) >>> What are these? These are two cats sleeping on a pink couch. ``` ### Reproduce Please refer to [docs](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/phi3_mini_4k_instruct_clip_vit_large_p14_336#readme). ## Citation ```bibtex @misc{2023xtuner, title={XTuner: A Toolkit for Efficiently Fine-tuning LLM}, author={XTuner Contributors}, howpublished = {\url{https://github.com/InternLM/xtuner}}, year={2023} } ```
{"datasets": ["Lin-Chen/ShareGPT4V"], "pipeline_tag": "image-to-text"}
xtuner/llava-phi-3-mini-hf
null
[ "transformers", "safetensors", "llava", "pretraining", "image-to-text", "dataset:Lin-Chen/ShareGPT4V", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-25T04:07:10+00:00
text-generation
null
# DavidAU/openbuddy-llama3-8b-v21.1-8k-Q8_0-GGUF This model was converted to GGUF format from [`OpenBuddy/openbuddy-llama3-8b-v21.1-8k`](https://huggingface.co/OpenBuddy/openbuddy-llama3-8b-v21.1-8k) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/OpenBuddy/openbuddy-llama3-8b-v21.1-8k) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/openbuddy-llama3-8b-v21.1-8k-Q8_0-GGUF --model openbuddy-llama3-8b-v21.1-8k.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/openbuddy-llama3-8b-v21.1-8k-Q8_0-GGUF --model openbuddy-llama3-8b-v21.1-8k.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m openbuddy-llama3-8b-v21.1-8k.Q8_0.gguf -n 128 ```
{"language": ["zh", "en", "fr", "de", "ja", "ko", "it", "fi"], "license": "other", "tags": ["llama-3", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "https://llama.meta.com/llama3/license/"}
DavidAU/openbuddy-llama3-8b-v21.1-8k-Q8_0-GGUF
null
[ "gguf", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "zh", "en", "fr", "de", "ja", "ko", "it", "fi", "license:other", "region:us" ]
null
2024-04-25T04:07:48+00:00
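The GGUF records above show only the llama.cpp CLI and server. A quantized file like this can also be driven from Python; a minimal sketch using the `llama-cpp-python` bindings, assuming the `.gguf` file has already been downloaded to the working directory.

```python
from llama_cpp import Llama

# Assumed local filename for the downloaded quantized checkpoint.
llm = Llama(model_path="openbuddy-llama3-8b-v21.1-8k.Q8_0.gguf", n_ctx=2048)

out = llm("The meaning of life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```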
null
null
{}
lemousehunter/bge-m3-onnx
null
[ "onnx", "region:us" ]
null
2024-04-25T04:08:07+00:00
text-generation
transformers
# Keiana-L3-Test5.0-8B-6 # Keep in mind that it's not yet tested, and I'm unsure if it would work as planned. Keiana-L3-Test5.0-8B-6 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Kaoeiri/Keiana-L3-Test4.7-8B-3](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.7-8B-3) * [vicgalle/Roleplay-Llama-3-8B](https://huggingface.co/vicgalle/Roleplay-Llama-3-8B) ## 🧩 Configuration ```yaml merge_method: task_arithmetic dtype: float16 base_model: jeiku/Average_Normie_v2_l3_8B models: - model: Kaoeiri/Keiana-L3-Test4.7-8B-3 parameters: weight: 1.0 - model: vicgalle/Roleplay-Llama-3-8B parameters: weight: 1.0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Kaoeiri/Keiana-L3-Test5.0-8B-6" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "vicgalle/Roleplay-Llama-3-8B"], "base_model": ["Kaoeiri/Keiana-L3-Test4.7-8B-3", "vicgalle/Roleplay-Llama-3-8B"]}
Kaoeiri/Keiana-L3-Test5.0-8B-6
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "vicgalle/Roleplay-Llama-3-8B", "conversational", "base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3", "base_model:vicgalle/Roleplay-Llama-3-8B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T04:10:08+00:00
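The `task_arithmetic` method in the Kaoeiri configuration above merges by adding weighted task vectors (fine-tuned weights minus base weights) onto the base model. A minimal sketch of that arithmetic for a single tensor; the tensors are illustrative stand-ins for the checkpoints named in the YAML.

```python
import torch

def task_arithmetic(base: torch.Tensor, finetuned: list[torch.Tensor], weights: list[float]) -> torch.Tensor:
    """Merge by summing weighted task vectors (finetuned - base) onto the base tensor."""
    merged = base.clone()
    for w, ft in zip(weights, finetuned):
        merged += w * (ft - base)  # each task vector scaled by its YAML weight
    return merged

base = torch.randn(16, 16)  # stand-in for a jeiku/Average_Normie_v2_l3_8B tensor
fts = [base + 0.1 * torch.randn(16, 16) for _ in range(2)]  # stand-ins for the two merged models
merged = task_arithmetic(base, fts, weights=[1.0, 1.0])
```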
text-classification
allennlp
{"language": ["en"], "license": "openrail", "library_name": "allennlp", "datasets": ["HuggingFaceFW/fineweb"], "metrics": ["accuracy"], "pipeline_tag": "text-classification"}
Heziah/create-bot.com
null
[ "allennlp", "text-classification", "en", "dataset:HuggingFaceFW/fineweb", "license:openrail", "region:us" ]
null
2024-04-25T04:10:09+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trainer_output_dir This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 224 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-cased", "model-index": [{"name": "trainer_output_dir", "results": []}]}
sunithapillai/trainer_output_dir
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:11:21+00:00
null
transformers
# DavidAU/OrpoLlama-3-8B-Q8_0-GGUF This model was converted to GGUF format from [`mlabonne/OrpoLlama-3-8B`](https://huggingface.co/mlabonne/OrpoLlama-3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mlabonne/OrpoLlama-3-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/OrpoLlama-3-8B-Q8_0-GGUF --model orpollama-3-8b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/OrpoLlama-3-8B-Q8_0-GGUF --model orpollama-3-8b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m orpollama-3-8b.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["orpo", "llama 3", "rlhf", "sft", "llama-cpp", "gguf-my-repo"], "datasets": ["mlabonne/orpo-dpo-mix-40k"]}
DavidAU/OrpoLlama-3-8B-Q8_0-GGUF
null
[ "transformers", "gguf", "orpo", "llama 3", "rlhf", "sft", "llama-cpp", "gguf-my-repo", "en", "dataset:mlabonne/orpo-dpo-mix-40k", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:11:44+00:00
null
transformers
# ColBERT-X for English-Chinese/Persian/Russian MLIR using Multilingual Translate-Distill ## MLIR Model Setting - Query language: English - Query length: 32 token max - Document language: Chinese/Persian/Russian - Document length: 180 token max (please use MaxP to aggregate the passage score if needed) ## Model Description Multilingual Translate-Distill is a training technique that produces state-of-the-art MLIR dense retrieval models through translation and distillation. `plaidx-large-neuclir-mtd-mix-passages-mt5xxl-engeng` is trained with KL-Divergence from the `mt5xxl` MonoT5 reranker [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k) inferenced on English MS MARCO training queries and passages. The teacher scores can be found in [`hltcoe/tdist-msmarco-scores`](https://huggingface.co/datasets/hltcoe/tdist-msmarco-scores/blob/main/t53b-monot5-msmarco-engeng.jsonl.gz). ### Training Parameters - learning rate: 5e-6 - update steps: 200,000 - nway (number of passages per query): 6 (randomly selected from 50; 2 if using `round-robin-entries`, see below) - per device batch size (number of query-passage sets): 8 - training GPU: 8 NVIDIA V100 with 32 GB memory ### Mixing Strategies - `mix-passages`: languages are randomly assigned to the 6 sampled passages for a given query during training. - `mix-entries`: all passages in a given query-passage set are randomly assigned to the same language. - `round-robin-entries`: for each query, the query-passage set is repeated `n` times to iterate through all languages. ## Usage To properly load ColBERT-X models from the Huggingface Hub, please use the following version of PLAID-X. ```bash pip install "PLAID-X>=0.3.1" ``` The following code snippet loads the model through the Huggingface API. ```python from colbert.modeling.checkpoint import Checkpoint from colbert.infra import ColBERTConfig Checkpoint('hltcoe/plaidx-large-neuclir-mtd-mix-passages-mt5xxl-engeng', colbert_config=ColBERTConfig()) ``` For the full tutorial, please refer to the [PLAID-X Jupyter Notebook](https://colab.research.google.com/github/hltcoe/clir-tutorial/blob/main/notebooks/clir_tutorial_plaidx.ipynb), which is part of the [SIGIR 2023 CLIR Tutorial](https://github.com/hltcoe/clir-tutorial). ## BibTeX entry and Citation Info Please cite the following two papers if you use the model. ```bibtex @inproceedings{mtt, title = {Neural Approaches to Multilingual Information Retrieval}, author = {Dawn Lawrie and Eugene Yang and Douglas W Oard and James Mayfield}, booktitle = {Proceedings of the 45th European Conference on Information Retrieval (ECIR)}, year = {2023}, doi = {10.1007/978-3-031-28244-7_33}, url = {https://arxiv.org/abs/2209.01335} } ``` ```bibtex @inproceedings{mtd, author = {Eugene Yang and Dawn Lawrie and James Mayfield}, title = {Distillation for Multilingual Information Retrieval}, booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) (Short Paper) (Accepted)}, year = {2024} } ```
{"language": ["en", "zh", "fa", "ru"], "license": "mit", "tags": ["clir", "colbertx", "plaidx", "xlm-roberta-large"], "datasets": ["ms_marco", "hltcoe/tdist-msmarco-scores"], "task_categories": ["text-retrieval", "information-retrieval"], "task_ids": ["passage-retrieval", "cross-language-retrieval"]}
hltcoe/plaidx-large-neuclir-mtd-mix-passages-mt5xxl-engeng
null
[ "transformers", "pytorch", "xlm-roberta", "clir", "colbertx", "plaidx", "xlm-roberta-large", "en", "zh", "fa", "ru", "dataset:ms_marco", "dataset:hltcoe/tdist-msmarco-scores", "arxiv:2209.01335", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:11:47+00:00
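Once a checkpoint from the hltcoe records is loaded, ranking follows ColBERT's late-interaction (MaxSim) rule: each query token takes its maximum similarity over document tokens and the maxima are summed. A minimal sketch, assuming PLAID-X keeps upstream ColBERT's `queryFromText`/`docFromText` helpers and that they return per-token embedding tensors; the query and document strings are illustrative.

```python
import torch
from colbert.infra import ColBERTConfig
from colbert.modeling.checkpoint import Checkpoint

ckpt = Checkpoint(
    "hltcoe/plaidx-large-neuclir-mtd-mix-passages-mt5xxl-engeng",
    colbert_config=ColBERTConfig(),
)

# Per-token embeddings; queryFromText/docFromText are assumed to behave as in
# upstream ColBERT, returning (batch, tokens, dim) tensors.
Q = ckpt.queryFromText(["what causes volcanic eruptions"])
D = ckpt.docFromText(["火山爆发是由岩浆上升引起的。"])

# Late interaction (MaxSim): max over document tokens, summed over query tokens.
sim = torch.einsum("qd,td->qt", Q[0], D[0])
score = sim.max(dim=1).values.sum()
print(float(score))
```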
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Nithin29/leagaleasy-llama-3-adapterandmodel
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T04:11:50+00:00
null
transformers
# ColBERT-X for English-German/Spanish/French MLIR using Multilingual Translate-Distill ## MLIR Model Setting - Query language: English - Query length: 32 token max - Document language: German/Spanish/French - Document length: 180 token max (please use MaxP to aggregate the passage score if needed) ## Model Description Multilingual Translate-Distill is a training technique that produces state-of-the-art MLIR dense retrieval models through translation and distillation. `plaidx-large-clef-mtd-mix-passages-mt5xxl-engeng` is trained with KL-Divergence from the `mt5xxl` MonoT5 reranker [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k) inferenced on English MS MARCO training queries and passages. The teacher scores can be found in [`hltcoe/tdist-msmarco-scores`](https://huggingface.co/datasets/hltcoe/tdist-msmarco-scores/blob/main/t53b-monot5-msmarco-engeng.jsonl.gz). ### Training Parameters - learning rate: 5e-6 - update steps: 200,000 - nway (number of passages per query): 6 (randomly selected from 50; 2 if using `round-robin-entries`, see below) - per device batch size (number of query-passage sets): 8 - training GPU: 8 NVIDIA V100 with 32 GB memory ### Mixing Strategies - `mix-passages`: languages are randomly assigned to the 6 sampled passages for a given query during training. - `mix-entries`: all passages in a given query-passage set are randomly assigned to the same language. - `round-robin-entries`: for each query, the query-passage set is repeated `n` times to iterate through all languages. ## Usage To properly load ColBERT-X models from the Huggingface Hub, please use the following version of PLAID-X. ```bash pip install "PLAID-X>=0.3.1" ``` The following code snippet loads the model through the Huggingface API. ```python from colbert.modeling.checkpoint import Checkpoint from colbert.infra import ColBERTConfig Checkpoint('hltcoe/plaidx-large-clef-mtd-mix-passages-mt5xxl-engeng', colbert_config=ColBERTConfig()) ``` For the full tutorial, please refer to the [PLAID-X Jupyter Notebook](https://colab.research.google.com/github/hltcoe/clir-tutorial/blob/main/notebooks/clir_tutorial_plaidx.ipynb), which is part of the [SIGIR 2023 CLIR Tutorial](https://github.com/hltcoe/clir-tutorial). ## BibTeX entry and Citation Info Please cite the following two papers if you use the model. ```bibtex @inproceedings{mtt, title = {Neural Approaches to Multilingual Information Retrieval}, author = {Dawn Lawrie and Eugene Yang and Douglas W Oard and James Mayfield}, booktitle = {Proceedings of the 45th European Conference on Information Retrieval (ECIR)}, year = {2023}, doi = {10.1007/978-3-031-28244-7_33}, url = {https://arxiv.org/abs/2209.01335} } ``` ```bibtex @inproceedings{mtd, author = {Eugene Yang and Dawn Lawrie and James Mayfield}, title = {Distillation for Multilingual Information Retrieval}, booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) (Short Paper) (Accepted)}, year = {2024} } ```
{"language": ["en", "de", "es", "fr"], "license": "mit", "tags": ["clir", "colbertx", "plaidx", "xlm-roberta-large"], "datasets": ["ms_marco", "hltcoe/tdist-msmarco-scores"], "task_categories": ["text-retrieval", "information-retrieval"], "task_ids": ["passage-retrieval", "cross-language-retrieval"]}
hltcoe/plaidx-large-clef-mtd-mix-passages-mt5xxl-engeng
null
[ "transformers", "pytorch", "xlm-roberta", "clir", "colbertx", "plaidx", "xlm-roberta-large", "en", "de", "es", "fr", "dataset:ms_marco", "dataset:hltcoe/tdist-msmarco-scores", "arxiv:2209.01335", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:12:07+00:00
null
transformers
# ColBERT-X for English-Chinese/Persian/Russian MLIR using Multilingual Translate-Distill ## MLIR Model Setting - Query language: English - Query length: 32 token max - Document language: Chinese/Persian/Russian - Document length: 180 token max (please use MaxP to aggregate the passage score if needed) ## Model Description Multilingual Translate-Distill is a training technique that produces state-of-the-art MLIR dense retrieval models through translation and distillation. `plaidx-large-neuclir-mtd-mix-entries-mt5xxl-engeng` is trained with KL-Divergence from the `mt5xxl` MonoT5 reranker [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k) inferenced on English MS MARCO training queries and passages. The teacher scores can be found in [`hltcoe/tdist-msmarco-scores`](https://huggingface.co/datasets/hltcoe/tdist-msmarco-scores/blob/main/t53b-monot5-msmarco-engeng.jsonl.gz). ### Training Parameters - learning rate: 5e-6 - update steps: 200,000 - nway (number of passages per query): 6 (randomly selected from 50; 2 if using `round-robin-entries`, see below) - per device batch size (number of query-passage sets): 8 - training GPU: 8 NVIDIA V100 with 32 GB memory ### Mixing Strategies - `mix-passages`: languages are randomly assigned to the 6 sampled passages for a given query during training. - `mix-entries`: all passages in a given query-passage set are randomly assigned to the same language. - `round-robin-entries`: for each query, the query-passage set is repeated `n` times to iterate through all languages. ## Usage To properly load ColBERT-X models from the Huggingface Hub, please use the following version of PLAID-X. ```bash pip install "PLAID-X>=0.3.1" ``` The following code snippet loads the model through the Huggingface API. ```python from colbert.modeling.checkpoint import Checkpoint from colbert.infra import ColBERTConfig Checkpoint('hltcoe/plaidx-large-neuclir-mtd-mix-entries-mt5xxl-engeng', colbert_config=ColBERTConfig()) ``` For the full tutorial, please refer to the [PLAID-X Jupyter Notebook](https://colab.research.google.com/github/hltcoe/clir-tutorial/blob/main/notebooks/clir_tutorial_plaidx.ipynb), which is part of the [SIGIR 2023 CLIR Tutorial](https://github.com/hltcoe/clir-tutorial). ## BibTeX entry and Citation Info Please cite the following two papers if you use the model. ```bibtex @inproceedings{mtt, title = {Neural Approaches to Multilingual Information Retrieval}, author = {Dawn Lawrie and Eugene Yang and Douglas W Oard and James Mayfield}, booktitle = {Proceedings of the 45th European Conference on Information Retrieval (ECIR)}, year = {2023}, doi = {10.1007/978-3-031-28244-7_33}, url = {https://arxiv.org/abs/2209.01335} } ``` ```bibtex @inproceedings{mtd, author = {Eugene Yang and Dawn Lawrie and James Mayfield}, title = {Distillation for Multilingual Information Retrieval}, booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) (Short Paper) (Accepted)}, year = {2024} } ```
{"language": ["en", "zh", "fa", "ru"], "license": "mit", "tags": ["clir", "colbertx", "plaidx", "xlm-roberta-large"], "datasets": ["ms_marco", "hltcoe/tdist-msmarco-scores"], "task_categories": ["text-retrieval", "information-retrieval"], "task_ids": ["passage-retrieval", "cross-language-retrieval"]}
hltcoe/plaidx-large-neuclir-mtd-mix-entries-mt5xxl-engeng
null
[ "transformers", "pytorch", "xlm-roberta", "clir", "colbertx", "plaidx", "xlm-roberta-large", "en", "zh", "fa", "ru", "dataset:ms_marco", "dataset:hltcoe/tdist-msmarco-scores", "arxiv:2209.01335", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:12:21+00:00
null
transformers
# ColBERT-X for English-German/Spanish/French MLIR using Multilingual Translate-Distill ## MLIR Model Setting - Query language: English - Query length: 32 token max - Document language: German/Spanish/French - Document length: 180 token max (please use MaxP to aggregate the passage score if needed) ## Model Description Multilingual Translate-Distill is a training technique that produces state-of-the-art MLIR dense retrieval models through translation and distillation. `plaidx-large-clef-mtd-mix-entries-mt5xxl-engeng` is trained with KL-Divergence from the `mt5xxl` MonoT5 reranker [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k) inferenced on English MS MARCO training queries and passages. The teacher scores can be found in [`hltcoe/tdist-msmarco-scores`](https://huggingface.co/datasets/hltcoe/tdist-msmarco-scores/blob/main/t53b-monot5-msmarco-engeng.jsonl.gz). ### Training Parameters - learning rate: 5e-6 - update steps: 200,000 - nway (number of passages per query): 6 (randomly selected from 50; 2 if using `round-robin-entries`, see below) - per device batch size (number of query-passage sets): 8 - training GPU: 8 NVIDIA V100 with 32 GB memory ### Mixing Strategies - `mix-passages`: languages are randomly assigned to the 6 sampled passages for a given query during training. - `mix-entries`: all passages in a given query-passage set are randomly assigned to the same language. - `round-robin-entries`: for each query, the query-passage set is repeated `n` times to iterate through all languages. ## Usage To properly load ColBERT-X models from the Huggingface Hub, please use the following version of PLAID-X. ```bash pip install "PLAID-X>=0.3.1" ``` The following code snippet loads the model through the Huggingface API. ```python from colbert.modeling.checkpoint import Checkpoint from colbert.infra import ColBERTConfig Checkpoint('hltcoe/plaidx-large-clef-mtd-mix-entries-mt5xxl-engeng', colbert_config=ColBERTConfig()) ``` For the full tutorial, please refer to the [PLAID-X Jupyter Notebook](https://colab.research.google.com/github/hltcoe/clir-tutorial/blob/main/notebooks/clir_tutorial_plaidx.ipynb), which is part of the [SIGIR 2023 CLIR Tutorial](https://github.com/hltcoe/clir-tutorial). ## BibTeX entry and Citation Info Please cite the following two papers if you use the model. ```bibtex @inproceedings{mtt, title = {Neural Approaches to Multilingual Information Retrieval}, author = {Dawn Lawrie and Eugene Yang and Douglas W Oard and James Mayfield}, booktitle = {Proceedings of the 45th European Conference on Information Retrieval (ECIR)}, year = {2023}, doi = {10.1007/978-3-031-28244-7_33}, url = {https://arxiv.org/abs/2209.01335} } ``` ```bibtex @inproceedings{mtd, author = {Eugene Yang and Dawn Lawrie and James Mayfield}, title = {Distillation for Multilingual Information Retrieval}, booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) (Short Paper) (Accepted)}, year = {2024} } ```
{"language": ["en", "de", "es", "fr"], "license": "mit", "tags": ["clir", "colbertx", "plaidx", "xlm-roberta-large"], "datasets": ["ms_marco", "hltcoe/tdist-msmarco-scores"], "task_categories": ["text-retrieval", "information-retrieval"], "task_ids": ["passage-retrieval", "cross-language-retrieval"]}
hltcoe/plaidx-large-clef-mtd-mix-entries-mt5xxl-engeng
null
[ "transformers", "pytorch", "xlm-roberta", "clir", "colbertx", "plaidx", "xlm-roberta-large", "en", "de", "es", "fr", "dataset:ms_marco", "dataset:hltcoe/tdist-msmarco-scores", "arxiv:2209.01335", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:12:36+00:00
null
transformers
# ColBERT-X for English-Chinese/Persian/Russian MLIR using Multilingual Translate-Distill ## MLIR Model Setting - Query language: English - Query length: 32 token max - Document language: Chinese/Persian/Russian - Document length: 180 token max (please use MaxP to aggregate the passage score if needed) ## Model Description Multilingual Translate-Distill is a training technique that produces state-of-the-art MLIR dense retrieval models through translation and distillation. `plaidx-large-neuclir-mtd-round-robin-entries-mt5xxl-engeng` is trained with KL-Divergence from the `mt5xxl` MonoT5 reranker [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k) inferenced on English MS MARCO training queries and passages. The teacher scores can be found in [`hltcoe/tdist-msmarco-scores`](https://huggingface.co/datasets/hltcoe/tdist-msmarco-scores/blob/main/t53b-monot5-msmarco-engeng.jsonl.gz). ### Training Parameters - learning rate: 5e-6 - update steps: 200,000 - nway (number of passages per query): 6 (randomly selected from 50; 2 if using `round-robin-entries`, see below) - per device batch size (number of query-passage sets): 8 - training GPU: 8 NVIDIA V100 with 32 GB memory ### Mixing Strategies - `mix-passages`: languages are randomly assigned to the 6 sampled passages for a given query during training. - `mix-entries`: all passages in a given query-passage set are randomly assigned to the same language. - `round-robin-entries`: for each query, the query-passage set is repeated `n` times to iterate through all languages. ## Usage To properly load ColBERT-X models from the Huggingface Hub, please use the following version of PLAID-X. ```bash pip install "PLAID-X>=0.3.1" ``` The following code snippet loads the model through the Huggingface API. ```python from colbert.modeling.checkpoint import Checkpoint from colbert.infra import ColBERTConfig Checkpoint('hltcoe/plaidx-large-neuclir-mtd-round-robin-entries-mt5xxl-engeng', colbert_config=ColBERTConfig()) ``` For the full tutorial, please refer to the [PLAID-X Jupyter Notebook](https://colab.research.google.com/github/hltcoe/clir-tutorial/blob/main/notebooks/clir_tutorial_plaidx.ipynb), which is part of the [SIGIR 2023 CLIR Tutorial](https://github.com/hltcoe/clir-tutorial). ## BibTeX entry and Citation Info Please cite the following two papers if you use the model. ```bibtex @inproceedings{mtt, title = {Neural Approaches to Multilingual Information Retrieval}, author = {Dawn Lawrie and Eugene Yang and Douglas W Oard and James Mayfield}, booktitle = {Proceedings of the 45th European Conference on Information Retrieval (ECIR)}, year = {2023}, doi = {10.1007/978-3-031-28244-7_33}, url = {https://arxiv.org/abs/2209.01335} } ``` ```bibtex @inproceedings{mtd, author = {Eugene Yang and Dawn Lawrie and James Mayfield}, title = {Distillation for Multilingual Information Retrieval}, booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) (Short Paper) (Accepted)}, year = {2024} } ```
{"language": ["en", "zh", "fa", "ru"], "license": "mit", "tags": ["clir", "colbertx", "plaidx", "xlm-roberta-large"], "datasets": ["ms_marco", "hltcoe/tdist-msmarco-scores"], "task_categories": ["text-retrieval", "information-retrieval"], "task_ids": ["passage-retrieval", "cross-language-retrieval"]}
hltcoe/plaidx-large-neuclir-mtd-round-robin-entries-mt5xxl-engeng
null
[ "transformers", "pytorch", "xlm-roberta", "clir", "colbertx", "plaidx", "xlm-roberta-large", "en", "zh", "fa", "ru", "dataset:ms_marco", "dataset:hltcoe/tdist-msmarco-scores", "arxiv:2209.01335", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:12:49+00:00
null
transformers
# ColBERT-X for English-German/Spanish/French MLIR using Multilingual Translate-Distill ## MLIR Model Setting - Query language: English - Query length: 32 token max - Document language: German/Spanish/French - Document length: 180 token max (please use MaxP to aggregate the passage score if needed) ## Model Description Multilingual Translate-Distill is a training technique that produces state-of-the-art MLIR dense retrieval models through translation and distillation. `plaidx-large-clef-mtd-round-robin-entries-mt5xxl-engeng` is trained with KL-Divergence from the `mt5xxl` MonoT5 reranker [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k) inferenced on English MS MARCO training queries and passages. The teacher scores can be found in [`hltcoe/tdist-msmarco-scores`](https://huggingface.co/datasets/hltcoe/tdist-msmarco-scores/blob/main/t53b-monot5-msmarco-engeng.jsonl.gz). ### Training Parameters - learning rate: 5e-6 - update steps: 200,000 - nway (number of passages per query): 6 (randomly selected from 50; 2 if using `round-robin-entries`, see below) - per device batch size (number of query-passage sets): 8 - training GPU: 8 NVIDIA V100 with 32 GB memory ### Mixing Strategies - `mix-passages`: languages are randomly assigned to the 6 sampled passages for a given query during training. - `mix-entries`: all passages in a given query-passage set are randomly assigned to the same language. - `round-robin-entries`: for each query, the query-passage set is repeated `n` times to iterate through all languages. ## Usage To properly load ColBERT-X models from the Huggingface Hub, please use the following version of PLAID-X. ```bash pip install "PLAID-X>=0.3.1" ``` The following code snippet loads the model through the Huggingface API. ```python from colbert.modeling.checkpoint import Checkpoint from colbert.infra import ColBERTConfig Checkpoint('hltcoe/plaidx-large-clef-mtd-round-robin-entries-mt5xxl-engeng', colbert_config=ColBERTConfig()) ``` For the full tutorial, please refer to the [PLAID-X Jupyter Notebook](https://colab.research.google.com/github/hltcoe/clir-tutorial/blob/main/notebooks/clir_tutorial_plaidx.ipynb), which is part of the [SIGIR 2023 CLIR Tutorial](https://github.com/hltcoe/clir-tutorial). ## BibTeX entry and Citation Info Please cite the following two papers if you use the model. ```bibtex @inproceedings{mtt, title = {Neural Approaches to Multilingual Information Retrieval}, author = {Dawn Lawrie and Eugene Yang and Douglas W Oard and James Mayfield}, booktitle = {Proceedings of the 45th European Conference on Information Retrieval (ECIR)}, year = {2023}, doi = {10.1007/978-3-031-28244-7_33}, url = {https://arxiv.org/abs/2209.01335} } ``` ```bibtex @inproceedings{mtd, author = {Eugene Yang and Dawn Lawrie and James Mayfield}, title = {Distillation for Multilingual Information Retrieval}, booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) (Short Paper) (Accepted)}, year = {2024} } ```
{"language": ["en", "de", "es", "fr"], "license": "mit", "tags": ["clir", "colbertx", "plaidx", "xlm-roberta-large"], "datasets": ["ms_marco", "hltcoe/tdist-msmarco-scores"], "task_categories": ["text-retrieval", "information-retrieval"], "task_ids": ["passage-retrieval", "cross-language-retrieval"]}
hltcoe/plaidx-large-clef-mtd-round-robin-entries-mt5xxl-engeng
null
[ "transformers", "pytorch", "xlm-roberta", "clir", "colbertx", "plaidx", "xlm-roberta-large", "en", "de", "es", "fr", "dataset:ms_marco", "dataset:hltcoe/tdist-msmarco-scores", "arxiv:2209.01335", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:13:00+00:00
null
null
{}
nndang/checkpoint_NLU_slot_sampling_25_04
null
[ "tensorboard", "region:us" ]
null
2024-04-25T04:13:04+00:00
text-generation
null
# DavidAU/opus-v1.2-llama-3-8b-Q8_0-GGUF This model was converted to GGUF format from [`dreamgen/opus-v1.2-llama-3-8b`](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/opus-v1.2-llama-3-8b-Q8_0-GGUF --model opus-v1.2-llama-3-8b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/opus-v1.2-llama-3-8b-Q8_0-GGUF --model opus-v1.2-llama-3-8b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m opus-v1.2-llama-3-8b.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "tags": ["unsloth", "axolotl", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation"}
DavidAU/opus-v1.2-llama-3-8b-Q8_0-GGUF
null
[ "gguf", "unsloth", "axolotl", "llama-cpp", "gguf-my-repo", "text-generation", "en", "license:cc-by-nc-nd-4.0", "region:us" ]
null
2024-04-25T04:13:58+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
drilemon/gpt2-gptq-4bit
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-25T04:14:29+00:00
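The tags on the drilemon record above mark a 4-bit GPTQ-quantized GPT-2. With `optimum` and a GPTQ kernel backend such as `auto-gptq` installed, recent `transformers` versions can load such checkpoints directly; a minimal sketch, assuming the repo carries the usual GPTQ quantization config alongside the weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "drilemon/gpt2-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers reads the GPTQ quantization config stored alongside the weights.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```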
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Manoj21k/llama2-7b-finetuned
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:14:47+00:00
text-classification
transformers
{}
Kefah/Altibbi_Questions_Specialties
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T04:15:33+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] (A hedged chat-style usage sketch is appended after this card.) ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
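As above, the quick-start section is empty; a minimal chat-style sketch for this Llama-3-based checkpoint follows. The model id comes from the record below; that the tokenizer ships a chat template is an assumption based on the record's "conversational" tag, and the message content and generation settings are illustrative.

```python
# Minimal chat-style sketch for chlee10/T3Q-Llama3-8B-sft1.0-dpo1.0.
# Assumption: the tokenizer provides a chat template (suggested by the
# "conversational" tag); the prompt and settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chlee10/T3Q-Llama3-8B-sft1.0-dpo1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "What does DPO fine-tuning change?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```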
{"license": "apache-2.0", "library_name": "transformers"}
chlee10/T3Q-Llama3-8B-sft1.0-dpo1.0
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T04:15:33+00:00