
## Description

This repo contains GGUF format model files for OFA-Sys/InsTag.

## About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
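As a quick orientation, GGUF files like the ones in this repo can be run locally with llama.cpp's CLI. The snippet below is a hypothetical sketch: the `.gguf` filename is an assumption (use the actual file from this repo), and the prompt format InsTagger expects may differ.

```shell
# Hypothetical example: run a quantized GGUF file with llama.cpp's CLI.
# The filename below is illustrative; substitute the real .gguf file
# downloaded from this repo.
./main -m ./instagger-q4_0.gguf \
       -p "Please tag the following instruction: ..." \
       -n 256 \
       --temp 0.0
```

Greedy decoding (`--temp 0.0`) is a reasonable choice here, since tagging is a deterministic classification-style task rather than open-ended generation.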

InsTagger is a tool for automatically providing instruction tags by distilling tagging results from InsTag.

InsTag analyzes supervised fine-tuning (SFT) data used to align LLMs with human preference. For local tagging deployment, we release InsTagger, fine-tuned on InsTag results, to tag the queries in SFT data. Guided by these tags, we sample a 6K subset of open-source SFT data to fine-tune LLaMA and LLaMA-2; the resulting models, TagLM-13B-v1.0 and TagLM-13B-v2.0, outperform many open-source LLMs on MT-Bench.
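InsTag reports tags as structured output, so local use of InsTagger typically involves a small post-processing step. The sketch below shows one way to do that; the JSON shape (a list of objects with `tag` and `explanation` fields) is an assumption based on the InsTag tagging format, not a guarantee of this model's exact output.

```python
import json

def extract_tags(model_output: str) -> list[str]:
    """Parse an InsTag-style JSON response into a flat list of tag names.

    Assumes the model emits a JSON list such as:
    [{"tag": "code generation", "explanation": "..."}]
    This shape is an assumption; inspect real model output and adjust.
    """
    records = json.loads(model_output)
    return [record["tag"] for record in records]

# Example with a made-up response string:
sample = '[{"tag": "code generation", "explanation": "asks for Python code"}]'
print(extract_tags(sample))  # ['code generation']
```

In practice you may also want to wrap `json.loads` in a try/except, since small quantized models occasionally emit malformed JSON.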

## Model Description

- Model type: auto-regressive language model
- Language(s) (NLP): English
- License: apache-2.0
- Finetuned from model: LLaMA-2

### Model Sources

- Repository: https://github.com/OFA-Sys/InsTag
- Paper: Arxiv
- Demo: ModelScope Demo

## Model Details

- Format: GGUF
- Model size: 6.74B params
- Architecture: llama
- Available quantizations: 4-bit, 5-bit
