Overview
Llama 2 13b fine-tune using https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1
See the previous llama 65b model card for info: https://hf.co/jondurbin/airoboros-65b-gpt4-1.4
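Below is a minimal inference sketch using the transformers library. The repository id and the prompt string are assumptions (the exact prompt format is described in the linked 65b card); substitute the actual values for this model.

```python
# Minimal inference sketch (assumed repo id and prompt format, not stated on this card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jondurbin/airoboros-l2-13b-gpt4-1.4.1"  # assumption based on the 65b card naming

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 13b model on a single GPU
    device_map="auto",
)

# Example prompt in the airoboros chat style (see the linked 65b card for the canonical format).
prompt = "A chat between a curious user and an assistant.\nUSER: What is the capital of France?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```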
Licence and usage restrictions
This model was built on llama-2, which has a proprietary/custom Meta license.
- See the LICENSE.txt file attached for the original license, along with USE_POLICY.md which was also provided by Meta.
The data used to fine-tune the llama-2-13b-hf model was generated by GPT-4 via OpenAI API calls, using airoboros
- The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that competes with OpenAI
- what does "compete" actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted material, or material with otherwise incompatible licensing, in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct, released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact that you must comply with the original Meta license) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.