marclove committed on
Commit a45376a
1 Parent(s): 2b9363a

Update README.md

Files changed (1)
  1. README.md +8 -3
README.md CHANGED
@@ -12,14 +12,19 @@ pipeline_tag: conversational
 
  ‼️ This model is still in a beta state. It will be retrained and updated at a future date, at which point its prompting format may change. If you need to depend on it in its current state, please create your own fork and provide attribution to this original repository. ‼️
 
- Llama Functions is a further fine-tuned version of [Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), using (1) a 50/50 mix of synthetic OpenAPI function calls and (2) chat completions from the [Guanaco subset of the OASST1 dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). 13B & 70B versions are coming soon.
+ Llama Functions is a further fine-tuned version of [Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), using a 50/50 mix of:
+
+ 1. Synthetic OpenAPI function calls with their corresponding natural language invocations, and
+ 2. Chat completions from the [Guanaco subset of the OASST1 dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
+
+ 13B & 70B versions are coming soon.
 
  The function calling dataset is mixed with Guanaco in order to maintain accuracy and helpfulness when calling a function is not the appropriate response. Guidelines for use, more detailed information regarding limitations, and eval stats for the 7B, 13B, and 70B models are forthcoming.
 
  There is no existing evaluation benchmark to measure the accuracy of function calls, which makes it hard during training to identify when we've struck the best balance between function calling accuracy and chat model performance. I'm working on a custom HF eval for this purpose; until then, I have chosen to mix the two datasets in equal parts so that the eval & test stats during fine-tuning serve as a proxy for performance on both tasks. The current checkpoint is at 1000 steps, when eval & test loss reached their lowest point.
 
  - **Developed by:** Marc Love
- - **License:** [Creative Commons' Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license](https://creativecommons.org/licenses/by-sa/4.0/)
+ - **License:** [Llama 2 Community License](https://ai.meta.com/llama/license/)
  - **Finetuned from:** [Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
 
  ### Model Sources [optional]
@@ -31,7 +36,7 @@ There is no existing evaluation benchmark to measure the accuracy of function ca
 
  ## Uses
 
- Please note that the synthetic data portion of the dataset was generated using OpenAI models, which may or may not impact your ability to use the dataset, depending on your use case.
+ **Please note:** The synthetic data portion of the dataset was generated using OpenAI models. This model is released under the Llama 2 Community License, per the Llama 2 Community License Agreement. Since it was fine-tuned on data I generated with OpenAI models, this model is released for research purposes only. I have licensed the associated `llama_functions` dataset under the [Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license](https://creativecommons.org/licenses/by-sa/4.0/). Whether you may use that data to train your own models is your responsibility to determine.
 
  ## Bias, Risks, and Limitations
 
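
The updated card does not yet include a usage snippet, so here is a minimal sketch of how one might load the checkpoint with the Hugging Face Transformers library. The repository id (`marclove/llama_functions`) and the plain-text prompt are assumptions for illustration only; the card stresses that the prompting format is still in beta, so check the repository for the current format and for how function schemas should be passed.

```python
# Minimal sketch: load the beta checkpoint and run a single generation.
# The repo id below is an assumption for illustration; the prompt layout
# is a placeholder, since the model card notes the prompting format may change.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "marclove/llama_functions"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical request that could trigger a function call; in practice the
# function schema and its serialization depend on the (beta) prompt format.
prompt = "What's the weather in Berlin right now?"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```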