|
--- |
|
license: apache-2.0 |
|
--- |
|
|
|
# Model Card for bling-falcon-1b-0.1
|
|
|
|
|
|
bling-falcon-1b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct-trained on top of a falcon-rw-1b base model.
|
|
|
BLING models are fine-tuned with distilled, high-quality custom instruct datasets, targeted at a specific subset of instruct tasks, with the objective of providing a high-quality instruct model that is 'inference-ready' on a CPU laptop, even without any advanced quantization optimizations.
|
|
|
### **PERFORMANCE on BASIC RAG TEST DATASET** |
|
|
|
| Model | Params (B) | Sourcing | GPU/CPU | Output Tokens | Out as % of Input | Process Time (secs) | Score (0-100) |
| :---------- | :--------: | :----: | :-----: | :---------: | :-------: | :--------: | :-------: |
| gpt-4 | <=1000 | Closed | Multi-GPU | 2665 | 10.53% | 183.8 | 100 |
| gpt-3.5-turbo-instruct | <=175 | Closed | Multi-GPU | 2621 | 11.49% | 62.7 | 100 |
| claude-instant-v1 | <=50 | Closed | Multi-GPU | 6337 | 26.50% | 154 | 100 |
| aib-read-gpt | 7 | Closed | GPU | 1964 | 9.30% | 114 | 96 |
| **bling_falcon-1b-0.1** | **1.3** | **Open** | **CPU** | **3204** | **14.55%** | **696** | **77** |
| bling_pythia-1.4b-0.1 | 1.4 | Open | CPU | 2589 | 11.75% | 593.5 | 65 |
| bling_pythia-1b-0.1 | 1.0 | Open | CPU | 2753 | 12.49% | 428 | 59 |
| bling_cerebras-1.3b | 1.3 | Open | CPU | 3202 | 20.01% | 690.1 | 52 |
| bling_pythia_410m | 0.41 | NA | CPU | 2349 | 10.66% | 189 | 36 |
| bling_cerebras_590m | 0.59 | NA | CPU | 4407 | 20.01% | 400.8 | 30 |
|
|
|
For more details on this evaluation, please see the dataset **llmware/rag_instruct_test_dataset_0.1** and the accompanying [blog post](https://medium.com/@darrenoberst/evaluating-llm-performance-in-rag-instruct-use-cases-083dc272a31d).
|
|
|
|
|
### Model Description |
|
|
|
|
|
|
- **Developed by:** llmware |
|
- **Model type:** Falcon, instruct-trained decoder
|
- **Language(s) (NLP):** English |
|
- **License:** Apache 2.0 |
|
- **Fine-tuned from model:** tiiuae/falcon-rw-1b
|
|
|
## Uses |
|
|
|
|
|
|
The intended use of BLING models is two-fold: |
|
|
|
1. Provide high-quality instruct models that can run on a laptop for local testing. We have found them extremely useful when building a proof-of-concept, or when working with sensitive enterprise data that must be closely guarded, especially in RAG use cases.
|
|
|
2. Push the state of the art for smaller instruction-following models in the sub-7B parameter range, especially 1B-3B, as single-purpose automation tools for specific tasks through targeted fine-tuning datasets and focused "instruction" tasks.
|
|
|
|
|
### Direct Use |
|
|
|
|
|
|
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services and legal and regulatory industries with complex information sources. Rather than trying to be "all things to all people," BLING models focus on a narrower set of instructions more suitable to a ~1B parameter GPT model.
|
|
|
BLING is ideal for rapid prototyping and testing, and for running an end-to-end workflow locally on a laptop without having to send sensitive information over an Internet-based API.
|
|
|
The first BLING models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage: provide a text passage context, ask questions, and get clear, fact-based responses, as in the short example below.
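
For illustration, a closed-context exchange in this style might look like the following sketch (the passage, question, and expected answer are hypothetical, not drawn from the test dataset):

```python
# Hypothetical closed-context example (illustrative only).
context = "The lease term begins on January 1, 2021 and ends on December 31, 2025."
question = "What is the end date of the lease?"

# A well-behaved response is a short, fact-based answer grounded in the passage,
# e.g., "December 31, 2025".
```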
|
|
|
## Bias, Risks, and Limitations |
|
|
|
|
|
|
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms. |
|
|
|
|
|
## How to Get Started with the Model |
|
|
|
The fastest way to get started with BLING is through direct import in transformers: |
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-falcon-1b-0.1")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-falcon-1b-0.1")
```
|
|
|
|
|
The BLING model was fine-tuned with a simple `<human>` and `<bot>` wrapper, so to get the best results, wrap inference entries as:
|
|
|
full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\:" |
|
|
|
The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:
|
|
|
1. Text Passage Context, and |
|
2. Specific question or instruction based on the text passage |
|
|
|
To get the best results, package `my_prompt` as follows:
|
|
|
```python
# text_passage and question are your own strings (see the sketch below).
my_prompt = text_passage + "\n" + question
```
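
Putting the pieces together, here is a minimal end-to-end sketch (the passage, question, and generation parameters are illustrative defaults, not settings recommended by llmware):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-falcon-1b-0.1")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-falcon-1b-0.1")

# Hypothetical passage and question for illustration.
text_passage = "The agreement is governed by the laws of the State of New York."
question = "Which state's laws govern the agreement?"

# Package the context and question, then apply the <human>/<bot> wrapper.
my_prompt = text_passage + "\n" + question
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

inputs = tokenizer(full_prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        pad_token_id=tokenizer.eos_token_id,  # avoid a pad-token warning
    )

# Decode only the tokens generated after the prompt.
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response.strip())
```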
|
|
|
|
|
## Citation
|
|
|
This BLING model was built on top of a Falcon base model; for more information about the Falcon model, please see the paper referenced below:
|
|
|
```bibtex
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```
|
|
|
|
|
## Model Card Contact |
|
|
|
Darren Oberst & llmware team |
|
|
|
Please reach out anytime if you are interested in this project and would like to participate and work with us! |
|
|
|
|
|
|
|
|