---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
language:
- en
library_name: transformers
---

# Model Card for `dlite-v1-774m`

AI Squared's `dlite-v1-774m` ([blog post](https://medium.com/ai-squared/introducing-dlite-a-lightweight-chatgpt-like-model-based-on-dolly-deaa49402a1f)) is a large language model derived from OpenAI's large [GPT-2](https://huggingface.co/gpt2-large) model and fine-tuned on a single GPU on a corpus of 50k records ([Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html)) to help it exhibit chat-based capabilities.

While `dlite-v1-774m` is **not a state-of-the-art model**, we believe that the level of interactivity that can be achieved on such a small model trained so cheaply is important to showcase, as it demonstrates that creating powerful AI capabilities may be much more accessible than previously thought.

### Model Description

- **Developed by:** AI Squared, Inc.
- **Shared by:** AI Squared, Inc.
- **Model type:** Large Language Model
- **Language(s) (NLP):** EN
- **License:** Apache v2.0
- **Finetuned from model:** GPT-2

## Bias, Risks, and Limitations

**`dlite-v1-774m` is not a state-of-the-art language model.** It is an experimental technology and is not designed for use in any environment other than research. The model can sometimes exhibit undesired behaviors, including but not limited to factual inaccuracies, biases, offensive responses, toxicity, and hallucinations. Just as with any other LLM, we advise users to exercise good judgment when applying this technology.

## Usage

The code below shows how to use `dlite-v1-774m` in the way in which it was trained. While the model can be used "out of the box" with the `transformers` library, using the function defined below to create a response from the model will achieve better results.

### Load Model and Tokenizer from this Repository Using the `transformers` Package

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import re

model_id = 'aisquared/dlite-v1-774m'

tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side='left')
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map='auto'
)
```

### Create the Prompt Format and Other Variables

```python
# Prompt template matching the instruction format the model was trained on
PROMPT = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""

# Markers the model was trained to emit around its response
END_KEY = '### End'
RESPONSE_KEY = '### Response:\n'
```

### Create a Function to Retrieve a Response

```python
def create_response(
        instruction,
        model,
        tokenizer,
        do_sample=True,
        max_new_tokens=256,
        top_p=0.92,
        top_k=0,
        **kwargs
):
    """Create a response from the model by using a formatted prompt"""
    input_ids = tokenizer(
        PROMPT.format(instruction=instruction), return_tensors="pt"
    ).input_ids

    # Make sure the inputs live on the same device as the model
    input_ids = input_ids.to(model.device)

    gen_tokens = model.generate(
        input_ids,
        pad_token_id=tokenizer.pad_token_id,
        do_sample=do_sample,
        max_new_tokens=max_new_tokens,
        top_p=top_p,
        top_k=top_k,
        **kwargs,
    )
    decoded = tokenizer.batch_decode(gen_tokens)[0]

    # The response appears after "### Response:". The model has been trained
    # to append "### End" at the end.
    m = re.search(r"#+\s*Response:\s*(.+?)#+\s*End", decoded, flags=re.DOTALL)

    response = None
    if m:
        response = m.group(1).strip()
    else:
        # The model might not generate the "### End" sequence before reaching
        # the max token limit. In this case, return everything after
        # "### Response:".
m = re.search(r"#+\s*Response:\s*(.+)", decoded, flags=re.DOTALL) if m: response = m.group(1).strip() else: pass return response ```