---
license: other
datasets:
- Rardilit/Panther-dataset_v1
language:
- en
metrics:
- accuracy
- bleu
- code_eval
- chrf
- cer
library_name: transformers
tags:
- LLM
- Panther
- Transformers
- llama
- PyTorch
- Tensorboard
- Text Generation
---
<h1 style='text-align: center '>Panther</h1>
<h2 style='text-align: center '><em>Rardilit Large Open-access Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
![Panther Logo](./logo.jpg)
Version 1.0 / 29.May.2023
# Model Card for Panther
<!-- Provide a quick summary of what the model is/does. -->
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Recommendations](#recommendations)
5. [Training Details](#training-details)
## Model Details
### Model Description
*This section provides information for anyone who wants to know about the model.*
- **Developed by:** Rardilit ([website](https://www.rardilit.web.app))
  - All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
- **Model Type:** Transformer-based Language Model
- **Version:** 1.0.0
- **Languages:** English
- **License:** Panther License v1.0 ([link](https://www.rardilit.web.app/panther-license.html))
- **Release Date Estimate:** Monday, 29.May.2023
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
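As a quick-start illustration, here is a minimal sketch of loading the adapter for generation with `transformers` and `peft`; the base-model and adapter repo IDs are assumptions for illustration, not names confirmed by this card:

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_id = "decapoda-research/llama-7b-hf"   # assumed LLaMA-7B checkpoint
adapter_id = "Rardilit/Panther_v1"          # assumed adapter repo ID

tokenizer = LlamaTokenizer.from_pretrained(base_id)
base = LlamaForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",                      # requires the accelerate package
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Explain what a language model is.", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```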
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
#### **Out-of-scope Uses**
Using the model in high-stakes settings is out of scope for this model. The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but may not be correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- Deception
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
#### Others Affected (Stakeholders)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
## Bias, Risks, and Limitations
*This section identifies foreseeable harms and misunderstandings.*
The model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain personal information
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
### Recommendations
*This section provides information on warnings and potential mitigations.*
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
## Training Details
This repo contains a low-rank adapter (LoRA) for LLaMA-7B with just 4,194,304 trainable parameters,
fine-tuned on the [Rardilit/Panther-dataset_v1](https://huggingface.co/datasets/Rardilit/Panther-dataset_v1) dataset of 20k prompts and responses.
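As a sanity check, that parameter count matches LoRA with the settings listed below applied to LLaMA-7B; the model dimensions used here are the standard LLaMA-7B values (hidden size 4096, 32 decoder layers), which are assumptions not stated in this card:

```python
# Trainable-parameter count for LoRA r=8 on q_proj and v_proj of LLaMA-7B.
# LLaMA-7B dimensions (hidden size 4096, 32 layers) are assumed, not from this card.
hidden, layers, r, modules = 4096, 32, 8, 2
per_module = 2 * hidden * r                # A (r x hidden) plus B (hidden x r)
total = layers * modules * per_module
print(total)                               # 4194304
```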
This version of the weights was trained with the following hyperparameters (a `peft`/`transformers` configuration sketch follows the list):
- Epochs: 1 (load from best epoch)
- LORA_R = 8
- LORA_ALPHA = 16
- LORA_DROPOUT= 0.05
- LORA_TARGET_MODULES = ["q_proj", "v_proj"]
- BATCH_SIZE = 300
- MICRO_BATCH_SIZE = 4
- GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE
- LEARNING_RATE = 3e-4
- TRAIN_STEPS = 10
- warmup_steps = 10
- logging_steps = 1
- fp16 = True
- optim = "adamw_torch"
- eval_steps=4
- save_steps=8
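A minimal sketch of how these values map onto `peft`/`transformers` objects; only the numbers come from the list above, while the object layout, `output_dir`, `bias`, and `evaluation_strategy` are assumptions:

```python
from peft import LoraConfig
from transformers import TrainingArguments

BATCH_SIZE = 300
MICRO_BATCH_SIZE = 4
GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE  # = 75

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",                      # assumed; not stated in this card
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="./panther-lora",      # assumed output path
    per_device_train_batch_size=MICRO_BATCH_SIZE,
    gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,
    learning_rate=3e-4,
    max_steps=10,                     # TRAIN_STEPS
    warmup_steps=10,
    logging_steps=1,
    fp16=True,
    optim="adamw_torch",
    evaluation_strategy="steps",      # assumed, so eval_steps takes effect
    eval_steps=4,
    save_steps=8,
)
```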
#### Training Time
Training this model on 1 x NVIDIA T4 (16 GB VRAM) took approx. 45 min.