---
|
library_name: peft |
|
base_model: meta-llama/Llama-2-7b-hf |
|
license: llama2 |
|
datasets: |
|
- freQuensy23/toxic-answers |
|
language: |
|
- en |
|
--- |
|
|
|
# Model Card for freQuensy23/toxic-llama2
|
|
|
A LoRA adapter for meta-llama/Llama-2-7b-hf, fine-tuned with PEFT on the freQuensy23/toxic-answers dataset.
|
|
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
|
|
|
|
|
|
|
- **Developed by:** t.me/freQuensy23

- **Model type:** Causal language model (Llama 2, LoRA adapter)

- **Language(s) (NLP):** English

- **License:** Llama 2 license

- **Finetuned from model:** meta-llama/Llama-2-7b-hf
|
|
|
|
## How to Get Started with the Model |
|
|
|
```python
import peft
import transformers

# Load the LoRA adapter; the base Llama-2 weights are resolved automatically.
model = peft.AutoPeftModelForCausalLM.from_pretrained('freQuensy23/toxic-llama2')
# The tokenizer comes from the base model, not from the adapter repo.
tokenizer = transformers.AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')

# Generate a completion for the prompt (rather than just echoing it back).
input_ids = tokenizer('User: What is 1 + 8?\nBot:', return_tensors='pt').input_ids
output_ids = model.generate(input_ids, max_new_tokens=50)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```
|
|
|
|
|
|
## Training Details |
|
|
|
### Training Data |
|
The model was fine-tuned on the [freQuensy23/toxic-answers](https://huggingface.co/datasets/freQuensy23/toxic-answers) dataset.
|
|
|
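The training hyperparameters are not documented. A minimal LoRA fine-tuning setup with `peft` and `transformers` might look like the sketch below; the rank, alpha, and target modules are illustrative assumptions, not the values actually used for this adapter:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Assumed LoRA hyperparameters -- the actual training config is undocumented.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

dataset = load_dataset("freQuensy23/toxic-answers")
# ...train with transformers.Trainer or a custom loop...
```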
|
|
|
|
|
|
|
|
## Environmental Impact |
|
|
|
|
|
|
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). |
|
|
|
- **Hardware Type:** NVIDIA A100

- **Hours used:** 1

- **Cloud Provider:** Yandex Cloud

- **Compute Region:** Moscow

- **Carbon Emitted:** 11 g CO2eq
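Such estimates follow the calculator's simple energy-times-intensity formula. The sketch below shows the arithmetic; the wattage, PUE, and grid-intensity values are illustrative assumptions, not the inputs behind the figure reported above:

```python
# Emissions formula used by the ML Impact calculator:
#   emissions = GPU power (kW) x hours x PUE x grid intensity (g CO2eq/kWh)
# All numbers below are illustrative assumptions.
gpu_power_kw = 0.4         # assumed A100 board power
hours = 1.0                # from this card
pue = 1.1                  # assumed datacenter power usage effectiveness
intensity_g_per_kwh = 475  # assumed global-average grid carbon intensity

emissions_g = gpu_power_kw * hours * pue * intensity_g_per_kwh
print(f"{emissions_g:.0f} g CO2eq")  # ~209 g under these assumptions
```

A low-carbon grid (e.g. one dominated by hydro or nuclear) can bring the same run down by an order of magnitude, which is why reported figures vary so widely between regions.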
|
|
|
## Model Card Contact |
|
|
|
- Telegram: t.me/freQuensy23

- GitHub: github.com/freQuensy23-coder

- Email: [email protected]
|
|
|
|
|
### Framework versions |
|
|
|
- PEFT 0.7.1 |