---
library_name: transformers
tags: []
---

# Model Card for Llama-2-7b-hf Fine-Tuned on OpenAssistant-Guanaco

This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the timdettmers/openassistant-guanaco dataset.


## Model Details

### Model Description


This is a fine-tuned version of the meta-llama/Llama-2-7b-hf model, trained with Parameter-Efficient Fine-Tuning (PEFT) using Low-Rank Adaptation (LoRA) on the Intel Gaudi 2 AI accelerator. It can be used for a range of text generation tasks, including chatbots, content creation, and other NLP applications.


- **Developed by:** Keerthi Nalabotu
- **Model type:** LLM
- **Language(s) (NLP):** English
- **Finetuned from model:** meta-llama/Llama-2-7b-hf

## Uses

This model can be used for text generation tasks such as:

- Chatbots
- Automated content creation
- Text completion and augmentation
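For chat-style use, prompts are typically formatted the way the training data was. A minimal prompt-building helper, assuming the `### Human:` / `### Assistant:` turn convention used by the timdettmers/openassistant-guanaco dataset (adjust if your adapter was trained with a different template):

```python
def build_prompt(turns):
    """Format a list of (role, text) turns into a Guanaco-style prompt.

    Assumes the '### Human:' / '### Assistant:' convention of the
    timdettmers/openassistant-guanaco dataset.
    """
    labels = {"human": "### Human:", "assistant": "### Assistant:"}
    parts = [f"{labels[role]} {text.strip()}" for role, text in turns]
    # End with the assistant tag so the model continues the conversation.
    parts.append("### Assistant:")
    return "\n".join(parts)


prompt = build_prompt([("human", "What is LoRA in one sentence?")])
print(prompt)
```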


### Out-of-Scope Use

- Use in real-time applications where latency is critical
- Use in highly sensitive domains without thorough evaluation and testing


## How to Get Started with the Model

Use the code below to get started with the model.

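A minimal inference sketch using transformers and PEFT. The adapter id below is a placeholder, not a confirmed repo name, and the base model is gated (it requires approved access to meta-llama/Llama-2-7b-hf):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"                  # gated; requires access approval
adapter_id = "your-username/llama2-7b-guanaco-lora"   # placeholder adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

prompt = "### Human: Explain LoRA in one sentence.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```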

## Training Details

- Training regime: mixed-precision training using bf16
- Number of epochs: 3
- Learning rate: 1e-4
- Batch size: 16
- Sequence length: 512
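The hyperparameters above can be wired into a PEFT-style setup; the sketch below is illustrative, not the recorded training script. The LoRA rank, alpha, dropout, and target modules are assumptions, and on Gaudi hardware the optimum-habana Gaudi equivalents of these classes would typically be used instead:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings: r/alpha/dropout/targets are assumed, not taken from this run.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Hyperparameters as reported in this model card.
training_args = TrainingArguments(
    output_dir="llama2-7b-guanaco-lora",
    num_train_epochs=3,
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    bf16=True,  # mixed-precision training in bf16
)
```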


## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware type: Intel Gaudi 2 AI accelerator
- Hours used: < 1