---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
- ultrachat_200k
- ipex
- Gaudi
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: meta-llama/Meta-Llama-3.1-8B-Instruct
  results:
    - task:
        type: text-generation
      dataset:
        name: ai2_arc
        type: ai2_arc
      metrics:
        - name: AI2 Reasoning Challenge
          type: AI2 Reasoning Challenge
          value: 66.89
        - name: HellaSwag
          type: HellaSwag
          value: 82.32
        - name: MMLU
          type: MMLU
          value: 66.04
        - name: TruthfulQA
          type: TruthfulQA
          value: 63.48
        - name: Winogrande
          type: Winogrande
          value: 74.98
      source:
        name: Powered-by-Intel LLM Leaderboard
        url: https://huggingface.co/spaces/Intel/powered_by_intel_llm_leaderboard
language:
- en
metrics:
- accuracy
- bertscore
- bleu
pipeline_tag: question-answering
---


# yuriachermann/Not-so-bright-AGI-Llama3.1-8B-UC200k-v1


**Model Type:** PEFT fine-tuned adapter

**Model Base:** [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)

**Datasets Used:** [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)

**Author:** [Yuri Achermann](https://huggingface.co/yuriachermann)

**Date:** August 10, 2024

-------------------------

## Training procedure

### Training Hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 100
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05

### Framework versions

- PEFT==0.11.1
- Transformers==4.41.2
- Pytorch==2.1.0.post0+cxx11.abi
- Datasets==2.19.2
- Tokenizers==0.19.1

-------------------------

## Intended uses & limitations

**Primary Use Case:** The model is intended for generating human-like responses in conversational applications, such as chatbots and virtual assistants.

**Limitations:** The model may generate inaccurate or biased content as it reflects the data it was trained on. It is essential to evaluate the generated responses in context and use the model responsibly.
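A minimal inference sketch, assuming the adapter is loaded on top of the base model with `transformers` and `peft`. The repository IDs are taken from this card; the generation settings and the helper names are illustrative, not part of an official API:

```python
def build_chat(user_message: str):
    # Wrap a single user turn in the message format expected by
    # tokenizer.apply_chat_template.
    return [{"role": "user", "content": user_message}]


def generate_reply(user_message: str, max_new_tokens: int = 256) -> str:
    # Heavy imports are deferred so build_chat stays importable
    # without transformers/peft installed.
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
    adapter_id = "yuriachermann/Not-so-bright-AGI-Llama3.1-8B-UC200k-v1"

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
    # Attach the LoRA/PEFT adapter weights on top of the base model.
    model = PeftModel.from_pretrained(model, adapter_id)

    input_ids = tokenizer.apply_chat_template(
        build_chat(user_message), add_generation_prompt=True, return_tensors="pt"
    )
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens.
    return tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True)
```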

-------------------------

## Evaluation

The evaluation platform consists of Gaudi accelerators and Xeon CPUs running benchmarks from the [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness).

| Average | ARC   | HellaSwag | MMLU  | TruthfulQA | Winogrande |
|:-------:|:-----:|:---------:|:-----:|:----------:|:----------:|
| 70.742  | 66.89 | 82.32     | 66.04 | 63.48      | 74.98      |
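The reported average is the unweighted mean of the five benchmark scores, which can be checked directly:

```python
# Benchmark scores from the evaluation table above.
scores = {
    "ARC": 66.89,
    "HellaSwag": 82.32,
    "MMLU": 66.04,
    "TruthfulQA": 63.48,
    "Winogrande": 74.98,
}

# Unweighted mean, matching the reported average of 70.742.
average = sum(scores.values()) / len(scores)
print(round(average, 3))
```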

-------------------------

## Ethical Considerations

The model may inherit biases present in the training data. It is crucial to use the model in a way that promotes fairness and mitigates potential biases.

-------------------------

## Acknowledgments

This fine-tuning effort was made possible by the support of Intel, which provided the computing resources, and of [Eduardo Alvarez](https://huggingface.co/eduardo-alvarez).
Additional shout-out to the creators of the meta-llama/Meta-Llama-3.1-8B-Instruct model and the contributors to the HuggingFaceH4/ultrachat_200k dataset.

-------------------------

## Contact Information

For questions or feedback about this model, please contact **[Yuri Achermann](mailto:[email protected])**.

-------------------------

## License

As a fine-tune of Meta-Llama-3.1-8B-Instruct, this model is distributed under the **Llama 3 Community License** (matching the `license: llama3` metadata above), which it inherits from the base model.