---
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
library_name: adapters
datasets:
- awels/ocpvirt_admin_dataset
language:
- en
widget:
- text: Who are you, Thready?
tags:
- awels
- redhat
---

# Thready Model Card

## Model Details
**Model Name:** Thready

**Model Type:** Transformer-based, fine-tuned from Microsoft Phi-3-mini-128k-instruct (128k-token context)

**Publisher:** Awels Engineering

**License:** MIT

**Model Description:**
Thready is a model designed to serve as an AI agent focused on the Red Hat OpenShift Virtualization solution. It is fine-tuned to provide efficient and accurate answers to administrative questions, and has been trained on the full documentation corpus of OpenShift Virtualization 4.16.

## Dataset
**Dataset Name:** [awels/ocpvirt_admin_dataset](https://huggingface.co/datasets/awels/ocpvirt_admin_dataset)

**Dataset Source:** Hugging Face Datasets

**Dataset License:** MIT

**Dataset Description:**
The dataset used to train Thready consists of all the public documents available on Red Hat OpenShift Virtualization, curated to cover the typical administrative scenarios encountered in OpenShift Virtualization.

## Training Details

**Training Data:**
The training data consists of 70,000 question-and-answer pairs generated by the [Bonito LLM](https://github.com/BatsResearch/bonito). The dataset is split into three sets (training, test, and validation) to ensure robust model performance.

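As an illustration of the three-way split described above, here is a minimal sketch. The 80/10/10 fractions and the seed are assumptions for the example; the card does not state the exact split used for Thready.

```python
import random

def split_qa_pairs(examples, seed=42, train_frac=0.8, test_frac=0.1):
    """Shuffle and split Q&A pairs into training/test/validation sets.

    The fractions and seed are illustrative assumptions, not the
    published Thready training configuration.
    """
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_test = int(len(shuffled) * test_frac)
    return {
        "train": shuffled[:n_train],
        "test": shuffled[n_train:n_train + n_test],
        "validation": shuffled[n_train + n_test:],
    }
```
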
**Training Procedure:**
Thready was trained with supervised learning using cross-entropy loss and the Adam optimizer. Training ran for 1 epoch with a batch size of 4, a learning rate of 5.0e-06, a cosine learning-rate scheduler, and gradient checkpointing for memory efficiency.
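
The hyperparameters above, collected as a plain dict keyed by their usual Hugging Face `TrainingArguments` names. This is a sketch of the configuration described in the card, not the unpublished training script; the `optim` value is an assumption (the card only says "Adam").

```python
# Hyperparameters from the card, keyed by their usual Hugging Face
# TrainingArguments names (illustrative; the exact training script
# is not published).
training_config = {
    "num_train_epochs": 1,
    "per_device_train_batch_size": 4,
    "learning_rate": 5.0e-06,
    "lr_scheduler_type": "cosine",
    "gradient_checkpointing": True,
    "optim": "adamw_torch",  # assumption: Adam-family optimizer, per the card
}
```
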

**Hardware:**
The model was trained on a single NVIDIA RTX 4090 graphics card.

**Framework:**
The training was conducted using PyTorch.

## Evaluation

**Evaluation Metrics:**
Thready was evaluated on the training dataset:

> epoch = 1.0
> train_loss = 3.0464
> train_runtime = 0:09:40.19
> train_samples_per_second = 21.543
> train_steps_per_second = 5.386

**Performance:**
The model achieved the following results on the evaluation dataset:

> epoch = 1.0
> eval_loss = 2.4107
> eval_runtime = 0:00:31.07
> eval_samples = 2538
> eval_samples_per_second = 97.883
> eval_steps_per_second = 24.487

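Since `eval_loss` is a cross-entropy, it can be converted to perplexity, a more intuitive language-modeling figure. This is a derived value, not a metric reported in the original evaluation:

```python
import math

# Perplexity is exp(cross-entropy loss); derived here from the
# eval_loss reported above, not an officially reported metric.
eval_loss = 2.4107
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # ≈ 11.14
```
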
## Intended Use

**Primary Use Case:**
Thready is intended to be used locally in an agent swarm, collaborating with other agents to solve Red Hat OpenShift Virtualization related problems.

**Limitations:**
While Thready is effective, it may be limited by its small model size. An 8B model based on Llama 3 is used internally at Awels Engineering.