Joetib committed
Commit bddfbb3 · 1 parent: bd70421

Update README.md

Files changed (1): README.md (+9 −14)
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@ metrics:
 - bleurt
 
 model-index:
-- name: ibleducation/ibl-tutoring-llm-openchat
+- name: ibleducation/ibl-tutoring-chat-7B
   results:
   - task:
       name: truthfulqa_gen
@@ -64,10 +64,11 @@ model-index:
 library_name: transformers
 ---
 
-# ibleducation/ibl-tutoring-llm-openchat
-ibleducation/ibl-tutoring-llm-openchat is a model finetuned on top of openchat/openchat_3.5
+# ibleducation/ibl-tutoring-chat-7B
+ibleducation/ibl-tutoring-chat-7B is a model finetuned on top of openchat/openchat_3.5
 
-This model is finetuned to give responses in a way befitting of a professional teacher
+This model is finetuned to give responses in a way befitting of a professional teacher.
+It is finetuned to exhibit characteristics and virtues such as compassion, encouragement, friendliness and more.
 
 
 ## Example Conversations
@@ -84,12 +85,6 @@ This model is finetuned to give responses in a way befitting of a professional t
 ```
 
 
-## Motivation of Developing ibl-tutoring-llm-32k Model
-
-Students today use llm's in their learning and research. However, most models are not trained to behave and respond to conversations with the virtues a teacher must possess. ibl-tutoring-llm-32k Model is fine tuned
-on top of amazon/Mistrallite to alter its behaviour to converse the way a teacher should
-
-
 ## Model Details
 
 - **Developed by:** [IBL Education](https://ibl.ai)
@@ -98,10 +93,10 @@ on top of amazon/Mistrallite to alter its behaviour to converse the way a teache
 - **Language:** English
 - **Finetuned from weights:** [OpenChat 3.5](https://huggingface.co/openchat/openchat_3.5)
 - **Finetuned on data:**
-  - IBL-tutoring-dataset (private)
+  - ibl-best-practices-instructor-dataset (private)
 - **Model License:** Apache 2.0
 
-## How to Use ibl-tutoring-llm-openchat Model from Python Code (HuggingFace transformers) ##
+## How to Use ibl-tutoring-chat-7B Model from Python Code (HuggingFace transformers) ##
 
 ### Install the necessary packages
 
@@ -119,7 +114,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 import transformers
 import torch
 
-model_id = "ibleducation/ibl-tutoring-llm-openchat"
+model_id = "ibleducation/ibl-tutoring-chat-7B"
 
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 model = AutoModelForCausalLM.from_pretrained(
@@ -144,7 +139,7 @@ sequences = pipeline(
 for seq in sequences:
     print(f"{seq['generated_text']}")
 ```
-**Important** - Use the prompt template below for ibl-tutoring-llm-32k:
+**Important** - Use the prompt template below for ibl-tutoring-chat-7B:
 ```
 <s>{prompt}</s>
 ```
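The README's prompt template (`<s>{prompt}</s>`) can be applied with a one-line helper before passing text to the tokenizer. This is a minimal sketch; the `format_prompt` function name is illustrative, not part of the repository:

```python
def format_prompt(prompt: str) -> str:
    """Wrap a user prompt in the <s>...</s> template the README specifies."""
    return f"<s>{prompt}</s>"

# Example: build the exact string the model expects as input.
print(format_prompt("What is photosynthesis?"))
# <s>What is photosynthesis?</s>
```

The wrapped string would then be passed to `tokenizer(...)` or the `pipeline(...)` call shown in the README.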