---
license: llama2
datasets:
- uonlp/CulturaX
language:
- tr
- en
pipeline_tag: text-generation
metrics:
- accuracy
- bleu
---

# Commencis-LLM

<!-- Provide a quick summary of what the model is/does. -->
Commencis-LLM is a generative model based on Mistral 7B. It adapts the base model to the Turkish banking domain by training on a diverse dataset, obtained through various methods, that covers both general Turkish text and banking data.
## Model Description
<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [Commencis](https://www.commencis.com)
- **Language(s):** Turkish
- **Finetuned from model:** [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- **Input:** Text only
- **Output:** Text only
- **Blog Post**:
## Training Details
The alignment phase consists of two stages: supervised fine-tuning (SFT) and reward modeling with reinforcement learning from human feedback (RLHF).

The SFT phase was performed on a mixture of synthetic datasets generated from comprehensive banking dictionary data, synthetic datasets generated from banking domain and sub-domain headings, and data filtered from the CulturaX Turkish dataset. The model was trained for three epochs with a learning rate of 2e-5, a LoRA rank of 64, and a maximum sequence length of 1024 tokens; a minimal sketch of this setup follows.
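
The card does not state which training framework was used. For reference, here is a minimal sketch of an SFT run with the stated hyperparameters, assuming the Hugging Face `trl`/`peft` stack; the dataset path, batch size, and LoRA alpha/dropout below are placeholders, not values from the card.

```python
# Minimal SFT sketch, NOT the exact Commencis training code.
# (API as of trl 0.7.x; newer trl versions move these kwargs into SFTConfig.)
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# Hyperparameters stated in the card: three epochs, lr 2e-5, LoRA rank 64,
# 1024-token maximum sequence length.
peft_config = LoraConfig(
    r=64,               # LoRA rank from the card
    lora_alpha=16,      # assumption: alpha is not given in the card
    lora_dropout=0.05,  # assumption
    task_type="CAUSAL_LM",
)

args = TrainingArguments(
    output_dir="commencis-llm-sft",  # hypothetical output path
    num_train_epochs=3,
    learning_rate=2e-5,
    per_device_train_batch_size=4,   # assumption: batch size is not given
    bf16=True,
)

# Stand-in for the banking/CulturaX SFT mixture described above.
dataset = load_dataset("json", data_files="sft_mixture.jsonl", split="train")

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumes a plain-text "text" field
    max_seq_length=1024,
)
trainer.train()
```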

### Usage

The snippets below show how to run the model with the suggested inference parameters.

### Suggested Inference Parameters
- Temperature: 0.5
- Repetition penalty: 1.0
- Top-p: 0.9

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline


class TextGenerationAssistant:
    def __init__(self, model_id: str):
        self.tokenizer = AutoTokenizer.from_pretrained(model_id)
        # 8-bit loading requires the bitsandbytes package and a CUDA GPU.
        self.model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)
        # device_map is handled at model load time, so it is not passed again here.
        self.pipe = pipeline(
            "text-generation",
            model=self.model,
            tokenizer=self.tokenizer,
            max_new_tokens=1024,
            return_full_text=True,
            repetition_penalty=1.0,
        )

        # Sampling settings matching the suggested inference parameters above.
        self.sampling_params = dict(do_sample=True, temperature=0.5, top_k=50, top_p=0.9)
        # "You are a helpful assistant. You will generate the most appropriate
        # answers to the instructions and inputs given to you."
        self.SYSTEM_PROMPT = "Sen yardımcı bir asistansın. Sana verilen talimat ve girdilere en uygun cevapları üreteceksin. \n\n\n"

    def format_prompt(self, user_input):
        # Mistral instruct convention: wrap the turn in [INST] ... [/INST].
        return "[INST] " + self.SYSTEM_PROMPT + user_input + " [/INST]"

    def generate_response(self, user_query):
        prompt = self.format_prompt(user_query)
        outputs = self.pipe(prompt, **self.sampling_params)
        # return_full_text=True echoes the prompt, so keep only the completion.
        return outputs[0]["generated_text"].split("[/INST]")[-1]


assistant = TextGenerationAssistant(model_id="Commencis/Commencis-LLM")

# Enter your query here. ("How are my loans affected when interest rates rise?")
user_query = "Faiz oranları yükseldiğinde kredilerim nasıl etkilenir?"
response = assistant.generate_response(user_query)
print(response)
```

### Chat Template

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "Commencis/Commencis-LLM"
# "How are my loans affected when interest rates rise?"
messages = [{"role": "user", "content": "Faiz oranları yükseldiğinde kredilerim nasıl etkilenir?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
# Render the conversation into the model's expected prompt string.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=1024, do_sample=True, temperature=0.5, top_k=50, top_p=0.9)
print(outputs[0]["generated_text"])
```
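
An equivalent lower-level path tokenizes the templated prompt and calls `generate` directly with the same suggested sampling parameters. This is a sketch: it assumes the tokenizer ships a Mistral-style chat template (consistent with the `[INST] ... [/INST]` wrapping used in `format_prompt` above) and enough GPU memory for fp16 weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Commencis/Commencis-LLM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# "How are my loans affected when interest rates rise?"
messages = [{"role": "user", "content": "Faiz oranları yükseldiğinde kredilerim nasıl etkilenir?"}]

# With a Mistral-style template this renders to "<s>[INST] ... [/INST]".
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.5,
    top_k=50,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```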

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Like all LLMs, Commencis-LLM has certain limitations:
- Hallucination: The model may sometimes generate plausible-sounding but factually incorrect or irrelevant information.
- Code switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses (a possible mitigation is sketched below).
- Coding and math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
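
If repetition is noticeable in practice, a common mitigation (a general suggestion, not a recommendation from the model card) is to raise `repetition_penalty` slightly above the suggested 1.0 in any of the generation calls above, for example:

```python
# Hypothetical tweak, continuing the Chat Template example: a mild repetition
# penalty often curbs looping output; values much above ~1.2 tend to hurt fluency.
outputs = pipeline(
    prompt,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.5,
    top_k=50,
    top_p=0.9,
    repetition_penalty=1.1,
)
print(outputs[0]["generated_text"])
```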