Sharathhebbar24 committed ac83f3d (parent 7151cf4): Create README.md

---
license: apache-2.0
datasets:
- HuggingFaceH4/ultrachat_200k
- mlabonne/CodeLlama-2-20k
- Intel/orca_dpo_pairs
- Sharathhebbar24/Evol-Instruct-Code-80k-v1
language:
- en
pipeline_tag: text-generation
tags:
- gpt2
- dpo
---

This model is a fine-tuned version of `Sharathhebbar24/chat_gpt2_dpo`, further trained on `mlabonne/CodeLlama-2-20k`.

## Model description

GPT-2 is a transformer model pre-trained on a very large corpus of English data in a self-supervised fashion. This
means it was pre-trained on raw text only, with no human labeling of any kind (which is why it can use lots of
publicly available data), using an automatic process to generate inputs and labels from those texts. Concretely, it
was trained to guess the next word in a sentence.
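
To make that automatic labeling concrete, here is a minimal sketch (illustrative only, not the actual training
pipeline) of how next-word targets fall out of raw text by pairing each token with the one that follows it; the
base `gpt2` tokenizer stands in for this model's tokenizer:

```python
from transformers import AutoTokenizer

# Illustration only: any GPT-2 tokenizer behaves the same way here.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "GPT-2 learns to guess the next word."
ids = tokenizer.encode(text)

# Labels are generated automatically from the text itself:
# each position's target is simply the token that follows it.
inputs, labels = ids[:-1], ids[1:]
for inp, lab in zip(inputs, labels):
    print(f"{tokenizer.decode([inp])!r} -> {tokenizer.decode([lab])!r}")
```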

More precisely, inputs are sequences of continuous text of a certain length, and the targets are the same sequences
shifted one token (a word or piece of a word) to the right. The model uses a masking mechanism internally to make sure
the prediction for token `i` only uses the inputs from `1` to `i` and not the future tokens.
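
That masking mechanism can be sketched as a lower-triangular attention mask (a simplified illustration, not GPT-2's
exact implementation), where row `i` only allows attention to positions up to `i`:

```python
import torch

seq_len = 5
# Lower-triangular mask: row i allows attention to columns 0..i only.
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
print(causal_mask.int())

# Scores at masked (future) positions are set to -inf before the softmax,
# so future tokens contribute zero attention weight.
scores = torch.randn(seq_len, seq_len)
scores = scores.masked_fill(~causal_mask, float("-inf"))
weights = torch.softmax(scores, dim=-1)
```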

This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is nonetheless best at what it was pre-trained for, which is generating text
from a prompt.
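
As a minimal sketch of such feature extraction (whether these features help depends on the downstream task; the
checkpoint name is the one used in the snippet below), the hidden states can be mean-pooled into a sentence vector:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Sharathhebbar24/chat_gpt2"  # checkpoint name taken from the usage snippet below
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Linear search scans a list item by item.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Mean-pool the last layer's hidden states into one feature vector.
features = outputs.hidden_states[-1].mean(dim=1)
print(features.shape)  # (1, 768) for a GPT-2 small backbone
```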

### To use this model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Sharathhebbar24/chat_gpt2"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def generate_text(prompt):
    # Encode the prompt and generate a continuation.
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs, max_length=64, pad_token_id=tokenizer.eos_token_id)
    generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Trim the output at the last full stop to avoid a dangling partial sentence.
    return generated[: generated.rfind(".") + 1]

prompt = """
user: what are you?
assistant: I am a Chatbot intended to give a python program
user: hmm, can you write a python program to print Hii Heloo
assistant: Sure Here is a python code.\n print("Hii Heloo")
user: Can you write a Linear search program in python
"""
res = generate_text(prompt)
print(res)
```
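
The prompt simply alternates plain-text `user:` and `assistant:` turns and ends on a `user:` turn; the generated
text is the model's continuation of that final turn.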