RichardErkhov committed
Commit 4572046 · verified · 1 Parent(s): 0e88007

uploaded readme

Files changed (1): README.md (new file, +79 lines)

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


OPT-1.3b-Chat - AWQ
- Model creator: https://huggingface.co/KoalaAI/
- Original model: https://huggingface.co/KoalaAI/OPT-1.3b-Chat/
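
These AWQ files are weight-quantized (typically 4-bit) versions of the original checkpoint. As a minimal loading sketch, assuming `autoawq` and `accelerate` are installed; the repo id below is a placeholder, so substitute the actual id of this quant repo on the Hub:

```python
# Minimal AWQ loading sketch. The repo id is a PLACEHOLDER, not verified;
# transformers dispatches to the AutoAWQ kernels automatically when the
# checkpoint's config declares AWQ quantization.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/OPT-1.3b-Chat-awq"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Human: What is a meme?\nAssistant:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```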

Original model description:
---
license: other
widget:
- text: 'What is the capital of France?'
- text: 'What is wikipedia?'
- text: 'What is a meme?'
language:
- en
pipeline_tag: text-generation
tags:
- conversational
- chat
- assistant
---
# OPT-1.3b-Chat

This is a text generation model based on the [OPT-1.3B](https://huggingface.co/facebook/opt-1.3b) model from Meta, trained using the DeepSpeed library. The model can generate natural and engaging conversational responses to user input.

A demo is [available here](https://huggingface.co/spaces/KoalaAI/OPT-Chat).
The model is best at simple Q&A-style questions; it is not suited to open-ended, ChatGPT-style conversation.

## Training Details

- The base model is [OPT-1.3B](https://huggingface.co/facebook/opt-1.3b), a decoder-only transformer with 1.3 billion parameters, pre-trained on a large text corpus with the causal language modeling objective.
- The model was trained on a single NVIDIA A100 GPU using DeepSpeed pipeline parallelism and the ZeRO optimizer (see the config sketch after this list).
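
To make that setup concrete, here is a minimal sketch of a DeepSpeed ZeRO config of the kind the bullets describe. The authors' actual configuration is not published with this card, so every value below (batch sizes, ZeRO stage, fp16) is an assumption for illustration only:

```python
# Illustrative DeepSpeed config sketch -- NOT the authors' actual settings.
ds_config = {
    "train_micro_batch_size_per_gpu": 4,   # assumption
    "gradient_accumulation_steps": 8,      # assumption
    "fp16": {"enabled": True},             # assumption
    "zero_optimization": {"stage": 2},     # ZeRO; the stage is an assumption
}
# A dict like this is passed to deepspeed.initialize(...) along with the
# model and optimizer when launching training.
```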

## Model Details
- Number of parameters: 1.3 billion
- Number of layers: 24
- Number of attention heads: 16
- Context size: 2048
- Vocabulary size: 50,265
- Embedding size: 1280
- Feed-forward size: 5120
- Dropout rate: 0.1
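
As a side note (not part of the original card), these figures can be cross-checked against the base checkpoint's published configuration:

```python
# Sketch: read architecture details off the base model's config on the Hub.
# Field names are the standard transformers OPTConfig attributes.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("facebook/opt-1.3b")
print("layers:", cfg.num_hidden_layers)
print("attention heads:", cfg.num_attention_heads)
print("context size:", cfg.max_position_embeddings)
print("vocabulary:", cfg.vocab_size)
```

The values printed for the base checkpoint may not match the list above exactly; the list reproduces the original card's figures verbatim.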

## Usage

You can use this model directly with the Hugging Face pipeline for text generation:

```python
from transformers import pipeline

# Build a text-generation pipeline for the chat model.
generator = pipeline('text-generation', model='KoalaAI/OPT-1.3b-Chat')
# Returns a list of dicts with a 'generated_text' field.
generator("Hello, how are you?")
```

### Suggested formatting
The training data uses the following format:
```
Human: <question>
Assistant: <answer>
```

It is recommended to follow the same format as closely as possible for the best results; a short example follows below.
We intend to train another model on the OpenAssistant dataset in the future.
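
As a minimal sketch of that format in practice, reusing the `generator` pipeline from the Usage section (the sampling parameters are illustrative choices, not recommendations from the card):

```python
# Wrap the question in the Human/Assistant format the model was trained on.
prompt = "Human: What is the capital of France?\nAssistant:"
result = generator(prompt, max_new_tokens=64, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```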

## License
This model is licensed under the [OPT-175B license](https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/MODEL_LICENSE.md), which is a non-commercial research license. Please read the full license terms before using this model.

## Ethical Considerations
This model is intended for research purposes only and should not be used for malicious or harmful applications. The model may generate offensive or inappropriate content that does not reflect the views or opinions of the authors or Meta. Users are responsible for ensuring that generated content complies with ethical and legal standards.