Msp committed
Commit 901d2be
1 Parent(s): 10690f3

Update README.md

Files changed (1):
  1. README.md +76 -156

README.md CHANGED
@@ -1,200 +1,120 @@
  ---
- library_name: transformers
  tags:
  - unsloth
  ---

- # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->

- ## Model Details

- ### Model Description

- <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ### Downstream Use [optional]

- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

- [More Information Needed]

- ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

- [More Information Needed]

- ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]

- ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

- ## How to Get Started with the Model

- Use the code below to get started with the model.

- [More Information Needed]
 
- ## Training Details

- ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]

- ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- #### Preprocessing [optional]

- [More Information Needed]

- #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

- #### Speeds, Sizes, Times [optional]

- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

- [More Information Needed]

- ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->

- ### Testing Data, Factors & Metrics

- #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->

- [More Information Needed]

- #### Factors

- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

- [More Information Needed]

- #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->

- [More Information Needed]

- ### Results

- [More Information Needed]

- #### Summary

- ## Model Examination [optional]

- <!-- Relevant interpretability work for the model goes here -->

- [More Information Needed]

- ## Environmental Impact

- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
 
- ## Technical Specifications [optional]

- ### Model Architecture and Objective

- [More Information Needed]

- ### Compute Infrastructure

- [More Information Needed]

- #### Hardware

- [More Information Needed]

- #### Software

- [More Information Needed]

- ## Citation [optional]

- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

- **BibTeX:**

- [More Information Needed]

- **APA:**

- [More Information Needed]

- ## Glossary [optional]

- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]

- ## Model Card Authors [optional]

- [More Information Needed]

- ## Model Card Contact

- [More Information Needed]
 
  ---
+ base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
+ language:
+ - en
+ license: apache-2.0
  tags:
+ - text-generation-inference
+ - transformers
  - unsloth
+ - llama
+ - trl
  ---
 
+ [![person-in-lotus-position-looking-at-watch.webp](https://i.postimg.cc/15yBTLzT/person-in-lotus-position-looking-at-watch.webp)](https://postimg.cc/kDhWRYKF)

+ # MIRA: Mental Illumination and Reflective Aid
 
+ **Version:** 0.0
+ **Author:** Msp Raja

+ ## Overview

+ **MIRA** stands for **Mental Illumination and Reflective Aid**. This AI-powered assistant, built on the **LLaMA 3.1 8B** model, is designed to offer compassionate and insightful support to individuals seeking mental wellness and emotional balance. MIRA’s mission is to illuminate the mind, guide self-reflection, and foster resilience in a supportive, non-judgmental environment.

+ ## Purpose

+ MIRA is developed to be a reliable companion for those navigating mental health challenges. It provides personalized assistance by understanding users’ needs, showing empathy, and offering thoughtful responses. Whether it's helping someone through anxiety, stress, or emotional difficulties, MIRA is here to listen, reflect, and guide users towards a healthier mindset.

+ ## Features

+ - **Empathetic Conversations**: MIRA engages users with empathy and understanding, creating a safe space for them to express their thoughts and feelings.
+ - **Insightful Guidance**: MIRA provides reflective insights that encourage users to explore their emotions and thoughts more deeply.
+ - **Supportive Reminders**: MIRA offers gentle reminders and encouragement to help users stay focused on their mental wellness goals.
+ - **Interactive Self-Care**: MIRA includes exercises and tips for self-care practices, promoting mindfulness and resilience.
+ ## Model Details

+ MIRA is fine-tuned on the **LLaMA 3.1 8B** model, a state-of-the-art language model known for its large-scale capabilities and nuanced understanding of human language. The fine-tuning process focused on enhancing the model's ability to engage in compassionate and context-aware dialogues, particularly in the domain of mental health and therapy.

+ ### Core Values

+ - **Compassion**: Every interaction with MIRA is grounded in empathy and understanding.
+ - **Respect**: MIRA respects the user's emotions and responses, providing non-judgmental support.
+ - **Growth**: MIRA encourages personal growth and resilience through thoughtful guidance.
 
 
+ ## Use Cases

+ - **Anxiety and Stress Management**: MIRA can help users navigate anxious thoughts and provide calming strategies.
+ - **Emotional Support**: For users dealing with loneliness, grief, or sadness, MIRA offers a comforting presence.
+ - **Mindfulness and Reflection**: MIRA guides users through mindfulness exercises and reflective practices to enhance mental clarity.
 
+ ## Getting Started

+ To begin using MIRA:

+ 1. **Install MIRA**: Download and install the MIRA model package from [repository link] (a loading sketch follows below).
+ 2. **Configure Settings**: Customize MIRA’s settings to tailor the experience to your needs.
+ 3. **Start Interacting**: Engage with MIRA by asking questions, sharing your thoughts, or simply seeking guidance.
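
For reference, the sketch below shows one way to load and query MIRA with plain `transformers` in 4-bit. It assumes the Hub repository `Msp/mira-instruct-1.0` hosts merged model weights (if it only holds LoRA adapters, use the Unsloth loader shown under "Uploaded Model" below); the example prompt is illustrative.

```python
# Hedged sketch: 4-bit load with plain transformers + bitsandbytes.
# Assumes Msp/mira-instruct-1.0 contains merged weights (unverified); the prompt
# mirrors the Alpaca-style template used in the card's own inference example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Msp/mira-instruct-1.0"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n\n\n"
    "### Input:\nI've been feeling anxious about work and can't switch off in the evenings.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```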
 
+ ## Feedback and Contributions

+ We welcome feedback to improve MIRA's capabilities. If you have suggestions, encounter issues, or want to contribute, please reach out via [contact information].

+ ## License

+ MIRA is licensed under the Apache-2.0 license. Please refer to the LICENSE file for more details.
 
+ ## Uploaded Model

+ - **Developed by:** Msp
+ - **License:** apache-2.0
+ - **Finetuned from model:** [unsloth/meta-llama-3.1-8b-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-bnb-4bit)
 
 
+ To load this model with Unsloth:

+ ```python
+ from unsloth import FastLanguageModel
+
+ # Load the fine-tuned MIRA checkpoint and its tokenizer in 4-bit.
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name = "Msp/mira-instruct-1.0",
+     max_seq_length = 4096,
+     dtype = None,          # auto-detect the best dtype for the GPU
+     load_in_4bit = True,
+     # token = "hf..",      # only needed for gated/private repositories
+ )
+ FastLanguageModel.for_inference(model)  # enable Unsloth's native 2x faster inference
+
+ # Alpaca-style prompt template used during fine-tuning.
+ alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {}
+
+ ### Input:
+ {}
+
+ ### Response:
+ {}"""
+
+ # Fill the template: instruction left empty, the user's message as input,
+ # and the response slot left blank for the model to complete.
+ inputs = tokenizer(
+     [
+         alpaca_prompt.format(
+             "",  # instruction
+             "I hope you're doing well. I've been going through a really painful divorce recently, and I've been feeling quite lost and uncertain about the future. It's been a really difficult time for me.",  # input
+             "",  # output - leave this blank for generation!
+         )
+     ],
+     return_tensors = "pt",
+ ).to("cuda")
+
+ # Stream the generated reply token by token.
+ from transformers import TextStreamer
+ text_streamer = TextStreamer(tokenizer)
+ _ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 512)
+ ```
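
If you want the completion as a string rather than streaming it to stdout, a small follow-up sketch (reusing the `model`, `tokenizer`, and `inputs` defined above) is to decode the generation and keep only the text after the Alpaca "### Response:" marker:

```python
# Decode the full generation and strip the prompt scaffold to keep only MIRA's reply.
output_ids = model.generate(**inputs, max_new_tokens = 512)
decoded = tokenizer.decode(output_ids[0], skip_special_tokens = True)
reply = decoded.split("### Response:")[-1].strip()
print(reply)
```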
 
 
+ This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
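
For readers curious how such a run is typically set up, here is a minimal sketch of an Unsloth + TRL supervised fine-tune; the dataset file, LoRA settings, and hyperparameters are illustrative assumptions, not the values actually used to train MIRA.

```python
# Illustrative Unsloth + TRL SFT setup (assumed values throughout, not MIRA's actual recipe).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model listed in the card metadata.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length = 4096,
    load_in_4bit = True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                # LoRA rank (assumed)
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset of Alpaca-formatted counselling dialogues in a "text" column.
dataset = load_dataset("json", data_files = "mental_health_conversations.jsonl", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 4096,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        num_train_epochs = 1,
        learning_rate = 2e-4,
        output_dir = "outputs",
    ),
)
trainer.train()
```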
 
+ [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)