Update README.md
README.md
CHANGED
@@ -1,200 +1,120 @@
(The previous README, removed by this commit, was the stock auto-generated 🤗 transformers model card template, with every section left as a "[More Information Needed]" placeholder.)

---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

[![person-in-lotus-position-looking-at-watch.webp](https://i.postimg.cc/15yBTLzT/person-in-lotus-position-looking-at-watch.webp)](https://postimg.cc/kDhWRYKF)

# MIRA: Mental Illumination and Reflective Aid

**Version:** 0.0

**Author:** Msp Raja

## Overview

**MIRA** stands for **Mental Illumination and Reflective Aid**. This AI-powered assistant, built on the **LLaMA 3.1 8B** model, is designed to offer compassionate and insightful support to individuals seeking mental wellness and emotional balance. MIRA’s mission is to illuminate the mind, guide self-reflection, and foster resilience in a supportive, non-judgmental environment.

## Purpose

MIRA is developed to be a reliable companion for those navigating mental health challenges. It provides personalized assistance by understanding users’ needs, showing empathy, and offering thoughtful responses. Whether it's helping someone through anxiety, stress, or emotional difficulties, MIRA is here to listen, reflect, and guide users towards a healthier mindset.

## Features

- **Empathetic Conversations**: MIRA engages users with empathy and understanding, creating a safe space for them to express their thoughts and feelings.
- **Insightful Guidance**: MIRA provides reflective insights that encourage users to explore their emotions and thoughts more deeply.
- **Supportive Reminders**: MIRA offers gentle reminders and encouragement to help users stay focused on their mental wellness goals.
- **Interactive Self-Care**: MIRA includes exercises and tips for self-care practices, promoting mindfulness and resilience.

## Model Details

MIRA is fine-tuned from the **LLaMA 3.1 8B** model, a state-of-the-art language model known for its large-scale capabilities and nuanced understanding of human language. The fine-tuning process focused on enhancing the model's ability to engage in compassionate and context-aware dialogue, particularly in the domain of mental health and therapy.
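
As a rough orientation, the sketch below shows an Unsloth + TRL `SFTTrainer` fine-tuning run in the style of the public Unsloth examples (the libraries credited at the bottom of this card). The base checkpoint is taken from this card's metadata; the dataset file, LoRA settings, and hyperparameters are illustrative placeholders, not MIRA's actual training configuration.

```python
# Illustrative fine-tuning sketch (NOT the exact MIRA training recipe).
# Base checkpoint comes from this card's metadata; the dataset path,
# LoRA settings, and hyperparameters are placeholders.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length = 4096,
    load_in_4bit = True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    lora_dropout = 0,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: expects a "text" column holding prompts already
# rendered with the Alpaca-style template shown in the loading example below.
dataset = load_dataset("json", data_files = "mira_conversations.json", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 4096,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        num_train_epochs = 1,
        learning_rate = 2e-4,
        output_dir = "outputs",
    ),
)
trainer.train()
```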

### Core Values

- **Compassion**: Every interaction with MIRA is grounded in empathy and understanding.
- **Respect**: MIRA respects the user's emotions and responses, providing non-judgmental support.
- **Growth**: MIRA encourages personal growth and resilience through thoughtful guidance.

## Use Cases

- **Anxiety and Stress Management**: MIRA can help users navigate anxious thoughts and provide calming strategies.
- **Emotional Support**: For users dealing with loneliness, grief, or sadness, MIRA offers a comforting presence.
- **Mindfulness and Reflection**: MIRA guides users through mindfulness exercises and reflective practices to enhance mental clarity.

## Getting Started

To begin using MIRA:

1. **Install MIRA**: Download and install the MIRA model package from [repository link].
2. **Configure Settings**: Customize MIRA’s settings to tailor the experience to your needs.
3. **Start Interacting**: Engage with MIRA by asking questions, sharing your thoughts, or simply seeking guidance (see the quick-start sketch below).
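
For a compact version of the three steps above, the sketch below loads the published checkpoint with Unsloth and generates a single reply. It assumes a CUDA GPU and an installed `unsloth` package; the prompt text and generation settings are illustrative, and the full Alpaca-style prompt template used by this model is shown in the loading example further down.

```python
# Quick-start sketch (illustrative): load MIRA and generate one reply.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Msp/mira-instruct-1.0",
    max_seq_length = 4096,
    load_in_4bit = True,   # 4-bit weights fit on a single consumer GPU
)
FastLanguageModel.for_inference(model)

prompt = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n\n\n"
    "### Input:\nI've been feeling overwhelmed and anxious lately and I'm not sure where to start.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors = "pt").to("cuda")
output = model.generate(**inputs, max_new_tokens = 256)
print(tokenizer.decode(output[0], skip_special_tokens = True))
```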

## Feedback and Contributions

We welcome feedback to improve MIRA's capabilities. If you have suggestions, encounter issues, or want to contribute, please reach out via [contact information].

## License

MIRA is licensed under Apache 2.0. Please refer to the LICENSE file for more details.

## Uploaded Model

- **Developed by:** Msp
- **License:** apache-2.0
- **Finetuned from model:** [unsloth/meta-llama-3.1-8b-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-bnb-4bit)

To load this model with Unsloth:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Msp/mira-instruct-1.0",   # this fine-tuned model repository
    max_seq_length = 4096,
    dtype = None,            # auto-detect (bfloat16 on Ampere+ GPUs)
    load_in_4bit = True,
    # token = "hf_...",      # only needed for gated or private repos
)
FastLanguageModel.for_inference(model)  # enable native 2x faster inference

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "",  # instruction
            "I hope you're doing well. I've been going through a really painful divorce recently, and I've been feeling quite lost and uncertain about the future. It's been a really difficult time for me.",  # input
            "",  # output - leave this blank for generation!
        )
    ], return_tensors = "pt").to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 512)
```
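
If you would rather not install Unsloth, the checkpoint can likely also be loaded with plain 🤗 Transformers, as sketched below. This assumes the repository contains full merged weights (not only LoRA adapters) and a GPU with enough memory for FP16 weights; the generation settings are illustrative, and the prompt reuses the Alpaca-style template from the example above.

```python
# Sketch: loading with plain transformers (assumes merged weights in the repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Msp/mira-instruct-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype = torch.float16,
    device_map = "auto",     # requires `accelerate`
)

prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:


### Input:
I hope you're doing well. I've been going through a really painful divorce recently, and I've been feeling quite lost and uncertain about the future.

### Response:
"""

inputs = tokenizer(prompt, return_tensors = "pt").to(model.device)
output = model.generate(**inputs, max_new_tokens = 512)
# Strip the prompt tokens so only the newly generated reply is printed.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens = True))
```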

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|