---
license: mit
widget:
- example_title: Question Answering
  text: 'Please Answer the Question: what is depression?'
- example_title: Other Example
  text: 'Please Answer the Question: How to bake a cake?'
- example_title: Other Example
  text: "Please Answer the Question: I'm going through some things with my feelings and myself. I barely sleep and I do nothing but think about how I'm worthless and how I shouldn't be here. I've never tried or contemplated suicide. I've always wanted to fix my issues, but I never get around to it. How can I change my feeling of being worthless to everyone?"
inference:
  parameters:
    do_sample: true
    max_new_tokens: 250
datasets:
- databricks/databricks-dolly-15k
- VMware/open-instruct
---
|
# MaxMini-Instruct-248M

## Overview

MaxMini-Instruct-248M is a T5 (Text-to-Text Transfer Transformer) model instruction fine-tuned on a variety of tasks. It is designed to follow a broad range of natural-language instructions.
|
|
|
## Model Details

- Model Name: MaxMini-Instruct-248M
- Model Type: T5 (Text-to-Text Transfer Transformer)
- Model Size: 248M parameters
- Training: Instruction tuning
|
## Usage

### Installation

Install the required libraries via pip:

```bash
pip install transformers
pip install torch
```
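To confirm the installation worked, a quick import check is enough (the printed version numbers will vary with your environment):

```python
# Sanity check: both libraries should import and expose a version string
import transformers
import torch

print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
```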
|
## Inference

```python
# Load the tokenizer and model from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("suriya7/MaxMini-Instruct-248M")
model = AutoModelForSeq2SeqLM.from_pretrained("suriya7/MaxMini-Instruct-248M")

# Build the prompt using the same prefix as the widget examples above
my_question = "what is depression?"
prompt = "Please Answer the Question: " + my_question

inputs = tokenizer(prompt, return_tensors="pt")

generated_ids = model.generate(**inputs, max_new_tokens=250, do_sample=True)
decoded = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(f"Generated Output: {decoded}")
```
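As an alternative to calling `generate` directly, the same checkpoint can be driven through the `text2text-generation` pipeline. This is a minimal sketch: the `build_prompt` helper is ours, not part of the model's API, and its prefix simply mirrors the widget examples in this card.

```python
from transformers import pipeline

def build_prompt(question: str) -> str:
    # Hypothetical helper: reuses the prompt prefix from the widget examples
    return "Please Answer the Question: " + question

# Downloads the model on first use
generator = pipeline("text2text-generation", model="suriya7/MaxMini-Instruct-248M")

result = generator(build_prompt("what is depression?"),
                   max_new_tokens=250, do_sample=True)
print(result[0]["generated_text"])
```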