---
license: apache-2.0
datasets:
- hakurei/open-instruct-v1
tags:
- alpaca
- self-instruct
- instruction generation
- instructiongen
- bart
- open-instruct
widget:
- text: >-
You'll need to start by choosing the right venue. Consider the type of
atmosphere and the size of the area that will be suitable for the number of
guests you plan to invite. Choose the right decorations based on your
brother's interests, such as balloons in his favorite colors, banners, and
streamers. Next, decide on the food and drinks, making sure they are tasty
and appropriate for the occasion. Then decide on the other games, music, and
entertainment that will make the party memorable. Finally, involve your
brother's friends and family to help create the perfect surprise.
example_title: birthday party
- text: 1) cookies and cream 2) chocolate chip 3) mint chip 4) oreo
example_title: ice cream
- text: >-
Start by selecting a scale model of a building that fits the theme. Use a
hobby knife and glue to cut and assemble the model into a ruined or
abandoned version of itself, adding details like broken windows and
graffiti. Create a base for the diorama using foam, plaster, or other
materials, and paint it to resemble a ruined street or sidewalk. Add
miniature vehicles, debris, and figures to complete the scene, and use
weathering techniques like dry brushing and rust washes to add realism.
Display the diorama in a shadow box or other protective case to showcase
your work.
example_title: Miniature diorama creation
- text: >-
Start by selecting clothing that is futuristic and edgy, such as leather
jackets, neon-colored accessories, and tech-inspired patterns. Add
accessories like goggles, cybernetic implants, and LED lights to enhance the
cyberpunk vibe. Use makeup and body paint to create a futuristic look, such
as metallic skin or neon makeup. Consider adding functional elements to your
costume, such as a built-in backpack or hidden pockets for your tech
gadgets. Finally, practice your confident walk and embrace your inner
cyberpunk for a memorable and immersive costume experience.
example_title: Cyberpunk costume design
- text: >-
Start by creating a base terrain with mountains, valleys, and other natural
features. Use fractal noise and displacement mapping to add texture and
detail to the terrain, and experiment with different materials like rock,
grass, and water. Add surreal elements like floating islands, giant
mushrooms, or impossible geometry to create a dreamlike atmosphere. Use
lighting and color grading to enhance the mood and tone of the scene, and
render the final image at a high resolution for maximum impact. Share your
surreal landscape with the world and inspire others to explore the
possibilities of 3D art.
example_title: Surreal 3D landscape creation
- text: >-
Start by setting a realistic goal and creating a training plan. Build up
your mileage gradually over time, and incorporate cross-training and
strength exercises to prevent injury and improve endurance. Be sure to stay
hydrated and properly fuel your body with nutritious foods. Listen to your
body and adjust your training as needed to avoid overexertion or burnout.
Finally, taper your training in the weeks leading up to the race to give
your body time to rest and recover before the big day.
example_title: Marathon training
inference:
parameters:
max_length: 96
num_beams: 4
encoder_no_repeat_ngram_size: 4
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
---
# bart-base-open-instructiongen-v1
Instead of generating questions from text, generate instructions for LLMs!
Check out a [basic demo on Spaces](https://huggingface.co/spaces/pszemraj/generate-instructions). You can find other models fine-tuned for instruction generation by [searching for the instructiongen tag](https://huggingface.co/models?other=instructiongen).
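To try the model programmatically, a minimal sketch with the 🤗 Transformers `pipeline` might look like the following (the repo ID is assumed to match this card's title, and the generation settings mirror the widget parameters in the metadata above):

```python
# Minimal usage sketch: load the model via the text2text-generation pipeline
# and reuse the generation settings from this card's widget config.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="pszemraj/bart-base-open-instructiongen-v1",  # assumed repo ID
)

text = (
    "Start by setting a realistic goal and creating a training plan. "
    "Build up your mileage gradually over time, and incorporate "
    "cross-training and strength exercises to prevent injury."
)

result = generator(
    text,
    max_length=96,
    num_beams=4,
    encoder_no_repeat_ngram_size=4,
)
print(result[0]["generated_text"])
```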
## Model description
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the [hakurei/open-instruct-v1](https://huggingface.co/datasets/hakurei/open-instruct-v1) dataset.

- This model **only** generates the `instruction` for arbitrary text (it **does not** also produce `inputs`; look for models with `w-inputs` in the name for that).
- The dataset had no validation split at the time of training, so no evaluation statistics are reported.
- Comparing this model's outputs with those of [pszemraj/bart-base-instructiongen](https://huggingface.co/pszemraj/bart-base-instructiongen) may indicate whether, and how much, dataset scaling is needed to produce "robust" instruction generators.
- If you notice any trends, feel free to reach out! I'd be happy to hear about it.
## Training and evaluation data
See `hakurei/open-instruct-v1`. This model was trained on the dataset "backwards", i.e. the model was given the `output` column as input and trained to predict `instruction`.
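As a rough illustration of that reversal, a preprocessing sketch with 🤗 Datasets could look like this (the `instruction`/`output` column names come from the dataset; the actual training pipeline may have differed):

```python
# Illustrative sketch of the "backwards" setup described above: the dataset's
# `output` text becomes the model input, and `instruction` becomes the target.
from datasets import load_dataset

ds = load_dataset("hakurei/open-instruct-v1", split="train")

def reverse_example(example):
    return {
        "source": example["output"],       # model reads the response text
        "target": example["instruction"],  # and learns to emit the instruction
    }

ds = ds.map(reverse_example, remove_columns=ds.column_names)
print(ds[0]["source"][:100], "->", ds[0]["target"][:100])
```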
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2.0
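
For reference, these settings map roughly onto `Seq2SeqTrainingArguments` as sketched below; this is an assumption-laden reconstruction, not the original training script:

```python
# Sketch of how the hyperparameters above map onto Seq2SeqTrainingArguments;
# the actual training script is not included in this repo.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./bart-base-open-instructiongen-v1",  # hypothetical path
    learning_rate=8e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # yields the total train batch size of 32
    num_train_epochs=2.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the Transformers defaults.
)
```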
### Training results
No validation split was available during training, so no evaluation results are reported.
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.9.0
- Tokenizers 0.12.1