---
license: cc-by-nc-nd-4.0
language:
- en
base_model:
- google/gemma-2-2b
---
# GemmaLM-for-Cannabis
This repository contains a fine-tuned version of the Gemma 2B model, adapted for cannabis-related queries using Low-Rank Adaptation (LoRA).
## Model Details
- **Base Model**: Gemma 2B
- **Fine-tuning Method**: Low-Rank Adaptation (LoRA)
- **LoRA Rank**: 4
- **Training Data**: Custom dataset derived from cannabis strain information
- **Task**: Causal Language Modeling for cannabis-related queries
## Fine-tuning Process
The model was fine-tuned on a custom dataset derived from cannabis strain information, covering strain names, effects, flavors, and descriptions. The fine-tuning process involved:
1. Preprocessing the cannabis dataset into a prompt-response format
2. Implementing LoRA with a rank of 4 to efficiently adapt the model
3. Training for a limited number of epochs on a small subset of the data for demonstration purposes (see the sketch below)
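
The training script itself is not included in this repository, but a minimal sketch of this workflow with KerasNLP might look like the following. The preset name `gemma_2b_en`, the sample record, and all hyperparameters are illustrative assumptions, not the exact values used for this model:

```python
import keras
import keras_nlp

# Load the base Gemma 2B preset (preset name is an assumption) and
# enable LoRA on the backbone with the rank used in this project.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
gemma_lm.backbone.enable_lora(rank=4)

# Each strain record is flattened into the same Instruction/Response
# template used at inference time; this single record is illustrative.
data = [
    "Instruction:\nWhat does OG Kush feel like\nResponse:\n"
    "OG Kush is often described as relaxing and euphoric, with an "
    "earthy, pine-like flavor."
]

# Hyperparameters below are placeholders, not tuned values.
gemma_lm.preprocessor.sequence_length = 256
gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(data, epochs=1, batch_size=1)
gemma_lm.save("gemma_lm_model.keras")
```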
## Usage
This model can be used to generate responses to cannabis-related queries. Example usage:
```python
import keras
import keras_nlp

# Load the fine-tuned model
model = keras.models.load_model("gemma_lm_model.keras")

# Set up a top-k sampler for generation
sampler = keras_nlp.samplers.TopKSampler(k=5, seed=2)
model.compile(sampler=sampler)

# Generate a response using the Instruction/Response prompt template
prompt = "Instruction:\nWhat does OG Kush feel like\nResponse:\n"
response = model.generate(prompt, max_length=256)
print(response)
```
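
`TopKSampler(k=5)` samples from the five most likely tokens at each step, trading determinism for variety; if reproducible output matters more, `keras_nlp.samplers.GreedySampler()` can be compiled in instead.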
## Limitations
- The model was fine-tuned on a limited dataset for demonstration purposes. For production use, consider training on a larger dataset for more epochs.
- The LoRA rank is set to 4, which may limit the model's adaptability; experimenting with higher ranks could improve performance.
## Future Improvements
To enhance the model's performance, consider:
1. Increasing the size of the fine-tuning dataset
2. Training for more epochs
3. Experimenting with higher LoRA rank values
4. Adjusting hyperparameters such as the learning rate and weight decay (see the sketch below)
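
Continuing from the hypothetical training sketch above, points 3 and 4 might look like the following; the rank and optimizer values are placeholders, not tested configurations:

```python
import keras

# Higher LoRA rank: more trainable parameters than the rank-4 setup.
gemma_lm.backbone.enable_lora(rank=8)

# Adjusted optimizer; excluding bias and layer-norm scale terms from
# weight decay is common practice when fine-tuning Gemma with AdamW.
optimizer = keras.optimizers.AdamW(learning_rate=1e-4, weight_decay=0.005)
optimizer.exclude_from_weight_decay(var_names=["bias", "scale"])

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=optimizer,
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
```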
## License
This model card declares a CC BY-NC-ND 4.0 license (see the metadata above); use of the underlying model is also subject to the original Gemma license terms.
## Acknowledgements
This project uses the Gemma model developed by Google. We acknowledge the Keras and KerasNLP teams for providing the tools and frameworks used in this project.