---
license: mit
language:
- en
pipeline_tag: image-feature-extraction
---
|
# MetaColorModel

## Overview

MetaColorModel is a Hugging Face-compatible model designed to extract metadata and dominant colors from images. It is built with PyTorch and the Hugging Face `transformers` library and can be used for image analysis tasks such as understanding image properties and identifying the most prominent colors.
|
|
|
## Model Details

- **Model Type**: Custom image feature extraction model
- **Configuration**: Includes parameters to specify the number of dominant colors (`k`), metadata size, and color size (e.g., RGB).
- **Dependencies**:
  - `transformers`
  - `Pillow`
  - `numpy`
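For a concrete picture of the configuration parameters listed above, here is an illustrative stand-in written as a plain dataclass. The field names and defaults are assumptions for illustration only; the model's actual configuration class is not shown in this card:

```python
from dataclasses import dataclass


@dataclass
class MetaColorConfig:
    """Illustrative stand-in for the model's configuration (names and defaults assumed)."""

    k: int = 5              # number of dominant colors to extract
    metadata_size: int = 16  # number of metadata fields tracked
    color_size: int = 3      # values per color, e.g. 3 for RGB

# Override the defaults the same way you would with a real config object:
config = MetaColorConfig(k=3)
print(config.k, config.color_size)
```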
|
|
|
## Example Use Cases

The model can be used for:

- Image search and indexing
- Content moderation
- Color scheme analysis for design and marketing
- Metadata extraction for organizing photo libraries
|
|
|
## Installation

To use this model, first install the required dependencies:

```bash
pip install transformers Pillow numpy
```
|
|
|
## Usage

Here is an example of how to use MetaColorModel:

```python
from transformers import AutoConfig

from meta_color_model import MetaColorModel

# Load the model
config = AutoConfig.from_pretrained("Surya2706/meta_color_model")
model = MetaColorModel.from_pretrained("Surya2706/meta_color_model", config=config)

# Input image path
image_path = "example_image.jpg"

# Extract metadata and dominant colors (calling the model invokes forward)
result = model(image_path)
print("Metadata:", result["metadata"])
print("Dominant Colors:", result["dominant_colors"])
```
|
|
|
## Inputs

- **Image Path**: A file path to the image you want to process.

## Outputs

- **Metadata**: Extracted EXIF metadata (if available).
- **Dominant Colors**: A list of the top `k` dominant colors in RGB format.
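For intuition about the "top `k` dominant colors" output, here is one simple way such a list could be computed with `numpy` alone: coarse per-channel quantization followed by frequency counting. This is a hedged sketch, not the model's actual algorithm, and the function name is an assumption:

```python
import numpy as np


def dominant_colors(pixels, k=3, bucket=32):
    """Top-k most frequent colors after coarse quantization (illustrative only).

    pixels: (N, 3) uint8 RGB array; bucket: quantization step per channel.
    """
    # Snap each channel to its bucket midpoint so near-identical shades merge.
    quant = (pixels.astype(np.int64) // bucket) * bucket + bucket // 2
    colors, counts = np.unique(quant, axis=0, return_counts=True)
    top = np.argsort(counts)[::-1][:k]
    return [tuple(int(c) for c in colors[i]) for i in top]


# Mostly-red set of pixels with a few blue ones:
pixels = np.array([[255, 0, 0]] * 10 + [[0, 0, 255]] * 3, dtype=np.uint8)
print(dominant_colors(pixels, k=2))  # most frequent (red-ish) bucket listed first
```

A clustering approach such as k-means is another common choice for this task; frequency counting over quantized buckets is just the simplest deterministic variant.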
|
|
|
## Training

This model can be trained further or fine-tuned for specific tasks.

### Dataset

To train or fine-tune the model, prepare a dataset of images and their metadata, structured as follows:
|
```
data/
├── images/
│   ├── image1.jpg
│   ├── image2.jpg
│   └── ...
└── metadata_colors.csv
```
|
|
|
The `metadata_colors.csv` file should contain metadata and dominant color labels for the images.
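The exact CSV schema is not specified in this card. As one hypothetical layout, each row could pair an image filename with its metadata fields and a delimited list of dominant colors, which the standard-library `csv` module can read directly (the column names here are assumptions):

```python
import csv
import io

# Hypothetical metadata_colors.csv contents (schema assumed for illustration).
sample = """filename,width,height,dominant_colors
image1.jpg,640,480,"(240,16,16);(16,16,240)"
image2.jpg,800,600,"(16,240,16)"
"""

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    colors = row["dominant_colors"].split(";")
    print(row["filename"], len(colors), "colors")
```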
|
|
|
### Training Script

Use the `Trainer` class from Hugging Face or implement a custom PyTorch training loop to fine-tune the model.
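No training script is published with this card. The following is a generic PyTorch fine-tuning skeleton showing the shape of such a custom loop, under the assumption that the model maps image tensors to color targets; the stand-in model, shapes, loss, and synthetic batch are all placeholders:

```python
import torch
from torch import nn

# Placeholder stand-in for MetaColorModel's trainable parameters (shapes assumed).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 9))  # e.g. k=3 RGB colors
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Synthetic batch in place of a real image/label DataLoader.
images = torch.rand(4, 3, 8, 8)
targets = torch.rand(4, 9)

model.train()
for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)  # forward pass + loss
    loss.backward()                          # backpropagate gradients
    optimizer.step()                         # update parameters
```

In practice you would swap the placeholder model for the loaded MetaColorModel and iterate over a `DataLoader` built from the dataset layout above.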
|
|
|
## License

This model is released under the MIT License.
|
|
|
## Citation

If you use this model in your work, please cite:

```
@misc{MetaColorModel,
  title={MetaColorModel: A Hugging Face-Compatible Image Analysis Model},
  author={Surya},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/surya2706/image-metadata-extract}}
}
```
|
|
|
## Acknowledgments

- Built with the Hugging Face `transformers` library.
- Uses `Pillow` for image processing and `numpy` for numerical operations.
|
|
|
## Feedback

For questions or feedback, please contact [[email protected]] or open an issue on the [GitHub repository](https://github.com/Surya2706/image-metadata-extract).