# Model Card for Fast Segment Anything Model (FSAM-MNR)
- Developed by: Robert James
- Finetuned from model: YOLOv8
## Model Sources
- Repository: https://github.com/robathanjames/fsx-models
## Uses
This Fast Segment Anything Model (FSAM-MNR) is based on Ultralytics' FastSAM model, fine-tuned with additional training data. It aims to be a real-time, CNN-based solution for the Segment Anything task: segmenting any object in an image from a variety of user interaction prompts. FSAM-MNR significantly reduces computational demands while maintaining competitive performance, making it a practical choice for a variety of vision tasks.
## Recommendations
Use at your own discretion.
- Running the model: https://www.tensorflow.org/lite/guide/inference
## Important concepts
TensorFlow Lite inference typically follows these steps:

1. **Loading a model.** Load the `.tflite` model into memory; it contains the model's execution graph.
2. **Transforming data.** Raw input data generally does not match the input format the model expects. For example, you might need to resize an image or change its format to be compatible with the model.
3. **Running inference.** Use the TensorFlow Lite API to execute the model. This involves a few steps, such as building the interpreter and allocating tensors.
4. **Interpreting output.** When you receive results from model inference, interpret the output tensors in a way that is meaningful and useful in your application.
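The four steps above can be sketched end to end in Python. To keep the example self-contained, it converts a tiny Keras model to a TFLite flatbuffer in memory rather than loading the FSAM-MNR `.tflite` file (which is what you would do in practice); the model architecture and shapes here are illustrative only.

```python
import numpy as np
import tensorflow as tf

# Build and convert a trivial Keras model so we have a TFLite flatbuffer to
# work with. In practice you would load your FSAM-MNR .tflite file instead.
keras_model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(keras_model).convert()

# 1. Load the model into memory and build the interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()

# 2. Transform raw input to the shape and dtype the model expects.
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
x = np.random.rand(1, 4).astype(input_details["dtype"])

# 3. Run inference.
interpreter.set_tensor(input_details["index"], x)
interpreter.invoke()

# 4. Interpret the output tensor (here, a softmax over two classes).
probs = interpreter.get_tensor(output_details["index"])
```

When loading from disk instead, pass `model_path="your_model.tflite"` to `tf.lite.Interpreter`; the four steps are otherwise identical.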