Test set used for evaluation.

#2
by ZhengPeng7 - opened

Hi, BRIAAI team. Congratulations on your excellent work!
Thanks for citing my BiRefNet. I'm really glad to see that it can help.

I'm curious about one thing. What is the test set you used for the evaluation?

Thanks for the explanation in advance!

Hi Peng Zheng,

Thanks for getting in touch! We truly value the foundation you’ve provided, which has played a significant role in our progress.
While we’d love to share the full benchmark dataset, we're unable to do so as it includes proprietary data. However, we're more than happy to provide the dataset distributions.
Please don’t hesitate to let us know if there’s anything else we can assist with.

Best regards,
Or
[Attachment: image.png — benchmark dataset distributions]


Thanks for the reply. Have you ever used publicly available datasets for evaluation, for example DIS-VD in DIS5K or P3M-500-NP in P3M?


While we unfortunately cannot publish the benchmark on which we developed the model, out of respect for copyright, Or has provided as much information about it as possible. The GitHub repository at https://github.com/Efrat-Taig/RMBG-2.0 contains a script that lets anyone easily create a benchmark for their specific use case.
Additionally, I've prepared another benchmark for comparison. Again, this is not the original benchmark; it is for a specific use case.
I've also written a script that compares our previous model against the new one:
https://github.com/Efrat-Taig/RMBG-2.0/blob/main/compare_bria_models.py
I haven't yet compared it to BiRefNet; it would be great if someone from the community could do so.
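As a rough illustration of the kind of model-vs-model scoring such a comparison involves (this is my own sketch, not the logic of the linked `compare_bria_models.py` script), one could score each model's predicted alpha mattes against ground-truth masks with standard metrics such as MAE and the F-beta measure commonly reported for DIS5K and P3M:

```python
import numpy as np

def mask_mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between two uint8 masks, scaled to [0, 1]."""
    pred = pred.astype(np.float64) / 255.0
    gt = gt.astype(np.float64) / 255.0
    return float(np.abs(pred - gt).mean())

def mask_f_measure(pred: np.ndarray, gt: np.ndarray,
                   thresh: int = 128, beta2: float = 0.3) -> float:
    """F-beta score (beta^2 = 0.3, as is conventional in salient-object
    detection) after binarizing both masks at `thresh`."""
    p = pred >= thresh
    g = gt >= thresh
    tp = np.logical_and(p, g).sum()
    precision = tp / max(p.sum(), 1)
    recall = tp / max(g.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return float((1 + beta2) * precision * recall / (beta2 * precision + recall))
```

With per-image scores like these, comparing two models is just a matter of averaging each metric over the same test images and looking at the gap.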


I know... Thanks for your detailed explanation!

BRIA AI org

We can also discuss this in our Discord community: https://discord.gg/xT7Tu2uB
