Accessing the pretrained encoders to test our vision-language survival analysis framework

#1
by yuukilp - opened

Hello, authors! Big congratulations on your incredible work. I really love it.

Having witnessed the exciting performance of mSTAR on various downstream tasks, I would like to leverage this pathology foundation model to test our vision-language survival analysis framework. I will strictly follow the model's usage license, and the model will not be used for any commercial purposes.

Hope to get approved soon. Thanks!

Hello, authors! Great work.
Requesting access to the model!
Thank you very much!

Owner

Thank you for your request. Your access has been approved. Please proceed as needed. Let us know if you encounter any issues.

mSTAR can now be loaded directly via timm; please use the following code:

import timm

model = timm.create_model(
    'hf-hub:Wangyh/mSTAR',
    pretrained=True,
    init_values=1e-5,
    dynamic_img_size=True,
)
