An attempt at an embedding model:

The method is the same as stella-v2; I just extended the context length for tao. (I found that if you want to use the full 8k context, you may need to convert the model to float32.)

Now I'm working on tao-v2, which will have a different structure.

I will release tao-v2 as soon as I can.

Thank you to the open source community.
