Wilfredo Martel
wilfoderek
AI & ML interests
I am a software developer with a deep interest in learning Artificial Intelligence; semantic search in particular intrigues me.
Organizations
None yet
wilfoderek's activity
Release custom fine-tuning code with torch?
4
#39 opened 19 days ago
by
CVHvn
Could you please share the process for fine-tuning?
#1 opened 2 days ago
by
wilfoderek
A practical use case from your great work for the Spanish language
4
#9 opened 4 months ago
by
prudant
Further fine-tuning for domain adaptation
1
#12 opened 16 days ago
by
al-h
Saving and Loading the fine-tuned model
12
#24 opened 11 months ago
by
maiia-bocharova
How to train in my domain
6
#2 opened over 1 year ago
by
chaochaoli
Request for Assistance with Fine-Tuning the nomic-embed-text-v1 Model for the Spanish Language
2
#20 opened 5 months ago
by
wilfoderek
Why do the results in Colab vary from those on Hugging Face?
#2 opened 5 months ago
by
wilfoderek
How many GPUs are required to fine-tune bge-m3 on 1 million triplets?
3
#18 opened 8 months ago
by
wilfoderek
Share dataset
#1 opened 5 months ago
by
wilfoderek
Steps to fine-tune the model on the Spanish language
#8 opened 5 months ago
by
wilfoderek
Request for Detailed Experimental Procedure For Spanish
1
#15 opened 5 months ago
by
wilfoderek
Explain the steps to fine-tune on my own data
#1 opened 5 months ago
by
wilfoderek
Any information about this model?
#1 opened 8 months ago
by
wilfoderek
Dataset format for fine-tuning
7
#7 opened 12 months ago
by
andreaKIM
Maybe some small bugs
3
#14 opened 8 months ago
by
prudant
Is it multilingual?
1
#3 opened 8 months ago
by
wilfoderek
Does it support Spanish?
1
#9 opened 8 months ago
by
wilfoderek
About continuing training
3
#2 opened 8 months ago
by
alielfilali01
Example in README could use more explanation
4
#9 opened about 1 year ago
by
tordbb
Steps to reproduce the model on a legal dataset
4
#1 opened 11 months ago
by
wilfoderek
Multilingual support?
4
#7 opened over 1 year ago
by
lyua1225
Fine-tuning for a Dutch forum
7
#21 opened 11 months ago
by
Ziizu
Semantic Search
5
#14 opened 11 months ago
by
dilolo
Prompt Format
2
#2 opened 11 months ago
by
ruben1965
Fine-tuning on a text domain
5
#3 opened over 1 year ago
by
wilfoderek
Fine-tuning in my own domain
2
#1 opened over 1 year ago
by
wilfoderek
Field explanation, especially the long number (hard negative)
1
#1 opened over 1 year ago
by
wilfoderek
e5-large-v2 requirements for training in non-English languages?
2
#3 opened over 1 year ago
by
wilfoderek
Fine-tuning Universal Sentence Encoder on my own domain
#1 opened over 1 year ago
by
wilfoderek
Which is the paraphrase training dataset used for the teacher model?
1
#2 opened almost 2 years ago
by
yuricampbell
Share your training process
#1 opened over 1 year ago
by
wilfoderek
I would like to reproduce your model; please share your GitHub or Colab
#1 opened over 1 year ago
by
wilfoderek
Do you have information to reproduce your work?
1
#3 opened over 1 year ago
by
wilfoderek
Replicate your model step by step
3
#2 opened over 1 year ago
by
wilfoderek
Replicate the model step by step
1
#1 opened over 1 year ago
by
wilfoderek
Steps to replicate your work
#1 opened over 1 year ago
by
wilfoderek
More information
#1 opened over 1 year ago
by
wilfoderek
Any Colab to reproduce the training?
6
#2 opened over 1 year ago
by
wilfoderek
How to improve accuracy?
#1 opened over 1 year ago
by
wilfoderek
How to fine-tune
6
#105 opened about 2 years ago
by
nora1008
Not running on Google Colab
#1 opened over 1 year ago
by
wilfoderek