luismsgomes committed
Commit 94729b5 · 1 Parent(s): 34c1355

fixed README
Files changed (1): README.md (+5 -4)
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+language: pt
 license: mit
 library_name: sentence-transformers
 pipeline_tag: sentence-similarity
@@ -30,7 +31,7 @@ Then you can use the model like this:
 from sentence_transformers import SentenceTransformer
 sentences = ["This is an example sentence", "Each sentence is converted"]
 
-model = SentenceTransformer('{MODEL_NAME}')
+model = SentenceTransformer('serafim-335m-portuguese-pt-sentence-encoder')
 embeddings = model.encode(sentences)
 print(embeddings)
 ```
@@ -56,8 +57,8 @@ def mean_pooling(model_output, attention_mask):
 sentences = ['This is an example sentence', 'Each sentence is converted']
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
-model = AutoModel.from_pretrained('{MODEL_NAME}')
+tokenizer = AutoTokenizer.from_pretrained('serafim-335m-portuguese-pt-sentence-encoder')
+model = AutoModel.from_pretrained('serafim-335m-portuguese-pt-sentence-encoder')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
@@ -79,7 +80,7 @@ print(sentence_embeddings)
 
 <!--- Describe how your model was evaluated -->
 
-For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
+For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=serafim-335m-portuguese-pt-sentence-encoder)
 
 
 ## Training
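
For reference, here is a minimal sketch of how the lines changed in the last two hunks fit into the README's plain HuggingFace Transformers usage snippet. The `mean_pooling` helper is the standard sentence-transformers README boilerplate that the `@@ -56,8 +57,8 @@` hunk header references; it is not part of this diff. The bare repo id is copied verbatim from the commit, and loading from the Hub may require the full `owner/name` path (an assumption, not stated in the diff).

```python
import torch
from transformers import AutoTokenizer, AutoModel


def mean_pooling(model_output, attention_mask):
    # Average token embeddings, ignoring padding positions via the attention mask.
    token_embeddings = model_output[0]  # first element holds all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(
        input_mask_expanded.sum(1), min=1e-9
    )


sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub using the repo id set in this commit.
# NOTE: the full Hub path may need the owner namespace prefixed (assumption).
tokenizer = AutoTokenizer.from_pretrained('serafim-335m-portuguese-pt-sentence-encoder')
model = AutoModel.from_pretrained('serafim-335m-portuguese-pt-sentence-encoder')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings without tracking gradients.
with torch.no_grad():
    model_output = model(**encoded_input)

# Pool token embeddings into one fixed-size vector per sentence.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print(sentence_embeddings)
```

Each row of `sentence_embeddings` is one fixed-size vector per input sentence, analogous to what `SentenceTransformer(...).encode(sentences)` returns in the first snippet of the README.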