omarelsayeed committed · Commit 7777464 · 1 Parent(s): fd5b089

Upload folder using huggingface_hub
Files changed (4):
  1. README.md +5 -5
  2. pytorch_model.bin +1 -1
  3. sentence_bert_config.json +1 -1
  4. tokenizer.json +1 -1
README.md CHANGED
@@ -85,14 +85,14 @@ The model was trained with the parameters:
 
 **DataLoader**:
 
-`torch.utils.data.dataloader.DataLoader` of length 480 with parameters:
+`torch.utils.data.dataloader.DataLoader` of length 9677 with parameters:
 ```
 {'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
 ```
 
 **Loss**:
 
-`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
+`__main__.LoggingMNRLoss` with parameters:
 ```
 {'scale': 20.0, 'similarity_fct': 'cos_sim'}
 ```
@@ -100,13 +100,13 @@ The model was trained with the parameters:
 Parameters of the fit()-Method:
 ```
 {
-    "epochs": 4,
+    "epochs": 2,
     "evaluation_steps": 0,
     "evaluator": "NoneType",
     "max_grad_norm": 1,
     "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
     "optimizer_params": {
-        "lr": 0.0005
+        "lr": 5e-05
     },
     "scheduler": "WarmupLinear",
     "steps_per_epoch": null,
@@ -119,7 +119,7 @@ Parameters of the fit()-Method:
 ## Full Model Architecture
 ```
 SentenceTransformer(
-  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
+  (0): Transformer({'max_seq_length': 80, 'do_lower_case': False}) with Transformer model: BertModel
   (1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
 )
 ```
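The README hunks above record a retrain: the DataLoader grows from 480 to 9677 batches, the stock `MultipleNegativesRankingLoss` is swapped for a custom `__main__.LoggingMNRLoss`, epochs drop from 4 to 2, and the learning rate falls from 5e-04 to 5e-05. A quick sketch of the schedule arithmetic these numbers imply (the 10% warmup fraction is an assumption; the diff does not state it):

```python
# Training-schedule arithmetic implied by the updated README hunk.
# batches_per_epoch, epochs, and lr come from the diff; the warmup
# fraction is assumed, not stated anywhere in the commit.
batches_per_epoch = 9677   # new DataLoader length
epochs = 2                 # "epochs": 2
lr = 5e-05                 # "lr": 5e-05

total_steps = batches_per_epoch * epochs
warmup_steps = int(0.1 * total_steps)  # assumed 10% warmup for WarmupLinear

print(total_steps)   # 19354
print(warmup_steps)  # 1935
```

With `"scheduler": "WarmupLinear"` and `"steps_per_epoch": null`, sentence-transformers derives the step count from the DataLoader length, so the longer DataLoader roughly offsets the halved epoch count.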
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:83127159d799535b99b7e43197dd8aec99d05acae6c0f553811c761a61b7c2f8
+oid sha256:f9f59ceb4c5ed27d084475090eab5ae6326986828725714247f7463538e45d7c
 size 46223689
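`pytorch_model.bin` is stored as a Git LFS pointer, so the diff only swaps the SHA-256 oid; the byte size (46223689) is unchanged, consistent with a retrain of the same architecture. A minimal sketch of reading such a pointer file (the helper name is illustrative, not part of any library):

```python
# A Git LFS pointer is a short text file of "key value" lines.
POINTER = """version https://git-lfs.github.com/spec/v1
oid sha256:f9f59ceb4c5ed27d084475090eab5ae6326986828725714247f7463538e45d7c
size 46223689
"""

def parse_lfs_pointer(text):
    # Split each line on the first space: "oid sha256:<digest>", "size <bytes>", ...
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "oid": digest, "size": int(fields["size"])}

info = parse_lfs_pointer(POINTER)
print(info["size"])  # 46223689
print(info["algo"])  # sha256
```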
sentence_bert_config.json CHANGED
@@ -1,4 +1,4 @@
 {
-    "max_seq_length": 128,
+    "max_seq_length": 80,
     "do_lower_case": false
 }
tokenizer.json CHANGED
@@ -2,7 +2,7 @@
   "version": "1.0",
   "truncation": {
     "direction": "Right",
-    "max_length": 128,
+    "max_length": 80,
     "strategy": "LongestFirst",
     "stride": 0
   },
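Both config hunks lower the maximum sequence length from 128 to 80, and the tokenizer's `"direction": "Right"` setting means longer inputs lose tokens from the end. A toy sketch of the effect (the function name is illustrative, not the `tokenizers` API):

```python
def truncate_right(token_ids, max_length=80):
    # "Right"-direction truncation keeps the leading tokens and drops the
    # tail; with "stride": 0 and a single sequence, that is a plain slice.
    return token_ids[:max_length]

ids = list(range(128))             # a sequence that fit under the old 128 limit
print(len(truncate_right(ids)))    # 80
print(truncate_right(ids)[-1])     # 79  (tokens 80..127 are dropped)
```

Any text that previously tokenized to between 81 and 128 tokens is now silently cut, so downstream users of this model see at most 80 tokens per input.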