mohamed20-AI committed
Commit f0d0ef7 · verified · 1 parent: f0c2a33

mohamed20-AI/model_title

Files changed (2):
  1. README.md (+56, −54)
  2. config_sentence_transformers.json (+2, −2)
README.md CHANGED
@@ -10,36 +10,37 @@ tags:
  - loss:CoSENTLoss
  base_model: abdeljalilELmajjodi/model
  widget:
- - source_sentence: Woman in white in foreground and a man slightly behind walking
- with a sign for John's Pizza and Gyro in the background.
- sentences:
- - They are walking with a sign.
- - A married couple is sleeping.
- - There are children present
- - source_sentence: Woman in white in foreground and a man slightly behind walking
- with a sign for John's Pizza and Gyro in the background.
- sentences:
- - A child with mom and dad, on summer vacation at the beach.
- - A person is outdoors, on a horse.
- - The woman is wearing white.
  - source_sentence: Two adults, one female in white, with shades and one male, gray
  clothes, walking across a street, away from a eatery with a blurred image of a
  dark colored red shirted person in the foreground.
  sentences:
- - Two adults swimming in water
- - A couple watch a little girl play by herself on the beach.
- - Near a couple of restaurants, two people walk across the street.
- - source_sentence: Woman in white in foreground and a man slightly behind walking
- with a sign for John's Pizza and Gyro in the background.
+ - Two people ride bicycles into a tunnel.
+ - There are people just getting on a train
+ - There are children present
+ - source_sentence: A man with blond-hair, and a brown shirt drinking out of a public
+ water fountain.
+ sentences:
+ - Some women are hugging on vacation.
+ - The family is sitting down for dinner.
+ - A blond man wearing a brown shirt is reading a book on a bench in the park
+ - source_sentence: Two women who just had lunch hugging and saying goodbye.
+ sentences:
+ - There are two woman in this picture.
+ - Two adults run across the street to get away from a red shirted person chasing
+ them.
+ - The woman is wearing black.
+ - source_sentence: A woman in a green jacket and hood over her head looking towards
+ a valley.
  sentences:
- - They are working for John's Pizza.
- - Two adults walking across a road near the convicted prisoner dressed in red
- - Women are waiting by a tram.
- - source_sentence: A man, woman, and child enjoying themselves on a beach.
+ - The woman is wearing green.
+ - A woman in white.
+ - A man is drinking juice.
+ - source_sentence: An older man sits with his orange juice at a small table in a coffee
+ shop while employees in bright colored shirts smile in the background.
  sentences:
+ - They are protesting outside the capital.
+ - A couple are playing frisbee with a young child at the beach.
+ - A boy flips a burger.
  datasets:
  - sentence-transformers/all-nli
  pipeline_tag: sentence-similarity
@@ -58,10 +59,10 @@ model-index:
  type: pair-score-evaluator-dev
  metrics:
  - type: pearson_cosine
- value: 0.01358701091758253
+ value: -0.12381534704198764
  name: Pearson Cosine
  - type: spearman_cosine
- value: 0.02861316917596507
+ value: -0.06398099132915955
  name: Spearman Cosine
  ---

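The `pearson_cosine` and `spearman_cosine` values changed in the hunk above are correlations between the model's cosine similarities and the gold pair scores on the dev set. Such numbers are commonly produced with `EmbeddingSimilarityEvaluator`; the sketch below assumes that (or an equivalent pair-score evaluator) is what generated the `pair-score-evaluator-dev` entry, reuses the card's placeholder model id, and takes its three gold-scored pairs from the evaluation samples listed later in this card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id from the card

# Gold-scored dev pairs copied from the card's evaluation samples.
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=[
        "Woman in white in foreground and a man slightly behind walking with a sign for John's Pizza and Gyro in the background.",
        "A couple play in the tide with their young son.",
        "A couple playing with a little boy on the beach.",
    ],
    sentences2=[
        "The woman is wearing black.",
        "The family is sitting down for dinner.",
        "A couple are playing frisbee with a young child at the beach.",
    ],
    scores=[0.0, 0.0, 0.5],
    name="pair-score-evaluator-dev",
)

# Returns a dict with keys such as "pair-score-evaluator-dev_pearson_cosine"
# and "pair-score-evaluator-dev_spearman_cosine".
print(evaluator(model))
```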
@@ -115,9 +116,9 @@ from sentence_transformers import SentenceTransformer
  model = SentenceTransformer("sentence_transformers_model_id")
  # Run inference
  sentences = [
- 'A man, woman, and child enjoying themselves on a beach.',
- 'A family of three is at the mall shopping.',
- 'A team is trying to tag a runner out.',
+ 'An older man sits with his orange juice at a small table in a coffee shop while employees in bright colored shirts smile in the background.',
+ 'A boy flips a burger.',
+ 'They are protesting outside the capital.',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
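The usage snippet edited above stops at printing the embedding shape. A minimal follow-on sketch, assuming the same placeholder model id, that turns those embeddings into pairwise similarity scores with the `model.similarity()` helper available in recent sentence-transformers releases:

```python
from sentence_transformers import SentenceTransformer

# Placeholder id, as in the card's own snippet; replace with the pushed repo id.
model = SentenceTransformer("sentence_transformers_model_id")

sentences = [
    "An older man sits with his orange juice at a small table in a coffee shop while employees in bright colored shirts smile in the background.",
    "A boy flips a burger.",
    "They are protesting outside the capital.",
]

# Encode, then compute the cosine-based similarity matrix between all pairs.
embeddings = model.encode(sentences)               # shape: (3, embedding_dim)
scores = model.similarity(embeddings, embeddings)  # shape: (3, 3)
print(scores)
```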
@@ -164,8 +165,8 @@ You can finetune this model on your own dataset.

  | Metric | Value |
  |:--------------------|:-----------|
- | pearson_cosine | 0.0136 |
- | **spearman_cosine** | **0.0286** |
+ | pearson_cosine | -0.1238 |
+ | **spearman_cosine** | **-0.064** |

  <!--
  ## Bias, Risks and Limitations
@@ -189,16 +190,16 @@ You can finetune this model on your own dataset.
  * Size: 80 training samples
  * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
  * Approximate statistics based on the first 80 samples:
- | | sentence1 | sentence2 | score |
- |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------|
- | type | string | string | float |
- | details | <ul><li>min: 10 tokens</li><li>mean: 26.15 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.68 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
+ | | sentence1 | sentence2 | score |
+ |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------|
+ | type | string | string | float |
+ | details | <ul><li>min: 10 tokens</li><li>mean: 25.34 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 12.2 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.51</li><li>max: 1.0</li></ul> |
  * Samples:
- | sentence1 | sentence2 | score |
- |:--------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------|:-----------------|
- | <code>Two women, holding food carryout containers, hug.</code> | <code>Two groups of rival gang members flipped each other off.</code> | <code>0.0</code> |
- | <code>A man and a woman cross the street in front of a pizza and gyro restaurant.</code> | <code>The people are standing still on the curb.</code> | <code>0.0</code> |
- | <code>Woman in white in foreground and a man slightly behind walking with a sign for John's Pizza and Gyro in the background.</code> | <code>The woman is waiting for a friend.</code> | <code>0.5</code> |
+ | sentence1 | sentence2 | score |
+ |:--------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------|:-----------------|
+ | <code>Two adults, one female in white, with shades and one male, gray clothes, walking across a street, away from a eatery with a blurred image of a dark colored red shirted person in the foreground.</code> | <code>Some people board a train.</code> | <code>0.0</code> |
+ | <code>A few people in a restaurant setting, one of them is drinking orange juice.</code> | <code>The people are sitting at desks in school.</code> | <code>0.0</code> |
+ | <code>The school is having a special event in order to show the american culture on how other cultures are dealt with in parties.</code> | <code>A school hosts a basketball game.</code> | <code>0.0</code> |
  * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
  ```json
  {
@@ -215,16 +216,16 @@ You can finetune this model on your own dataset.
  * Size: 20 evaluation samples
  * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
  * Approximate statistics based on the first 20 samples:
- | | sentence1 | sentence2 | score |
- |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------|
- | type | string | string | float |
- | details | <ul><li>min: 10 tokens</li><li>mean: 24.05 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.2 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.62</li><li>max: 1.0</li></ul> |
+ | | sentence1 | sentence2 | score |
+ |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------|
+ | type | string | string | float |
+ | details | <ul><li>min: 10 tokens</li><li>mean: 27.3 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 11.1 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
  * Samples:
- | sentence1 | sentence2 | score |
- |:---------------------------------------------------------------------|:--------------------------------------------------------------------------|:-----------------|
- | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>1.0</code> |
- | <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>1.0</code> |
- | <code>A couple playing with a little boy on the beach.</code> | <code>A couple watch a little girl play by herself on the beach.</code> | <code>0.0</code> |
+ | sentence1 | sentence2 | score |
+ |:--------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------|:-----------------|
+ | <code>Woman in white in foreground and a man slightly behind walking with a sign for John's Pizza and Gyro in the background.</code> | <code>The woman is wearing black.</code> | <code>0.0</code> |
+ | <code>A couple play in the tide with their young son.</code> | <code>The family is sitting down for dinner.</code> | <code>0.0</code> |
+ | <code>A couple playing with a little boy on the beach.</code> | <code>A couple are playing frisbee with a young child at the beach.</code> | <code>0.5</code> |
  * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
  ```json
  {
@@ -310,6 +311,7 @@ You can finetune this model on your own dataset.
  - `fsdp`: []
  - `fsdp_min_num_params`: 0
  - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `tp_size`: 0
  - `fsdp_transformer_layer_cls_to_wrap`: None
  - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  - `deepspeed`: None
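For orientation, a minimal fine-tuning sketch matching the setup this card describes: `CoSENTLoss` over `sentence1`/`sentence2`/`score` pairs from `sentence-transformers/all-nli`, trained with `SentenceTransformerTrainer`. The output path, batch size, and the 80/20 sample slices are illustrative assumptions rather than the exact values of this run:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Base model named in the card metadata.
model = SentenceTransformer("abdeljalilELmajjodi/model")

# Pair-score layout: columns sentence1, sentence2 and a float score in [0, 1].
train_dataset = load_dataset("sentence-transformers/all-nli", "pair-score", split="train[:80]")
eval_dataset = load_dataset("sentence-transformers/all-nli", "pair-score", split="dev[:20]")

loss = losses.CoSENTLoss(model)  # defaults: scale=20.0, cosine similarity

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",           # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=8,  # assumed; not read from the card
    eval_strategy="epoch",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```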
@@ -367,17 +369,17 @@ You can finetune this model on your own dataset.
  ### Training Logs
  | Epoch | Step | Training Loss | Validation Loss | pair-score-evaluator-dev_spearman_cosine |
  |:-------:|:------:|:-------------:|:---------------:|:----------------------------------------:|
- | 0.1 | 1 | 2.5069 | - | - |
- | 0.5 | 5 | 3.0496 | - | - |
- | **1.0** | **10** | **3.0534** | **2.7607** | **0.0286** |
+ | 0.1 | 1 | 3.0033 | - | - |
+ | 0.5 | 5 | 2.987 | - | - |
+ | **1.0** | **10** | **3.0908** | **2.6311** | **-0.064** |

  * The bold row denotes the saved checkpoint.

  ### Framework Versions
  - Python: 3.11.12
  - Sentence Transformers: 4.1.0
- - Transformers: 4.52.3
- - PyTorch: 2.7.0+cu126
+ - Transformers: 4.51.3
+ - PyTorch: 2.6.0+cu124
  - Accelerate: 1.6.0
  - Datasets: 3.6.0
  - Tokenizers: 0.21.1
 
config_sentence_transformers.json CHANGED
@@ -1,8 +1,8 @@
  {
  "__version__": {
  "sentence_transformers": "4.1.0",
- "transformers": "4.52.3",
- "pytorch": "2.7.0+cu126"
+ "transformers": "4.51.3",
+ "pytorch": "2.6.0+cu124"
  },
  "prompts": {},
  "default_prompt_name": null,