---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_keras_callback
model-index:
- name: Kiran2004/Roberta_qca_sample
  results: []
---


# Kiran2004/Roberta_qca_sample

This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0050
- Validation Loss: 0.0012
- Epoch: 4

## Model description

This is a TensorFlow/Keras fine-tune of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2), a RoBERTa-base model trained for extractive question answering on SQuAD 2.0. The dataset used for this fine-tuning run is not documented.

## Intended uses & limitations

Like its base model, this checkpoint is intended for extractive question answering: given a question and a context passage, it predicts the answer span within the context. Because the fine-tuning data is undocumented, performance on any particular domain should be validated before use.
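A minimal usage sketch with the 🤗 Transformers `pipeline` API; the question and context below are purely illustrative:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an extractive QA pipeline.
qa = pipeline("question-answering", model="Kiran2004/Roberta_qca_sample")

# Illustrative inputs; any question/context pair works.
result = qa(
    question="What is the capital of France?",
    context="Paris is the capital and largest city of France.",
)
print(result["answer"])
```

The pipeline returns a dict with `answer`, `score`, and the `start`/`end` character offsets of the predicted span in the context.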

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: Adam (beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False, jit_compile=True, no weight decay or gradient clipping)
- learning rate schedule: PolynomialDecay from 1e-05 to 0.0 over 100 decay steps (power=1.0, cycle=False, i.e. a linear ramp)
- training_precision: float32
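The `PolynomialDecay` schedule above, with power 1.0 and no cycling, is a linear ramp from 1e-05 down to 0 over 100 steps, held at 0 thereafter. A plain-Python sketch of the same formula:

```python
def polynomial_decay(step, initial_lr=1e-5, end_lr=0.0, decay_steps=100, power=1.0):
    """Keras-style PolynomialDecay with cycle=False: the learning rate
    is held at end_lr once `step` reaches `decay_steps`."""
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (frac ** power) + end_lr

print(polynomial_decay(0))    # 1e-05 (initial learning rate)
print(polynomial_decay(50))   # halfway through the ramp
print(polynomial_decay(100))  # 0.0 (end learning rate)
```

With power=1.0 the decay is exactly linear, so the rate at step 50 is half the initial value (5e-06).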

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.9216     | 0.0300          | 0     |
| 0.0290     | 0.0028          | 1     |
| 0.0077     | 0.0015          | 2     |
| 0.0055     | 0.0013          | 3     |
| 0.0050     | 0.0012          | 4     |


### Framework versions

- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.15.2