lukecq committed
Commit 958beef · 1 Parent(s): 5209873

Update README.md

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
README.md CHANGED
@@ -31,7 +31,8 @@ There are three versions of models released. The details are:
  |------------|-----------|----------|-------|-------|----|
  | [zero-shot-classify-SSTuning-base](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-base) | [roberta-base](https://huggingface.co/roberta-base) | 125M | Low | High | 20.48M |
  | [zero-shot-classify-SSTuning-large](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-large) | [roberta-large](https://huggingface.co/roberta-large) | 355M | Medium | Medium | 5.12M |
- | [zero-shot-classify-SSTuning-ALBERT](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-ALBERT) | [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) | 235M | High | Low| 5.12M |
+ | [zero-shot-classify-SSTuning-ALBERT](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-ALBERT)
+ | [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) | 235M | High | Low| 5.12M |

  Please note that zero-shot-classify-SSTuning-base is trained with more data (20.48M) than the paper, as this will increase the accuracy.

@@ -87,8 +88,8 @@ print(predictions)
           Chip Hong Chang and
           Lidong Bing},
  title = {Zero-Shot Text Classification via Self-Supervised Tuning},
- booktitle = {Findings of the 2023 ACL},
+ booktitle = {Findings of the Association for Computational Linguistics: ACL 2023},
  year = {2023},
- url = {},
+ url = {https://arxiv.org/abs/2305.11442},
  }
  ```
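For context on the models this README describes: SSTuning frames zero-shot classification as picking one lettered option from a list of candidate labels presented together with the input text. The exact prompt template is defined in the model cards on the Hub; the sketch below only illustrates the option-enumeration idea, and the `build_prompt` helper is a hypothetical stand-in, not the released API.

```python
# Hypothetical sketch of SSTuning-style option formatting.
# The real prompt template lives in the model cards linked above.
from string import ascii_uppercase


def build_prompt(text: str, labels: list[str]) -> str:
    """Enumerate candidate labels as lettered options, then append the input text."""
    options = " ".join(f"({ascii_uppercase[i]}) {lab}" for i, lab in enumerate(labels))
    return f"{options} {text}"


prompt = build_prompt("The movie was great!", ["negative", "positive"])
# The model then scores each lettered option and the argmax is the prediction.
```

The classifier's output head selects among the option letters, which is what lets the same checkpoint handle arbitrary, unseen label sets at inference time.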