mahdin70 committed
Commit 1b7fc03 · verified · 1 Parent(s): 7839753

Update README.md

Files changed (1)
  1. README.md +1 -19
README.md CHANGED
@@ -28,9 +28,7 @@ This model is a fine-tuned version of **Microsoft's UniXcoder**, optimized for d
  - **Architecture:** Transformer-based sequence classification

  ## Model Sources
- - **Repository:** [Add Hugging Face Model Link Here]
  - **Paper (UniXcoder):** [https://arxiv.org/abs/2203.03850](https://arxiv.org/abs/2203.03850)
- - **Demo (Optional):** [Add Gradio/Streamlit Link Here]

  ## Uses

@@ -131,20 +129,4 @@ The model was evaluated using **20% of the dataset**, with the following results
  | Factor | Value |
  |---------|--------|
  | **GPU Used** | 2x T4 GPU |
- | **Training Time** | ~1 hour |
-
- ## Citation
- If you use this model in your research or applications, please cite:
-
- ```
- @article{unixcoder,
-   title={UniXcoder: Unified Cross-Modal Pretraining for Code Representation},
-   author={Guo, Daya and Wang, Shuo and Wan, Yao and others},
-   year={2022},
-   journal={arXiv preprint arXiv:2203.03850}
- }
- ```
-
- ## Model Card Authors
- - **Mukit Mahdin**
- - Contact: [[email protected]]
+ | **Training Time** | ~1 hour |
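For reference alongside this card, below is a minimal, hypothetical sketch of loading a UniXcoder-based sequence classifier with Hugging Face `transformers`. The `model_id` is a placeholder (the commit removes the README's repository link placeholder, so the actual repo id isn't shown here), and the single-snippet inference flow is an assumption rather than the card's documented usage.

```python
# Minimal usage sketch (hypothetical). The repo id below is a placeholder,
# since this commit removes the README's repository link placeholder.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-namespace/unixcoder-finetuned-classifier"  # placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Classify a single code snippet
code_snippet = "int divide(int a, int b) { return a / b; }"
inputs = tokenizer(code_snippet, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

print("predicted class:", logits.argmax(dim=-1).item())
```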
 