nielsr (HF staff) committed
Commit 0b7ab54 · verified · 1 Parent(s): 31fdc7d

Add library_name and pipeline_tag to metadata


This PR adds the `library_name` and `pipeline_tag` to the model card metadata. The `library_name` is set to `transformers` given the model's compatibility with the Hugging Face Transformers library. The `pipeline_tag` is set to `text-generation` as the model is used for text generation tasks.
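For reference, the model card front matter after this change parses to the following YAML (reconstructed from the diff below; key order follows the updated file):

```yaml
---
base_model:
- Qwen/Qwen2.5-7B-Instruct
datasets:
- chtmp223/CLIPPER
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
```

With `library_name: transformers` and `pipeline_tag: text-generation` set, the Hub can surface the model under the text-generation task filter and show the standard Transformers usage snippet on the model page.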

Files changed (1)
  1. README.md +6 -26
README.md CHANGED
@@ -1,11 +1,13 @@
 ---
 base_model:
 - Qwen/Qwen2.5-7B-Instruct
-license: apache-2.0
-language:
-- en
 datasets:
 - chtmp223/CLIPPER
+language:
+- en
+license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
 ---
 
 # Qwen2.5-7B-CLIPPER
@@ -43,26 +45,4 @@ Please check [our paper](https://arxiv.org/abs/2502.14854) for more details on t
 | learning_rate | 1.0e-6 |
 | lr_scheduler_type | cosine |
 | max_length | 131072 |
-| num_train_epochs | 1 |
-| optim | adamw_torch |
-
-#### Software
-
-Training code is adapted from [https://github.com/Qihoo360/360-LLaMA-Factory/tree/1b5398f539c7d94a530f3f32b53553a3b1928314](https://github.com/Qihoo360/360-LLaMA-Factory/tree/1b5398f539c7d94a530f3f32b53553a3b1928314).
-
-## 🤗 Inference
-Inference is done with [vLLM](https://github.com/vllm-project/vllm) on 1 A100-80GB.
-
-## 📜 Citation
-
-```
-@misc{pham2025clippercompressionenableslongcontext,
-      title={CLIPPER: Compression enables long-context synthetic data generation},
-      author={Chau Minh Pham and Yapei Chang and Mohit Iyyer},
-      year={2025},
-      eprint={2502.14854},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL},
-      url={https://arxiv.org/abs/2502.14854},
-}
-```
+| num_train_epochs | 1 |\n| optim | adamw_torch |\n\n#### Software\n\nTraining code is adapted from [https://github.com/Qihoo360/360-LLaMA-Factory/tree/1b5398f539c7d94a530f3f32b53553a3b1928314](https://github.com/Qihoo360/360-LLaMA-Factory/tree/1b5398f539c7d94a530f3f32b53553a3b1928314).\n\n## 🤗 Inference\nInference is done with [vLLM](https://github.com/vllm-project/vllm) on 1 A100-80GB. \n\n## 📜 Citation \n\n```\n@misc{pham2025clippercompressionenableslongcontext,\n title={CLIPPER: Compression enables long-context synthetic data generation}, \n author={Chau Minh Pham and Yapei Chang and Mohit Iyyer},\n year={2025},\n eprint={2502.14854},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2502.14854}, \n}\n```