JingzeShi committed on
Commit 5a13893
1 Parent(s): cc09b7a

Update README.md

Fix syntax errors

Files changed (1)
  1. README.md +13 -10
README.md CHANGED
@@ -4,22 +4,25 @@ license_name: doge
  license_link: LICENSE
  datasets:
  - michaelwzhu/ChatMed_Consult_Dataset
- metrics:
- - code_eval
  base_model:
  - JingzeShi/Doge-197M
  pipeline_tag: question-answering
  library_name: transformers
+ language:
+ - en
+ - zh
+ tags:
+ - medical
  ---

- ## **Doge 197M for Medical QA**
+ ## **Doge-197M for MedicalQA**

  This model is a fine-tuned version of [JingzeShi/Doge-197M](https://huggingface.co/JingzeShi/Doge-197M).
  It has been trained using [TRL](https://github.com/huggingface/trl).

  Doge is an ongoing research project where we aim to train a series of small language models to further explore whether the Transformer framework allows for more complex feedforward network structures, enabling the model to have fewer cache states and larger knowledge capacity.

- In addition, Doge uses Inner Function Attention with Dynamic Mask as sequence transformation and Cross Domain Mixture of Experts as state transformation. This model is trained by Jingze Shi, it only allows text input and text generation, for detailed algorithm and model architecture, please refer to [Wonderful Matrices](https://arxiv.org/abs/2407.16958), the ongoing research repository is [Doge](https://github.com/LoserCheems/Doge).
+ In addition, Doge uses Dynamic Mask Attention as sequence transformation and can use Multi-Layer Perceptron or Cross Domain Mixture of Experts as state transformation. Dynamic Mask Attention allows the Transformer to use self-attention during training and state space during inference, and Cross Domain Mixture of Experts can directly inherit the weights of Multi-Layer Perceptron for further training. This model is trained by Jingze Shi, it only allows text input and text generation, for detailed algorithm and model architecture, please refer to [Wonderful Matrices](https://arxiv.org/abs/2412.11834), the ongoing research repository is [Wonderful Matrices](https://github.com/LoserCheems/WonderfulMatrices).


  ## Uses
@@ -74,11 +77,11 @@ In addition, Doge uses Inner Function Attention with Dynamic Mask as sequence tr
  ... )
  ```

- **Fine-tue Task**:
+ **Fine-tune Task**:
  - We selected an open-source Chinese medical question answering dataset for fine-tuning.


- **Fine-tue Environment**:
+ **Fine-tune Environment**:
  - Image: nvcr.io/nvidia/pytorch:24.10-py3
  - Hardware: 1x NVIDIA RTX 3090
  - Software: Transformers
@@ -92,12 +95,12 @@ In addition, Doge uses Inner Function Attention with Dynamic Mask as sequence tr

  ```bibtex
  @misc{shi2024wonderfulmatrices,
- title={Wonderful Matrices: More Efficient and Effective Architecture for Language Modeling Tasks},
+ title={Wonderful Matrices: Combining for a More Efficient and Effective Foundation Model Architecture},
- author={Jingze Shi and Bingheng Wu and Lu He and Luchang Jiang},
+ author={Jingze Shi and Bingheng Wu},
  year={2024},
- eprint={2407.16958},
+ eprint={2412.11834},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
- url={https://arxiv.org/abs/2407.16958},
+ url={https://arxiv.org/abs/2412.11834},
  }
  ```
 
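The card's own usage snippet is truncated in this diff, so the following is only a minimal sketch of how the fine-tuned checkpoint might be loaded and queried with Transformers. The repository id `JingzeShi/Doge-197M-MedicalQA`, the use of `trust_remote_code=True` (the Doge architecture ships custom modeling code), and the generation settings are assumptions, not taken from the card.

```python
# Hedged usage sketch -- the repo id, trust_remote_code, and generation
# settings below are assumptions, not taken from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "JingzeShi/Doge-197M-MedicalQA"  # assumed id of the fine-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# Example consultation-style question: "What should I watch out for with a cold?"
question = "感冒了应该注意什么？"
inputs = tokenizer(question, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If the fine-tune applied a chat or QA prompt template, `tokenizer.apply_chat_template` would be the more faithful entry point than a raw question string.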
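For the fine-tune task described in the card (base model JingzeShi/Doge-197M, the michaelwzhu/ChatMed_Consult_Dataset, one RTX 3090), a supervised fine-tuning run with TRL could look roughly like the sketch below. The dataset column names (`query`/`response`), the prompt format, the hyperparameters, and the output directory are assumptions; a recent TRL release is assumed as well (older versions pass the tokenizer as `tokenizer=` rather than `processing_class=`).

```python
# Hedged sketch of a TRL supervised fine-tuning run on ChatMed_Consult_Dataset.
# Column names ("query"/"response"), prompt format, and hyperparameters are
# assumptions, not taken from the model card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

base_model = "JingzeShi/Doge-197M"
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base_model, trust_remote_code=True)

dataset = load_dataset("michaelwzhu/ChatMed_Consult_Dataset", split="train")

def to_text(example):
    # Flatten each consultation into a single training string (assumed schema).
    return {"text": f"问：{example['query']}\n答：{example['response']}"}

dataset = dataset.map(to_text)

config = SFTConfig(
    output_dir="doge-197m-medicalqa",  # illustrative
    per_device_train_batch_size=8,     # sized for one RTX 3090 (assumed)
    gradient_accumulation_steps=4,
    learning_rate=5e-5,
    num_train_epochs=1,
    logging_steps=50,
)

trainer = SFTTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```

Inside the card's stated environment (the nvcr.io/nvidia/pytorch:24.10-py3 image), installing `trl` and `datasets` on top of the bundled PyTorch should be enough to run a script like this.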