Update README.md
README.md (changed):

````diff
@@ -17,7 +17,7 @@ extra_gated_fields:
 
 # gpt-base-2048-clmbr
 
-This is a **gpt** model with context length **2048** with **117209088** parameters from the [Context Clues paper](
+This is a **gpt** model with context length **2048** with **117209088** parameters from the [Context Clues paper](https://arxiv.org/abs/2412.16178)
 
 It is a foundation model trained from scratch on the structured data within 2.57 million deidentified EHRs from Stanford Medicine.
 
@@ -129,11 +129,12 @@ We train our model using an autoregressive next code prediction objective, i.e.
 
 **BibTeX:**
 ```
-@article{
-
-
-
-
+@article{wornow2024contextclues,
+  title={Context Clues: Evaluating Long Context Models for Clinical Prediction Tasks on EHRs},
+  author={Michael Wornow and Suhana Bedi and Miguel Angel Fuentes Hernandez and Ethan Steinberg and Jason Alan Fries and Christopher Ré and Sanmi Koyejo and Nigam H. Shah},
+  year={2024},
+  eprint={2412.16178},
+  url={https://arxiv.org/abs/2412.16178},
 }
 ```
 
````