Commit 1822528 committed by Divyasreepat
Parent(s): dc4a2ba
Update README.md with new model card content

README.md CHANGED
@@ -7,7 +7,7 @@ tags:
 - text-generation
 pipeline_tag: text-generation
 ---
-
+### Model Overview
 GPT-2 is a language model published by OpenAI. Models are trained on WebText and range in size from 125 million to 1.5 billion parameters. See the model card below for benchmarks, data sources, and intended use cases.
 
 Weights are released under the [MIT License](https://opensource.org/license/mit). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
@@ -182,4 +182,4 @@ gpt2_lm = keras_hub.models.GPT2CausalLM.from_preset(
     preprocessor=None,
 )
 gpt2_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
-```
+```