prasadsachin committed
Commit 80f93db
1 Parent(s): ddbb883
Update README.md
README.md CHANGED
@@ -1,5 +1,10 @@
 ---
 library_name: keras-hub
+license: mit
+language:
+- en
+tags:
+- text-generation
 ---
 ## Model Overview
 GPT-2 is a language model published by OpenAI. Models are fine-tuned on WebText, and range in size from 125 million to 1.5 billion parameters. See the model card below for benchmarks, data sources, and intended use cases.
@@ -176,4 +181,4 @@ gpt2_lm = keras_hub.models.GPT2CausalLM.from_preset(
     preprocessor=None,
 )
 gpt2_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
-```
+```
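The second hunk shows only the tail of the README's fine-tuning example. For context, here is a minimal sketch of what the full snippet plausibly looks like in the KerasHub API; the preset name `gpt2_base_en`, the `sequence_length` value, and the `features` strings are assumptions, since the hunk does not show them:

```python
import keras_hub

# Hypothetical training strings; the hunk does not show the real data.
features = ["The quick brown fox jumped.", "I forgot my homework."]

# Run preprocessing up front: the causal-LM preprocessor tokenizes the
# strings and returns (inputs, shifted targets, sample weights).
preprocessor = keras_hub.models.GPT2CausalLMPreprocessor.from_preset(
    "gpt2_base_en",  # assumed preset name
    sequence_length=128,
)
x, y, sw = preprocessor(features)

# Load the model with `preprocessor=None` (the line visible in the hunk)
# so that fit() consumes the pre-tokenized tensors directly.
gpt2_lm = keras_hub.models.GPT2CausalLM.from_preset(
    "gpt2_base_en",
    preprocessor=None,
)
gpt2_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```

Passing `preprocessor=None` is the usual KerasHub pattern for tokenizing a dataset once ahead of time instead of re-running preprocessing inside every training step.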