---
library_name: keras-hub
license: apache-2.0
language:
  - en
tags:
  - text-classification
pipeline_tag: text-classification
---

## Model Overview

BERT (Bidirectional Encoder Representations from Transformers) is a set of language models published by Google. They are intended for text classification and text embedding, not for text generation. See the model card below for benchmarks, data sources, and intended use cases.

Weights and Keras model code are released under the Apache 2.0 License.
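
For embedding use cases, the encoder can be loaded on its own, without a classification head. A minimal sketch, assuming the `bert_base_en` preset listed below and dummy preprocessed inputs:

```python
import keras_hub
import numpy as np

# Load just the BERT encoder (no classification head).
backbone = keras_hub.models.BertBackbone.from_preset("bert_base_en")
outputs = backbone({
    "token_ids": np.ones((1, 12), dtype="int32"),
    "segment_ids": np.zeros((1, 12), dtype="int32"),
    "padding_mask": np.ones((1, 12), dtype="int32"),
})
# One embedding per token, plus a single pooled embedding per sequence.
print(outputs["sequence_output"].shape)  # (1, 12, 768)
print(outputs["pooled_output"].shape)    # (1, 768)
```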

## Links

## Installation

Keras and KerasHub can be installed with:

```bash
pip install -U -q keras-hub
pip install -U -q "keras>=3"
```

JAX, TensorFlow, and PyTorch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started page](https://keras.io/getting_started/).
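
Keras 3 can run on any of these backends; the backend is chosen with the `KERAS_BACKEND` environment variable, which must be set before Keras is imported:

```python
import os

# Set before importing Keras; "tensorflow" is the default backend.
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow", "torch"

import keras
import keras_hub
```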

## Presets

The following model checkpoints are provided by the Keras team. Full code examples for each are available below.

| Preset name | Parameters | Description |
|---|---|---|
| `bert_tiny_en_uncased` | 4.39M | 2-layer BERT model where all input is lowercased. |
| `bert_small_en_uncased` | 28.76M | 4-layer BERT model where all input is lowercased. |
| `bert_medium_en_uncased` | 41.37M | 8-layer BERT model where all input is lowercased. |
| `bert_base_en_uncased` | 109.48M | 12-layer BERT model where all input is lowercased. |
| `bert_base_en` | 108.31M | 12-layer BERT model where case is maintained. |
| `bert_base_zh` | 102.27M | 12-layer BERT model. Trained on Chinese Wikipedia. |
| `bert_base_multi` | 177.85M | 12-layer BERT model where case is maintained. |
| `bert_large_en_uncased` | 335.14M | 24-layer BERT model where all input is lowercased. |
| `bert_large_en` | 333.58M | 24-layer BERT model where case is maintained. |
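
Any preset name from the table can be passed to `from_preset`. For example, a sketch that loads the smallest checkpoint for quick experiments:

```python
import keras_hub

# Swap the preset name for any row in the table above.
classifier = keras_hub.models.BertClassifier.from_preset(
    "bert_tiny_en_uncased",
    num_classes=4,
)
```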

## Example Usage

```python
import keras
import keras_hub
import numpy as np
```

Raw string data.

```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]

# Pretrained classifier.
classifier = keras_hub.models.BertClassifier.from_preset(
    "bert_base_en",
    num_classes=4,
)
# Fine-tune on the labeled examples, then predict logits for them.
classifier.fit(x=features, y=labels, batch_size=2)
classifier.predict(x=features, batch_size=2)

# Re-compile (e.g., with a new learning rate).
classifier.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(5e-5),
    jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)
```
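
Because the classifier is compiled with `from_logits=True`, `predict` returns raw logits. A short sketch of turning them into class ids, reusing `classifier` and `features` from above:

```python
logits = classifier.predict(x=features, batch_size=2)
class_ids = np.argmax(logits, axis=-1)  # one predicted class per input string
```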

Preprocessed integer data.

```python
features = {
    # Token ids from the WordPiece vocabulary (dummy values here).
    "token_ids": np.ones(shape=(2, 12), dtype="int32"),
    # 0 marks the first segment, 1 the second.
    "segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]] * 2),
    # 1 marks real tokens, 0 marks padding positions.
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
labels = [0, 3]

# Pretrained classifier without preprocessing.
classifier = keras_hub.models.BertClassifier.from_preset(
    "bert_base_en",
    num_classes=4,
    preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
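
With `preprocessor=None`, the caller is responsible for building these three tensors. As a hedged alternative, a preset preprocessor can produce them from raw strings; this sketch assumes the `BertTextClassifierPreprocessor` name used by recent KerasHub releases (older releases call it `BertPreprocessor`):

```python
preprocessor = keras_hub.models.BertTextClassifierPreprocessor.from_preset(
    "bert_base_en",
    sequence_length=12,
)
# Returns (x, y); x is a dict of "token_ids", "segment_ids", "padding_mask".
x, y = preprocessor(["The quick brown fox jumped."], [0])
```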

## Example Usage with Hugging Face URI

```python
import keras
import keras_hub
import numpy as np
```

Raw string data.

```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]

# Pretrained classifier.
classifier = keras_hub.models.BertClassifier.from_preset(
    "hf://keras/bert_base_en",
    num_classes=4,
)
classifier.fit(x=features, y=labels, batch_size=2)
classifier.predict(x=features, batch_size=2)

# Re-compile (e.g., with a new learning rate).
classifier.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(5e-5),
    jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)
```

Preprocessed integer data.

```python
features = {
    "token_ids": np.ones(shape=(2, 12), dtype="int32"),
    "segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
labels = [0, 3]

# Pretrained classifier without preprocessing.
classifier = keras_hub.models.BertClassifier.from_preset(
    "hf://keras/bert_base_en",
    num_classes=4,
    preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```