---
language: en
pipeline_tag: fill-mask
tags:
  - legal
license: mit
---

# InLegalBERT

Model and tokenizer files for the InLegalBERT model.

## Training Data

For building the pre-training corpus of Indian legal text, we collected a large corpus of case documents from the Indian Supreme Court and many High Courts of India. The cases span 1950 to 2019 and cover all legal domains, such as Civil, Criminal, and Constitutional. In total, our dataset contains around 5.4 million Indian legal documents, all in English; the raw text corpus is around 27 GB.

## Training Setup

This model is initialized with the LEGAL-BERT-SC model from the paper *LEGAL-BERT: The Muppets straight out of Law School*. In our work, we refer to that model as LegalBERT, and to our re-trained model as InLegalBERT. We further train LegalBERT on our data for 300K steps on the Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) tasks.
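
As a quick illustration of the MLM objective the model was trained on, here is a minimal fill-mask sketch (the example sentence is our own, not from the training corpus):

```python
from transformers import pipeline

# Load InLegalBERT in a fill-mask pipeline (matches the model's pipeline_tag).
fill_mask = pipeline("fill-mask", model="law-ai/InLegalBERT")

# Hypothetical legal sentence; [MASK] is the standard BERT mask token.
for prediction in fill_mask("The court granted [MASK] to the appellant."):
    print(prediction["token_str"], prediction["score"])
```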

## Model Overview

This model has the same configuration as the bert-base-uncased model: 12 hidden layers, a hidden size of 768, 12 attention heads, and ~110M parameters.
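
These hyperparameters can be read directly from the published configuration, for example:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("law-ai/InLegalBERT")
print(config.num_hidden_layers)    # 12
print(config.hidden_size)          # 768
print(config.num_attention_heads)  # 12
```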

## Usage

Using the tokenizer (same as LegalBERT):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("law-ai/InLegalBERT")
```
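
For illustration, encoding a sentence (the example text is ours; this continues from the snippet above):

```python
encoded = tokenizer("The appeal is dismissed with costs.", return_tensors="pt")
print(encoded["input_ids"].shape)  # (1, sequence_length), including [CLS] and [SEP]
```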

Using the model to get embeddings/representations for a sentence:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("law-ai/InLegalBERT")
```
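
The snippet above only loads the model. One common way to turn its final hidden states into a single sentence vector is mean pooling; here is a minimal sketch (the sentence and the pooling choice are our own assumptions, continuing from the snippets above):

```python
import torch

# Reuses `tokenizer` and `model` from the snippets above.
inputs = tokenizer("The appeal is dismissed with costs.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token representations into one 768-dimensional sentence vector.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)  # shape: (1, 768)
```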

## Fine-tuning Results

## Citation