Model and tokenizer files for the InLegalBERT model.

### Training Data
For building the pre-training corpus of Indian legal text, we collected a large corpus of case documents from the Indian Supreme Court and many High Courts of India.
These documents were collected from diverse publicly available sources on the Web, such as official websites of these courts (e.g., [the website of the Indian Supreme Court](https://main.sci.gov.in/)), the erstwhile website of the Legal Information Institute of India, the popular legal repository [IndianKanoon](https://www.indiankanoon.org), and so on.
The court cases in our dataset range from 1950 to 2019, and belong to all legal domains, such as Civil, Criminal, Constitutional, and so on.
Additionally, we collected 1,113 Central Government Acts, which are the documents codifying the laws of the country. Each Act is a collection of related laws, called Sections. These 1,113 Acts contain a total of 32,021 Sections.
In total, our dataset contains around 5.4 million Indian legal documents (all in the English language).
The raw text corpus size is around 27 GB.

### Training Setup
This model is initialized with the [LEGAL-BERT-SC model](https://huggingface.co/nlpaueb/legal-bert-base-uncased) from the paper [LEGAL-BERT: The Muppets straight out of Law School](https://aclanthology.org/2020.findings-emnlp.261/). In our work, we refer to this model as LegalBERT, and our re-trained model as InLegalBERT.
We further train this model on our data for 300K steps on the Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) tasks.
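
The pre-training pipeline itself is not included in this card; purely as an illustration, the sketch below shows what a single MLM + NSP step could look like with the `BertForPreTraining` heads, starting from the LegalBERT checkpoint (the example sentence pair and the masking are simplified assumptions, not our actual data pipeline):

```python
import torch
from transformers import AutoTokenizer, BertForPreTraining

# Start from the LEGAL-BERT-SC checkpoint that InLegalBERT is initialized with
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/legal-bert-base-uncased")
model = BertForPreTraining.from_pretrained("nlpaueb/legal-bert-base-uncased")

# A sentence pair for the NSP objective (label 0 = the second sentence follows the first)
encoding = tokenizer(
    "The appellant filed a special leave petition before this Court.",
    "The petition was dismissed on the ground of delay.",
    return_tensors="pt",
)

# In real MLM pre-training, ~15% of the tokens would be randomly masked;
# the unmasked ids are used as labels here only to keep the sketch short.
mlm_labels = encoding["input_ids"].clone()

outputs = model(
    **encoding,
    labels=mlm_labels,
    next_sentence_label=torch.tensor([0]),
)
outputs.loss.backward()  # combined MLM + NSP loss for one training step
```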

### Model Overview
This model has the same configuration as the [bert-base-uncased model](https://huggingface.co/bert-base-uncased):
12 hidden layers, 768 hidden dimensionality, 12 attention heads, ~110M parameters
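
These values can be verified against the published configuration; for example (a minimal sketch using the standard `transformers` config API):

```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("law-ai/InLegalBERT")
print(config.num_hidden_layers)    # 12
print(config.hidden_size)          # 768
print(config.num_attention_heads)  # 12

# Rough total parameter count (~110M)
model = AutoModel.from_pretrained("law-ai/InLegalBERT")
print(sum(p.numel() for p in model.parameters()))
```
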
### Usage
Using the tokenizer (same as [LegalBERT](https://huggingface.co/nlpaueb/legal-bert-base-uncased))
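
For example, a minimal sketch assuming the standard `AutoTokenizer` API:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("law-ai/InLegalBERT")
text = "The appellant filed a special leave petition before this Court."
encoded_input = tokenizer(text, return_tensors="pt")
```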

Using the model to get embeddings/representations for a sentence
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("law-ai/InLegalBERT")
```
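
One way to turn this into an actual sentence embedding (a sketch that reuses the `tokenizer` and `model` loaded above and takes the [CLS] token representation; mean pooling over `last_hidden_state` is an equally valid choice):

```python
import torch

inputs = tokenizer("The petition was dismissed on the ground of delay.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
sentence_embedding = outputs.last_hidden_state[:, 0, :]  # [CLS] vector, shape (1, 768)
```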

Using the model with its pre-training (MLM and NSP) heads
```python
from transformers import BertForPreTraining

model_with_pretraining_heads = BertForPreTraining.from_pretrained("law-ai/InLegalBERT")
```

### Fine-tuning Results

### Citation