---
language: en
pipeline_tag: fill-mask
tags: 
- legal
license: mit
---

### InLegalBERT
Model and tokenizer files for the InLegalBERT model.

### Training Data
To build a pre-training corpus of Indian legal text, we collected a large set of case documents from the Indian Supreme Court and many High Courts of India.
These documents were collected from diverse publicly available sources on the Web, such as official websites of these courts (e.g., [the website of the Indian Supreme Court](https://main.sci.gov.in/)), the erstwhile website of the Legal Information Institute of India, 
the popular legal repository [IndianKanoon](https://www.indiankanoon.org), and so on.
The court cases in our dataset range from 1950 to 2019 and cover all legal domains, such as Civil, Criminal, and Constitutional.
Additionally, we collected 1,113 Central Government Acts, the documents that codify the laws of the country. Each Act is a collection of related laws, called Sections. These 1,113 Acts contain a total of 32,021 Sections.
In total, our dataset contains around 5.4 million Indian legal documents (all in the English language). 
The raw text corpus size is around 27 GB.

### Training Objective
This model is initialized with the [LEGAL-BERT-SC model](https://huggingface.co/nlpaueb/legal-bert-base-uncased) from the paper [LEGAL-BERT: The Muppets straight out of Law School](https://aclanthology.org/2020.findings-emnlp.261/), and then further pre-trained on the Indian legal corpus described above with BERT's standard objectives, Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). In our work, we refer to the original model as LegalBERT, and our re-trained model as InLegalBERT.
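
As an illustrative sketch of this lineage (not the authors' actual training script), re-training starts by loading the LEGAL-BERT-SC checkpoint with its pre-training heads:
```python
from transformers import BertForPreTraining

# InLegalBERT was initialized from this LEGAL-BERT-SC checkpoint
# before further pre-training on the Indian legal corpus
base_model = BertForPreTraining.from_pretrained("nlpaueb/legal-bert-base-uncased")
```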

### Usage
Using the tokenizer (same as [LegalBERT](https://huggingface.co/nlpaueb/legal-bert-base-uncased)):
```python
from transformers import AutoTokenizer

# Load the InLegalBERT tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("law-ai/InLegalBERT")
```
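For example, continuing from the snippet above, a quick check of the tokenization itself (the sentence is illustrative, not from the training data):
```python
# Inspect the WordPiece tokens produced for an example sentence
print(tokenizer.tokenize("The appellant filed a special leave petition."))
```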
Using the model to get embeddings/representations for a sentence:
```python
from transformers import AutoModel

# Load the base BERT encoder, without any task-specific head
model = AutoModel.from_pretrained("law-ai/InLegalBERT")
```
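A minimal sketch of actually extracting a sentence representation, continuing from the two snippets above (the example sentence and the choice of the `[CLS]` vector are illustrative; mean-pooling the token states is a common alternative):
```python
import torch

# Encode an illustrative sentence and run a forward pass without gradients
inputs = tokenizer("The appellant filed a special leave petition.", return_tensors="pt")
model.eval()
with torch.no_grad():
    outputs = model(**inputs)

# Token-level representations: (batch_size, sequence_length, hidden_size=768)
token_states = outputs.last_hidden_state
# One common sentence-level representation: the [CLS] token's hidden state
cls_embedding = token_states[:, 0, :]
print(cls_embedding.shape)  # torch.Size([1, 768])
```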
Using the model for further pre-training with MLM and NSP:
```python
from transformers import BertForPreTraining

# Load the model with the MLM and NSP pre-training heads attached
model_with_pretraining_heads = BertForPreTraining.from_pretrained("law-ai/InLegalBERT")
```
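Since the card's front matter declares `fill-mask` as the pipeline tag, the MLM head can also be exercised directly; a minimal sketch (the masked sentence is illustrative):
```python
from transformers import pipeline

# The fill-mask pipeline uses the model's MLM head
fill_mask = pipeline("fill-mask", model="law-ai/InLegalBERT")

# [MASK] is BERT's mask token; prints the top predicted fillers with scores
for prediction in fill_mask("The court dismissed the [MASK]."):
    print(prediction["token_str"], prediction["score"])
```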

### Citation