Lingo-IITGN committed
Commit: 769b519
Parent: 9d4290a

Update README.md

Files changed (1):
  README.md (+1, -1)
README.md CHANGED
@@ -45,7 +45,7 @@ The base model **``Ganga-1b``** trained on a monolingual **Hindi** language data
 
 Project Unity is an initiative aimed at addressing **India's linguistic diversity** and richness by creating a comprehensive resource that covers the country's major languages. Our goal is to achieve state-of-the-art performance in understanding and generating text in **Indian languages**.
 To achieve this, we train models on the monolingual regional languages of India. Our first release is the *Ganga-1B* model, *which has been trained on a large dataset of public-domain web-crawled Hindi language data, including news articles, web documents, books, government publications, educational materials, and social media conversations (filtered for quality)*. Additionally, the dataset has been further curated by native Indian speakers to ensure high quality.
-Importantly, the **Ganga-1B** model outperforms existing open-source models that support Indian languages, even at sizes of up to **7 billion parameters**.
+Importantly, the **Ganga-1B** model outperforms existing open-source models that support **Indian languages**, even at sizes of up to **7 billion parameters**.
 
 
 
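
For context, the model this README describes can be loaded like any other causal language model from the Hugging Face Hub. The sketch below is illustrative only: the repository id `LingoIITGN/ganga-1b`, the Hindi prompt, and the generation parameters are assumptions, not taken from this commit; consult the model card for the actual identifier and any recommended settings.

```python
# Minimal sketch: load and prompt the Ganga-1b model with Hugging Face transformers.
# The repository id below is an assumption for illustration; check the model card
# for the actual id and recommended generation settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LingoIITGN/ganga-1b"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a short Hindi prompt and generate a continuation.
prompt = "भारत की राजधानी"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```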