---
license: apache-2.0
language:
- hi
metrics:
- perplexity
widget:
- text: "BCCI ने टी-20 वर्ल्ड कप के बीच जिम्बाब्वे सीरीज के लिए टीम इंडिया का ऐलान कर दिया है। इस टीम में कई नए चेहरों को जगह दी गई है।"
example_title: "Example 1"
- text: "7 अक्टूबर को हमास से जंग शुरू होने के सात महीने बाद इजरायली सेना ने गाजा पट्टी में हमास के ठिकानों पर हमला किया था। इस हमले में हमास के कई ठिकानों को निशाना"
example_title: "Example 2"
---
# Model Card for Ganga-1b! 🌊
``Ganga-1b`` is a base model trained on a monolingual Hindi dataset as part of Project Unity.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/667b8f8ba271fc5a8e6929de/jG3tZnGPvH6vcGrvxO-YC.png)
## Model Details
### Model Description
Project Unity is an initiative aimed at addressing India's linguistic diversity and richness by creating a comprehensive resource that covers the country's major languages. Our goal is to achieve state-of-the-art performance in understanding and generating text in Indian languages, and to that end we train models on monolingual data for individual regional languages of India.

Our first release is the Ganga-1B model, trained on a large dataset of public-domain, web-crawled Hindi text, including news articles, web documents, books, government publications, educational materials, and social media conversations (filtered for quality). The dataset was further curated by native speakers to ensure high quality. Notably, the Ganga-1B model outperforms existing open-source models that support Indian languages, including models of up to 7 billion parameters.

Designed to be compact and efficient, the model runs easily on edge devices, making it well suited to applications that require generating human-like text under tight resource budgets. Its modest size also allows easy integration into resource-constrained environments such as personal devices or cloud infrastructure, enabling wider adoption of and innovation in AI-driven technologies.
- **Developed by:** Lingo Research Group, IIT Gandhinagar
- **Model type:** Transformer-based Language Model
- **Language(s) (NLP):** Bilingual (primary: Hindi [hi]; secondary: English [en])
- **License:** Apache 2.0
## How to Get Started with the Model
Use the code below to get started with the model.
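A minimal sketch, assuming the checkpoint loads with the standard `transformers` causal-LM classes; the `LingoIITGN/ganga-1b` repository ID used below is an assumption:

```python
# Minimal usage sketch; the repository ID below is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LingoIITGN/ganga-1b"  # assumed Hugging Face Hub repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

prompt = "BCCI ने टी-20 वर्ल्ड कप के बीच जिम्बाब्वे सीरीज के लिए टीम इंडिया का ऐलान कर दिया है।"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```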
## Bias, Risks, and Limitations
### Recommendations
This model is a research preview under ongoing iterative updates, and as such it provides only limited safety measures. It may also generate offensive content. Use of this model for any illegal, harmful, violent, racist, or sexual purposes is strictly prohibited.
## Evaluation
### Results
| Model | Fertility | Perplexity |
|:-----------:|:---------:|:------:|
| ganga-1b | 1.12 | 34.85 |
| pragna-1b | 1.58 | 12.74 |
| bloom-1b1 | 1.27 | 33.39 |
| bloom-1b7 | 1.27 | 26.63 |
| airavata-7b | 1.69 | 46.24 |
| bloom-3b | 1.27 | 23.77 |
| gemma-2b | 1.89 | 41.67 |
#### Summary
Among the compared models, ganga-1b achieves the lowest fertility (1.12); lower fertility means the tokenizer needs fewer tokens to encode the same Hindi text.
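As a rough illustration of the fertility metric (a sketch: fertility is taken here as average tokens per whitespace-delimited word, and the repository ID is an assumption):

```python
# Sketch: estimate tokenizer fertility as tokens per whitespace-delimited word.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LingoIITGN/ganga-1b")  # assumed repo ID

sample = "7 अक्टूबर को हमास से जंग शुरू होने के सात महीने बाद इजरायली सेना ने गाजा पट्टी में हमास के ठिकानों पर हमला किया था।"
words = sample.split()
tokens = tokenizer.tokenize(sample)
print(f"fertility ≈ {len(tokens) / len(words):.2f}")
```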
## Technical Specifications
### Model Architecture and Objective
Ganga-1b is a decoder-only transformer model with the following specifications (a configuration sketch follows the list):
* Layers: 16
* Attention heads: 32
* Embedding dimension: 2048
* Vocabulary size: 30,000
* Sliding window: 512
* Intermediate (MLP) dimension: 7168
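As a minimal illustration of these dimensions (a sketch only: the Mistral-style decoder class is an assumption suggested by the sliding-window attention, and the instantiated weights are untrained):

```python
# Sketch: instantiate an untrained model with the dimensions listed above.
# The Mistral-style class is an assumption (suggested by the sliding window);
# settings not listed (e.g., grouped-query attention) stay at class defaults.
from transformers import MistralConfig, MistralForCausalLM

config = MistralConfig(
    vocab_size=30000,
    hidden_size=2048,
    intermediate_size=7168,
    num_hidden_layers=16,
    num_attention_heads=32,
    sliding_window=512,
)
model = MistralForCausalLM(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")
```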
## Model Card Contact
Lingo-IITGN