| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (string, 0 to 18.3M chars) | metadata (string, 2 to 1.07B chars) | id (string, 5 to 122 chars) | last_modified (null) | tags (list, 1 to 1.84k items) | sha (null) | created_at (string, 25 chars) |
---|---|---|---|---|---|---|---|---|
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-xls-r-300m-36-tokens-with-lm-es
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Wer: 0.0868
- Cer: 0.0281
This model consists of a Wav2Vec2 model with an additional KenLM 5-gram language model for CTC decoding.
The model was trained after removing all characters that are not lowercase unaccented letters `a-z`, the Spanish accented vowels `á`, `é`, `í`, `ó`, `ú`, or the vowel with diaeresis, `ü`.
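The card does not include an inference snippet; the following is a minimal sketch (not part of the original card) using the generic `transformers` automatic-speech-recognition pipeline, where the audio file name is a placeholder:
```python
# Minimal inference sketch; assumes a local 16 kHz audio file "sample_es.wav"
# and that pyctcdecode and kenlm are installed so the bundled 5-gram LM is
# applied during CTC decoding.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="edugp/wav2vec2-xls-r-300m-36-tokens-with-lm-es",
)

print(asr("sample_es.wav")["text"])
```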
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 3.6512 | 0.07 | 400 | 0.5734 | 0.4325 |
| 0.4404 | 0.14 | 800 | 0.3329 | 0.3021 |
| 0.3465 | 0.22 | 1200 | 0.3067 | 0.2871 |
| 0.3214 | 0.29 | 1600 | 0.2808 | 0.2694 |
| 0.319 | 0.36 | 2000 | 0.2755 | 0.2677 |
| 0.3015 | 0.43 | 2400 | 0.2667 | 0.2437 |
| 0.3102 | 0.51 | 2800 | 0.2679 | 0.2475 |
| 0.2955 | 0.58 | 3200 | 0.2591 | 0.2421 |
| 0.292 | 0.65 | 3600 | 0.2547 | 0.2404 |
| 0.2961 | 0.72 | 4000 | 0.2824 | 0.2716 |
| 0.2906 | 0.8 | 4400 | 0.2531 | 0.2321 |
| 0.2886 | 0.87 | 4800 | 0.2668 | 0.2573 |
| 0.2934 | 0.94 | 5200 | 0.2608 | 0.2454 |
| 0.2844 | 1.01 | 5600 | 0.2414 | 0.2233 |
| 0.2649 | 1.09 | 6000 | 0.2412 | 0.2198 |
| 0.2587 | 1.16 | 6400 | 0.2432 | 0.2211 |
| 0.2631 | 1.23 | 6800 | 0.2414 | 0.2225 |
| 0.2584 | 1.3 | 7200 | 0.2489 | 0.2290 |
| 0.2588 | 1.37 | 7600 | 0.2341 | 0.2156 |
| 0.2581 | 1.45 | 8000 | 0.2323 | 0.2155 |
| 0.2603 | 1.52 | 8400 | 0.2423 | 0.2231 |
| 0.2527 | 1.59 | 8800 | 0.2381 | 0.2192 |
| 0.2588 | 1.66 | 9200 | 0.2323 | 0.2176 |
| 0.2543 | 1.74 | 9600 | 0.2391 | 0.2151 |
| 0.2528 | 1.81 | 10000 | 0.2295 | 0.2091 |
| 0.2535 | 1.88 | 10400 | 0.2317 | 0.2099 |
| 0.2501 | 1.95 | 10800 | 0.2225 | 0.2105 |
| 0.2441 | 2.03 | 11200 | 0.2356 | 0.2180 |
| 0.2275 | 2.1 | 11600 | 0.2341 | 0.2115 |
| 0.2281 | 2.17 | 12000 | 0.2269 | 0.2117 |
| 0.227 | 2.24 | 12400 | 0.2367 | 0.2125 |
| 0.2471 | 2.32 | 12800 | 0.2307 | 0.2090 |
| 0.229 | 2.39 | 13200 | 0.2231 | 0.2005 |
| 0.2325 | 2.46 | 13600 | 0.2243 | 0.2100 |
| 0.2314 | 2.53 | 14000 | 0.2252 | 0.2098 |
| 0.2309 | 2.6 | 14400 | 0.2269 | 0.2089 |
| 0.2267 | 2.68 | 14800 | 0.2155 | 0.1976 |
| 0.225 | 2.75 | 15200 | 0.2263 | 0.2067 |
| 0.2309 | 2.82 | 15600 | 0.2196 | 0.2041 |
| 0.225 | 2.89 | 16000 | 0.2212 | 0.2052 |
| 0.228 | 2.97 | 16400 | 0.2192 | 0.2028 |
| 0.2136 | 3.04 | 16800 | 0.2169 | 0.2042 |
| 0.2038 | 3.11 | 17200 | 0.2173 | 0.1998 |
| 0.2035 | 3.18 | 17600 | 0.2185 | 0.2002 |
| 0.207 | 3.26 | 18000 | 0.2358 | 0.2120 |
| 0.2102 | 3.33 | 18400 | 0.2213 | 0.2019 |
| 0.211 | 3.4 | 18800 | 0.2176 | 0.1980 |
| 0.2099 | 3.47 | 19200 | 0.2186 | 0.1960 |
| 0.2093 | 3.55 | 19600 | 0.2208 | 0.2016 |
| 0.2046 | 3.62 | 20000 | 0.2138 | 0.1960 |
| 0.2095 | 3.69 | 20400 | 0.2222 | 0.2023 |
| 0.2106 | 3.76 | 20800 | 0.2159 | 0.1964 |
| 0.2066 | 3.83 | 21200 | 0.2083 | 0.1931 |
| 0.2119 | 3.91 | 21600 | 0.2130 | 0.1957 |
| 0.2167 | 3.98 | 22000 | 0.2210 | 0.1987 |
| 0.1973 | 4.05 | 22400 | 0.2112 | 0.1930 |
| 0.1917 | 4.12 | 22800 | 0.2107 | 0.1891 |
| 0.1903 | 4.2 | 23200 | 0.2132 | 0.1911 |
| 0.1903 | 4.27 | 23600 | 0.2077 | 0.1883 |
| 0.1914 | 4.34 | 24000 | 0.2054 | 0.1901 |
| 0.1943 | 4.41 | 24400 | 0.2059 | 0.1885 |
| 0.1943 | 4.49 | 24800 | 0.2095 | 0.1899 |
| 0.1936 | 4.56 | 25200 | 0.2078 | 0.1879 |
| 0.1963 | 4.63 | 25600 | 0.2018 | 0.1884 |
| 0.1934 | 4.7 | 26000 | 0.2034 | 0.1872 |
| 0.2011 | 4.78 | 26400 | 0.2051 | 0.1896 |
| 0.1901 | 4.85 | 26800 | 0.2059 | 0.1858 |
| 0.1934 | 4.92 | 27200 | 0.2028 | 0.1832 |
| 0.191 | 4.99 | 27600 | 0.2046 | 0.1870 |
| 0.1775 | 5.07 | 28000 | 0.2081 | 0.1891 |
| 0.175 | 5.14 | 28400 | 0.2084 | 0.1904 |
| 0.19 | 5.21 | 28800 | 0.2086 | 0.1920 |
| 0.1798 | 5.28 | 29200 | 0.2079 | 0.1935 |
| 0.1765 | 5.35 | 29600 | 0.2145 | 0.1930 |
| 0.181 | 5.43 | 30000 | 0.2062 | 0.1918 |
| 0.1808 | 5.5 | 30400 | 0.2083 | 0.1875 |
| 0.1769 | 5.57 | 30800 | 0.2117 | 0.1895 |
| 0.1788 | 5.64 | 31200 | 0.2055 | 0.1857 |
| 0.181 | 5.72 | 31600 | 0.2057 | 0.1870 |
| 0.1781 | 5.79 | 32000 | 0.2053 | 0.1872 |
| 0.1852 | 5.86 | 32400 | 0.2077 | 0.1904 |
| 0.1832 | 5.93 | 32800 | 0.1979 | 0.1821 |
| 0.1758 | 6.01 | 33200 | 0.1957 | 0.1754 |
| 0.1611 | 6.08 | 33600 | 0.2028 | 0.1773 |
| 0.1606 | 6.15 | 34000 | 0.2018 | 0.1780 |
| 0.1702 | 6.22 | 34400 | 0.1977 | 0.1759 |
| 0.1649 | 6.3 | 34800 | 0.2073 | 0.1845 |
| 0.1641 | 6.37 | 35200 | 0.1947 | 0.1774 |
| 0.1703 | 6.44 | 35600 | 0.2009 | 0.1811 |
| 0.1716 | 6.51 | 36000 | 0.2091 | 0.1817 |
| 0.1732 | 6.58 | 36400 | 0.1942 | 0.1743 |
| 0.1642 | 6.66 | 36800 | 0.1930 | 0.1749 |
| 0.1685 | 6.73 | 37200 | 0.1962 | 0.1716 |
| 0.1647 | 6.8 | 37600 | 0.1977 | 0.1822 |
| 0.1647 | 6.87 | 38000 | 0.1917 | 0.1748 |
| 0.1667 | 6.95 | 38400 | 0.1948 | 0.1774 |
| 0.1647 | 7.02 | 38800 | 0.2018 | 0.1783 |
| 0.15 | 7.09 | 39200 | 0.2010 | 0.1796 |
| 0.1663 | 7.16 | 39600 | 0.1969 | 0.1731 |
| 0.1536 | 7.24 | 40000 | 0.1935 | 0.1726 |
| 0.1544 | 7.31 | 40400 | 0.2030 | 0.1799 |
| 0.1536 | 7.38 | 40800 | 0.1973 | 0.1772 |
| 0.1559 | 7.45 | 41200 | 0.1973 | 0.1763 |
| 0.1547 | 7.53 | 41600 | 0.2052 | 0.1782 |
| 0.1584 | 7.6 | 42000 | 0.1965 | 0.1737 |
| 0.1542 | 7.67 | 42400 | 0.1878 | 0.1725 |
| 0.1525 | 7.74 | 42800 | 0.1946 | 0.1750 |
| 0.1547 | 7.81 | 43200 | 0.1934 | 0.1691 |
| 0.1534 | 7.89 | 43600 | 0.1919 | 0.1711 |
| 0.1574 | 7.96 | 44000 | 0.1935 | 0.1745 |
| 0.1471 | 8.03 | 44400 | 0.1915 | 0.1689 |
| 0.1433 | 8.1 | 44800 | 0.1956 | 0.1719 |
| 0.1433 | 8.18 | 45200 | 0.1980 | 0.1720 |
| 0.1424 | 8.25 | 45600 | 0.1906 | 0.1681 |
| 0.1428 | 8.32 | 46000 | 0.1892 | 0.1649 |
| 0.1424 | 8.39 | 46400 | 0.1916 | 0.1698 |
| 0.1466 | 8.47 | 46800 | 0.1970 | 0.1739 |
| 0.1496 | 8.54 | 47200 | 0.1902 | 0.1662 |
| 0.1408 | 8.61 | 47600 | 0.1858 | 0.1649 |
| 0.1445 | 8.68 | 48000 | 0.1893 | 0.1648 |
| 0.1459 | 8.76 | 48400 | 0.1875 | 0.1686 |
| 0.1433 | 8.83 | 48800 | 0.1920 | 0.1673 |
| 0.1448 | 8.9 | 49200 | 0.1833 | 0.1631 |
| 0.1461 | 8.97 | 49600 | 0.1904 | 0.1693 |
| 0.1451 | 9.04 | 50000 | 0.1969 | 0.1661 |
| 0.1336 | 9.12 | 50400 | 0.1950 | 0.1674 |
| 0.1362 | 9.19 | 50800 | 0.1971 | 0.1685 |
| 0.1316 | 9.26 | 51200 | 0.1928 | 0.1648 |
| 0.132 | 9.33 | 51600 | 0.1908 | 0.1615 |
| 0.1301 | 9.41 | 52000 | 0.1842 | 0.1569 |
| 0.1322 | 9.48 | 52400 | 0.1892 | 0.1616 |
| 0.1391 | 9.55 | 52800 | 0.1956 | 0.1656 |
| 0.132 | 9.62 | 53200 | 0.1876 | 0.1598 |
| 0.1349 | 9.7 | 53600 | 0.1870 | 0.1624 |
| 0.1325 | 9.77 | 54000 | 0.1834 | 0.1586 |
| 0.1389 | 9.84 | 54400 | 0.1892 | 0.1647 |
| 0.1364 | 9.91 | 54800 | 0.1840 | 0.1597 |
| 0.1339 | 9.99 | 55200 | 0.1858 | 0.1626 |
| 0.1269 | 10.06 | 55600 | 0.1875 | 0.1619 |
| 0.1229 | 10.13 | 56000 | 0.1909 | 0.1619 |
| 0.1258 | 10.2 | 56400 | 0.1933 | 0.1631 |
| 0.1256 | 10.27 | 56800 | 0.1930 | 0.1640 |
| 0.1207 | 10.35 | 57200 | 0.1823 | 0.1585 |
| 0.1248 | 10.42 | 57600 | 0.1889 | 0.1596 |
| 0.1264 | 10.49 | 58000 | 0.1845 | 0.1584 |
| 0.1251 | 10.56 | 58400 | 0.1869 | 0.1588 |
| 0.1251 | 10.64 | 58800 | 0.1885 | 0.1613 |
| 0.1276 | 10.71 | 59200 | 0.1855 | 0.1575 |
| 0.1303 | 10.78 | 59600 | 0.1836 | 0.1597 |
| 0.1246 | 10.85 | 60000 | 0.1810 | 0.1573 |
| 0.1283 | 10.93 | 60400 | 0.1830 | 0.1581 |
| 0.1273 | 11.0 | 60800 | 0.1837 | 0.1619 |
| 0.1202 | 11.07 | 61200 | 0.1865 | 0.1588 |
| 0.119 | 11.14 | 61600 | 0.1889 | 0.1580 |
| 0.1179 | 11.22 | 62000 | 0.1884 | 0.1592 |
| 0.1187 | 11.29 | 62400 | 0.1824 | 0.1565 |
| 0.1198 | 11.36 | 62800 | 0.1848 | 0.1552 |
| 0.1154 | 11.43 | 63200 | 0.1866 | 0.1565 |
| 0.1211 | 11.51 | 63600 | 0.1862 | 0.1563 |
| 0.1177 | 11.58 | 64000 | 0.1816 | 0.1527 |
| 0.1156 | 11.65 | 64400 | 0.1834 | 0.1540 |
| 0.1144 | 11.72 | 64800 | 0.1837 | 0.1524 |
| 0.119 | 11.79 | 65200 | 0.1859 | 0.1538 |
| 0.1183 | 11.87 | 65600 | 0.1869 | 0.1558 |
| 0.122 | 11.94 | 66000 | 0.1853 | 0.1535 |
| 0.1197 | 12.01 | 66400 | 0.1871 | 0.1586 |
| 0.1096 | 12.08 | 66800 | 0.1838 | 0.1540 |
| 0.1074 | 12.16 | 67200 | 0.1915 | 0.1592 |
| 0.1084 | 12.23 | 67600 | 0.1845 | 0.1545 |
| 0.1097 | 12.3 | 68000 | 0.1904 | 0.1552 |
| 0.112 | 12.37 | 68400 | 0.1846 | 0.1578 |
| 0.1109 | 12.45 | 68800 | 0.1862 | 0.1549 |
| 0.1114 | 12.52 | 69200 | 0.1889 | 0.1552 |
| 0.1119 | 12.59 | 69600 | 0.1828 | 0.1530 |
| 0.1124 | 12.66 | 70000 | 0.1822 | 0.1540 |
| 0.1127 | 12.74 | 70400 | 0.1865 | 0.1589 |
| 0.1128 | 12.81 | 70800 | 0.1786 | 0.1498 |
| 0.1069 | 12.88 | 71200 | 0.1813 | 0.1522 |
| 0.1069 | 12.95 | 71600 | 0.1895 | 0.1558 |
| 0.1083 | 13.02 | 72000 | 0.1925 | 0.1557 |
| 0.1009 | 13.1 | 72400 | 0.1883 | 0.1522 |
| 0.1007 | 13.17 | 72800 | 0.1829 | 0.1480 |
| 0.1014 | 13.24 | 73200 | 0.1861 | 0.1510 |
| 0.0974 | 13.31 | 73600 | 0.1836 | 0.1486 |
| 0.1006 | 13.39 | 74000 | 0.1821 | 0.1462 |
| 0.0973 | 13.46 | 74400 | 0.1857 | 0.1484 |
| 0.1011 | 13.53 | 74800 | 0.1822 | 0.1471 |
| 0.1031 | 13.6 | 75200 | 0.1823 | 0.1489 |
| 0.1034 | 13.68 | 75600 | 0.1809 | 0.1452 |
| 0.0998 | 13.75 | 76000 | 0.1817 | 0.1490 |
| 0.1071 | 13.82 | 76400 | 0.1808 | 0.1501 |
| 0.1083 | 13.89 | 76800 | 0.1796 | 0.1475 |
| 0.1053 | 13.97 | 77200 | 0.1785 | 0.1470 |
| 0.0978 | 14.04 | 77600 | 0.1886 | 0.1495 |
| 0.094 | 14.11 | 78000 | 0.1854 | 0.1489 |
| 0.0915 | 14.18 | 78400 | 0.1854 | 0.1498 |
| 0.0947 | 14.25 | 78800 | 0.1888 | 0.1500 |
| 0.0939 | 14.33 | 79200 | 0.1885 | 0.1494 |
| 0.0973 | 14.4 | 79600 | 0.1877 | 0.1466 |
| 0.0946 | 14.47 | 80000 | 0.1904 | 0.1494 |
| 0.0931 | 14.54 | 80400 | 0.1815 | 0.1473 |
| 0.0958 | 14.62 | 80800 | 0.1905 | 0.1508 |
| 0.0982 | 14.69 | 81200 | 0.1881 | 0.1511 |
| 0.0963 | 14.76 | 81600 | 0.1823 | 0.1449 |
| 0.0943 | 14.83 | 82000 | 0.1782 | 0.1458 |
| 0.0981 | 14.91 | 82400 | 0.1795 | 0.1465 |
| 0.0995 | 14.98 | 82800 | 0.1811 | 0.1484 |
| 0.0909 | 15.05 | 83200 | 0.1822 | 0.1450 |
| 0.0872 | 15.12 | 83600 | 0.1890 | 0.1466 |
| 0.0878 | 15.2 | 84000 | 0.1859 | 0.1468 |
| 0.0884 | 15.27 | 84400 | 0.1825 | 0.1429 |
| 0.0871 | 15.34 | 84800 | 0.1816 | 0.1438 |
| 0.0883 | 15.41 | 85200 | 0.1817 | 0.1433 |
| 0.0844 | 15.48 | 85600 | 0.1821 | 0.1412 |
| 0.0843 | 15.56 | 86000 | 0.1863 | 0.1411 |
| 0.0805 | 15.63 | 86400 | 0.1863 | 0.1441 |
| 0.085 | 15.7 | 86800 | 0.1808 | 0.1440 |
| 0.0848 | 15.77 | 87200 | 0.1808 | 0.1421 |
| 0.0844 | 15.85 | 87600 | 0.1841 | 0.1406 |
| 0.082 | 15.92 | 88000 | 0.1850 | 0.1442 |
| 0.0854 | 15.99 | 88400 | 0.1773 | 0.1426 |
| 0.0835 | 16.06 | 88800 | 0.1888 | 0.1436 |
| 0.0789 | 16.14 | 89200 | 0.1922 | 0.1434 |
| 0.081 | 16.21 | 89600 | 0.1864 | 0.1448 |
| 0.0799 | 16.28 | 90000 | 0.1902 | 0.1428 |
| 0.0848 | 16.35 | 90400 | 0.1873 | 0.1422 |
| 0.084 | 16.43 | 90800 | 0.1835 | 0.1421 |
| 0.083 | 16.5 | 91200 | 0.1878 | 0.1390 |
| 0.0794 | 16.57 | 91600 | 0.1877 | 0.1398 |
| 0.0807 | 16.64 | 92000 | 0.1800 | 0.1385 |
| 0.0829 | 16.71 | 92400 | 0.1910 | 0.1434 |
| 0.0839 | 16.79 | 92800 | 0.1843 | 0.1381 |
| 0.0815 | 16.86 | 93200 | 0.1812 | 0.1365 |
| 0.0831 | 16.93 | 93600 | 0.1889 | 0.1383 |
| 0.0803 | 17.0 | 94000 | 0.1902 | 0.1403 |
| 0.0724 | 17.08 | 94400 | 0.1934 | 0.1380 |
| 0.0734 | 17.15 | 94800 | 0.1865 | 0.1394 |
| 0.0739 | 17.22 | 95200 | 0.1876 | 0.1395 |
| 0.0758 | 17.29 | 95600 | 0.1938 | 0.1411 |
| 0.0733 | 17.37 | 96000 | 0.1933 | 0.1410 |
| 0.077 | 17.44 | 96400 | 0.1848 | 0.1385 |
| 0.0754 | 17.51 | 96800 | 0.1876 | 0.1407 |
| 0.0746 | 17.58 | 97200 | 0.1863 | 0.1371 |
| 0.0732 | 17.66 | 97600 | 0.1927 | 0.1401 |
| 0.0746 | 17.73 | 98000 | 0.1874 | 0.1390 |
| 0.0755 | 17.8 | 98400 | 0.1853 | 0.1381 |
| 0.0724 | 17.87 | 98800 | 0.1849 | 0.1365 |
| 0.0716 | 17.94 | 99200 | 0.1848 | 0.1380 |
| 0.074 | 18.02 | 99600 | 0.1891 | 0.1362 |
| 0.0687 | 18.09 | 100000 | 0.1974 | 0.1357 |
| 0.0651 | 18.16 | 100400 | 0.1942 | 0.1353 |
| 0.0672 | 18.23 | 100800 | 0.1823 | 0.1363 |
| 0.0671 | 18.31 | 101200 | 0.1959 | 0.1357 |
| 0.0684 | 18.38 | 101600 | 0.1959 | 0.1374 |
| 0.0688 | 18.45 | 102000 | 0.1904 | 0.1353 |
| 0.0696 | 18.52 | 102400 | 0.1926 | 0.1364 |
| 0.0661 | 18.6 | 102800 | 0.1905 | 0.1351 |
| 0.0684 | 18.67 | 103200 | 0.1955 | 0.1343 |
| 0.0712 | 18.74 | 103600 | 0.1873 | 0.1353 |
| 0.0701 | 18.81 | 104000 | 0.1822 | 0.1354 |
| 0.0688 | 18.89 | 104400 | 0.1905 | 0.1373 |
| 0.0695 | 18.96 | 104800 | 0.1879 | 0.1335 |
| 0.0661 | 19.03 | 105200 | 0.2005 | 0.1351 |
| 0.0644 | 19.1 | 105600 | 0.1972 | 0.1351 |
| 0.0627 | 19.18 | 106000 | 0.1956 | 0.1340 |
| 0.0633 | 19.25 | 106400 | 0.1962 | 0.1340 |
| 0.0629 | 19.32 | 106800 | 0.1937 | 0.1342 |
| 0.0636 | 19.39 | 107200 | 0.1905 | 0.1355 |
| 0.0631 | 19.46 | 107600 | 0.1917 | 0.1326 |
| 0.0624 | 19.54 | 108000 | 0.1977 | 0.1355 |
| 0.0621 | 19.61 | 108400 | 0.1941 | 0.1345 |
| 0.0635 | 19.68 | 108800 | 0.1949 | 0.1336 |
| 0.063 | 19.75 | 109200 | 0.1919 | 0.1317 |
| 0.0636 | 19.83 | 109600 | 0.1928 | 0.1317 |
| 0.0612 | 19.9 | 110000 | 0.1923 | 0.1314 |
| 0.0636 | 19.97 | 110400 | 0.1923 | 0.1343 |
| 0.0581 | 20.04 | 110800 | 0.2036 | 0.1332 |
| 0.0573 | 20.12 | 111200 | 0.2007 | 0.1315 |
| 0.0566 | 20.19 | 111600 | 0.1974 | 0.1319 |
| 0.0589 | 20.26 | 112000 | 0.1958 | 0.1322 |
| 0.0577 | 20.33 | 112400 | 0.1946 | 0.1307 |
| 0.0587 | 20.41 | 112800 | 0.1957 | 0.1295 |
| 0.0588 | 20.48 | 113200 | 0.2013 | 0.1306 |
| 0.0594 | 20.55 | 113600 | 0.2010 | 0.1312 |
| 0.0602 | 20.62 | 114000 | 0.1993 | 0.1314 |
| 0.0583 | 20.69 | 114400 | 0.1931 | 0.1297 |
| 0.059 | 20.77 | 114800 | 0.1974 | 0.1305 |
| 0.0566 | 20.84 | 115200 | 0.1979 | 0.1294 |
| 0.0588 | 20.91 | 115600 | 0.1944 | 0.1292 |
| 0.0569 | 20.98 | 116000 | 0.1974 | 0.1309 |
| 0.0554 | 21.06 | 116400 | 0.2080 | 0.1307 |
| 0.0542 | 21.13 | 116800 | 0.2056 | 0.1301 |
| 0.0532 | 21.2 | 117200 | 0.2027 | 0.1309 |
| 0.0535 | 21.27 | 117600 | 0.1970 | 0.1287 |
| 0.0533 | 21.35 | 118000 | 0.2124 | 0.1310 |
| 0.0546 | 21.42 | 118400 | 0.2043 | 0.1300 |
| 0.0544 | 21.49 | 118800 | 0.2056 | 0.1281 |
| 0.0562 | 21.56 | 119200 | 0.1986 | 0.1273 |
| 0.0549 | 21.64 | 119600 | 0.2075 | 0.1283 |
| 0.0522 | 21.71 | 120000 | 0.2058 | 0.1278 |
| 0.052 | 21.78 | 120400 | 0.2057 | 0.1280 |
| 0.0563 | 21.85 | 120800 | 0.1966 | 0.1295 |
| 0.0546 | 21.92 | 121200 | 0.2002 | 0.1285 |
| 0.0539 | 22.0 | 121600 | 0.1996 | 0.1279 |
| 0.0504 | 22.07 | 122000 | 0.2077 | 0.1273 |
| 0.0602 | 22.14 | 122400 | 0.2055 | 0.1278 |
| 0.0503 | 22.21 | 122800 | 0.2037 | 0.1283 |
| 0.0496 | 22.29 | 123200 | 0.2109 | 0.1279 |
| 0.0523 | 22.36 | 123600 | 0.2068 | 0.1276 |
| 0.0508 | 22.43 | 124000 | 0.2051 | 0.1257 |
| 0.0505 | 22.5 | 124400 | 0.2056 | 0.1269 |
| 0.05 | 22.58 | 124800 | 0.1995 | 0.1268 |
| 0.0496 | 22.65 | 125200 | 0.2022 | 0.1290 |
| 0.0484 | 22.72 | 125600 | 0.2095 | 0.1291 |
| 0.0518 | 22.79 | 126000 | 0.2132 | 0.1271 |
| 0.0499 | 22.87 | 126400 | 0.2124 | 0.1263 |
| 0.0485 | 22.94 | 126800 | 0.2092 | 0.1252 |
| 0.0476 | 23.01 | 127200 | 0.2138 | 0.1256 |
| 0.0467 | 23.08 | 127600 | 0.2119 | 0.1256 |
| 0.048 | 23.15 | 128000 | 0.2138 | 0.1269 |
| 0.0461 | 23.23 | 128400 | 0.2036 | 0.1244 |
| 0.0467 | 23.3 | 128800 | 0.2163 | 0.1255 |
| 0.0475 | 23.37 | 129200 | 0.2180 | 0.1258 |
| 0.0468 | 23.44 | 129600 | 0.2129 | 0.1245 |
| 0.0456 | 23.52 | 130000 | 0.2122 | 0.1250 |
| 0.0458 | 23.59 | 130400 | 0.2157 | 0.1257 |
| 0.0453 | 23.66 | 130800 | 0.2088 | 0.1242 |
| 0.045 | 23.73 | 131200 | 0.2144 | 0.1247 |
| 0.0469 | 23.81 | 131600 | 0.2113 | 0.1246 |
| 0.0453 | 23.88 | 132000 | 0.2151 | 0.1234 |
| 0.0471 | 23.95 | 132400 | 0.2130 | 0.1229 |
| 0.0443 | 24.02 | 132800 | 0.2150 | 0.1225 |
| 0.0446 | 24.1 | 133200 | 0.2166 | 0.1235 |
| 0.0435 | 24.17 | 133600 | 0.2143 | 0.1222 |
| 0.0407 | 24.24 | 134000 | 0.2175 | 0.1218 |
| 0.0421 | 24.31 | 134400 | 0.2147 | 0.1227 |
| 0.0435 | 24.38 | 134800 | 0.2193 | 0.1233 |
| 0.0414 | 24.46 | 135200 | 0.2172 | 0.1225 |
| 0.0419 | 24.53 | 135600 | 0.2156 | 0.1225 |
| 0.0419 | 24.6 | 136000 | 0.2143 | 0.1235 |
| 0.0423 | 24.67 | 136400 | 0.2179 | 0.1226 |
| 0.0423 | 24.75 | 136800 | 0.2144 | 0.1221 |
| 0.0424 | 24.82 | 137200 | 0.2135 | 0.1210 |
| 0.0419 | 24.89 | 137600 | 0.2166 | 0.1218 |
| 0.0408 | 24.96 | 138000 | 0.2151 | 0.1211 |
| 0.0433 | 25.04 | 138400 | 0.2174 | 0.1214 |
| 0.0395 | 25.11 | 138800 | 0.2242 | 0.1210 |
| 0.0403 | 25.18 | 139200 | 0.2219 | 0.1215 |
| 0.0413 | 25.25 | 139600 | 0.2225 | 0.1207 |
| 0.0389 | 25.33 | 140000 | 0.2187 | 0.1202 |
| 0.0395 | 25.4 | 140400 | 0.2244 | 0.1204 |
| 0.0398 | 25.47 | 140800 | 0.2263 | 0.1199 |
| 0.0386 | 25.54 | 141200 | 0.2165 | 0.1187 |
| 0.0396 | 25.61 | 141600 | 0.2171 | 0.1187 |
| 0.0406 | 25.69 | 142000 | 0.2199 | 0.1190 |
| 0.0404 | 25.76 | 142400 | 0.2224 | 0.1190 |
| 0.0391 | 25.83 | 142800 | 0.2230 | 0.1185 |
| 0.04 | 25.9 | 143200 | 0.2208 | 0.1200 |
| 0.0396 | 25.98 | 143600 | 0.2179 | 0.1191 |
| 0.0353 | 26.05 | 144000 | 0.2285 | 0.1178 |
| 0.0368 | 26.12 | 144400 | 0.2273 | 0.1186 |
| 0.0393 | 26.19 | 144800 | 0.2247 | 0.1196 |
| 0.0368 | 26.27 | 145200 | 0.2314 | 0.1181 |
| 0.0373 | 26.34 | 145600 | 0.2215 | 0.1188 |
| 0.038 | 26.41 | 146000 | 0.2262 | 0.1180 |
| 0.0363 | 26.48 | 146400 | 0.2250 | 0.1172 |
| 0.0365 | 26.56 | 146800 | 0.2299 | 0.1174 |
| 0.0382 | 26.63 | 147200 | 0.2292 | 0.1165 |
| 0.0365 | 26.7 | 147600 | 0.2282 | 0.1165 |
| 0.0371 | 26.77 | 148000 | 0.2276 | 0.1172 |
| 0.0365 | 26.85 | 148400 | 0.2280 | 0.1173 |
| 0.0376 | 26.92 | 148800 | 0.2248 | 0.1164 |
| 0.0365 | 26.99 | 149200 | 0.2230 | 0.1158 |
| 0.0343 | 27.06 | 149600 | 0.2300 | 0.1157 |
| 0.0354 | 27.13 | 150000 | 0.2298 | 0.1166 |
| 0.0333 | 27.21 | 150400 | 0.2307 | 0.1158 |
| 0.0353 | 27.28 | 150800 | 0.2300 | 0.1157 |
| 0.036 | 27.35 | 151200 | 0.2335 | 0.1160 |
| 0.0343 | 27.42 | 151600 | 0.2324 | 0.1155 |
| 0.0361 | 27.5 | 152000 | 0.2300 | 0.1150 |
| 0.0352 | 27.57 | 152400 | 0.2279 | 0.1146 |
| 0.0353 | 27.64 | 152800 | 0.2307 | 0.1149 |
| 0.0342 | 27.71 | 153200 | 0.2315 | 0.1152 |
| 0.0345 | 27.79 | 153600 | 0.2290 | 0.1146 |
| 0.034 | 27.86 | 154000 | 0.2319 | 0.1141 |
| 0.0347 | 27.93 | 154400 | 0.2312 | 0.1144 |
| 0.0338 | 28.0 | 154800 | 0.2328 | 0.1146 |
| 0.0347 | 28.08 | 155200 | 0.2352 | 0.1151 |
| 0.033 | 28.15 | 155600 | 0.2337 | 0.1142 |
| 0.0336 | 28.22 | 156000 | 0.2345 | 0.1141 |
| 0.0337 | 28.29 | 156400 | 0.2315 | 0.1143 |
| 0.0314 | 28.36 | 156800 | 0.2353 | 0.1140 |
| 0.0333 | 28.44 | 157200 | 0.2338 | 0.1146 |
| 0.0317 | 28.51 | 157600 | 0.2345 | 0.1139 |
| 0.0326 | 28.58 | 158000 | 0.2336 | 0.1143 |
| 0.033 | 28.65 | 158400 | 0.2352 | 0.1137 |
| 0.0325 | 28.73 | 158800 | 0.2312 | 0.1130 |
| 0.0321 | 28.8 | 159200 | 0.2338 | 0.1133 |
| 0.0334 | 28.87 | 159600 | 0.2335 | 0.1130 |
| 0.0317 | 28.94 | 160000 | 0.2340 | 0.1126 |
| 0.0321 | 29.02 | 160400 | 0.2349 | 0.1126 |
| 0.032 | 29.09 | 160800 | 0.2369 | 0.1127 |
| 0.0312 | 29.16 | 161200 | 0.2363 | 0.1124 |
| 0.0303 | 29.23 | 161600 | 0.2363 | 0.1123 |
| 0.0322 | 29.31 | 162000 | 0.2354 | 0.1124 |
| 0.03 | 29.38 | 162400 | 0.2360 | 0.1122 |
| 0.0299 | 29.45 | 162800 | 0.2378 | 0.1124 |
| 0.0313 | 29.52 | 163200 | 0.2377 | 0.1120 |
| 0.0299 | 29.59 | 163600 | 0.2367 | 0.1124 |
| 0.0313 | 29.67 | 164000 | 0.2380 | 0.1120 |
| 0.031 | 29.74 | 164400 | 0.2369 | 0.1120 |
| 0.0327 | 29.81 | 164800 | 0.2358 | 0.1117 |
| 0.0316 | 29.88 | 165200 | 0.2358 | 0.1118 |
| 0.0307 | 29.96 | 165600 | 0.2362 | 0.1118 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"language": ["es"], "license": "apache-2.0", "tags": ["es", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-36-tokens-with-lm-es", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "common_voice es", "type": "common_voice", "args": "es"}, "metrics": [{"type": "wer", "value": 0.08677014042867702, "name": "Test WER"}, {"type": "cer", "value": 0.02810974186831335, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "es"}, "metrics": [{"type": "wer", "value": 31.68, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "es"}, "metrics": [{"type": "wer", "value": 34.45, "name": "Test WER"}]}]}]}
|
edugp/wav2vec2-xls-r-300m-36-tokens-with-lm-es
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-cv8-es
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2115
- eval_wer: 0.1931
- eval_runtime: 859.964
- eval_samples_per_second: 17.954
- eval_steps_per_second: 2.244
- epoch: 6.97
- step: 50000
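The card stops short of a usage example; as an illustration (not part of the original card), the checkpoint can also be run without the pipeline wrapper by calling the processor and model directly. The audio file and the use of `soundfile` below are assumptions:
```python
# Minimal manual-inference sketch; assumes soundfile is installed and a local
# 16 kHz mono file "sample_es.wav".
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("edugp/wav2vec2-xls-r-300m-cv8-es")
model = Wav2Vec2ForCTC.from_pretrained("edugp/wav2vec2-xls-r-300m-cv8-es")

speech, sample_rate = sf.read("sample_es.wav")
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```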
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-cv8-es", "results": []}]}
|
edugp/wav2vec2-xls-r-300m-cv8-es
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
## Model `RuPERTa_base_sentiment_analysis_es`
### **A fine-tuned model for Sentiment analysis in Spanish**
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
The base model is **RuPERTa-base (uncased)**, a RoBERTa model trained on an uncased version of a large Spanish corpus.
It was trained by Manuel Romero (mrm8488). [Link to base model](https://huggingface.co/mrm8488/RuPERTa-base)
## Dataset
The dataset is a collection of about 50,000 movie reviews in Spanish. The dataset is balanced and provides every review in English and in Spanish, with the label in both languages.
Sizes of datasets:
- Train dataset: 42,500
- Validation dataset: 3,750
- Test dataset: 3,750
## Hyperparameters
- epochs: 4
- train_batch_size: 32
- eval_batch_size: 8
- fp16: true
- learning_rate: 3e-05
- model_name: "mrm8488/RuPERTa-base"
- sagemaker_container_log_level: 20
- sagemaker_program: "train.py"
## Evaluation results
- Accuracy = 0.8629333333333333
- F1 Score = 0.8648790746582545
- Precision = 0.8479381443298969
- Recall = 0.8825107296137339
## Test results
- Accuracy = 0.8066666666666666
- F1 Score = 0.8057862309134743
- Precision = 0.7928307854507116
- Recall = 0.8191721132897604
## Model in action
### Usage for Sentiment Analysis
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("edumunozsala/RuPERTa_base_sentiment_analysis_es")
model = AutoModelForSequenceClassification.from_pretrained("edumunozsala/RuPERTa_base_sentiment_analysis_es")

text = "Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"

# Tokenize the review and add a batch dimension
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
# Index of the highest-scoring sentiment class
output = outputs.logits.argmax(1)
```
Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
|
{"language": "es", "license": "apache-2.0", "tags": ["sagemaker", "ruperta", "TextClassification", "SentimentAnalysis"], "datasets": ["IMDbreviews_es"], "name": "RuPERTa_base_sentiment_analysis_es", "results": [{"task": {"name": "Sentiment Analysis", "type": "sentiment-analysis"}}, {"dataset": {"name": "IMDb Reviews in Spanish", "type": "IMDbreviews_es"}}, {"metrics": [{"name": "Accuracy,", "type": "accuracy,", "value": 0.881866}, {"name": "F1 Score,", "type": "f1,", "value": 0.008272}, {"name": "Precision,", "type": "precision,", "value": 0.858605}, {"name": "Recall,", "type": "recall,", "value": 0.920062}]}], "widget": [{"text": "Se trata de una pel\u00edcula interesante, con un solido argumento y un gran interpretaci\u00f3n de su actor principal"}]}
|
edumunozsala/RuPERTa_base_sentiment_analysis_es
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"sagemaker",
"ruperta",
"TextClassification",
"SentimentAnalysis",
"es",
"dataset:IMDbreviews_es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
{}
|
edumunozsala/bertin2bertin_news_highlights
| null |
[
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
edwardcodarcea/pegasus-persian
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
edwardgowsmith/bert-base-cased-best
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
edwardgowsmith/en-finegrained-zero-shot
| null |
[
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
edwardgowsmith/pt-finegrained-few-shot
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
edwardgowsmith/pt-finegrained-one-shot
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
edwardgowsmith/pt-finegrained-zero-shot
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
edwardgowsmith/xlnet-base-cased-best
| null |
[
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
edwardgowsmith/xlnet-base-cased-train-from-dev-and-test-best
| null |
[
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
edwardgowsmith/xlnet-base-cased-train-from-dev-and-test-short-best
| null |
[
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
edwardgowsmith/xlnet-base-cased-train-from-dev-best
| null |
[
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
edwardgowsmith/xlnet-base-cased-train-from-dev-short-best
| null |
[
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
eecspan/distilbert-base-uncased-finetuned-cola
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
eeeee/DialoGPT-small-harrypotter
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
eeeee/L
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
summarization
|
transformers
|
# **Italian T5 Abstractive Summarization**
gsarti/it5-base fine-tuned on Italian text for abstractive summarization.
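No usage example is included in the card; the following is a minimal sketch (not part of the original card) using the generic `transformers` summarization pipeline, where the input article is a placeholder:
```python
# Minimal usage sketch for the Italian abstractive summarizer.
from transformers import pipeline

summarizer = pipeline("summarization", model="efederici/it5-base-summarization")

article = "Testo dell'articolo da riassumere ..."  # placeholder Italian text
print(summarizer(article, max_length=64, min_length=10)[0]["summary_text"])
```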
|
{"language": ["it"], "tags": ["summarization"]}
|
efederici/it5-base-summarization
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"it",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
summarization
|
transformers
|
# text2tags
The model has been trained on a collection of 28k news articles with tags. Its purpose is to create tags suitable for a given article. The model can also be used for information-retrieval purposes (GenQ), e.g. to fine-tune sentence-transformers for asymmetric semantic search.
If you like this project, consider supporting it with a cup of coffee! 🤖✨🌞
[](https://bmc.link/edoardofederici)
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/1/1a/Pieter_Bruegel_d._%C3%84._066.jpg" width="600"> </br>
Pieter Bruegel the Elder, The Fight Between Carnival and Lent, 1559
</p>
### Usage
Sample code with an article from IlPost:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("efederici/text2tags")
tokenizer = AutoTokenizer.from_pretrained("efederici/text2tags")
article = '''
Da bambino era preoccupato che al mondo non ci fosse più nulla da scoprire. Ma i suoi stessi studi gli avrebbero dato torto: insieme a James Watson, nel 1953 Francis Crick strutturò il primo modello di DNA, la lunga sequenza di codici che identifica ogni essere vivente, rendendolo unico e diverso da tutti gli altri.
La scoperta gli valse il Nobel per la Medicina. È uscita in queste settimane per Codice la sua biografia, Francis Crick — Lo scopritore del DNA, scritta da Matt Ridley, che racconta vita e scienza dell'uomo che capì perché siamo fatti così.
'''
def tag(text: str):
""" Generates tags from given text """
text = text.strip().replace('\n', '')
text = 'summarize: ' + text
tokenized_text = tokenizer.encode(text, return_tensors="pt")
tags_ids = model.generate(tokenized_text,
num_beams=4,
no_repeat_ngram_size=2,
max_length=20,
early_stopping=True)
output = tokenizer.decode(tags_ids[0], skip_special_tokens=True)
return output.split(', ')
tags = tag(article)
print(tags)
```
## Longer documents
Assuming paragraphs are divided by: '\n\n'.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import itertools
import re
import torch  # needed for torch.no_grad() below
model = AutoModelForSeq2SeqLM.from_pretrained("efederici/text2tags")
tokenizer = AutoTokenizer.from_pretrained("efederici/text2tags")
article = '''
Da bambino era preoccupato che al mondo non ci fosse più nulla da scoprire. Ma i suoi stessi studi gli avrebbero dato torto: insieme a James Watson, nel 1953 Francis Crick strutturò il primo modello di DNA, la lunga sequenza di codici che identifica ogni essere vivente, rendendolo unico e diverso da tutti gli altri.
La scoperta gli valse il Nobel per la Medicina. È uscita in queste settimane per Codice la sua biografia, Francis Crick — Lo scopritore del DNA, scritta da Matt Ridley, che racconta vita e scienza dell'uomo che capì perché siamo fatti così.
'''
def words(text):
input_str = text
output_str = re.sub('[^A-Za-z0-9]+', ' ', input_str)
return output_str.split()
def is_subset(text1, text2):
return all(tag in words(text1.lower()) for tag in text2.split())
def cleaning(text, tags):
return [tag for tag in tags if is_subset(text, tag)]
def get_texts(text, max_len):
texts = list(filter(lambda x : x != '', text.split('\n\n')))
lengths = [len(tokenizer.encode(paragraph)) for paragraph in texts]
output = []
for i, par in enumerate(texts):
index = len(output)
if index > 0 and lengths[i] + len(tokenizer.encode(output[index-1])) <= max_len:
output[index-1] = "".join(output[index-1] + par)
else:
output.append(par)
return output
def get_tags(text, generate_kwargs):
input_text = 'summarize: ' + text.strip().replace('\n', ' ')
tokenized_text = tokenizer.encode(input_text, return_tensors="pt")
with torch.no_grad():
tags_ids = model.generate(tokenized_text, **generate_kwargs)
output = []
for tags in tags_ids:
cleaned = cleaning(
text,
list(set(tokenizer.decode(tags, skip_special_tokens=True).split(', ')))
)
output.append(cleaned)
return list(set(itertools.chain(*output)))
def tag(text, max_len, generate_kwargs):
texts = get_texts(text, max_len)
all_tags = [get_tags(text, generate_kwargs) for text in texts]
flatten_tags = itertools.chain(*all_tags)
return list(set(flatten_tags))
params = {
"min_length": 0,
"max_length": 30,
"no_repeat_ngram_size": 2,
"num_beams": 4,
"early_stopping": True,
"num_return_sequences": 4,
}
tags = tag(article, 512, params)
print(tags)
```
### Overview
- Model: T5 ([it5-small](https://huggingface.co/gsarti/it5-small))
- Language: Italian
- Downstream-task: Summarization (for topic tagging)
- Training data: Custom dataset
- Code: See example
- Infrastructure: 1x T4
|
{"language": ["it"], "tags": ["summarization", "tags", "Italian"], "inference": {"parameters": {"do_sample": false, "min_length": 0}}, "widget": [{"text": "Nel 1924 la scrittrice Virginia Woolf affront\u00f2 nel saggio Mr Bennett e Mrs Brown il tema della costruzione e della struttura del romanzo, genere all\u2019epoca considerato in declino a causa dell\u2019incapacit\u00e0 degli autori e delle autrici di creare personaggi realistici. Woolf raccont\u00f2 di aver a lungo osservato, durante un viaggio in treno da Richmond a Waterloo, una signora di oltre 60 anni seduta davanti a lei, chiamata signora Brown. Ne rimase affascinata, per la capacit\u00e0 di quella figura di evocare storie possibili e fare da spunto per un romanzo: \u00abtutti i romanzi cominciano con una vecchia signora seduta in un angolo\u00bb. Immagini come quella della signora Brown, secondo Woolf, \u00abcostringono qualcuno a cominciare, quasi automaticamente, a scrivere un romanzo\u00bb. Nel saggio Woolf prov\u00f2 ad analizzare le tecniche narrative utilizzate da tre noti scrittori inglesi dell\u2019epoca \u2013 H. G. Wells, John Galsworthy e Arnold Bennett \u2013 per comprendere perch\u00e9 le convenzioni stilistiche dell\u2019Ottocento risultassero ormai inadatte alla descrizione dei \u00abcaratteri\u00bb umani degli anni Venti. In un lungo e commentato articolo del New Yorker, la critica letteraria e giornalista Parul Sehgal, a lungo caporedattrice dell\u2019inserto culturale del New York Times dedicato alle recensioni di libri, ha provato a compiere un esercizio simile a quello di Woolf, chiedendosi come gli autori e le autrici di oggi tratterebbero la signora Brown. E ha immaginato che probabilmente quella figura non eserciterebbe su di loro una curiosit\u00e0 e un fascino legati alla sua incompletezza e al suo aspetto misterioso, ma con ogni probabilit\u00e0 trasmetterebbe loro l\u2019indistinta e generica impressione di aver sub\u00ecto un trauma.", "example_title": "Virginia Woolf"}, {"text": "I lavori di ristrutturazione dell\u2019interno della cattedrale di Notre-Dame a Parigi, seguiti al grande incendio che nel 2019 bruci\u00f2 la guglia e buona parte del tetto, sono da settimane al centro di un acceso dibattito sui giornali francesi per via di alcune proposte di rinnovamento degli interni che hanno suscitato critiche e allarmi tra esperti e opinionisti conservatori. Il progetto ha ricevuto una prima approvazione dalla commissione nazionale competente, ma dovr\u00e0 ancora essere soggetto a varie revisioni e ratifiche che coinvolgeranno tecnici e politici locali e nazionali, fino al presidente Emmanuel Macron. Ma le modifiche previste al sistema di viabilit\u00e0 per i visitatori, all\u2019illuminazione, ai posti a sedere e alle opere d\u2019arte che si vorrebbero esporre hanno portato alcuni critici a parlare di \u00abparco a tema woke\u00bb e \u00abDisneyland del politicamente corretto\u00bb.", "example_title": "Notre-Dame"}]}
|
efederici/text2tags
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"tags",
"Italian",
"it",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
{}
|
efrabce/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
token-classification
|
transformers
|
{}
|
egoitz/roberta-timex-semeval
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
egonzalez/classifier
| null |
[
"transformers",
"tf",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
egonzalez/model
| null |
[
"transformers",
"tf",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
audio-classification
|
transformers
|
# Speech Emotion Recognition By Fine-Tuning Wav2Vec 2.0
The model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) for a Speech Emotion Recognition (SER) task.
The dataset used to fine-tune the original pre-trained model is the [RAVDESS dataset](https://zenodo.org/record/1188976#.YO6yI-gzaUk). This dataset provides 1440 samples of recordings of actors performing 8 different emotions in English, which are:
```python
emotions = ['angry', 'calm', 'disgust', 'fearful', 'happy', 'neutral', 'sad', 'surprised']
```
It achieves the following results on the evaluation set:
- Loss: 0.5023
- Accuracy: 0.8223
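As an illustration (not part of the original card), the checkpoint can be queried through the generic `transformers` audio-classification pipeline; the file name below is a placeholder:
```python
# Minimal inference sketch; assumes a local 16 kHz speech file "speech.wav".
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition",
)

# Returns a list of {"label": ..., "score": ...} entries over the 8 emotions.
print(classifier("speech.wav"))
```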
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0752 | 0.21 | 30 | 2.0505 | 0.1359 |
| 2.0119 | 0.42 | 60 | 1.9340 | 0.2474 |
| 1.8073 | 0.63 | 90 | 1.5169 | 0.3902 |
| 1.5418 | 0.84 | 120 | 1.2373 | 0.5610 |
| 1.1432 | 1.05 | 150 | 1.1579 | 0.5610 |
| 0.9645 | 1.26 | 180 | 0.9610 | 0.6167 |
| 0.8811 | 1.47 | 210 | 0.8063 | 0.7178 |
| 0.8756 | 1.68 | 240 | 0.7379 | 0.7352 |
| 0.8208 | 1.89 | 270 | 0.6839 | 0.7596 |
| 0.7118 | 2.1 | 300 | 0.6664 | 0.7735 |
| 0.4261 | 2.31 | 330 | 0.6058 | 0.8014 |
| 0.4394 | 2.52 | 360 | 0.5754 | 0.8223 |
| 0.4581 | 2.72 | 390 | 0.4719 | 0.8467 |
| 0.3967 | 2.93 | 420 | 0.5023 | 0.8223 |
## Contact
For any questions, contact me on [Twitter](https://twitter.com/ehcalabres) (GitHub repo coming soon).
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model_index": {"name": "wav2vec2-lg-xlsr-en-speech-emotion-recognition"}}
|
ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-ehddnr-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3587
- F1: 0.8721
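For illustration (not part of the original card), the model can be used through the generic `transformers` text-classification pipeline; the Korean headline below is a placeholder and the returned label follows the model's own label mapping:
```python
# Minimal usage sketch for the KLUE-YNAT topic classifier.
from transformers import pipeline

classifier = pipeline("text-classification", model="ehddnr301/bert-base-ehddnr-ynat")
print(classifier("올림픽 축구 대표팀, 조별리그 첫 경기서 승리"))  # placeholder headline
```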
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.4398 | 0.8548 |
| No log | 2.0 | 358 | 0.3587 | 0.8721 |
| 0.3859 | 3.0 | 537 | 0.3639 | 0.8707 |
| 0.3859 | 4.0 | 716 | 0.3592 | 0.8692 |
| 0.3859 | 5.0 | 895 | 0.3646 | 0.8717 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["klue"], "metrics": ["f1"], "model_index": [{"name": "bert-base-ehddnr-ynat", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "klue", "type": "klue", "args": "ynat"}, "metric": {"name": "F1", "type": "f1", "value": 0.8720568553403009}}]}]}
|
ehddnr301/bert-base-ehddnr-ynat
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
# ehdwns1516/bart_finetuned_xsum
* This model has been fine-tuned on the [xsum dataset](https://huggingface.co/datasets/xsum).
* Input the text you want to summarize.
Text summarizer DEMO: [Ainize DEMO](https://main-text-summarizer-ehdwns1516.endpoint.ainize.ai/)
Text summarizer API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/text_summarizer)
## Overview
Language model: [facebook/bart-large](https://huggingface.co/facebook/bart-large)
Language: English
Training data: [xsum dataset](https://huggingface.co/datasets/xsum)
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/bart_finetuned_xsum-notebook)
## Usage
## In Transformers
```
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/bart_finetuned_xsum")
model = AutoModelForSeq2SeqLM.from_pretrained("ehdwns1516/bart_finetuned_xsum")
summarizer = pipeline(
"summarization",
model="ehdwns1516/bart_finetuned_xsum",
tokenizer=tokenizer
)
context = "your context"
result = dict()
result[0] = summarizer(context)[0]
```
|
{}
|
ehdwns1516/bart_finetuned_xsum
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
multiple-choice
|
transformers
|
# ehdwns1516/bert-base-uncased_SWAG
* This model has been fine-tuned on the [SWAG dataset](https://huggingface.co/datasets/swag).
* Sentence Inference Multiple Choice DEMO: [Ainize DEMO](https://main-sentence-inference-multiple-choice-ehdwns1516.endpoint.ainize.ai/)
* Sentence Inference Multiple Choice API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/sentence_inference_multiple_choice)
## Overview
Language model: [bert-base-uncased](https://huggingface.co/bert-base-uncased)
Language: English
Training data: [SWAG dataset](https://huggingface.co/datasets/swag)
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/Multiple_choice_SWAG_finetunning)
## Usage
## In Transformers
```
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/bert-base-uncased_SWAG")
model = AutoModelForMultipleChoice.from_pretrained("ehdwns1516/bert-base-uncased_SWAG")
def run_model(candicates_count, context: str, candicates: list[str]):
assert len(candicates) == candicates_count, "you need " + candicates_count + " candidates"
choices_inputs = []
for c in candicates:
text_a = "" # empty context
text_b = context + " " + c
inputs = tokenizer(
text_a,
text_b,
add_special_tokens=True,
max_length=128,
padding="max_length",
truncation=True,
return_overflowing_tokens=True,
)
choices_inputs.append(inputs)
input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs])
output = model(input_ids=input_ids)
return {"result": candicates[torch.argmax(output.logits).item()]}
items = list()
count = 4 # candicates count
context = "your context"
for i in range(int(count)):
items.append("sentence")
result = run_model(count, context, items)
```
|
{}
|
ehdwns1516/bert-base-uncased_SWAG
| null |
[
"transformers",
"pytorch",
"bert",
"multiple-choice",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# gpt2_review_star1
* This model has been trained on the review_body field of 1-star reviews from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).
* Input the text from which you want to generate a review.
* If the context is longer than 1200 characters, it may be cut in the middle and the result may not come out well.
review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/)
review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator)
## Model links for each star rating (1 to 5)
* [ehdwns1516/gpt2_review_star1](https://huggingface.co/ehdwns1516/gpt2_review_star1)
* [ehdwns1516/gpt2_review_star2](https://huggingface.co/ehdwns1516/gpt2_review_star2)
* [ehdwns1516/gpt2_review_star3](https://huggingface.co/ehdwns1516/gpt2_review_star3)
* [ehdwns1516/gpt2_review_star4](https://huggingface.co/ehdwns1516/gpt2_review_star4)
* [ehdwns1516/gpt2_review_star5](https://huggingface.co/ehdwns1516/gpt2_review_star5)
## Overview
Language model: [gpt2](https://huggingface.co/gpt2)
Language: English
Training data: the review_body field of 1-star reviews from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note)
## Usage
## In Transformers
```
from transformers import pipeline, AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt2_review_star1")
model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt2_review_star1")
generator = pipeline(
"text-generation",
model="ehdwns1516/gpt2_review_star1",
tokenizer=tokenizer
)
context = "your context"
result = dict()
result[0] = generator(context)[0]
```
|
{}
|
ehdwns1516/gpt2_review_star1
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# gpt2_review_star2
* This model has been trained on the review_body field of 2-star reviews from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).
* Input the text from which you want to generate a review.
* If the context is longer than 1200 characters, it may be cut in the middle and the result may not come out well.
review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/)
review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator)
## Model links for each star rating (1 to 5)
* [ehdwns1516/gpt2_review_star1](https://huggingface.co/ehdwns1516/gpt2_review_star1)
* [ehdwns1516/gpt2_review_star2](https://huggingface.co/ehdwns1516/gpt2_review_star2)
* [ehdwns1516/gpt2_review_star3](https://huggingface.co/ehdwns1516/gpt2_review_star3)
* [ehdwns1516/gpt2_review_star4](https://huggingface.co/ehdwns1516/gpt2_review_star4)
* [ehdwns1516/gpt2_review_star5](https://huggingface.co/ehdwns1516/gpt2_review_star5)
## Overview
Language model: [gpt2](https://huggingface.co/gpt2)
Language: English
Training data: the review_body field of 2-star reviews from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note)
## Usage
## In Transformers
```
from transformers import pipeline, AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt2_review_star2")
model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt2_review_star2")
generator = pipeline(
"text-generation",
model="ehdwns1516/gpt2_review_star2",
tokenizer=tokenizer
)
context = "your context"
result = dict()
result[0] = generator(context)[0]
```
|
{}
|
ehdwns1516/gpt2_review_star2
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# gpt2_review_star3
* This model has been trained on the review_body field of 3-star reviews from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).
* Input the text from which you want to generate a review.
* If the context is longer than 1200 characters, it may be cut in the middle and the result may not come out well.
review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/)
review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator)
## Model links for each star rating (1 to 5)
* [ehdwns1516/gpt2_review_star1](https://huggingface.co/ehdwns1516/gpt2_review_star1)
* [ehdwns1516/gpt2_review_star2](https://huggingface.co/ehdwns1516/gpt2_review_star2)
* [ehdwns1516/gpt2_review_star3](https://huggingface.co/ehdwns1516/gpt2_review_star3)
* [ehdwns1516/gpt2_review_star4](https://huggingface.co/ehdwns1516/gpt2_review_star4)
* [ehdwns1516/gpt2_review_star5](https://huggingface.co/ehdwns1516/gpt2_review_star5)
## Overview
Language model: [gpt2](https://huggingface.co/gpt2)
Language: English
Training data: the review_body field of 3-star reviews from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note)
## Usage
## In Transformers
```
from transformers import pipeline, AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt2_review_star3")
model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt2_review_star3")
generator = pipeline(
"text-generation",
model="ehdwns1516/gpt2_review_star3",
tokenizer=tokenizer
)
context = "your context"
result = dict()
result[0] = generator(context)[0]
```
|
{}
|
ehdwns1516/gpt2_review_star3
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# gpt2_review_star4
* This model has been trained on the review_body field of 4-star reviews from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).
* Input the text from which you want to generate a review.
* If the context is longer than 1200 characters, it may be cut in the middle and the result may not come out well.
review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/)
review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator)
## Model links for each star rating (1 to 5)
* [ehdwns1516/gpt2_review_star1](https://huggingface.co/ehdwns1516/gpt2_review_star1)
* [ehdwns1516/gpt2_review_star2](https://huggingface.co/ehdwns1516/gpt2_review_star2)
* [ehdwns1516/gpt2_review_star3](https://huggingface.co/ehdwns1516/gpt2_review_star3)
* [ehdwns1516/gpt2_review_star4](https://huggingface.co/ehdwns1516/gpt2_review_star4)
* [ehdwns1516/gpt2_review_star5](https://huggingface.co/ehdwns1516/gpt2_review_star5)
## Overview
Language model: [gpt2](https://huggingface.co/gpt2)
Language: English
Training data: the review_body field of 4-star reviews from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note)
## Usage
## In Transformers
```
from transformers import pipeline, AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt2_review_star4")
model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt2_review_star4")
generator = pipeline(
"text-generation",
model="ehdwns1516/gpt2_review_star4",
tokenizer=tokenizer
)
context = "your context"
result = dict()
result[0] = generator(context)[0]
```
|
{}
|
ehdwns1516/gpt2_review_star4
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# gpt2_review_star5
* This model has been trained on the review_body field of 5-star reviews from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).
* Input the text from which you want to generate a review.
* If the context is longer than 1200 characters, it may be cut in the middle and the result may not come out well.
review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/)
review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator)
## Model links for each star rating (1 to 5)
* [ehdwns1516/gpt2_review_star1](https://huggingface.co/ehdwns1516/gpt2_review_star1)
* [ehdwns1516/gpt2_review_star2](https://huggingface.co/ehdwns1516/gpt2_review_star2)
* [ehdwns1516/gpt2_review_star3](https://huggingface.co/ehdwns1516/gpt2_review_star3)
* [ehdwns1516/gpt2_review_star4](https://huggingface.co/ehdwns1516/gpt2_review_star4)
* [ehdwns1516/gpt2_review_star5](https://huggingface.co/ehdwns1516/gpt2_review_star5)
## Overview
Language model: [gpt2](https://huggingface.co/gpt2)
Language: English
Training data: review_body field of 5-star reviews from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note)
## Usage
## In Transformers
```
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt2_review_star5")
model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt2_review_star5")
generator = pipeline(
"text-generation",
model="ehdwns1516/gpt2_review_star5",
tokenizer=tokenizer
)
context = "your context"
result = dict()
result[0] = generator(context)[0]
```
|
{}
|
ehdwns1516/gpt2_review_star5
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# ehdwns1516/gpt3-kor-based_gpt2_review_SR1
* This model was trained on Korean 1-star reviews from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment).
* Input the text you want the generated review to continue from.
* If the context is longer than 1,200 characters, it may be truncated in the middle and the output quality may suffer.
review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/)
review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator)
## Model links for each 1 to 5 star
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR1](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR1)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR2](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR2)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR3](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR3)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR4](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR4)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR5](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR5)
## Overview
Language model: [gpt3-kor-small_based_on_gpt2](https://huggingface.co/kykim/gpt3-kor-small_based_on_gpt2)
Language: Korean
Training data: review_body field of 1-star reviews from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment).
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note)
## Usage
## In Transformers
```
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR1")
model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR1")
generator = pipeline(
"text-generation",
model="ehdwns1516/gpt3-kor-based_gpt2_review_SR1",
tokenizer=tokenizer
)
context = "your context"
result = dict()
result[0] = generator(context)[0]
```
|
{}
|
ehdwns1516/gpt3-kor-based_gpt2_review_SR1
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# ehdwns1516/gpt3-kor-based_gpt2_review_SR2
* This model was trained on Korean 2-star reviews from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment).
* Input the text you want the generated review to continue from.
* If the context is longer than 1,200 characters, it may be truncated in the middle and the output quality may suffer.
review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/)
review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator)
## Model links for each 1 to 5 star
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR1](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR1)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR2](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR2)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR3](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR3)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR4](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR4)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR5](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR5)
## Overview
Language model: [gpt3-kor-small_based_on_gpt2](https://huggingface.co/kykim/gpt3-kor-small_based_on_gpt2)
Language: Korean
Training data: review_body field of 2-star reviews from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment).
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note)
## Usage
## In Transformers
```
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR2")
model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR2")
generator = pipeline(
"text-generation",
model="ehdwns1516/gpt3-kor-based_gpt2_review_SR2",
tokenizer=tokenizer
)
context = "your context"
result = dict()
result[0] = generator(context)[0]
```
|
{}
|
ehdwns1516/gpt3-kor-based_gpt2_review_SR2
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# ehdwns1516/gpt3-kor-based_gpt2_review_SR3
* This model was trained on Korean 3-star reviews from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment).
* Input the text you want the generated review to continue from.
* If the context is longer than 1,200 characters, it may be truncated in the middle and the output quality may suffer.
review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/)
review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator)
## Model links for each 1 to 5 star
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR1](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR1)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR2](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR2)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR3](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR3)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR4](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR4)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR5](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR5)
## Overview
Language model: [gpt3-kor-small_based_on_gpt2](https://huggingface.co/kykim/gpt3-kor-small_based_on_gpt2)
Language: Korean
Training data: review_body field of 3-star reviews from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment).
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note)
## Usage
## In Transformers
```
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR3")
model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR3")
generator = pipeline(
"text-generation",
model="ehdwns1516/gpt3-kor-based_gpt2_review_SR3",
tokenizer=tokenizer
)
context = "your context"
result = dict()
result[0] = generator(context)[0]
```
|
{}
|
ehdwns1516/gpt3-kor-based_gpt2_review_SR3
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# ehdwns1516/gpt3-kor-based_gpt2_review_SR4
* This model was trained on Korean 4-star reviews from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment).
* Input the text you want the generated review to continue from.
* If the context is longer than 1,200 characters, it may be truncated in the middle and the output quality may suffer.
review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/)
review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator)
## Model links for each 1 to 5 star
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR1](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR1)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR2](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR2)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR3](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR3)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR4](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR4)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR5](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR5)
## Overview
Language model: [gpt3-kor-small_based_on_gpt2](https://huggingface.co/kykim/gpt3-kor-small_based_on_gpt2)
Language: Korean
Training data: review_body field of 4-star reviews from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment).
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note)
## Usage
## In Transformers
```
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR4")
model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR4")
generator = pipeline(
"text-generation",
model="ehdwns1516/gpt3-kor-based_gpt2_review_SR4",
tokenizer=tokenizer
)
context = "your context"
result = dict()
result[0] = generator(context)[0]
```
|
{}
|
ehdwns1516/gpt3-kor-based_gpt2_review_SR4
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# ehdwns1516/gpt3-kor-based_gpt2_review_SR5
* This model was trained on Korean 5-star reviews from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment).
* Input the text you want the generated review to continue from.
* If the context is longer than 1,200 characters, it may be truncated in the middle and the output quality may suffer.
review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/)
review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator)
## Model links for each 1 to 5 star
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR1](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR1)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR2](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR2)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR3](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR3)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR4](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR4)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR5](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR5)
## Overview
Language model: [gpt3-kor-small_based_on_gpt2](https://huggingface.co/kykim/gpt3-kor-small_based_on_gpt2)
Language: Korean
Training data: review_body field of 5-star reviews from the [Naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment).
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note)
## Usage
## In Transformers
```
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR5")
model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR5")
generator = pipeline(
"text-generation",
model="ehdwns1516/gpt3-kor-based_gpt2_review_SR5",
tokenizer=tokenizer
)
context = "your context"
result = dict()
result[0] = generator(context)[0]
```
|
{}
|
ehdwns1516/gpt3-kor-based_gpt2_review_SR5
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# klue-roberta-base-kornli
* This model was trained on a Korean dataset.
* Input a premise sentence and a hypothesis sentence.
* You can use English input, but accuracy will likely be poor.
* If the input is longer than 1,200 characters, it may be truncated in the middle and the output quality may suffer.
klue-roberta-base-kornli DEMO: [Ainize DEMO](https://main-klue-roberta-base-kornli-ehdwns1516.endpoint.ainize.ai/)
klue-roberta-base-kornli API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/klue-roberta-base_kornli)
## Overview
Language model: [klue/roberta-base](https://huggingface.co/klue/roberta-base)
Language: Korean
Training data: [kakaobrain KorNLI](https://github.com/kakaobrain/KorNLUDatasets/tree/master/KorNLI)
Eval data: [kakaobrain KorNLI](https://github.com/kakaobrain/KorNLUDatasets/tree/master/KorNLI)
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/klue-roberta-base_finetunning_ex)
## Usage
## In Transformers
```
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/klue-roberta-base-kornli")
classifier = pipeline(
"text-classification",
model="ehdwns1516/klue-roberta-base-kornli",
return_all_scores=True,
)
premise = "your premise"
hypothesis = "your hypothesis"
result = dict()
result[0] = classifier(premise + tokenizer.sep_token + hypothesis)[0]
```
|
{}
|
ehdwns1516/klue-roberta-base-kornli
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# klue-roberta-base-sae
* This model was trained on a Korean dataset.
* Input the sentence whose intent you want to identify.
* You can use English input, but accuracy will likely be poor.
klue-roberta-base-sae DEMO: [Ainize DEMO](https://main-klue-roberta-base-kornli-ehdwns1516.endpoint.ainize.ai/)
klue-roberta-base-sae API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/KLUE-RoBERTa-base_sae)
## Overview
Language model: [klue/roberta-base](https://huggingface.co/klue/roberta-base)
Language: Korean
Training data: [kor_sae](https://huggingface.co/datasets/kor_sae)
Eval data: [kor_sae](https://huggingface.co/datasets/kor_sae)
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/KLUE-RoBERTa-base_sae_notebook)
## Usage
## In Transformers
```
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/klue-roberta-base-sae")
classifier = pipeline(
"text-classification",
    model="ehdwns1516/klue-roberta-base-sae",
return_all_scores=True,
)
context = "sentence what you want to grasp intent"
result = dict()
result[0] = classifier(context)[0]
```
|
{}
|
ehdwns1516/klue-roberta-base_sae
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
{}
|
eheitor/wav2vec2-base-xlsr53-ser_demo
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
eheja/model_name
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
eightbladedsword/imdb-model
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
eihe/distilbert-base-uncased-finetuned-ner
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
# Load the Model
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch
# start and end tokens for generation
START_TKN = "<|startoftext|>"
END_TKN = "<|endoftext|>"
# fine tuned on onion dataset w/ distilgpt2
tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = GPT2LMHeadModel.from_pretrained("distilgpt2")
# use gpu if available
device = "cpu"
if torch.cuda.is_available():
device = "cuda"
model = model.to(device)
# get 70th epoch (decent results)
epoch = 70
modelpath = f'distilgpt2_onion_{epoch}.pt'
# load the fine-tuned weights (map_location keeps this working on CPU-only machines)
model.load_state_dict(torch.load(modelpath, map_location=device))
model.eval()
```
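# Generate Text
The original card stops after loading the checkpoint. The sketch below is an illustrative continuation, not part of the original card: the prompt format (starting from `START_TKN`) and the sampling settings are assumptions.
```python
# Illustrative generation sketch; prompt format and sampling settings are assumptions.
prompt = START_TKN + "Local Man"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(device)
with torch.no_grad():
    output = model.generate(
        input_ids,
        do_sample=True,
        max_length=64,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```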
|
{}
|
ejjaffe/distilgpt2-onion
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
ekinataangin/gptneo
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
ekkasilina/big_baseline
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
ekkasilina/small_baseline
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
eklrivera/DialoGPT-small-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
elad/test
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
token-classification
|
transformers
|
[DistilBERT base cased](https://huggingface.co/distilbert-base-cased), fine-tuned for NER using the [conll03 english dataset](https://huggingface.co/datasets/conll2003). Note that this model is sensitive to capital letters — "english" is different than "English". For the case insensitive version, please use [elastic/distilbert-base-uncased-finetuned-conll03-english](https://huggingface.co/elastic/distilbert-base-uncased-finetuned-conll03-english).
## Versions
- Transformers version: 4.3.1
- Datasets version: 1.3.0
## Training
```
$ run_ner.py \
--model_name_or_path distilbert-base-cased \
--label_all_tokens True \
--return_entity_level_metrics True \
--dataset_name conll2003 \
--output_dir /tmp/distilbert-base-cased-finetuned-conll03-english \
--do_train \
--do_eval
```
After training, we update the labels to match the NER-specific labels of the
[conll2003](https://raw.githubusercontent.com/huggingface/datasets/1.3.0/datasets/conll2003/dataset_infos.json) dataset.
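## Usage
The card does not include an inference example; the snippet below is an illustrative sketch, not part of the original card. It assumes a recent transformers release, where `aggregation_strategy` merges word pieces into whole entity spans.
```python
from transformers import pipeline

# Illustrative inference sketch; the input sentence is an arbitrary example.
ner = pipeline(
    "token-classification",
    model="elastic/distilbert-base-cased-finetuned-conll03-english",
    aggregation_strategy="simple",  # group sub-word pieces into entity spans
)
print(ner("Elastic is a company based in Mountain View, California."))
```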
|
{"language": "en", "license": "apache-2.0", "datasets": ["conll2003"], "model-index": [{"name": "elastic/distilbert-base-cased-finetuned-conll03-english", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "config": "conll2003", "split": "validation"}, "metrics": [{"type": "accuracy", "value": 0.9834432212868665, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTZmZTJlMzUzOTAzZjg3N2UxNmMxMjQ2M2FhZTM4MDdkYzYyYTYyNjM1YjQ0M2Y4ZmIyMzkwMmY5YjZjZGVhYSIsInZlcnNpb24iOjF9.QaSLUR7AtQmE9F-h6EBueF6INQgdKwUUzS3bNvRu44rhNDY1KAJJkmDC8FeAIVMnlOSvPKvr5pOvJ59W1zckCw"}, {"type": "precision", "value": 0.9857564461012737, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDVmNmNmNWIwNTI0Yzc0YTI1NTk2NDM4YjY4NzY0ODQ4NzQ5MDQxMzYyYWM4YzUwNmYxZWQ1NTU2YTZiM2U2MCIsInZlcnNpb24iOjF9.ui_o64VBS_oC89VeQTx_B-nUUM0ZaivFyb6wNrYZcopJXvYgzptLCkARdBKdBajFjjupdhtq1VCdGbJ3yaXgBA"}, {"type": "recall", "value": 0.9882123948925569, "name": "Recall", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODg4Mzg1NTY3NjU4ZGQxOGVhMzQxNWU0ZTYxNWM2ZTg1OGZlM2U5ZGMxYTA2NzdiZjM5YWFkZjkzOGYwYTlkMyIsInZlcnNpb24iOjF9.8jHJv_5ZQp_CX3-k8-C3c5Hs4zp7bJPRTeE5SFrNgeX-FdhPv_8bHBM_DqOD2P_nkAzQ_PtEFfEokQpouZFJCw"}, {"type": "f1", "value": 0.9869828926905132, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzZlOGRjMDllYWY5MjdhODk2MmNmMDk5MDQyZGYzZDYwZTE1ZDY2MDNlMzAzN2JlMmE5Y2M3ZTNkOWE2MDBjYyIsInZlcnNpb24iOjF9.VKwzPQFSbrnUZ25gkKUZvYO_xFZcaTOSkDcN-YCxksF5DRnKudKI2HmvO8l8GCsQTCoD4DiSTKzghzLMxB1jCg"}, {"type": "loss", "value": 0.07748260349035263, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmVmOTQ2MWI2MzZhY2U2ODQ3YjA0ZWVjYzU1NGRlMTczZDI0NmM0OWI4YmIzMmEyYjlmNDIwYmRiODM4MWM0YiIsInZlcnNpb24iOjF9.0Prq087l2Xfh-ceS99zzUDcKM4Vr4CLM2rF1F1Fqd2fj9MOhVZEXF4JACVn0fWAFqfZIPS2GD8sSwfNYaXkZAA"}]}]}]}
|
elastic/distilbert-base-cased-finetuned-conll03-english
| null |
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"token-classification",
"en",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
[DistilBERT base uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned for NER using the [conll03 english dataset](https://huggingface.co/datasets/conll2003). Note that this model is **not** sensitive to capital letters — "english" is the same as "English". For the case sensitive version, please use [elastic/distilbert-base-cased-finetuned-conll03-english](https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english).
## Versions
- Transformers version: 4.3.1
- Datasets version: 1.3.0
## Training
```
$ run_ner.py \
--model_name_or_path distilbert-base-uncased \
--label_all_tokens True \
--return_entity_level_metrics True \
--dataset_name conll2003 \
--output_dir /tmp/distilbert-base-uncased-finetuned-conll03-english \
--do_train \
--do_eval
```
After training, we update the labels to match the NER-specific labels of the
[conll2003](https://raw.githubusercontent.com/huggingface/datasets/1.3.0/datasets/conll2003/dataset_infos.json) dataset.
|
{"language": "en", "license": "apache-2.0", "datasets": ["conll2003"], "model-index": [{"name": "elastic/distilbert-base-uncased-finetuned-conll03-english", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "config": "conll2003", "split": "validation"}, "metrics": [{"type": "accuracy", "value": 0.9854480753649896, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmM0NzNhYTM2NGU0YjMwZDMwYTdhYjY3MDgwMTYxNWRjYzQ1NmE0OGEwOTcxMGY5ZTU1ZTQ3OTM5OGZkYjE2NCIsInZlcnNpb24iOjF9.v8Mk62C40vRWQ78BSCtGyphKKHd6q-Ir6sVbSjNjG37j9oiuQN3CDmk9XItmjvCwyKwMEr2NqUXaSyIfUSpBDg"}, {"type": "precision", "value": 0.9880928983228512, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWIzYTg2OTFjY2FkNWY4MzUyN2ZjOGFlYWNhODYzODVhYjQwZTQ3YzdhMzMxY2I4N2U0YWI1YWVlYjIxMDdkNCIsInZlcnNpb24iOjF9.A50vF5qWgZjxABjL9tc0vssFxYHYhBQ__hLXcvuoZoK8c2TyuODHcM0LqGLeRJF8kcPaLx1hcNk3QMdOETVQBA"}, {"type": "recall", "value": 0.9895677847945542, "name": "Recall", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzBiZDg1YmM2NzFkNjQ3MzUzN2QzZDAwNzUwMmM3MzU1ODBlZWJjYmI1YzIxM2YxMzMzNDUxYjkyYzQzMDQ3ZSIsInZlcnNpb24iOjF9.aZEC0c93WWn3YoPkjhe2W1-OND9U2qWzesL9zioNuhstbj7ftANERs9dUAaJIlNCb7NS28q3x9c2s6wGLwovCw"}, {"type": "f1", "value": 0.9888297915932504, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmNkNzVhODJjMjExOTg4ZjQwMWM4NGIxZGNiZTZlMDk5MzNmMjIwM2ZiNzdiZGIxYmNmNmJjMGVkYTlkN2FlNiIsInZlcnNpb24iOjF9.b6qmLHkHu-z5V1wC2yQMyIcdeReptK7iycIMyGOchVy6WyG4flNbxa5f2W05INdnJwX-PHavB_yaY0oULdKWDQ"}, {"type": "loss", "value": 0.06707527488470078, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDRlMWE2OTQxNWI5MjY0NzJjNjJkYjg1OWE1MjE2MjI4N2YzOWFhMDI3OTE0ZmFhM2M0ZWU0NTUxNTBiYjhiZiIsInZlcnNpb24iOjF9.6JhhyfhXxi76GRLUNqekU_SRVsV-9Hwpm2iOD_OJusPZTIrEUCmLdIWtb9abVNWNzMNOmA4TkRLqLVca0o0HAw"}]}]}]}
|
elastic/distilbert-base-uncased-finetuned-conll03-english
| null |
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"token-classification",
"en",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MarianMix_en-10
This model is a fine-tuned version of [Helsinki-NLP/opus-tatoeba-en-ja](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-ja) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0752
- Bleu: 14.601
- Gen Len: 45.8087
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 99
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:--------:|
| 2.1136 | 0.44 | 500 | 2.0044 | 0.2655 | 109.0201 |
| 1.1422 | 0.89 | 1000 | 1.7516 | 1.4123 | 71.0 |
| 0.9666 | 1.33 | 1500 | 1.5219 | 3.6611 | 64.6888 |
| 0.8725 | 1.78 | 2000 | 1.3606 | 4.6539 | 77.1641 |
| 0.7655 | 2.22 | 2500 | 1.2586 | 8.3456 | 60.3837 |
| 0.7149 | 2.67 | 3000 | 1.1953 | 11.2247 | 50.5921 |
| 0.6719 | 3.11 | 3500 | 1.1541 | 10.4303 | 54.3776 |
| 0.6265 | 3.56 | 4000 | 1.1186 | 13.3231 | 48.283 |
| 0.6157 | 4.0 | 4500 | 1.0929 | 13.8467 | 46.569 |
| 0.5736 | 4.44 | 5000 | 1.0848 | 14.2731 | 45.5035 |
| 0.5683 | 4.89 | 5500 | 1.0752 | 14.601 | 45.8087 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.17.0
- Tokenizers 0.10.3
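## Example usage
The card provides no usage details. The sketch below (not part of the original card) assumes the standard Marian seq2seq interface; whether target-language control tokens (e.g. `>>jpn<<`) are required for this multi-target checkpoint is not documented here.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative sketch; the need for target-language prefix tokens is left open.
model_name = "eldor-97/MarianMix_en-10"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("How are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```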
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "model-index": [{"name": "MarianMix_en-10", "results": []}]}
|
eldor-97/MarianMix_en-10
| null |
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
eldor-97/MarianMix_en-ja-1-2
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# Rick DialoGPT model
|
{"tags": ["conversational"]}
|
eldritch-axolotl/Rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
T5 pre-trained on e-commerce data
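A minimal loading sketch (illustrative, not from the original card), assuming the checkpoint exposes the standard T5 text2text interface; the pre-training task and any prompt prefixes are not documented.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative only; the input below is an arbitrary example.
model_name = "elena-soare/t5-base-ecommerce"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("wireless noise-cancelling headphones", return_tensors="pt")
outputs = model.generate(**inputs, max_length=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```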
|
{}
|
elena-soare/t5-base-ecommerce
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
Datasaur project
|
{}
|
elena-soare/t5-small-datasaur
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
elfarash/distilbert-base-uncased-finetuned-squad
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
question-answering
|
transformers
|
## CS224n SQuAD2.0 Project Dataset
The goal of this model is to save CS224n students GPU time when establishing
baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf).
The training set used to fine-tune this model is the same as
the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however,
evaluation and model selection were performed using roughly half of the official
dev set, 6078 examples, picked at random. The data files can be found at
<https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020
version. Given that the official SQuAD2.0 dev set contains the project's test
set, students must make sure not to use the official SQuAD2.0 dev set in any way
— including the use of models fine-tuned on the official SQuAD2.0, since they
used the official SQuAD2.0 dev set for model selection.
<a href="https://huggingface.co/exbert/?model=elgeish/cs224n-squad2.0-albert-base-v2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
## Results
```json
{
"exact": 78.94044093451794,
"f1": 81.7724930324639,
"total": 6078,
"HasAns_exact": 76.28865979381443,
"HasAns_f1": 82.20385314478195,
"HasAns_total": 2910,
"NoAns_exact": 81.37626262626263,
"NoAns_f1": 81.37626262626263,
"NoAns_total": 3168,
"best_exact": 78.95689371503784,
"best_exact_thresh": 0.0,
"best_f1": 81.78894581298378,
"best_f1_thresh": 0.0
}
```
## Notable Arguments
```json
{
"do_lower_case": true,
"doc_stride": 128,
"fp16": false,
"fp16_opt_level": "O1",
"gradient_accumulation_steps": 24,
"learning_rate": 3e-05,
"max_answer_length": 30,
"max_grad_norm": 1,
"max_query_length": 64,
"max_seq_length": 384,
"model_name_or_path": "albert-base-v2",
"model_type": "albert",
"num_train_epochs": 3,
"per_gpu_train_batch_size": 8,
"save_steps": 5000,
"seed": 42,
"train_batch_size": 8,
"version_2_with_negative": true,
"warmup_steps": 0,
"weight_decay": 0
}
```
## Environment Setup
```json
{
"transformers": "2.5.1",
"pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
"python": "3.6.5=hc3d631a_2",
"os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
"gpu": "Tesla V100-SXM2-16GB"
}
```
## How to Cite
```BibTeX
@misc{elgeish2020gestalt,
title={Gestalt: a Stacking Ensemble for SQuAD2.0},
author={Mohamed El-Geish},
journal={arXiv e-prints},
archivePrefix={arXiv},
eprint={2004.07067},
year={2020},
}
```
## Related Models
* [elgeish/cs224n-squad2.0-albert-large-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-large-v2)
* [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1)
* [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased)
* [elgeish/cs224n-squad2.0-roberta-base](https://huggingface.co/elgeish/cs224n-squad2.0-roberta-base)
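## Usage
An illustrative inference sketch (not part of the original card), using the standard question-answering pipeline:
```python
from transformers import pipeline

# Illustrative only; question and context are arbitrary examples.
qa = pipeline("question-answering", model="elgeish/cs224n-squad2.0-albert-base-v2")
result = qa(
    question="How many examples were used for evaluation?",
    context="Evaluation and model selection were performed using roughly half of the "
            "official dev set, 6078 examples, picked at random.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```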
|
{"tags": ["exbert"]}
|
elgeish/cs224n-squad2.0-albert-base-v2
| null |
[
"transformers",
"pytorch",
"albert",
"question-answering",
"exbert",
"arxiv:2004.07067",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
## CS224n SQuAD2.0 Project Dataset
The goal of this model is to save CS224n students GPU time when establishing
baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf).
The training set used to fine-tune this model is the same as
the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however,
evaluation and model selection were performed using roughly half of the official
dev set, 6078 examples, picked at random. The data files can be found at
<https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020
version. Given that the official SQuAD2.0 dev set contains the project's test
set, students must make sure not to use the official SQuAD2.0 dev set in any way
— including the use of models fine-tuned on the official SQuAD2.0, since they
used the official SQuAD2.0 dev set for model selection.
<a href="https://huggingface.co/exbert/?model=elgeish/cs224n-squad2.0-albert-large-v2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
## Results
```json
{
"exact": 79.2694965449161,
"f1": 82.50844352970152,
"total": 6078,
"HasAns_exact": 74.87972508591065,
"HasAns_f1": 81.64478342732858,
"HasAns_total": 2910,
"NoAns_exact": 83.30176767676768,
"NoAns_f1": 83.30176767676768,
"NoAns_total": 3168,
"best_exact": 79.2694965449161,
"best_exact_thresh": 0.0,
"best_f1": 82.50844352970155,
"best_f1_thresh": 0.0
}
```
## Notable Arguments
```json
{
"do_lower_case": true,
"doc_stride": 128,
"fp16": false,
"fp16_opt_level": "O1",
"gradient_accumulation_steps": 1,
"learning_rate": 3e-05,
"max_answer_length": 30,
"max_grad_norm": 1,
"max_query_length": 64,
"max_seq_length": 384,
"model_name_or_path": "albert-large-v2",
"model_type": "albert",
"num_train_epochs": 5,
"per_gpu_train_batch_size": 8,
"save_steps": 5000,
"seed": 42,
"train_batch_size": 8,
"version_2_with_negative": true,
"warmup_steps": 0,
"weight_decay": 0
}
```
## Environment Setup
```json
{
"transformers": "2.5.1",
"pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
"python": "3.6.5=hc3d631a_2",
"os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
"gpu": "Tesla V100-SXM2-16GB"
}
```
## How to Cite
```BibTeX
@misc{elgeish2020gestalt,
title={Gestalt: a Stacking Ensemble for SQuAD2.0},
author={Mohamed El-Geish},
journal={arXiv e-prints},
archivePrefix={arXiv},
eprint={2004.07067},
year={2020},
}
```
## Related Models
* [elgeish/cs224n-squad2.0-albert-base-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-base-v2)
* [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1)
* [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased)
* [elgeish/cs224n-squad2.0-roberta-base](https://huggingface.co/elgeish/cs224n-squad2.0-roberta-base)
|
{"tags": ["exbert"]}
|
elgeish/cs224n-squad2.0-albert-large-v2
| null |
[
"transformers",
"pytorch",
"albert",
"question-answering",
"exbert",
"arxiv:2004.07067",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
## CS224n SQuAD2.0 Project Dataset
The goal of this model is to save CS224n students GPU time when establishing
baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf).
The training set used to fine-tune this model is the same as
the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however,
evaluation and model selection were performed using roughly half of the official
dev set, 6078 examples, picked at random. The data files can be found at
<https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020
version. Given that the official SQuAD2.0 dev set contains the project's test
set, students must make sure not to use the official SQuAD2.0 dev set in any way
— including the use of models fine-tuned on the official SQuAD2.0, since they
used the official SQuAD2.0 dev set for model selection.
<a href="https://huggingface.co/exbert/?model=elgeish/cs224n-squad2.0-albert-xxlarge-v1">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
## Results
```json
{
"exact": 85.93287265547877,
"f1": 88.91258331187983,
"total": 6078,
"HasAns_exact": 84.36426116838489,
"HasAns_f1": 90.58786301361013,
"HasAns_total": 2910,
"NoAns_exact": 87.37373737373737,
"NoAns_f1": 87.37373737373737,
"NoAns_total": 3168,
"best_exact": 85.93287265547877,
"best_exact_thresh": 0.0,
"best_f1": 88.91258331187993,
"best_f1_thresh": 0.0
}
```
## Notable Arguments
```json
{
"do_lower_case": true,
"doc_stride": 128,
"fp16": false,
"fp16_opt_level": "O1",
"gradient_accumulation_steps": 24,
"learning_rate": 3e-05,
"max_answer_length": 30,
"max_grad_norm": 1,
"max_query_length": 64,
"max_seq_length": 512,
"model_name_or_path": "albert-xxlarge-v1",
"model_type": "albert",
"num_train_epochs": 4,
"per_gpu_train_batch_size": 1,
"save_steps": 1000,
"seed": 42,
"train_batch_size": 1,
"version_2_with_negative": true,
"warmup_steps": 814,
"weight_decay": 0
}
```
## Environment Setup
```json
{
"transformers": "2.5.1",
"pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
"python": "3.6.5=hc3d631a_2",
"os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
"gpu": "Tesla V100-SXM2-16GB"
}
```
## How to Cite
```BibTeX
@misc{elgeish2020gestalt,
title={Gestalt: a Stacking Ensemble for SQuAD2.0},
author={Mohamed El-Geish},
journal={arXiv e-prints},
archivePrefix={arXiv},
eprint={2004.07067},
year={2020},
}
```
## Related Models
* [elgeish/cs224n-squad2.0-albert-base-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-base-v2)
* [elgeish/cs224n-squad2.0-albert-large-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-large-v2)
* [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased)
* [elgeish/cs224n-squad2.0-roberta-base](https://huggingface.co/elgeish/cs224n-squad2.0-roberta-base)
|
{"tags": ["exbert"]}
|
elgeish/cs224n-squad2.0-albert-xxlarge-v1
| null |
[
"transformers",
"pytorch",
"albert",
"question-answering",
"exbert",
"arxiv:2004.07067",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
## CS224n SQuAD2.0 Project Dataset
The goal of this model is to save CS224n students GPU time when establishing
baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf).
The training set used to fine-tune this model is the same as
the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however,
evaluation and model selection were performed using roughly half of the official
dev set, 6078 examples, picked at random. The data files can be found at
<https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020
version. Given that the official SQuAD2.0 dev set contains the project's test
set, students must make sure not to use the official SQuAD2.0 dev set in any way
— including the use of models fine-tuned on the official SQuAD2.0, since they
used the official SQuAD2.0 dev set for model selection.
## Results
```json
{
"exact": 65.16946363935504,
"f1": 67.87348075352251,
"total": 6078,
"HasAns_exact": 69.51890034364261,
"HasAns_f1": 75.16667217179045,
"HasAns_total": 2910,
"NoAns_exact": 61.17424242424242,
"NoAns_f1": 61.17424242424242,
"NoAns_total": 3168,
"best_exact": 65.16946363935504,
"best_exact_thresh": 0.0,
"best_f1": 67.87348075352243,
"best_f1_thresh": 0.0
}
```
## Notable Arguments
```json
{
"do_lower_case": true,
"doc_stride": 128,
"fp16": false,
"fp16_opt_level": "O1",
"gradient_accumulation_steps": 24,
"learning_rate": 3e-05,
"max_answer_length": 30,
"max_grad_norm": 1,
"max_query_length": 64,
"max_seq_length": 384,
"model_name_or_path": "distilbert-base-uncased-distilled-squad",
"model_type": "distilbert",
"num_train_epochs": 4,
"per_gpu_train_batch_size": 32,
"save_steps": 5000,
"seed": 42,
"train_batch_size": 32,
"version_2_with_negative": true,
"warmup_steps": 0,
"weight_decay": 0
}
```
## Environment Setup
```json
{
"transformers": "2.5.1",
"pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
"python": "3.6.5=hc3d631a_2",
"os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
"gpu": "Tesla V100-SXM2-16GB"
}
```
## How to Cite
```BibTeX
@misc{elgeish2020gestalt,
title={Gestalt: a Stacking Ensemble for SQuAD2.0},
author={Mohamed El-Geish},
journal={arXiv e-prints},
archivePrefix={arXiv},
eprint={2004.07067},
year={2020},
}
```
## Related Models
* [elgeish/cs224n-squad2.0-albert-base-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-base-v2)
* [elgeish/cs224n-squad2.0-albert-large-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-large-v2)
* [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1)
* [elgeish/cs224n-squad2.0-roberta-base](https://huggingface.co/elgeish/cs224n-squad2.0-roberta-base)
|
{}
|
elgeish/cs224n-squad2.0-distilbert-base-uncased
| null |
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"arxiv:2004.07067",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
## CS224n SQuAD2.0 Project Dataset
The goal of this model is to save CS224n students GPU time when establishing
baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf).
The training set used to fine-tune this model is the same as
the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however,
evaluation and model selection were performed using roughly half of the official
dev set, 6078 examples, picked at random. The data files can be found at
<https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020
version. Given that the official SQuAD2.0 dev set contains the project's test
set, students must make sure not to use the official SQuAD2.0 dev set in any way
— including the use of models fine-tuned on the official SQuAD2.0, since they
used the official SQuAD2.0 dev set for model selection.
## Results
```json
{
"exact": 75.32082922013821,
"f1": 78.66699523704254,
"total": 6078,
"HasAns_exact": 74.84536082474227,
"HasAns_f1": 81.83436324767868,
"HasAns_total": 2910,
"NoAns_exact": 75.75757575757575,
"NoAns_f1": 75.75757575757575,
"NoAns_total": 3168,
"best_exact": 75.32082922013821,
"best_exact_thresh": 0.0,
"best_f1": 78.66699523704266,
"best_f1_thresh": 0.0
}
```
## Notable Arguments
```json
{
"do_lower_case": true,
"doc_stride": 128,
"fp16": false,
"fp16_opt_level": "O1",
"gradient_accumulation_steps": 24,
"learning_rate": 3e-05,
"max_answer_length": 30,
"max_grad_norm": 1,
"max_query_length": 64,
"max_seq_length": 384,
"model_name_or_path": "roberta-base",
"model_type": "roberta",
"num_train_epochs": 4,
"per_gpu_train_batch_size": 16,
"save_steps": 5000,
"seed": 42,
"train_batch_size": 16,
"version_2_with_negative": true,
"warmup_steps": 0,
"weight_decay": 0
}
```
## Environment Setup
```json
{
"transformers": "2.5.1",
"pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
"python": "3.6.5=hc3d631a_2",
"os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
"gpu": "Tesla V100-SXM2-16GB"
}
```
## How to Cite
```BibTeX
@misc{elgeish2020gestalt,
title={Gestalt: a Stacking Ensemble for SQuAD2.0},
author={Mohamed El-Geish},
journal={arXiv e-prints},
archivePrefix={arXiv},
eprint={2004.07067},
year={2020},
}
```
## Related Models
* [elgeish/cs224n-squad2.0-albert-base-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-base-v2)
* [elgeish/cs224n-squad2.0-albert-large-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-large-v2)
* [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1)
* [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased)
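## Usage
An illustrative sketch (not part of the original card) showing direct model inference with naive argmax span decoding; it ignores SQuAD2.0's no-answer handling, which the question-answering pipeline treats more carefully:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Illustrative only; question and context are arbitrary examples.
model_name = "elgeish/cs224n-squad2.0-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name).eval()

question = "How many examples were used for evaluation?"
context = ("Evaluation and model selection were performed using roughly half of the "
           "official dev set, 6078 examples, picked at random.")
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Naive decoding: take the highest-scoring start and end positions.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```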
|
{}
|
elgeish/cs224n-squad2.0-roberta-base
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"question-answering",
"arxiv:2004.07067",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# GPT2-Medium-Arabic-Poetry
Fine-tuned [aubmindlab/aragpt2-medium](https://huggingface.co/aubmindlab/aragpt2-medium) on
the [Arabic Poetry Dataset (6th - 21st century)](https://www.kaggle.com/fahd09/arabic-poetry-dataset-478-2017)
using 41,922 lines of poetry as the train split and 9,007 (by poets not in the train split) for validation.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
set_seed(42)
model_name = "elgeish/gpt2-medium-arabic-poetry"
model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "للوهلة الأولى قرأت في عينيه"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
samples = model.generate(
input_ids.to("cuda"),
do_sample=True,
early_stopping=True,
max_length=32,
min_length=16,
num_return_sequences=3,
pad_token_id=50256,
repetition_penalty=1.5,
top_k=32,
top_p=0.95,
)
for sample in samples:
print(tokenizer.decode(sample.tolist()))
print("--")
```
Here's the output:
```
للوهلة الأولى قرأت في عينيه عن تلك النسم لم تذكر شيءا فلربما نامت علي كتفيها العصافير وتناثرت اوراق التوت عليها وغابت الوردة من
--
للوهلة الأولى قرأت في عينيه اية نشوة من ناره وهي تنظر الي المستقبل بعيون خلاقة ورسمت خطوطه العريضة علي جبينك العاري رسمت الخطوط الحمر فوق شعرك
--
للوهلة الأولى قرأت في عينيه كل ما كان وما سيكون غدا اذا لم تكن امراة ستكبر كثيرا علي الورق الابيض او لا تري مثلا خطوطا رفيعة فوق صفحة الماء
--
```
|
{"language": "ar", "license": "apache-2.0", "tags": ["text-generation", "poetry"], "datasets": ["Arabic Poetry Dataset (6th - 21st century)"], "metrics": ["perplexity"], "widget": [{"text": "\u0644\u0644\u0648\u0647\u0644\u0629 \u0627\u0644\u0623\u0648\u0644\u0649 \u0642\u0631\u0623\u062a \u0641\u064a \u0639\u064a\u0646\u064a\u0647"}], "model-index": [{"name": "elgeish Arabic GPT2 Medium", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Arabic Poetry Dataset (6th - 21st century)", "type": "poetry", "args": "ar"}, "metrics": [{"type": "perplexity", "value": 282.09, "name": "Validation Perplexity"}]}]}]}
|
elgeish/gpt2-medium-arabic-poetry
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"poetry",
"ar",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-TIMIT
Fine-tuned [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base)
on the [timit_asr dataset](https://huggingface.co/datasets/timit_asr).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
model_name = "elgeish/wav2vec2-base-timit-asr"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
model.eval()
dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))
char_translations = str.maketrans({"-": " ", ",": "", ".": "", "?": ""})
def prepare_example(example):
example["speech"], _ = sf.read(example["file"])
example["text"] = example["text"].translate(char_translations)
example["text"] = " ".join(example["text"].split()) # clean up whitespaces
example["text"] = example["text"].lower()
return example
dataset = dataset.map(prepare_example, remove_columns=["file"])
inputs = processor(dataset["speech"], sampling_rate=16000, return_tensors="pt", padding="longest")
with torch.no_grad():
predicted_ids = torch.argmax(model(inputs.input_values).logits, dim=-1)
predicted_ids[predicted_ids == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
predicted_transcripts = processor.tokenizer.batch_decode(predicted_ids)
for reference, predicted in zip(dataset["text"], predicted_transcripts):
print("reference:", reference)
print("predicted:", predicted)
print("--")
```
Here's the output:
```
reference: she had your dark suit in greasy wash water all year
predicted: she had your dark suit in greasy wash water all year
--
reference: where were you while we were away
predicted: where were you while we were away
--
reference: cory and trish played tag with beach balls for hours
predicted: tcory and trish played tag with beach balls for hours
--
reference: tradition requires parental approval for under age marriage
predicted: tradition requires parrental proval for under age marrage
--
reference: objects made of pewter are beautiful
predicted: objects made of puder are bautiful
--
reference: don't ask me to carry an oily rag like that
predicted: don't o ask me to carry an oily rag like that
--
reference: cory and trish played tag with beach balls for hours
predicted: cory and trish played tag with beach balls for ours
--
reference: don't ask me to carry an oily rag like that
predicted: don't ask me to carry an oily rag like that
--
reference: don't do charlie's dirty dishes
predicted: don't do chawly's tirty dishes
--
reference: only those story tellers will remain who can imitate the style of the virtuous
predicted: only those story tillaers will remain who can imvitate the style the virtuous
```
## Fine-Tuning Script
You can find the script used to produce this model
[here](https://github.com/elgeish/transformers/blob/cfc0bd01f2ac2ea3a5acc578ef2e204bf4304de7/examples/research_projects/wav2vec2/finetune_base_timit_asr.sh).
|
{"language": "en", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech"], "datasets": ["timit_asr"]}
|
elgeish/wav2vec2-base-timit-asr
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"en",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-LV60-TIMIT
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60)
on the [timit_asr dataset](https://huggingface.co/datasets/timit_asr).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
model_name = "elgeish/wav2vec2-large-lv60-timit-asr"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
model.eval()
dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))
char_translations = str.maketrans({"-": " ", ",": "", ".": "", "?": ""})
def prepare_example(example):
example["speech"], _ = sf.read(example["file"])
example["text"] = example["text"].translate(char_translations)
example["text"] = " ".join(example["text"].split()) # clean up whitespaces
example["text"] = example["text"].lower()
return example
dataset = dataset.map(prepare_example, remove_columns=["file"])
inputs = processor(dataset["speech"], sampling_rate=16000, return_tensors="pt", padding="longest")
with torch.no_grad():
predicted_ids = torch.argmax(model(inputs.input_values).logits, dim=-1)
predicted_ids[predicted_ids == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
predicted_transcripts = processor.tokenizer.batch_decode(predicted_ids)
for reference, predicted in zip(dataset["text"], predicted_transcripts):
print("reference:", reference)
print("predicted:", predicted)
print("--")
```
Here's the output:
```
reference: the emblem depicts the acropolis all aglow
predicted: the amblum depicts the acropolis all a glo
--
reference: don't ask me to carry an oily rag like that
predicted: don't ask me to carry an oily rag like that
--
reference: they enjoy it when i audition
predicted: they enjoy it when i addition
--
reference: set aside to dry with lid on sugar bowl
predicted: set aside to dry with a litt on shoogerbowl
--
reference: a boring novel is a superb sleeping pill
predicted: a bor and novel is a suberb sleeping peel
--
reference: only the most accomplished artists obtain popularity
predicted: only the most accomplished artists obtain popularity
--
reference: he has never himself done anything for which to be hated which of us has
predicted: he has never himself done anything for which to be hated which of us has
--
reference: the fish began to leap frantically on the surface of the small lake
predicted: the fish began to leap frantically on the surface of the small lake
--
reference: or certain words or rituals that child and adult go through may do the trick
predicted: or certain words or rituals that child an adult go through may do the trick
--
reference: are your grades higher or lower than nancy's
predicted: are your grades higher or lower than nancies
--
```
## Fine-Tuning Script
You can find the script used to produce this model
[here](https://github.com/elgeish/transformers/blob/8ee49e09c91ffd5d23034ce32ed630d988c50ddf/examples/research_projects/wav2vec2/finetune_large_lv60_timit_asr.sh).
**Note:** This model can be fine-tuned further;
[trainer_state.json](https://huggingface.co/elgeish/wav2vec2-large-lv60-timit-asr/blob/main/trainer_state.json)
shows useful details, namely the last state (this checkpoint):
```json
{
"epoch": 29.51,
"eval_loss": 25.424150466918945,
"eval_runtime": 182.9499,
"eval_samples_per_second": 9.183,
"eval_wer": 0.1351704233095107,
"step": 8500
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech"], "datasets": ["timit_asr"]}
|
elgeish/wav2vec2-large-lv60-timit-asr
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"en",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on Arabic using the `train` splits of [Common Voice](https://huggingface.co/datasets/common_voice)
and [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from lang_trans.arabic import buckwalter
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
dataset = load_dataset("common_voice", "ar", split="test[:10]")
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
return example
dataset = dataset.map(prepare_example)
processor = Wav2Vec2Processor.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic")
model = Wav2Vec2ForCTC.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic").eval()
def predict(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
predicted = torch.argmax(model(inputs.input_values).logits, dim=-1)
predicted[predicted == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
batch["predicted"] = processor.tokenizer.batch_decode(predicted)
return batch
dataset = dataset.map(predict, batched=True, batch_size=1, remove_columns=["speech"])
for reference, predicted in zip(dataset["sentence"], dataset["predicted"]):
print("reference:", reference)
print("predicted:", buckwalter.untrans(predicted))
print("--")
```
Here's the output:
```
reference: ألديك قلم ؟
predicted: هلديك قالر
--
reference: ليست هناك مسافة على هذه الأرض أبعد من يوم أمس.
predicted: ليست نالك مسافة على هذه الأرض أبعد من يوم أمس
--
reference: إنك تكبر المشكلة.
predicted: إنك تكبر المشكلة
--
reference: يرغب أن يلتقي بك.
predicted: يرغب أن يلتقي بك
--
reference: إنهم لا يعرفون لماذا حتى.
predicted: إنهم لا يعرفون لماذا حتى
--
reference: سيسعدني مساعدتك أي وقت تحب.
predicted: سيسئدني مساعد سكرأي وقت تحب
--
reference: أَحَبُّ نظريّة علمية إليّ هي أن حلقات زحل مكونة بالكامل من الأمتعة المفقودة.
predicted: أحب ناضريةً علمية إلي هي أنحل قتزح المكونا بالكامل من الأمت عن المفقودة
--
reference: سأشتري له قلماً.
predicted: سأشتري له قلما
--
reference: أين المشكلة ؟
predicted: أين المشكل
--
reference: وَلِلَّهِ يَسْجُدُ مَا فِي السَّمَاوَاتِ وَمَا فِي الْأَرْضِ مِنْ دَابَّةٍ وَالْمَلَائِكَةُ وَهُمْ لَا يَسْتَكْبِرُونَ
predicted: ولله يسجد ما في السماوات وما في الأرض من دابة والملائكة وهم لا يستكبرون
--
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice:
```python
import jiwer
import torch
import torchaudio
from datasets import load_dataset
from lang_trans.arabic import buckwalter
from transformers import set_seed, Wav2Vec2ForCTC, Wav2Vec2Processor
set_seed(42)
test_split = load_dataset("common_voice", "ar", split="test")
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
return example
test_split = test_split.map(prepare_example)
processor = Wav2Vec2Processor.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic")
model = Wav2Vec2ForCTC.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic").to("cuda").eval()
def predict(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
predicted = torch.argmax(model(inputs.input_values.to("cuda")).logits, dim=-1)
predicted[predicted == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
batch["predicted"] = processor.batch_decode(predicted)
return batch
test_split = test_split.map(predict, batched=True, batch_size=16, remove_columns=["speech"])
transformation = jiwer.Compose([
# normalize some diacritics, remove punctuation, and replace Persian letters with Arabic ones
jiwer.SubstituteRegexes({
r'[auiFNKo\~_،؟»\?;:\-,\.؛«!"]': "", "\u06D6": "",
r"[\|\{]": "A", "p": "h", "ک": "k", "ی": "y"}),
# default transformation below
jiwer.RemoveMultipleSpaces(),
jiwer.Strip(),
jiwer.SentencesToListOfWords(),
jiwer.RemoveEmptyStrings(),
])
metrics = jiwer.compute_measures(
truth=[buckwalter.trans(s) for s in test_split["sentence"]], # Buckwalter transliteration
hypothesis=test_split["predicted"],
truth_transform=transformation,
hypothesis_transform=transformation,
)
print(f"WER: {metrics['wer']:.2%}")
```
**Test Result**: 26.55%
## Training
For more details, see [Fine-Tuning with Arabic Speech Corpus](https://github.com/huggingface/transformers/tree/1c06240e1b3477728129bb58e7b6c7734bb5074e/examples/research_projects/wav2vec2#fine-tuning-with-arabic-speech-corpus).
This model represents Arabic in a format called [Buckwalter transliteration](https://en.wikipedia.org/wiki/Buckwalter_transliteration).
The Buckwalter format only includes ASCII characters, some of which are non-alpha (e.g., `">"` maps to `"أ"`).
The [lang-trans](https://github.com/kariminf/lang-trans) package is used to transliterate the Arabic abjad to and from this format.
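For example, a quick round trip between Arabic script and its Buckwalter transliteration (the sample string is illustrative):
```python
from lang_trans.arabic import buckwalter

text = "من فضلك"  # illustrative input
translit = buckwalter.trans(text)        # Arabic script -> ASCII Buckwalter
restored = buckwalter.untrans(translit)  # ASCII Buckwalter -> Arabic script
print(translit)
print(restored)
```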
[This script](https://github.com/huggingface/transformers/blob/1c06240e1b3477728129bb58e7b6c7734bb5074e/examples/research_projects/wav2vec2/finetune_large_xlsr_53_arabic_speech_corpus.sh)
was used to first fine-tune [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the `train` split of the [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus) dataset;
the `test` split was used for model selection; the resulting model at this point is saved as [elgeish/wav2vec2-large-xlsr-53-levantine-arabic](https://huggingface.co/elgeish/wav2vec2-large-xlsr-53-levantine-arabic).
Training was then resumed using the `train` split of the [Common Voice](https://huggingface.co/datasets/common_voice) dataset;
the `validation` split was used for model selection;
training was stopped to meet the deadline of [Fine-Tune-XLSR Week](https://github.com/huggingface/transformers/blob/700229f8a4003c4f71f29275e0874b5ba58cd39d/examples/research_projects/wav2vec2/FINE_TUNE_XLSR_WAV2VEC2.md):
this model is the checkpoint at 100k steps and a validation WER of **23.39%**.
<img src="https://huggingface.co/elgeish/wav2vec2-large-xlsr-53-arabic/raw/main/validation_wer.png" alt="Validation WER" width="100%" />
It's worth noting that the validation WER is still trending down, which suggests that further training (resuming at the decayed learning rate of 7e-6) could yield additional gains.
## Future Work
One area to explore is passing `attention_mask` in the model input, which is recommended [here](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2); a minimal sketch follows below.
Another is data augmentation using the datasets used to train the models listed [here](https://paperswithcode.com/sota/speech-recognition-on-common-voice-arabic).
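A minimal sketch of the `attention_mask` idea, assuming the processor's feature extractor is configured to return one (placeholder audio arrays stand in for real speech):
```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic")
model = Wav2Vec2ForCTC.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic").eval()

# Placeholder 16kHz audio of different lengths; padding makes the attention mask meaningful.
speech_batch = [np.zeros(16000, dtype=np.float32), np.zeros(24000, dtype=np.float32)]
inputs = processor(speech_batch, sampling_rate=16000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(
        inputs.input_values,
        attention_mask=inputs.get("attention_mask"),  # None if the processor does not emit one
    ).logits
predicted_ids = torch.argmax(logits, dim=-1)
```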
|
{"language": "ar", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["arabic_speech_corpus", "mozilla-foundation/common_voice_6_1"], "metrics": ["wer"], "model-index": [{"name": "elgeish-wav2vec2-large-xlsr-53-arabic", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 6.1 (Arabic)", "type": "mozilla-foundation/common_voice_6_1", "config": "ar", "split": "test", "args": {"language": "ar"}}, "metrics": [{"type": "wer", "value": 26.55, "name": "Test WER"}, {"type": "wer", "value": 23.39, "name": "Validation WER"}]}]}]}
|
elgeish/wav2vec2-large-xlsr-53-arabic
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"ar",
"dataset:arabic_speech_corpus",
"dataset:mozilla-foundation/common_voice_6_1",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Arabic Speech Corpus dataset](https://huggingface.co/datasets/arabic_speech_corpus).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import torch
from datasets import load_dataset
from lang_trans.arabic import buckwalter
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
dataset = load_dataset("arabic_speech_corpus", split="test") # "test[:n]" for n examples
processor = Wav2Vec2Processor.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic")
model = Wav2Vec2ForCTC.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic")
model.eval()
def prepare_example(example):
example["speech"], _ = librosa.load(example["file"], sr=16000)
example["text"] = example["text"].replace("-", " ").replace("^", "v")
example["text"] = " ".join(w for w in example["text"].split() if w != "sil")
return example
dataset = dataset.map(prepare_example, remove_columns=["file", "orthographic", "phonetic"])
def predict(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding="longest")
with torch.no_grad():
predicted = torch.argmax(model(inputs.input_values).logits, dim=-1)
predicted[predicted == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
batch["predicted"] = processor.tokenizer.batch_decode(predicted)
return batch
dataset = dataset.map(predict, batched=True, batch_size=1, remove_columns=["speech"])
for reference, predicted in zip(dataset["text"], dataset["predicted"]):
print("reference:", reference)
print("predicted:", predicted)
print("reference (untransliterated):", buckwalter.untrans(reference))
print("predicted (untransliterated):", buckwalter.untrans(predicted))
print("--")
```
Here's the output:
```
reference: >atAHat lilbA}iEi lmutajaw~ili >an yakuwna jA*iban lilmuwATini l>aqal~i daxlan
predicted: >ataAHato lilobaA}iEi Alomutajaw~ili >ano yakuwna jaA*ibAF lilomuwaATini Alo>aqal~i daxolAF
reference (untransliterated): أَتاحَت لِلبائِعِ لمُتَجَوِّلِ أَن يَكُونَ جاذِبَن لِلمُواطِنِ لأَقَلِّ دَخلَن
predicted (untransliterated): أَتَاحَتْ لِلْبَائِعِ الْمُتَجَوِّلِ أَنْ يَكُونَ جَاذِباً لِلْمُوَاطِنِ الْأَقَلِّ دَخْلاً
--
reference: >aHrazat muntaxabAtu lbarAziyli wa>lmAnyA waruwsyA fawzan fiy muqAbalAtihim l<iEdAdiy~api l~atiy >uqiymat istiEdAdan linihA}iy~Ati ka>si lEAlam >al~atiy satanTaliqu baEda >aqal~i min >usbuwE
predicted: >aHorazato munotaxabaAtu AlobaraAziyli wa>alomaAnoyaA waruwsoyaA fawozAF fiy muqaAbalaAtihimo >aliEodaAdiy~api Al~atiy >uqiymat AsotiEodaAdAF linahaA}iy~aAti ka>osi AloEaAlamo >al~atiy satanoTaliqu baEoda >aqal~i mino >usobuwEo
reference (untransliterated): أَحرَزَت مُنتَخَباتُ لبَرازِيلِ وَألمانيا وَرُوسيا فَوزَن فِي مُقابَلاتِهِم لإِعدادِيَّةِ لَّتِي أُقِيمَت ِستِعدادَن لِنِهائِيّاتِ كَأسِ لعالَم أَلَّتِي سَتَنطَلِقُ بَعدَ أَقَلِّ مِن أُسبُوع
predicted (untransliterated): أَحْرَزَتْ مُنْتَخَبَاتُ الْبَرَازِيلِ وَأَلْمَانْيَا وَرُوسْيَا فَوْزاً فِي مُقَابَلَاتِهِمْ أَلِعْدَادِيَّةِ الَّتِي أُقِيمَت اسْتِعْدَاداً لِنَهَائِيَّاتِ كَأْسِ الْعَالَمْ أَلَّتِي سَتَنْطَلِقُ بَعْدَ أَقَلِّ مِنْ أُسْبُوعْ
--
reference: >axfaqa majlisu ln~uw~Abi ll~ubnAniy~u fiy xtiyAri ra}iysin jadiydin lilbilAdi xalafan lilr~a}iysi lHAliy~i l~a*iy tantahiy wilAyatuhu fiy lxAmisi wAlEi$riyn min mAyuw >ayAra lmuqbil
predicted: >axofaqa majolisu Aln~uw~aAbi All~ubonaAniy~u fiy AxotiyaAri ra}iysK jadiydK lilobilaAdi xalafAF lilr~a}iysi AloHaAliy~i Al~a*iy tanotahiy wilaAyatuhu fiy AloxaAmisi waAloEi$oriyno mino maAyuw >ay~aAra Alomuqobilo
reference (untransliterated): أَخفَقَ مَجلِسُ لنُّوّابِ للُّبنانِيُّ فِي ختِيارِ رَئِيسِن جَدِيدِن لِلبِلادِ خَلَفَن لِلرَّئِيسِ لحالِيِّ لَّذِي تَنتَهِي وِلايَتُهُ فِي لخامِسِ والعِشرِين مِن مايُو أَيارَ لمُقبِل
predicted (untransliterated): أَخْفَقَ مَجْلِسُ النُّوَّابِ اللُّبْنَانِيُّ فِي اخْتِيَارِ رَئِيسٍ جَدِيدٍ لِلْبِلَادِ خَلَفاً لِلرَّئِيسِ الْحَالِيِّ الَّذِي تَنْتَهِي وِلَايَتُهُ فِي الْخَامِسِ وَالْعِشْرِينْ مِنْ مَايُو أَيَّارَ الْمُقْبِلْ
--
reference: <i* sayaHDuru liqA'a ha*A lEAmi xamsun wavalAvuwna minhum
predicted: <i*o sayaHoDuru riqaA'a ha*aA AloEaAmi xamosN wa valaAvuwna minohumo
reference (untransliterated): إِذ سَيَحضُرُ لِقاءَ هَذا لعامِ خَمسُن وَثَلاثُونَ مِنهُم
predicted (untransliterated): إِذْ سَيَحْضُرُ رِقَاءَ هَذَا الْعَامِ خَمْسٌ وَ ثَلَاثُونَ مِنْهُمْ
--
reference: >aElanati lHukuwmapu lmiSriy~apu Ean waqfi taqdiymi ld~aEmi ln~aqdiy~i limuzAriEiy lquTni <iEtibAran mina lmuwsimi lz~irAEiy~i lmuqbil
predicted: >aEolanati AloHukuwmapu AlomiSoriy~apu Eano waqofi taqodiymi Ald~aEomi Aln~aqodiy~i limuzaAriEiy AloquToni <iEotibaArAF mina Alomuwsimi Alz~iraAEiy~i Alomuqobilo
reference (untransliterated): أَعلَنَتِ لحُكُومَةُ لمِصرِيَّةُ عَن وَقفِ تَقدِيمِ لدَّعمِ لنَّقدِيِّ لِمُزارِعِي لقُطنِ إِعتِبارَن مِنَ لمُوسِمِ لزِّراعِيِّ لمُقبِل
predicted (untransliterated): أَعْلَنَتِ الْحُكُومَةُ الْمِصْرِيَّةُ عَنْ وَقْفِ تَقْدِيمِ الدَّعْمِ النَّقْدِيِّ لِمُزَارِعِي الْقُطْنِ إِعْتِبَاراً مِنَ الْمُوسِمِ الزِّرَاعِيِّ الْمُقْبِلْ
--
reference: >aElanat wizArapu lSi~Ha~pi lsa~Euwdiya~pu lyawma Ean wafAtayni jadiydatayni biAlfayruwsi lta~Ajiyi kuwruwnA nuwfil
predicted: >aEolanato wizaArapu AlS~iH~api Als~aEuwdiy~apu Aloyawoma Eano wafaAtayoni jadiydatayoni biAlofayoruwsi Alt~aAjiy kuwruwnaA nuwfiylo
reference (untransliterated): أَعلَنَت وِزارَةُ لصِّحَّةِ لسَّعُودِيَّةُ ليَومَ عَن وَفاتَينِ جَدِيدَتَينِ بِالفَيرُوسِ لتَّاجِيِ كُورُونا نُوفِل
predicted (untransliterated): أَعْلَنَتْ وِزَارَةُ الصِّحَّةِ السَّعُودِيَّةُ الْيَوْمَ عَنْ وَفَاتَيْنِ جَدِيدَتَيْنِ بِالْفَيْرُوسِ التَّاجِي كُورُونَا نُوفِيلْ
--
reference: <iftutiHati ljumuEapa faE~Aliy~Atu ld~awrapi lr~AbiEapa Ea$rapa mina lmihrajAni ld~awliy~i lilfiylmi bimur~Aki$
predicted: <ifotutiHapi AlojumuwEapa faEaAliyaAtu Ald~aworapi Alr~aAbiEapa Ea$orapa miyna AlomihorajaAni Ald~awoliy~i lilofiylomi bimur~Aki$
reference (untransliterated): إِفتُتِحَتِ لجُمُعَةَ فَعّالِيّاتُ لدَّورَةِ لرّابِعَةَ عَشرَةَ مِنَ لمِهرَجانِ لدَّولِيِّ لِلفِيلمِ بِمُرّاكِش
predicted (untransliterated): إِفْتُتِحَةِ الْجُمُوعَةَ فَعَالِيَاتُ الدَّوْرَةِ الرَّابِعَةَ عَشْرَةَ مِينَ الْمِهْرَجَانِ الدَّوْلِيِّ لِلْفِيلْمِ بِمُرّاكِش
--
reference: >ak~adat Ea$ru duwalin Earabiy~apin $Arakati lxamiysa lmADiya fiy jtimAEi jd~ap muwAfaqatahA EalY l<inDimAmi <ilY Hilfin maEa lwilAyAti lmut~aHidapi li$an~i Hamlapin Easkariy~apin munas~aqapin Did~a tanZiymi >ald~awlapi l<islAmiy~api
predicted: >ak~adato Ea$oru duwalK Earabiy~apK $aArakapiy Aloxamiysa AlomaADiya fiy AjotimaAEi jad~ap muwaAfaqatahaA EalaY Alo<inoDimaAmi <ilaY HilofK maEa AlowilaAyaAti Alomut~aHidapi li$an~i HamolapK Easokariy~apK munas~aqapK id~a tanoZiymi Ald~awolapi Alo<isolaAmiy~api
reference (untransliterated): أَكَّدَت عَشرُ دُوَلِن عَرَبِيَّةِن شارَكَتِ لخَمِيسَ لماضِيَ فِي جتِماعِ جدَّة مُوافَقَتَها عَلى لإِنضِمامِ إِلى حِلفِن مَعَ لوِلاياتِ لمُتَّحِدَةِ لِشَنِّ حَملَةِن عَسكَرِيَّةِن مُنَسَّقَةِن ضِدَّ تَنظِيمِ أَلدَّولَةِ لإِسلامِيَّةِ
predicted (untransliterated): أَكَّدَتْ عَشْرُ دُوَلٍ عَرَبِيَّةٍ شَارَكَةِي الْخَمِيسَ الْمَاضِيَ فِي اجْتِمَاعِ جَدَّة مُوَافَقَتَهَا عَلَى الْإِنْضِمَامِ إِلَى حِلْفٍ مَعَ الْوِلَايَاتِ الْمُتَّحِدَةِ لِشَنِّ حَمْلَةٍ عَسْكَرِيَّةٍ مُنَسَّقَةٍ ِدَّ تَنْظِيمِ الدَّوْلَةِ الْإِسْلَامِيَّةِ
--
reference: <iltaHaqa luwkA ziydAna <ibnu ln~ajmi ld~awliy~i lfaransiy~i ljazA}iriy~i l>Sli zayni ld~iyni ziydAn biAlfariyq
predicted: <ilotaHaqa luwkaA ziydaAna <ibonu Aln~ajomi Ald~awoliy~i Alofaranosiy~i AlojazaA}iriy~i Alo>aSoli zayoni Ald~iyni zayodaAno biAlofariyqo
reference (untransliterated): إِلتَحَقَ لُوكا زِيدانَ إِبنُ لنَّجمِ لدَّولِيِّ لفَرَنسِيِّ لجَزائِرِيِّ لأصلِ زَينِ لدِّينِ زِيدان بِالفَرِيق
predicted (untransliterated): إِلْتَحَقَ لُوكَا زِيدَانَ إِبْنُ النَّجْمِ الدَّوْلِيِّ الْفَرَنْسِيِّ الْجَزَائِرِيِّ الْأَصْلِ زَيْنِ الدِّينِ زَيْدَانْ بِالْفَرِيقْ
--
reference: >alma$Akilu l~atiy yatrukuhA xalfahu dA}iman
predicted: Aloma$aAkilu Al~atiy yatorukuhaA xalofahu daA}imAF
reference (untransliterated): أَلمَشاكِلُ لَّتِي يَترُكُها خَلفَهُ دائِمَن
predicted (untransliterated): الْمَشَاكِلُ الَّتِي يَتْرُكُهَا خَلْفَهُ دَائِماً
--
reference: >al~a*iy yataDam~anu mazAyA barmajiy~apan wabaSariy~apan Eadiydapan tahdifu limuwAkabapi lt~aTaw~uri lHASili fiy lfaDA'i l<ilktruwniy watashiyli stifAdapi lqur~A'i min xadamAti lmawqiE
predicted: >al~a*iy yataDam~anu mazaAyaA baromajiy~apF wabaSariy~apF EadiydapF tahodifu limuwaAkabapi Alt~aTaw~uri AloHaASili fiy AlofaDaA'i Alo<iloktoruwniy watasohiyli AsotifaAdapi Aloqur~aA'i mino xadaAmaAti AlomawoqiEo
reference (untransliterated): أَلَّذِي يَتَضَمَّنُ مَزايا بَرمَجِيَّةَن وَبَصَرِيَّةَن عَدِيدَةَن تَهدِفُ لِمُواكَبَةِ لتَّطَوُّرِ لحاصِلِ فِي لفَضاءِ لإِلكترُونِي وَتَسهِيلِ ستِفادَةِ لقُرّاءِ مِن خَدَماتِ لمَوقِع
predicted (untransliterated): أَلَّذِي يَتَضَمَّنُ مَزَايَا بَرْمَجِيَّةً وَبَصَرِيَّةً عَدِيدَةً تَهْدِفُ لِمُوَاكَبَةِ التَّطَوُّرِ الْحَاصِلِ فِي الْفَضَاءِ الْإِلْكتْرُونِي وَتَسْهِيلِ اسْتِفَادَةِ الْقُرَّاءِ مِنْ خَدَامَاتِ الْمَوْقِعْ
--
reference: >alfikrapu wa<in badat jadiydapan EalY mujtamaEin yaEiy$u wAqiEan sayi}aan lA tu$aj~iEu EalY lD~aHik
predicted: >alofikorapu wa<inobadato jadiydapF EalaY mujotamaEK yaEiy$u waAqi Eano say~i}AF laA tu$aj~iEu EalaY AlD~aHiko
reference (untransliterated): أَلفِكرَةُ وَإِن بَدَت جَدِيدَةَن عَلى مُجتَمَعِن يَعِيشُ واقِعَن سَيِئََن لا تُشَجِّعُ عَلى لضَّحِك
predicted (untransliterated): أَلْفِكْرَةُ وَإِنْبَدَتْ جَدِيدَةً عَلَى مُجْتَمَعٍ يَعِيشُ وَاقِ عَنْ سَيِّئاً لَا تُشَجِّعُ عَلَى الضَّحِكْ
--
reference: mu$iyraan <ilY xidmapi lqur>Ani lkariymi wataEziyzi EalAqapi lmuslimiyna bihi
predicted: mu$iyrAF <ilaY xidomapi Aloquro|ni Alokariymi wataEoziyzi EalaAqapi Alomusolimiyna bihi
reference (untransliterated): مُشِيرََن إِلى خِدمَةِ لقُرأانِ لكَرِيمِ وَتَعزِيزِ عَلاقَةِ لمُسلِمِينَ بِهِ
predicted (untransliterated): مُشِيراً إِلَى خِدْمَةِ الْقُرْآنِ الْكَرِيمِ وَتَعْزِيزِ عَلَاقَةِ الْمُسْلِمِينَ بِهِ
--
reference: <in~ahu EindamA yakuwnu >aHadu lz~awjayni yastaxdimu >aHada >a$kAli lt~iknuwluwjyA >akvara mina l>Axar
predicted: <in~ahu EinodamaA yakuwnu >aHadu Alz~awojayoni yasotaxodimu >aHada >a$okaAli Alt~iykonuwluwjoyaA >akovara mina Alo|xaro
reference (untransliterated): إِنَّهُ عِندَما يَكُونُ أَحَدُ لزَّوجَينِ يَستَخدِمُ أَحَدَ أَشكالِ لتِّكنُولُوجيا أَكثَرَ مِنَ لأاخَر
predicted (untransliterated): إِنَّهُ عِنْدَمَا يَكُونُ أَحَدُ الزَّوْجَيْنِ يَسْتَخْدِمُ أَحَدَ أَشْكَالِ التِّيكْنُولُوجْيَا أَكْثَرَ مِنَ الْآخَرْ
--
reference: wa*alika biHuDuwri ra}yisi lhay}api
predicted: wa*alika biHuDuwri ra}iysi Alohayo>api
reference (untransliterated): وَذَلِكَ بِحُضُورِ رَئيِسِ لهَيئَةِ
predicted (untransliterated): وَذَلِكَ بِحُضُورِ رَئِيسِ الْهَيْأَةِ
--
reference: wa*alika fiy buTuwlapa ka>si lEAlami lil>andiyapi baEda nusxapin tAriyxiy~apin >alEAma lmADiya <intahat bitatwiyji bAyrin miyuwniyxa l>almAniy~a EalY HisAbi lr~ajA'i lmagribiy~i fiy >aw~ali ta>ah~ulin lifariyqin Earabiy~in <ilY nihA}iy~i lmusAbaqapi
predicted: wa*alika fiy buTuwlapi ka>osiy AloEaAlami lilo>anodiyapi baEoda nusoxapK taAriyxiy~apK >aloEaAma AlomaADiya <inotahato bitatowiyji bAyorinmoyuwnixa Alo>alomaAniy~a EalaY HisaAbi Alr~ajaA'i Alomagoribiy~ifiy >aw~ali ta>ah~ulK lifariyqKEarabiy~K <ilaY nihaA}iy~i AlomusaAbaqapi
reference (untransliterated): وَذَلِكَ فِي بُطُولَةَ كَأسِ لعالَمِ لِلأَندِيَةِ بَعدَ نُسخَةِن تارِيخِيَّةِن أَلعامَ لماضِيَ إِنتَهَت بِتَتوِيجِ بايرِن مِيُونِيخَ لأَلمانِيَّ عَلى حِسابِ لرَّجاءِ لمَغرِبِيِّ فِي أَوَّلِ تَأَهُّلِن لِفَرِيقِن عَرَبِيِّن إِلى نِهائِيِّ لمُسابَقَةِ
predicted (untransliterated): وَذَلِكَ فِي بُطُولَةِ كَأْسِي الْعَالَمِ لِلْأَنْدِيَةِ بَعْدَ نُسْخَةٍ تَارِيخِيَّةٍ أَلْعَامَ الْمَاضِيَ إِنْتَهَتْ بِتَتْوِيجِ بايْرِنمْيُونِخَ الْأَلْمَانِيَّ عَلَى حِسَابِ الرَّجَاءِ الْمَغْرِبِيِّفِي أَوَّلِ تَأَهُّلٍ لِفَرِيقٍعَرَبِيٍّ إِلَى نِهَائِيِّ الْمُسَابَقَةِ
--
reference: bal yajibu lbaHvu fiymA tumav~iluhu min <iDAfapin Haqiyqiy~apin lil<iqtiSAdi lmaSriy~i fiy majAlAti lt~awZiyf biAEtibAri >an~a mu$kilapa lbiTAlapi mina lmu$kilAti lr~a}iysiy~api fiy miSr
predicted: balo yajibu AlobaHovu fiymaA tumav~iluhu mino <iDaAfapK Haqiyqiy~apK lilo<iqotiSaAdi AlomaSoriy~i fiy majaAlaAti Alt~awoZiyfo biAEotibaAri >an~a mu$okilapa AlobiTaAlapi mina Alomu$okilaAti Alr~a}iysiy~api fiy miSori
reference (untransliterated): بَل يَجِبُ لبَحثُ فِيما تُمَثِّلُهُ مِن إِضافَةِن حَقِيقِيَّةِن لِلإِقتِصادِ لمَصرِيِّ فِي مَجالاتِ لتَّوظِيف بِاعتِبارِ أَنَّ مُشكِلَةَ لبِطالَةِ مِنَ لمُشكِلاتِ لرَّئِيسِيَّةِ فِي مِصر
predicted (untransliterated): بَلْ يَجِبُ الْبَحْثُ فِيمَا تُمَثِّلُهُ مِنْ إِضَافَةٍ حَقِيقِيَّةٍ لِلْإِقْتِصَادِ الْمَصْرِيِّ فِي مَجَالَاتِ التَّوْظِيفْ بِاعْتِبَارِ أَنَّ مُشْكِلَةَ الْبِطَالَةِ مِنَ الْمُشْكِلَاتِ الرَّئِيسِيَّةِ فِي مِصْرِ
--
reference: taHtaDinu qAEapu *A fiynyuw wasaTa bayruwta maEriDa lfan~i l<istivnA}iy~i
predicted: taHotaDinu qaAEapu *aAfiynoyw wasaTa bayoruwta maEoriDa Alofan~i Alo<isotivonaA}iy~i
reference (untransliterated): تَحتَضِنُ قاعَةُ ذا فِينيُو وَسَطَ بَيرُوتَ مَعرِضَ لفَنِّ لإِستِثنائِيِّ
predicted (untransliterated): تَحْتَضِنُ قَاعَةُ ذَافِينْيو وَسَطَ بَيْرُوتَ مَعْرِضَ الْفَنِّ الْإِسْتِثْنَائِيِّ
--
reference: tarbiyapu lHamAmi hiwAyapun wamihnapun libaEDi ln~As
predicted: tarobiy~apu AloHamaAmi hiwaAyapN wamihonapN libaEoDi Aln~aAs
reference (untransliterated): تَربِيَةُ لحَمامِ هِوايَةُن وَمِهنَةُن لِبَعضِ لنّاس
predicted (untransliterated): تَرْبِيَّةُ الْحَمَامِ هِوَايَةٌ وَمِهْنَةٌ لِبَعْضِ النَّاس
--
reference: tasEY $abakapu lt~awASuli l<ijtimAEiy~i lS~AEidapu <iylw <ilY munAfasapi $abakapi fysbuwk Eabra lt~axal~iy Eani l<iElAnAti wAlHifAZi EalY lxuSuwSiy~api waHimAyapi lbayAnAt
predicted: tasoEap $abakapu Alt~awaASuli Alo<ijotimaAEiy~i AlS~aAEidapu <iylw <ilaY munaAfasapi $abakapi fysobuwko Eabora Alt~axal~iy Eani Alo<iEolaAnaAti waAloHifaAZi EalaY AloxuSuwSiy~api waHimaAyapi AlobayaAnaAt
reference (untransliterated): تَسعى شَبَكَةُ لتَّواصُلِ لإِجتِماعِيِّ لصّاعِدَةُ إِيلو إِلى مُنافَسَةِ شَبَكَةِ فيسبُوك عَبرَ لتَّخَلِّي عَنِ لإِعلاناتِ والحِفاظِ عَلى لخُصُوصِيَّةِ وَحِمايَةِ لبَيانات
predicted (untransliterated): تَسْعَة شَبَكَةُ التَّوَاصُلِ الْإِجْتِمَاعِيِّ الصَّاعِدَةُ إِيلو إِلَى مُنَافَسَةِ شَبَكَةِ فيسْبُوكْ عَبْرَ التَّخَلِّي عَنِ الْإِعْلَانَاتِ وَالْحِفَاظِ عَلَى الْخُصُوصِيَّةِ وَحِمَايَةِ الْبَيَانَات
--
reference: jamEu lmu&ana~vi lsa~Alimi mivla fAzat <iHdY lTa~AlibAti fiy musAbaqapi lqirA'Ati lqur>Aniya~pi
predicted: jamoEu Alomu&an~avi Als~aAlimi mivola faAzato <iHodaY AlT~aAlibaAti fiy musaAbaqapi AloqiraA'aAti Aloquro|niy~api
reference (untransliterated): جَمعُ لمُؤَنَّثِ لسَّالِمِ مِثلَ فازَت إِحدى لطَّالِباتِ فِي مُسابَقَةِ لقِراءاتِ لقُرأانِيَّةِ
predicted (untransliterated): جَمْعُ الْمُؤَنَّثِ السَّالِمِ مِثْلَ فَازَتْ إِحْدَى الطَّالِبَاتِ فِي مُسَابَقَةِ الْقِرَاءَاتِ الْقُرْآنِيَّةِ
--
reference: Hat~Y l>amsi lqariyb kAna lkaviyru mina l>uwkrAniy~iyn yu$ak~ikuwna fiy ntimA'i tatAri $ibhi jaziyrapi lqarm
predicted: Hat~aY Alo>amosi Aloqariybo kaAna Alokaviyru mina Alo>uwkoraAniy~iyno yu$ak~ikuwna fiy AnotimaA'i tataAri $ibohi jaziyrapi Aloqaromo
reference (untransliterated): حَتّى لأَمسِ لقَرِيب كانَ لكَثِيرُ مِنَ لأُوكرانِيِّين يُشَكِّكُونَ فِي نتِماءِ تَتارِ شِبهِ جَزِيرَةِ لقَرم
predicted (untransliterated): حَتَّى الْأَمْسِ الْقَرِيبْ كَانَ الْكَثِيرُ مِنَ الْأُوكْرَانِيِّينْ يُشَكِّكُونَ فِي انْتِمَاءِ تَتَارِ شِبْهِ جَزِيرَةِ الْقَرْمْ
--
reference: Ha*~arati l>umamu lmut~aHidapu min >an~a lEAlama sayuwAjihu xilAla lEuquwdi lmuqbilapi tafAquma >azmapin muzdawijapin fiy lmiyAh wAlkahrabA'
predicted: Ha*~arapi Alo>umamu Alomut~aHidapu mino >an~a AloEaAlama sayuwaAjihu xilaAla AloEuquwdi Alomuqobilapi tafaAq~uma >azomapK muzodawyijapK fiy AlomiyaA waAlokahorabaA'o
reference (untransliterated): حَذَّرَتِ لأُمَمُ لمُتَّحِدَةُ مِن أَنَّ لعالَمَ سَيُواجِهُ خِلالَ لعُقُودِ لمُقبِلَةِ تَفاقُمَ أَزمَةِن مُزدَوِجَةِن فِي لمِياه والكَهرَباء
predicted (untransliterated): حَذَّرَةِ الْأُمَمُ الْمُتَّحِدَةُ مِنْ أَنَّ الْعَالَمَ سَيُوَاجِهُ خِلَالَ الْعُقُودِ الْمُقْبِلَةِ تَفَاقُّمَ أَزْمَةٍ مُزْدَويِجَةٍ فِي الْمِيَا وَالْكَهْرَبَاءْ
--
reference: HuDuwru baEDi lz~uEamA'i fiy >almasiyrapi ljumhuwriy~api bibAriys
predicted: HuDuwru baEoDi Alz~aEamaA'ifiy >alomasiyrapi Alojumohuwriy~api bibaArys
reference (untransliterated): حُضُورُ بَعضِ لزُّعَماءِ فِي أَلمَسِيرَةِ لجُمهُورِيَّةِ بِبارِيس
predicted (untransliterated): حُضُورُ بَعْضِ الزَّعَمَاءِفِي أَلْمَسِيرَةِ الْجُمْهُورِيَّةِ بِبَاريس
--
reference: Hayvu kAna lEarabu >w~ala man Earafa qiymatahA lEilAjiy~apa fiy lqarni lEA$iri qabla lmiylAd fiy mamlakapi saba>
predicted: Hayovu kaAna AloEarabu >aw~ala mano Earafa qiymatahaA AloEilaAjiy~apa fiy Aloqaroni AloEaA$iri qabola AlomiylaAd fiy mamolakapi saba>o
reference (untransliterated): حَيثُ كانَ لعَرَبُ أوَّلَ مَن عَرَفَ قِيمَتَها لعِلاجِيَّةَ فِي لقَرنِ لعاشِرِ قَبلَ لمِيلاد فِي مَملَكَةِ سَبَأ
predicted (untransliterated): حَيْثُ كَانَ الْعَرَبُ أَوَّلَ مَنْ عَرَفَ قِيمَتَهَا الْعِلَاجِيَّةَ فِي الْقَرْنِ الْعَاشِرِ قَبْلَ الْمِيلَاد فِي مَمْلَكَةِ سَبَأْ
--
reference: daxalati lt~iknuwluwjyA fiy kul~i baytin wa>usrapin wa>aSbaHat tu$ak~ilu ljuz'a lkabiyra min HayAtinA
predicted: daxalati Alt~ikonuwluwjoyaA fiy kul~i bayotK wa>usorapK wa>aSobaHaAtlotu$ak~ilu Alojuzo'a Alokabiyra mino HayaAtina
reference (untransliterated): دَخَلَتِ لتِّكنُولُوجيا فِي كُلِّ بَيتِن وَأُسرَةِن وَأَصبَحَت تُشَكِّلُ لجُزءَ لكَبِيرَ مِن حَياتِنا
predicted (untransliterated): دَخَلَتِ التِّكْنُولُوجْيَا فِي كُلِّ بَيْتٍ وَأُسْرَةٍ وَأَصْبَحَاتلْتُشَكِّلُ الْجُزْءَ الْكَبِيرَ مِنْ حَيَاتِنَ
--
reference: duwna taHmiyli ljismi juhdan kabiyran fiy lbidAyapi qad yatasaba~bu fiy nufuwri l$a~xSi mina l<istimrAr
predicted: duwna taHomiyli Alojisomi juhodAF kabiyrAF fiy AlobidaAyapi qado yatasab~abu fiy nufuwri Al$~axoSi mina Al<isotimoraAro
reference (untransliterated): دُونَ تَحمِيلِ لجِسمِ جُهدَن كَبِيرَن فِي لبِدايَةِ قَد يَتَسَبَّبُ فِي نُفُورِ لشَّخصِ مِنَ لإِستِمرار
predicted (untransliterated): دُونَ تَحْمِيلِ الْجِسْمِ جُهْداً كَبِيراً فِي الْبِدَايَةِ قَدْ يَتَسَبَّبُ فِي نُفُورِ الشَّخْصِ مِنَ الإِسْتِمْرَارْ
--
reference: ragma ln~izAEi ld~Amiy >al~a*iy yaESifu biAlbilAd mun*u val>avi sanawAt
predicted: ragoma Aln~izaAEi Ald~aAmiy >al~a*iy yaEoSifu biAlobilAd muno*u valAvi sanawAt
reference (untransliterated): رَغمَ لنِّزاعِ لدّامِي أَلَّذِي يَعصِفُ بِالبِلاد مُنذُ ثَلأَثِ سَنَوات
predicted (untransliterated): رَغْمَ النِّزَاعِ الدَّامِي أَلَّذِي يَعْصِفُ بِالْبِلاد مُنْذُ ثَلاثِ سَنَوات
--
reference: rafaDa majlisu l>amni ld~awliy~u ma$ruwEa lqarAri lfilisTiyniy~i lr~Amiy <ilY <inhA'i l<iHtilAli l<isrA}iyliy~i fiy EAmayn
predicted: rafaDa majolisu Alo>amoni Ald~awoliy~u ma$oruwEa AloqaraAri AlofilisoTiyniy~i Alr~aAmi <ilaY <inohaA'i Alo<iHotilaAli Alo<isoraA}iyliy~i fiy EaAmayno
reference (untransliterated): رَفَضَ مَجلِسُ لأَمنِ لدَّولِيُّ مَشرُوعَ لقَرارِ لفِلِسطِينِيِّ لرّامِي إِلى إِنهاءِ لإِحتِلالِ لإِسرائِيلِيِّ فِي عامَين
predicted (untransliterated): رَفَضَ مَجْلِسُ الْأَمْنِ الدَّوْلِيُّ مَشْرُوعَ الْقَرَارِ الْفِلِسْطِينِيِّ الرَّامِ إِلَى إِنْهَاءِ الْإِحْتِلَالِ الْإِسْرَائِيلِيِّ فِي عَامَينْ
--
reference: ramzu ld~awlapi lt~urkiy~api lEilmAniy~api al~atiy ta>as~asat Eaqiba nhiyAri ld~awlapi lEuvmAniy~api
predicted: ramozu Ald~awolapi Alt~urokiy~api AloEilomaAniy~api Al~atiy ta>as~asato EaqibaAF hiyaAri Ald~awolapi AloEuvomaAniy~api
reference (untransliterated): رَمزُ لدَّولَةِ لتُّركِيَّةِ لعِلمانِيَّةِ َلَّتِي تَأَسَّسَت عَقِبَ نهِيارِ لدَّولَةِ لعُثمانِيَّةِ
predicted (untransliterated): رَمْزُ الدَّوْلَةِ التُّرْكِيَّةِ الْعِلْمَانِيَّةِ الَّتِي تَأَسَّسَتْ عَقِبَاً هِيَارِ الدَّوْلَةِ الْعُثْمَانِيَّةِ
--
reference: $Araka mawqiEu >aljaziyrapi litaEal~umi lEarabiy~api fiy lmu&tamari ld~awliy~i lv~Aniy lil~ugapi lEarabiy~api >al~a*iy naZ~amathu jAmiEapu mawlAnA mAlik <ibrAhiym >al<islAmiy~apu lHukuwmiyapu bimadiynapi mAlAnq biAlt~aEAwuni maEa jAmiEapi dAri ls~alAm bimadiynapi kuwntuwr fiy >anduwniysyA
predicted: $aAraka mawoqiEu >alojaziyrapi litaEal~umi AloEarabiy~api fiy Alomu&otamari Ald~awoliy~i Alv~aAniy lill~ugapi AloEarabiy~api >al~a*iy naZ~amatohu jaAmiEapu mawolaAnaA maAlik <iboraAhiymo >alo<isolaAmiy~apu AloHukuwmiy~apu bimadiynapi maA laAnoqo biAlt~aEaAwuni maEa jaAmiEapi daAri Als~alaAmo bimadiynapi kuwnotuwro fiy >anoduwniysoyaA
reference (untransliterated): شارَكَ مَوقِعُ أَلجَزِيرَةِ لِتَعَلُّمِ لعَرَبِيَّةِ فِي لمُؤتَمَرِ لدَّولِيِّ لثّانِي لِلُّغَةِ لعَرَبِيَّةِ أَلَّذِي نَظَّمَتهُ جامِعَةُ مَولانا مالِك إِبراهِيم أَلإِسلامِيَّةُ لحُكُومِيَةُ بِمَدِينَةِ مالانق بِالتَّعاوُنِ مَعَ جامِعَةِ دارِ لسَّلام بِمَدِينَةِ كُونتُور فِي أَندُونِيسيا
predicted (untransliterated): شَارَكَ مَوْقِعُ أَلْجَزِيرَةِ لِتَعَلُّمِ الْعَرَبِيَّةِ فِي الْمُؤْتَمَرِ الدَّوْلِيِّ الثَّانِي لِللُّغَةِ الْعَرَبِيَّةِ أَلَّذِي نَظَّمَتْهُ جَامِعَةُ مَوْلَانَا مَالِك إِبْرَاهِيمْ أَلْإِسْلَامِيَّةُ الْحُكُومِيَّةُ بِمَدِينَةِ مَا لَانْقْ بِالتَّعَاوُنِ مَعَ جَامِعَةِ دَارِ السَّلَامْ بِمَدِينَةِ كُونْتُورْ فِي أَنْدُونِيسْيَا
--
reference: $araEa l<it~iHAdu lt~uwnusiy~u lilfuruwsiy~api fiy tanfiy* xuT~apin tarnuw <ilY lmuDiy~i biha*ihi lr~iyADapi naHwa buluwgi lEAlamiy~api
predicted: $aAraEa Alo<it~iHaAdu Alt~uwnusiy~u lilofuruwsiy~api fiy tanofiy*o xuT~apK taronuwA <ilaY AlomuDiy~i biha*ihi Alr~iy~aADapi naHowa buluwgi AloEaAlamiy~api
reference (untransliterated): شَرَعَ لإِتِّحادُ لتُّونُسِيُّ لِلفُرُوسِيَّةِ فِي تَنفِيذ خُطَّةِن تَرنُو إِلى لمُضِيِّ بِهَذِهِ لرِّياضَةِ نَحوَ بُلُوغِ لعالَمِيَّةِ
predicted (untransliterated): شَارَعَ الْإِتِّحَادُ التُّونُسِيُّ لِلْفُرُوسِيَّةِ فِي تَنْفِيذْ خُطَّةٍ تَرْنُوا إِلَى الْمُضِيِّ بِهَذِهِ الرِّيَّاضَةِ نَحْوَ بُلُوغِ الْعَالَمِيَّةِ
--
reference: $ahida EAmu >alfayni wa>arbaEapa Ea$rapa Eid~apa <injAzAtin Tib~iy~apin
predicted: $ahida EaAmu >alfayni wa>arobaEapa Ea$orapa Eid~apa <inojaAzaAtK Tib~iy~apK
reference (untransliterated): شَهِدَ عامُ أَلفَينِ وَأَربَعَةَ عَشرَةَ عِدَّةَ إِنجازاتِن طِبِّيَّةِن
predicted (untransliterated): شَهِدَ عَامُ أَلفَينِ وَأَرْبَعَةَ عَشْرَةَ عِدَّةَ إِنْجَازَاتٍ طِبِّيَّةٍ
--
reference: EAda <irtifAEu >asEAri l>dwiyapi wa$uH~u lmunqi*i lilHayApi minhA liyuTil~a bira>sihi fiy ls~uwdAni min jadiydin
predicted: EaAda <irotifaAEu >asoEaAri Alo>adowiyapi wa$uH~u Alomunoqi*i liloHayaAti minohaA liyuTil~a bira>osihi fiy Als~uwdaAni mino jadiydK
reference (untransliterated): عادَ إِرتِفاعُ أَسعارِ لأدوِيَةِ وَشُحُّ لمُنقِذِ لِلحَياةِ مِنها لِيُطِلَّ بِرَأسِهِ فِي لسُّودانِ مِن جَدِيدِن
predicted (untransliterated): عَادَ إِرْتِفَاعُ أَسْعَارِ الْأَدْوِيَةِ وَشُحُّ الْمُنْقِذِ لِلْحَيَاتِ مِنْهَا لِيُطِلَّ بِرَأْسِهِ فِي السُّودَانِ مِنْ جَدِيدٍ
--
reference: EalY EtibArihA tusAEidu EalY tawsiyEi madAriki l>aTfAl watajEalu minhum >unAsan muvaq~afiyna mustaqbalan wamuwAkibiyna liEaSri tiknuwluwjyA lmaEluwmAt
predicted: EalaY AEotibaArihaA tusaAEidu EalaY tawosiyEi ma*ariki Alo>aTofaAl watajoEalu minohumo >unaAsAF muvaq~afiyna musotaqobalAF wamuwaAkibiyna liEaSori Alt~ikonuwluwjoyaA AlomaEoluwmaAt
reference (untransliterated): عَلى عتِبارِها تُساعِدُ عَلى تَوسِيعِ مَدارِكِ لأَطفال وَتَجعَلُ مِنهُم أُناسَن مُثَقَّفِينَ مُستَقبَلَن وَمُواكِبِينَ لِعَصرِ تِكنُولُوجيا لمَعلُومات
predicted (untransliterated): عَلَى اعْتِبَارِهَا تُسَاعِدُ عَلَى تَوْسِيعِ مَذَرِكِ الْأَطْفَال وَتَجْعَلُ مِنْهُمْ أُنَاساً مُثَقَّفِينَ مُسْتَقْبَلاً وَمُوَاكِبِينَ لِعَصْرِ التِّكْنُولُوجْيَا الْمَعْلُومَات
--
reference: wa*alika EalY xilAfi nuZarA}ihi ls~Abiqiyn
predicted: wa*alika EalaY xilaAfi nuZaraA}ihi Als~aAbiqiyno
reference (untransliterated): وَذَلِكَ عَلى خِلافِ نُظَرائِهِ لسّابِقِين
predicted (untransliterated): وَذَلِكَ عَلَى خِلَافِ نُظَرَائِهِ السَّابِقِينْ
--
reference: fataHat >akAdiymiy~apu lmuwsiyqY lEarabiy~api rasmiy~an yawma ls~abt >abwAbahA fiy bruwksil biHuDuwri majmuwEapin mina lwuzarA' warijAli lfan~i lbaljiykiy~iyna wAlEarab
predicted: fataHato >akaAdiymiy~apu AlomuwsiyqaY AloEarabiy~api rasomiy~AF yawoma Als~abot >abowaAbahaA fiy boruwkosil biHuDuwri majomuwEapK mina AlowuzaraYA warijaAli Alofan~i Alobalojiykiy~iyna waAloEarabo
reference (untransliterated): فَتَحَت أَكادِيمِيَّةُ لمُوسِيقى لعَرَبِيَّةِ رَسمِيَّن يَومَ لسَّبت أَبوابَها فِي برُوكسِل بِحُضُورِ مَجمُوعَةِن مِنَ لوُزَراء وَرِجالِ لفَنِّ لبَلجِيكِيِّينَ والعَرَب
predicted (untransliterated): فَتَحَتْ أَكَادِيمِيَّةُ الْمُوسِيقَى الْعَرَبِيَّةِ رَسْمِيّاً يَوْمَ السَّبْت أَبْوَابَهَا فِي بْرُوكْسِل بِحُضُورِ مَجْمُوعَةٍ مِنَ الْوُزَرَىا وَرِجَالِ الْفَنِّ الْبَلْجِيكِيِّينَ وَالْعَرَبْ
--
reference: fataHZY bitaEal~umin yamHuw >um~iy~atahA wayuDiy'u lahA Tariyqa lmaErifapi wAlt~iknuwluwjyA
predicted: fataHoZaY bitaEal~umK yamoHu >um~iy~atahaA wayuDiy'u lahaA Tariyqa AlomaEorifapi waAlt~iykonuwluwjoyaA
reference (untransliterated): فَتَحظى بِتَعَلُّمِن يَمحُو أُمِّيَّتَها وَيُضِيءُ لَها طَرِيقَ لمَعرِفَةِ والتِّكنُولُوجيا
predicted (untransliterated): فَتَحْظَى بِتَعَلُّمٍ يَمْحُ أُمِّيَّتَهَا وَيُضِيءُ لَهَا طَرِيقَ الْمَعْرِفَةِ وَالتِّيكْنُولُوجْيَا
--
reference: faha*A lmanzilu lmutawADiE >aSbaHa maHaj~aan liEadadin kabiyrin mina ln~isA'i lmariyDAti biAls~araTAn
predicted: faha*aA Alomanozilu AlomutawaADiEi >aSobaHa maHaj~AF liEadadK kabiyrK mina Aln~isaA'i AlomariyDaAti biAls~araTaAno
reference (untransliterated): فَهَذا لمَنزِلُ لمُتَواضِع أَصبَحَ مَحَجََّن لِعَدَدِن كَبِيرِن مِنَ لنِّساءِ لمَرِيضاتِ بِالسَّرَطان
predicted (untransliterated): فَهَذَا الْمَنْزِلُ الْمُتَوَاضِعِ أَصْبَحَ مَحَجّاً لِعَدَدٍ كَبِيرٍ مِنَ النِّسَاءِ الْمَرِيضَاتِ بِالسَّرَطَانْ
--
reference: Hadava *alika fiy Hay yaEquwba lmanSuwr l$~aEbiy~i
predicted: Hadava *alika fiy Hay yaEoquwba AlomanoSuwro >al$~aEobiy~i
reference (untransliterated): حَدَثَ ذَلِكَ فِي حَي يَعقُوبَ لمَنصُور لشَّعبِيِّ
predicted (untransliterated): حَدَثَ ذَلِكَ فِي حَي يَعْقُوبَ الْمَنْصُورْ أَلشَّعْبِيِّ
--
reference: fiy Hiyni kAna lmarkazu l>aw~alu fiy lwavbi lEAliy min naSiybi lkuruwAtiy~api >AnA siymiyt$
predicted: fiy Hiyni kaAna Alomarokazu Alo>aw~alu fiy Alowavobi AloEaAli mino naSiybi AlokuruwaAtiy~api |naA siymito$
reference (untransliterated): فِي حِينِ كانَ لمَركَزُ لأَوَّلُ فِي لوَثبِ لعالِي مِن نَصِيبِ لكُرُواتِيَّةِ أانا سِيمِيتش
predicted (untransliterated): فِي حِينِ كَانَ الْمَرْكَزُ الْأَوَّلُ فِي الْوَثْبِ الْعَالِ مِنْ نَصِيبِ الْكُرُوَاتِيَّةِ آنَا سِيمِتْش
--
reference: qAla bAHivuwna <in~a riyAHan >aqwY mina lmuEtAd xaf~afat min HarArapi saTHi lmuHiyTi lhAdiy hiya sababu lt~abATu}i lmu&aq~at fiy rtifAEi darajapi HarArapi l>arD mun*u bidAyapi lqarni lHAdiy wAlEi$riyn
predicted: qaAla baAHivuwna <in~a riyaAHAF >aqowaY mina AlomuEotaAd xaf~afato mino HaraArapi saToHi AlomuHiyTi AlohaAdiy hiya sababu Alt~abaATu&i Alomu&aq~aTi fiy ArotifaAEi darajapi HaraArapi Alo>aroD muno*u bidaAyapi Aloqaroni AloHaAdiy waAloEi$oriyno
reference (untransliterated): قالَ باحِثُونَ إِنَّ رِياحَن أَقوى مِنَ لمُعتاد خَفَّفَت مِن حَرارَةِ سَطحِ لمُحِيطِ لهادِي هِيَ سَبَبُ لتَّباطُئِ لمُؤَقَّت فِي رتِفاعِ دَرَجَةِ حَرارَةِ لأَرض مُنذُ بِدايَةِ لقَرنِ لحادِي والعِشرِين
predicted (untransliterated): قَالَ بَاحِثُونَ إِنَّ رِيَاحاً أَقْوَى مِنَ الْمُعْتَاد خَفَّفَتْ مِنْ حَرَارَةِ سَطْحِ الْمُحِيطِ الْهَادِي هِيَ سَبَبُ التَّبَاطُؤِ الْمُؤَقَّطِ فِي ارْتِفَاعِ دَرَجَةِ حَرَارَةِ الْأَرْض مُنْذُ بِدَايَةِ الْقَرْنِ الْحَادِي وَالْعِشْرِينْ
--
reference: qabla >an yuslima liyudAfiEa Ean diynih muHib~aan wamuHtariman li>aSlihi wamADiyh
predicted: qabola >ano yusolima liyudaAfiEa Eano diyni muHib~AF wamuHotarimAF li>aSolihi wamaADiyh
reference (untransliterated): قَبلَ أَن يُسلِمَ لِيُدافِعَ عَن دِينِه مُحِبََّن وَمُحتَرِمَن لِأَصلِهِ وَماضِيه
predicted (untransliterated): قَبْلَ أَنْ يُسْلِمَ لِيُدَافِعَ عَنْ دِينِ مُحِبّاً وَمُحْتَرِماً لِأَصْلِهِ وَمَاضِيه
--
reference: kamA tam~a taHsiynu wAjihAti lt~anaq~ul wAxtiyAri wasA}ili ln~aqli lmunAsibapi bi$aklin kabiyr
predicted: kamaA tam~a taHosiynu waAjihaAti Alt~anaq~ulo waAxotiyaAri wasaA}ili Aln~aqoli AlomunaAsibapi bi$akolK kabiyro
reference (untransliterated): كَما تَمَّ تَحسِينُ واجِهاتِ لتَّنَقُّل واختِيارِ وَسائِلِ لنَّقلِ لمُناسِبَةِ بِشَكلِن كَبِير
predicted (untransliterated): كَمَا تَمَّ تَحْسِينُ وَاجِهَاتِ التَّنَقُّلْ وَاخْتِيَارِ وَسَائِلِ النَّقْلِ الْمُنَاسِبَةِ بِشَكْلٍ كَبِيرْ
--
reference: kamA tuwuf~iyati lr~iwA}iy~apu lbArizapu wAl>ustA*apu ljAmiEiy~apu lmiSriy~apu raDwY EA$uwr Ean vamAniy wasit~iyna EAman
predicted: kamaA tuwuf~iyapi Alr~iwaA}iy~apu AlobaArizapu waAlo>usotaA*apu Alj~aAmiEiy~apu AlomiSoriy~apu raDowaY EaA$uwro Eano vamaAniy wasit~iyna EaAmAF
reference (untransliterated): كَما تُوُفِّيَتِ لرِّوائِيَّةُ لبارِزَةُ والأُستاذَةُ لجامِعِيَّةُ لمِصرِيَّةُ رَضوى عاشُور عَن ثَمانِي وَسِتِّينَ عامَن
predicted (untransliterated): كَمَا تُوُفِّيَةِ الرِّوَائِيَّةُ الْبَارِزَةُ وَالْأُسْتَاذَةُ الجَّامِعِيَّةُ الْمِصْرِيَّةُ رَضْوَى عَاشُورْ عَنْ ثَمَانِي وَسِتِّينَ عَاماً
--
reference: kamA $Arakat TAlibAtun min madArisa filasTiyniy~apin >alfan~Anapa lt~urkiy~apa fiy Eamali lawHAt
predicted: kamaA $aArakato TaAlibaAtN mino madaArisa fiylasoTiydiy~apK >alofan~aAnapa Alt~urokiy~apa fiy Eamali lawoHaAt
reference (untransliterated): كَما شارَكَت طالِباتُن مِن مَدارِسَ فِلَسطِينِيَّةِن أَلفَنّانَةَ لتُّركِيَّةَ فِي عَمَلِ لَوحات
predicted (untransliterated): كَمَا شَارَكَتْ طَالِبَاتٌ مِنْ مَدَارِسَ فِيلَسْطِيدِيَّةٍ أَلْفَنَّانَةَ التُّرْكِيَّةَ فِي عَمَلِ لَوْحَات
--
reference: lAmasa mu*an~abun yuTlaqu Ealayhi <ismu sAydiyng sbriyng kawkaba lmir~iyxi Einda muruwrihi bimuHA*Atih
predicted: laAmasa mu*an~abN yuTolaqu Ealayohi <isomu saAyodynosoboriynogo kawokaba Alomar~iyxi Einoda muruwrihi bimuHaA*aAti
reference (untransliterated): لامَسَ مُذَنَّبُن يُطلَقُ عَلَيهِ إِسمُ سايدِينغ سبرِينغ كَوكَبَ لمِرِّيخِ عِندَ مُرُورِهِ بِمُحاذاتِه
predicted (untransliterated): لَامَسَ مُذَنَّبٌ يُطْلَقُ عَلَيْهِ إِسْمُ سَايْدينْسْبْرِينْغْ كَوْكَبَ الْمَرِّيخِ عِنْدَ مُرُورِهِ بِمُحَاذَاتِ
--
reference: laqad sAhamati lt~iknuluwjyA fiy taqliyli ln~izAEAti l>usariy~api wa>aETat likul~i fardin nawEan mina l<istiqlAliy~api
predicted: laqado saAhamapi Alt~iykonuwluwjoyaA fiy taqoliyli Aln~izaAEaAti Alo>usariy~api wa>aEoTaTo likul~i farodK nawoEAF mina Alo<isotiqolaAliy~api
reference (untransliterated): لَقَد ساهَمَتِ لتِّكنُلُوجيا فِي تَقلِيلِ لنِّزاعاتِ لأُسَرِيَّةِ وَأَعطَت لِكُلِّ فَردِن نَوعَن مِنَ لإِستِقلالِيَّةِ
predicted (untransliterated): لَقَدْ سَاهَمَةِ التِّيكْنُولُوجْيَا فِي تَقْلِيلِ النِّزَاعَاتِ الْأُسَرِيَّةِ وَأَعْطَطْ لِكُلِّ فَرْدٍ نَوْعاً مِنَ الْإِسْتِقْلَالِيَّةِ
--
reference: lakin~a maSdaran fiy lwafdi qAl <in~a ls~iEra sayanxafiDu baEda nxifADi >asEAri ln~afTi fiy lEAlam
predicted: lakin~a maSodarAF fiy Alowafodi qaAl <in~a Als~iEoara sayanoxafiDu baEoda AnoxifaADi >asoEaAri Aln~afoTi fiy AloEaAlamo
reference (untransliterated): لَكِنَّ مَصدَرَن فِي لوَفدِ قال إِنَّ لسِّعرَ سَيَنخَفِضُ بَعدَ نخِفاضِ أَسعارِ لنَّفطِ فِي لعالَم
predicted (untransliterated): لَكِنَّ مَصْدَراً فِي الْوَفْدِ قَال إِنَّ السِّعَْرَ سَيَنْخَفِضُ بَعْدَ انْخِفَاضِ أَسْعَارِ النَّفْطِ فِي الْعَالَمْ
--
reference: lam yamnaE DaEfu mawAridi lt~amwiyl wArtifAEu kulfapi lmu$ArakAti ld~awliy~api riyADapa lfuruwsiy~api fiy tuwnusa min >an tastaqTiba lmi}At min Eu$~AqihA fiy baladin yakAdu l<ihtimAmu fiyhi yaqtaSir EalY riyADAtin $aEbiy~apin muEay~anapin
predicted: lamo yamonaEoDaEaofu mawaAridi Alt~amowiylo waArotifaAEu kulofapi Alomu$aArakaAti Ald~awoliy~api riyaADapa Alofuruwsiy~api fiy tuwnusa mino >ano tasotaqoTiba Almi}At mino Eu$~aAqihaA fiy baladK yakaAdu Al<ihotimaAmu fiy hiyaqotaSir EalaY riy~aADaAtK $aEobiy~apK muEay~inapK
reference (untransliterated): لَم يَمنَع ضَعفُ مَوارِدِ لتَّموِيل وارتِفاعُ كُلفَةِ لمُشارَكاتِ لدَّولِيَّةِ رِياضَةَ لفُرُوسِيَّةِ فِي تُونُسَ مِن أَن تَستَقطِبَ لمِئات مِن عُشّاقِها فِي بَلَدِن يَكادُ لإِهتِمامُ فِيهِ يَقتَصِر عَلى رِياضاتِن شَعبِيَّةِن مُعَيَّنَةِن
predicted (untransliterated): لَمْ يَمْنَعْضَعَْفُ مَوَارِدِ التَّمْوِيلْ وَارْتِفَاعُ كُلْفَةِ الْمُشَارَكَاتِ الدَّوْلِيَّةِ رِيَاضَةَ الْفُرُوسِيَّةِ فِي تُونُسَ مِنْ أَنْ تَسْتَقْطِبَ المِئات مِنْ عُشَّاقِهَا فِي بَلَدٍ يَكَادُ الإِهْتِمَامُ فِي هِيَقْتَصِر عَلَى رِيَّاضَاتٍ شَعْبِيَّةٍ مُعَيِّنَةٍ
--
reference: liyaDaEA bi*alika Hadaan lilEadiydi mina lt~aqAriyr >al~atiy >ak~adat <imkAniy~apa raHiyli ll~AEibi lmu$Agibi qariybaan
predicted: liyaDaEaAbi *alika Had~AF liloEadiydi mina Alt~aqaAriyro >al~atiy >ak~adat <imokaAniy~apa raHiyli All~aAEibi Alomu$aAgibi qariybAF
reference (untransliterated): لِيَضَعا بِذَلِكَ حَدََن لِلعَدِيدِ مِنَ لتَّقارِير أَلَّتِي أَكَّدَت إِمكانِيَّةَ رَحِيلِ للّاعِبِ لمُشاغِبِ قَرِيبََن
predicted (untransliterated): لِيَضَعَابِ ذَلِكَ حَدّاً لِلْعَدِيدِ مِنَ التَّقَارِيرْ أَلَّتِي أَكَّدَت إِمْكَانِيَّةَ رَحِيلِ اللَّاعِبِ الْمُشَاغِبِ قَرِيباً
--
reference: muDiyfan nuHAwilu xalqa furaSi Eamalin bi>aydiynA
predicted: muDiyfAF nuHaAwilu xaloqa furaSi EamalK bi>ayodiyna
reference (untransliterated): مُضِيفَن نُحاوِلُ خَلقَ فُرَصِ عَمَلِن بِأَيدِينا
predicted (untransliterated): مُضِيفاً نُحَاوِلُ خَلْقَ فُرَصِ عَمَلٍ بِأَيْدِينَ
--
reference: wa*alika muqAranapan maEa lmaHASiyli lz~irAEiy~api l>uxrY
predicted: wa*alika muqaAranapF maEa AlomaHaASiyli Alz~iraAEiy~api Alo>uxoraY
reference (untransliterated): وَذَلِكَ مُقارَنَةَن مَعَ لمَحاصِيلِ لزِّراعِيَّةِ لأُخرى
predicted (untransliterated): وَذَلِكَ مُقَارَنَةً مَعَ الْمَحَاصِيلِ الزِّرَاعِيَّةِ الْأُخْرَى
--
reference: mulqiyan lD~aw'a EalY qaDiy~api lfitnapi lT~A}ifiy~api fiy lmujtamaEi lmiSriy~i bi>usluwbin basiyTin min xilAli EalAqAti l>aTfAl fiy lmadrasapi bizamiylihimu lmasiyHiy~i
predicted: muloqiyani AlD~awo'a EalaY qadiy~api Alofitonapi AlT~aA}ifiy~api fiy AlomujotamaEi AlomiSoriy~i bi>usoluwbK basiyTK mino xilaAli EalaAqaAti Alo>aTofaAlo fiy Alomadorasapi bizamiylihimu AlomasiyHiy~i
reference (untransliterated): مُلقِيَن لضَّوءَ عَلى قَضِيَّةِ لفِتنَةِ لطّائِفِيَّةِ فِي لمُجتَمَعِ لمِصرِيِّ بِأُسلُوبِن بَسِيطِن مِن خِلالِ عَلاقاتِ لأَطفال فِي لمَدرَسَةِ بِزَمِيلِهِمُ لمَسِيحِيِّ
predicted (untransliterated): مُلْقِيَنِ الضَّوْءَ عَلَى قَدِيَّةِ الْفِتْنَةِ الطَّائِفِيَّةِ فِي الْمُجْتَمَعِ الْمِصْرِيِّ بِأُسْلُوبٍ بَسِيطٍ مِنْ خِلَالِ عَلَاقَاتِ الْأَطْفَالْ فِي الْمَدْرَسَةِ بِزَمِيلِهِمُ الْمَسِيحِيِّ
--
reference: mim~A yadEamu natA}ija dirAsAtin sAbiqapin tuHa*~iru min maxATiri l<ifrATi fiy stiEmAli ljaw~Al
predicted: mim~aA yadoEamu nataA}ija diraAsaAtK saAbiqapK tuHa*~iru mino maxaATiri Alo<iforaATi fiy AsotiEomaAli Alj~aw~aAl
reference (untransliterated): مِمّا يَدعَمُ نَتائِجَ دِراساتِن سابِقَةِن تُحَذِّرُ مِن مَخاطِرِ لإِفراطِ فِي ستِعمالِ لجَوّال
predicted (untransliterated): مِمَّا يَدْعَمُ نَتَائِجَ دِرَاسَاتٍ سَابِقَةٍ تُحَذِّرُ مِنْ مَخَاطِرِ الْإِفْرَاطِ فِي اسْتِعْمَالِ الجَّوَّال
--
reference: min baynihA >al<istiqrAru wanawEiy~apu lr~iEAyapi lS~iH~iy~api wAlv~aqAfapi wAlbiy}api wAlt~aEliymi wAlbinyapi lt~aHtiy~api
predicted: mino bayonihaA >alo<isotiqoraAru wanawoEiy~apu Alr~iEaAyapi AlS~iH~iy~api waAlv~aqaAfapi waAlobiy}api waAlt~aEoliymi waAlobinoyapi Alt~aHotiy~api
reference (untransliterated): مِن بَينِها أَلإِستِقرارُ وَنَوعِيَّةُ لرِّعايَةِ لصِّحِّيَّةِ والثَّقافَةِ والبِيئَةِ والتَّعلِيمِ والبِنيَةِ لتَّحتِيَّةِ
predicted (untransliterated): مِنْ بَيْنِهَا أَلْإِسْتِقْرَارُ وَنَوْعِيَّةُ الرِّعَايَةِ الصِّحِّيَّةِ وَالثَّقَافَةِ وَالْبِيئَةِ وَالتَّعْلِيمِ وَالْبِنْيَةِ التَّحْتِيَّةِ
--
reference: minhA >aqmi$apun wa>adawAtun maEdaniy~apun waxa$abiy~apun waqinAnun blAstiykiy~apun wazujAjiy~apun wa>awrAqu SuHuf
predicted: minohaA >aqomi$apN wa>adawaAtN maEodaniy~apN waxa$abiy~apN waqinAnN bolaAsotiykiy~apN wazujaAjiy~atN wa>aworaAqu SuHafo
reference (untransliterated): مِنها أَقمِشَةُن وَأَدَواتُن مَعدَنِيَّةُن وَخَشَبِيَّةُن وَقِنانُن بلاستِيكِيَّةُن وَزُجاجِيَّةُن وَأَوراقُ صُحُف
predicted (untransliterated): مِنْهَا أَقْمِشَةٌ وَأَدَوَاتٌ مَعْدَنِيَّةٌ وَخَشَبِيَّةٌ وَقِنانٌ بْلَاسْتِيكِيَّةٌ وَزُجَاجِيَّتٌ وَأَوْرَاقُ صُحَفْ
--
reference: hal lilS~iyAmi ta>viyrun EalY Eamali lmuslimiyna fiy l$~arikAti bi>uwruwb~A
predicted: hal~i AlS~iyaAmi ta>oviyrN EalaY Eamali Alomusolimiyna fiy Al$~arikaAti bi>uwruwb~aA
reference (untransliterated): هَل لِلصِّيامِ تَأثِيرُن عَلى عَمَلِ لمُسلِمِينَ فِي لشَّرِكاتِ بِأُورُوبّا
predicted (untransliterated): هَلِّ الصِّيَامِ تَأْثِيرٌ عَلَى عَمَلِ الْمُسْلِمِينَ فِي الشَّرِكَاتِ بِأُورُوبَّا
--
reference: hunAka fikrapun TuriHat bAdi}a l>amr biEaqdi qim~apin >uwruwbiy~apin fiy sarayiyfuw biha*ihi lmunAsabapi
predicted: hunaAka fikorapN TuriHato baAdi >alo>amor biEaqoDi qim~apK >uwruwbiy~apK fiy sarayiyfuw biha*ihi AlomunaAsabapi
reference (untransliterated): هُناكَ فِكرَةُن طُرِحَت بادِئَ لأَمر بِعَقدِ قِمَّةِن أُورُوبِيَّةِن فِي سَرَيِيفُو بِهَذِهِ لمُناسَبَةِ
predicted (untransliterated): هُنَاكَ فِكْرَةٌ طُرِحَتْ بَادِ أَلْأَمْر بِعَقْضِ قِمَّةٍ أُورُوبِيَّةٍ فِي سَرَيِيفُو بِهَذِهِ الْمُنَاسَبَةِ
--
reference: wa yumkinu >an tuHSada lv~imAr EalY madY fatrapin zamaniy~apin Tawiylapin
predicted: wayumokinu >ano tuHoSada Alv~imaAr EalaY madaY fatorapK zamaniy~apK TawiylapK
reference (untransliterated): وَ يُمكِنُ أَن تُحصَدَ لثِّمار عَلى مَدى فَترَةِن زَمَنِيَّةِن طَوِيلَةِن
predicted (untransliterated): وَيُمْكِنُ أَنْ تُحْصَدَ الثِّمَار عَلَى مَدَى فَتْرَةٍ زَمَنِيَّةٍ طَوِيلَةٍ
--
reference: wa>Hraza lmarkaza lv~Aliv >alr~iwA}iy~u ljazA}iriy~u >aHmadu TiybAwiy Ean riwAyatihi mawtun nAEim
predicted: wa>aHoraza Alomarokaza Alv~aAlivo >alr~iwaA}iy~u AlojazaA}iriy~u >aHomadu TiybaAwi Eano riwaAyatihi mawotunnaAEimo
reference (untransliterated): وَأحرَزَ لمَركَزَ لثّالِث أَلرِّوائِيُّ لجَزائِرِيُّ أَحمَدُ طِيباوِي عَن رِوايَتِهِ مَوتُن ناعِم
predicted (untransliterated): وَأَحْرَزَ الْمَرْكَزَ الثَّالِثْ أَلرِّوَائِيُّ الْجَزَائِرِيُّ أَحْمَدُ طِيبَاوِ عَنْ رِوَايَتِهِ مَوْتُننَاعِمْ
--
reference: wAxtatama lbarAziyliy~uwna mubArAyAtihimi l<iEdAdiy~apa biAlfawzi EalY SirbyA bihadafin waHiydin saj~alahu lmuhAjimu farydun fiy l$~awTi lv~Aniy mina lmubArApi >al~atiy >uqiymat fiy sAwbAwluw
predicted: waAxotatama AlobaraAziyliy~uwna mubaArayaAtihimi Alo<iEodaAdiy~api biAlofawozi EalaY Sirobiya bihadafK waHiydK saj~alahu AlomuhaAjimu fariydN fiy Al$~awoTi Alv~aAniy mina AlomubaAraApi >al~atiy >uqiymato fiy saAwobaAluw
reference (untransliterated): واختَتَمَ لبَرازِيلِيُّونَ مُباراياتِهِمِ لإِعدادِيَّةَ بِالفَوزِ عَلى صِربيا بِهَدَفِن وَحِيدِن سَجَّلَهُ لمُهاجِمُ فَريدُن فِي لشَّوطِ لثّانِي مِنَ لمُباراةِ أَلَّتِي أُقِيمَت فِي ساوباولُو
predicted (untransliterated): وَاخْتَتَمَ الْبَرَازِيلِيُّونَ مُبَارَيَاتِهِمِ الْإِعْدَادِيَّةِ بِالْفَوْزِ عَلَى صِرْبِيَ بِهَدَفٍ وَحِيدٍ سَجَّلَهُ الْمُهَاجِمُ فَرِيدٌ فِي الشَّوْطِ الثَّانِي مِنَ الْمُبَارَاةِ أَلَّتِي أُقِيمَتْ فِي سَاوْبَالُو
--
reference: wA$tahara lr~AHilu bimaqAlAtihi wakutubihi lr~aSiynapi >al~atiy taDam~anat qirA'Atin mustaqbaliy~apan lil>AfAqi ls~iyAsiy~api wAl<ijtimAEiy~api fiy lEAlami lEarabiy~i l<islAmiy~i
predicted: waA$otahara Alr~aAHilu bimaqaAlaAtihi wakutubihi Alr~aSiynapi >al~atiy taDam~anato qiraA'aAtK musotaqobaliy~apF lilo|faAqi Als~iyaAsiy~api waAlo<ijotimaAEiy~api fiy AloEaAlami AloEarabiy~i Alo<isolaAmiy~i
reference (untransliterated): واشتَهَرَ لرّاحِلُ بِمَقالاتِهِ وَكُتُبِهِ لرَّصِينَةِ أَلَّتِي تَضَمَّنَت قِراءاتِن مُستَقبَلِيَّةَن لِلأافاقِ لسِّياسِيَّةِ والإِجتِماعِيَّةِ فِي لعالَمِ لعَرَبِيِّ لإِسلامِيِّ
predicted (untransliterated): وَاشْتَهَرَ الرَّاحِلُ بِمَقَالَاتِهِ وَكُتُبِهِ الرَّصِينَةِ أَلَّتِي تَضَمَّنَتْ قِرَاءَاتٍ مُسْتَقْبَلِيَّةً لِلْآفَاقِ السِّيَاسِيَّةِ وَالْإِجْتِمَاعِيَّةِ فِي الْعَالَمِ الْعَرَبِيِّ الْإِسْلَامِيِّ
--
reference: wa>aSbaHa ha*A lS~arHu matHafan rasmiy~an
predicted: wa>aSobaHa ha*aA AlS~aroHu matoHafAF rasomiy~AF
reference (untransliterated): وَأَصبَحَ هَذا لصَّرحُ مَتحَفَن رَسمِيَّن
predicted (untransliterated): وَأَصْبَحَ هَذَا الصَّرْحُ مَتْحَفاً رَسْمِيّاً
--
reference: w>aDAfa lbayAnu an~a fariyqaan min l>aTib~A'i wAlmumar~iDAt w<ixtiSASiy~iyna >Axariyna fiy majAli lS~iH~api yaEtanuwna bimAndiyl~A EalY madAri ls~AEapi
predicted: wa>aDaAfa AlobayaAnu >an~a fariyqAF mina Alo>aTib~aA'i waAlomumar~iDaAt waAxotiSaASiy~iyna |xariyna fiy majaAli AlS~iH~api yaEotanuwna bimaAnodil~aA EalaY madaAri Als~aAEapi
reference (untransliterated): وأَضافَ لبَيانُ َنَّ فَرِيقََن مِن لأَطِبّاءِ والمُمَرِّضات وإِختِصاصِيِّينَ أاخَرِينَ فِي مَجالِ لصِّحَّةِ يَعتَنُونَ بِماندِيلّا عَلى مَدارِ لسّاعَةِ
predicted (untransliterated): وَأَضَافَ الْبَيَانُ أَنَّ فَرِيقاً مِنَ الْأَطِبَّاءِ وَالْمُمَرِّضَات وَاخْتِصَاصِيِّينَ آخَرِينَ فِي مَجَالِ الصِّحَّةِ يَعْتَنُونَ بِمَانْدِلَّا عَلَى مَدَارِ السَّاعَةِ
--
reference: wAEtabaruwhA falsafapan ruwHiy~apan mutakAmilapan litaHriyri ljismi wAlfikr
predicted: waAEotabaruwhaA falosafapF ruwHiy~apF mutakaAmilapF litaHoriyri Alojisomi waAlofikor
reference (untransliterated): واعتَبَرُوها فَلسَفَةَن رُوحِيَّةَن مُتَكامِلَةَن لِتَحرِيرِ لجِسمِ والفِكر
predicted (untransliterated): وَاعْتَبَرُوهَا فَلْسَفَةً رُوحِيَّةً مُتَكَامِلَةً لِتَحْرِيرِ الْجِسْمِ وَالْفِكْر
--
reference: >alt~awaH~udu huwa majmuwEapu DTirAbAtin EaSabiy~apin fiy lt~aTaw~ur ta$malu >aErADuhA wujuwda ma$Akila fiy ls~uluwki lAjtimAEiy~i lil$~axSi lmuSAb
predicted: >alt~awaH~udu huwa majomuwEapu AlT~iraAbaAtK EaSabiy~apK fiy Alt~aTaw~uro ta$omalu >aEoraADuhaA bujuwda ma$aAkila fiy Als~uluwki Alo<ijotimaAEiy~i lil$~axoSi AlomuSaAbo
reference (untransliterated): أَلتَّوَحُّدُ هُوَ مَجمُوعَةُ ضطِراباتِن عَصَبِيَّةِن فِي لتَّطَوُّر تَشمَلُ أَعراضُها وُجُودَ مَشاكِلَ فِي لسُّلُوكِ لاجتِماعِيِّ لِلشَّخصِ لمُصاب
predicted (untransliterated): أَلتَّوَحُّدُ هُوَ مَجْمُوعَةُ الطِّرَابَاتٍ عَصَبِيَّةٍ فِي التَّطَوُّرْ تَشْمَلُ أَعْرَاضُهَا بُجُودَ مَشَاكِلَ فِي السُّلُوكِ الْإِجْتِمَاعِيِّ لِلشَّخْصِ الْمُصَابْ
--
reference: wAlEamalu lr~a}iysiy~u lahu huwa riwAyatahu lmalHamiy~apu mA}apu EAmin mina lEuzlapi >al~atiy nAla EanhA jA}izapa nuwbila fiy l>adab EAma >alfin watisEimi}apin wa<ivnAni wavamAnuwn
predicted: waAloEamalu Alr~a}iysiy~u lahu huwa riwaAyatahu AlomaloHamiy~apu ma>apu EaAmK mina AloEuzolapi >al~atiy naAla EanohaA jaA}izapa nuwbila fiy Alo>adabo EaAma >alofK watisoEi ma}apK wa<ivnaAni wavamAnuwna
reference (untransliterated): والعَمَلُ لرَّئِيسِيُّ لَهُ هُوَ رِوايَتَهُ لمَلحَمِيَّةُ مائَةُ عامِن مِنَ لعُزلَةِ أَلَّتِي نالَ عَنها جائِزَةَ نُوبِلَ فِي لأَدَب عامَ أَلفِن وَتِسعِمِئَةِن وَإِثنانِ وَثَمانُون
predicted (untransliterated): وَالْعَمَلُ الرَّئِيسِيُّ لَهُ هُوَ رِوَايَتَهُ الْمَلْحَمِيَّةُ مَأَةُ عَامٍ مِنَ الْعُزْلَةِ أَلَّتِي نَالَ عَنْهَا جَائِزَةَ نُوبِلَ فِي الْأَدَبْ عَامَ أَلْفٍ وَتِسْعِ مَئَةٍ وَإِثنَانِ وَثَمانُونَ
--
reference: wAlmiykuwng was>aluwyn fiy januwbi $arqi >AsyA
predicted: waAlomiykuwnogo wasaAluwiyno fiy januwbi $aroqi |soyaA
reference (untransliterated): والمِيكُونغ وَسأَلُوين فِي جَنُوبِ شَرقِ أاسيا
predicted (untransliterated): وَالْمِيكُونْغْ وَسَالُوِينْ فِي جَنُوبِ شَرْقِ آسْيَا
--
reference: wa>n~a >aham~a muEaw~iqAti najAHihA takmunu fiy Eadami tafar~ugi >aSHAbihA li<idAratihA
predicted: wa>an~a >aham~a muEaw~iqaAti najaAHihaA takomunu fiy Eadami tafar~ugi >aSoHaAbihaA li<idaAratihaA
reference (untransliterated): وَأنَّ أَهَمَّ مُعَوِّقاتِ نَجاحِها تَكمُنُ فِي عَدَمِ تَفَرُّغِ أَصحابِها لِإِدارَتِها
predicted (untransliterated): وَأَنَّ أَهَمَّ مُعَوِّقَاتِ نَجَاحِهَا تَكْمُنُ فِي عَدَمِ تَفَرُّغِ أَصْحَابِهَا لِإِدَارَتِهَا
--
reference: wa>awDaHa lbAHivuwna >an~a suw'a lt~ag*iyapi huwa ls~ababu lr~a}iysiy~u litawaq~ufi ln~umuw Einda l>aTfAl
predicted: wa>awoDaHa AlobaAHivuwna >an~a suw'a Alt~ago*iyapi huwa Als~ababu Alr~a}iysiy~u litawaq~ufi Aln~umuw Einoda Alo>aTofaAlo
reference (untransliterated): وَأَوضَحَ لباحِثُونَ أَنَّ سُوءَ لتَّغذِيَةِ هُوَ لسَّبَبُ لرَّئِيسِيُّ لِتَوَقُّفِ لنُّمُو عِندَ لأَطفال
predicted (untransliterated): وَأَوْضَحَ الْبَاحِثُونَ أَنَّ سُوءَ التَّغْذِيَةِ هُوَ السَّبَبُ الرَّئِيسِيُّ لِتَوَقُّفِ النُّمُو عِنْدَ الْأَطْفَالْ
--
reference: wa>awDaHati lmajal~apu >an~a ls~ababa fiy *alika yarjiEu <ilY taDay~uqi l$~uEabi lhawA}iy~api wata$an~ujihA bifiEli lhawA'i lbArid
predicted: wa>awoDaHati Alomajal~apu >an~a Als~ababa fiy *alika yarojiEu <ilaY taDay~uqi Al$~uEabi AlohawaA}iy~api wata$an~ujihaA bifiEoli AlohawaA'i AlobaArid
reference (untransliterated): وَأَوضَحَتِ لمَجَلَّةُ أَنَّ لسَّبَبَ فِي ذَلِكَ يَرجِعُ إِلى تَضَيُّقِ لشُّعَبِ لهَوائِيَّةِ وَتَشَنُّجِها بِفِعلِ لهَواءِ لبارِد
predicted (untransliterated): وَأَوْضَحَتِ الْمَجَلَّةُ أَنَّ السَّبَبَ فِي ذَلِكَ يَرْجِعُ إِلَى تَضَيُّقِ الشُّعَبِ الْهَوَائِيَّةِ وَتَشَنُّجِهَا بِفِعْلِ الْهَوَاءِ الْبَارِد
--
reference: wabAta >atlitiykuw madriyd fiy SadArapi lt~artiybi lEAm~i bi>arbaEi niqAT
predicted: wabaAta >atolitiykuw madoriydo fiy SadaArapi Alt~arotiybi AloEaAm~i bi>arobaEi niqaAT
reference (untransliterated): وَباتَ أَتلِتِيكُو مَدرِيد فِي صَدارَةِ لتَّرتِيبِ لعامِّ بِأَربَعِ نِقاط
predicted (untransliterated): وَبَاتَ أَتْلِتِيكُو مَدْرِيدْ فِي صَدَارَةِ التَّرْتِيبِ الْعَامِّ بِأَرْبَعِ نِقَاط
--
reference: wabiAlt~Aliy tusAEidu EalY lwiqAyapi mina l<imsAk
predicted: wabiAt~aAliy tusaAEidu EalaY AlowiyqaAyapi mina Alo<imosaAko
reference (untransliterated): وَبِالتّالِي تُساعِدُ عَلى لوِقايَةِ مِنَ لإِمساك
predicted (untransliterated): وَبِاتَّالِي تُسَاعِدُ عَلَى الْوِيقَايَةِ مِنَ الْإِمْسَاكْ
--
reference: wa*alika biziyArapi jumhuwrin xAS~in jid~an sanawiy~an
predicted: wa*alika biziyaArapi jumohuwrK xaAS~K jid~AF sanawiy~AF
reference (untransliterated): وَذَلِكَ بِزِيارَةِ جُمهُورِن خاصِّن جِدَّن سَنَوِيَّن
predicted (untransliterated): وَذَلِكَ بِزِيَارَةِ جُمْهُورٍ خَاصٍّ جِدّاً سَنَوِيّاً
--
reference: wabisababi $ukuwkin bi>an~a lT~A}irapa kAnat tuqil~u idwArd snuwdun >al~a*iy tat~ahimuhu wA$inTun biAlt~ajas~us
predicted: wabisababi $ukuwkK bi>an~a AlT~aA}irapa kaAna Alt~uqil~u <idowaAbo snuwduno >al~a*iy tat~ahimuhu wa $inoTun biAlt~ajas~us
reference (untransliterated): وَبِسَبَبِ شُكُوكِن بِأَنَّ لطّائِرَةَ كانَت تُقِلُّ ِدوارد سنُودُن أَلَّذِي تَتَّهِمُهُ واشِنطُن بِالتَّجَسُّس
predicted (untransliterated): وَبِسَبَبِ شُكُوكٍ بِأَنَّ الطَّائِرَةَ كَانَ التُّقِلُّ إِدْوَابْ سنُودُنْ أَلَّذِي تَتَّهِمُهُ وَ شِنْطُن بِالتَّجَسُّس
--
reference: wabaEavuwA risAlapan <ilY lra~}iysi tataDama~nu maTAliba liEawdatihim
predicted: wabaEavuwA risaAlapF <ilaY Alr~a}iysi tataDam~anu maTaAliba liEawodatihimo
reference (untransliterated): وَبَعَثُوا رِسالَةَن إِلى لرَّئِيسِ تَتَضَمَّنُ مَطالِبَ لِعَودَتِهِم
predicted (untransliterated): وَبَعَثُوا رِسَالَةً إِلَى الرَّئِيسِ تَتَضَمَّنُ مَطَالِبَ لِعَوْدَتِهِمْ
--
reference: wabaEda $uhuwrin mina lHayrapi wAlqalaq taEara~fa kuwmAr EalY markazi Eabdi llhi bni zaydi lva~qAfiy~i lilta~Eriyfi biAl<islAm
predicted: wabaEoda $uhuwrK mina AloHayorapi waAloqalaqo taEar~afa kuwmaAra EalaY marokazi Eabodi All~aAhi bonizayodi Alv~aqaAfiy~i lilt~aEoriyfi biAlo<isolaAmo
reference (untransliterated): وَبَعدَ شُهُورِن مِنَ لحَيرَةِ والقَلَق تَعَرَّفَ كُومار عَلى مَركَزِ عَبدِ للهِ بنِ زَيدِ لثَّقافِيِّ لِلتَّعرِيفِ بِالإِسلام
predicted (untransliterated): وَبَعْدَ شُهُورٍ مِنَ الْحَيْرَةِ وَالْقَلَقْ تَعَرَّفَ كُومَارَ عَلَى مَرْكَزِ عَبْدِ اللَّاهِ بْنِزَيْدِ الثَّقَافِيِّ لِلتَّعْرِيفِ بِالْإِسْلَامْ
--
reference: wabiha*A yabqY mi}apun wasit~apun wav~l>avuwna muHtajazan fiy lmuEtaqali lmuviyri liljadal
predicted: wabiha*A yaboqaY mi}apN wasit~apN wavalaAvuwna muHotajazAF fiy AlomuEotaqali Alomuviyri lilojadaYlo
reference (untransliterated): وَبِهَذا يَبقى مِئَةُن وَسِتَّةُن وَثّلأَثُونَ مُحتَجَزَن فِي لمُعتَقَلِ لمُثِيرِ لِلجَدَل
predicted (untransliterated): وَبِهَذا يَبْقَى مِئَةٌ وَسِتَّةٌ وَثَلَاثُونَ مُحْتَجَزاً فِي الْمُعْتَقَلِ الْمُثِيرِ لِلْجَدَىلْ
--
reference: watustaxdamu fiy baEDi ld~uwal wasA}ilu EilAjin muxtalifapun
predicted: watusotaxodamu fiy baEoDi Ald~uwalo wasaA}ilu EilaAjK muxotalifapN
reference (untransliterated): وَتُستَخدَمُ فِي بَعضِ لدُّوَل وَسائِلُ عِلاجِن مُختَلِفَةُن
predicted (untransliterated): وَتُسْتَخْدَمُ فِي بَعْضِ الدُّوَلْ وَسَائِلُ عِلَاجٍ مُخْتَلِفَةٌ
--
reference: wataTaw~ara stixdAmu lT~A}irAti lEAmilapi biduwni Tay~Ar wabada>ati ls~AEAtu l*~akiy~apu al<inti$Ara waka*alika lT~ibAEapu lv~ulAviy~apu l>abEAd
predicted: wataTaw~ara AsotixodaAmu AlT~aA}iraAti AloEaAmilapi biduwni Tay~aAr wabada>ati Als~aAEaAtu Al*~akiy~apu Alo<inoti$aAra waka*alika AlT~ibaAEapu Alv~ulAviy~apu Al>aboEAd
reference (untransliterated): وَتَطَوَّرَ ستِخدامُ لطّائِراتِ لعامِلَةِ بِدُونِ طَيّار وَبَدَأَتِ لسّاعاتُ لذَّكِيَّةُ َلإِنتِشارَ وَكَذَلِكَ لطِّباعَةُ لثُّلاثِيَّةُ لأَبعاد
predicted (untransliterated): وَتَطَوَّرَ اسْتِخْدَامُ الطَّائِرَاتِ الْعَامِلَةِ بِدُونِ طَيَّار وَبَدَأَتِ السَّاعَاتُ الذَّكِيَّةُ الْإِنْتِشَارَ وَكَذَلِكَ الطِّبَاعَةُ الثُّلاثِيَّةُ الأَبْعاد
--
reference: wajA'a ha*A lqarAr baEda <iElAni lsa~Euwdiya~pi taxfiyDa >aEdAdi lHuja~Aji ha*A lEAm
predicted: wajaA'a ha*aA AloqaraAro baEoda <iEolaAni Als~uEuwdiy~api taxofiyDa >aEodaAdi AloHuj~aAji ha*aA AloEaAmo
reference (untransliterated): وَجاءَ هَذا لقَرار بَعدَ إِعلانِ لسَّعُودِيَّةِ تَخفِيضَ أَعدادِ لحُجَّاجِ هَذا لعام
predicted (untransliterated): وَجَاءَ هَذَا الْقَرَارْ بَعْدَ إِعْلَانِ السُّعُودِيَّةِ تَخْفِيضَ أَعْدَادِ الْحُجَّاجِ هَذَا الْعَامْ
--
reference: wajA'ati l>arqAmu SAdimapan fiy mA yaxuS~u l$~arqa l>awsaT
predicted: wajaA'api Alo>aroqaAmu SaAdimapF fiymaA yaxuS~u Al$~aroqa Alo>awoSaTo
reference (untransliterated): وَجاءَتِ لأَرقامُ صادِمَةَن فِي ما يَخُصُّ لشَّرقَ لأَوسَط
predicted (untransliterated): وَجَاءَةِ الْأَرْقَامُ صَادِمَةً فِيمَا يَخُصُّ الشَّرْقَ الْأَوْصَطْ
--
reference: waSadarati lr~asA}il bi<ismi mubdiEiy wafan~Aniy miSra
predicted: wasaDarati Alr~asaA'ilo bi<isomi mubodiEi wafan~aAniy miSora
reference (untransliterated): وَصَدَرَتِ لرَّسائِل بِإِسمِ مُبدِعِي وَفَنّانِي مِصرَ
predicted (untransliterated): وَسَضَرَتِ الرَّسَاءِلْ بِإِسْمِ مُبْدِعِ وَفَنَّانِي مِصْرَ
--
reference: wafiy ftitAHi lmu&tamari qAlati l$~AEirapu $ariyfapa ls~ay~id <in~a lEaq~Ada it~axa*a mina lqirA'api wAl<iT~ilAEi EalY kul~i lEuluwm wamuxtalafi lHaDArAt silAHan yuHaT~imu bihi lS~anamiy~apa wayaksiru lmuHar~amAt
predicted: wafiy AfotitaAHi Alomu&otamari qaAlati Al$~aAEirapu $ariyfapa Als~ay~ido <in~a AloEaq~aAda Alt~axa*a mina AloqiraA'api waliADoTilaAEi EalaY kul~i AloEuluwmo wamuxotalifi AloHaDaAraAt silaAHAF yuHaT~i mgubihi AlS~anamiy~apa wayakosiru AlomuHar~amaAt
reference (untransliterated): وَفِي فتِتاحِ لمُؤتَمَرِ قالَتِ لشّاعِرَةُ شَرِيفَةَ لسَّيِّد إِنَّ لعَقّادَ ِتَّخَذَ مِنَ لقِراءَةِ والإِطِّلاعِ عَلى كُلِّ لعُلُوم وَمُختَلَفِ لحَضارات سِلاحَن يُحَطِّمُ بِهِ لصَّنَمِيَّةَ وَيَكسِرُ لمُحَرَّمات
predicted (untransliterated): وَفِي افْتِتَاحِ الْمُؤْتَمَرِ قَالَتِ الشَّاعِرَةُ شَرِيفَةَ السَّيِّدْ إِنَّ الْعَقَّادَ التَّخَذَ مِنَ الْقِرَاءَةِ وَلِاضْطِلَاعِ عَلَى كُلِّ الْعُلُومْ وَمُخْتَلِفِ الْحَضَارَات سِلَاحاً يُحَطِّ مغُبِهِ الصَّنَمِيَّةَ وَيَكْسِرُ الْمُحَرَّمَات
--
reference: wafiy kuwryA ljanuwbiy~api taquwmu lHukuwmapu bitamwiyli musta$fayAtin liEilAji ha*A l<idmAni l~a*iy yuEtabaru mu$kilapan qawmiy~apan
predicted: wafiy kuwriyaA Alojanuwbiy~api taquwmu AloHukuwmapu bitamowiyli musota$ofayaAtK liEilaAji ha*aA Alo<idomaAni Al~a*iy yuEotabaru mu$okilapF qawomiy~apF
reference (untransliterated): وَفِي كُوريا لجَنُوبِيَّةِ تَقُومُ لحُكُومَةُ بِتَموِيلِ مُستَشفَياتِن لِعِلاجِ هَذا لإِدمانِ لَّذِي يُعتَبَرُ مُشكِلَةَن قَومِيَّةَن
predicted (untransliterated): وَفِي كُورِيَا الْجَنُوبِيَّةِ تَقُومُ الْحُكُومَةُ بِتَمْوِيلِ مُسْتَشْفَيَاتٍ لِعِلَاجِ هَذَا الْإِدْمَانِ الَّذِي يُعْتَبَرُ مُشْكِلَةً قَوْمِيَّةً
--
reference: wakAna l>amalu >an takuwna ha*ihi ld~iymuqrATiy~Atu maSHuwbapan bi>adA'in tanmawiy~in muxtalif
predicted: wakAna Alo>amalu >ano takuwna ha*ihi Ald~iymuwqoraATiy~aAtu maSoHuwbapF bi>adaA'K tF mawiy~K muxotalifo
reference (untransliterated): وَكانَ لأَمَلُ أَن تَكُونَ هَذِهِ لدِّيمُقراطِيّاتُ مَصحُوبَةَن بِأَداءِن تَنمَوِيِّن مُختَلِف
predicted (untransliterated): وَكانَ الْأَمَلُ أَنْ تَكُونَ هَذِهِ الدِّيمُوقْرَاطِيَّاتُ مَصْحُوبَةً بِأَدَاءٍ تً مَوِيٍّ مُخْتَلِفْ
--
reference: wakatabuwA fiy dawriy~api lkul~iy~api l>amiyrikiy~api li>amrADi lqalb >an~a ls~umnapa tartabiTu biHuduwvi tagayiyrAt fiy lqalbi ladY lbAligiyn
predicted: wakatabuwA fiy daworiy~api Alokul~iy~api Alo>amiyriykiy~api li>amoraADi Aloqalo >an~a Als~umonapa tarotabiTu biHuduwvi tagoyiyraAt fiy Aloqalobi ladaY AlobaAligiyno
reference (untransliterated): وَكَتَبُوا فِي دَورِيَّةِ لكُلِّيَّةِ لأَمِيرِكِيَّةِ لِأَمراضِ لقَلب أَنَّ لسُّمنَةَ تَرتَبِطُ بِحُدُوثِ تَغَيِيرات فِي لقَلبِ لَدى لبالِغِين
predicted (untransliterated): وَكَتَبُوا فِي دَوْرِيَّةِ الْكُلِّيَّةِ الْأَمِيرِيكِيَّةِ لِأَمْرَاضِ الْقَلْ أَنَّ السُّمْنَةَ تَرْتَبِطُ بِحُدُوثِ تَغْيِيرَات فِي الْقَلْبِ لَدَى الْبَالِغِينْ
--
reference: wakul~u *alika bimuHtawYan munxafiDin lilgAyapi mina ls~uErAti lHarAriy~api
predicted: wakul~u *alika bimuHotawAF munoxafiDK lilogaAyapi mina Als~uEoraAti AloHaraAriy~api
reference (untransliterated): وَكُلُّ ذَلِكَ بِمُحتَوىَن مُنخَفِضِن لِلغايَةِ مِنَ لسُّعراتِ لحَرارِيَّةِ
predicted (untransliterated): وَكُلُّ ذَلِكَ بِمُحْتَواً مُنْخَفِضٍ لِلْغَايَةِ مِنَ السُّعْرَاتِ الْحَرَارِيَّةِ
--
reference: wakul~amA zAdat kamiy~apu ls~uk~ari lmutanAwalapi maEa lt~amri taqil~u fA}idatuhu lgi*A}iy~apu
predicted: wakul~amaA zaAdato kam~ay~apu Als~uk~ari AlomutanaAwalapi maEa Alotamori taqil~u faA}idatuhu Alogi*aA}iy~apu
reference (untransliterated): وَكُلَّما زادَت كَمِيَّةُ لسُّكَّرِ لمُتَناوَلَةِ مَعَ لتَّمرِ تَقِلُّ فائِدَتُهُ لغِذائِيَّةُ
predicted (untransliterated): وَكُلَّمَا زَادَتْ كَمَّيَّةُ السُّكَّرِ الْمُتَنَاوَلَةِ مَعَ الْتَمْرِ تَقِلُّ فَائِدَتُهُ الْغِذَائِيَّةُ
--
reference: walA yazAlu ha*A lbaladu mutamas~ikan bitaqwiymi lkaniysapi lqibTiy~api >almaEruwfi maHal~iy~an biAlt~aqwiymi l<ivyuwbiy~i
predicted: walaA yazaAlu ha*aA Alobaladu mutamas~ikAF bitaqowiymi Alokaniysapi AloqiboTiy~api >alomaEoruwfi maHal~iy~AF biAlt~aqowiymi Alo<ivoyuwbiy~i
reference (untransliterated): وَلا يَزالُ هَذا لبَلَدُ مُتَمَسِّكَن بِتَقوِيمِ لكَنِيسَةِ لقِبطِيَّةِ أَلمَعرُوفِ مَحَلِّيَّن بِالتَّقوِيمِ لإِثيُوبِيِّ
predicted (untransliterated): وَلَا يَزَالُ هَذَا الْبَلَدُ مُتَمَسِّكاً بِتَقْوِيمِ الْكَنِيسَةِ الْقِبْطِيَّةِ أَلْمَعْرُوفِ مَحَلِّيّاً بِالتَّقْوِيمِ الْإِثْيُوبِيِّ
--
reference: walaEibati lxibrapu dawrahA fiy tatwiyji EA$uwra lxAmisi EAlamiy~an
predicted: walaEibapi Aloxiborapu daworahaA fiy tatowiyji EaA$uwra AloxaAmisi EaAlamiy~AF
reference (untransliterated): وَلَعِبَتِ لخِبرَةُ دَورَها فِي تَتوِيجِ عاشُورَ لخامِسِ عالَمِيَّن
predicted (untransliterated): وَلَعِبَةِ الْخِبْرَةُ دَوْرَهَا فِي تَتْوِيجِ عَاشُورَ الْخَامِسِ عَالَمِيّاً
--
reference: tatawAlY lEamalyAtu ls~ir~iyapa biAlHuduwv
predicted: tatawaAlaY AloEamaliy~aAtu Als~ir~iy~apu biAloHuduwv
reference (untransliterated): تَتَوالى لعَمَلياتُ لسِّرِّيَةَ بِالحُدُوث
predicted (untransliterated): تَتَوَالَى الْعَمَلِيَّاتُ السِّرِّيَّةُ بِالْحُدُوث
--
reference: wamin tilka ls~ilaE >al$~Ayu lS~iyniy~u wAlwaraqu wAlbAruwdu wAlbuwSilapu
predicted: wamino tiloka Als~ilaE >al$~aAyu AlS~iyniy~u waAlowaraqu waAlobaAruwdu waAlobuwSilapu
reference (untransliterated): وَمِن تِلكَ لسِّلَع أَلشّايُ لصِّينِيُّ والوَرَقُ والبارُودُ والبُوصِلَةُ
predicted (untransliterated): وَمِنْ تِلْكَ السِّلَع أَلشَّايُ الصِّينِيُّ وَالْوَرَقُ وَالْبَارُودُ وَالْبُوصِلَةُ
--
reference: wamanaHa >AbA}uhumu lqudrapa EalY lt~aHak~umi fiy kayfiy~api stixdAmi ha*ihi lxidmapi
predicted: wamanaHa |baA&uhumu Aloqudorapa EalaY Alt~aHak~umi fiy kayofiy~api AsotixodaAmi ha*ihi Aloxidomapi
reference (untransliterated): وَمَنَحَ أابائُهُمُ لقُدرَةَ عَلى لتَّحَكُّمِ فِي كَيفِيَّةِ ستِخدامِ هَذِهِ لخِدمَةِ
predicted (untransliterated): وَمَنَحَ آبَاؤُهُمُ الْقُدْرَةَ عَلَى التَّحَكُّمِ فِي كَيْفِيَّةِ اسْتِخْدَامِ هَذِهِ الْخِدْمَةِ
--
reference: waya>mulu lbAHivuwna taTwiyra Hubuwbin >aw nusxapin mina ld~awA' qAbilapan lilHaqni xilAla xamsi sanawAt
predicted: waya>omulu AlobaAHivuwna taTowiyra HuwuwbK >awo nusoxapK mina Ald~awaA qaAbilapF liloHaqoni xilaAla xamosi sanawaAt
reference (untransliterated): وَيَأمُلُ لباحِثُونَ تَطوِيرَ حُبُوبِن أَو نُسخَةِن مِنَ لدَّواء قابِلَةَن لِلحَقنِ خِلالَ خَمسِ سَنَوات
predicted (untransliterated): وَيَأْمُلُ الْبَاحِثُونَ تَطْوِيرَ حُوُوبٍ أَوْ نُسْخَةٍ مِنَ الدَّوَا قَابِلَةً لِلْحَقْنِ خِلَالَ خَمْسِ سَنَوَات
--
reference: wayastaxdimu lbarnAmaju niZAman saHAbiy~an lil*~akA'i lS~unEiy~i yasmaHu lahu bitaHliyli l<iymA'Ati wAlt~aEAbiyr
predicted: wayasotaxodimu AlobaronaAmaju niZaAmAF saHaAbiy~AF lil*~akaA'i AlS~unoEiy~i yasomaHu lahu bitaHoliyli Alo<iymaA'aAti waAlt~aEaAbiyro
reference (untransliterated): وَيَستَخدِمُ لبَرنامَجُ نِظامَن سَحابِيَّن لِلذَّكاءِ لصُّنعِيِّ يَسمَحُ لَهُ بِتَحلِيلِ لإِيماءاتِ والتَّعابِير
predicted (untransliterated): وَيَسْتَخْدِمُ الْبَرْنَامَجُ نِظَاماً سَحَابِيّاً لِلذَّكَاءِ الصُّنْعِيِّ يَسْمَحُ لَهُ بِتَحْلِيلِ الْإِيمَاءَاتِ وَالتَّعَابِيرْ
--
reference: wayuEtabaru mihrajAnu qarTAja ls~iynamA}iy~u min >aEraqi mihrajAnAti >afriyqyA
predicted: wayuEotabaru mihorajaAnu qaroTaAja Als~iynamaA}iy~u mino >aEoraqi mihorajaAnaAti >afriyqoyaA
reference (untransliterated): وَيُعتَبَرُ مِهرَجانُ قَرطاجَ لسِّينَمائِيُّ مِن أَعرَقِ مِهرَجاناتِ أَفرِيقيا
predicted (untransliterated): وَيُعْتَبَرُ مِهْرَجَانُ قَرْطَاجَ السِّينَمَائِيُّ مِنْ أَعْرَقِ مِهْرَجَانَاتِ أَفرِيقْيَا
--
reference: wayaquwlu lEulamA'u <in~ahu min gayri lmuraj~aHi >an tuTaw~ira lbaktiyryA lmuEdiyapu muqAwamapan Did~a lEilAji ljadiyd >al~a*iy >aSbaHa mutAHan biAlfiEl fiy $akli marhamin lil>amrADi ljildiy~api
predicted: wayaquwlu AloEulamaA'u <in~ahu mino gayori Alomuraj~aHi >ano tuTaw~ira AlobakotiyroyaA AlomuEodiyapu muqaAwamapF Did~a AloEilaAji lojadiyd >al~a*iy >aSobaHa mutaAHAF biAlofiEol fiy $akoli marohamK lilo>amoraADi Alojiylodiy~api
reference (untransliterated): وَيَقُولُ لعُلَماءُ إِنَّهُ مِن غَيرِ لمُرَجَّحِ أَن تُطَوِّرَ لبَكتِيريا لمُعدِيَةُ مُقاوَمَةَن ضِدَّ لعِلاجِ لجَدِيد أَلَّذِي أَصبَحَ مُتاحَن بِالفِعل فِي شَكلِ مَرهَمِن لِلأَمراضِ لجِلدِيَّةِ
predicted (untransliterated): وَيَقُولُ الْعُلَمَاءُ إِنَّهُ مِنْ غَيْرِ الْمُرَجَّحِ أَنْ تُطَوِّرَ الْبَكْتِيرْيَا الْمُعْدِيَةُ مُقَاوَمَةً ضِدَّ الْعِلَاجِ لْجَدِيد أَلَّذِي أَصْبَحَ مُتَاحاً بِالْفِعْل فِي شَكْلِ مَرْهَمٍ لِلْأَمْرَاضِ الْجِيلْدِيَّةِ
--
reference: wayumkinuka lHuSuwlu EalY taTbiyqAtin lilt~adriybAti l>asAsiy~api maj~Anan
predicted: wayumokinuka AloHuSuwlu EalaY taTobiyqaAtK liltadoriybaAti Alo>asaAsiy~api maj~aAnAF
reference (untransliterated): وَيُمكِنُكَ لحُصُولُ عَلى تَطبِيقاتِن لِلتَّدرِيباتِ لأَساسِيَّةِ مَجّانَن
predicted (untransliterated): وَيُمْكِنُكَ الْحُصُولُ عَلَى تَطْبِيقَاتٍ لِلتَدْرِيبَاتِ الْأَسَاسِيَّةِ مَجَّاناً
--
```
## Fine-Tuning Script
You can find the script used to produce this model
[here](https://github.com/elgeish/transformers/blob/cfc0bd01f2ac2ea3a5acc578ef2e204bf4304de7/examples/research_projects/wav2vec2/finetune_base_arabic_speech_corpus.sh).
|
{"language": "ar", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech"], "datasets": ["arabic_speech_corpus"]}
|
elgeish/wav2vec2-large-xlsr-53-levantine-arabic
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"ar",
"dataset:arabic_speech_corpus",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
# zero-shot-absa
## About
The goal of this project is to accomplish aspect-based sentiment analysis without dependence on the severely limited training data available - that is, the task of aspect-based sentiment analysis is not explicitly supervised, an approach known as “zero-shot learning”. Sentiment analysis has already been used extensively in industry for things such as customer feedback; however, a model such as the one I am proposing would be able to identify topics in a document and also identify the sentiment of the author toward (or associated with) each topic, which allows for detection of much more specific feedback or commentary than simple sentiment analysis.
## Details
There will be three models in the project; the first, m1, will use Latent Dirichlet Allocation to find topics in documents, implemented through gensim. The second, m2, is a zero-shot learning text classification model, available at Hugging Face, which I plan to fine-tune on the output of the LDA model on various tweets and reviews. The final piece, m3, is the sentiment intensity analyzer available from NLTK's VADER module. The architecture is as follows: m1 will generate a list of topics for each document in the dataset. I will then create a mapping T from each document to the corresponding list of topics. It would be nice to have labeled data here that, given the output T(doc), supplies the human-generated topic name. Since that isn't available, the zero-shot text classifier from Hugging Face will be used to generate a topic name, which exists only to interpret the output. Then, for each topic t in T(doc), we search the document for all sentences containing at least one word in t and use NLTK to compute the average sentiment score of these sentences. We then return, as the model output, a dictionary with all topic names found in the document as keys and the average sentiment from NLTK as the values.
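The snippet below is a minimal sketch of that pipeline, not a final implementation: the example documents, the candidate topic names, and the specific zero-shot model (`facebook/bart-large-mnli`) are placeholders chosen only for illustration.
```python
# Sketch of the m1 (LDA) -> m2 (zero-shot naming) -> m3 (VADER) pipeline.
# Documents, candidate topic names, and model choices below are placeholders.
import nltk
from gensim import corpora
from gensim.models import LdaModel
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from nltk.tokenize import sent_tokenize, word_tokenize
from transformers import pipeline

nltk.download("punkt")
nltk.download("vader_lexicon")

docs = [
    "The pasta was amazing but the service was painfully slow.",
    "Great staff, though parking near the restaurant is terrible.",
]
candidate_names = ["food", "service", "parking", "price"]  # placeholder label set

# m1: LDA topic model
tokenized = [word_tokenize(d.lower()) for d in docs]
dictionary = corpora.Dictionary(tokenized)
bows = [dictionary.doc2bow(t) for t in tokenized]
lda = LdaModel(bows, num_topics=2, id2word=dictionary, random_state=0)

# m2: zero-shot classifier used only to name the discovered topics
namer = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
# m3: sentiment intensity analyzer
sia = SentimentIntensityAnalyzer()

def aspect_sentiments(doc, bow):
    """Return {topic name: average VADER compound score} for one document."""
    result = {}
    for topic_id, _ in lda.get_document_topics(bow):
        words = [w for w, _ in lda.show_topic(topic_id, topn=5)]
        name = namer(" ".join(words), candidate_labels=candidate_names)["labels"][0]
        # sentences that contain at least one word from the topic
        hits = [s for s in sent_tokenize(doc) if any(w in s.lower() for w in words)]
        if hits:
            result[name] = sum(sia.polarity_scores(s)["compound"] for s in hits) / len(hits)
    return result

for doc, bow in zip(docs, bows):
    print(aspect_sentiments(doc, bow))
```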
## Dependencies
- `scikit-learn`
- `gensim`
- `NLTK`
- `transformers` (Hugging Face)
## Data
The data this project will be trained on come from Twitter and Yelp. With access to the Twitter API through a developer account, one can create a large corpus from tweets. Yelp has very relevant data for this task available at https://www.yelp.com/dataset. I will train / fine-tune each model twice, once for Twitter and once for Yelp, on a training set generated by scikit-learn.
Labeled data for testing are available at https://europe.naverlabs.com/Research/Natural-Language-Processing/Aspect-Based-Sentiment-Analysis-Dataset/ . These data are very straightforward to use, as they have annotations of topics and the associated sentiment scores for each sentence.
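As a small illustration of the split mentioned above (the `reviews` list is a placeholder for the Twitter or Yelp corpus loaded elsewhere):
```python
from sklearn.model_selection import train_test_split

reviews = ["placeholder review 1", "placeholder review 2", "placeholder review 3", "placeholder review 4"]
train_docs, test_docs = train_test_split(reviews, test_size=0.2, random_state=42)
```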
|
{}
|
eli/zero-shot-absa
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
This model was pretrained on the bookcorpus dataset using knowledge distillation.
Its particularity is that, although it shares the same architecture as BERT, it has a hidden size of 240. Since it has 12 attention heads, the head size (20) differs from that of the BERT base model (64).
The knowledge distillation was performed using multiple loss functions.
The weights of the model were initialized from scratch.
Note: the tokenizer is the same as that of bert-base-uncased.
To load the model & tokenizer:
````python
from transformers import AutoModelForMaskedLM, BertTokenizer
model_name = "eli4s/Bert-L12-h240-A12"
model = AutoModelForMaskedLM.from_pretrained(model_name)
tokenizer = BertTokenizer.from_pretrained(model_name)
````
To use it as a masked language model:
````python
import torch
sentence = "Let's have a [MASK]."
model.eval()
inputs = tokenizer([sentence], padding='longest', return_tensors='pt')
output = model(inputs['input_ids'], attention_mask=inputs['attention_mask'])
# find the position of the [MASK] token (id 103 for the bert-base-uncased tokenizer)
mask_index = inputs['input_ids'].tolist()[0].index(tokenizer.mask_token_id)
masked_token = output['logits'][0][mask_index].argmax(axis=-1)
predicted_token = tokenizer.decode(masked_token)
print(predicted_token)
````
Or we can also retrieve the n most likely predictions:
````python
top_n = 5
vocab_size = model.config.vocab_size
logits = output['logits'][0][mask_index].tolist()
top_tokens = sorted(list(range(vocab_size)), key=lambda i:logits[i], reverse=True)[:top_n]
tokenizer.decode(top_tokens)
````
|
{}
|
eli4s/Bert-L12-h240-A12
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
This model was pretrained on the bookcorpus dataset using knowledge distillation.
Its particularity is that, although it shares the same architecture as BERT, it has a hidden size of 256. Since it has 4 attention heads, the head size is 64, the same as in the BERT base model.
The knowledge distillation was performed using multiple loss functions.
The weights of the model were initialized from scratch.
Note: the tokenizer is the same as that of bert-base-uncased.
To load the model & tokenizer:
````python
from transformers import AutoModelForMaskedLM, BertTokenizer
model_name = "eli4s/Bert-L12-h256-A4"
model = AutoModelForMaskedLM.from_pretrained(model_name)
tokenizer = BertTokenizer.from_pretrained(model_name)
````
To use it as a masked language model:
````python
import torch
sentence = "Let's have a [MASK]."
model.eval()
inputs = tokenizer([sentence], padding='longest', return_tensors='pt')
output = model(inputs['input_ids'], attention_mask=inputs['attention_mask'])
# find the position of the [MASK] token (id 103 for the bert-base-uncased tokenizer)
mask_index = inputs['input_ids'].tolist()[0].index(tokenizer.mask_token_id)
masked_token = output['logits'][0][mask_index].argmax(axis=-1)
predicted_token = tokenizer.decode(masked_token)
print(predicted_token)
````
Or we can also retrieve the n most likely predictions:
````python
top_n = 5
vocab_size = model.config.vocab_size
logits = output['logits'][0][mask_index].tolist()
top_tokens = sorted(list(range(vocab_size)), key=lambda i:logits[i], reverse=True)[:top_n]
tokenizer.decode(top_tokens)
````
|
{}
|
eli4s/Bert-L12-h256-A4
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
This model was pretrained on the bookcorpus dataset using knowledge distillation.
Its particularity is that, although it shares the same architecture as BERT, it has a hidden size of 384 (half the hidden size of BERT) and 6 attention heads (hence the same head size as BERT).
The knowledge distillation was performed using multiple loss functions.
The weights of the model were initialized from scratch.
Note: the tokenizer is the same as that of bert-base-uncased.
To load the model & tokenizer:
````python
from transformers import AutoModelForMaskedLM, BertTokenizer
model_name = "eli4s/Bert-L12-h384-A6"
model = AutoModelForMaskedLM.from_pretrained(model_name)
tokenizer = BertTokenizer.from_pretrained(model_name)
````
To use it on a sentence:
````python
import torch
sentence = "Let's have a [MASK]."
model.eval()
inputs = tokenizer([sentence], padding='longest', return_tensors='pt')
output = model(inputs['input_ids'], attention_mask=inputs['attention_mask'])
# find the position of the [MASK] token (id 103 for the bert-base-uncased tokenizer)
mask_index = inputs['input_ids'].tolist()[0].index(tokenizer.mask_token_id)
masked_token = output['logits'][0][mask_index].argmax(axis=-1)
predicted_token = tokenizer.decode(masked_token)
print(predicted_token)
````
Or we can also retrieve the n most likely predictions:
````python
top_n = 5
vocab_size = model.config.vocab_size
logits = output['logits'][0][mask_index].tolist()
top_tokens = sorted(list(range(vocab_size)), key=lambda i:logits[i], reverse=True)[:top_n]
tokenizer.decode(top_tokens)
````
|
{}
|
eli4s/Bert-L12-h384-A6
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
{}
|
eli4s/chaii
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
This model was pretrained on the bookcorpus dataset using knowledge distillation.
Its particularity is that, although it shares the same architecture as BERT, it has a hidden size of 256 (a third of the hidden size of BERT) and 4 attention heads (hence the same head size as BERT).
The weights of the model were initialized by pruning the weights of bert-base-uncased.
Knowledge distillation was then performed, using multiple loss functions, to fine-tune the model.
Note: the tokenizer is the same as that of bert-base-uncased.
To load the model & tokenizer:
````python
from transformers import AutoModelForMaskedLM, BertTokenizer
model_name = "eli4s/prunedBert-L12-h256-A4-finetuned"
model = AutoModelForMaskedLM.from_pretrained(model_name)
tokenizer = BertTokenizer.from_pretrained(model_name)
````
To use it on a sentence:
````python
import torch
sentence = "Let's have a [MASK]."
model.eval()
inputs = tokenizer([sentence], padding='longest', return_tensors='pt')
output = model(inputs['input_ids'], attention_mask=inputs['attention_mask'])
# find the position of the [MASK] token (id 103 for the bert-base-uncased tokenizer)
mask_index = inputs['input_ids'].tolist()[0].index(tokenizer.mask_token_id)
masked_token = output['logits'][0][mask_index].argmax(axis=-1)
predicted_token = tokenizer.decode(masked_token)
print(predicted_token)
````
Or we can also retrieve the n most likely predictions:
````python
top_n = 5
vocab_size = model.config.vocab_size
logits = output['logits'][0][mask_index].tolist()
top_tokens = sorted(list(range(vocab_size)), key=lambda i:logits[i], reverse=True)[:top_n]
tokenizer.decode(top_tokens)
````
|
{}
|
eli4s/prunedBert-L12-h256-A4-finetuned
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
This model was pretrained on the bookcorpus dataset using knowledge distillation.
Its particularity is that, although it shares the same architecture as BERT, it has a hidden size of 384 (half the hidden size of BERT) and 6 attention heads (hence the same head size as BERT).
The weights of the model were initialized by pruning the weights of bert-base-uncased.
Knowledge distillation was then performed, using multiple loss functions, to fine-tune the model.
Note: the tokenizer is the same as that of bert-base-uncased.
To load the model & tokenizer:
````python
from transformers import AutoModelForMaskedLM, BertTokenizer
model_name = "eli4s/prunedBert-L12-h384-A6-finetuned"
model = AutoModelForMaskedLM.from_pretrained(model_name)
tokenizer = BertTokenizer.from_pretrained(model_name)
````
To use it on a sentence:
````python
import torch
sentence = "Let's have a [MASK]."
model.eval()
inputs = tokenizer([sentence], padding='longest', return_tensors='pt')
output = model(inputs['input_ids'], attention_mask=inputs['attention_mask'])
# find the position of the [MASK] token (id 103 for the bert-base-uncased tokenizer)
mask_index = inputs['input_ids'].tolist()[0].index(tokenizer.mask_token_id)
masked_token = output['logits'][0][mask_index].argmax(axis=-1)
predicted_token = tokenizer.decode(masked_token)
print(predicted_token)
````
Or we can also retrieve the n most likely predictions:
````python
top_n = 5
vocab_size = model.config.vocab_size
logits = output['logits'][0][mask_index].tolist()
top_tokens = sorted(list(range(vocab_size)), key=lambda i:logits[i], reverse=True)[:top_n]
tokenizer.decode(top_tokens)
````
|
{}
|
eli4s/prunedBert-L12-h384-A6-finetuned
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
eliasbe/IceBERT-finetuned-ner-finetuned-ner
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-ner
This model is a fine-tuned version of [eliasbe/IceBERT-finetuned-ner](https://huggingface.co/eliasbe/IceBERT-finetuned-ner) on the mim_gold_ner dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "gpl-3.0", "tags": ["generated_from_trainer"], "datasets": ["mim_gold_ner"], "widget": [{"text": "systurnar gu\u00f0r\u00fan og monique voru einar \u00ed sk\u00f3ginum umkringdar v\u00ed\u00f0i, eik og reyni me\u00f0 \u00fe\u00e1 \u00f3sk a\u00f0 sameinast fj\u00f6lskyldu sinni sem f\u00f3r \u00e1 mai thai og \u00ed b\u00ed\u00f3 parad\u00eds a\u00f0 sj\u00e1 jim carey leika \u00ed the eternal sunshine of the spotless mind.", "results": []}]}
|
eliasbe/IceBERT-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0827
- Precision: 0.9002
- Recall: 0.896
- F1: 0.8981
- Accuracy: 0.9844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0567 | 1.0 | 2904 | 0.1081 | 0.8486 | 0.8140 | 0.8309 | 0.9796 |
| 0.0302 | 2.0 | 5808 | 0.0906 | 0.8620 | 0.8298 | 0.8456 | 0.9818 |
| 0.0197 | 3.0 | 8712 | 0.0948 | 0.8691 | 0.8447 | 0.8567 | 0.9826 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "agpl-3.0", "tags": ["generated_from_trainer"], "datasets": ["mim_gold_ner"], "metrics": ["precision", "recall", "f1", "accuracy"], "widget": [{"text": "systurnar gu\u00f0r\u00fan og monique voru einar \u00ed sk\u00f3ginum umkringdar v\u00ed\u00f0i, eik og reyni me\u00f0 \u00fe\u00e1 \u00f3sk a\u00f0 sameinast fj\u00f6lskyldu sinni sem f\u00f3r \u00e1 mai thai og \u00ed b\u00ed\u00f3 parad\u00eds a\u00f0 sj\u00e1 jim carey leika \u00ed the eternal sunshine of the spotless mind."}], "model-index": [{"name": "XLMR-ENIS-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "mim_gold_ner", "type": "mim_gold_ner", "args": "mim-gold-ner"}, "metrics": [{"type": "precision", "value": 0.9002453676283949, "name": "Precision"}, {"type": "recall", "value": 0.896, "name": "Recall"}, {"type": "f1", "value": 0.8981176669198953, "name": "F1"}, {"type": "accuracy", "value": 0.9843747637694087, "name": "Accuracy"}]}]}]}
|
eliasbe/XLMR-ENIS-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:agpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
{}
|
eliasedwin7/MalayalamBERT
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
eliasedwin7/MalayalamBERTo
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
elif/animess
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
elif/eliff
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
elif/nices
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-LR_1e-3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5215
- Bleu: 7.1606
- Gen Len: 18.2451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6758 | 1.0 | 7629 | 1.5215 | 7.1606 | 18.2451 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-en-to-ro-LR_1e-3", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 7.1606, "name": "Bleu"}]}]}]}
|
eliotm/t5-small-finetuned-en-to-ro-LR_1e-3
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
eliotm/t5-small-finetuned-en-to-ro-dataset_20
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-fp16_off
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8351
- Bleu: 5.9132
- Gen Len: 18.2656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.8501 | 1.0 | 7629 | 1.8351 | 5.9132 | 18.2656 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-en-to-ro-fp16_off", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 5.9132, "name": "Bleu"}]}]}]}
|
eliotm/t5-small-finetuned-en-to-ro-fp16_off
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-lr0.001
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8309
- Bleu: 5.8837
- Gen Len: 18.2656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.9442 | 1.0 | 7629 | 1.8309 | 5.8837 | 18.2656 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-en-to-ro-lr0.001", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 5.8837, "name": "Bleu"}]}]}]}
|
eliotm/t5-small-finetuned-en-to-ro-lr0.001
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
eliotm/t5-small-finetuned-en-to-ro-lr0.01
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-lr_2e-6
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4232
- Bleu: 7.2935
- Gen Len: 18.2521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.04375
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6703 | 0.04 | 2671 | 1.4232 | 7.2935 | 18.2521 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-en-to-ro-lr_2e-6", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 7.2935, "name": "Bleu"}]}]}]}
|
eliotm/t5-small-finetuned-en-to-ro-lr_2e-6
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
eliotm/t5-small-finetuned-en-to-ro-lr_decay
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
# Test
|
{"language": "eo", "license": "apache-2.0", "thumbnail": "https://huggingface.co/blog/assets/01_how-to-train/EsperBERTo-thumbnail-v2.png", "widget": [{"text": "Jen la komenco de bela <mask>."}, {"text": "Uno du <mask> top"}, {"text": "Jen fini\u011das bela <mask>."}]}
|
elishowk/EsperBERTo-small
| null |
[
"eo",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
elishowk/fasttext_test
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
feature-extraction
|
generic
|
# Pretrained FastText word vectors for English
https://github.com/facebookresearch/fastText
## Usage
```python
import fasttext
import fasttext.util

# download the pretrained English vectors (cc.en.300.bin) if not already present
fasttext.util.download_model('en', if_exists='ignore')
ft = fasttext.load_model('cc.en.300.bin')
ft.get_word_vector('hello')
```
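A follow-up example, assuming the model was loaded as above; both calls are part of the standard `fasttext` Python API:
```python
ft.get_dimension()                      # 300 for cc.en.300.bin
ft.get_nearest_neighbors('hello', k=5)  # list of (similarity, word) pairs
```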
|
{"library_name": "generic", "tags": ["feature-extraction"]}
|
elishowk/fasttext_test2
| null |
[
"generic",
"feature-extraction",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
spacy
|
| Feature | Description |
| --- | --- |
| **Name** | `is_core_web_trf` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.1.1,<3.2.0` |
| **Default Pipeline** | `transformer`, `ner`, `tagger`, `parser` |
| **Components** | `transformer`, `ner`, `tagger`, `parser` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (591 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `Date`, `Location`, `Miscellaneous`, `Money`, `Organization`, `Percent`, `Person`, `Time` |
| **`tagger`** | `aa`, `aae`, `aam`, `af`, `afe`, `afm`, `au`, `c`, `cn`, `ct`, `e`, `fahee`, `fahen`, `faheo`, `faheþ`, `fahfe`, `fahfn`, `fahfo`, `fahfþ`, `fakee`, `faken`, `fakeo`, `fakeþ`, `fakfe`, `fakfn`, `fakfo`, `fakfþ`, `favee`, `faven`, `faveo`, `faveþ`, `favfe`, `favfn`, `favfo`, `favfþ`, `fbhee`, `fbhen`, `fbheo`, `fbheþ`, `fbhfe`, `fbhfn`, `fbhfo`, `fbhfþ`, `fbkee`, `fbken`, `fbkeo`, `fbkeþ`, `fbkfe`, `fbkfn`, `fbkfo`, `fbkfþ`, `fbvee`, `fbven`, `fbveo`, `fbveþ`, `fbvfe`, `fbvfn`, `fbvfo`, `fbvfþ`, `fehee`, `fehen`, `feheo`, `feheþ`, `fehfe`, `fehfn`, `fehfo`, `fehfþ`, `fekee`, `feken`, `fekeo`, `fekeþ`, `fekfe`, `fekfn`, `fekfo`, `fekfþ`, `fevee`, `feven`, `feveo`, `feveþ`, `fevfe`, `fevfn`, `fevfo`, `fevfþ`, `fohee`, `fohen`, `foheo`, `foheþ`, `fohfe`, `fohfn`, `fohfo`, `fohfþ`, `fokee`, `foken`, `fokeo`, `fokeþ`, `fokfe`, `fokfn`, `fokfo`, `fokfþ`, `fovee`, `foven`, `foveo`, `foveþ`, `fovfe`, `fovfn`, `fovfo`, `fovfþ`, `fp1ee`, `fp1en`, `fp1eo`, `fp1eþ`, `fp1fe`, `fp1fn`, `fp1fo`, `fp1fþ`, `fp2ee`, `fp2en`, `fp2eo`, `fp2eþ`, `fp2fe`, `fp2fn`, `fp2fo`, `fp2fþ`, `fphee`, `fphen`, `fpheo`, `fpheþ`, `fphfe`, `fphfn`, `fphfo`, `fphfþ`, `fpkee`, `fpken`, `fpkeo`, `fpkeþ`, `fpkfe`, `fpkfn`, `fpkfo`, `fpkfþ`, `fpvee`, `fpven`, `fpveo`, `fpveþ`, `fpvfe`, `fpvfn`, `fpvfo`, `fpvfþ`, `fshee`, `fshen`, `fsheo`, `fsheþ`, `fshfe`, `fshfn`, `fshfo`, `fshfþ`, `fskee`, `fsken`, `fskeo`, `fskeþ`, `fskfe`, `fskfn`, `fskfo`, `fskfþ`, `fsvee`, `fsven`, `fsveo`, `fsveþ`, `fsvfe`, `fsvfn`, `fsvfo`, `fsvfþ`, `ghee`, `ghen`, `gheo`, `gheþ`, `ghfe`, `ghfn`, `ghfo`, `ghfþ`, `gkee`, `gken`, `gkeo`, `gkeþ`, `gkfe`, `gkfn`, `gkfo`, `gkfþ`, `gvee`, `gven`, `gveo`, `gveþ`, `gvfe`, `gvfn`, `gvfo`, `gvfþ`, `ks`, `kt`, `lheeof`, `lheesf`, `lheeve`, `lheevf`, `lheevm`, `lhenof`, `lhense`, `lhensf`, `lhenve`, `lhenvf`, `lhenvm`, `lheoof`, `lheose`, `lheosf`, `lheosm`, `lheove`, `lheovf`, `lheovm`, `lheþof`, `lheþse`, `lheþsf`, `lheþve`, `lheþvf`, `lheþvm`, `lhfeof`, `lhfese`, `lhfesf`, `lhfeve`, `lhfevf`, `lhfevm`, `lhfnof`, `lhfnse`, `lhfnsf`, `lhfnve`, `lhfnvf`, `lhfnvm`, `lhfoof`, `lhfose`, `lhfosf`, `lhfove`, `lhfovf`, `lhfovm`, `lhfþof`, `lhfþse`, `lhfþsf`, `lhfþve`, `lhfþvf`, `lhfþvm`, `lkeeof`, `lkeesf`, `lkeeve`, `lkeevf`, `lkeevm`, `lkenof`, `lkense`, `lkensf`, `lkenve`, `lkenvf`, `lkenvm`, `lkeoof`, `lkeose`, `lkeosf`, `lkeove`, `lkeovf`, `lkeovm`, `lkeþof`, `lkeþse`, `lkeþsf`, `lkeþve`, `lkeþvf`, `lkeþvm`, `lkfeof`, `lkfese`, `lkfesf`, `lkfeve`, `lkfevf`, `lkfevm`, `lkfnof`, `lkfnse`, `lkfnsf`, `lkfnve`, `lkfnvf`, `lkfnvm`, `lkfoof`, `lkfose`, `lkfosf`, `lkfove`, `lkfovf`, `lkfovm`, `lkfþof`, `lkfþse`, `lkfþsf`, `lkfþsm`, `lkfþve`, `lkfþvf`, `lkfþvm`, `lveeof`, `lveese`, `lveesf`, `lveeve`, `lveevf`, `lveevm`, `lvenof`, `lvense`, `lvensf`, `lvenve`, `lvenvf`, `lvenvm`, `lveoof`, `lveose`, `lveosf`, `lveove`, `lveovf`, `lveovm`, `lveþof`, `lveþse`, `lveþsf`, `lveþve`, `lveþvf`, `lveþvm`, `lvfeof`, `lvfese`, `lvfesf`, `lvfeve`, `lvfevf`, `lvfevm`, `lvfnof`, `lvfnse`, `lvfnsf`, `lvfnve`, `lvfnvf`, `lvfnvm`, `lvfoof`, `lvfose`, `lvfosf`, `lvfove`, `lvfovf`, `lvfovm`, `lvfþof`, `lvfþse`, `lvfþsf`, `lvfþsm`, `lvfþve`, `lvfþvf`, `lvfþvm`, `m`, `n----s`, `n-ee`, `n-ee-s`, `n-en`, `n-en-s`, `n-eng`, `n-eo`, `n-eo-s`, `n-eþ`, `n-eþ-s`, `n-fn`, `nhee`, `nhee-s`, `nheeg`, `nheegs`, `nhen`, `nhen-s`, `nheng`, `nhengs`, `nheo`, `nheo-s`, `nheog`, `nheogs`, `nheþ`, `nheþ-s`, `nheþg`, `nheþgs`, `nhfe`, `nhfe-s`, `nhfeg`, `nhfegs`, `nhfn`, `nhfn-s`, `nhfng`, `nhfngs`, `nhfo`, `nhfo-s`, `nhfog`, `nhfogs`, `nhfþ`, 
`nhfþ-s`, `nhfþg`, `nhfþgs`, `nkee`, `nkee-s`, `nkeeg`, `nkeegs`, `nken`, `nken-s`, `nkeng`, `nkengs`, `nkeo`, `nkeo-s`, `nkeog`, `nkeogs`, `nkeþ`, `nkeþ-s`, `nkeþg`, `nkeþgs`, `nkfe`, `nkfe-s`, `nkfeg`, `nkfegs`, `nkfn`, `nkfn-s`, `nkfng`, `nkfngs`, `nkfo`, `nkfo-s`, `nkfog`, `nkfogs`, `nkfþ`, `nkfþ-s`, `nkfþg`, `nkfþgs`, `nvee`, `nvee-s`, `nveeg`, `nveegs`, `nven`, `nven-s`, `nveng`, `nvengs`, `nveo`, `nveo-s`, `nveog`, `nveogs`, `nveþ`, `nveþ-s`, `nveþg`, `nveþgs`, `nvfe`, `nvfe-s`, `nvfeg`, `nvfegs`, `nvfn`, `nvfn-s`, `nvfng`, `nvfngs`, `nvfo`, `nvfo-s`, `nvfog`, `nvfogs`, `nvfþ`, `nvfþ-s`, `nvfþg`, `nvfþgs`, `pa`, `pg`, `pk`, `pl`, `sbg2en`, `sbg2fn`, `sbm2en`, `sbm2fn`, `sfg1en`, `sfg1eþ`, `sfg1fn`, `sfg1fþ`, `sfg2en`, `sfg2eþ`, `sfg2fn`, `sfg2fþ`, `sfg3en`, `sfg3eþ`, `sfg3fn`, `sfg3fþ`, `sfm1en`, `sfm1eþ`, `sfm1fn`, `sfm1fþ`, `sfm2en`, `sfm2eþ`, `sfm2fn`, `sfm2fþ`, `sfm3en`, `sfm3eþ`, `sfm3fn`, `sfm3fþ`, `slg`, `sng`, `snm`, `svg1en`, `svg1eþ`, `svg1fn`, `svg1fþ`, `svg2en`, `svg2eþ`, `svg2fn`, `svg2fþ`, `svg3en`, `svg3eþ`, `svg3fn`, `svg3fþ`, `svm1en`, `svm1eþ`, `svm1fn`, `svm1fþ`, `svm2en`, `svm2eþ`, `svm2fn`, `svm3en`, `svm3eþ`, `svm3fn`, `svm3fþ`, `sþghen`, `sþgheo`, `sþghfn`, `sþghfo`, `sþgken`, `sþgkeo`, `sþgkfn`, `sþgkfo`, `sþgven`, `sþgveo`, `sþgvfn`, `sþgvfo`, `sþgvfþ`, `sþmhen`, `sþmheo`, `sþmken`, `sþmven`, `ta`, `tfhee`, `tfhen`, `tfheo`, `tfheþ`, `tfhfe`, `tfhfn`, `tfhfo`, `tfhfþ`, `tfkee`, `tfken`, `tfkeo`, `tfkeþ`, `tfkfe`, `tfkfn`, `tfkfo`, `tfkfþ`, `tfvee`, `tfven`, `tfveo`, `tfveþ`, `tfvfe`, `tfvfn`, `tfvfo`, `tfvfþ`, `to`, `tp`, `v`, `x` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `fixed`, `flat:name`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:arg`, `parataxis`, `punct`, `xcomp` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 92.06 |
| `ENTS_P` | 91.93 |
| `ENTS_R` | 92.18 |
| `TRANSFORMER_LOSS` | 248325.98 |
| `NER_LOSS` | 120059.07 |
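A minimal usage sketch, assuming the `is_core_web_trf` package has already been installed in the environment (the example sentence is a placeholder):
```python
import spacy

nlp = spacy.load("is_core_web_trf")
doc = nlp("Guðrún fór til Reykjavíkur í gær.")
print([(ent.text, ent.label_) for ent in doc.ents])  # named entities
print([(t.text, t.tag_, t.dep_) for t in doc])       # fine-grained tags and dependencies
```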
|
{"language": ["is"], "tags": ["spacy", "token-classification"]}
|
elisno/is_core_web_trf
| null |
[
"spacy",
"token-classification",
"is",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
spacy
|
| Feature | Description |
| --- | --- |
| **Name** | `is_ner_mim_sm` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.1.1,<3.2.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (8 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `Date`, `Location`, `Miscellaneous`, `Money`, `Organization`, `Percent`, `Person`, `Time` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 79.11 |
| `ENTS_P` | 80.29 |
| `ENTS_R` | 77.96 |
| `TOK2VEC_LOSS` | 1079057.14 |
| `NER_LOSS` | 792494.23 |
|
{"language": ["is"], "tags": ["spacy", "token-classification"]}
|
elisno/is_ner_mim_sm
| null |
[
"spacy",
"token-classification",
"is",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
spacy
|
| Feature | Description |
| --- | --- |
| **Name** | `is_ner_mim_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.1.1,<3.2.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (8 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `Date`, `Location`, `Miscellaneous`, `Money`, `Organization`, `Percent`, `Person`, `Time` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 92.06 |
| `ENTS_P` | 91.93 |
| `ENTS_R` | 92.18 |
| `TRANSFORMER_LOSS` | 248325.98 |
| `NER_LOSS` | 120059.07 |
|
{"language": ["is"], "tags": ["spacy", "token-classification"]}
|
elisno/is_ner_mim_trf
| null |
[
"spacy",
"token-classification",
"is",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
spacy
|
{"language": ["is"], "tags": ["spacy", "token-classification"]}
|
elisno/is_ud_is_pud
| null |
[
"spacy",
"token-classification",
"is",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
image-classification
|
transformers
|
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### algebra

#### arithmetic

#### calculus

#### geometry

#### trigonometry

|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
eliwill/rare-puppers
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-sts
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5657
- Pearsonr: 0.8375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 92 | 0.8280 | 0.7680 |
| No log | 2.0 | 184 | 0.6602 | 0.8185 |
| No log | 3.0 | 276 | 0.5939 | 0.8291 |
| No log | 4.0 | 368 | 0.5765 | 0.8367 |
| No log | 5.0 | 460 | 0.5657 | 0.8375 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["klue"], "metrics": ["pearsonr"], "model_index": [{"name": "bert-base-finetuned-sts", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "klue", "type": "klue", "args": "sts"}, "metric": {"name": "Pearsonr", "type": "pearsonr", "value": 0.837527365741951}}]}]}
|
eliza-dukim/bert-base-finetuned-sts-deprecated
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-sts
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4115
- Pearsonr: 0.8756
- F1: 0.8417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7836 | 1.0 | 365 | 0.5507 | 0.8435 | 0.8121 |
| 0.1564 | 2.0 | 730 | 0.4396 | 0.8495 | 0.8136 |
| 0.0989 | 3.0 | 1095 | 0.4115 | 0.8756 | 0.8417 |
| 0.0682 | 4.0 | 1460 | 0.4466 | 0.8746 | 0.8449 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["klue"], "metrics": ["pearsonr", "f1"], "model-index": [{"name": "bert-base-finetuned-sts", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "klue", "type": "klue", "args": "sts"}, "metrics": [{"type": "pearsonr", "value": 0.8756147003619346, "name": "Pearsonr"}, {"type": "f1", "value": 0.8416666666666667, "name": "F1"}]}]}]}
|
eliza-dukim/bert-base-finetuned-sts
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|