Dataset columns: modelId (string, 4–112 chars), lastModified (string, 24 chars), tags (list), pipeline_tag (string, 21 classes), files (list), publishedBy (string, 2–37 chars), downloads_last_month (int32, 0–9.44M), library (string, 15 classes), modelCard (string, 0–100k chars).
sismetanin/xlm_roberta_large-ru-sentiment-rureviews
2021-02-25T23:52:40.000Z
[ "pytorch", "xlm-roberta", "text-classification", "ru", "transformers", "sentiment analysis", "Russian" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
sismetanin
43
transformers
--- language: - ru tags: - sentiment analysis - Russian --- ## XLM-RoBERTa-Large-ru-sentiment-RuReviews XLM-RoBERTa-Large-ru-sentiment-RuReviews is a [XLM-RoBERTa-Large](https://huggingface.co/xlm-roberta-large) model fine-tuned on [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the ”Women’s Clothes and Accessories” product category on the primary e-commerce site in Russia. <table> <thead> <tr> <th rowspan="4">Model</th> <th rowspan="4">Score<br></th> <th rowspan="4">Rank</th> <th colspan="12">Dataset</th> </tr> <tr> <td colspan="6">SentiRuEval-2016<br></td> <td colspan="2" rowspan="2">RuSentiment</td> <td rowspan="2">KRND</td> <td rowspan="2">LINIS Crowd</td> <td rowspan="2">RuTweetCorp</td> <td rowspan="2">RuReviews</td> </tr> <tr> <td colspan="3">TC</td> <td colspan="3">Banks</td> </tr> <tr> <td>micro F1</td> <td>macro F1</td> <td>F1</td> <td>micro F1</td> <td>macro F1</td> <td>F1</td> <td>wighted</td> <td>F1</td> <td>F1</td> <td>F1</td> <td>F1</td> <td>F1</td> </tr> </thead> <tbody> <tr> <td>SOTA</td> <td>n/s</td> <td></td> <td>76.71</td> <td>66.40</td> <td>70.68</td> <td>67.51</td> <td>69.53</td> <td>74.06</td> <td>78.50</td> <td>n/s</td> <td>73.63</td> <td>60.51</td> <td>83.68</td> <td>77.44</td> </tr> <tr> <td>XLM-RoBERTa-Large</td> <td>76.37</td> <td>1</td> <td>82.26</td> <td>76.36</td> <td>79.42</td> <td>76.35</td> <td>76.08</td> <td>80.89</td> <td>78.31</td> <td>75.27</td> <td>75.17</td> <td>60.03</td> <td>88.91</td> <td>78.81</td> </tr> <tr> <td>SBERT-Large</td> <td>75.43</td> <td>2</td> <td>78.40</td> <td>71.36</td> <td>75.14</td> <td>72.39</td> <td>71.87</td> <td>77.72</td> <td>78.58</td> <td>75.85</td> <td>74.20</td> <td>60.64</td> <td>88.66</td> <td>77.41</td> </tr> <tr> <td>MBARTRuSumGazeta</td> <td>74.70</td> <td>3</td> <td>76.06</td> <td>68.95</td> <td>73.04</td> <td>72.34</td> <td>71.93</td> <td>77.83</td> <td>76.71</td> <td>73.56</td> <td>74.18</td> <td>60.54</td> <td>87.22</td> <td>77.51</td> </tr> <tr> <td>Conversational RuBERT</td> <td>74.44</td> <td>4</td> <td>76.69</td> <td>69.09</td> <td>73.11</td> <td>69.44</td> <td>68.68</td> <td>75.56</td> <td>77.31</td> <td>74.40</td> <td>73.10</td> <td>59.95</td> <td>87.86</td> <td>77.78</td> </tr> <tr> <td>LaBSE</td> <td>74.11</td> <td>5</td> <td>77.00</td> <td>69.19</td> <td>73.55</td> <td>70.34</td> <td>69.83</td> <td>76.38</td> <td>74.94</td> <td>70.84</td> <td>73.20</td> <td>59.52</td> <td>87.89</td> <td>78.47</td> </tr> <tr> <td>XLM-RoBERTa-Base</td> <td>73.60</td> <td>6</td> <td>76.35</td> <td>69.37</td> <td>73.42</td> <td>68.45</td> <td>67.45</td> <td>74.05</td> <td>74.26</td> <td>70.44</td> <td>71.40</td> <td>60.19</td> <td>87.90</td> <td>78.28</td> </tr> <tr> <td>RuBERT</td> <td>73.45</td> <td>7</td> <td>74.03</td> <td>66.14</td> <td>70.75</td> <td>66.46</td> <td>66.40</td> <td>73.37</td> <td>75.49</td> <td>71.86</td> <td>72.15</td> <td>60.55</td> <td>86.99</td> <td>77.41</td> </tr> <tr> <td>MBART-50-Large-Many-to-Many</td> <td>73.15</td> <td>8</td> <td>75.38</td> <td>67.81</td> <td>72.26</td> <td>67.13</td> <td>66.97</td> <td>73.85</td> <td>74.78</td> <td>70.98</td> <td>71.98</td> <td>59.20</td> <td>87.05</td> <td>77.24</td> </tr> <tr> <td>SlavicBERT</td> <td>71.96</td> <td>9</td> <td>71.45</td> <td>63.03</td> <td>68.44</td> <td>64.32</td> <td>63.99</td> <td>71.31</td> <td>72.13</td> <td>67.57</td> <td>72.54</td> <td>58.70</td> <td>86.43</td> <td>77.16</td> </tr> <tr> <td>EnRuDR-BERT</td> <td>71.51</td> <td>10</td> <td>72.56</td> <td>64.74</td> <td>69.07</td> 
<td>61.44</td> <td>60.21</td> <td>68.34</td> <td>74.19</td> <td>69.94</td> <td>69.33</td> <td>56.55</td> <td>87.12</td> <td>77.95</td> </tr> <tr> <td>RuDR-BERT</td> <td>71.14</td> <td>11</td> <td>72.79</td> <td>64.23</td> <td>68.36</td> <td>61.86</td> <td>60.92</td> <td>68.48</td> <td>74.65</td> <td>70.63</td> <td>68.74</td> <td>54.45</td> <td>87.04</td> <td>77.91</td> </tr> <tr> <td>MBART-50-Large</td> <td>69.46</td> <td>12</td> <td>70.91</td> <td>62.67</td> <td>67.24</td> <td>61.12</td> <td>60.25</td> <td>68.41</td> <td>72.88</td> <td>68.63</td> <td>70.52</td> <td>46.39</td> <td>86.48</td> <td>77.52</td> </tr> </tbody> </table> The table shows per-task scores and a macro-average of those scores to determine a models’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark. ## Citation If you find this repository helpful, feel free to cite our publication: ``` @article{Smetanin2021Deep, author = {Sergey Smetanin and Mikhail Komarov}, title = {Deep transfer learning baselines for sentiment analysis in Russian}, journal = {Information Processing & Management}, volume = {58}, number = {3}, pages = {102484}, year = {2021}, issn = {0306-4573}, doi = {0.1016/j.ipm.2020.102484} } ``` Dataset: ``` @INPROCEEDINGS{Smetanin2019Sentiment, author={Sergey Smetanin and Michail Komarov}, booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)}, title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks}, year={2019}, volume={01}, pages={482-486}, doi={10.1109/CBI.2019.00062}, ISSN={2378-1963}, month={July} } ```
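The card above stops at the citations and does not show how to run the classifier. A minimal usage sketch — my assumption, not part of the card — presuming the checkpoint loads through the standard `transformers` text-classification pipeline and that the label names come from the model's `config.json`:

```python
from transformers import pipeline

# Usage sketch (assumption): load the fine-tuned checkpoint listed above.
classifier = pipeline(
    "text-classification",
    model="sismetanin/xlm_roberta_large-ru-sentiment-rureviews",
)

# A Russian product review: "Great quality, I recommend it!"
print(classifier("Отличное качество, рекомендую!"))
```

The same pattern should apply to the other sismetanin sentiment checkpoints listed below.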
sismetanin/xlm_roberta_large-ru-sentiment-rusentiment
2021-02-25T23:57:27.000Z
[ "pytorch", "xlm-roberta", "text-classification", "ru", "transformers", "sentiment analysis", "Russian" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
sismetanin
55
transformers
--- language: - ru tags: - sentiment analysis - Russian --- ## XML-RoBERTa-Large-ru-sentiment-RuSentiment XML-RoBERTa-Large-ru-sentiment-RuSentiment is a [XML-RoBERTa-Large](https://huggingface.co/xlm-roberta-large) model fine-tuned on [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte. <table> <thead> <tr> <th rowspan="4">Model</th> <th rowspan="4">Score<br></th> <th rowspan="4">Rank</th> <th colspan="12">Dataset</th> </tr> <tr> <td colspan="6">SentiRuEval-2016<br></td> <td colspan="2" rowspan="2">RuSentiment</td> <td rowspan="2">KRND</td> <td rowspan="2">LINIS Crowd</td> <td rowspan="2">RuTweetCorp</td> <td rowspan="2">RuReviews</td> </tr> <tr> <td colspan="3">TC</td> <td colspan="3">Banks</td> </tr> <tr> <td>micro F1</td> <td>macro F1</td> <td>F1</td> <td>micro F1</td> <td>macro F1</td> <td>F1</td> <td>wighted</td> <td>F1</td> <td>F1</td> <td>F1</td> <td>F1</td> <td>F1</td> </tr> </thead> <tbody> <tr> <td>SOTA</td> <td>n/s</td> <td></td> <td>76.71</td> <td>66.40</td> <td>70.68</td> <td>67.51</td> <td>69.53</td> <td>74.06</td> <td>78.50</td> <td>n/s</td> <td>73.63</td> <td>60.51</td> <td>83.68</td> <td>77.44</td> </tr> <tr> <td>XLM-RoBERTa-Large</td> <td>76.37</td> <td>1</td> <td>82.26</td> <td>76.36</td> <td>79.42</td> <td>76.35</td> <td>76.08</td> <td>80.89</td> <td>78.31</td> <td>75.27</td> <td>75.17</td> <td>60.03</td> <td>88.91</td> <td>78.81</td> </tr> <tr> <td>SBERT-Large</td> <td>75.43</td> <td>2</td> <td>78.40</td> <td>71.36</td> <td>75.14</td> <td>72.39</td> <td>71.87</td> <td>77.72</td> <td>78.58</td> <td>75.85</td> <td>74.20</td> <td>60.64</td> <td>88.66</td> <td>77.41</td> </tr> <tr> <td>MBARTRuSumGazeta</td> <td>74.70</td> <td>3</td> <td>76.06</td> <td>68.95</td> <td>73.04</td> <td>72.34</td> <td>71.93</td> <td>77.83</td> <td>76.71</td> <td>73.56</td> <td>74.18</td> <td>60.54</td> <td>87.22</td> <td>77.51</td> </tr> <tr> <td>Conversational RuBERT</td> <td>74.44</td> <td>4</td> <td>76.69</td> <td>69.09</td> <td>73.11</td> <td>69.44</td> <td>68.68</td> <td>75.56</td> <td>77.31</td> <td>74.40</td> <td>73.10</td> <td>59.95</td> <td>87.86</td> <td>77.78</td> </tr> <tr> <td>LaBSE</td> <td>74.11</td> <td>5</td> <td>77.00</td> <td>69.19</td> <td>73.55</td> <td>70.34</td> <td>69.83</td> <td>76.38</td> <td>74.94</td> <td>70.84</td> <td>73.20</td> <td>59.52</td> <td>87.89</td> <td>78.47</td> </tr> <tr> <td>XLM-RoBERTa-Base</td> <td>73.60</td> <td>6</td> <td>76.35</td> <td>69.37</td> <td>73.42</td> <td>68.45</td> <td>67.45</td> <td>74.05</td> <td>74.26</td> <td>70.44</td> <td>71.40</td> <td>60.19</td> <td>87.90</td> <td>78.28</td> </tr> <tr> <td>RuBERT</td> <td>73.45</td> <td>7</td> <td>74.03</td> <td>66.14</td> <td>70.75</td> <td>66.46</td> <td>66.40</td> <td>73.37</td> <td>75.49</td> <td>71.86</td> <td>72.15</td> <td>60.55</td> <td>86.99</td> <td>77.41</td> </tr> <tr> <td>MBART-50-Large-Many-to-Many</td> <td>73.15</td> <td>8</td> <td>75.38</td> <td>67.81</td> <td>72.26</td> <td>67.13</td> <td>66.97</td> <td>73.85</td> <td>74.78</td> <td>70.98</td> <td>71.98</td> <td>59.20</td> <td>87.05</td> <td>77.24</td> </tr> <tr> <td>SlavicBERT</td> <td>71.96</td> <td>9</td> <td>71.45</td> <td>63.03</td> <td>68.44</td> <td>64.32</td> <td>63.99</td> <td>71.31</td> <td>72.13</td> <td>67.57</td> <td>72.54</td> <td>58.70</td> <td>86.43</td> <td>77.16</td> </tr> <tr> <td>EnRuDR-BERT</td> <td>71.51</td> <td>10</td> <td>72.56</td> <td>64.74</td> <td>69.07</td> <td>61.44</td> <td>60.21</td> 
<td>68.34</td> <td>74.19</td> <td>69.94</td> <td>69.33</td> <td>56.55</td> <td>87.12</td> <td>77.95</td> </tr> <tr> <td>RuDR-BERT</td> <td>71.14</td> <td>11</td> <td>72.79</td> <td>64.23</td> <td>68.36</td> <td>61.86</td> <td>60.92</td> <td>68.48</td> <td>74.65</td> <td>70.63</td> <td>68.74</td> <td>54.45</td> <td>87.04</td> <td>77.91</td> </tr> <tr> <td>MBART-50-Large</td> <td>69.46</td> <td>12</td> <td>70.91</td> <td>62.67</td> <td>67.24</td> <td>61.12</td> <td>60.25</td> <td>68.41</td> <td>72.88</td> <td>68.63</td> <td>70.52</td> <td>46.39</td> <td>86.48</td> <td>77.52</td> </tr> </tbody> </table> The table shows per-task scores and a macro-average of those scores to determine a models’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark. ## Citation If you find this repository helpful, feel free to cite our publication: ``` @article{Smetanin2021Deep, author = {Sergey Smetanin and Mikhail Komarov}, title = {Deep transfer learning baselines for sentiment analysis in Russian}, journal = {Information Processing & Management}, volume = {58}, number = {3}, pages = {102484}, year = {2021}, issn = {0306-4573}, doi = {0.1016/j.ipm.2020.102484} } ``` Dataset: ``` @inproceedings{rogers2018rusentiment, title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian}, author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex}, booktitle={Proceedings of the 27th international conference on computational linguistics}, pages={755--763}, year={2018} } ```
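To make the scoring rule above concrete: metrics are first averaged within a task, and the per-task scores are then macro-averaged without weighting. The sketch below uses placeholder numbers and one plausible grouping of the table's columns — not values or groupings taken from the leaderboard itself:

```python
# Illustration of the leaderboard score; placeholder numbers, not table values.
task_metrics = {
    "SentiRuEval-2016 TC": [80.0, 75.0, 78.0],    # micro F1, macro F1, F1
    "SentiRuEval-2016 Banks": [76.0, 74.0, 79.0],
    "RuSentiment": [78.0, 75.0],                  # weighted F1 and F1, averaged first
    "KRND": [75.0],
    "LINIS Crowd": [60.0],
    "RuTweetCorp": [88.0],
    "RuReviews": [78.0],
}

# Average metrics within each task, then take an unweighted macro-average across tasks.
task_scores = {task: sum(vals) / len(vals) for task, vals in task_metrics.items()}
leaderboard_score = sum(task_scores.values()) / len(task_scores)
print(round(leaderboard_score, 2))
```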
sismetanin/xlm_roberta_large-ru-sentiment-rutweetcorp
2021-02-22T02:27:46.000Z
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
sismetanin
11
transformers
sismetanin/xlm_roberta_large-ru-sentiment-sentirueval2016
2021-02-25T02:52:29.000Z
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
sismetanin
6
transformers
sj36/sadface
2021-04-19T03:47:36.000Z
[]
[ ".gitattributes" ]
sj36
0
skimai/electra-small-spanish
2020-05-08T19:16:48.000Z
[ "pytorch", "transformers" ]
[ ".gitattributes", "checkpoint", "config.json", "model.ckpt-1000000.data-00000-of-00001", "model.ckpt-1000000.index", "model.ckpt-1000000.meta", "pytorch_model.bin", "vocab.txt" ]
skimai
45
transformers
skimai/spanberta-base-cased-ner-conll02
2021-05-20T21:50:52.000Z
[ "pytorch", "jax", "roberta", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
skimai
52
transformers
skimai/spanberta-base-cased
2021-05-20T21:52:23.000Z
[ "pytorch", "jax", "roberta", "transformers" ]
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
skimai
490
transformers
skkeshri/distilbert-base-uncased
2021-04-06T11:06:29.000Z
[]
[ ".gitattributes" ]
skkeshri
0
skplanet/dialog-koelectra-small-discriminator
2021-04-13T01:15:27.000Z
[ "pytorch", "electra", "pretraining", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "tokenizer_config.json", "vocab.txt" ]
skplanet
30
transformers
# Dialog-KoELECTRA Github : [https://github.com/skplanet/Dialog-KoELECTRA](https://github.com/skplanet/Dialog-KoELECTRA) ## Introduction **Dialog-KoELECTRA** is a language model specialized for dialogue. It was trained with 22GB colloquial and written style Korean text data. Dialog-ELECTRA model is made based on the [ELECTRA](https://openreview.net/pdf?id=r1xMH1BtvB) model. ELECTRA is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. <br> ## Released Models We are initially releasing small version pre-trained model. The model was trained on Korean text. We hope to release other models, such as base/large models, in the future. | Model | Layers | Hidden Size | Params | Max<br/>Seq Len | Learning<br/>Rate | Batch Size | Train Steps | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | Dialog-KoELECTRA-Small | 12 | 256 | 14M | 128 | 1e-4 | 512 | 700K | <br> ## Model Performance Dialog-KoELECTRA shows strong performance in conversational downstream tasks. | | **NSMC**<br/>(acc) | **Question Pair**<br/>(acc) | **Korean-Hate-Speech**<br/>(F1) | **Naver NER**<br/>(F1) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | | :--------------------- | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | | DistilKoBERT | 88.60 | 92.48 | 60.72 | 84.65 | 72.00 | 72.59 | | **Dialog-KoELECTRA-Small** | **90.01** | **94.99** | **68.26** | **85.51** | **78.54** | **78.96** | <br> ## Train Data <table class="tg"> <thead> <tr> <th class="tg-c3ow"></th> <th class="tg-c3ow">corpus name</th> <th class="tg-c3ow">size</th> </tr> </thead> <tbody> <tr> <td class="tg-c3ow" rowspan="4">dialog</td> <td class="tg-0pky"><a href="https://aihub.or.kr/aidata/85" target="_blank" rel="noopener noreferrer">Aihub Korean dialog corpus</a></td> <td class="tg-c3ow" rowspan="4">7GB</td> </tr> <tr> <td class="tg-0pky"><a href="https://corpus.korean.go.kr/" target="_blank" rel="noopener noreferrer">NIKL Spoken corpus</a></td> </tr> <tr> <td class="tg-0pky"><a href="https://github.com/songys/Chatbot_data" target="_blank" rel="noopener noreferrer">Korean chatbot data</a></td> </tr> <tr> <td class="tg-0pky"><a href="https://github.com/Beomi/KcBERT" target="_blank" rel="noopener noreferrer">KcBERT</a></td> </tr> <tr> <td class="tg-c3ow" rowspan="2">written</td> <td class="tg-0pky"><a href="https://corpus.korean.go.kr/" target="_blank" rel="noopener noreferrer">NIKL Newspaper corpus</a></td> <td class="tg-c3ow" rowspan="2">15GB</td> </tr> <tr> <td class="tg-0pky"><a href="https://github.com/lovit/namuwikitext" target="_blank" rel="noopener noreferrer">namuwikitext</a></td> </tr> </tbody> </table> <br> ## Vocabulary We applied morpheme analysis using [huggingface_konlpy](https://github.com/lovit/huggingface_konlpy) when creating a vocabulary dictionary. As a result of the experiment, it showed better performance than a vocabulary dictionary created without applying morpheme analysis. 
<table> <thead> <tr> <th>vocabulary size</th> <th>unused token size</th> <th>limit alphabet</th> <th>min frequency</th> </tr> </thead> <tbody> <tr> <td>40,000</td> <td>500</td> <td>6,000</td> <td>3</td> </tr> </tbody> </table> <br>
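As a hypothetical usage sketch for the discriminator checkpoint above — assuming it loads with the standard `ElectraForPreTraining` head; the Korean example sentence and the 0.5 threshold are illustrative — the model can be asked which tokens look "replaced":

```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

# Hypothetical sketch: load the discriminator checkpoint released above.
name = "skplanet/dialog-koelectra-small-discriminator"
tokenizer = AutoTokenizer.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name)

# Per-token logits; higher values mean the discriminator thinks a token was replaced.
inputs = tokenizer("오늘 날씨 어때?", return_tensors="pt")  # "How is the weather today?"
with torch.no_grad():
    logits = model(**inputs).logits

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
flags = (torch.sigmoid(logits)[0] > 0.5).tolist()
print(list(zip(tokens, flags)))
```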
skplanet/dialog-koelectra-small-generator
2021-04-13T01:15:45.000Z
[ "pytorch", "electra", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "tokenizer_config.json", "vocab.txt" ]
skplanet
15
transformers
# Dialog-KoELECTRA Github : [https://github.com/skplanet/Dialog-KoELECTRA](https://github.com/skplanet/Dialog-KoELECTRA) ## Introduction **Dialog-KoELECTRA** is a language model specialized for dialogue. It was trained with 22GB colloquial and written style Korean text data. Dialog-ELECTRA model is made based on the [ELECTRA](https://openreview.net/pdf?id=r1xMH1BtvB) model. ELECTRA is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. <br> ## Released Models We are initially releasing small version pre-trained model. The model was trained on Korean text. We hope to release other models, such as base/large models, in the future. | Model | Layers | Hidden Size | Params | Max<br/>Seq Len | Learning<br/>Rate | Batch Size | Train Steps | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | Dialog-KoELECTRA-Small | 12 | 256 | 14M | 128 | 1e-4 | 512 | 700K | <br> ## Model Performance Dialog-KoELECTRA shows strong performance in conversational downstream tasks. | | **NSMC**<br/>(acc) | **Question Pair**<br/>(acc) | **Korean-Hate-Speech**<br/>(F1) | **Naver NER**<br/>(F1) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | | :--------------------- | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | | DistilKoBERT | 88.60 | 92.48 | 60.72 | 84.65 | 72.00 | 72.59 | | **Dialog-KoELECTRA-Small** | **90.01** | **94.99** | **68.26** | **85.51** | **78.54** | **78.96** | <br> ## Train Data <table class="tg"> <thead> <tr> <th class="tg-c3ow"></th> <th class="tg-c3ow">corpus name</th> <th class="tg-c3ow">size</th> </tr> </thead> <tbody> <tr> <td class="tg-c3ow" rowspan="4">dialog</td> <td class="tg-0pky"><a href="https://aihub.or.kr/aidata/85" target="_blank" rel="noopener noreferrer">Aihub Korean dialog corpus</a></td> <td class="tg-c3ow" rowspan="4">7GB</td> </tr> <tr> <td class="tg-0pky"><a href="https://corpus.korean.go.kr/" target="_blank" rel="noopener noreferrer">NIKL Spoken corpus</a></td> </tr> <tr> <td class="tg-0pky"><a href="https://github.com/songys/Chatbot_data" target="_blank" rel="noopener noreferrer">Korean chatbot data</a></td> </tr> <tr> <td class="tg-0pky"><a href="https://github.com/Beomi/KcBERT" target="_blank" rel="noopener noreferrer">KcBERT</a></td> </tr> <tr> <td class="tg-c3ow" rowspan="2">written</td> <td class="tg-0pky"><a href="https://corpus.korean.go.kr/" target="_blank" rel="noopener noreferrer">NIKL Newspaper corpus</a></td> <td class="tg-c3ow" rowspan="2">15GB</td> </tr> <tr> <td class="tg-0pky"><a href="https://github.com/lovit/namuwikitext" target="_blank" rel="noopener noreferrer">namuwikitext</a></td> </tr> </tbody> </table> <br> ## Vocabulary We applied morpheme analysis using [huggingface_konlpy](https://github.com/lovit/huggingface_konlpy) when creating a vocabulary dictionary. As a result of the experiment, it showed better performance than a vocabulary dictionary created without applying morpheme analysis. 
<table> <thead> <tr> <th>vocabulary size</th> <th>unused token size</th> <th>limit alphabet</th> <th>min frequency</th> </tr> </thead> <tbody> <tr> <td>40,000</td> <td>500</td> <td>6,000</td> <td>3</td> </tr> </tbody> </table> <br>
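The generator checkpoint above is tagged `fill-mask`; a minimal sketch — assuming the vocabulary's standard `[MASK]` token, with an illustrative Korean prompt — would be:

```python
from transformers import pipeline

# Hypothetical sketch: masked-token prediction with the generator checkpoint.
fill = pipeline("fill-mask", model="skplanet/dialog-koelectra-small-generator")

# "The weather today is really [MASK]." (illustrative Korean prompt)
for candidate in fill("오늘 날씨가 정말 [MASK].")[:3]:
    print(candidate["token_str"], round(candidate["score"], 3))
```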
skt/kogpt2-base-v2
2021-05-24T07:20:21.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "ko", "transformers", "license:cc-by-nc-sa 4.0", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "tokenizer.json" ]
skt
14,803
transformers
--- language: ko tags: - gpt2 license: cc-by-nc-sa 4.0 --- For more details: https://github.com/SKT-AI/KoGPT2
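The card only links to the GitHub repository; as a hedged sketch of basic text generation — assuming the checkpoint loads with `AutoTokenizer`/`AutoModelForCausalLM`, with an illustrative prompt and sampling settings:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: Korean text generation with KoGPT2.
tokenizer = AutoTokenizer.from_pretrained("skt/kogpt2-base-v2")
model = AutoModelForCausalLM.from_pretrained("skt/kogpt2-base-v2")

# "To build muscle, ..." (illustrative Korean prompt)
input_ids = tokenizer.encode("근육이 커지기 위해서는", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```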
skylord/greek_lsr_1
2021-03-26T05:37:48.000Z
[ "pytorch", "wav2vec2", "el", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "all_results.json", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "train_results.json", "trainer_state.json", "training_args.bin", "vocab.json", ".ipynb_checkpoints/README-checkpoint.md" ]
skylord
9
transformers
--- language: el datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Greek XLSR Wav2Vec2 Large 53 results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice el type: common_voice args: el metrics: - name: Test WER type: wer value: 56.253154 --- # Wav2Vec2-Large-XLSR-53-Greek Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice), ... and ... dataset{s}. #TODO: replace {language} with your language, *e.g.* French and eventually add more datasets that were used and eventually remove common voice if model was not trained on common voice When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "el", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1") model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Greek test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "el", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1") model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1") model.to("cuda") chars_to_ignore_regex = '[\\\\\\\\,\\\\\\\\?\\\\\\\\.\\\\\\\\!\\\\\\\\-\\\\\\\\;\\\\\\\\:\\\\\\\\"\\\\\\\\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 56.253154 % ## Training The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training. The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here.
skylord/wav2vec2-large-xlsr-greek-1
2021-03-26T13:43:40.000Z
[ "pytorch", "wav2vec2", "el", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "all_results.json", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "train_results.json", "trainer_state.json", "training_args.bin", "vocab.json", ".ipynb_checkpoints/README-checkpoint.md" ]
skylord
7
transformers
--- language: el datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Greek XLSR Wav2Vec2 Large 53 results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice el type: common_voice args: el metrics: - name: Test WER type: wer value: 34.006258 --- # Wav2Vec2-Large-XLSR-53-Greek Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice), ... and ... dataset{s}. #TODO: replace {language} with your language, *e.g.* French and eventually add more datasets that were used and eventually remove common voice if model was not trained on common voice When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "el", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1") model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Greek test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "el", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1") model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1") model.to("cuda") chars_to_ignore_regex = '[\\\\\\\\,\\\\\\\\?\\\\\\\\.\\\\\\\\!\\\\\\\\-\\\\\\\\;\\\\\\\\:\\\\\\\\"\\\\\\\\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 34.006258 % ## Training The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training. The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here.
skylord/wav2vec2-large-xlsr-greek-2
2021-03-31T09:42:31.000Z
[ "pytorch", "wav2vec2", "el", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "all_results.json", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "train_results.json", "trainer_state.json", "training_args.bin", "vocab.json", ".ipynb_checkpoints/README-checkpoint.md" ]
skylord
13
transformers
--- language: el datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Greek XLSR Wav2Vec2 Large 53 results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice el type: common_voice args: el metrics: - name: Test WER type: wer value: 45.048955 --- # Wav2Vec2-Large-XLSR-53-Greek Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice), The Greek CV data has a majority of male voices. To balance it synthesised female voices were created using the approach discussed here [slack](https://huggingface.slack.com/archives/C01QZ90Q83Z/p1616741140114800) The text from the common-voice dataset was used to synthesize vocies of female speakers using [Googe's TTS Standard Voice model](https://cloud.google.com/text-to-speech) Fine-tuned on facebook/wav2vec2-large-xlsr-53 using Greek CommonVoice :: 5 epochs >> 56.25% WER Resuming from checkpoints trained for another 15 epochs >> 34.00% Added synthesised female voices trained for 12 epochs >> 34.00% (no change) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "el", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1") model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Greek test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "el", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1") model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1") model.to("cuda") chars_to_ignore_regex = '[\\\\\\\\,\\\\\\\\?\\\\\\\\.\\\\\\\\!\\\\\\\\-\\\\\\\\;\\\\\\\\:\\\\\\\\"\\\\\\\\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 45.048955 % ## Training The Common Voice `train`, `validation`, datasets were used for training as well as The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here.
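The augmentation step described in the card above (synthesizing female Greek speech from Common Voice transcripts with Google's TTS) is not shown in code. A rough sketch of what that step could look like — assuming the `google-cloud-texttospeech` client library and a standard `el-GR` voice, neither of which is specified in the card:

```python
from google.cloud import texttospeech

# Rough sketch; assumes the google-cloud-texttospeech client and a standard el-GR voice.
client = texttospeech.TextToSpeechClient()

def synthesize(sentence: str, out_path: str) -> None:
    """Render one Common Voice transcript as 16 kHz female Greek speech."""
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=sentence),
        voice=texttospeech.VoiceSelectionParams(
            language_code="el-GR",
            ssml_gender=texttospeech.SsmlVoiceGender.FEMALE,
        ),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.LINEAR16,
            sample_rate_hertz=16_000,
        ),
    )
    with open(out_path, "wb") as f:
        f.write(response.audio_content)

synthesize("Καλημέρα, τι κάνεις;", "synthetic_female_0.wav")
```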
skylord/wav2vec2-large-xlsr-hindi
2021-04-20T07:24:00.000Z
[ "pytorch", "wav2vec2", "hi", "dataset:common_voice", "dataset:indic tts", "dataset:iiith", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json", ".ipynb_checkpoints/README-checkpoint.md", ".ipynb_checkpoints/vocab-checkpoint.json" ]
skylord
50
transformers
--- language: hi datasets: - common_voice - indic tts - iiith metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Hindi XLSR Wav2Vec2 Large 53 results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: - name: Common Voice hi type: common_voice args: hi - name: Indic IIT (IITM) type: indic args: hi - name: IIITH Indic Dataset type: iiith args: hi metrics: - name: Custom Dataset Hindi WER type: wer value: 17.23 - name: CommonVoice Hindi (Test) WER type: wer value: 56.46 --- # Wav2Vec2-Large-XLSR-53-Hindi Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hindi using the following datasets: - [Common Voice](https://huggingface.co/datasets/common_voice), - [Indic TTS- IITM](https://www.iitm.ac.in/donlab/tts/index.php) and - [IIITH - Indic Speech Datasets](http://speech.iiit.ac.in/index.php/research-svl/69.html) The Indic datasets are well balanced across gender and accents. However the CommonVoice dataset is skewed towards male voices Fine-tuned on facebook/wav2vec2-large-xlsr-53 using Hindi dataset :: 60 epochs >> 17.05% WER When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "hi", split="test") processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi") model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the following two datasets: 1. Custom dataset created from 20% of Indic, IIITH and CV (test): 17. 2. 
CommonVoice Hindi test dataset ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re ## Load the datasets test_dataset = load_dataset("common_voice", "hi", split="test") indic = load_dataset("csv", data_files= {'train':"/workspace/data/hi2/indic_train_full.csv", "test": "/workspace/data/hi2/indic_test_full.csv"}, download_mode="force_redownload") iiith = load_dataset("csv", data_files= {"train": "/workspace/data/hi2/iiit_hi_train.csv", "test": "/workspace/data/hi2/iiit_hi_test.csv"}, download_mode="force_redownload") ## Pre-process datasets and concatenate to create test dataset # Drop columns of common_voice split = ['train', 'test', 'validation', 'other', 'invalidated'] for sp in split: common_voice[sp] = common_voice[sp].remove_columns(['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment']) common_voice = common_voice.rename_column('path', 'audio_path') common_voice = common_voice.rename_column('sentence', 'target_text') train_dataset = datasets.concatenate_datasets([indic['train'], iiith['train'], common_voice['train']]) test_dataset = datasets.concatenate_datasets([indic['test'], iiith['test'], common_voice['test'], common_voice['validation']]) ## Load model from HF hub wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi") model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\'\;\:\"\“\%\‘\”\�Utrnle\_]' unicode_ignore_regex = r'[dceMaWpmFui\xa0\u200d]' # Some unwanted unicode chars resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["target_text"] = re.sub(chars_to_ignore_regex, '', batch["target_text"]) batch["target_text"] = re.sub(unicode_ignore_regex, '', batch["target_text"]) speech_array, sampling_rate = torchaudio.load(batch["audio_path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result on custom dataset**: 17.23 % ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "hi", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi") model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\'\;\:\"\“\%\‘\”\�Utrnle\_]' unicode_ignore_regex = r'[dceMaWpmFui\xa0\u200d]' # Some unwanted unicode chars resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. 
# We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(unicode_ignore_regex, '', re.sub(chars_to_ignore_regex, '', batch["sentence"])) speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result on CommonVoice**: 56.46 % ## Training The Common Voice `train` and `validation` splits were used for training, together with the Indic TTS and IIITH datasets described above. The script used for training and the wandb dashboard can be found [here](https://wandb.ai/thinkevolve/huggingface/reports/Project-Hindi-XLSR-Large--Vmlldzo2MTI2MTQ)
sm6342/FinRoberta
2021-05-20T21:54:09.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "training_args.bin", "vocab.json" ]
sm6342
6
transformers
"hello"
sm6342/Health101
2021-05-20T21:55:29.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "training_args.bin", "vocab.json" ]
sm6342
16
transformers
smanjil/German-MedBERT
2021-05-20T06:47:50.000Z
[ "pytorch", "jax", "bert", "masked-lm", "de", "transformers", "exbert", "German", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "loss-plot.html", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
smanjil
1,871
transformers
--- language: de tags: - exbert - German --- <a href="https://huggingface.co/exbert/?model=smanjil/German-MedBERT"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a> # German Medical BERT This is a German BERT model fine-tuned on the medical domain. It has only been trained on the masked-language-modeling objective; it can then be fine-tuned for a downstream task of your choice, and I used it for the NTS-ICD-10 text classification task. ## Overview **Language model:** bert-base-german-cased **Language:** German **Fine-tuning:** Medical articles (diseases, symptoms, therapies, etc.) **Eval data:** NTS-ICD-10 dataset (classification) **Infrastructure:** Google Colab ## Details - Fine-tuned with PyTorch and the Hugging Face library on a Colab GPU. - Standard fine-tuning hyperparameters, as recommended in the original BERT paper. - The classification task, however, required training for up to 25 epochs. ## Performance (Micro precision, recall, and F1 score for multilabel code classification) |Models|P|R|F1| |:------|:------|:------|:------| |German BERT|86.04|75.82|80.60| |German MedBERT-256 (fine-tuned)|87.41|77.97|82.42| |German MedBERT-512 (fine-tuned)|87.75|78.26|82.73| ## Author Manjil Shrestha: `shresthamanjil21 [at] gmail.com` ## Related Paper: [Report](https://opus4.kobv.de/opus4-rhein-waal/frontdoor/index/index/searchtype/collection/id/16225/start/0/rows/10/doctypefq/masterthesis/docId/740) Get in touch: [LinkedIn](https://www.linkedin.com/in/manjil-shrestha-038527b4/)
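Since the card notes the model was trained only on the masked-language-modeling objective, a minimal sketch of querying it directly — the German example sentence is illustrative and not from the card:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Hypothetical sketch: top-5 predictions for a masked German token.
name = "smanjil/German-MedBERT"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

# "The patient complains of severe [MASK]." (illustrative sentence)
inputs = tokenizer("Der Patient klagt über starke [MASK].", return_tensors="pt")
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits

top_ids = logits[0, mask_pos].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```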
smeylan/childes-bert
2021-05-20T06:49:00.000Z
[ "pytorch", "jax", "bert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "eval_results_mlm.txt", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "train_results.txt", "trainer_state.json", "training_args.bin", "vocab.txt" ]
smeylan
19
transformers
--- language: "en" tags: - language-modeling license: "cc-by-sa-4.0" datasets: - childes ---
snehg/GPT2_json
2020-12-28T15:53:56.000Z
[]
[ ".gitattributes" ]
snehg
0
snehg/gpt2-json
2020-12-26T20:44:14.000Z
[]
[ ".gitattributes" ]
snehg
0
snrspeaks/t5-one-line-summary
2021-06-18T20:06:28.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".DS_Store", ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer.json", "tokenizer_config.json" ]
snrspeaks
0
transformers
snunlp/KR-BERT-char16424
2021-05-20T06:49:57.000Z
[ "pytorch", "jax", "bert", "transformers" ]
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "tokenizer_config.json", "vocab.txt", ".AppleDouble/.Parent", ".AppleDouble/vocab.txt" ]
snunlp
433
transformers
snunlp/KR-Medium
2021-05-20T06:50:57.000Z
[ "pytorch", "jax", "bert", "transformers" ]
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "tokenizer_config.json", "vocab.txt" ]
snunlp
78
transformers
socialmediaie/TRAC2020_ALL_A_bert-base-multilingual-uncased
2021-05-20T06:52:09.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
socialmediaie
33
transformers
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our trained models as well as evaluation metrics during traing are available at: https://databank.illinois.edu/datasets/IDB-8882752# We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/, these models can be further fine-tuned on your dataset of choice. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1. 
``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = f"socialmediaie/TRAC2020_{lang}_{lang.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() # If you want to further fine-tune the model you can reset the model to model.train() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
socialmediaie/TRAC2020_ALL_B_bert-base-multilingual-uncased
2021-05-20T06:53:23.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
socialmediaie
24
transformers
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our trained models as well as evaluation metrics during traing are available at: https://databank.illinois.edu/datasets/IDB-8882752# We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/, these models can be further fine-tuned on your dataset of choice. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1. 
``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = f"socialmediaie/TRAC2020_{lang}_{lang.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() # If you want to further fine-tune the model you can reset the model to model.train() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
socialmediaie/TRAC2020_ALL_C_bert-base-multilingual-uncased
2021-05-20T06:54:45.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
socialmediaie
25
transformers
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset, please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1. ``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = 
f"socialmediaie/{lang}_{lang.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
socialmediaie/TRAC2020_ENG_A_bert-base-uncased
2021-05-20T06:55:44.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
socialmediaie
18
transformers
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752# We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset, please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = f"socialmediaie/TRAC2020_{lang}_{task.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() # If you want to further fine-tune the model you can reset the model to model.train() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
socialmediaie/TRAC2020_ENG_B_bert-base-uncased
2021-05-20T06:56:37.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
socialmediaie
18
transformers
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752# We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset, please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = f"socialmediaie/TRAC2020_{lang}_{task.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() # If you want to further fine-tune the model you can reset the model to model.train() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
socialmediaie/TRAC2020_ENG_C_bert-base-uncased
2021-05-20T06:57:39.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
socialmediaie
18
transformers
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752# We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset, please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = f"socialmediaie/TRAC2020_{lang}_{task.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() # If you want to further fine-tune the model you can reset the model to model.train() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
socialmediaie/TRAC2020_HIN_A_bert-base-multilingual-uncased
2021-05-20T06:58:51.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
socialmediaie
17
transformers
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752# We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset, please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = f"socialmediaie/TRAC2020_{lang}_{task.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() # If you want to further fine-tune the model you can reset the model to model.train() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
socialmediaie/TRAC2020_HIN_B_bert-base-multilingual-uncased
2021-05-20T07:00:11.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
socialmediaie
19
transformers
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752# We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset, please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = f"socialmediaie/TRAC2020_{lang}_{task.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() # If you want to further fine-tune the model you can reset the model to model.train() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
socialmediaie/TRAC2020_HIN_C_bert-base-multilingual-uncased
2021-05-20T07:01:31.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
socialmediaie
16
transformers
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752# We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset, please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = f"socialmediaie/TRAC2020_{lang}_{task.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() # If you want to further fine-tune the model you can reset the model to model.train() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
socialmediaie/TRAC2020_IBEN_A_bert-base-multilingual-uncased
2021-05-20T07:03:18.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
socialmediaie
12
transformers
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752# We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset, please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = f"socialmediaie/TRAC2020_{lang}_{task.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() # If you want to further fine-tune the model you can reset the model to model.train() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
socialmediaie/TRAC2020_IBEN_B_bert-base-multilingual-uncased
2021-05-20T07:04:58.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
socialmediaie
15
transformers
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752# We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset, please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = f"socialmediaie/TRAC2020_{lang}_{task.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() # If you want to further fine-tune the model you can reset the model to model.train() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
socialmediaie/TRAC2020_IBEN_C_bert-base-multilingual-uncased
2021-05-20T07:06:16.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
socialmediaie
15
transformers
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752# We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset, please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = f"socialmediaie/TRAC2020_{lang}_{task.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() # If you want to further fine-tune the model you can reset the model to model.train() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
soham950/timelines_classifier
2021-05-20T07:07:42.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
soham950
6
transformers
soheeyang/dpr-ctx_encoder-single-trivia-base
2021-04-15T14:48:50.000Z
[ "pytorch", "tf", "dpr", "arxiv:2004.04906", "transformers" ]
[ ".DS_Store", ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "tf_model.h5", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
soheeyang
22
transformers
# DPRContextEncoder for TriviaQA ## dpr-ctx_encoder-single-trivia-base Dense Passage Retrieval (`DPR`) Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih, [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906), EMNLP 2020. This model is the context encoder of DPR trained solely on TriviaQA (single-trivia) using the [official implementation of DPR](https://github.com/facebookresearch/DPR). Disclaimer: This model is not from the authors of DPR, but my reproduction. The authors did not release the DPR weights trained solely on TriviaQA. I hope this model checkpoint can be helpful for those who want to use DPR trained only on TriviaQA. ## Performance The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0. The values in parentheses are those reported in the paper. | Top-K Passages | TriviaQA Dev | TriviaQA Test | |----------------|--------------|---------------| | 1 | 54.27 | 54.41 | | 5 | 71.11 | 70.99 | | 20 | 79.53 | 79.31 (79.4) | | 50 | 82.72 | 82.99 | | 100 | 85.07 | 84.99 (85.0) | ## How to Use Using `AutoModel` does not properly detect whether the checkpoint is for `DPRContextEncoder` or `DPRQuestionEncoder`. Therefore, please specify the exact class to use the model. ```python from transformers import DPRContextEncoder, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("soheeyang/dpr-ctx_encoder-single-trivia-base") ctx_encoder = DPRContextEncoder.from_pretrained("soheeyang/dpr-ctx_encoder-single-trivia-base") data = tokenizer("context comes here", return_tensors="pt") ctx_embedding = ctx_encoder(**data).pooler_output # embedding vector for context ```
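Retrieval usually requires embeddings for many passages at once. The following is a small batched-encoding sketch, not part of the original card, using the same checkpoint; the `max_length` value is only an illustrative choice.

```python
import torch
from transformers import DPRContextEncoder, AutoTokenizer

model_name = "soheeyang/dpr-ctx_encoder-single-trivia-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
ctx_encoder = DPRContextEncoder.from_pretrained(model_name).eval()

passages = [
    "First candidate passage comes here.",
    "A second, somewhat longer candidate passage comes here.",
]
# Pad/truncate so the whole batch fits in one tensor.
inputs = tokenizer(passages, padding=True, truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    ctx_embeddings = ctx_encoder(**inputs).pooler_output  # (num_passages, hidden_size)
print(ctx_embeddings.shape)
```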
soheeyang/dpr-question_encoder-single-trivia-base
2021-04-15T14:48:08.000Z
[ "pytorch", "tf", "dpr", "arxiv:2004.04906", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "tf_model.h5", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
soheeyang
24
transformers
# DPRQuestionEncoder for TriviaQA ## dpr-question_encoder-single-trivia-base Dense Passage Retrieval (`DPR`) Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih, [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906), EMNLP 2020. This model is the question encoder of DPR trained solely on TriviaQA (single-trivia) using the [official implementation of DPR](https://github.com/facebookresearch/DPR). Disclaimer: This model is not from the authors of DPR, but my reproduction. The authors did not release the DPR weights trained solely on TriviaQA. I hope this model checkpoint can be helpful for those who want to use DPR trained only on TriviaQA. ## Performance The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0. The values in parentheses are those reported in the paper. | Top-K Passages | TriviaQA Dev | TriviaQA Test | |----------------|--------------|---------------| | 1 | 54.27 | 54.41 | | 5 | 71.11 | 70.99 | | 20 | 79.53 | 79.31 (79.4) | | 50 | 82.72 | 82.99 | | 100 | 85.07 | 84.99 (85.0) | ## How to Use Using `AutoModel` does not properly detect whether the checkpoint is for `DPRContextEncoder` or `DPRQuestionEncoder`. Therefore, please specify the exact class to use the model. ```python from transformers import DPRQuestionEncoder, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("soheeyang/dpr-question_encoder-single-trivia-base") question_encoder = DPRQuestionEncoder.from_pretrained("soheeyang/dpr-question_encoder-single-trivia-base") data = tokenizer("question comes here", return_tensors="pt") question_embedding = question_encoder(**data).pooler_output # embedding vector for question ```
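Because the question and context encoders above were trained as a pair, a question can be scored against candidate passages with an inner product between the two pooled outputs. This is only a sketch combining the two trivia checkpoints; the example texts are placeholders.

```python
import torch
from transformers import DPRContextEncoder, DPRQuestionEncoder, AutoTokenizer

q_name = "soheeyang/dpr-question_encoder-single-trivia-base"
c_name = "soheeyang/dpr-ctx_encoder-single-trivia-base"
q_encoder = DPRQuestionEncoder.from_pretrained(q_name).eval()
c_encoder = DPRContextEncoder.from_pretrained(c_name).eval()
q_tokenizer = AutoTokenizer.from_pretrained(q_name)
c_tokenizer = AutoTokenizer.from_pretrained(c_name)

question = "who wrote the novel dracula"
passages = [
    "Dracula is an 1897 Gothic horror novel by Bram Stoker.",
    "The quick brown fox jumps over the lazy dog.",
]
with torch.no_grad():
    q_emb = q_encoder(**q_tokenizer(question, return_tensors="pt")).pooler_output
    c_emb = c_encoder(**c_tokenizer(passages, padding=True, truncation=True,
                                    return_tensors="pt")).pooler_output

# DPR is trained with inner-product similarity: higher score = more relevant passage.
scores = (q_emb @ c_emb.T).squeeze(0)
print(passages[int(scores.argmax())])
```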
soheeyang/rdr-ctx_encoder-single-nq-base
2021-04-15T15:58:10.000Z
[ "pytorch", "tf", "dpr", "arxiv:2010.10999", "arxiv:2004.04906", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "tf_model.h5", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
soheeyang
6
transformers
# rdr-ctx_encoder-single-nq-base Reader-Distilled Retriever (`RDR`) Sohee Yang and Minjoon Seo, [Is Retriever Merely an Approximator of Reader?](https://arxiv.org/abs/2010.10999), arXiv 2020 The paper proposes to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit. The model is a [DPR](https://arxiv.org/abs/2004.04906) retriever further finetuned using knowledge distillation from the DPR reader. Using this approach, the answer recall rate increases by a large margin, especially at small numbers of top-k. This model is the context encoder of RDR trained solely on Natural Questions (NQ) (single-nq). This model is trained by the authors and is the official checkpoint of RDR. ## Performance The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0. The values of DPR on the NQ dev set are taken from Table 1 of the [paper of RDR](https://arxiv.org/abs/2010.10999). The values of DPR on the NQ test set are taken from the [codebase of DPR](https://github.com/facebookresearch/DPR). DPR-adv-hn is a new DPR model released in March 2021. It is trained on the original DPR NQ train set together with a version of that set whose hard negatives are mined with the DPR index built from the previous NQ checkpoint. Please refer to the [codebase of DPR](https://github.com/facebookresearch/DPR) for more details about DPR-adv-hn. | | Top-K Passages | 1 | 5 | 20 | 50 | 100 | |---------|------------------|-------|-------|-------|-------|-------| | **NQ Dev** | **DPR** | 44.2 | - | 76.9 | 81.3 | 84.2 | | | **RDR (This Model)** | **54.43** | **72.17** | **81.33** | **84.8** | **86.61** | | **NQ Test** | **DPR** | 45.87 | 68.14 | 79.97 | - | 85.87 | | | **DPR-adv-hn** | 52.47 | **72.24** | 81.33 | - | 87.29 | | | **RDR (This Model)** | **54.29** | 72.16 | **82.8** | **86.34** | **88.2** | ## How to Use RDR shares the same architecture as DPR and therefore uses `DPRContextEncoder` as the model class. Using `AutoModel` does not properly detect whether the checkpoint is for `DPRContextEncoder` or `DPRQuestionEncoder`. Therefore, please specify the exact class to use the model. ```python from transformers import DPRContextEncoder, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("soheeyang/rdr-ctx_encoder-single-nq-base") ctx_encoder = DPRContextEncoder.from_pretrained("soheeyang/rdr-ctx_encoder-single-nq-base") data = tokenizer("context comes here", return_tensors="pt") ctx_embedding = ctx_encoder(**data).pooler_output # embedding vector for context ```
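For retrieval, passage embeddings are usually precomputed once and kept in an index. The sketch below is not part of the original card: it encodes a few toy passages in small batches and stacks the vectors into a single matrix that could later be searched by inner product (e.g. with FAISS).

```python
import torch
from transformers import AutoTokenizer, DPRContextEncoder

tokenizer = AutoTokenizer.from_pretrained("soheeyang/rdr-ctx_encoder-single-nq-base")
ctx_encoder = DPRContextEncoder.from_pretrained("soheeyang/rdr-ctx_encoder-single-nq-base").eval()

passages = [
    "Natural Questions is a question answering dataset released by Google.",
    "The capital of France is Paris.",
    "Mount Everest is the highest mountain above sea level.",
]

embeddings = []
batch_size = 2
with torch.no_grad():
    for start in range(0, len(passages), batch_size):
        batch = passages[start:start + batch_size]
        inputs = tokenizer(batch, padding=True, truncation=True, max_length=256, return_tensors="pt")
        embeddings.append(ctx_encoder(**inputs).pooler_output)

# (num_passages, hidden_size), ready for inner-product search against question embeddings
index_matrix = torch.cat(embeddings, dim=0)
print(index_matrix.shape)
```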
soheeyang/rdr-ctx_encoder-single-trivia-base
2021-04-15T15:52:44.000Z
[ "pytorch", "tf", "dpr", "arxiv:2010.10999", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "tf_model.h5", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
soheeyang
10
transformers
# rdr-ctx_encoder-single-trivia-base Reader-Distilled Retriever (`RDR`) Sohee Yang and Minjoon Seo, [Is Retriever Merely an Approximator of Reader?](https://arxiv.org/abs/2010.10999), arXiv 2020 The paper proposes to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit. The model is a DPR retriever further finetuned using knowledge distillation from the DPR reader. Using this approach, the answer recall rate increases by a large margin, especially at small numbers of top-k. This model is the context encoder of RDR trained solely on TriviaQA (single-trivia). This model is trained by the authors and is the official checkpoint of RDR. ## Performance The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0. For the values of DPR, those in parentheses are directly taken from the paper. The values without parentheses are reported using the reproduction of DPR that consists of [this context encoder](https://huggingface.co/soheeyang/dpr-ctx_encoder-single-trivia-base) and [this question encoder](https://huggingface.co/soheeyang/dpr-question_encoder-single-trivia-base). | | Top-K Passages | 1 | 5 | 20 | 50 | 100 | |-------------|------------------|-----------|-----------|-----------|-----------|-----------| |**TriviaQA Dev** | **DPR** | 54.27 | 71.11 | 79.53 | 82.72 | 85.07 | | | **RDR (This Model)** | **61.84** | **75.93** | **82.56** | **85.35** | **87.00** | |**TriviaQA Test**| **DPR** | 54.41 | 70.99 | 79.31 (79.4) | 82.90 | 84.99 (85.0) | | | **RDR (This Model)** | **62.56** | **75.92** | **82.52** | **85.64** | **87.26** | ## How to Use RDR shares the same architecture as DPR and therefore uses `DPRContextEncoder` as the model class. Using `AutoModel` does not properly detect whether the checkpoint is for `DPRContextEncoder` or `DPRQuestionEncoder`. Therefore, please specify the exact class to use the model. ```python from transformers import DPRContextEncoder, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("soheeyang/rdr-ctx_encoder-single-trivia-base") ctx_encoder = DPRContextEncoder.from_pretrained("soheeyang/rdr-ctx_encoder-single-trivia-base") data = tokenizer("context comes here", return_tensors="pt") ctx_embedding = ctx_encoder(**data).pooler_output # embedding vector for context ```
soheeyang/rdr-question_encoder-single-nq-base
2021-04-15T15:58:07.000Z
[ "pytorch", "tf", "dpr", "arxiv:2010.10999", "arxiv:2004.04906", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "tf_model.h5", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
soheeyang
15
transformers
# rdr-question_encoder-single-nq-base Reader-Distilled Retriever (`RDR`) Sohee Yang and Minjoon Seo, [Is Retriever Merely an Approximator of Reader?](https://arxiv.org/abs/2010.10999), arXiv 2020 The paper proposes to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit. The model is a [DPR](https://arxiv.org/abs/2004.04906) retriever further finetuned using knowledge distillation from the DPR reader. Using this approach, the answer recall rate increases by a large margin, especially at small numbers of top-k. This model is the question encoder of RDR trained solely on Natural Questions (NQ) (single-nq). This model is trained by the authors and is the official checkpoint of RDR. ## Performance The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0. The values of DPR on the NQ dev set are taken from Table 1 of the [paper of RDR](https://arxiv.org/abs/2010.10999). The values of DPR on the NQ test set are taken from the [codebase of DPR](https://github.com/facebookresearch/DPR). DPR-adv-hn is a new DPR model released in March 2021. It is trained on the original DPR NQ train set together with a version of that set whose hard negatives are mined with the DPR index built from the previous NQ checkpoint. Please refer to the [codebase of DPR](https://github.com/facebookresearch/DPR) for more details about DPR-adv-hn. | | Top-K Passages | 1 | 5 | 20 | 50 | 100 | |---------|------------------|-------|-------|-------|-------|-------| | **NQ Dev** | **DPR** | 44.2 | - | 76.9 | 81.3 | 84.2 | | | **RDR (This Model)** | **54.43** | **72.17** | **81.33** | **84.8** | **86.61** | | **NQ Test** | **DPR** | 45.87 | 68.14 | 79.97 | - | 85.87 | | | **DPR-adv-hn** | 52.47 | **72.24** | 81.33 | - | 87.29 | | | **RDR (This Model)** | **54.29** | 72.16 | **82.8** | **86.34** | **88.2** | ## How to Use RDR shares the same architecture as DPR and therefore uses `DPRQuestionEncoder` as the model class. Using `AutoModel` does not properly detect whether the checkpoint is for `DPRContextEncoder` or `DPRQuestionEncoder`. Therefore, please specify the exact class to use the model. ```python from transformers import DPRQuestionEncoder, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("soheeyang/rdr-question_encoder-single-nq-base") question_encoder = DPRQuestionEncoder.from_pretrained("soheeyang/rdr-question_encoder-single-nq-base") data = tokenizer("question comes here", return_tensors="pt") question_embedding = question_encoder(**data).pooler_output # embedding vector for question ```
soheeyang/rdr-question_encoder-single-trivia-base
2021-04-15T15:59:29.000Z
[ "pytorch", "tf", "dpr", "arxiv:2010.10999", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "tf_model.h5", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
soheeyang
8
transformers
# rdr-question_encoder-single-trivia-base Reader-Distilled Retriever (`RDR`) Sohee Yang and Minjoon Seo, [Is Retriever Merely an Approximator of Reader?](https://arxiv.org/abs/2010.10999), arXiv 2020 The paper proposes to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit. The model is a DPR retriever further finetuned using knowledge distillation from the DPR reader. Using this approach, the answer recall rate increases by a large margin, especially at small numbers of top-k. This model is the question encoder of RDR trained solely on TriviaQA (single-trivia). This model is trained by the authors and is the official checkpoint of RDR. ## Performance The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0. For the values of DPR, those in parentheses are directly taken from the paper. The values without parentheses are reported using the reproduction of DPR that consists of [this context encoder](https://huggingface.co/soheeyang/dpr-ctx_encoder-single-trivia-base) and [this question encoder](https://huggingface.co/soheeyang/dpr-question_encoder-single-trivia-base). | | Top-K Passages | 1 | 5 | 20 | 50 | 100 | |-------------|------------------|-----------|-----------|-----------|-----------|-----------| |**TriviaQA Dev** | **DPR** | 54.27 | 71.11 | 79.53 | 82.72 | 85.07 | | | **RDR (This Model)** | **61.84** | **75.93** | **82.56** | **85.35** | **87.00** | |**TriviaQA Test**| **DPR** | 54.41 | 70.99 | 79.31 (79.4) | 82.90 | 84.99 (85.0) | | | **RDR (This Model)** | **62.56** | **75.92** | **82.52** | **85.64** | **87.26** | ## How to Use RDR shares the same architecture as DPR and therefore uses `DPRQuestionEncoder` as the model class. Using `AutoModel` does not properly detect whether the checkpoint is for `DPRContextEncoder` or `DPRQuestionEncoder`. Therefore, please specify the exact class to use the model. ```python from transformers import DPRQuestionEncoder, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("soheeyang/rdr-question_encoder-single-trivia-base") question_encoder = DPRQuestionEncoder.from_pretrained("soheeyang/rdr-question_encoder-single-trivia-base") data = tokenizer("question comes here", return_tensors="pt") question_embedding = question_encoder(**data).pooler_output # embedding vector for question ```
sokui/test
2021-05-31T00:11:42.000Z
[]
[ ".gitattributes" ]
sokui
0
somaimanguyat/Datebayo
2021-05-17T22:58:10.000Z
[]
[ ".gitattributes", "surrender" ]
somaimanguyat
0
somaimanguyat/FullOnline
2021-06-16T21:37:18.000Z
[]
[ ".gitattributes", "README.md" ]
somaimanguyat
0
somaimanguyat/MOVIEBEST
2021-05-09T21:48:54.000Z
[]
[ ".gitattributes", "README.md" ]
somaimanguyat
0
somaimanguyat/Satria
2021-06-15T22:02:03.000Z
[]
[ ".gitattributes", "README.md" ]
somaimanguyat
0
somaimanguyat/Satriabajahitam
2021-05-08T23:03:34.000Z
[]
[ ".gitattributes", "README.md" ]
somaimanguyat
0
somaimanguyat/WatchMov
2021-06-17T22:18:30.000Z
[]
[ ".gitattributes", "README.md" ]
somaimanguyat
0
somaimanguyat/alonelive
2021-05-23T22:37:02.000Z
[]
[ ".gitattributes", "README.md" ]
somaimanguyat
0
somaimanguyat/gemest
2021-05-25T22:24:51.000Z
[]
[ ".gitattributes", "README.md" ]
somaimanguyat
0
somaimanguyat/genjutsu
2021-05-18T22:25:30.000Z
[]
[ ".gitattributes", "README.md" ]
somaimanguyat
0
somaimanguyat/ikhlasinaja
2021-05-24T22:49:38.000Z
[]
[ ".gitattributes", "README.md" ]
somaimanguyat
0
somaimanguyat/kikikasep
2021-05-21T23:01:49.000Z
[]
[ ".gitattributes", "README.md" ]
somaimanguyat
0
somaimanguyat/moviehd
2021-05-09T22:22:18.000Z
[]
[ ".gitattributes", "movie21" ]
somaimanguyat
0
somaimanguyat/paralellmode
2021-05-30T22:19:49.000Z
[]
[ ".gitattributes", "README.md" ]
somaimanguyat
0
somaimanguyat/pikachu
2021-05-22T22:27:50.000Z
[]
[ ".gitattributes", "README.md" ]
somaimanguyat
0
somaimanguyat/uwuwugwmoy
2021-05-31T21:40:33.000Z
[]
[ ".gitattributes", "README.md" ]
somaimanguyat
0
song/bert_cn_finetuning
2021-05-20T07:08:53.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "eval_results.txt", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
song
17
transformers
soniakris/Sonia_model
2021-05-20T07:09:49.000Z
[ "tf", "bert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
soniakris
11
transformers
TensorFlow model using the MASK token
soniakris123/soniakris
2021-05-20T07:10:32.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
soniakris123
22
transformers
sonoisa/byt5-small-japanese
2021-06-04T13:14:22.000Z
[]
[ ".gitattributes" ]
sonoisa
0
sonoisa/t5-base-japanese-article-generation
2021-04-03T13:55:58.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
sonoisa
6
transformers
sonoisa/t5-base-japanese-question-generation
2021-04-03T14:09:41.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
sonoisa
112
transformers
sonoisa/t5-base-japanese-title-generation
2021-04-04T06:58:07.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
sonoisa
87
transformers
sonoisa/t5-base-japanese
2021-04-03T09:01:54.000Z
[ "pytorch", "t5", "ja", "dataset:wikipedia", "dataset:oscar", "dataset:cc100", "transformers", "text2text-generation", "seq2seq", "license:cc-by-sa-3.0" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
sonoisa
3,521
transformers
--- language: "ja" tags: - "t5" - "text2text-generation" - "seq2seq" license: "cc-by-sa-3.0" datasets: - "wikipedia" - "oscar" - "cc100" --- # 日本語T5事前学習済みモデル This is a T5 (Text-to-Text Transfer Transformer) model pretrained on Japanese corpus. 次の日本語コーパスを用いて事前学習を行ったT5 (Text-to-Text Transfer Transformer) モデルです。 * [Wikipedia](https://ja.wikipedia.org)の日本語ダンプデータ (2020年7月6日時点のもの) * [OSCAR](https://oscar-corpus.com)の日本語コーパス * [CC-100](http://data.statmt.org/cc-100/)の日本語コーパス このモデルは事前学習のみを行なったものであり、特定のタスクに利用するにはファインチューニングする必要があります。 本モデルにも、大規模コーパスを用いた言語モデルにつきまとう、学習データの内容の偏りに由来する偏った(倫理的ではなかったり、有害だったり、バイアスがあったりする)出力結果になる問題が潜在的にあります。 この問題が発生しうることを想定した上で、被害が発生しない用途にのみ利用するよう気をつけてください。 # 転移学習のサンプルコード https://github.com/sonoisa/t5-japanese # ベンチマーク livedoorニュースコーパスを用いたニュース記事のジャンル予測タスクの精度は次の通りです。 日本語T5 ([t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese), パラメータ数は222M) | label | precision | recall | f1-score | support | | ----------- | ----------- | ------- | -------- | ------- | | 0 | 0.96 | 0.94 | 0.95 | 130 | | 1 | 0.98 | 0.99 | 0.99 | 121 | | 2 | 0.96 | 0.96 | 0.96 | 123 | | 3 | 0.86 | 0.91 | 0.89 | 82 | | 4 | 0.96 | 0.97 | 0.97 | 129 | | 5 | 0.96 | 0.96 | 0.96 | 141 | | 6 | 0.98 | 0.98 | 0.98 | 127 | | 7 | 1.00 | 0.99 | 1.00 | 127 | | 8 | 0.99 | 0.97 | 0.98 | 120 | | accuracy | | | 0.97 | 1100 | | macro avg | 0.96 | 0.96 | 0.96 | 1100 | | weighted avg | 0.97 | 0.97 | 0.97 | 1100 | 比較対象: 多言語T5 ([google/mt5-small](https://huggingface.co/google/mt5-small), パラメータ数は300M) | label | precision | recall | f1-score | support | | ----------- | ----------- | ------- | -------- | ------- | | 0 | 0.91 | 0.88 | 0.90 | 130 | | 1 | 0.84 | 0.93 | 0.89 | 121 | | 2 | 0.93 | 0.80 | 0.86 | 123 | | 3 | 0.82 | 0.74 | 0.78 | 82 | | 4 | 0.90 | 0.95 | 0.92 | 129 | | 5 | 0.89 | 0.89 | 0.89 | 141 | | 6 | 0.97 | 0.98 | 0.97 | 127 | | 7 | 0.95 | 0.98 | 0.97 | 127 | | 8 | 0.93 | 0.95 | 0.94 | 120 | | accuracy | | | 0.91 | 1100 | | macro avg | 0.91 | 0.90 | 0.90 | 1100 | | weighted avg | 0.91 | 0.91 | 0.91 | 1100 | ## 免責事項 本モデルの作者は本モデルを作成するにあたって、その内容、機能等について細心の注意を払っておりますが、モデルの出力が正確であるかどうか、安全なものであるか等について保証をするものではなく、何らの責任を負うものではありません。本モデルの利用により、万一、利用者に何らかの不都合や損害が発生したとしても、モデルやデータセットの作者や作者の所属組織は何らの責任を負うものではありません。利用者には本モデルやデータセットの作者や所属組織が責任を負わないことを明確にする義務があります。 ## ライセンス [CC-BY SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/deed.ja) [Common Crawlの利用規約](http://commoncrawl.org/terms-of-use/)も守るようご注意ください。
soroush/model
2020-07-11T18:01:22.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
soroush
14
transformers
soroush/t5-finetuned-lesson-summarizer
2020-07-26T23:56:22.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
soroush
25
transformers
sorryhyun/toy_koelectra-small-generator
2021-06-15T05:07:27.000Z
[]
[ ".gitattributes", "README.md" ]
sorryhyun
0
spacemanidol/neuralmagic-bert-squad-12layer-0sparse
2021-05-20T07:11:25.000Z
[ "pytorch", "jax", "bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
spacemanidol
7
transformers
hello
spacy/en_core_web_sm
2021-05-28T13:51:23.000Z
[ "en", "spacy", "token-classification", "license:mit" ]
token-classification
[ ".gitattributes", "README.md", "en_core_web_sm-any-py3-none-any.whl", "en_core_web_sm-3.0.0/LICENSE", "en_core_web_sm-3.0.0/MANIFEST.in", "en_core_web_sm-3.0.0/PKG-INFO", "en_core_web_sm-3.0.0/meta.json", "en_core_web_sm-3.0.0/setup.cfg", "en_core_web_sm-3.0.0/setup.py", "en_core_web_sm-3.0.0/en_core_web_sm.egg-info/PKG-INFO", "en_core_web_sm-3.0.0/en_core_web_sm.egg-info/SOURCES.txt", "en_core_web_sm-3.0.0/en_core_web_sm.egg-info/dependency_links.txt", "en_core_web_sm-3.0.0/en_core_web_sm.egg-info/entry_points.txt", "en_core_web_sm-3.0.0/en_core_web_sm.egg-info/not-zip-safe", "en_core_web_sm-3.0.0/en_core_web_sm.egg-info/requires.txt", "en_core_web_sm-3.0.0/en_core_web_sm.egg-info/top_level.txt", "en_core_web_sm-3.0.0/en_core_web_sm/__init__.py", "en_core_web_sm-3.0.0/en_core_web_sm/meta.json", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/accuracy.json", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/config.cfg", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/meta.json", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/tokenizer", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/attribute_ruler/patterns", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/lemmatizer/lookups/lookups.bin", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/ner/cfg", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/ner/model", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/ner/moves", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/parser/cfg", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/parser/model", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/parser/moves", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/senter/cfg", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/senter/model", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/tagger/cfg", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/tagger/model", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/tok2vec/cfg", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/tok2vec/model", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/vocab/key2row", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/vocab/lookups.bin", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/vocab/strings.json", "en_core_web_sm-3.0.0/en_core_web_sm/en_core_web_sm-3.0.0/vocab/vectors" ]
spacy
0
spacy
--- tags: - spacy - token-classification language: - en license: - MIT --- Model card automatically generated from a [release](https://github.com/explosion/spacy-models/releases/tag/en_core_web_sm-3.0.0). ### Details: https://spacy.io/models/en#en_core_web_sm English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `en_core_web_sm` | | **Version** | `3.0.0` | | **spaCy** | `>=3.0.0,<3.1.0` | | **Model size** | 13 MB | | **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `ner`, `attribute_ruler`, `lemmatizer` | | **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `ner`, `attribute_ruler`, `lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) | | **License** | `MIT` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (114 labels for 4 components)</summary> <!--&--> | Component | Labels | | --- | --- | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | | **`senter`** | `I`, `S` | | **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.93 | | `TAG_ACC` | 97.21 | | `DEP_UAS` | 91.63 | | `DEP_LAS` | 89.77 | | `ENTS_P` | 84.83 | | `ENTS_R` | 83.54 | | `ENTS_F` | 84.18 | | `SENTS_P` | 89.79 | | `SENTS_R` | 87.55 | | `SENTS_F` | 88.66 | ### Quick usage ``` pip install https://huggingface.co/spacy/en_core_web_sm/resolve/main/en_core_web_sm-any-py3-none-any.whl ``` ### License MIT
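Once the wheel above is installed, the pipeline loads by name. A short usage sketch (not part of the auto-generated card) showing tagger, parser and NER output:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion.")

# Part-of-speech tags and dependency labels come from the tagger/parser components
for token in doc:
    print(token.text, token.pos_, token.dep_)

# Named entities come from the ner component
for ent in doc.ents:
    print(ent.text, ent.label_)
```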
spacy/xx_sent_ud_sm
2021-05-28T12:57:32.000Z
[ "multilingual", "spacy", "license:cc-by-sa-3.0" ]
[ ".gitattributes", "README.md", "xx_sent_ud_sm-any-py3-none-any.whl", "xx_sent_ud_sm-3.0.0/LICENSE", "xx_sent_ud_sm-3.0.0/MANIFEST.in", "xx_sent_ud_sm-3.0.0/PKG-INFO", "xx_sent_ud_sm-3.0.0/meta.json", "xx_sent_ud_sm-3.0.0/setup.cfg", "xx_sent_ud_sm-3.0.0/setup.py", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm.egg-info/PKG-INFO", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm.egg-info/SOURCES.txt", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm.egg-info/dependency_links.txt", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm.egg-info/entry_points.txt", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm.egg-info/not-zip-safe", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm.egg-info/requires.txt", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm.egg-info/top_level.txt", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm/__init__.py", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm/meta.json", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm/xx_sent_ud_sm-3.0.0/accuracy.json", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm/xx_sent_ud_sm-3.0.0/config.cfg", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm/xx_sent_ud_sm-3.0.0/meta.json", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm/xx_sent_ud_sm-3.0.0/tokenizer", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm/xx_sent_ud_sm-3.0.0/senter/cfg", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm/xx_sent_ud_sm-3.0.0/senter/model", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm/xx_sent_ud_sm-3.0.0/vocab/key2row", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm/xx_sent_ud_sm-3.0.0/vocab/lookups.bin", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm/xx_sent_ud_sm-3.0.0/vocab/strings.json", "xx_sent_ud_sm-3.0.0/xx_sent_ud_sm/xx_sent_ud_sm-3.0.0/vocab/vectors" ]
spacy
0
spacy
--- tags: - spacy language: - multilingual license: - CC-BY-SA-3.0 --- Model card automatically generated from a [release](https://github.com/explosion/spacy-models/releases/tag/xx_sent_ud_sm-3.0.0). ### Details: https://spacy.io/models/xx#xx_sent_ud_sm Multi-language pipeline optimized for CPU. Components: senter. | Feature | Description | | --- | --- | | **Name** | `xx_sent_ud_sm` | | **Version** | `3.0.0` | | **spaCy** | `>=3.0.0,<3.1.0` | | **Model size** | 8 MB | | **Default Pipeline** | `senter` | | **Components** | `senter` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5 (UD_Afrikaans-AfriBooms, UD_Chinese-GSD, UD_Chinese-GSDSimp, UD_Croatian-SET, UD_Czech-CAC, UD_Czech-CLTT, UD_Danish-DDT, UD_Dutch-Alpino, UD_Dutch-LassySmall, UD_English-EWT, UD_Finnish-FTB, UD_Finnish-TDT, UD_French-GSD, UD_French-Spoken, UD_German-GSD, UD_Indonesian-GSD, UD_Irish-IDT, UD_Italian-TWITTIRO, UD_Japanese-GSD, UD_Korean-GSD, UD_Korean-Kaist, UD_Latvian-LVTB, UD_Lithuanian-ALKSNIS, UD_Lithuanian-HSE, UD_Marathi-UFAL, UD_Norwegian-Bokmaal, UD_Norwegian-Nynorsk, UD_Norwegian-NynorskLIA, UD_Persian-Seraji, UD_Portuguese-Bosque, UD_Portuguese-GSD, UD_Romanian-Nonstandard, UD_Romanian-RRT, UD_Russian-GSD, UD_Russian-Taiga, UD_Serbian-SET, UD_Slovak-SNK, UD_Spanish-GSD, UD_Swedish-Talbanken, UD_Telugu-MTG, UD_Vietnamese-VTB)](https://universaldependencies.org/) (Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell; et al.) | | **License** | `CC BY-SA 3.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (2 labels for 1 components)</summary> <!--&--> | Component | Labels | | --- | --- | | **`senter`** | `I`, `S` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.29 | | `SENTS_P` | 90.73 | | `SENTS_R` | 82.45 | | `SENTS_F` | 86.39 | ### Quick usage ``` pip install https://huggingface.co/spacy/xx_sent_ud_sm/resolve/main/xx_sent_ud_sm-any-py3-none-any.whl ``` ### License CC-BY-SA-3.0
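Since this pipeline contains only a `senter` component, its main use is fast multilingual sentence segmentation. A brief sketch (not part of the auto-generated card):

```python
import spacy

nlp = spacy.load("xx_sent_ud_sm")
doc = nlp("Das ist ein Satz. Это второе предложение. 这是第三个句子。")

# The senter component sets sentence boundaries across many languages
for sent in doc.sents:
    print(sent.text)
```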
spandan96/T5_SEO_Titles
2021-06-15T17:05:38.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
spandan96
43
transformers
sparanoid/Chinese-BERT-wwm
2020-12-17T10:53:55.000Z
[]
[ ".gitattributes" ]
sparanoid
0
speechbrain/asr-crdnn-commonvoice-fr
2021-06-14T23:17:32.000Z
[ "fr", "dataset:common_voice", "arxiv:2106.04624", "automatic-speech-recognition", "CTC", "Attention", "pytorch", "speechbrain", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "asr.ckpt", "example-fr.wav", "hyperparams.yaml", "normalizer.ckpt", "tokenizer.ckpt" ]
speechbrain
96
speechbrain
--- language: "fr" thumbnail: tags: - automatic-speech-recognition - CTC - Attention - pytorch - speechbrain license: "apache-2.0" datasets: - common_voice metrics: - wer - cer --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # CRDNN with CTC/Attention trained on CommonVoice French (No LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on CommonVoice (French Language) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test CER | Test WER | GPUs | |:-------------:|:--------------:|:--------------:| :--------:| | 07-03-21 | 6.54 | 17.70 | 2xV100 16GB | ## Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions (train.tsv) of CommonVoice (FR). - Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of N blocks of convolutional neural networks with normalization and pooling on the frequency domain. Then, a bidirectional LSTM is connected to a final DNN to obtain the final acoustic representation that is given to the CTC and attention decoders. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in French) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-commonvoice-fr", savedir="pretrained_models/asr-crdnn-commonvoice-fr") asr_model.transcribe_file("speechbrain/asr-crdnn-commonvoice-fr/example-fr.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (986a2175). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/CommonVoice/ASR/seq2seq python train.py hparams/train_fr.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/13i7rdgVX7-qZ94Rtj6OdUgU-S6BbKKvw?usp=sharing) ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
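As the card above notes, moving inference to the GPU only requires passing `run_opts` to `from_hparams`. A minimal sketch of that call for this French model (assuming a CUDA-capable device is available):

```python
from speechbrain.pretrained import EncoderDecoderASR

# Same checkpoint as above, but encoding and decoding run on the GPU
asr_model = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-crdnn-commonvoice-fr",
    savedir="pretrained_models/asr-crdnn-commonvoice-fr",
    run_opts={"device": "cuda"},
)
print(asr_model.transcribe_file("speechbrain/asr-crdnn-commonvoice-fr/example-fr.wav"))
```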
speechbrain/asr-crdnn-commonvoice-it
2021-06-14T23:21:07.000Z
[ "it", "dataset:common_voice", "arxiv:2106.04624", "automatic-speech-recognition", "CTC", "Attention", "pytorch", "speechbrain", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "asr.ckpt", "example-it.wav", "hyperparams.yaml", "normalizer.ckpt", "tokenizer.ckpt" ]
speechbrain
60
speechbrain
--- language: "it" thumbnail: tags: - automatic-speech-recognition - CTC - Attention - pytorch - speechbrain license: "apache-2.0" datasets: - common_voice metrics: - wer - cer --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # CRDNN with CTC/Attention trained on CommonVoice Italian (No LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on CommonVoice (IT) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test CER | Test WER | GPUs | |:-------------:|:--------------:|:--------------:| :--------:| | 07-03-21 | 5.40 | 16.61 | 2xV100 16GB | ## Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions (train.tsv) of CommonVoice (IT). - Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of N blocks of convolutional neural networks with normalization and pooling on the frequency domain. Then, a bidirectional LSTM is connected to a final DNN to obtain the final acoustic representation that is given to the CTC and attention decoders. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in Italian) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-commonvoice-it", savedir="pretrained_models/asr-crdnn-commonvoice-it") asr_model.transcribe_file("speechbrain/asr-crdnn-commonvoice-it/example-it.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (Commit hash: '986a2175'). To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/CommonVoice/ASR/seq2seq python train.py hparams/train_it.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1asxPsY1EBGHIpIFhBtUi9oiyR6C7gC0g?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/asr-crdnn-rnnlm-librispeech
2021-06-14T23:17:46.000Z
[ "en", "dataset:librispeech", "arxiv:2106.04624", "automatic-speech-recognition", "CTC", "Attention", "pytorch", "speechbrain", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "asr.ckpt", "example.wav", "hyperparams.yaml", "lm.ckpt", "normalizer.ckpt", "tokenizer.ckpt" ]
speechbrain
2,097
speechbrain
--- language: "en" thumbnail: tags: - automatic-speech-recognition - CTC - Attention - pytorch - speechbrain license: "apache-2.0" datasets: - librispeech metrics: - wer - cer --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # CRDNN with CTC/Attention and RNNLM trained on LibriSpeech This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on LibriSpeech (EN) within SpeechBrain. For a better experience we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test WER | GPUs | |:-------------:|:--------------:| :--------:| | 20-05-22 | 3.09 | 1xV100 32GB | ## Pipeline description This ASR system is composed with 3 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions of LibriSpeech. - Neural language model (RNNLM) trained on the full 10M words dataset. - Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of N blocks of convolutional neural networks with normalisation and pooling on the frequency domain. Then, a bidirectional LSTM is connected to a final DNN to obtain the final acoustic representation that is given to the CTC and attention decoders. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in English) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-rnnlm-librispeech", savedir="pretrained_models/asr-crdnn-rnnlm-librispeech") asr_model.transcribe_file('speechbrain/asr-crdnn-rnnlm-librispeech/example.wav') ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (Commit hash: '2abd9f01'). To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/LibriSpeech/ASR/seq2seq/ python train.py hparams/train_BPE_1000.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1SAndjcThdkO-YQF8kvwPOXlQ6LMT71vt?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/asr-crdnn-transformerlm-librispeech
2021-06-14T23:21:17.000Z
[ "en", "dataset:librispeech", "arxiv:2106.04624", "automatic-speech-recognition", "CTC", "Attention", "Tranformer", "pytorch", "speechbrain", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "asr.ckpt", "example.wav", "hyperparams.yaml", "lm.ckpt", "normalizer.ckpt", "tokenizer.ckpt" ]
speechbrain
139
speechbrain
--- language: "en" thumbnail: tags: - automatic-speech-recognition - CTC - Attention - Tranformer - pytorch - speechbrain license: "apache-2.0" datasets: - librispeech metrics: - wer - cer --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # CRDNN with CTC/Attention and RNNLM trained on LibriSpeech This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on LibriSpeech (EN) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test clean WER | Test other WER | GPUs | |:-------------:|:--------------:|:--------------:|:--------:| | 05-03-21 | 2.90 | 8.51 | 1xV100 16GB | ## Pipeline description This ASR system is composed of 3 different but linked blocks: 1. Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions of LibriSpeech. 2. Neural language model (Transformer LM) trained on the full 10M words dataset. 3. Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of N blocks of convolutional neural networks with normalization and pooling on the frequency domain. Then, a bidirectional LSTM with projection layers is connected to a final DNN to obtain the final acoustic representation that is given to the CTC and attention decoders. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in English) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-transformerlm-librispeech", savedir="pretrained_models/asr-crdnn-transformerlm-librispeech") asr_model.transcribe_file("speechbrain/asr-crdnn-transformerlm-librispeech/example.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (Commit hash: 'eca313cc'). To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/LibriSpeech/ASR/seq2seq python train.py hparams/train_BPE_5000.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1kSwdBT8kDhnmTLzrOPDL77LX_Eq-3Tzl?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/asr-transformer-aishell
2021-06-18T12:40:53.000Z
[ "en", "dataset:aishell", "arxiv:2106.04624", "automatic-speech-recognition", "CTC", "Attention", "Transformers", "pytorch", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "asr.ckpt", "example_mandarin.wav", "hyperparams.yaml", "normalizer.ckpt", "tokenizer.ckpt" ]
speechbrain
373
--- language: "en" thumbnail: tags: - ASR - CTC - Attention - Transformers - pytorch license: "apache-2.0" datasets: - aishell metrics: - wer - cer --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Transformer for AISHELL (Mandarin Chinese) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on AISHELL (Mandarin Chinese) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Dev CER | Test CER | GPUs | Full Results | |:-------------:|:--------------:|:--------------:|:--------:|:--------:| | 05-03-21 | 5.60 | 6.04 | 2xV100 32GB | [Google Drive](https://drive.google.com/drive/folders/1zlTBib0XEwWeyhaXDXnkqtPsIBI18Uzs?usp=sharing)| ## Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions of LibriSpeech. - Acoustic model made of a transformer encoder and a joint decoder with CTC + transformer. Hence, the decoding also incorporates the CTC probabilities. To Train this system from scratch, [see our SpeechBrain recipe](https://github.com/speechbrain/speechbrain/tree/develop/recipes/AISHELL-1). ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in English) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-transformer-aishell", savedir="pretrained_models/asr-transformer-aishell") asr_model.transcribe_file("speechbrain/asr-transformer-aishell/example_mandarin.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (Commit hash: '986a2175'). To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/AISHELL-1/ASR/transformer/ python train.py hparams/train_ASR_transformer.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1QU18YoauzLOXueogspT0CgR5bqJ6zFfu?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/asr-transformer-transformerlm-librispeech
2021-06-14T23:21:27.000Z
[ "en", "dataset:librispeech", "arxiv:2106.04624", "ASR", "CTC", "Attention", "Transformer", "pytorch", "speechbrain", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "asr.ckpt", "example.wav", "hyperparams.yaml", "lm.ckpt", "normalizer.ckpt", "tokenizer.ckpt" ]
speechbrain
208
speechbrain
--- language: "en" thumbnail: tags: - ASR - CTC - Attention - Transformer - pytorch - speechbrain license: "apache-2.0" datasets: - librispeech metrics: - wer - cer --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Transformer for LibriSpeech (with Transformer LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on LibriSpeech (EN) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test clean WER | Test other WER | GPUs | |:-------------:|:--------------:|:--------------:|:--------:| | 05-03-21 | 2.46 | 5.86 | 2xV100 32GB | ## Pipeline description This ASR system is composed of 3 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions of LibriSpeech. - Neural language model (Transformer LM) trained on the full 10M words dataset. - Acoustic model made of a transformer encoder and a joint decoder with CTC + transformer. Hence, the decoding also incorporates the CTC probabilities. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in English) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-transformer-transformerlm-librispeech", savedir="pretrained_models/asr-transformer-transformerlm-librispeech") asr_model.transcribe_file("speechbrain/asr-transformer-transformerlm-librispeech/example.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (Commit hash: 'f73fcc35'). To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/LibriSpeech/ASR/transformer python train.py hparams/transformer.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1ZudxqMWb8VNCJKvY2Ws5oNY3WI1To0I7?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/asr-wav2vec2-commonvoice-en
2021-06-14T23:18:10.000Z
[ "wav2vec2", "en", "dataset:commonvoice", "arxiv:2106.04624", "automatic-speech-recognition", "CTC", "Attention", "pytorch", "speechbrain", "Transformer", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "asr.ckpt", "config.json", "example.wav", "hyperparams.yaml", "preprocessor_config.json", "tokenizer.ckpt", "wav2vec2.ckpt" ]
speechbrain
351
speechbrain
--- language: "en" thumbnail: tags: - automatic-speech-recognition - CTC - Attention - pytorch - speechbrain - Transformer license: "apache-2.0" datasets: - commonvoice metrics: - wer - cer --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # wav2vec 2.0 with CTC/Attention trained on CommonVoice English (No LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on CommonVoice (English Language) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test WER | GPUs | |:--------------:|:--------------:| :--------:| | 03-06-21 | 15.69 | 2xV100 32GB | ## Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions (train.tsv) of CommonVoice (EN). - Acoustic model (wav2vec2.0 + CTC/Attention). A pretrained wav2vec 2.0 model ([wav2vec2-lv60-large](https://huggingface.co/facebook/wav2vec2-large-lv60)) is combined with two DNN layers and finetuned on CommonVoice En. The obtained final acoustic representation is given to the CTC and attention decoders. ## Install SpeechBrain First of all, please install tranformers and SpeechBrain with the following command: ``` pip install speechbrain transformers ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in English) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-en", savedir="pretrained_models/asr-wav2vec2-commonvoice-en") asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-en/example.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain. To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/CommonVoice/ASR/seq2seq python train.py hparams/train_en_with_wav2vec.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1tjz6IZmVRkuRE97E7h1cXFoGTer7pT73?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/asr-wav2vec2-commonvoice-fr
2021-06-14T23:21:38.000Z
[ "wav2vec2", "fr", "dataset:commonvoice", "arxiv:2106.04624", "automatic-speech-recognition", "CTC", "Attention", "pytorch", "speechbrain", "Transformer", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "asr.ckpt", "config.json", "example-fr.wav", "example.wav", "hyperparams.yaml", "preprocessor_config.json", "tokenizer.ckpt", "wav2vec2.ckpt" ]
speechbrain
100
speechbrain
--- language: "fr" thumbnail: tags: - automatic-speech-recognition - CTC - Attention - pytorch - speechbrain - Transformer license: "apache-2.0" datasets: - commonvoice metrics: - wer - cer --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # wav2vec 2.0 with CTC/Attention trained on CommonVoice French (No LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on CommonVoice (French Language) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test CER | Test WER | GPUs | |:-------------:|:--------------:|:--------------:| :--------:| | 29-04-21 | 9.78 | 13.34 | 2xV100 32GB | ## Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions (train.tsv) of CommonVoice (FR). - Acoustic model (wav2vec2.0 + CTC/Attention). A pretrained wav2vec 2.0 model ([LeBenchmark/wav2vec2-FR-M-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-M-large)) is combined with two DNN layers and finetuned on CommonVoice FR. The obtained final acoustic representation is given to the CTC and attention decoders. ## Install SpeechBrain First of all, please install tranformers and SpeechBrain with the following command: ``` pip install speechbrain transformers ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in French) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-fr", savedir="pretrained_models/asr-crdnn-commonvoice-fr") asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-fr/example-fr.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain. To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/CommonVoice/ASR/seq2seq python train_with_wav2vec.py hparams/train_fr_with_wav2vec.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1tjz6IZmVRkuRE97E7h1cXFoGTer7pT73?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/asr-wav2vec2-commonvoice-it
2021-06-14T23:18:21.000Z
[ "wav2vec2", "en", "dataset:commonvoice", "arxiv:2106.04624", "automatic-speech-recognition", "CTC", "Attention", "pytorch", "speechbrain", "Transformer", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "asr.ckpt", "config.json", "example-it.wav", "hyperparams.yaml", "preprocessor_config.json", "tokenizer.ckpt", "wav2vec2.ckpt" ]
speechbrain
28
speechbrain
--- language: "en" thumbnail: tags: - automatic-speech-recognition - CTC - Attention - pytorch - speechbrain - Transformer license: "apache-2.0" datasets: - commonvoice metrics: - wer - cer --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # wav2vec 2.0 with CTC/Attention trained on CommonVoice Italian (No LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on CommonVoice (Italian Language) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test WER | GPUs | |:--------------:|:--------------:| :--------:| | 03-06-21 | 9.86 | 2xV100 32GB | ## Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions (train.tsv) of CommonVoice (EN). - Acoustic model (wav2vec2.0 + CTC/Attention). A pretrained wav2vec 2.0 model ([facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli)) is combined with two DNN layers and finetuned on CommonVoice En. The obtained final acoustic representation is given to the CTC and attention decoders. ## Install SpeechBrain First of all, please install tranformers and SpeechBrain with the following command: ``` pip install speechbrain transformers ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in Italian) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-it", savedir="pretrained_models/asr-wav2vec2-commonvoice-it") asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-it/example-it.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain. To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/CommonVoice/ASR/seq2seq python train_with_wav2vec.py hparams/train_it_with_wav2vec.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1tjz6IZmVRkuRE97E7h1cXFoGTer7pT73?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/asr-wav2vec2-commonvoice-rw
2021-06-14T23:21:49.000Z
[ "wav2vec2", "rw", "dataset:commonvoice", "arxiv:2106.04624", "automatic-speech-recognition", "CTC", "Attention", "pytorch", "speechbrain", "Transformer", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "asr.ckpt", "config.json", "example.mp3", "hyperparams.yaml", "preprocessor_config.json", "tokenizer.ckpt", "wav2vec2.ckpt" ]
speechbrain
39
speechbrain
--- language: "rw" thumbnail: tags: - automatic-speech-recognition - CTC - Attention - pytorch - speechbrain - Transformer license: "apache-2.0" datasets: - commonvoice metrics: - wer - cer --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # wav2vec 2.0 with CTC/Attention trained on CommonVoice Kinyarwanda (No LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on CommonVoice (Kinyarwanda Language) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test WER | GPUs | |:--------------:|:--------------:| :--------:| | 03-06-21 | 18.91 | 2xV100 32GB | ## Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions (train.tsv) of CommonVoice (RW). - Acoustic model (wav2vec2.0 + CTC/Attention). A pretrained wav2vec 2.0 model ([wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on CommonVoice En. The obtained final acoustic representation is given to the CTC and attention decoders. ## Install SpeechBrain First of all, please install tranformers and SpeechBrain with the following command: ``` pip install speechbrain transformers ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in Kinyarwanda) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-rw", savedir="pretrained_models/asr-wav2vec2-commonvoice-rw") asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-rw/example.mp3") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain. To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/CommonVoice/ASR/seq2seq python train_with_wav2vec.py hparams/train_rw_with_wav2vec.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1tjz6IZmVRkuRE97E7h1cXFoGTer7pT73?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/asr-wav2vec2-transformer-aishell
2021-06-18T12:41:25.000Z
[ "en", "dataset:aishell", "arxiv:2106.04624", "automatic-speech-recognition", "CTC", "Attention", "Transformers", "wav2vec2", "pytorch", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "example_mandarin.wav", "hyperparams.yaml", "model.ckpt", "tokenizer.ckpt", "wav2vec2.ckpt" ]
speechbrain
13
--- language: "en" thumbnail: tags: - ASR - CTC - Attention - Transformers - wav2vec2 - pytorch license: "apache-2.0" datasets: - aishell metrics: - wer - cer --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Transformer for AISHELL + wav2vec2 (Mandarin Chinese) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on AISHELL +wav2vec2 (Mandarin Chinese) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Dev CER | Test CER | GPUs | Full Results | |:-------------:|:--------------:|:--------------:|:--------:|:--------:| | 05-03-21 | 5.19 | 5.58 | 2xV100 32GB | [Google Drive](https://drive.google.com/drive/folders/1zlTBib0XEwWeyhaXDXnkqtPsIBI18Uzs?usp=sharing)| ## Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions of LibriSpeech. - Acoustic model made of a wav2vec2 encoder and a joint decoder with CTC + transformer. Hence, the decoding also incorporates the CTC probabilities. To Train this system from scratch, [see our SpeechBrain recipe](https://github.com/speechbrain/speechbrain/tree/develop/recipes/AISHELL-1/ASR/transformer). ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in English) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-transformer-aishell", savedir="pretrained_models/asr-wav2vec2-transformer-aishell") asr_model.transcribe_file("speechbrain/asr-wav2vec2-transformer-aishell/example_mandarin.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (Commit hash: '480dde87'). To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/AISHELL-1/ASR/transformer/ python train.py hparams/train_ASR_transformer_with_wav2vect.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1P3w5BnwLDxMHFQrkCZ5RYBZ1WsQHKFZr?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/google_speech_command_xvector
2021-06-14T23:22:06.000Z
[ "en", "dataset:google speech commands", "arxiv:1804.03209", "arxiv:2106.04624", "embeddings", "Commands", "Keywords", "Keyword Spotting", "pytorch", "xvectors", "TDNN", "Command Recognition", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "classifier.ckpt", "embedding_model.ckpt", "hyperparams.yaml", "label_encoder.txt", "normalizer.ckpt", "stop.wav", "yes.wav" ]
speechbrain
8
--- language: "en" thumbnail: tags: - embeddings - Commands - Keywords - Keyword Spotting - pytorch - xvectors - TDNN - Command Recognition license: "apache-2.0" datasets: - google speech commands metrics: - Accuracy --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Command Recognition with xvector embeddings on Google Speech Commands This repository provides all the necessary tools to perform command recognition with SpeechBrain using a model pretrained on Google Speech Commands. You can download the dataset [here](https://www.tensorflow.org/datasets/catalog/speech_commands) The dataset provides small training, validation, and test sets useful for detecting single keywords in short audio clips. The provided system can recognize the following 12 keywords: ``` 'yes', 'no', 'up', 'down', 'left', 'right', 'on', 'off', 'stop', 'go', 'unknown', 'silence' ``` For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The given model performance on the test set is: | Release | Accuracy(%) |:-------------:|:--------------:| | 06-02-21 | 98.14 | ## Pipeline description This system is composed of a TDNN model coupled with statistical pooling. A classifier, trained with Categorical Cross-Entropy Loss, is applied on top of that. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform Command Recognition ```python import torchaudio from speechbrain.pretrained import EncoderClassifier classifier = EncoderClassifier.from_hparams(source="speechbrain/google_speech_command_xvector", savedir="pretrained_models/google_speech_command_xvector") out_prob, score, index, text_lab = classifier.classify_file('speechbrain/google_speech_command_xvector/yes.wav') print(text_lab) out_prob, score, index, text_lab = classifier.classify_file('speechbrain/google_speech_command_xvector/stop.wav') print(text_lab) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (b7ff9dc4). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/Google-speech-commands python train.py hparams/xvect.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1BKwtr1mBRICRe56PcQk2sCFq63Lsvdpc?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. 
#### Referencing xvectors ```@inproceedings{DBLP:conf/odyssey/SnyderGMSPK18, author = {David Snyder and Daniel Garcia{-}Romero and Alan McCree and Gregory Sell and Daniel Povey and Sanjeev Khudanpur}, title = {Spoken Language Recognition using X-vectors}, booktitle = {Odyssey 2018}, pages = {105--111}, year = {2018}, } ``` #### Referencing Google Speech Commands ```@article{speechcommands, author = { {Warden}, P.}, title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}", journal = {ArXiv e-prints}, archivePrefix = "arXiv", eprint = {1804.03209}, primaryClass = "cs.CL", keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction}, year = 2018, month = apr, url = {https://arxiv.org/abs/1804.03209}, } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/metricgan-plus-voicebank
2021-06-14T23:18:43.000Z
[ "en", "dataset:Voicebank", "dataset:DEMAND", "arxiv:2106.04624", "Speech Enhancement", "PyTorch", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "enhance_model.ckpt", "example.wav", "hyperparams.yaml" ]
speechbrain
101
--- language: "en" tags: - Speech Enhancement - PyTorch license: "apache-2.0" datasets: - Voicebank - DEMAND metrics: - PESQ - STOI --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # MetricGAN-trained model for Enhancement This repository provides all the necessary tools to perform enhancement with SpeechBrain. For a better experience we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance is: | Release | Test PESQ | Test STOI | |:-----------:|:-----:| :-----:| | 21-04-27 | 3.15 | 93.0 | ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ## Pretrained Usage To use the mimic-loss-trained model for enhancement, use the following simple code: ```python import torch import torchaudio from speechbrain.pretrained import SpectralMaskEnhancement enhance_model = SpectralMaskEnhancement.from_hparams( source="speechbrain/metricgan-plus-voicebank", savedir="pretrained_models/metricgan-plus-voicebank", ) # Load and add fake batch dimension noisy = enhance_model.load_audio( "speechbrain/metricgan-plus-voicebank/example.wav" ).unsqueeze(0) # Add relative length tensor enhanced = enhance_model.enhance_batch(noisy, lengths=torch.tensor([1.])) # Saving enhanced signal on disk torchaudio.save('enhanced.wav', enhanced.cpu(), 16000) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (d0accc8). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/Voicebank/enhance/MetricGAN python train.py hparams/train.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1fcVP52gHgoMX9diNN1JxX_My5KaRNZWs?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. ## Referencing MetricGAN+ If you find MetricGAN+ useful, please cite: ``` @article{fu2021metricgan+, title={MetricGAN+: An Improved Version of MetricGAN for Speech Enhancement}, author={Fu, Szu-Wei and Yu, Cheng and Hsieh, Tsun-An and Plantinga, Peter and Ravanelli, Mirco and Lu, Xugang and Tsao, Yu}, journal={arXiv preprint arXiv:2104.03538}, year={2021} } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/mtl-mimic-voicebank
2021-06-14T23:22:24.000Z
[ "en", "dataset:Voicebank", "dataset:DEMAND", "arxiv:2106.04624", "Robust ASR", "Speech Enhancement", "PyTorch", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "enhance_model.ckpt", "example.wav", "hyperparams.yaml", "perceptual.ckpt" ]
speechbrain
646
--- language: "en" tags: - Robust ASR - Speech Enhancement - PyTorch license: "apache-2.0" datasets: - Voicebank - DEMAND metrics: - WER - PESQ - eSTOI --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # 1D CNN + Transformer Trained w/ Mimic Loss This repository provides all the necessary tools to perform enhancement and robust ASR training (EN) within SpeechBrain. For a better experience we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance is: | Release | Test PESQ | Test eSTOI | Valid WER | Test WER | |:-----------:|:-----:| :-----:|:----:|:---------:| | 21-03-08 | 2.92 | 85.2 | 3.20 | 2.96 | ## Pipeline description The mimic loss training system consists of three steps: 1. A perceptual model is pre-trained on clean speech features, the same type used for the enhancement masking system. 2. An enhancement model is trained with mimic loss, using the pre-trained perceptual model. 3. A large ASR model pre-trained on LibriSpeech is fine-tuned using the enhancement front-end. The enhancement and ASR models can be used together or independently. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ## Pretrained Usage To use the mimic-loss-trained model for enhancement, use the following simple code: ```python import torchaudio from speechbrain.pretrained import SpectralMaskEnhancement enhance_model = SpectralMaskEnhancement.from_hparams( source="speechbrain/mtl-mimic-voicebank", savedir="pretrained_models/mtl-mimic-voicebank", ) enhanced = enhance_model.enhance_file("speechbrain/mtl-mimic-voicebank/example.wav") # Saving enhanced signal on disk torchaudio.save('enhanced.wav', enhanced.unsqueeze(0).cpu(), 16000) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (150e1890). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/Voicebank/MTL/ASR_enhance python train.py hparams/enhance_mimic.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1HaR0Bq679pgd1_4jD74_wDRUq-c3Wl4L?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. ## Referencing Mimic Loss If you find mimic loss useful, please cite: ``` @inproceedings{bagchi2018spectral, title={Spectral Feature Mapping with Mimic Loss for Robust Speech Recognition}, author={Bagchi, Deblin and Plantinga, Peter and Stiff, Adam and Fosler-Lussier, Eric}, booktitle={IEEE Conference on Audio, Speech, and Signal Processing (ICASSP)}, year={2018} } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/sepformer-wham
2021-06-14T23:18:56.000Z
[ "en", "dataset:WHAM!", "arxiv:2010.13154", "arxiv:2106.04624", "Source Separation", "Speech Separation", "Audio Source Separation", "WHAM!", "SepFormer", "Transformer", "license:apache-2.0" ]
[ ".gitattributes", "CKPT.yaml", "README.md", "brain.ckpt", "counter.ckpt", "dataloader-TRAIN.ckpt", "decoder.ckpt", "encoder.ckpt", "hyperparams.yaml", "hyperparams_train.yaml", "lr_scheduler.ckpt", "masknet.ckpt", "optimizer.ckpt" ]
speechbrain
87
--- language: "en" thumbnail: tags: - Source Separation - Speech Separation - Audio Source Separation - WHAM! - SepFormer - Transformer license: "apache-2.0" datasets: - WHAM! metrics: - SI-SNRi - SDRi --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # SepFormer trained on WHAM! This repository provides all the necessary tools to perform audio source separation with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain, and pretrained on [WHAM!](http://wham.whisper.ai/) dataset, which is basically a version of WSJ0-Mix dataset with environmental noise. For a better experience we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance is 16.3 dB SI-SNRi on the test set of WHAM! dataset. | Release | Test-Set SI-SNRi | Test-Set SDRi | |:-------------:|:--------------:|:--------------:| | 09-03-21 | 16.3 dB | 16.7 dB | ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform source separation on your own audio file ```python from speechbrain.pretrained import SepformerSeparation as separator import torchaudio model = separator.from_hparams(source="speechbrain/sepformer-wham", savedir='pretrained_models/sepformer-wham') # for custom file, change path est_sources = model.separate_file(path='speechbrain/sepformer-wsj02mix/test_mixture.wav') torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000) torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (e375cd13). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/WHAMandWHAMR/separation python train.py hparams/sepformer-wham.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1dIAT8hZxvdJPZNUb8Zkk3BuN7GZ9-mZb?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. 
#### Referencing SpeechBrain ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` #### Referencing SepFormer ```bibtex @inproceedings{subakan2021attention, title={Attention is All You Need in Speech Separation}, author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong}, year={2021}, booktitle={ICASSP 2021} } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/
speechbrain/sepformer-whamr
2021-06-14T23:22:36.000Z
[ "en", "dataset:WHAMR!", "arxiv:2010.13154", "arxiv:2106.04624", "Source Separation", "Speech Separation", "Audio Source Separation", "WHAM!", "SepFormer", "Transformer", "license:apache-2.0" ]
[ ".gitattributes", "CKPT.yaml", "README.md", "brain.ckpt", "counter.ckpt", "dataloader-TRAIN.ckpt", "decoder.ckpt", "encoder.ckpt", "hyperparams.yaml", "hyperparams_train.yaml", "lr_scheduler.ckpt", "masknet.ckpt", "optimizer.ckpt", "metadata/mix_2_spk_filenames_cv.csv", "metadata/mix_2_spk_filenames_tr.csv", "metadata/mix_2_spk_filenames_tt.csv", "metadata/reverb_params_cv.csv", "metadata/reverb_params_tr.csv", "metadata/reverb_params_tt.csv" ]
speechbrain
78
--- language: "en" thumbnail: tags: - Source Separation - Speech Separation - Audio Source Separation - WHAM! - SepFormer - Transformer license: "apache-2.0" datasets: - WHAMR! metrics: - SI-SNRi - SDRi --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # SepFormer trained on WHAM! This repository provides all the necessary tools to perform audio source separation with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain, and pretrained on [WHAMR!](http://wham.whisper.ai/) dataset, which is basically a version of WSJ0-Mix dataset with environmental noise and reverberation. For a better experience we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance is 13.7 dB SI-SNRi on the test set of WHAMR! dataset. | Release | Test-Set SI-SNRi | Test-Set SDRi | |:-------------:|:--------------:|:--------------:| | 30-03-21 | 13.7 dB | 12.7 dB | ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform source separation on your own audio file ```python from speechbrain.pretrained import SepformerSeparation as separator import torchaudio model = separator.from_hparams(source="speechbrain/sepformer-whamr", savedir='pretrained_models/sepformer-whamr') # for custom file, change path est_sources = model.separate_file(path='speechbrain/sepformer-wsj02mix/test_mixture.wav') torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000) torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (e375cd13). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/WHAMandWHAMR/separation python train.py hparams/sepformer-whamr.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1m1xfx2ojf7qgOyscJVVCQFRY0VRl0rdi?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. 
#### Referencing SpeechBrain ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` #### Referencing SepFormer ```bibtex @inproceedings{subakan2021attention, title={Attention is All You Need in Speech Separation}, author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong}, year={2021}, booktitle={ICASSP 2021} } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/
speechbrain/sepformer-whamr16k
2021-06-14T23:19:17.000Z
[ "en", "dataset:WHAMR!", "arxiv:2010.13154", "arxiv:2106.04624", "audio-source-separation", "Source Separation", "Speech Separation", "WHAM!", "SepFormer", "Transformer", "pytorch", "license:apache-2.0" ]
audio-source-separation
[ ".gitattributes", "CKPT.yaml", "README.md", "brain.ckpt", "counter.ckpt", "dataloader-TRAIN.ckpt", "decoder.ckpt", "encoder.ckpt", "hyperparams.yaml", "hyperparams_train.yaml", "lr_scheduler.ckpt", "masknet.ckpt", "optimizer.ckpt", "test_mixture16k.wav" ]
speechbrain
105
--- language: "en" thumbnail: tags: - audio-source-separation - Source Separation - Speech Separation - WHAM! - SepFormer - Transformer - pytorch license: "apache-2.0" datasets: - WHAMR! metrics: - SI-SNRi - SDRi pipeline: - audio source separation --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # SepFormer trained on WHAMR! (16k sampling frequency) This repository provides all the necessary tools to perform audio source separation with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain, and pretrained on [WHAMR!](http://wham.whisper.ai/) dataset with 16k sampling frequency, which is basically a version of WSJ0-Mix dataset with environmental noise and reverberation in 16k. For a better experience we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The given model performance is 13.5 dB SI-SNRi on the test set of WHAMR! dataset. | Release | Test-Set SI-SNRi | Test-Set SDRi | |:-------------:|:--------------:|:--------------:| | 30-03-21 | 13.5 dB | 13.0 dB | ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform source separation on your own audio file ```python from speechbrain.pretrained import SepformerSeparation as separator import torchaudio model = separator.from_hparams(source="speechbrain/sepformer-whamr16k", savedir='pretrained_models/sepformer-whamr16k') # for custom file, change path est_sources = model.separate_file(path='speechbrain/sepformer-whamr16k/test_mixture16k.wav') torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 16000) torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 16000) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (fc2eabb7). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/WHAMandWHAMR/separation/ python train.py hparams/sepformer-whamr.yaml --data_folder=your_data_folder --sample_rate=16000 ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1QiQhp1vi5t4UfNpNETA48_OmPiXnUy8O?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. 
#### Referencing SpeechBrain ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` #### Referencing SepFormer ```bibtex @inproceedings{subakan2021attention, title={Attention is All You Need in Speech Separation}, author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong}, year={2021}, booktitle={ICASSP 2021} } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/
speechbrain/sepformer-wsj02mix
2021-06-14T23:22:47.000Z
[ "en", "dataset:WSJ0-2Mix", "arxiv:2010.13154", "arxiv:2106.04624", "Source Separation", "Speech Separation", "Audio Source Separation", "WSJ02Mix", "SepFormer", "Transformer", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "brain.ckpt", "decoder.ckpt", "encoder.ckpt", "hyperparams.yaml", "hyperparams_train.yaml", "masknet.ckpt", "test_mixture.wav" ]
speechbrain
805
--- language: "en" thumbnail: tags: - Source Separation - Speech Separation - Audio Source Separation - WSJ02Mix - SepFormer - Transformer license: "apache-2.0" datasets: - WSJ0-2Mix metrics: - SI-SNRi - SDRi --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # SepFormer trained on WSJ0-2Mix This repository provides all the necessary tools to perform audio source separation with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain, and pretrained on WSJ0-2Mix dataset. For a better experience we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance is 22.4 dB on the test set of WSJ0-2Mix dataset. | Release | Test-Set SI-SNRi | Test-Set SDRi | |:-------------:|:--------------:|:--------------:| | 09-03-21 | 22.4dB | 22.6dB | ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform source separation on your own audio file ```python from speechbrain.pretrained import SepformerSeparation as separator import torchaudio model = separator.from_hparams(source="speechbrain/sepformer-wsj02mix", savedir='pretrained_models/sepformer-wsj02mix') # for custom file, change path est_sources = model.separate_file(path='speechbrain/sepformer-wsj02mix/test_mixture.wav') torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000) torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (fc2eabb7). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/WSJ0Mix/separation python train.py hparams/sepformer.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1cON-eqtKv_NYnJhaE9VjLT_e2ybn-O7u?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. 
#### Referencing SpeechBrain ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` #### Referencing SepFormer ```bibtex @inproceedings{subakan2021attention, title={Attention is All You Need in Speech Separation}, author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong}, year={2021}, booktitle={ICASSP 2021} } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/
speechbrain/sepformer-wsj03mix
2021-06-14T23:19:31.000Z
[ "en", "dataset:WSJ0-3Mix", "arxiv:2010.13154", "arxiv:2106.04624", "Source Separation", "Speech Separation", "Audio Source Separation", "WSJ0-3Mix", "SepFormer", "Transformer", "license:apache-2.0" ]
[ ".gitattributes", "CKPT.yaml", "README.md", "brain.ckpt", "counter.ckpt", "decoder.ckpt", "encoder.ckpt", "hyperparams.yaml", "hyperparams_train.yaml", "lr_scheduler.ckpt", "masknet.ckpt", "optimizer.ckpt", "test_mixture_3spks.wav" ]
speechbrain
43
--- language: "en" thumbnail: tags: - Source Separation - Speech Separation - Audio Source Separation - WSJ0-3Mix - SepFormer - Transformer license: "apache-2.0" datasets: - WSJ0-3Mix metrics: - SI-SNRi - SDRi --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # SepFormer trained on WSJ0-3Mix This repository provides all the necessary tools to perform audio source separation with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain, and pretrained on WSJ0-3Mix dataset. For a better experience we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance is 19.8 dB SI-SNRi on the test set of WSJ0-3Mix dataset. | Release | Test-Set SI-SNRi | Test-Set SDRi | |:-------------:|:--------------:|:--------------:| | 09-03-21 | 19.8dB | 20.0dB | ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform source separation on your own audio file ```python from speechbrain.pretrained import SepformerSeparation as separator import torchaudio model = separator.from_hparams(source="speechbrain/sepformer-wsj03mix", savedir='pretrained_models/sepformer-wsj03mix') est_sources = model.separate_file(path='speechbrain/sepformer-wsj03mix/test_mixture_3spks.wav') torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000) torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000) torchaudio.save("source3hat.wav", est_sources[:, :, 2].detach().cpu(), 8000) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (fc2eabb7). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/WSJ0Mix/separation python train.py hparams/sepformer.yaml --data_folder=your_data_folder ``` Note: change num_spks to 3 in the yaml file. You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1ruScDoqiSDNeoDa__u5472UUPKPu54b2?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. 
#### Referencing SpeechBrain ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` #### Referencing SepFormer ```bibtex @inproceedings{subakan2021attention, title={Attention is All You Need in Speech Separation}, author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong}, year={2021}, booktitle={ICASSP 2021} } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/
speechbrain/slu-direct-fluent-speech-commands-librispeech-asr
2021-06-14T23:19:51.000Z
[ "en", "dataset:Fluent Speech Commands", "arxiv:1904.03670", "arxiv:2106.04624", "Spoken language understanding", "license:cc0" ]
[ ".gitattributes", "README.md", "example_fsc.wav", "hyperparams.yaml", "model.ckpt", "tokenizer.ckpt" ]
speechbrain
8
--- language: "en" thumbnail: tags: - Spoken language understanding license: "CC0" datasets: - Fluent Speech Commands metrics: - Accuracy --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Fluent Speech Commands The dataset contains real recordings that define a simple spoken language understanding task. You can download it from [here](https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/). The Fluent Speech Commands dataset contains 30,043 utterances from 97 speakers. It is recorded as 16 kHz single-channel .wav files each containing a single utterance used for controlling smart-home appliances or virtual assistant, for example, “put on the music” or “turn up the heat in the kitchen”. Each audio is labeled with three slots: action, object, and location. A slot takes on one of the multiple values: for instance, the “location” slot can take on the values “none”, “kitchen”, “bedroom”, or “washroom”. We refer to the combination of slot values as the intent of the utterance. For each intent, there are multiple possible wordings: for example, the intent {action: “activate”, object: “lights”, location: “none”} can be expressed as “turn on the lights”, “switch the lights on”, “lights on”, etc. The dataset has a total of 248 phrasing mapping to 31 unique intents. # End-to-end SLU model for Fluent Speech Commands Attention-based RNN sequence-to-sequence model for the [Fluent Speech Commands](https://arxiv.org/pdf/1904.03670.pdf) dataset. This model checkpoint achieves 99.6% accuracy on the test set. The model uses an ASR model trained on LibriSpeech ([`speechbrain/asr-crdnn-rnnlm-librispeech`](https://huggingface.co/speechbrain/asr-crdnn-rnnlm-librispeech)) to extract features from the input audio, then maps these features to an intent and slot labels using a beam search. You can try the model on the `example_fsc.wav` file included here as follows: ``` from speechbrain.pretrained import EndToEndSLU slu = EndToEndSLU.from_hparams("/network/tmp1/ravanelm/slu-direct-fluent-speech-commands-librispeech-asr") # Text: "Please, turn on the light of the bedroom" slu.decode_file("/network/tmp1/ravanelm/slu-direct-fluent-speech-commands-librispeech-asr/example_fsc.wav") >>> '{"action:" "activate"| "object": "lights"| "location": "bedroom"}' ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (f1f421b3). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/fluent-speech-commands python train.py hparams/train.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1Zly54252Z218IHJQ9M0B3kTQPZIw_2yC?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. 
#### Referencing SpeechBrain

```bibtex
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```

#### Referencing Fluent Speech Commands

```bibtex
@inproceedings{fluent,
  author = {Loren Lugosch and Mirco Ravanelli and Patrick Ignoto and Vikrant Singh Tomar and Yoshua Bengio},
  editor = {Gernot Kubin and Zdravko Kacic},
  title = {Speech Model Pre-Training for End-to-End Spoken Language Understanding},
  booktitle = {Proc. of Interspeech},
  pages = {814--818},
  year = {2019},
}
```

#### About SpeechBrain

SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.

Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
speechbrain/slu-timers-and-such-direct-librispeech-asr
2021-06-14T23:20:38.000Z
[ "en", "dataset:Timers and Such", "arxiv:2104.01604", "arxiv:2106.04624", "Spoken language understanding", "license:cc0" ]
[ ".gitattributes", "README.md", "hyperparams.yaml", "math.wav", "model.ckpt", "tokenizer.ckpt" ]
speechbrain
564
--- language: "en" thumbnail: tags: - Spoken language understanding license: "CC0" datasets: - Timers and Such metrics: - Accuracy --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # End-to-end SLU model for Timers and Such Attention-based RNN sequence-to-sequence model for [Timers and Such](https://arxiv.org/abs/2104.01604) trained on the `train-real` subset. This model checkpoint achieves 86.7% accuracy on `test-real`. The model uses an ASR model trained on LibriSpeech ([`speechbrain/asr-crdnn-rnnlm-librispeech`](https://huggingface.co/speechbrain/asr-crdnn-rnnlm-librispeech)) to extract features from the input audio, then maps these features to an intent and slot labels using a beam search. The dataset has four intents: `SetTimer`, `SetAlarm`, `SimpleMath`, and `UnitConversion`. Try testing the model by saying something like "set a timer for 5 minutes" or "what's 32 degrees Celsius in Fahrenheit?" You can try the model on the `math.wav` file included here as follows: ``` from speechbrain.pretrained import EndToEndSLU slu = EndToEndSLU.from_hparams("speechbrain/slu-timers-and-such-direct-librispeech-asr") slu.decode_file("speechbrain/slu-timers-and-such-direct-librispeech-asr/math.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (d254489a). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/timers-and-such/direct python train.py hparams/train.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/18c2anEv8hx-ZjmEN8AdUA8AZziYIidON?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. #### Referencing SpeechBrain ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` #### Referencing Timers and Such ``` @misc{lugosch2021timers, title={Timers and Such: A Practical Benchmark for Spoken Language Understanding with Numbers}, author={Lugosch, Loren and Papreja, Piyush and Ravanelli, Mirco and Heba, Abdelwahab and Parcollet, Titouan}, year={2021}, eprint={2104.01604}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` #### About SpeechBrain SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains. Website: https://speechbrain.github.io/ GitHub: https://github.com/speechbrain/speechbrain
speechbrain/spkrec-ecapa-voxceleb
2021-06-14T23:23:18.000Z
[ "en", "dataset:voxceleb", "arxiv:2106.04624", "embeddings", "Speaker", "Verification", "Identification", "pytorch", "ECAPA", "TDNN", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "classifier.ckpt", "embedding_model.ckpt", "example1.wav", "example2.flac", "hyperparams.yaml", "label_encoder.txt", "mean_var_norm_emb.ckpt" ]
speechbrain
4,431
--- language: "en" thumbnail: tags: - embeddings - Speaker - Verification - Identification - pytorch - ECAPA - TDNN license: "apache-2.0" datasets: - voxceleb metrics: - EER --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Speaker Verification with ECAPA-TDNN embeddings on Voxceleb This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain. The system can be used to extract speaker embeddings as well. It is trained on Voxceleb 1+ Voxceleb2 training data. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance on Voxceleb1-test set(Cleaned) is: | Release | EER(%) | minDCF | |:-------------:|:--------------:|:--------------:| | 05-03-21 | 0.69 | 0.08258 | ## Pipeline description This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Compute your speaker embeddings ```python import torchaudio from speechbrain.pretrained import EncoderClassifier classifier = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb") signal, fs =torchaudio.load('samples/audio_samples/example1.wav') embeddings = classifier.encode_batch(signal) ``` ### Perform Speaker Verification ```python from speechbrain.pretrained import SpeakerRecognition verification = SpeakerRecognition.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb", savedir="pretrained_models/spkrec-ecapa-voxceleb") score, prediction = verification.verify_files("speechbrain/spkrec-ecapa-voxceleb/example1.wav", "speechbrain/spkrec-ecapa-voxceleb/example2.flac") ``` The prediction is 1 if the two signals in input are from the same speaker and 0 otherwise. ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (aa018540). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/VoxCeleb/SpeakerRec python train_speaker_embeddings.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. 
#### Referencing ECAPA-TDNN

```bibtex
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
  author = {Brecht Desplanques and Jenthe Thienpondt and Kris Demuynck},
  editor = {Helen Meng and Bo Xu and Thomas Fang Zheng},
  title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation in {TDNN} Based Speaker Verification},
  booktitle = {Interspeech 2020},
  pages = {3830--3834},
  publisher = {{ISCA}},
  year = {2020},
}
```

# **Citing SpeechBrain**

Please, cite SpeechBrain if you use it for your research or business.

```bibtex
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```

# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
speechbrain/spkrec-xvect-voxceleb
2021-06-14T23:20:58.000Z
[ "en", "dataset:voxceleb", "arxiv:2106.04624", "embeddings", "Speaker", "Verification", "Identification", "pytorch", "xvectors", "TDNN", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "classifier.ckpt", "embedding_model.ckpt", "hyperparams.yaml", "label_encoder.txt", "mean_var_norm_emb.ckpt" ]
speechbrain
399
--- language: "en" thumbnail: tags: - embeddings - Speaker - Verification - Identification - pytorch - xvectors - TDNN license: "apache-2.0" datasets: - voxceleb metrics: - EER - min_dct --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Speaker Verification with xvector embeddings on Voxceleb This repository provides all the necessary tools to extract speaker embeddings with a pretrained TDNN model using SpeechBrain. The system is trained on Voxceleb 1+ Voxceleb2 training data. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The given model performance on Voxceleb1-test set (Cleaned) is: | Release | EER(%) |:-------------:|:--------------:| | 05-03-21 | 3.2 | ## Pipeline description This system is composed of a TDNN model coupled with statistical pooling. The system is trained with Categorical Cross-Entropy Loss. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Compute your speaker embeddings ```python import torchaudio from speechbrain.pretrained import EncoderClassifier classifier = EncoderClassifier.from_hparams(source="speechbrain/spkrec-xvect-voxceleb", savedir="pretrained_models/spkrec-xvect-voxceleb") signal, fs =torchaudio.load('samples/audio_samples/example1.wav') embeddings = classifier.encode_batch(signal) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (aa018540). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/VoxCeleb/SpeakerRec/ python train_speaker_embeddings.py hparams/train_x_vectors.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1RtCBJ3O8iOCkFrJItCKT9oL-Q1MNCwMH?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. #### Referencing xvectors ```@inproceedings{DBLP:conf/odyssey/SnyderGMSPK18, author = {David Snyder and Daniel Garcia{-}Romero and Alan McCree and Gregory Sell and Daniel Povey and Sanjeev Khudanpur}, title = {Spoken Language Recognition using X-vectors}, booktitle = {Odyssey 2018}, pages = {105--111}, year = {2018}, } ``` # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
speechbrain/urbansound8k_ecapa
2021-06-14T23:23:32.000Z
[ "en", "dataset:Urbansound8k", "arxiv:2106.04624", "embeddings", "Sound", "Keywords", "Keyword Spotting", "pytorch", "ECAPA-TDNN", "TDNN", "Command Recognition", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "classifier.ckpt", "dog_bark.wav", "embedding_model.ckpt", "hyperparams.yaml", "label_encoder.txt", "normalizer.ckpt" ]
speechbrain
6
--- language: "en" thumbnail: tags: - embeddings - Sound - Keywords - Keyword Spotting - pytorch - ECAPA-TDNN - TDNN - Command Recognition license: "apache-2.0" datasets: - Urbansound8k metrics: - Accuracy --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Command Recognition with ECAPA embeddings on UrbanSoudnd8k This repository provides all the necessary tools to perform sound recognition with SpeechBrain using a model pretrained on UrbanSound8k. You can download the dataset [here](https://urbansounddataset.weebly.com/urbansound8k.html) The provided system can recognize the following 10 keywords: ``` dog_bark, children_playing, air_conditioner, street_music, gun_shot, siren, engine_idling, jackhammer, drilling, car_horn ``` For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The given model performance on the test set is: | Release | Accuracy 1-fold (%) |:-------------:|:--------------:| | 04-06-21 | 75.5 | ## Pipeline description This system is composed of a ECAPA model coupled with statistical pooling. A classifier, trained with Categorical Cross-Entropy Loss, is applied on top of that. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform Sound Recognition ```python import torchaudio from speechbrain.pretrained import EncoderClassifier classifier = EncoderClassifier.from_hparams(source="speechbrain/urbansound8k_ecapa", savedir="pretrained_models/gurbansound8k_ecapa") out_prob, score, index, text_lab = classifier.classify_file('speechbrain/urbansound8k_ecapa/dog_bark.wav') print(text_lab) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (8cab8b0c). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/UrbanSound8k/SoundClassification python train.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1sItfg_WNuGX6h2dCs8JTGq2v2QoNTaUg?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. #### Referencing ECAPA ```@inproceedings{DBLP:conf/interspeech/DesplanquesTD20, author = {Brecht Desplanques and Jenthe Thienpondt and Kris Demuynck}, editor = {Helen Meng and Bo Xu and Thomas Fang Zheng}, title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation in {TDNN} Based Speaker Verification}, booktitle = {Interspeech 2020}, pages = {3830--3834}, publisher = {{ISCA}}, year = {2020}, } ``` #### Referencing UrbanSound ```@inproceedings{Salamon:UrbanSound:ACMMM:14, Author = {Salamon, J. and Jacoby, C. and Bello, J. 
#### Referencing ECAPA

```bibtex
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
  author = {Brecht Desplanques and Jenthe Thienpondt and Kris Demuynck},
  editor = {Helen Meng and Bo Xu and Thomas Fang Zheng},
  title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation in {TDNN} Based Speaker Verification},
  booktitle = {Interspeech 2020},
  pages = {3830--3834},
  publisher = {{ISCA}},
  year = {2020},
}
```

#### Referencing UrbanSound

```bibtex
@inproceedings{Salamon:UrbanSound:ACMMM:14,
  Author = {Salamon, J. and Jacoby, C. and Bello, J. P.},
  Booktitle = {22nd {ACM} International Conference on Multimedia (ACM-MM'14)},
  Month = {Nov.},
  Pages = {1041--1044},
  Title = {A Dataset and Taxonomy for Urban Sound Research},
  Year = {2014}}
```

# **Citing SpeechBrain**

Please, cite SpeechBrain if you use it for your research or business.

```bibtex
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```