modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-26 00:41:46) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 533 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-26 00:38:54) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
yluisfern/FDR
|
yluisfern
| 2021-04-02T16:40:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
https://www.geogebra.org/m/cwcveget
https://www.geogebra.org/m/b8dzxk6z
https://www.geogebra.org/m/nqanttum
https://www.geogebra.org/m/pd3g8a4u
https://www.geogebra.org/m/jw8324jz
https://www.geogebra.org/m/wjbpvz5q
https://www.geogebra.org/m/qm3g3ma6
https://www.geogebra.org/m/sdajgph8
https://www.geogebra.org/m/e3ghhcbf
https://www.geogebra.org/m/msne4bfm
https://www.geogebra.org/m/nmcv2te5
https://www.geogebra.org/m/hguqx6cn
https://www.geogebra.org/m/jnyvpgqu
https://www.geogebra.org/m/syctd97g
https://www.geogebra.org/m/nq9erdby
https://www.geogebra.org/m/au4har8c
https://network.aza.org/network/members/profile?UserKey=811de229-7f08-4360-863c-ac04181ba9c0
https://network.aza.org/network/members/profile?UserKey=31b495a0-36f7-4a50-ba3e-d76e3487278c
https://network.aza.org/network/members/profile?UserKey=753c0ddd-bded-4b03-8c68-11dacdd1f676
https://network.aza.org/network/members/profile?UserKey=db9d0a25-1615-4e39-b61f-ad68766095b3
https://network.aza.org/network/members/profile?UserKey=59279f52-50cf-4686-9fb0-9ab613211ead
https://network.aza.org/network/members/profile?UserKey=67b3ce20-cc3a-420f-8933-10796f301060
https://network.aza.org/network/members/profile?UserKey=f5e610c3-6400-4429-b42b-97eeeeb284a9
https://network.aza.org/network/members/profile?UserKey=ccda0739-f5f5-4ecc-a729-77c9a6825897
https://network.aza.org/network/members/profile?UserKey=3983471f-cf43-4a4a-90d3-148040f92dd9
https://network.aza.org/network/members/profile?UserKey=9f16d7a8-3502-4904-a99a-38362de78973
https://network.aza.org/network/members/profile?UserKey=961981d5-9743-44ac-8525-d4c8b708eb5a
https://network.aza.org/network/members/profile?UserKey=178276d7-c64d-408e-af52-96d1ebd549fc
|
mami/santuycuy
|
mami
| 2021-04-02T15:17:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
https://zambiainc.com/advert/uptobox-sub-bg-nobody-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4/
https://zambiainc.com/advert/uptobox-sub-bg-%d1%80%d0%b0%d1%8f-%d0%b8-%d0%bf%d0%be%d1%81%d0%bb%d0%b5%d0%b4%d0%bd%d0%b8%d1%8f%d1%82-%d0%b4%d1%80%d0%b0%d0%ba%d0%be%d0%bd-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8/
https://zambiainc.com/advert/uptobox-sub-bg-chaos-walking-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-%d0%b6%d0%b5%d0%bb%d1%8f%d0%b7%d0%bd%d0%b0%d1%82%d0%b0-%d0%b7%d0%b0%d0%b2%d0%b5%d1%81%d0%b0-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb/
https://zambiainc.com/advert/uptobox-sub-bg-%d0%ba%d1%80%d1%83%d0%b4-2-2020-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3/
https://zambiainc.com/advert/uptobox-sub-bg-the-marksman-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-boogie-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4/
https://zambiainc.com/advert/uptobox-sub-bg-minari-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4/
https://zambiainc.com/advert/uptobox-sub-bg-%d0%bc%d0%be%d0%bc%d0%b8%d1%87%d0%b5-%d1%81-%d0%bf%d0%be%d1%82%d0%b5%d0%bd%d1%86%d0%b8%d0%b0%d0%bb-2020-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3/
https://zambiainc.com/advert/uptobox-sub-bg-monster-hunter-2020-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0/
https://zambiainc.com/advert/uptobox-sub-bg-%d0%b7%d0%b5%d0%bc%d1%8f-%d0%bd%d0%b0-%d0%bd%d0%be%d0%bc%d0%b0%d0%b4%d0%b8-2020-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5/
https://zambiainc.com/advert/uptobox-sub-bg-%d0%b2%d0%be%d0%b9%d0%bd%d0%b0%d1%82%d0%b0-%d1%81-%d0%b4%d1%8f%d0%b4%d0%be-2020-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5/
https://zambiainc.com/advert/uptobox-sub-bg-%d0%bd%d0%be%d0%b2%d0%b8%d0%bd%d0%b8-%d0%be%d1%82-%d1%81%d0%b2%d0%b5%d1%82%d0%b0-2020-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd/
https://zambiainc.com/advert/uptobox-sub-bg-six-minutes-to-midnight-2020-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3/
https://zambiainc.com/advert/uptobox-sub-bg-dutch-2020-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4/
https://zambiainc.com/advert/uptobox-sub-bg-lamb-of-god-the-concert-film-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1/
https://zambiainc.com/advert/uptobox-sub-bg-long-weekend-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-mystery-of-the-kingdom-of-god-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1/
https://zambiainc.com/advert/uptobox-sub-bg-%d0%b7%d0%b0%d1%82%d0%b2%d0%be%d1%80%d0%bd%d0%b8%d0%ba-760-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd/
https://zambiainc.com/advert/uptobox-sub-bg-dark-state-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-zack-snyders-justice-league-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1/
https://zambiainc.com/advert/uptobox-sub-bg-%d0%b3%d0%be%d0%b4%d0%b7%d0%b8%d0%bb%d0%b0-%d1%81%d1%80%d0%b5%d1%89%d1%83-%d0%ba%d0%be%d0%bd%d0%b3-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3/
https://zambiainc.com/advert/uptobox-sub-bg-bad-trip-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-%d1%82%d0%be%d0%bc-%d0%b8-%d0%b4%d0%b6%d0%b5%d1%80%d0%b8-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb/
https://zambiainc.com/advert/uptobox-sub-bg-skylines-2020-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-the-little-things-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0/
https://zambiainc.com/advert/uptobox-sub-bg-the-little-things-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0/
https://zambiainc.com/advert/uptobox-sub-bg-sentinelle-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-the-unholy-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-%d1%81%d0%bc%d1%8a%d1%80%d1%82%d0%be%d0%bd%d0%be%d1%81%d0%bd%d0%b0-%d0%b1%d0%b8%d1%82%d0%ba%d0%b0-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3/
https://zambiainc.com/advert/uptobox-sub-bg-assault-on-va-33-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0/
https://zambiainc.com/advert/uptobox-sub-bg-vanquish-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-voyagers-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-stowaway-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-thunder-force-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-in-search-of-tomorrow-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3/
https://zambiainc.com/advert/uptobox-sub-bg-arlo-the-alligator-boy-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3/
https://zambiainc.com/advert/uptobox-sub-bg-the-nameless-days-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0/
https://zambiainc.com/advert/uptobox-sub-bg-the-banishing-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-%d0%b1%d0%b0%d1%89%d0%b8%d0%bd%d1%81%d1%82%d0%b2%d0%be-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb/
https://zambiainc.com/advert/uptobox-sub-bg-bananza-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4/
https://zambiainc.com/advert/uptobox-sub-bg-bonhoeffer-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-held-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4/
https://zambiainc.com/advert/uptobox-sub-bg-held-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4-2/
https://zambiainc.com/advert/uptobox-sub-bg-00k9-no-time-to-shed-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3/
https://zambiainc.com/advert/uptobox-sub-bg-between-us-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-the-believer-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-limbo-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4/
https://zambiainc.com/advert/uptobox-sub-bg-things-heard-seen-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0/
https://zambiainc.com/advert/uptobox-sub-bg-free-byrd-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
https://zambiainc.com/advert/uptobox-sub-bg-the-workplace-2021-%d0%b1%d0%b3-%d0%b0%d1%83%d0%b4%d0%b8%d0%be-%d0%b8%d0%b7%d1%82%d0%b5%d0%b3%d0%bb%d1%8f%d0%bd%d0%b5-%d0%be%d0%bd%d0%bb%d0%b0%d0%b9%d0%bd-%d0%b1%d0%b3-%d0%b0%d1%83/
|
sammy786/wav2vec2-large-xlsr-mongolian
|
sammy786
| 2021-04-02T11:36:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"mn",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: mn
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Mongolian by Salim Shaikh
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mn
type: common_voice
args: mn
metrics:
- name: Test WER
type: wer
value: 38.14
---
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "sammy786/wav2vec2-large-xlsr-mongolian"
device = "cuda"
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\)\\(\\*)]'
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "mn", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Test Result**: 38.14 %
|
gorave/gorave
|
gorave
| 2021-03-31T18:02:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
https://www.geogebra.org/m/awcxgj4g
https://www.geogebra.org/m/tx9tme6s
https://www.geogebra.org/m/yx5yyjmx
|
lighteternal/SSE-TUC-mt-en-el-lowercase
|
lighteternal
| 2021-03-31T17:27:32Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"fsmt",
"text2text-generation",
"translation",
"en",
"el",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- en
- el
tags:
- translation
widget:
- text: "Not all those who wander are lost."
license: apache-2.0
metrics:
- bleu
---
## English to Greek NMT (lower-case output)
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
* source languages: en
* target languages: el
* licence: apache-2.0
* dataset: Opus, CCmatrix
* model: transformer(fairseq)
* pre-processing: tokenization + lower-casing + BPE segmentation
* metrics: bleu, chrf
* output: lowercase only; for a mixed-case model use: https://huggingface.co/lighteternal/SSE-TUC-mt-en-el-cased
### Model description
Trained using the Fairseq framework, transformer_iwslt_de_en architecture.
BPE segmentation (10k codes).
Lower-case model.
### How to use
```python
from transformers import FSMTTokenizer, FSMTForConditionalGeneration
mname = " <your_downloaded_model_folderpath_here> "
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
text = "Not all those who wander are lost."
encoded = tokenizer.encode(text, return_tensors='pt')
outputs = model.generate(encoded, num_beams=5, num_return_sequences=5, early_stopping=True)
for i, output in enumerate(outputs):
i += 1
print(f"{i}: {output.tolist()}")
decoded = tokenizer.decode(output, skip_special_tokens=True)
print(f"{i}: {decoded}")
```
## Training data
Consolidated corpus from Opus and CC-Matrix (~6.6GB in total)
## Eval results
Results on Tatoeba testset (EN-EL):
| BLEU | chrF |
| ------ | ------ |
| 77.3 | 0.739 |
Results on XNLI parallel (EN-EL):
| BLEU | chrF |
| ------ | ------ |
| 66.1 | 0.606 |
### BibTeX entry and citation info
Dimitris Papadopoulos, et al. "PENELOPIE: Enabling Open Information Extraction for the Greek Language through Machine Translation." (2021). Accepted at EACL 2021 SRW
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call).
|
lighteternal/SSE-TUC-mt-en-el-cased
|
lighteternal
| 2021-03-31T17:27:05Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"fsmt",
"text2text-generation",
"translation",
"en",
"el",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- en
- el
tags:
- translation
widget:
- text: "'Katerina', is the best name for a girl."
license: apache-2.0
metrics:
- bleu
---
## English to Greek NMT
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
* source languages: en
* target languages: el
* licence: apache-2.0
* dataset: Opus, CCmatrix
* model: transformer(fairseq)
* pre-processing: tokenization + BPE segmentation
* metrics: bleu, chrf
### Model description
Trained using the Fairseq framework, transformer_iwslt_de_en architecture.
BPE segmentation (20k codes).
Mixed-case model.
### How to use
```python
from transformers import FSMTTokenizer, FSMTForConditionalGeneration
mname = "lighteternal/SSE-TUC-mt-en-el-cased"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
text = " 'Katerina', is the best name for a girl."
encoded = tokenizer.encode(text, return_tensors='pt')
outputs = model.generate(encoded, num_beams=5, num_return_sequences=5, early_stopping=True)
for i, output in enumerate(outputs):
i += 1
print(f"{i}: {output.tolist()}")
decoded = tokenizer.decode(output, skip_special_tokens=True)
print(f"{i}: {decoded}")
```
## Training data
Consolidated corpus from Opus and CC-Matrix (~6.6GB in total)
## Eval results
Results on Tatoeba testset (EN-EL):
| BLEU | chrF |
| ------ | ------ |
| 76.9 | 0.733 |
Results on XNLI parallel (EN-EL):
| BLEU | chrF |
| ------ | ------ |
| 65.4 | 0.624 |
### BibTeX entry and citation info
Dimitris Papadopoulos, et al. "PENELOPIE: Enabling Open Information Extraction for the Greek Language through Machine Translation." (2021). Accepted at EACL 2021 SRW
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call).
|
lighteternal/SSE-TUC-mt-el-en-cased
|
lighteternal
| 2021-03-31T17:26:16Z | 43 | 2 |
transformers
|
[
"transformers",
"pytorch",
"fsmt",
"text2text-generation",
"translation",
"en",
"el",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- en
- el
tags:
- translation
widget:
- text: "Ο όρος τεχνητή νοημοσύνη αναφέρεται στον κλάδο της πληροφορικής ο οποίος ασχολείται με τη σχεδίαση και την υλοποίηση υπολογιστικών συστημάτων που μιμούνται στοιχεία της ανθρώπινης συμπεριφοράς. "
license: apache-2.0
metrics:
- bleu
---
## Greek to English NMT
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
* source languages: el
* target languages: en
* licence: apache-2.0
* dataset: Opus, CCmatrix
* model: transformer(fairseq)
* pre-processing: tokenization + BPE segmentation
* metrics: bleu, chrf
### Model description
Trained using the Fairseq framework, transformer_iwslt_de_en architecture.
BPE segmentation (20k codes).
Mixed-case model.
### How to use
```python
from transformers import FSMTTokenizer, FSMTForConditionalGeneration
mname = "lighteternal/SSE-TUC-mt-el-en-cased"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
text = "Ο όρος τεχνητή νοημοσύνη αναφέρεται στον κλάδο της πληροφορικής ο οποίος ασχολείται με τη σχεδίαση και την υλοποίηση υπολογιστικών συστημάτων που μιμούνται στοιχεία της ανθρώπινης συμπεριφοράς ."
encoded = tokenizer.encode(text, return_tensors='pt')
outputs = model.generate(encoded, num_beams=5, num_return_sequences=5, early_stopping=True)
for i, output in enumerate(outputs):
i += 1
print(f"{i}: {output.tolist()}")
decoded = tokenizer.decode(output, skip_special_tokens=True)
print(f"{i}: {decoded}")
```
## Training data
Consolidated corpus from Opus and CC-Matrix (~6.6GB in total)
## Eval results
Results on Tatoeba testset (EL-EN):
| BLEU | chrF |
| ------ | ------ |
| 79.3 | 0.795 |
Results on XNLI parallel (EL-EN):
| BLEU | chrF |
| ------ | ------ |
| 66.2 | 0.623 |
### BibTeX entry and citation info
Dimitris Papadopoulos, et al. "PENELOPIE: Enabling Open Information Extraction for the Greek Language through Machine Translation." (2021). Accepted at EACL 2021 SRW
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call).
|
katoensp/GG-12
|
katoensp
| 2021-03-30T15:55:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
https://www.geogebra.org/m/cwcveget
https://www.geogebra.org/m/b8dzxk6z
https://www.geogebra.org/m/nqanttum
https://www.geogebra.org/m/pd3g8a4u
https://www.geogebra.org/m/jw8324jz
https://www.geogebra.org/m/wjbpvz5q
https://www.geogebra.org/m/qm3g3ma6
https://www.geogebra.org/m/sdajgph8
https://www.geogebra.org/m/e3ghhcbf
https://www.geogebra.org/m/msne4bfm
https://www.geogebra.org/m/nmcv2te5
https://www.geogebra.org/m/hguqx6cn
https://www.geogebra.org/m/jnyvpgqu
https://www.geogebra.org/m/syctd97g
https://www.geogebra.org/m/nq9erdby
https://www.geogebra.org/m/au4har8c
|
londogard/flair-swe-ner
|
londogard
| 2021-03-29T08:06:38Z | 13 | 0 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"sv",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: sv
datasets:
- SUC 3.0
widget:
- text: "Hampus bor i Skåne och har levererat denna model idag."
---
Published with ❤️ from [londogard](https://londogard.com).
## Swedish NER in Flair (SUC 3.0)
F1-Score: **85.6** (SUC 3.0)
Predicts 8 tags:
|**Tag**|**Meaning**|
|---|---|
| PRS| person name |
| ORG | organisation name|
| TME | time unit |
| WRK | building name |
| LOC | location name |
| EVN | event name |
| MSR | measurement unit |
| OBJ | object (e.g. "Rolls-Royce" as the name of a specific car) |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("londogard/flair-swe-ner")
# make example sentence
sentence = Sentence("Hampus bor i Skåne och har levererat denna model idag.")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [0]: "Hampus" [− Labels: PRS (1.0)]
Span [3]: "Skåne" [− Labels: LOC (1.0)]
Span [9]: "idag" [− Labels: TME(1.0)]
```
So, the entities "_Hampus_" (labeled as a **PRS**), "_Skåne_" (labeled as a **LOC**), "_idag_" (labeled as a **TME**) are found in the sentence "_Hampus bor i Skåne och har levererat denna model idag._".
---
**Please mention londogard if you use this model.**
|
vasilis/wav2vec2-large-xlsr-53-finnish
|
vasilis
| 2021-03-29T02:30:18Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: fi
datasets:
- common_voice
- CSS10 finnish: Single Speaker Speech Dataset
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: V XLSR Wav2Vec2 Large 53 - finnish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fi
type: common_voice
args: fi
metrics:
- name: Test WER
type: wer
value: 38.335242
- name: Test CER
type: cer
value: 6.552408
---
# Wav2Vec2-Large-XLSR-53-finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice) and [CSS10 Finnish: Single Speaker Speech Dataset](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fi", split="test[:2%]") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the finnish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
model.to("cuda")
chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']" # TODO: adapt this list to include all special characters you removed from the data
replacements = {"…": "", "–": ''}
resampler = {
48_000: torchaudio.transforms.Resample(48_000, 16_000),
44100: torchaudio.transforms.Resample(44100, 16_000),
32000: torchaudio.transforms.Resample(32000, 16_000)
}
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
for key, value in replacements.items():
batch["sentence"] = batch["sentence"].replace(key, value)
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**Test Result**: 38.335242 %
## Training
The Common Voice train dataset was used for training, along with all of `CSS10 Finnish` using the normalized transcripts.
After 20,000 steps the model was fine-tuned on the Common Voice train and validation sets for 2,000 more steps.
|
wietsedv/wav2vec2-large-xlsr-53-frisian
|
wietsedv
| 2021-03-28T20:09:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: fy-NL
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Frisian XLSR Wav2Vec2 Large 53 by Wietse de Vries
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fy-NL
type: common_voice
args: fy-NL
metrics:
- name: Test WER
type: wer
value: 16.25
---
# Wav2Vec2-Large-XLSR-53-Frisian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Frisian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fy-NL", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Frisian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fy-NL", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\'\“\%\‘\”]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 16.25 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
pcuenq/wav2vec2-large-xlsr-53-es
|
pcuenq
| 2021-03-28T19:06:18Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"es",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: es
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Spanish by pcuenq
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice es
type: common_voice
args: es
metrics:
- name: Test WER
type: wer
value: 10.50
---
# Wav2Vec2-Large-XLSR-53-Spanish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "es", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es")
model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Spanish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "es", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es")
model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es")
model.to("cuda")
## Text pre-processing
chars_to_ignore_regex = '[\,\¿\?\.\¡\!\-\;\:\"\“\%\‘\”\\…\’\ː\'\‹\›\`\´\®\—\→]'
chars_to_ignore_pattern = re.compile(chars_to_ignore_regex)
def remove_special_characters(batch):
batch["sentence"] = chars_to_ignore_pattern.sub('', batch["sentence"]).lower() + " "
return batch
def replace_diacritics(batch):
sentence = batch["sentence"]
sentence = re.sub('ì', 'í', sentence)
sentence = re.sub('ù', 'ú', sentence)
sentence = re.sub('ò', 'ó', sentence)
sentence = re.sub('à', 'á', sentence)
batch["sentence"] = sentence
return batch
def replace_additional(batch):
sentence = batch["sentence"]
sentence = re.sub('ã', 'a', sentence) # Portuguese, as in São Paulo
sentence = re.sub('ō', 'o', sentence) # Japanese
sentence = re.sub('ê', 'e', sentence) # Português
batch["sentence"] = sentence
return batch
## Audio pre-processing
# I tried to perform the resampling using a `torchaudio` `Resampler` transform,
# but found that the process deadlocked when using multiple processes.
# Perhaps my torchaudio is using the wrong sox library under the hood, I'm not sure.
# Fortunately, `librosa` seems to work fine, so that's what I'll use for now.
import librosa
def speech_file_to_array_fn(batch):
speech_array, sample_rate = torchaudio.load(batch["path"])
batch["speech"] = librosa.resample(speech_array.squeeze().numpy(), sample_rate, 16_000)
return batch
# One-pass mapping function
# Text transformation and audio resampling
def cv_prepare(batch):
batch = remove_special_characters(batch)
batch = replace_diacritics(batch)
batch = replace_additional(batch)
batch = speech_file_to_array_fn(batch)
return batch
# Number of CPUs or None
num_proc = 16
test_dataset = test_dataset.map(cv_prepare, remove_columns=['path'], num_proc=num_proc)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
# WER Metric computation
# `wer.compute` crashes in my computer with more than ~10000 samples.
# Until I confirm in a different one, I created a "chunked" version of the computation.
# It gives the same results as `wer.compute` for smaller datasets.
import jiwer
def chunked_wer(targets, predictions, chunk_size=None):
if chunk_size is None: return jiwer.wer(targets, predictions)
start = 0
end = chunk_size
H, S, D, I = 0, 0, 0, 0
while start < len(targets):
chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
H = H + chunk_metrics["hits"]
S = S + chunk_metrics["substitutions"]
D = D + chunk_metrics["deletions"]
I = I + chunk_metrics["insertions"]
start += chunk_size
end += chunk_size
return float(S + D + I) / float(H + S + D)
print("WER: {:2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000)))
#print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 10.50 %
## Text processing
The Common Voice `es` dataset has a lot of characters that don't belong to the Spanish language, even after discarding separators and punctuation. I made some character replacements and discarded most of the extraneous characters.
I decided to keep all the Spanish-language diacritics. This was a difficult decision. Sometimes diacritics are added just because of orthography rules and don't alter the meaning of the word; in other cases, however, they carry meaning, as they disambiguate between different senses. A better WER score would surely have been achieved using just the non-accented characters, and the resulting text would still be understood by Spanish speakers. Nevertheless, I think keeping them is "more correct".
All the rules I applied are shown in the evaluation script.
## Training
The Common Voice `train` and `validation` datasets were used for training.
For dataset handling reasons, I initially split `train`+`validation` in 10% splits so I could see progress earlier and react if needed.
* I trained for 30 epochs on the first split only, using values similar to the ones proposed by Patrick in his demo notebook. I used a batch_size of 24 with 2 gradient accumulation steps. This gave a WER of about 16.3% on the full test set.
* I then trained the resulting model on the 9 remaining splits, for 3 epochs each, but with a faster warmup of 75 steps.
* Next, I trained 3 epochs on each of the 10 splits using a smaller learning rate of `1e-4`. A warmup of 75 steps was used in this case too. The final model had a WER of about 11.7%.
* By this time we had already figured out the reason for the initial delay in training time, and I decided to use the full dataset for training. However, in my tests I had seen that varying the learning rate seemed to work well, so I wanted to replicate that. I selected a cosine schedule with hard restarts, a reference learning rate of `3e-5` and 10 epochs. I configured the cosine schedule to have 10 cycles too, and used no warmup. This produced a WER of ~10.5%. A minimal sketch of this schedule is shown below.
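As a rough illustration of that last configuration, here is a hedged sketch (not the script actually used to train this model) of how a cosine schedule with hard restarts, no warmup and a `3e-5` base learning rate can be built with `transformers`; the total step count is a placeholder.
```python
# Hedged sketch of the schedule described above; not the author's training script.
import torch
from transformers import Wav2Vec2ForCTC, get_cosine_with_hard_restarts_schedule_with_warmup

model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

num_training_steps = 10_000  # placeholder; in practice len(train_dataloader) * num_epochs
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,                     # no warmup, as described above
    num_training_steps=num_training_steps,
    num_cycles=10,                          # 10 restart cycles over 10 epochs
)

# In the training loop: optimizer.step(); scheduler.step(); optimizer.zero_grad()
```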
## Other things I tried
* Starting from the same fine-tuned model, I compared a constant lr of 1e-4 against a linear schedule with warmup. The linear schedule worked better (11.85 vs 12.72 WER%).
* I tried to use a Spanish model to improve a Basque one. I transformed the text to make the orthography more similar to the target language, but the Basque model did not improve.
* Label smoothing did not work.
## Issues and other technical challenges
I had previously used the `transformers` library as an end user, just to try BERT on some tasks, but this is the first time I have needed to look into the code.
* The `Datasets` abstraction is great because, being based on memory-mapped files, it allows arbitrarily-sized datasets to be processed. However, it is important to understand its limitations and trade-offs. I found caching convenient, but disk usage explodes fast. I keep the datasets for my current projects in a 1 TB, fast SSD disk, and a couple of times I ran out of space. I had to understand how cache files are stored and learn when it's best to disable caching and manually save when you need to. I found that data exploration is better suited for smaller datasets or sampled ones, but actual processing is most efficient when you have identified the transformations you need and apply them in a single `map` operation.
* There was a noticeable delay before training started. Fortunately, we found the reason why, discussed it in Slack and the forums and created a workaround.
* The WER metric crashed on large datasets. I evaluated on a small sample (also, it's faster) and wrote an accumulative version of wer that runs on fixed memory. I'd like to verify whether this change makes sense to be used inside the training loop.
* `torchaudio` deadlocks when using multiple processes. `librosa` works fine. To be investigated.
* When using `num_proc` inside a notebook, I could not see progress bars. This is surely some permissions issue on my computer; I still need to figure it out.
|
vasudevgupta/mbart-summarizer-interiit
|
vasudevgupta
| 2021-03-28T17:49:15Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
This model was trained as part of the **InterIIT'21 competition**, on the dataset provided by Bridgei2i. It can perform multilingual (Hindi, English, Hinglish) summarization (many-to-one) and generates summaries in English regardless of the input language.
| Rouge-L | Sacrebleu | Headline Similarity (using sentence-transformers) |
|-----------------------|-----------|---------------------------------------------------|
| p=0.46 r=0.49 f1=0.52 | 23.46 | 0.75 |
mBART is initialized from **facebook/mbart-large-cc25** and trained following the strategy described in our [GitHub](https://github.com/vasudevgupta7/Bridgei2i-Winning-Solutions) repository.
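The card does not include a usage snippet. Below is a hedged sketch (not part of the original card) that assumes the checkpoint loads with the generic `transformers` Auto classes and that a plain `generate` call produces the English summary; the input string is placeholder text.
```python
# Hedged usage sketch; loading via the Auto classes is an assumption.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "vasudevgupta/mbart-summarizer-interiit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "RBI ne is hafte repo rate mein koi badlav nahi kiya ..."  # placeholder Hinglish input
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))  # English summary
```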
|
vasilis/wav2vec2-large-xlsr-53-greek
|
vasilis
| 2021-03-26T23:51:48Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"el",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: el
datasets:
- common_voice
- CSS10 Greek: Single Speaker Speech Dataset
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: V XLSR Wav2Vec2 Large 53 - greek
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice el
type: common_voice
args: el
metrics:
- name: Test WER
type: wer
value: 18.996669
- name: Test CER
type: cer
value: 5.781874
---
# Wav2Vec2-Large-XLSR-53-greek
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice) and [CSS10 Greek: Single Speaker Speech Dataset](https://www.kaggle.com/bryanpark/greek-single-speaker-speech-dataset).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "el", split="test[:2%]") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-greek") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-greek") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Greek test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "el", split="test") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-greek") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-greek") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' # TODO: adapt this list to include all special characters you removed from the data
normalize_greek_letters = {"ς": "σ"}
# normalize_greek_letters = {"ά": "α", "έ": "ε", "ί": "ι", 'ϊ': "ι", "ύ": "υ", "ς": "σ", "ΐ": "ι", 'ϋ': "υ", "ή": "η", "ώ": "ω", 'ό': "ο"}
remove_chars_greek = {"a": "", "h": "", "n": "", "g": "", "o": "", "v": "", "e": "", "r": "", "t": "", "«": "", "»": "", "m": "", '́': '', "·": "", "’": "", '´': ""}
replacements = {**normalize_greek_letters, **remove_chars_greek}
resampler = {
48_000: torchaudio.transforms.Resample(48_000, 16_000),
44100: torchaudio.transforms.Resample(44100, 16_000),
32000: torchaudio.transforms.Resample(32000, 16_000)
}
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
for key, value in replacements.items():
batch["sentence"] = batch["sentence"].replace(key, value)
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**Test Result**: 18.996669 %
## Training
The Common Voice train dataset was used for training, along with all of `CSS10 Greek` using the normalized transcripts.
During text preprocessing the letter `ς` is normalized to `σ`, because both letters sound the same and `ς` is only used as the final character of a word, so the change can easily be mapped back to proper spelling. I also tried removing all accents from letters, which improved `WER` significantly; the model was easily reaching `17%` WER without having converged. However, the text post-processing needed afterwards to fix the transcriptions would be more complicated (a language model should fix that easily, though). Another thing worth trying would be to map all of `ι`, `η`, etc. to a single character, since they all sound the same, and similarly for `ο` and `ω`; this should help the acoustic model significantly, since all these characters map to the same sound, but further text normalization would be needed.
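As a purely hypothetical sketch of that last idea (it is not part of the evaluation script above and was not used for the reported results), like-sounding vowels could be merged with an extra replacement table:
```python
# Hypothetical extension of the `replacements` dict above: merge Greek letters
# that map to the same sound. Illustrative only; not used for the reported WER/CER.
merge_same_sound = {
    "ει": "ι", "οι": "ι", "η": "ι", "υ": "ι",  # all pronounced /i/
    "ω": "ο",                                  # pronounced /o/, like "ο"
}

def merge_vowels(sentence: str) -> str:
    # Replace the digraphs first so "ει"/"οι" are handled before single letters.
    for src, dst in sorted(merge_same_sound.items(), key=lambda kv: -len(kv[0])):
        sentence = sentence.replace(src, dst)
    return sentence

print(merge_vowels("ο ωκεανος ειναι ησυχος"))  # -> "ο οκεανος ιναι ισιχος"
```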
|
trueto/medalbert-base-wwm-chinese
|
trueto
| 2021-03-26T05:33:51Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# [medbert](https://github.com/trueto/medbert)
This project open-sources the models from the master's thesis "Exploration and Research on the Application of BERT Models in Chinese Clinical Natural Language Processing".
## Evaluation benchmarks
We built a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER),
a Chinese medical question-question matching dataset (CMedQQ) and a Chinese clinical text classification dataset (CCTC).
| **Dataset** | **Train** | **Dev** | **Test** | **Task type** | **Corpus source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Healthcare |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |
## Released models
MedBERT and MedAlbert were obtained by pre-training BERT and Albert models on a corpus of 650 million characters of Chinese clinical natural language text.
## Performance
Performance of each model in the same experimental environment, with the same training parameters and scripts:
| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
|MedBERT-wwm| **82.60%** | 67.11% | 88.02% | 81.72% |
|MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
|- | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
|MedAlbert-wwm| **81.28%** | **64.12%** | **87.71%** | **80.46%** |
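## Usage
The original card does not include a usage snippet. The following is a hedged sketch (not from the model authors) that assumes the checkpoint loads with the generic `transformers` Auto classes; the example sentence is placeholder text.
```python
# Hedged loading sketch; whether AutoTokenizer/AutoModel resolve correctly for
# this repository is an assumption, not something stated in the card.
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "trueto/medalbert-base-wwm-chinese"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# "The patient reports a three-day headache."
inputs = tokenizer("患者主诉头痛三天。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # contextual embeddings for downstream clinical NLP tasks
```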
## Citation
```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
```
|
trueto/medalbert-base-chinese
|
trueto
| 2021-03-26T05:29:51Z | 2 | 4 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# [medbert](https://github.com/trueto/medbert)
This project open-sources the models from the master's thesis "Exploration and Research on the Application of BERT Models in Chinese Clinical Natural Language Processing".
## Evaluation benchmarks
We built a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER),
a Chinese medical question-question matching dataset (CMedQQ) and a Chinese clinical text classification dataset (CCTC).
| **Dataset** | **Train** | **Dev** | **Test** | **Task type** | **Corpus source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Healthcare |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |
## Released models
MedBERT and MedAlbert were obtained by pre-training BERT and Albert models on a corpus of 650 million characters of Chinese clinical natural language text.
## Performance
Performance of each model in the same experimental environment, with the same training parameters and scripts:
| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
|MedBERT-wwm| **82.60%** | 67.11% | 88.02% | 81.72% |
|MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
|- | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
|MedAlbert-wwm| **81.28%** | **64.12%** | **87.71%** | **80.46%** |
## Citation
```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
```
|
navteca/quora-roberta-base
|
navteca
| 2021-03-25T16:10:08Z | 4,293 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:quora",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
datasets:
- quora
language: en
license: mit
pipeline_tag: text-classification
tags:
- roberta
- text-classification
---
# Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
This model uses [roberta-base](https://huggingface.co/roberta-base).
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset.
The model will predict a score between 0 and 1: How likely the two given questions are duplicates.
Note: The model is not suitable for estimating the similarity of questions; e.g., the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates.
## Usage and Performance
The trained model can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
print(scores)
```
|
theainerd/wav2vec2-large-xlsr-53-odia
|
theainerd
| 2021-03-24T08:43:37Z | 1,831 | 3 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"or",
"dataset:OpenSLR",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: or
datasets:
- OpenSLR
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Odia by Shyam Sunder Kumar
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR
type: OpenSLR
args: or
metrics:
- name: Test WER
type: wer
value: 68.75
---
# Wav2Vec2-Large-XLSR-53-Odia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using data from the [Multilingual and code-switching ASR challenges for low resource Indian languages](https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "or", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "or", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 68.75 %
## Training
The script used for training can be found [Odia ASR Fine Tuning Wav2Vec2](https://colab.research.google.com/drive/1aHpFRTxaBeNblRHAtYOy0hBeXbbMWtot?usp=sharing)
|
DarshanDeshpande/marathi-distilbert
|
DarshanDeshpande
| 2021-03-23T08:20:29Z | 8 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"distilbert",
"fill-mask",
"mr",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- mr
tags:
- fill-mask
license: apache-2.0
datasets:
- Oscar Corpus, News, Stories
widget:
- text: "हा खरोखर चांगला [MASK] आहे."
---
# Marathi DistilBERT
## Model description
This model is an adaptation of DistilBERT (Victor Sanh et al., 2019) for the Marathi language. This version of Marathi-DistilBERT is trained from scratch on approximately 11.2 million sentences.
```
DISCLAIMER
This model has not been thoroughly tested and may contain biased opinions or inappropriate language. User discretion is advised
```
## Training data
The training data has been extracted from a variety of sources, mainly including:
1. Oscar Corpus
2. Marathi Newspapers
3. Marathi storybooks and articles
The data was cleaned by removing all languages other than Marathi, while preserving common punctuation.
## Training procedure
The model is trained from scratch using an Adam optimizer with a learning rate of 1e-4 and default β1 and β2 values of 0.9 and 0.999 respectively, with a total batch size of 256 on a v3-8 TPU and a mask probability of 15%.
## Example
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="DarshanDeshpande/marathi-distilbert",
tokenizer="DarshanDeshpande/marathi-distilbert",
)
fill_mask("हा खरोखर चांगला [MASK] आहे.")
```
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<h3>Authors </h3>
<h5>1. Darshan Deshpande: <a href="https://github.com/DarshanDeshpande">GitHub</a>, <a href="https://www.linkedin.com/in/darshan-deshpande/">LinkedIn</a></h5>
<h5>2. Harshavardhan Abichandani: <a href="https://github.com/Baras64">GitHub</a>, <a href="http://www.linkedin.com/in/harsh-abhi">LinkedIn</a></h5>
|
tuner007/pegasus_paraphrase
|
tuner007
| 2021-03-22T21:11:33Z | 74,495 | 182 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"paraphrasing",
"seq2seq",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
license: apache-2.0
tags:
- pegasus
- paraphrasing
- seq2seq
---
## Model description
[PEGASUS](https://github.com/google-research/pegasus) fine-tuned for paraphrasing
## Model in Action 🚀
```
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
model_name = 'tuner007/pegasus_paraphrase'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
def get_response(input_text,num_return_sequences,num_beams):
batch = tokenizer([input_text],truncation=True,padding='longest',max_length=60, return_tensors="pt").to(torch_device)
translated = model.generate(**batch,max_length=60,num_beams=num_beams, num_return_sequences=num_return_sequences, temperature=1.5)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
return tgt_text
```
#### Example:
```
num_beams = 10
num_return_sequences = 10
context = "The ultimate test of your knowledge is your capacity to convey it to another."
get_response(context,num_return_sequences,num_beams)
# output:
['The test of your knowledge is your ability to convey it.',
'The ability to convey your knowledge is the ultimate test of your knowledge.',
'The ability to convey your knowledge is the most important test of your knowledge.',
'Your capacity to convey your knowledge is the ultimate test of it.',
'The test of your knowledge is your ability to communicate it.',
'Your capacity to convey your knowledge is the ultimate test of your knowledge.',
'Your capacity to convey your knowledge to another is the ultimate test of your knowledge.',
'Your capacity to convey your knowledge is the most important test of your knowledge.',
'The test of your knowledge is how well you can convey it.',
'Your capacity to convey your knowledge is the ultimate test.']
```
> Created by [Arpit Rajauria](https://twitter.com/arpit_rajauria)
[](https://twitter.com/arpit_rajauria)
|
tugstugi/wav2vec2-large-xlsr-53-mongolian
|
tugstugi
| 2021-03-22T07:19:25Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"mn",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: mn
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Mongolian by Tugstugi
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mn
type: common_voice
args: mn
metrics:
- name: Test WER
type: wer
value: 42.80
---
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("wav2vec2-large-xlsr-53-mongolian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "mn", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 42.80 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found ???
|
HooshvareLab/distilbert-fa-zwnj-base-ner
|
HooshvareLab
| 2021-03-21T14:32:29Z | 130 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"distilbert",
"token-classification",
"fa",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
language: fa
---
# DistilbertNER
This model is fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/), covering ten types of entities:
- Date (DAT)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Money (MON)
- Organization (ORG)
- Percent (PCT)
- Person (PER)
- Product (PRO)
- Time (TIM)
## Dataset Information
| | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM |
|:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 |
| Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 |
| Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 |
## Evaluation
The following tables summarize the scores obtained by model overall and per each class.
**Overall**
| Model | accuracy | precision | recall | f1 |
|:----------:|:--------:|:---------:|:--------:|:--------:|
| Distilbert | 0.994534 | 0.946326 | 0.95504 | 0.950663 |
**Per entities**
| | number | precision | recall | f1 |
|:---: |:------: |:---------: |:--------: |:--------: |
| DAT | 407 | 0.812048 | 0.828010 | 0.819951 |
| EVE | 256 | 0.955056 | 0.996094 | 0.975143 |
| FAC | 248 | 0.972549 | 1.000000 | 0.986083 |
| LOC | 2884 | 0.968403 | 0.967060 | 0.967731 |
| MON | 98 | 0.925532 | 0.887755 | 0.906250 |
| ORG | 3216 | 0.932095 | 0.951803 | 0.941846 |
| PCT | 94 | 0.936842 | 0.946809 | 0.941799 |
| PER | 2645 | 0.959818 | 0.957278 | 0.958546 |
| PRO | 318 | 0.963526 | 0.996855 | 0.979907 |
| TIM | 43 | 0.760870 | 0.813953 | 0.786517 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "HooshvareLab/distilbert-fa-zwnj-base-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند."
ner_results = nlp(example)
print(ner_results)
```
## Questions?
Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo.
|
sarnikowski/convbert-medium-small-da-cased
|
sarnikowski
| 2021-03-18T22:27:12Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"convbert",
"da",
"arxiv:2008.02496",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: da
license: cc-by-4.0
---
# Danish ConvBERT medium small (cased)
[ConvBERT](https://arxiv.org/abs/2008.02496) model pretrained on a custom Danish corpus (~17.5 GB).
For details regarding data sources and training procedure, along with benchmarks on downstream tasks, go to: https://github.com/sarnikowski/danish_transformers
## Usage
```python
from transformers import ConvBertTokenizer, ConvBertModel
tokenizer = ConvBertTokenizer.from_pretrained("sarnikowski/convbert-medium-small-da-cased")
model = ConvBertModel.from_pretrained("sarnikowski/convbert-medium-small-da-cased")
```
## Questions?
If you have any questions feel free to open an issue on the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to [email protected]
|
sebastian-hofstaetter/colbert-distilbert-margin_mse-T2-msmarco
|
sebastian-hofstaetter
| 2021-03-18T10:35:12Z | 61 | 14 |
transformers
|
[
"transformers",
"pytorch",
"ColBERT",
"dpr",
"dense-passage-retrieval",
"knowledge-distillation",
"en",
"dataset:ms_marco",
"arxiv:2004.12832",
"arxiv:2010.02666",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: "en"
tags:
- dpr
- dense-passage-retrieval
- knowledge-distillation
datasets:
- ms_marco
---
# Margin-MSE Trained ColBERT
We provide a retrieval trained DistilBert-based ColBERT model (https://arxiv.org/pdf/2004.12832.pdf). Our model is trained with Margin-MSE using a 3 teacher BERT_Cat (concatenated BERT scoring) ensemble on MSMARCO-Passage.
This instance can be used to **re-rank a candidate set** or **directly for a vector index based dense retrieval**. The architecture is a 6-layer DistilBERT, with an additional single linear layer at the end.
If you want to know more about our simple, yet effective knowledge distillation method for efficient information retrieval models for a variety of student architectures that is used for this model instance check out our paper: https://arxiv.org/abs/2010.02666 🎉
For more information, training data, source code, and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/neural-ranking-kd
## Configuration
- fp16 trained, so fp16 inference shouldn't be a problem
- We use no compression: 768 dim output vectors (better suited for re-ranking, or storage for smaller collections, MSMARCO gets to ~1TB vector storage with fp16 ... ups)
- Query [MASK] augmentation = 8x regardless of batch-size (needs to be added before the model; a small sketch of this step follows below, and the usage example in the GitHub repo shows the exact code)
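Below is a small sketch of the query augmentation step (illustrative only; the exact placement and tokenization details are in the GitHub repo's usage example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def augment_query(query: str, n_masks: int = 8) -> dict:
    # append 8 [MASK] tokens to the query text before it reaches the model
    augmented = query + " " + " ".join([tokenizer.mask_token] * n_masks)
    return tokenizer(augmented, return_tensors="pt")

print(augment_query("what is dense retrieval")["input_ids"].shape)
```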
## Model Code
````python
from transformers import AutoTokenizer,AutoModel, PreTrainedModel,PretrainedConfig
from typing import Dict
import torch
class ColBERTConfig(PretrainedConfig):
model_type = "ColBERT"
bert_model: str
compression_dim: int = 768
dropout: float = 0.0
return_vecs: bool = False
trainable: bool = True
class ColBERT(PreTrainedModel):
"""
ColBERT model from: https://arxiv.org/pdf/2004.12832.pdf
We use a dot-product instead of cosine per term (slightly better)
"""
config_class = ColBERTConfig
base_model_prefix = "bert_model"
def __init__(self,
cfg) -> None:
super().__init__(cfg)
self.bert_model = AutoModel.from_pretrained(cfg.bert_model)
for p in self.bert_model.parameters():
p.requires_grad = cfg.trainable
self.compressor = torch.nn.Linear(self.bert_model.config.hidden_size, cfg.compression_dim)
def forward(self,
query: Dict[str, torch.LongTensor],
document: Dict[str, torch.LongTensor]):
query_vecs = self.forward_representation(query)
document_vecs = self.forward_representation(document)
score = self.forward_aggregation(query_vecs,document_vecs,query["attention_mask"],document["attention_mask"])
return score
def forward_representation(self,
tokens,
sequence_type=None) -> torch.Tensor:
vecs = self.bert_model(**tokens)[0] # assuming a distilbert model here
vecs = self.compressor(vecs)
# if encoding only, zero-out the mask values so we can compress storage
if sequence_type == "doc_encode" or sequence_type == "query_encode":
vecs = vecs * tokens["tokens"]["mask"].unsqueeze(-1)
return vecs
def forward_aggregation(self,query_vecs, document_vecs,query_mask,document_mask):
# create initial term-x-term scores (dot-product)
score = torch.bmm(query_vecs, document_vecs.transpose(2,1))
# mask out padding on the doc dimension (mask by -1000, because max should not select those, setting it to 0 might select them)
exp_mask = document_mask.bool().unsqueeze(1).expand(-1,score.shape[1],-1)
score[~exp_mask] = - 10000
# max pooling over document dimension
score = score.max(-1).values
# mask out paddding query values
score[~(query_mask.bool())] = 0
# sum over query values
score = score.sum(-1)
return score
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") # honestly not sure if that is the best way to go, but it works :)
model = ColBERT.from_pretrained("sebastian-hofstaetter/colbert-distilbert-margin_mse-T2-msmarco")
````
## Effectiveness on MSMARCO Passage & TREC Deep Learning '19
We trained our model on the MSMARCO standard ("small"-400K query) training triples with knowledge distillation with a batch size of 32 on a single consumer-grade GPU (11GB memory).
For re-ranking we used the top-1000 BM25 results.
### MSMARCO-DEV
Here, we use the larger 49K query DEV set (same range as the smaller 7K DEV set, minimal changes possible)
| | MRR@10 | NDCG@10 |
|----------------------------------|--------|---------|
| BM25 | .194 | .241 |
| **Margin-MSE ColBERT** (Re-ranking) | .375 | .436 |
### TREC-DL'19
For MRR we use the recommended binarization point of the graded relevance of 2. This might skew the results when compared to other binarization point numbers.
| | MRR@10 | NDCG@10 |
|----------------------------------|--------|---------|
| BM25 | .689 | .501 |
| **Margin-MSE ColBERT** (Re-ranking) | .878 | .744 |
For more metrics, baselines, info and analysis, please see the paper: https://arxiv.org/abs/2010.02666
## Limitations & Bias
- The model inherits social biases from both DistilBERT and MSMARCO.
- The model is only trained on relatively short passages of MSMARCO (avg. 60 words length), so it might struggle with longer text.
## Citation
If you use our model checkpoint please cite our work as:
```
@misc{hofstaetter2020_crossarchitecture_kd,
title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
author={Sebastian Hofst{\"a}tter and Sophia Althammer and Michael Schr{\"o}der and Mete Sertkan and Allan Hanbury},
year={2020},
eprint={2010.02666},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
|
acul3/xlsr_indonesia
|
acul3
| 2021-03-18T09:53:35Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"audio",
"xlsr-fine-tuning-week",
"id",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: id
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
- xlsr-fine-tuning-week
license: apache-2.0
---
## Evaluation on Common Voice ID Test
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "munggok/xlsr_indonesia"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "id", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Result**: 25.7 %
|
liatwilight/sbert-ecom
|
liatwilight
| 2021-03-17T08:26:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
This is a model for e-commerce (ecom) representation.
|
sebastian-hofstaetter/distilbert-dot-margin_mse-T2-msmarco
|
sebastian-hofstaetter
| 2021-03-16T17:03:58Z | 42 | 2 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"dpr",
"dense-passage-retrieval",
"knowledge-distillation",
"en",
"dataset:ms_marco",
"arxiv:2010.02666",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: "en"
tags:
- dpr
- dense-passage-retrieval
- knowledge-distillation
datasets:
- ms_marco
---
# Margin-MSE Trained DistilBert for Dense Passage Retrieval
We provide a retrieval trained DistilBert-based model (we call the architecture BERT_Dot). Our model is trained with Margin-MSE using a 3 teacher BERT_Cat (concatenated BERT scoring) ensemble on MSMARCO-Passage.
This instance can be used to **re-rank a candidate set** or **directly for a vector index based dense retrieval**. The architecture is a 6-layer DistilBERT, without architecture additions or modifications (we only change the weights during training) - to receive a query/passage representation we pool the CLS vector. We use the same BERT layers for both query and passage encoding (yields better results, and lowers memory requirements).
If you want to know more about our simple, yet effective knowledge distillation method for efficient information retrieval models for a variety of student architectures that is used for this model instance check out our paper: https://arxiv.org/abs/2010.02666 🎉
For more information, training data, source code, and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/neural-ranking-kd
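For illustration, here is a minimal sketch of the dual-encoder usage described above (assuming the checkpoint loads as a plain DistilBERT encoder; the official example is in the GitHub repo):

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "sebastian-hofstaetter/distilbert-dot-margin_mse-T2-msmarco"
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)    # same encoder for queries and passages

def cls_embed(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state
    return hidden[:, 0]                            # pool the CLS vector

query_vec = cls_embed("what causes the northern lights")
passage_vec = cls_embed("The aurora is produced when charged solar particles hit the upper atmosphere.")
print((query_vec * passage_vec).sum(-1).item())    # dot-product relevance score
```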
## Effectiveness on MSMARCO Passage & TREC-DL'19
We trained our model on the MSMARCO standard ("small"-400K query) training triples with knowledge distillation with a batch size of 32 on a single consumer-grade GPU (11GB memory).
For re-ranking we used the top-1000 BM25 results.
### MSMARCO-DEV
| | MRR@10 | NDCG@10 | Recall@1K |
|----------------------------------|--------|---------|-----------------------------|
| BM25 | .194 | .241 | .868 |
| **Margin-MSE BERT_Dot** (Re-ranking) | .332 | .391 | .868 (from BM25 candidates) |
| **Margin-MSE BERT_Dot** (Retrieval) | .323 | .381 | .957 |
### TREC-DL'19
For MRR and Recall we use the recommended binarization point of the graded relevance of 2. This might skew the results when compared to other binarization point numbers.
| | MRR@10 | NDCG@10 | Recall@1K |
|----------------------------------|--------|---------|-----------------------------|
| BM25 | .689 | .501 | .739 |
| **Margin-MSE BERT_Dot** (Re-ranking) | .862 | .712 | .739 (from BM25 candidates) |
| **Margin-MSE BERT_Dot** (Retrieval) | .868 | .697 | .769 |
For more baselines, info and analysis, please see the paper: https://arxiv.org/abs/2010.02666
## Limitations & Bias
- The model inherits social biases from both DistilBERT and MSMARCO.
- The model is only trained on relatively short passages of MSMARCO (avg. 60 words length), so it might struggle with longer text.
## Citation
If you use our model checkpoint please cite our work as:
```
@misc{hofstaetter2020_crossarchitecture_kd,
title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
author={Sebastian Hofst{\"a}tter and Sophia Althammer and Michael Schr{\"o}der and Mete Sertkan and Allan Hanbury},
year={2020},
eprint={2010.02666},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
|
HooshvareLab/distilbert-fa-zwnj-base
|
HooshvareLab
| 2021-03-16T16:30:29Z | 322 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"distilbert",
"fill-mask",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: fa
license: apache-2.0
---
# DistilBERT
This model can tackle the zero-width non-joiner character in Persian writing. Also, the model was trained on new multi-type corpora with a new vocabulary.
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
|
adzcodez/TokenClassificationTest
|
adzcodez
| 2021-03-16T14:18:09Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
`distilbert-base-uncased` fine-tuned on the CoNLL-2003 dataset for NER.
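A minimal usage sketch (assuming the checkpoint ships a standard token-classification head and tokenizer):

```python
from transformers import pipeline

ner = pipeline("ner",
               model="adzcodez/TokenClassificationTest",
               tokenizer="adzcodez/TokenClassificationTest")
print(ner("Hugging Face is based in New York City."))
```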
|
airesearch/xlm-roberta-base-finetuned
|
airesearch
| 2021-03-16T09:23:27Z | 12 | 0 |
transformers
|
[
"transformers",
"xlm-roberta",
"fill-mask",
"arxiv:1911.02116",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
# Finetuned `xlm-roberta-base` model on Thai sequence and token classification datasets
<br>
Finetuned XLM Roberta BASE model on Thai sequence and token classification datasets
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
We use the pretrained cross-lingual RoBERTa model as proposed by [[Conneau et al., 2020]](https://arxiv.org/abs/1911.02116). We download the pretrained PyTorch model via HuggingFace's Model Hub (https://huggingface.co/xlm-roberta-base)
<br>
## Intended uses & limitations
<br>
You can use the finetuned models for multiclass/multilabel text classification and token classification task.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The example notebook demonstrating how to use finetuned model for inference can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
facebook/rag-sequence-nq
|
facebook
| 2021-03-12T11:04:28Z | 24,970 | 41 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"rag",
"en",
"dataset:wiki_dpr",
"arxiv:2005.11401",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
license: apache-2.0
datasets:
- wiki_dpr
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
---
## RAG
This is the RAG-Sequence Model of the the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/pdf/2005.11401.pdf)
by Patrick Lewis, Ethan Perez, Aleksandara Piktus et al.
The model is an *uncased* model, which means that capital letters are simply converted to lower-case letters.
The model consists of a *question_encoder*, *retriever* and a *generator*. The retriever extracts relevant passages from the *wiki_dpr* `train` dataset, which is linked above.
The question_encoder and generator are based on `facebook/dpr-question_encoder-single-nq-base` and `facebook/bart-large`, which were jointly finetuned
on the *wiki_dpr* QA dataset in an end-to-end fashion.
## Usage:
**Note**: In the usage example below only the *dummy* retriever of *wiki_dpr* is used because the complete *legacy* index requires over 75 GB of RAM.
The model can generate answers to any factoid question as follows:
```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
input_dict = tokenizer.prepare_seq2seq_batch("how many countries are in europe", return_tensors="pt")
generated = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
# should give 54 => google says either 44 or 51
```
|
navteca/quora-roberta-large
|
navteca
| 2021-03-10T14:57:04Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:quora",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
datasets:
- quora
language: en
license: mit
pipeline_tag: text-classification
tags:
- roberta
- text-classification
---
# Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
This model uses [roberta-large](https://huggingface.co/roberta-large).
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset.
The model will predict a score between 0 and 1: How likely the two given questions are duplicates.
Note: The model is not suitable for estimating the similarity of questions; e.g., the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates.
## Usage and Performance
The trained model can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
print(scores)
```
|
yjernite/bart_eli5
|
yjernite
| 2021-03-09T22:31:11Z | 359 | 11 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:eli5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
license: apache-2.0
datasets:
- eli5
---
## BART ELI5
Read the article at https://yjernite.github.io/lfqa.html and try the demo at https://huggingface.co/qa/
|
wptoux/albert-chinese-large-qa
|
wptoux
| 2021-03-09T07:48:40Z | 65 | 12 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"question-answering",
"Question Answering",
"zh",
"dataset:webqa",
"dataset:dureader",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
language:
- zh
tags:
- Question Answering
license: apache-2.0
datasets:
- webqa
- dureader
---
# albert-chinese-large-qa
ALBERT large QA model fine-tuned on the Baidu WebQA and Baidu DuReader datasets.
## Data source
+ baidu webqa 1.0
+ baidu dureader
## Training Method
We combined the two datasets and created a new dataset in SQuAD format, including 705139 samples for training and 69638 samples for validation.
We fine-tuned the model starting from the ALBERT Chinese large model.
## Hyperparams
+ learning_rate 1e-5
+ max_seq_length 512
+ max_query_length 50
+ max_answer_length 300
+ doc_stride 256
+ num_train_epochs 2
+ warmup_steps 1000
+ per_gpu_train_batch_size 8
+ gradient_accumulation_steps 3
+ n_gpu 2 (Nvidia Tesla P100)
## Usage
```
from transformers import AutoModelForQuestionAnswering, BertTokenizer
model = AutoModelForQuestionAnswering.from_pretrained('wptoux/albert-chinese-large-qa')
tokenizer = BertTokenizer.from_pretrained('wptoux/albert-chinese-large-qa')
```
***Important: use BertTokenizer***
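Building on the loading snippet above, here is a hedged inference sketch (the pipeline call is an illustration, not part of the original instructions):

```python
from transformers import AutoModelForQuestionAnswering, BertTokenizer, pipeline

model = AutoModelForQuestionAnswering.from_pretrained('wptoux/albert-chinese-large-qa')
tokenizer = BertTokenizer.from_pretrained('wptoux/albert-chinese-large-qa')
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

# "Which is the highest mountain in the world?" / "Mount Everest is the highest peak in the world."
result = qa(question="世界上最高的山峰是什么?", context="珠穆朗玛峰是世界上海拔最高的山峰。")
print(result["answer"], result["score"])
```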
## MoreInfo
Please visit https://github.com/wptoux/albert-chinese-large-webqa for details.
|
tennessejoyce/titlewave-t5-small
|
tennessejoyce
| 2021-03-09T04:03:11Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
# Titlewave: t5-small
This is one of two models used in the Titlewave project. See https://github.com/tennessejoyce/TitleWave for more information.
This model was fine-tuned on a dataset of Stack Overflow posts, with a ConditionalGeneration head that summarizes the body of a question in order to suggest a title.
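A minimal usage sketch (an assumption based on standard T5 usage; see the linked GitHub repo for the exact preprocessing and prompt format):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "tennessejoyce/titlewave-t5-small"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

body = ("I keep getting a KeyError when reading a CSV with pandas, "
        "even though the column is present in the file header.")
inputs = tokenizer(body, return_tensors="pt", truncation=True)
title_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(title_ids[0], skip_special_tokens=True))
```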
|
Jade/bert_base_law
|
Jade
| 2021-03-08T06:59:50Z | 0 | 0 | null |
[
"NLP",
"LAW",
"dataset:WIP",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
language: "zh_CN"
thumbnail: "url to a thumbnail used in social sharing"
tags:
- NLP
- LAW
license: "MIT"
datasets:
- WIP
metrics:
- WIP
---
|
Darkrider/covidbert_mednli
|
Darkrider
| 2021-03-07T15:20:12Z | 4 | 0 |
transformers
|
[
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z |
# CovidBERT-MedNLI
This is the model **CovidBERT** trained by DeepSet on AllenAI's [CORD19 Dataset](https://pages.semanticscholar.org/coronavirus-research) of scientific articles about coronaviruses.
The model uses the original BERT wordpiece vocabulary and was subsequently fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [1] using the **average pooling strategy** and a **softmax loss**.
It is further fine-tuned on both MedNLI datasets available at Physionet.
[ACL-BIONLP 2019](https://physionet.org/content/mednli-bionlp19/1.0.1/)
[MedNLI from MIMIC](https://physionet.org/content/mednli/1.0.0/)
Parameter details for the original training on CORD-19 are available on [DeepSet's MLFlow](https://public-mlflow.deepset.ai/#/experiments/2/runs/ba27d00c30044ef6a33b1d307b4a6cba)
**Base model**: `deepset/covid_bert_base` from HuggingFace's `AutoModel`.
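A minimal encoding sketch with the `sentence-transformers` library, matching the average-pooling setup described above (illustrative only; it assumes the Hub checkpoint holds the plain transformer weights, so the pooling module is constructed explicitly):

```python
from sentence_transformers import SentenceTransformer, models

word_embedding_model = models.Transformer("Darkrider/covidbert_mednli")
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
                               pooling_mode_mean_tokens=True)  # average pooling, as described above
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

embeddings = model.encode(["The patient shows signs of pneumonia.",
                           "No evidence of acute infection."])
print(embeddings.shape)
```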
|
rajendra-ml/mar_GPT2
|
rajendra-ml
| 2021-03-06T09:35:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
GPT-2 model for the Marathi language.
heads=12 layers=6.
This is a somewhat smaller version, since I trained it on my laptop with a smaller GPU.
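A minimal generation sketch (an assumption, not part of the original card; it presumes the checkpoint includes a standard GPT-2 tokenizer):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="rajendra-ml/mar_GPT2")
print(generator("माझे नाव", max_length=30, num_return_sequences=1))  # "My name ..."
```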
|
hfl/chinese-xlnet-base
|
hfl
| 2021-03-03T01:44:59Z | 330 | 30 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"xlnet",
"text-generation",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- zh
license: "apache-2.0"
---
## Chinese Pre-Trained XLNet
This project provides an XLNet pre-trained model for Chinese, which aims to enrich Chinese natural language processing resources and offer a wider selection of Chinese pre-trained models.
We welcome all experts and scholars to download and use this model.
This project is based on CMU/Google official XLNet: https://github.com/zihangdai/xlnet
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
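A minimal loading sketch (added for convenience, not part of the original card; it assumes the standard XLNet classes and a SentencePiece vocabulary shipped with the checkpoint):

```python
from transformers import XLNetTokenizer, XLNetModel

tokenizer = XLNetTokenizer.from_pretrained("hfl/chinese-xlnet-base")
model = XLNetModel.from_pretrained("hfl/chinese-xlnet-base")

inputs = tokenizer("哈工大讯飞联合实验室发布中文预训练模型。", return_tensors="pt")  # "HFL releases Chinese pre-trained models."
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```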
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-large-discriminator
|
hfl
| 2021-03-03T01:42:48Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- zh
license: "apache-2.0"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
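A short loading sketch of the two classes mentioned above (illustrative only; the model names are the ones from this card family):

```python
from transformers import ElectraForPreTraining, ElectraForMaskedLM, ElectraTokenizer

# discriminator checkpoints (like this one) load with ElectraForPreTraining
tokenizer = ElectraTokenizer.from_pretrained("hfl/chinese-electra-large-discriminator")
discriminator = ElectraForPreTraining.from_pretrained("hfl/chinese-electra-large-discriminator")

# generator checkpoints load with ElectraForMaskedLM
generator = ElectraForMaskedLM.from_pretrained("hfl/chinese-electra-large-generator")
```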
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-large-generator
|
hfl
| 2021-03-03T01:40:52Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"fill-mask",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-base-discriminator
|
hfl
| 2021-03-03T01:40:07Z | 245 | 9 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- zh
license: "apache-2.0"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-base-generator
|
hfl
| 2021-03-03T01:39:38Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"fill-mask",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-small-ex-generator
|
hfl
| 2021-03-03T01:39:16Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"fill-mask",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-small-discriminator
|
hfl
| 2021-03-03T01:39:00Z | 82 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- zh
license: "apache-2.0"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-180g-large-discriminator
|
hfl
| 2021-03-03T01:29:12Z | 214 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- zh
license: "apache-2.0"
---
# This model is trained on 180G of data; we recommend using it instead of the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.
For further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also interested in,
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-180g-large-generator
|
hfl
| 2021-03-03T01:27:24Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"fill-mask",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
# This model is trained on 180G of data; we recommend using it over the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
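Since this checkpoint is the generator, a quick fill-mask sketch (assuming the standard `transformers` pipeline API; the sentence is arbitrary) could look like:

```python
from transformers import pipeline

# The generator is a masked LM (ElectraForMaskedLM), so the fill-mask pipeline applies
fill_mask = pipeline(
    "fill-mask",
    model="hfl/chinese-electra-180g-large-generator",
    tokenizer="hfl/chinese-electra-180g-large-generator",
)
print(fill_mask("哈尔滨是[MASK]龙江的省会。"))
```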
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-180g-base-discriminator
|
hfl
| 2021-03-03T01:26:14Z | 1,185 | 11 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- zh
license: "apache-2.0"
---
# This model is trained on 180G of data; we recommend using it over the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/chinese-electra-180g-small-ex-discriminator
|
hfl
| 2021-03-03T01:25:29Z | 4,609 | 7 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- zh
license: "apache-2.0"
---
# This model is trained on 180G of data; we recommend using it over the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
flair/ner-dutch
|
flair
| 2021-03-02T22:03:57Z | 316 | 3 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"nl",
"dataset:conll2003",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: nl
datasets:
- conll2003
widget:
- text: "George Washington ging naar Washington."
---
# Dutch NER in Flair (default model)
This is the standard 4-class NER model for Dutch that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **92.58** (CoNLL-03)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on Transformer embeddings and LSTM-CRF.
---
# Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-dutch")
# make example sentence
sentence = Sentence("George Washington ging naar Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (0.997)]
Span [5]: "Washington" [− Labels: LOC (0.9996)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging naar Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import CONLL_03_DUTCH
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer
# 1. get the corpus
corpus: Corpus = CONLL_03_DUTCH()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize embeddings
embeddings = TransformerWordEmbeddings('wietsedv/bert-base-dutch-cased')
# 5. initialize sequence tagger
tagger: SequenceTagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
trainer: ModelTrainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/ner-dutch',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik-etal-2019-flair,
title = "{FLAIR}: An Easy-to-Use Framework for State-of-the-Art {NLP}",
author = "Akbik, Alan and
Bergmann, Tanja and
Blythe, Duncan and
Rasul, Kashif and
Schweter, Stefan and
Vollgraf, Roland",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics (Demonstrations)",
year = "2019",
url = "https://www.aclweb.org/anthology/N19-4010",
pages = "54--59",
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
sarnikowski/convbert-small-da-cased
|
sarnikowski
| 2021-03-01T22:15:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"convbert",
"da",
"arxiv:2008.02496",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: da
license: cc-by-4.0
---
# Danish ConvBERT small (cased)
[ConvBERT](https://arxiv.org/abs/2008.02496) model pretrained on a custom Danish corpus (~17.5 GB).
For details regarding data sources and training procedure, along with benchmarks on downstream tasks, go to: https://github.com/sarnikowski/danish_transformers
## Usage
```python
from transformers import ConvBertTokenizer, ConvBertModel
tokenizer = ConvBertTokenizer.from_pretrained("sarnikowski/convbert-small-da-cased")
model = ConvBertModel.from_pretrained("sarnikowski/convbert-small-da-cased")
```
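A forward pass then follows the usual `transformers` pattern, e.g. to obtain contextual embeddings (a sketch; the Danish sentence is arbitrary):

```python
import torch
from transformers import ConvBertTokenizer, ConvBertModel

tokenizer = ConvBertTokenizer.from_pretrained("sarnikowski/convbert-small-da-cased")
model = ConvBertModel.from_pretrained("sarnikowski/convbert-small-da-cased")

# Encode a sentence and take the last hidden states as contextual embeddings
inputs = tokenizer("Der bor mange mennesker i København.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```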
## Questions?
If you have any questions feel free to open an issue on the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to [email protected]
|
nsi319/legal-led-base-16384
|
nsi319
| 2021-03-01T12:33:48Z | 298 | 13 |
transformers
|
[
"transformers",
"pytorch",
"led",
"text2text-generation",
"summarization",
"en",
"license:mit",
"autotrain_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language: en
tags: summarization
metrics:
- rouge
- precision
inference: false
license: mit
---
## LED for legal summarization of documents
This is a Longformer Encoder Decoder ([led-base-16384](https://huggingface.co/allenai/led-base-16384)) model for the **legal domain**, trained for the **long document abstractive summarization** task. The length of the document can be up to 16,384 tokens.
## Training data
The **legal-led-base-16384** model was trained on the [sec-litigation-releases](https://www.sec.gov/litigation/litreleases.htm) dataset, consisting of more than 2,700 litigation releases and complaints.
## How to use
```Python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("nsi319/legal-led-base-16384")
model = AutoModelForSeq2SeqLM.from_pretrained("nsi319/legal-led-base-16384")
padding = "max_length"
text="""On March 2, 2018, the Securities and Exchange Commission announced securities fraud charges against a U.K.-based broker-dealer and its investment manager in connection with manipulative trading in the securities of HD View 360 Inc., a U.S.-based microcap issuer. The SEC also announced charges against HD View's CEO, another individual, and three entities they control for manipulating HD View's securities as well as the securities of another microcap issuer, West Coast Ventures Group Corp. The SEC further announced the institution of an order suspending trading in the securities of HD View.These charges arise in part from an undercover operation by the Federal Bureau of Investigation, which also resulted in related criminal prosecutions against these defendants by the Office of the United States Attorney for the Eastern District of New York.In a complaint filed in the U.S. District Court for the Eastern District of New York, the SEC alleges that Beaufort Securities Ltd. and Peter Kyriacou, an investment manager at Beaufort, manipulated the market for HD View's common stock. The scheme involved an undercover FBI agent who described his business as manipulating U.S. stocks through pump-and-dump schemes. Kyriacou and the agent discussed depositing large blocks of microcap stock in Beaufort accounts, driving up the price of the stock through promotions, manipulating the stock's price and volume through matched trades, and then selling the shares for a large profit.The SEC's complaint against Beaufort and Kyriacou alleges that they:opened brokerage accounts for the undercover agent in the names of nominees in order to conceal his identity and his connection to the anticipated trading activity in the accounts suggested that the undercover agent could create the false appearance that HD View's stock was liquid in advance of a pump-and-dump by "gam[ing] the market" through matched trades executed multiple purchase orders of HD View shares with the understanding that Beaufort's client had arranged for an associate to simultaneously offer an equivalent number of shares at the same priceA second complaint filed by the SEC in the U.S. District Court for the Eastern District of New York alleges that in a series of recorded telephone conversations with the undercover agent, HD View CEO Dennis Mancino and William T. Hirschy agreed to manipulate HD View's common stock by using the agent's network of brokers to generate fraudulent retail demand for the stock in exchange for a kickback from the trading proceeds. According to the complaint, the three men agreed that Mancino and Hirschy would manipulate HD View stock to a higher price before using the agent's brokers to liquidate their positions at an artificially inflated price. The SEC's complaint also alleges that Mancino and Hirschy executed a "test trade" on Jan. 31, 2018, coordinated by the agent, consisting of a sell order placed by the defendants filled by an opposing purchase order placed by a broker into an account at Beaufort. Unbeknownst to Mancino and Hirschy, the Beaufort account used for this trade was a nominal account that was opened and funded by the agent. 
The SEC's complaint also alleges that, prior to their contact with the undercover agent, Mancino and Hirschy manipulated the market for HD View and for West Coast by using brokerage accounts that they owned, controlled, or were associated with –including TJM Investments Inc., DJK Investments 10 Inc., WT Consulting Group LLC – to effect manipulative "matched trades."The SEC's complaint against Beaufort and Kyriacou charges the defendants with violating Section 10(b) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder. The SEC also charged Hirschy, Mancino, and their corporate entities with violating Section 17(a)(1) of the Securities Act of 1933, Sections 9(a)(1), 9(a)(2), and 10(b) of the Exchange Act and Rules 10b-5(a) and (c) thereunder. The SEC is seeking injunctions, disgorgement, prejudgment interest, penalties, and penny stock bars from Beaufort and Kyriacou. With respect to Hirschy, Mancino, and their corporate entities, the SEC is seeking injunctions, disgorgement, prejudgment interest, penalties, penny stock bars, and an officer-and-director bar against Mancino.The investigation was conducted in the SEC's New York Regional Office by Tejal Shah and Joseph Darragh, Lorraine Collazo, and Michael D. Paley of the Microcap Fraud Task Force and supervised by Lara S. Mehraban, and in Washington, D.C. by Patrick L. Feeney, Robert Nesbitt, and Kevin Guerrero, and supervised by Antonia Chion. Preethi Krishnamurthy and Ms. Shah will lead the SEC's litigation against Beaufort and Kyriacou. Ann H. Petalas and Mr. Feeney, under the supervision of Cheryl Crumpton, will handle the SEC's litigation against Mancino, Hirschy, and their entities. The SEC appreciates the assistance of the Office of the United States Attorney for the Eastern District of New York, the Federal Bureau of Investigation, the Internal Revenue Service, the Alberta Securities Commission, the Ontario Securities Commission, the Financial Conduct Authority of the United Kingdom, and the Financial Industry Regulatory Authority.The Commission's investigation in this matter is continuing."""
input_tokenized = tokenizer.encode(text, return_tensors='pt',padding=padding,pad_to_max_length=True, max_length=6144,truncation=True)
summary_ids = model.generate(input_tokenized,
num_beams=4,
no_repeat_ngram_size=3,
length_penalty=2,
min_length=350,
max_length=500)
summary = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids][0]
### Summary Output
# On March 2, 2018, the Securities and Exchange Commission charged Beaufort Securities Ltd. and Peter Kyriacou, an investment manager at Beaufort, with manipulating the market for HD View 360 Inc., a U.S.-based microcap issuer. The SEC also announced charges against HD View's CEO, another individual, and three entities they control for manipulating HD View through pump-and-dump schemes. According to the SEC's complaint, the defendants discussed depositing large blocks of microcap stock in Beaufort accounts, driving up the price of the stock through promotions, manipulating the stock's price and volume through matched trades, and then selling the shares for a large profit. In a parallel action, the United States Attorney's Office for the Eastern District of New York announced criminal charges against the defendants. On March 4, the SEC announced the entry of an order suspending trading in the securities of HD View and for West Coast, pending the outcome of a parallel criminal action by the Federal Bureau of Investigation. Following the announcement of the suspension, HD View stock prices and volume increased significantly, and the defendants agreed to pay over $1.5 million in disgorgement, prejudgment interest, penalties, and an officer and director bar. Beaufort agreed to settle the charges without admitting or denying the allegations of the complaint, and to pay a $1 million civil penalty. The SEC's investigation, which is continuing, has been conducted by Patrick McCluskey and Cheryl Crumpton of the SEC Enforcement Division's Market Abuse Unit in the New York Regional Office. The SEC appreciates the assistance of the Financial Industry Regulatory Authority of the United Kingdom, the Canadian Securities Commission, the Alberta Securities Commission and the Ontario Securities Commission.
```
## Evaluation results
When the model is used for summarizing legal documents, it achieves the following results:
| Model | rouge1 | rouge1-precision | rouge2 | rouge2-precision | rougeL | rougeL-precision |
|:-----------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:|
| legal-led-base-16384 | **55.69** | **61.73** | **29.03** | **36.68** | **32.65** | **40.43** |
| led-base-16384 | 29.19 | 30.43 | 15.23 | 16.27 | 16.32 | 16.58 |
|
dbmdz/flair-historic-ner-onb
|
dbmdz
| 2021-02-26T15:41:21Z | 27 | 3 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"license:mit",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
widget:
- text: "April Martin Ansclm, K. Gefangen-Auffehers Georg Sausgruber."
license: mit
---
# Towards Robust Named Entity Recognition for Historic German
Based on [our paper](https://www.aclweb.org/anthology/W19-4312/)
we release a new model trained on the ONB dataset.
**Note:** We use BPEmbeddings instead of the combination of
Wikipedia, Common Crawl and character embeddings (as used in the paper),
to save space and training/inference time.
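A minimal usage sketch with Flair (assuming the checkpoint resolves via `SequenceTagger.load` under the id `dbmdz/flair-historic-ner-onb`; the sentence is the widget example above):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the historic German NER tagger
tagger = SequenceTagger.load("dbmdz/flair-historic-ner-onb")

# tag the widget example sentence
sentence = Sentence("April Martin Ansclm, K. Gefangen-Auffehers Georg Sausgruber.")
tagger.predict(sentence)

# print detected entity spans
for entity in sentence.get_spans("ner"):
    print(entity)
```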
# Results
| Dataset \ Run | Run 1 | Run 2 | Run 3     | Avg.  |
| ------------- | ----- | ----- | --------- | ----- |
| Development   | 86.69 | 86.13 | **87.18** | 86.67 |
| Test          | 85.27 | 86.05 | 85.75†    | 85.69 |
The paper reported an averaged F1-score of 85.31.
† denotes that this model is selected for upload.
|
valhalla/s2t_librispeech_medium
|
valhalla
| 2021-02-26T14:24:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speech_to_text_transformer",
"text2text-generation",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---
TODO: [To be filled]
## Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test datasets.
```python
from datasets import load_dataset
from transformers import Speech2TextTransformerForConditionalGeneration, Speech2TextTransformerTokenizer
import soundfile as sf
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
model = Speech2TextTransformerForConditionalGeneration.from_pretrained("valhalla/s2t_librispeech_medium").to("cuda")
tokenizer = Speech2TextTransformerTokenizer.from_pretrained("valhalla/s2t_librispeech_medium", do_upper_case=True)
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
features = tokenizer(batch["speech"], sample_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
batch["transcription"] = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.5 | 7.8 |
|
valhalla/s2t_librispeech_small
|
valhalla
| 2021-02-26T14:24:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speech_to_text_transformer",
"text2text-generation",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---
TODO: [To be filled]
## Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test datasets.
```python
from datasets import load_dataset
from transformers import Speech2TextTransformerForConditionalGeneration, Speech2TextTransformerTokenizer
import soundfile as sf
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
model = Speech2TextTransformerForConditionalGeneration.from_pretrained("valhalla/s2t_librispeech_small").to("cuda")
tokenizer = Speech2TextTransformerTokenizer.from_pretrained("valhalla/s2t_librispeech_small", do_upper_case=True)
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
features = tokenizer(batch["speech"], sample_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
batch["transcription"] = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 4.3 | 9.0 |
|
sismetanin/xlm_roberta_base-ru-sentiment-rureviews
|
sismetanin
| 2021-02-25T23:51:22Z | 23 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"sentiment analysis",
"Russian",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## XLM-RoBERTa-Base-ru-sentiment-RuReviews
XLM-RoBERTa-Base-ru-sentiment-RuReviews is a [XLM-RoBERTa-Base](https://huggingface.co/xlm-roberta-base) model fine-tuned on [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the ”Women’s Clothes and Accessories” product category on the primary e-commerce site in Russia.
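A short usage sketch (assuming the standard `transformers` text-classification pipeline; the Russian review text is illustrative, and the label names depend on the model config):

```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="sismetanin/xlm_roberta_base-ru-sentiment-rureviews",
    tokenizer="sismetanin/xlm_roberta_base-ru-sentiment-rureviews",
)
# Illustrative review-style input ("Great quality, fast delivery!")
print(classifier("Отличное качество, быстрая доставка!"))
```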
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
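For illustration, the overall score can be reproduced roughly as follows (a sketch covering only three of the tasks, with values copied from the XLM-RoBERTa-Base row of the table above):

```python
# Average the metrics within each task, then take an unweighted mean over tasks.
# Values below are copied from the XLM-RoBERTa-Base row (subset of tasks only).
task_metrics = {
    "SentiRuEval-2016 TC": [76.35, 69.37, 73.42],  # micro F1, macro F1, F1
    "RuSentiment": [74.26, 70.44],                 # weighted F1, F1
    "RuReviews": [78.28],                          # F1
}
per_task = {task: sum(vals) / len(vals) for task, vals in task_metrics.items()}
overall = sum(per_task.values()) / len(per_task)
print({t: round(s, 2) for t, s in per_task.items()}, round(overall, 2))
```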
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@INPROCEEDINGS{Smetanin2019Sentiment,
author={Sergey Smetanin and Michail Komarov},
booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)},
title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks},
year={2019},
volume={01},
pages={482-486},
doi={10.1109/CBI.2019.00062},
ISSN={2378-1963},
month={July}
}
```
|
voidful/dpr-ctx_encoder-bert-base-multilingual
|
voidful
| 2021-02-21T09:00:44Z | 34 | 6 |
transformers
|
[
"transformers",
"pytorch",
"dpr",
"multilingual",
"dataset:NQ",
"dataset:Trivia",
"dataset:SQuAD",
"dataset:MLQA",
"dataset:DRCD",
"arxiv:2004.04906",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: multilingual
datasets:
- NQ
- Trivia
- SQuAD
- MLQA
- DRCD
---
# dpr-ctx_encoder-bert-base-multilingual
## Description
Multilingual DPR model based on bert-base-multilingual-cased.
[DPR model](https://arxiv.org/abs/2004.04906)
[DPR repo](https://github.com/facebookresearch/DPR)
## Data
1. [NQ](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py)
2. [Trivia](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py)
3. [SQuAD](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py)
4. [DRCD*](https://github.com/DRCKnowledgeTeam/DRCD)
5. [MLQA*](https://github.com/facebookresearch/MLQA)
`question pairs for train`: 644,217
`question pairs for dev`: 73,710
*DRCD and MLQA are converted using the haystack script [squad_to_dpr.py](https://github.com/deepset-ai/haystack/blob/master/haystack/retriever/squad_to_dpr.py)
## Training Script
I use the script from [haystack](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial9_DPR_training.ipynb)
## Usage
```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
tokenizer = DPRContextEncoderTokenizer.from_pretrained('voidful/dpr-ctx_encoder-bert-base-multilingual')
model = DPRContextEncoder.from_pretrained('voidful/dpr-ctx_encoder-bert-base-multilingual')
input_ids = tokenizer("Hello, is my dog cute ?", return_tensors='pt')["input_ids"]
embeddings = model(input_ids).pooler_output
```
Follow the tutorial from `haystack`:
[Better Retrievers via "Dense Passage Retrieval"](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial6_Better_Retrieval_via_DPR.ipynb)
```python
from haystack.retriever.dense import DensePassageRetriever
retriever = DensePassageRetriever(document_store=document_store,
query_embedding_model="voidful/dpr-question_encoder-bert-base-multilingual",
passage_embedding_model="voidful/dpr-ctx_encoder-bert-base-multilingual",
max_seq_len_query=64,
max_seq_len_passage=256,
batch_size=16,
use_gpu=True,
embed_title=True,
use_fast_tokenizers=True)
```
|
superspray/distilbert_base_squad2_custom_dataset
|
superspray
| 2021-02-20T07:33:31Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
# Question & Answering Model for 'Save Your Minutes' from Dobby-AI
DistilBERT-Base fine-tuned on SQuAD2.0 and a custom QA dataset
This model is [twmkn9/distilbert-base-uncased-squad2] trained further on an additional custom dataset as follows:
```
!python3 run_squad.py --model_type distilbert \
--model_name_or_path /content/distilbert_base_384 \
--do_lower_case \
--output_dir /content/model/\
--do_train \
--train_file $data_dir/additional_qa.json\
--version_2_with_negative \
--do_lower_case \
--num_train_epochs 3 \
--weight_decay 0.01 \
--learning_rate 3e-5 \
--max_grad_norm 0.5 \
--adam_epsilon 1e-6 \
--max_seq_length 512 \
--doc_stride 128 \
--threads 12 \
--logging_steps 50 \
--save_steps 1000 \
--overwrite_output_dir \
--per_gpu_train_batch_size 4
```
We used Google Colab for training the model.
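After training, inference can be sketched with the standard `transformers` question-answering pipeline (the question and context below are illustrative):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="superspray/distilbert_base_squad2_custom_dataset",
    tokenizer="superspray/distilbert_base_squad2_custom_dataset",
)
result = qa(
    question="When is the meeting scheduled?",
    context="The weekly sync meeting is scheduled for Thursday at 10 AM in room 2B.",
)
print(result["answer"], result["score"])
```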
|
superspray/electra_large_discriminator_squad2_custom_dataset
|
superspray
| 2021-02-20T07:00:12Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
# Question & Answering Model for 'Save Your Minutes' from Dobby-AI
ELECTRA-Large discriminator fine-tuned on SQuAD2.0 and a custom QA dataset
This model is [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512/blob/main/README.md)
trained further on an additional custom dataset as follows:
```
!python3 run_squad.py --model_type electra \
--model_name_or_path /content/electra_large_512 \
--do_lower_case \
--output_dir /content/model/\
--do_train \
--train_file $data_dir/additional_qa.json\
--version_2_with_negative \
--do_lower_case \
--num_train_epochs 3 \
--weight_decay 0.01 \
--learning_rate 3e-5 \
--max_grad_norm 0.5 \
--adam_epsilon 1e-6 \
--max_seq_length 512 \
--doc_stride 128 \
--threads 12 \
--logging_steps 50 \
--save_steps 1000 \
--overwrite_output_dir \
--per_gpu_train_batch_size 4
```
We used Google Colab for training the model.
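As with the DistilBERT variant above, inference can be sketched with the `transformers` question-answering pipeline (question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="superspray/electra_large_discriminator_squad2_custom_dataset",
    tokenizer="superspray/electra_large_discriminator_squad2_custom_dataset",
)
print(qa(
    question="Who presented the quarterly report?",
    context="During the call, the CFO presented the quarterly report and answered questions.",
))
```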
|
flexudy/t5-small-wav2vec2-grammar-fixer
|
flexudy
| 2021-02-16T01:56:40Z | 131,235 | 12 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# flexudy-pipe-question-generation-v2
After transcribing your audio with Wav2Vec2, you might be interested in a post-processor.
All paragraphs had at most 128 tokens (separated by white spaces).
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = "flexudy/t5-small-wav2vec2-grammar-fixer"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
sent = """GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS"""
input_text = "fix: { " + sent + " } </s>"
input_ids = tokenizer.encode(input_text, return_tensors="pt", max_length=256, truncation=True, add_special_tokens=True)
outputs = model.generate(
input_ids=input_ids,
max_length=256,
num_beams=4,
repetition_penalty=1.0,
length_penalty=1.0,
early_stopping=True
)
sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"{sentence}")
```
INPUT 1:
```
WHEN ARE YOU COMING TOMORROW I AM ASKING BECAUSE OF THE MONEY YOU OWE ME PLEASE GIVE IT TO ME I AM WAITING YOU HAVE BEEN AVOIDING ME SINCE TWO THOUSAND AND THREE
```
OUTPUT 1:
```
When are you coming tomorrow? I am asking because of the money you owe me, please give it to me. I am waiting. You have been avoiding me since 2003.
```
INPUT 2:
```
GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS
```
OUTPUT 2:
```
Going along Slushy Country Roads and speaking to Damp audiences in Draughty School rooms day after day for a fortnight, he'll have to put in an appearance at some place of worship on Sunday morning and he can come to us immediately afterwards.
```
I strongly recommend improving the performance via further fine-tuning or by training on more examples.
- Possible quick rule-based improvement: align the transcribed version and the generated version. If two aligned words (compared case-insensitively) differ by more than some threshold under a similarity metric (e.g. Levenshtein), keep the transcribed word; a sketch follows below.
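A minimal sketch of that rule, using only Python's standard `difflib` as the similarity metric (the 0.5 threshold is an arbitrary assumption):

```python
from difflib import SequenceMatcher

def keep_transcribed_when_divergent(transcribed: str, generated: str, threshold: float = 0.5) -> str:
    """Align the two word sequences and fall back to the transcribed word
    wherever the generated word drifted too far from its aligned counterpart."""
    src, gen = transcribed.split(), generated.split()
    matcher = SequenceMatcher(None, [w.lower() for w in src], [w.lower() for w in gen])
    out = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ("equal", "insert"):
            out.extend(gen[j1:j2])  # identical words or words added by the fixer: keep the generated form
        elif tag == "replace":
            # compare aligned words one by one; keep the transcribed word if too dissimilar
            for k in range(max(i2 - i1, j2 - j1)):
                s = src[i1 + k] if i1 + k < i2 else ""
                g = gen[j1 + k] if j1 + k < j2 else ""
                sim = SequenceMatcher(None, s.lower(), g.lower()).ratio()
                out.append(g if sim >= threshold else (s or g))
        # tag == "delete": words the fixer dropped entirely are left out here
    return " ".join(w for w in out if w)

print(keep_transcribed_when_divergent(
    "WHEN ARE YOU COMING TOMORROW",
    "When are you going tomorrow?",
))
```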
|
tner/xlm-roberta-large-uncased-wnut2017
|
tner
| 2021-02-13T00:12:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-wnut2017")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-wnut2017")
```
|
tner/xlm-roberta-large-uncased-conll2003
|
tner
| 2021-02-13T00:11:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-conll2003")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-conll2003")
```
|
tner/xlm-roberta-large-uncased-bc5cdr
|
tner
| 2021-02-13T00:11:43Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-bc5cdr")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-bc5cdr")
```
|
tner/xlm-roberta-large-panx-dataset-ru
|
tner
| 2021-02-13T00:11:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ru")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ru")
```
|
tner/xlm-roberta-large-panx-dataset-ja
|
tner
| 2021-02-13T00:11:28Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ja")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ja")
```
|
asahi417/tner-xlm-roberta-large-bc5cdr
|
asahi417
| 2021-02-13T00:11:03Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-bc5cdr")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-bc5cdr")
```
|
tner/xlm-roberta-base-uncased-panx-dataset-en
|
tner
| 2021-02-13T00:10:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-panx-dataset-en")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-panx-dataset-en")
```
|
tner/xlm-roberta-base-panx-dataset-ru
|
tner
| 2021-02-13T00:08:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ru")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ru")
```
|
tner/xlm-roberta-base-conll2003
|
tner
| 2021-02-13T00:07:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-conll2003")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-conll2003")
```
|
tner/xlm-roberta-large-uncased-panx-dataset-en
|
tner
| 2021-02-13T00:06:19Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-panx-dataset-en")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-panx-dataset-en")
```
|
tner/xlm-roberta-large-uncased-mit-restaurant
|
tner
| 2021-02-13T00:06:06Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-mit-restaurant")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-mit-restaurant")
```
|
tner/xlm-roberta-large-uncased-fin
|
tner
| 2021-02-13T00:05:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-fin")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-fin")
```
|
tner/xlm-roberta-large-panx-dataset-ar
|
tner
| 2021-02-13T00:04:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ar")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ar")
```
|
tner/xlm-roberta-large-fin
|
tner
| 2021-02-13T00:04:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-fin")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-fin")
```
|
asahi417/tner-xlm-roberta-large-all-english
|
asahi417
| 2021-02-12T23:48:50Z | 6,359 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-all-english")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-all-english")
```
|
tner/xlm-roberta-base-uncased-bionlp2004
|
tner
| 2021-02-12T23:35:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-bionlp2004")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-bionlp2004")
```
|
tner/xlm-roberta-base-panx-dataset-ko
|
tner
| 2021-02-12T23:34:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ko")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ko")
```
|
tner/xlm-roberta-base-panx-dataset-es
|
tner
| 2021-02-12T23:34:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-es")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-es")
```
|
tner/xlm-roberta-base-panx-dataset-ar
|
tner
| 2021-02-12T23:34:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ar")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ar")
```
|
tner/xlm-roberta-base-fin
|
tner
| 2021-02-12T23:33:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-fin")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-fin")
```
|
tner/xlm-roberta-base-bionlp2004
|
tner
| 2021-02-12T23:32:10Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-bionlp2004")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-bionlp2004")
```
|
Musixmatch/umberto-commoncrawl-cased-v1
|
Musixmatch
| 2021-02-12T11:31:59Z | 16,559 | 14 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"it",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: it
---
# UmBERTo Commoncrawl Cased
[UmBERTo](https://github.com/musixmatchresearch/umberto) is a RoBERTa-based language model trained on large Italian corpora, using two innovative approaches: SentencePiece and Whole Word Masking. It is now available at [huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1).
<p align="center">
<img src="https://user-images.githubusercontent.com/7140210/72913702-d55a8480-3d3d-11ea-99fc-f2ef29af4e72.jpg" width="700"> </br>
Marco Lodola, Monument to Umberto Eco, Alessandria 2019
</p>
## Dataset
UmBERTo-Commoncrawl-Cased utilizes the Italian subcorpus of [OSCAR](https://traces1.inria.fr/oscar/) as the training set for the language model. We used the deduplicated version of the Italian corpus, which consists of 70 GB of plain text data (210M sentences, 11B words); the sentences were filtered and shuffled at line level in order to be used for NLP research.
## Pre-trained model
| Model | WWM | Cased | Tokenizer | Vocab Size | Train Steps | Download |
| ------ | ------ | ------ | ------ | ------ |------ | ------ |
| `umberto-commoncrawl-cased-v1` | YES | YES | SPM | 32K | 125k | [Link](http://bit.ly/35zO7GH) |
This model was trained with [SentencePiece](https://github.com/google/sentencepiece) and Whole Word Masking.
## Downstream Tasks
These results refer to the umberto-commoncrawl-cased model. All details are on the official [UmBERTo](https://github.com/musixmatchresearch/umberto) page.
#### Named Entity Recognition (NER)
| Dataset | F1 | Precision | Recall | Accuracy |
| ------ | ------ | ------ | ------ | ------ |
| **ICAB-EvalITA07** | **87.565** | 86.596 | 88.556 | 98.690 |
| **WikiNER-ITA** | **92.531** | 92.509 | 92.553 | 99.136 |
#### Part of Speech (POS)
| Dataset | F1 | Precision | Recall | Accuracy |
| ------ | ------ | ------ | ------ | ------ |
| **UD_Italian-ISDT** | 98.870 | 98.861 | 98.879 | **98.977** |
| **UD_Italian-ParTUT** | 98.786 | 98.812 | 98.760 | **98.903** |
## Usage
##### Load UmBERTo with AutoModel, Autotokenizer:
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1")
umberto = AutoModel.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1")
encoded_input = tokenizer.encode("Umberto Eco è stato un grande scrittore")
input_ids = torch.tensor(encoded_input).unsqueeze(0) # Batch size 1
outputs = umberto(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output
```
##### Predict masked token:
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="Musixmatch/umberto-commoncrawl-cased-v1",
tokenizer="Musixmatch/umberto-commoncrawl-cased-v1"
)
result = fill_mask("Umberto Eco è <mask> un grande scrittore")
# {'sequence': '<s> Umberto Eco è considerato un grande scrittore</s>', 'score': 0.18599839508533478, 'token': 5032}
# {'sequence': '<s> Umberto Eco è stato un grande scrittore</s>', 'score': 0.17816807329654694, 'token': 471}
# {'sequence': '<s> Umberto Eco è sicuramente un grande scrittore</s>', 'score': 0.16565583646297455, 'token': 2654}
# {'sequence': '<s> Umberto Eco è indubbiamente un grande scrittore</s>', 'score': 0.0932890921831131, 'token': 17908}
# {'sequence': '<s> Umberto Eco è certamente un grande scrittore</s>', 'score': 0.054701317101716995, 'token': 5269}
```
## Citation
All of the original datasets are publicly available or were released with the owners' consent. The datasets are all released under a CC0 or CC-BY license.
* UD Italian-ISDT Dataset [Github](https://github.com/UniversalDependencies/UD_Italian-ISDT)
* UD Italian-ParTUT Dataset [Github](https://github.com/UniversalDependencies/UD_Italian-ParTUT)
* I-CAB (Italian Content Annotation Bank), EvalITA [Page](http://www.evalita.it/)
* WIKINER [Page](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) , [Paper](https://www.sciencedirect.com/science/article/pii/S0004370212000276?via%3Dihub)
```
@inproceedings {magnini2006annotazione,
title = {Annotazione di contenuti concettuali in un corpus italiano: I - CAB},
author = {Magnini,Bernardo and Cappelli,Amedeo and Pianta,Emanuele and Speranza,Manuela and Bartalesi Lenzi,V and Sprugnoli,Rachele and Romano,Lorenza and Girardi,Christian and Negri,Matteo},
booktitle = {Proc.of SILFI 2006},
year = {2006}
}
@inproceedings {magnini2006cab,
title = {I - CAB: the Italian Content Annotation Bank.},
author = {Magnini,Bernardo and Pianta,Emanuele and Girardi,Christian and Negri,Matteo and Romano,Lorenza and Speranza,Manuela and Lenzi,Valentina Bartalesi and Sprugnoli,Rachele},
booktitle = {LREC},
pages = {963--968},
year = {2006},
organization = {Citeseer}
}
```
## Authors
**Loreto Parisi**: `loreto at musixmatch dot com`, [loretoparisi](https://github.com/loretoparisi)
**Simone Francia**: `simone.francia at musixmatch dot com`, [simonefrancia](https://github.com/simonefrancia)
**Paolo Magnani**: `paul.magnani95 at gmail dot com`, [paulthemagno](https://github.com/paulthemagno)
## About Musixmatch AI

We do Machine Learning and Artificial Intelligence @[musixmatch](https://twitter.com/Musixmatch)
Follow us on [Twitter](https://twitter.com/musixmatchai) [Github](https://github.com/musixmatchresearch)
|
microsoft/deberta-xxlarge-v2-mnli
|
microsoft
| 2021-02-11T02:05:00Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"deberta",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
tags: deberta
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
## This model is DEPRECATED, please use [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli)
|
microsoft/deberta-xlarge-v2
|
microsoft
| 2021-02-11T02:04:50Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"deberta",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
tags: deberta
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
## This model is DEPRECATED, please use [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)
|
microsoft/deberta-xlarge-v2-mnli
|
microsoft
| 2021-02-11T02:04:40Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"deberta",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
tags: deberta
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
## This model is DEPRECATED, please use [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli)
|
valhalla/longformer-base-4096-finetuned-squadv1
|
valhalla
| 2021-02-10T16:35:40Z | 513 | 22 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"rust",
"longformer",
"question-answering",
"dataset:squad_v1",
"arxiv:2004.05150",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
datasets:
- squad_v1
license: mit
---
# LONGFORMER-BASE-4096 fine-tuned on SQuAD v1
This is the longformer-base-4096 model fine-tuned on the SQuAD v1 dataset for the question-answering task.
The [Longformer](https://arxiv.org/abs/2004.05150) model was created by Iz Beltagy, Matthew E. Peters, and Arman Cohan from AllenAI. As the paper explains:
> `Longformer` is a BERT-like model for long documents.
The pre-trained model can handle sequences with up to 4,096 tokens.
## Model Training
This model was trained on a Google Colab V100 GPU. You can find the fine-tuning notebook here [](https://colab.research.google.com/drive/1zEl5D-DdkBKva-DdreVOmN0hrAfzKG1o?usp=sharing).
A few things to keep in mind while training Longformer for the QA task:
by default, Longformer uses sliding-window local attention on all tokens, but for QA all question tokens should have global attention (please refer to the paper for details). The `LongformerForQuestionAnswering` model automatically sets this for you. To allow it to do that:
1. The input sequence must have three sep tokens, i.e. the sequence should be encoded like this:
` <s> question</s></s> context</s>`. If you encode the question and context as an input pair, the tokenizer already takes care of that and you shouldn't worry about it (see the sketch right after this list).
2. `input_ids` should always be a batch of examples.
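For instance, here is a quick, purely illustrative check that encoding a question/context pair produces the three sep tokens described above (the exact whitespace in the decoded string may differ):
```python
from transformers import AutoTokenizer

# Illustrative sketch using this card's checkpoint.
tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
encoding = tokenizer("What has Huggingface done ?", "Huggingface has democratized NLP.")
print(tokenizer.decode(encoding["input_ids"]))
# expected (roughly): <s> What has Huggingface done ?</s></s> Huggingface has democratized NLP.</s>
```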
## Results
|Metric | # Value |
|-------------|---------|
| Exact Match | 85.1466 |
| F1 | 91.5415 |
## Model in Action 🚀
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done ?"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]
start_scores, end_scores = model(input_ids, attention_mask=attention_mask)
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores) :torch.argmax(end_scores)+1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => democratized NLP
```
The `LongformerForQuestionAnswering` model isn't yet supported by the `pipeline` API. I'll update this card once support has been added.
> Created with ❤️ by Suraj Patil [](https://github.com/patil-suraj/)
[](https://twitter.com/psuraj28)
|
byan/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp
|
byan
| 2021-02-09T04:09:12Z | 5 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech
license: cc-by-4.0
---
## Example ESPnet2 ASR model
### `Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best`
♻️ Imported from https://zenodo.org/record/3966501
This model was trained by Shinji Watanabe using the librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
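In the meantime, here is a hedged sketch of typical ESPnet2 ASR inference via `espnet_model_zoo`; whether the model name below resolves in the model zoo, and the audio file `speech.wav`, are assumptions:
```python
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

# Download and unpack the checkpoint, then build the inference wrapper.
d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack("Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best")
)

# Run recognition on a 16 kHz mono waveform.
speech, rate = soundfile.read("speech.wav")
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```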
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
dbernsohn/t5_numbers_gcd
|
dbernsohn
| 2021-02-08T06:52:18Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
# numbers_gcd
---
language: en
datasets:
- numbers_gcd
---
This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on [math_dataset/numbers_gcd](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetnumbers_gcd) for solving **greatest common divisor** problems.
To load the model (necessary packages: `!pip install transformers sentencepiece`):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/t5_numbers_gcd")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/t5_numbers_gcd")
```
You can then use this model to compute the greatest common divisor of two numbers.
```python
query = "What is the highest common factor of 4210884 and 72?"
input_text = f"{query} </s>"
features = tokenizer([input_text], return_tensors='pt')
model.to('cuda')
output = model.generate(input_ids=features['input_ids'].cuda(),
attention_mask=features['attention_mask'].cuda())
tokenizer.decode(output[0])
# <pad> 36</s>
```
More examples:
+ Calculate the greatest common factor of 3470 and 97090.
+ Answer: 10 Pred: 10
----
+ Calculate the highest common factor of 3480 and 775431.
+ Answer: 87 Pred: 87
----
+ What is the highest common divisor of 26 and 88049?
+ Answer: 13 Pred: 13
----
+ Calculate the highest common factor of 1416 and 24203688.
+ Answer: 1416 Pred: 1416
----
+ Calculate the highest common divisor of 124 and 69445828.
+ Answer: 124 Pred: 124
----
+ What is the greatest common factor of 657906 and 470?
+ Answer: 94 Pred: 94
----
+ What is the highest common factor of 4210884 and 72?
+ Answer: 36 Pred: 36
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
dbernsohn/algebra_linear_1d
|
dbernsohn
| 2021-02-03T07:09:42Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
# algebra_linear_1d
---
language: en
datasets:
- algebra_linear_1d
---
This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on [math_dataset/algebra_linear_1d](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetalgebra_linear_1d_default_config) for solving **1D linear equations**.
To load the model (necessary packages: `!pip install transformers sentencepiece`):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/algebra_linear_1d")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/algebra_linear_1d")
```
You can then use this model to solve 1D linear equations.
```python
query = "Solve 0 = 1026*x - 2474 + 46592 for x"
input_text = f"{query} </s>"
features = tokenizer([input_text], return_tensors='pt')
model.to('cuda')
output = model.generate(input_ids=features['input_ids'].cuda(),
attention_mask=features['attention_mask'].cuda())
tokenizer.decode(output[0])
# <pad> -41</s>
```
More examples:
+ Solve 1112*r + 1418*r - 5220 = 587*r - 28536 for r.
+ Answer: -12 Pred: -12
----
+ Solve -119*k + 6*k - 117 - 352 = 322 for k.
+ Answer: -7 Pred: -7
----
+ Solve -547 = -62*t + 437 - 798 for t.
+ Answer: 3 Pred: 3
----
+ Solve 3*j - 3*j + 0*j - 4802 = 98*j for j.
+ Answer: -49 Pred: -49
----
+ Solve 3047*n - 6130*n - 1700 = -3049*n for n.
+ Answer: -50 Pred: -50
----
+ Solve 121*i + 1690 = 76*i - 128*i + 133 for i.
+ Answer: -9 Pred: -9
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
HHousen/distil-led-large-cnn-16384
|
HHousen
| 2021-02-02T00:58:07Z | 288 | 4 |
transformers
|
[
"transformers",
"pytorch",
"led",
"text2text-generation",
"en",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language: en
datasets:
- cnn_dailymail
license: apache-2.0
---
## DistilLED Large CNN 16384
*distil-led-large-cnn-16384* was initialized from [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6), in a fashion similar to [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384).
To be able to process 16K tokens, *sshleifer/distilbart-cnn-12-6*'s position embedding matrix was simply copied 16 times.
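As a rough illustration of what "copied 16 times" means (not the exact conversion script; the shapes below are assumptions), the idea is simply tiling the learned position-embedding weight along the position axis:
```python
import torch

# Hypothetical shapes: a BART-style learned position embedding of size
# (max_positions, hidden_size), tiled along dim 0 to cover 16K positions.
bart_pos_emb = torch.randn(1024, 1024)    # stand-in for the DistilBART weight
led_pos_emb = bart_pos_emb.repeat(16, 1)  # shape: (16384, 1024)
print(led_pos_emb.shape)
```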
This checkpoint should be loaded into `LEDForConditionalGeneration.from_pretrained`. See the [LED documentation](https://huggingface.co/transformers/model_doc/led.html) for more information.
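A minimal loading and summarization sketch (the generation settings are illustrative, not tuned values):
```python
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("HHousen/distil-led-large-cnn-16384")
model = LEDForConditionalGeneration.from_pretrained("HHousen/distil-led-large-cnn-16384")

inputs = tokenizer("Replace this with a long article ...", return_tensors="pt")
# LED typically uses global attention on at least the first token.
global_attention_mask = torch.zeros_like(inputs.input_ids)
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs.input_ids,
    global_attention_mask=global_attention_mask,
    max_length=256,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```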
|
mrm8488/mobilebert-finetuned-ner
|
mrm8488
| 2021-01-30T11:42:05Z | 82 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mobilebert",
"token-classification",
"ner",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- mobilebert
- ner
license: mit
---
|
ordinarykids/borges02
|
ordinarykids
| 2021-01-29T12:54:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Borges02
## Model description
You can generate new short stories from Jorge Luis Borges.
## Intended uses & limitations
#### How to use
```python
# You can include sample code which will be formatted
```
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020}
}
```
|
NTUYG/SOTitle-java-BART
|
NTUYG
| 2021-01-28T15:12:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
## How to use
```python
import logging
from simpletransformers.seq2seq import Seq2SeqModel, Seq2SeqArgs
logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)
model_args = Seq2SeqArgs()
# Load the trained model
model = Seq2SeqModel(
encoder_decoder_type="bart",
encoder_decoder_name="NTUYG/SOTitle-java-BART",
args=model_args,
)
describe = """
I am a beginner at Android Java development but I have a few years of school + uni experience in Java. I am trying to write to a text file in an assets folder in my app using FileOutputStream but it doesn't seem to write to it at all since I am using InputStream to read the file after and there haven't any updates. Here is my code
"""
code = """
private void updateTextFile(String update) {
FileOutputStream fos = null;
try
{
fos = openFileOutput("Questions",MODE_PRIVATE);
fos.write("Testing".getBytes());
}
catch (FileNotFoundException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
if(fos!=null)
{
try
{
fos.close();
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
String text = "";
try
{
InputStream is = getAssets().open("Questions");
int size = is.available();
byte[] buffer = new byte[size];
is.read(buffer);
is.close();
text = new String(buffer);
}
catch (IOException e)
{
e.printStackTrace();
}
System.out.println("Tesing output " + text);
}
"""
from nltk import word_tokenize
describe = describe.replace('\n',' ').replace('\r',' ')
describe = ' '.join(word_tokenize(describe))
code = code.replace('\n',' ').replace('\r',' ')
code = ' '.join(word_tokenize(code))
# human : Java Android Cant seem to update text file using FileOutputStream
body = describe + ' <code> ' + code +' </code>'
print(
model.predict(
[
body
]
)
)
```
|
Narsil/small_conversational_test
|
Narsil
| 2021-01-20T16:30:52Z | 2 | 0 |
transformers
|
[
"transformers",
"albert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z |
```python
import tempfile
from tokenizers import Tokenizer, models, processors
from transformers.tokenization_utils_fast import PreTrainedTokenizerFast
vocab = [(chr(i), i) for i in range(256)]
tokenizer = Tokenizer(models.Unigram(vocab))
tokenizer.add_special_tokens(["<bos>", "<eos>"])
tokenizer.post_processor = processors.TemplateProcessing(
single="<bos> $0 <eos>", special_tokens=[("<bos>", 256), ("<eos>", 257)]
)
with tempfile.NamedTemporaryFile() as f:
tokenizer.save(f.name)
real_tokenizer = PreTrainedTokenizerFast(tokenizer_file=f.name, eos_token="<eos>", bos_token="<bos>")
real_tokenizer._tokenizer.save("dummy.json")
```
Small change.
|
yannis-papanikolaou/t5-code-generation
|
yannis-papanikolaou
| 2021-01-19T14:46:48Z | 0 | 1 | null |
[
"arxiv:2101.07138",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# T5 for Semantic Parsing
## Model description
T5 (small and large) fine-tuned on CoNaLa for semantic parsing (natural-language descriptions to Python code).
Paper: https://arxiv.org/pdf/2101.07138.pdf
Code, data and how to use: https://github.com/ypapanik/t5-for-code-generation
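The linked repository covers the exact setup; purely as a hedged illustration of the general text-to-code generation pattern (the checkpoint path below is a placeholder assumption, not how the weights are necessarily distributed):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Hypothetical local checkpoint path; see the GitHub repository above for the real weights.
model_path = "path/to/t5-conala-checkpoint"
tokenizer = T5Tokenizer.from_pretrained(model_path)
model = T5ForConditionalGeneration.from_pretrained(model_path)

prompt = "sort a list of dictionaries by the key 'price'"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```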
### Cite
```
@misc{papanikolaou2021teach,
title={Teach me how to Label: Labeling Functions from Natural Language with Text-to-text Transformers},
author={Yannis Papanikolaou},
year={2021},
eprint={2101.07138},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
subham92/translation_model_by_subham
|
subham92
| 2021-01-18T10:29:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"fi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- fi
- en
tags:
- translation
license: apache-2.0
---
|
ggoggam/xlnet-base-squadv2
|
ggoggam
| 2021-01-17T11:52:34Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"xlnet",
"question-answering",
"arxiv:1906.08237",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
# XLNet Fine-tuned on SQuAD 2.0 Dataset
This is [XLNet](https://arxiv.org/abs/1906.08237), jointly developed by Google and CMU, fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for the question-answering downstream task.
## Training Results (Metrics)
```
{
"HasAns_exact": 74.7132253711201
"HasAns_f1": 82.11971607032643
"HasAns_total": 5928
"NoAns_exact": 73.38940285954584
"NoAns_f1": 73.38940285954584
"NoAns_total": 5945
"best_exact": 75.67590331003116
"best_exact_thresh": -19.554906845092773
"best_f1": 79.16215426779269
"best_f1_thresh": -19.554906845092773
"epoch": 4.0
"exact": 74.05036637749515
"f1": 77.74830934598614
"total": 11873
}
```
## Results Comparison
| Metric | Paper | Model |
| ------ | --------- | --------- |
| **EM** | **78.46** | **75.68** (-2.78) |
| **F1** | **81.33** | **79.16** (-2.17)|
Better fine-tuned models coming soon.
## How to Use
```python
from transformers import XLNetForQuestionAnswering, XLNetTokenizerFast
model = XLNetForQuestionAnswering.from_pretrained('jkgrad/xlnet-base-squadv2')
tokenizer = XLNetTokenizerFast.from_pretrained('jkgrad/xlnet-base-squadv2')
```
|
poipii/yelp_sentiment_distilbert-base-uncased_tuned
|
poipii
| 2021-01-14T02:37:35Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- sentiment
- distilbert
pipeline_tag: text-classification
---
|
julien-c/mini_an4_asr_train_raw_bpe_valid
|
julien-c
| 2021-01-12T20:20:17Z | 4 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- ljspeech
license: cc-by-4.0
---
## Example ESPnet2 ASR model
### `kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.acc.best`
♻️ Imported from https://zenodo.org/record/3957940#.X90XNelKjkM
This model was trained by kamo-naoyuki using the mini_an4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
patrickvonplaten/led-large-16384-pubmed
|
patrickvonplaten
| 2021-01-11T15:42:53Z | 56 | 12 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"led",
"text2text-generation",
"en",
"dataset:scientific_papers",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- scientific_papers
license: apache-2.0
---
## Introduction
[Allenai's Longformer Encoder-Decoder (LED)](https://github.com/allenai/longformer#longformer).
This is an unofficial *led-large-16384* checkpoint that is fine-tuned on the [pubmed dataset](https://huggingface.co/datasets/scientific_papers).
The model was fine-tuned and evaluated as detailed in [this notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing)
## Results
The model achieves a **ROUGE-2** score of 19.33 on PubMed, which is competitive with state-of-the-art models.
## Usage
The model can be used as follows. The input is taken from the test data of the [pubmed dataset](https://huggingface.co/datasets/scientific_papers).
```python
LONG_ARTICLE = """"anxiety affects quality of life in those living
with parkinson 's disease ( pd ) more so than
overall cognitive status , motor deficits , apathy
, and depression [ 13 ] . although anxiety and
depression are often related and coexist in pd
patients , recent research suggests that anxiety
rather than depression is the most prominent and
prevalent mood disorder in pd [ 5 , 6 ] . yet ,
our current understanding of anxiety and its
impact on cognition in pd , as well as its neural
basis and best treatment practices , remains
meager and lags far behind that of depression .
overall , neuropsychiatric symptoms in pd have
been shown to be negatively associated with
cognitive performance . for example , higher
depression scores have been correlated with lower
scores on the mini - mental state exam ( mmse ) [
8 , 9 ] as well as tests of memory and executive
functions ( e.g. , attention ) [ 1014 ] . likewise
, apathy and anhedonia in pd patients have been
associated with executive dysfunction [ 10 , 1523
] . however , few studies have specifically
investigated the relationship between anxiety and
cognition in pd . one study showed a strong
negative relationship between anxiety ( both state
and trait ) and overall cognitive performance (
measured by the total of the repeatable battery
for the assessment of neuropsychological status
index ) within a sample of 27 pd patients .
furthermore , trait anxiety was negatively
associated with each of the cognitive domains
assessed by the rbans ( i.e. , immediate memory ,
visuospatial construction , language , attention ,
and delayed memory ) . two further studies have
examined whether anxiety differentially affects
cognition in patients with left - sided dominant
pd ( lpd ) versus right - sided dominant pd ( rpd
) ; however , their findings were inconsistent .
the first study found that working memory
performance was worse in lpd patients with anxiety
compared to rpd patients with anxiety , whereas
the second study reported that , in lpd , apathy
but not anxiety was associated with performance on
nonverbally mediated executive functions and
visuospatial tasks ( e.g. , tmt - b , wms - iii
spatial span ) , while in rpd , anxiety but not
apathy significantly correlated with performance
on verbally mediated tasks ( e.g. , clock reading
test and boston naming test ) . furthermore ,
anxiety was significantly correlated with
neuropsychological measures of attention and
executive and visuospatial functions . taken
together , it is evident that there are limited
and inconsistent findings describing the
relationship between anxiety and cognition in pd
and more specifically how anxiety might influence
particular domains of cognition such as attention
and memory and executive functioning . it is also
striking that , to date , no study has examined
the influence of anxiety on cognition in pd by
directly comparing groups of pd patients with and
without anxiety while excluding depression . given
that research on healthy young adults suggests
that anxiety reduces processing capacity and
impairs processing efficiency , especially in the
central executive and attentional systems of
working memory [ 26 , 27 ] , we hypothesized that
pd patients with anxiety would show impairments in
attentional set - shifting and working memory
compared to pd patients without anxiety .
furthermore , since previous work , albeit limited
, has focused on the influence of symptom
laterality on anxiety and cognition , we also
explored this relationship . seventeen pd patients
with anxiety and thirty - three pd patients
without anxiety were included in this study ( see
table 1 ) . the cross - sectional data from these
participants was taken from a patient database
that has been compiled over the past 8 years (
since 2008 ) at the parkinson 's disease research
clinic at the brain and mind centre , university
of sydney . inclusion criteria involved a
diagnosis of idiopathic pd according to the united
kingdom parkinson 's disease society brain bank
criteria and were confirmed by a neurologist (
sjgl ) . patients also had to have an adequate
proficiency in english and have completed a full
neuropsychological assessment . ten patients in
this study ( 5 pd with anxiety ; 5 pd without
anxiety ) were taking psychotropic drugs ( i.e. ,
benzodiazepine or selective serotonin reuptake
inhibitor ) . patients were also excluded if they
had other neurological disorders , psychiatric
disorders other than affective disorders ( such as
anxiety ) , or if they reported a score greater
than six on the depression subscale of the
hospital anxiety and depression scale ( hads ) .
thus , all participants who scored within a
depressed ( hads - d > 6 ) range were excluded
from this study , in attempt to examine a refined
sample of pd patients with and without anxiety in
order to determine the independent effect of
anxiety on cognition . this research was approved
by the human research ethics committee of the
university of sydney , and written informed
consent was obtained from all participants . self
- reported hads was used to assess anxiety in pd
and has been previously shown to be a useful
measure of clinical anxiety in pd . a cut - off
score of > 8 on the anxiety subscale of the hads (
hads - a ) was used to identify pd cases with
anxiety ( pda+ ) , while a cut - off score of < 6
on the hads - a was used to identify pd cases
without anxiety ( pda ) . this criterion was more
stringent than usual ( > 7 cut - off score ) , in
effort to create distinct patient groups . the
neurological evaluation rated participants
according to hoehn and yahr ( h&y ) stages and
assessed their motor symptoms using part iii of
the revised mds task force unified parkinson 's
disease rating scale ( updrs ) . in a similar way
this was determined by calculating a total left
and right score from rigidity items 3035 ,
voluntary movement items 3643 , and tremor items
5057 from the mds - updrs part iii ( see table 1 )
. processing speed was assessed using the trail
making test , part a ( tmt - a , z - score ) .
attentional set - shifting was measured using the
trail making test , part b ( tmt - b , z - score )
. working memory was assessed using the digit span
forward and backward subtest of the wechsler
memory scale - iii ( raw scores ) . language was
assessed with semantic and phonemic verbal fluency
via the controlled oral word associated test (
cowat animals and letters , z - score ) . the
ability to retain learned verbal memory was
assessed using the logical memory subtest from the
wechsler memory scale - iii ( lm - i z - score ,
lm - ii z - score , % lm retention z - score ) .
the mini - mental state examination ( mmse )
demographic , clinical , and neuropsychological
variables were compared between the two groups
with the independent t - test or mann whitney u
test , depending on whether the variable met
parametric assumptions . chi - square tests were
used to examine gender and symptom laterality
differences between groups . all analyses employed
an alpha level of p < 0.05 and were two - tailed .
spearman correlations were performed separately in
each group to examine associations between anxiety
and/or depression ratings and cognitive functions
. as expected , the pda+ group reported
significant greater levels of anxiety on the hads
- a ( u = 0 , p < 0.001 ) and higher total score
on the hads ( u = 1 , p < 0.001 ) compared to the
pda group ( table 1 ) . groups were matched in age
( t(48 ) = 1.31 , p = 0.20 ) , disease duration (
u = 259 , p = 0.66 ) , updrs - iii score ( u =
250.5 , p = 0.65 ) , h&y ( u = 245 , p = 0.43 ) ,
ledd ( u = 159.5 , p = 0.80 ) , and depression (
hads - d ) ( u = 190.5 , p = 0.06 ) . additionally
, all groups were matched in the distribution of
gender ( = 0.098 , p = 0.75 ) and side - affected
( = 0.765 , p = 0.38 ) . there were no group
differences for tmt - a performance ( u = 256 , p
= 0.62 ) ( table 2 ) ; however , the pda+ group
had worse performance on the trail making test
part b ( t(46 ) = 2.03 , p = 0.048 ) compared to
the pda group ( figure 1 ) . the pda+ group also
demonstrated significantly worse performance on
the digit span forward subtest ( t(48 ) = 2.22 , p
= 0.031 ) and backward subtest ( u = 190.5 , p =
0.016 ) compared to the pda group ( figures 2(a )
and 2(b ) ) . neither semantic verbal fluency (
t(47 ) = 0.70 , p = 0.49 ) nor phonemic verbal
fluency ( t(47 ) = 0.39 , p = 0.70 ) differed
between groups . logical memory i immediate recall
test ( u = 176 , p = 0.059 ) showed a trend that
the pda+ group had worse new verbal learning and
immediate recall abilities than the pda group .
however , logical memory ii test performance ( u =
219 , p = 0.204 ) and logical memory % retention (
u = 242.5 , p = 0.434 ) did not differ between
groups . there were also no differences between
groups in global cognition ( mmse ) ( u = 222.5 ,
p = 0.23 ) . participants were split into lpd and
rpd , and then further group differences were
examined between pda+ and pda. importantly , the
groups remained matched in age , disease duration
, updrs - iii , dde , h&y stage , and depression
but remained significantly different on self -
reported anxiety . lpda+ demonstrated worse
performance on the digit span forward test ( t(19
) = 2.29 , p = 0.033 ) compared to lpda , whereas
rpda+ demonstrated worse performance on the digit
span backward test ( u = 36.5 , p = 0.006 ) , lm -
i immediate recall ( u = 37.5 , p = 0.008 ) , and
lm - ii ( u = 45.0 , p = 0.021 ) but not lm %
retention ( u = 75.5 , p = 0.39 ) compared to
rpda. this study is the first to directly compare
cognition between pd patients with and without
anxiety . the findings confirmed our hypothesis
that anxiety negatively influences attentional set
- shifting and working memory in pd . more
specifically , we found that pd patients with
anxiety were more impaired on the trail making
test part b which assessed attentional set -
shifting , on both digit span tests which assessed
working memory and attention , and to a lesser
extent on the logical memory test which assessed
memory and new verbal learning compared to pd
patients without anxiety . taken together , these
findings suggest that anxiety in pd may reduce
processing capacity and impair processing
efficiency , especially in the central executive
and attentional systems of working memory in a
similar way as seen in young healthy adults [ 26 ,
27 ] . although the neurobiology of anxiety in pd
remains unknown , many researchers have postulated
that anxiety disorders are related to
neurochemical changes that occur during the early
, premotor stages of pd - related degeneration [
37 , 38 ] such as nigrostriatal dopamine depletion
, as well as cell loss within serotonergic and
noradrenergic brainstem nuclei ( i.e. , raphe
nuclei and locus coeruleus , resp . , which
provide massive inputs to corticolimbic regions )
. over time , chronic dysregulation of
adrenocortical and catecholamine functions can
lead to hippocampal damage as well as
dysfunctional prefrontal neural circuitries [ 39 ,
40 ] , which play a key role in memory and
attention . recent functional neuroimaging work
has suggested that enhanced hippocampal activation
during executive functioning and working memory
tasks may represent compensatory processes for
impaired frontostriatal functions in pd patients
compared to controls . therefore , chronic stress
from anxiety , for example , may disrupt
compensatory processes in pd patients and explain
the cognitive impairments specifically in working
memory and attention seen in pd patients with
anxiety . it has also been suggested that
hyperactivation within the putamen may reflect a
compensatory striatal mechanism to maintain normal
working memory performance in pd patients ;
however , losing this compensatory activation has
been shown to contribute to poor working memory
performance . anxiety in mild pd has been linked
to reduced putamen dopamine uptake which becomes
more extensive as the disease progresses . this
further supports the notion that anxiety may
disrupt compensatory striatal mechanisms as well ,
providing another possible explanation for the
cognitive impairments observed in pd patients with
anxiety in this study . noradrenergic and
serotonergic systems should also be considered
when trying to explain the mechanisms by which
anxiety may influence cognition in pd . although
these neurotransmitter systems are relatively
understudied in pd cognition , treating the
noradrenergic and serotonergic systems has shown
beneficial effects on cognition in pd . selective
serotonin reuptake inhibitor , citalopram , was
shown to improve response inhibition deficits in
pd , while noradrenaline reuptake blocker ,
atomoxetine , has been recently reported to have
promising effects on cognition in pd [ 45 , 46 ] .
overall , very few neuroimaging studies have been
conducted in pd in order to understand the neural
correlates of pd anxiety and its underlying neural
pathology . future research should focus on
relating anatomical changes and neurochemical
changes to neural activation in order to gain a
clearer understanding on how these pathologies
affect anxiety in pd . to further understand how
anxiety and cognitive dysfunction are related ,
future research should focus on using advanced
structural and function imaging techniques to
explain both cognitive and neural breakdowns that
are associated with anxiety in pd patients .
research has indicated that those with amnestic
mild cognitive impairment who have more
neuropsychiatric symptoms have a greater risk of
developing dementia compared to those with fewer
neuropsychiatric symptoms . future studies should
also examine whether treating neuropsychiatric
symptoms might impact the progression of cognitive
decline and improve cognitive impairments in pd
patients . previous studies have used pd symptom
laterality as a window to infer asymmetrical
dysfunction of neural circuits . for example , lpd
patients have greater inferred right hemisphere
pathology , whereas rpd patients have greater
inferred left hemisphere pathology . thus ,
cognitive domains predominantly subserved by the
left hemisphere ( e.g. , verbally mediated tasks
of executive function and verbal memory ) might be
hypothesized to be more affected in rpd than lpd ;
however , this remains controversial . it has also
been suggested that since anxiety is a common
feature of left hemisphere involvement [ 48 , 49 ]
, cognitive domains subserved by the left
hemisphere may also be more strongly related to
anxiety . results from this study showed selective
verbal memory deficits in rpd patients with
anxiety compared to rpd without anxiety , whereas
lpd patients with anxiety had greater attentional
/ working memory deficits compared to lpd without
anxiety . although these results align with
previous research , interpretations of these
findings should be made with caution due to the
small sample size in the lpd comparison
specifically . recent work has suggested that the
hads questionnaire may underestimate the burden of
anxiety related symptomology and therefore be a
less sensitive measure of anxiety in pd [ 30 , 50
] . in addition , our small sample size also
limited the statistical power for detecting
significant findings . based on these limitations
, our findings are likely conservative and
underrepresent the true impact anxiety has on
cognition in pd . additionally , the current study
employed a very brief neuropsychological
assessment including one or two tests for each
cognitive domain . future studies are encouraged
to collect a more complex and comprehensive
battery from a larger sample of pd participants in
order to better understand the role anxiety plays
on cognition in pd . another limitation of this
study was the absence of diagnostic interviews to
characterize participants ' psychiatric symptoms
and specify the type of anxiety disorders included
in this study . future studies should perform
diagnostic interviews with participants ( e.g. ,
using dsm - v criteria ) rather than relying on
self - reported measures to group participants ,
in order to better understand whether the type of
anxiety disorder ( e.g. , social anxiety , phobias
, panic disorders , and generalized anxiety )
influences cognitive performance differently in pd
. one advantage the hads questionnaire provided
over other anxiety scales was that it assessed
both anxiety and depression simultaneously and
allowed us to control for coexisting depression .
although there was a trend that the pda+ group
self - reported higher levels of depression than
the pda group , all participants included in the
study scored < 6 on the depression subscale of the
hads . controlling for depression while assessing
anxiety has been identified as a key shortcoming
in the majority of recent work . considering many
previous studies have investigated the influence
of depression on cognition in pd without
accounting for the presence of anxiety and the
inconsistent findings reported to date , we
recommend that future research should try to
disentangle the influence of anxiety versus
depression on cognitive impairments in pd .
considering the growing number of clinical trials
for treating depression , there are few if any for
the treatment of anxiety in pd . anxiety is a key
contributor to decreased quality of life in pd and
greatly requires better treatment options .
moreover , anxiety has been suggested to play a
key role in freezing of gait ( fog ) , which is
also related to attentional set - shifting [ 52 ,
53 ] . future research should examine the link
between anxiety , set - shifting , and fog , in
order to determine whether treating anxiety might
be a potential therapy for improving fog ."""
from transformers import LEDForConditionalGeneration, LEDTokenizer
import torch
tokenizer = LEDTokenizer.from_pretrained("patrickvonplaten/led-large-16384-pubmed")
input_ids = tokenizer(LONG_ARTICLE, return_tensors="pt").input_ids.to("cuda")
global_attention_mask = torch.zeros_like(input_ids)
# set global_attention_mask on first token
global_attention_mask[:, 0] = 1
model = LEDForConditionalGeneration.from_pretrained("patrickvonplaten/led-large-16384-pubmed", return_dict_in_generate=True).to("cuda")
sequences = model.generate(input_ids, global_attention_mask=global_attention_mask).sequences
summary = tokenizer.batch_decode(sequences)
```
|