JonathanSum#8528: Even though it is dalle mini, it still has some failure cases. JonathanSum#8528: Sometimes I feel all the state-of-the-art papers we have now, such as GANs or diffusion, never show the cases where they failed. They all just pretend it is working and state of the art by showing it works in most cases. But we should understand more about why it fails. Senkaw'naowis'nebpher#1484: What the hell are you talking about selea#8026: I think it happens because it can't map image and text representations properly, because of the lack of large enough image-text correspondence datasets. Try to put some obscure fetishes into dalle. It just doesn't know what you mean, and fails more often. Also, I think that if we want to use that technology for somewhat useful applications, dalle models should be connected to some kind of user-driven optimiser. For now, a dalle model, if I understand everything correctly, takes a prompt and initial random vectors. Afterwards, it processes them several times, producing N image versions. We could seriously improve results if we were able to rank the best and worst images and demand new versions. A simple genetic algorithm (remove the worst images from the pool, crossover the random vectors of the best ones, add a bit more randomness on demand to selected pictures, produce 9 more images) would allow the user to get an image of acceptable quality through selection (see the sketch below). selea#8026: Also, I think models often fail because they aren't aware of space and perspective. They work with 2D representations but can't figure out that they represent 3D-esque objects. Possibly they should be pretrained with video of 3D scenes, to ensure that if we take different angles of a single stage, the representation values change smoothly. irowberry#6888: @selea yeah, if you look at the dalle-2 paper, it can have a hard time figuring out how big objects should be. There's an example where it has cookies next to a glass of milk and the cookies are wayyy too big. Also, there's an interesting post on AssemblyAI (I think) about Google's Imagen model and why it performs better in some cases, due to not training a text encoder with the rest; they just use embeddings from a frozen T5 model. Which is interesting. https://cdn.discordapp.com/attachments/898617453813334077/991953793031884861/unknown.png Haya#9850: https://cdn.discordapp.com/attachments/898617453813334077/991977919628972072/IMG_1998.jpg selea#8026: My opinion is that the language of thought hypothesis is true, and our conscious thoughts are based on some sort of inner encoding language which has syntax and grammar. Personally I think that said language resembles XML or HTML, so our thoughts and inner representations correspond to some sort of document object model. Possibly, if we figure out how to make neural nets encode and represent a DOM, things will work better? core#7039: https://github.com/google/agi core#7039: Google released an AGI NULL#3726: can you query a cookie next to a glass of red wine irowberry#6888: Unfortunately I don’t have access. I just took this from the paper. reavatar#8127: From a different model https://cdn.discordapp.com/attachments/898617453813334077/992100871016038430/unknown.png irowberry#6888: That just looks like a bad photoshop job lol reavatar#8127: https://cdn.discordapp.com/attachments/898617453813334077/993478983406604348/unknown.png NULL#3726: Oo
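A minimal sketch of the selection loop selea describes above, not taken from any dalle codebase; `evolve_latents` and its arguments are hypothetical stand-ins, with the user's manual ranking playing the role of `scores`:

```py
import numpy as np

def evolve_latents(latents, scores, n_keep=3, n_out=9, mutation=0.1, rng=None):
    """One generation of the user-driven selection loop sketched above.

    latents: (N, D) array of the random vectors that were fed to the decoder.
    scores:  length-N user ranking of the resulting images (higher = better).
    Returns n_out new latent vectors bred from the best-ranked parents.
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Remove the worst images from the pool: keep only the top-ranked latents.
    parents = latents[np.argsort(scores)[::-1][:n_keep]]
    children = []
    for _ in range(n_out):
        # Crossover: mix the coordinates of two randomly chosen parents.
        a, b = parents[rng.choice(len(parents), size=2, replace=True)]
        mask = rng.random(a.shape) < 0.5
        child = np.where(mask, a, b)
        # "Add a bit more random on demand": small Gaussian mutation.
        children.append(child + mutation * rng.normal(size=child.shape))
    return np.stack(children)

# Hypothetical usage: decode the returned latents with the same prompt, let the
# user rank the 9 new images, and repeat until one is good enough.
```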
Anurag Singh#5200: any tricks to get the access early? 😅 reavatar#8127: Nothing specific. Apply from different emails with different roles like an artist, developer, researcher etc. Maybe add some socials. By the way, I got access with an email that doesn't have GPT-3 access. NULL#3726: do they have api limits? NULL#3726: or build your api and forward requests through your account 😂 Deleted User#0000: https://ibb.co/PW4YZhy Deleted User#0000: Are these the Hugging Face @admin who deleted my Santa-Claus ||in the columbine shooting post?|| Deleted User#0000: jk 😉 Deleted User#0000: If your misuse policy was not so strict I could have your faces too 😉 Omar Sanseviero#6198: @Deleted User please, we want to build a safe space in which people can collaborate and feel free to share and build community. The comments above don't adhere to the community spaces we're building, so please avoid toxic attitudes in this space. This is a second and last warning Deleted User#0000: it's called a joke man, hence the jk part Deleted User#0000: sheesh Senkaw'naowis'nebpher#1484: Average Discord mod: Anurag Singh#5200: https://twitter.com/__jesse_li/status/1349099902341685249?s=21&t=vsMvXr4AGhFlSXb5BvhQCw Omar Sanseviero#6198: https://twitter.com/osanseviero/status/1546224159553986561 NULL#3726: https://twitter.com/gowthami_s/status/1547092400182505475?s=20&t=xlRhPnA26NY2j732BntKTQ Anurag Singh#5200: Most of the pics don’t even look like of someone from south asia Muhtasham#4630: https://cdn.discordapp.com/attachments/898617453813334077/997300232511570052/IMG_0036.jpg Muhtasham#4630: You wake up and it’s 2022 Omar Sanseviero#6198: Wait for text to video models Evan.#3454: https://cdn.discordapp.com/attachments/898617453813334077/997436117966127104/1724F7D4-3398-4CC6-B2E7-17388F1974F7.jpg
cakiki#9145: prompted with "Bokeh"? Evan.#3454: Yess Evan.#3454: Winter and fall Evan.#3454: Which one should I do next? cakiki#9145: i feel something winter-y would be cool Evan.#3454: Hmm Evan.#3454: Brb Evan.#3454: https://cdn.discordapp.com/attachments/898617453813334077/997441222912249946/IMG_5388.png WillieCubed#9735: Oh goodness. I fear the day someone can type "Video of [person] doing [embarrassing thing]" into a model with enough coherence to rival that of GPT-3 on generating text-based stories Anurag Singh#5200: People have already started creating stop motion videos though https://twitter.com/paultrillo/status/1547274303552438274?s=21&t=sZN-WHaJz6PesrN8uPYINg WillieCubed#9735: Today in "Just when I thought I had seen it all" 1915#4796: https://cdn.discordapp.com/attachments/898617453813334077/997589499519455352/IMG_1522.png NULL#3726: https://cdn.discordapp.com/attachments/898617453813334077/998332260610736239/Screenshot_20220717-165527.jpg Adam123#7276: Do they obscure the faces after the image output or the model itself produce that? NULL#3726: model itself produce that NULL#3726: maybe dalle2 would do better tunasaurus#0611: https://cdn.discordapp.com/attachments/898617453813334077/998600561014353970/Capture.PNG Jon Gandee#3189: https://cdn.discordapp.com/attachments/898617453813334077/998696775101726740/20220718_161350.png WillieCubed#9735: Actual meme made by an academic researcher https://cdn.discordapp.com/attachments/898617453813334077/999149595722535033/20220719_220011.jpg Deleted User#0000: Apparently the devs set it up to make it do bad on faces of famous to prevent deep fakes and stuff. But I could be wrong
Deleted User#0000: To me it makes sense tho Annas#3911: lol Merve#3234: I sent it to our slack and first author of T5 approved this meme 😂 j3v1qml#4391: Patrick is lovin’ it johko#5355: https://cdn.discordapp.com/attachments/898617453813334077/999579430416556072/unknown.png Annas#3911: mode collapse be like NULL#3726: Check your github city NULL#3726: https://honzaap.github.io/GithubCity/ reavatar#8127: What's the shipment for sending/recieving PRs? Anurag Singh#5200: I guess I don't have prompt engineering skills 😢 https://cdn.discordapp.com/attachments/898617453813334077/1001725729354043482/Screenshot_2022-07-27_at_11.08.48_AM.png Anurag Singh#5200: https://cdn.discordapp.com/attachments/898617453813334077/1001738431833063486/20220727_120022.jpg Annas#3911: https://cdn.discordapp.com/attachments/898617453813334077/1003868148056993822/IMG_20220802_103044.jpg NULL#3726: i was stuck in that device triggered error yesterday 😂 Anurag Singh#5200: https://twitter.com/ethanCaballero/status/1554504075764539394?t=n-1DvQqHu0MRdsb5m9uS8w&s=19 Merve#3234: why did I discover this super late 😄 https://imgflip.com/ai-meme V3N0M#5374: ahh yes, time to make a bot to spam here periodically with this 😏 Anurag Singh#5200: Please don’t do that. It is not good to make bot for spamming free services NULL#3726: https://cdn.discordapp.com/attachments/898617453813334077/1005805912159703141/IMG_1693.png Hachem#7888: https://www.reddit.com/r/discordVideos/comments/wkrei9/chad/?utm_source=share&utm_medium=web2x&context=3 the_proton_crusher#8317: https://twitter.com/antgoldbloom/status/1557833893637525504
NULL#3726: https://cdn.discordapp.com/attachments/898617453813334077/1012750341357445190/Screenshot_20220826-114615.jpg FlakShim#7287: huh, paperclip maximizers look like bears! Anurag Singh#5200: https://cdn.discordapp.com/attachments/898617453813334077/1013758322610016256/IMG_5181.jpg Merve#3234: hahahahahhahaha omg Senkaw'naowis'nebpher#1484: I barley have any idea what any of this means because i'm an idiot. Senkaw'naowis'nebpher#1484: https://cdn.discordapp.com/attachments/898617453813334077/1014671989115453491/dallemini_2022-8-31_19-3-46.png Senkaw'naowis'nebpher#1484: https://cdn.discordapp.com/attachments/898617453813334077/1014672198172160101/dallemini_2022-8-31_19-5-20.png Senkaw'naowis'nebpher#1484: https://cdn.discordapp.com/attachments/898617453813334077/1014673200908607538/dallemini_2022-8-31_19-7-43.png Senkaw'naowis'nebpher#1484: https://cdn.discordapp.com/attachments/898617453813334077/1014673351735775312/dallemini_2022-8-31_19-9-44.png Senkaw'naowis'nebpher#1484: Kind of a fail but at least it kind of got the first two right https://cdn.discordapp.com/attachments/898617453813334077/1014674045981163561/dallemini_2022-8-31_19-12-8.png Darkseal#5643: now that i've found stable diffusion it's time to put Hermione everywhere, in any situation, with her time turner https://youtu.be/e8fYMnLsHm8 V3N0M#5374: https://cdn.discordapp.com/attachments/898617453813334077/1016605521744240670/unknown.png Jon Gandee#3189: https://cdn.discordapp.com/attachments/898617453813334077/1017500259024388126/craiyon_141919_Gumball_watterson_in_court.png ratatui#0101: current state in opensource ML https://cdn.discordapp.com/attachments/898617453813334077/1017788009967538196/fun_ml_frameworks.jpeg cakiki#9145: hahahahaha Jon Gandee#3189: ew https://cdn.discordapp.com/attachments/898617453813334077/1017859109980225586/download_17.jpg reavatar#8127: https://cdn.discordapp.com/attachments/898617453813334077/1019293824851398676/unknown.png Merve#3234: hahahhahahha ChainYo#3610: Tbh it's a genuine concern at my company data office 😂 NULL#3726: Then use docker
But i use mac m1 NULL#3726: is the main concern these days mr_seeker#1337: https://cdn.discordapp.com/attachments/898617453813334077/1024922722511364127/20220929_075502.jpg Omar Sanseviero#6198: This never gets old https://www.youtube.com/watch?v=kl6rsi7BEtk&ab_channel=AnthonyVance NULL#3726: https://cdn.discordapp.com/attachments/898617453813334077/1026587718957932604/image_738e1fb0-1b98-442a-aa0c-d92248de5f3520221003_151248.jpg Johnowhitaker#1906: "Your corntry needs you candy corn propaganda poster" (tis the season) https://cdn.discordapp.com/attachments/898617453813334077/1028008773694005328/466908176_Your_corntry_needs_you_candy_corn_propaganda_poster_candy_corn_colors.png NULL#3726: https://cdn.discordapp.com/attachments/898617453813334077/1029060659998097498/image_37e142d7-8fe2-4230-b95a-033a460c848220221010_105915.jpg unclemantis.btc#5392: https://cdn.discordapp.com/attachments/898617453813334077/1030700140752797746/unknown.png j3v1qml#4391: OpenCL should have called the project GNU Open Unified Device Architecture Moonknight#3334: https://cdn.discordapp.com/attachments/898617453813334077/1038088048296071198/1667563949779.jpg Moonknight#3334: https://cdn.discordapp.com/attachments/898617453813334077/1038420550575468604/1667634769996.jpg !!Puffy Bird!!#7496: Clisapi Is not an API clisapi Moonknight#3334: https://cdn.discordapp.com/attachments/898617453813334077/1038728141922914355/20221106_100538.jpg Woo⛈#3554: https://tenor.com/view/so-cute-aww-gif-26434851 saqib#9728: https://cdn.discordapp.com/attachments/898617453813334077/1040545661663334410/IMG_20221111_133411.jpg,https://cdn.discordapp.com/attachments/898617453813334077/1040545661948538930/6z2s8h1iahw61.jpg Moonknight#3334: Every ML paper has SoTA or State-of-the-art word embedded in it. 🤣 Moonknight#3334: https://cdn.discordapp.com/attachments/898617453813334077/1041646837548982282/20221112_144923.jpg Moonknight#3334: Error in my code: https://cdn.discordapp.com/attachments/898617453813334077/1043457451627921438/1668849657974.jpg Moonknight#3334: https://cdn.discordapp.com/attachments/898617453813334077/1046080251677130842/Screenshot_2022-11-22-23-30-26-06.jpg
Moonknight#3334: https://cdn.discordapp.com/attachments/898617453813334077/1046644415525040198/IMG_20221128_100729.jpg Moonknight#3334: chill cows on the road of Nepal lmao 🤣 https://cdn.discordapp.com/attachments/898617453813334077/1047044314733215754/FB_IMG_1669704757899.jpg Moonknight#3334: https://cdn.discordapp.com/attachments/898617453813334077/1047495336127639562/FB_IMG_1669812665410.jpg Moonknight#3334: https://cdn.discordapp.com/attachments/898617453813334077/1048990838606282782/20221204_145322.jpg Moonknight#3334: I read this in the voice of Stewie 🤣 https://cdn.discordapp.com/attachments/898617453813334077/1048996604004278302/Screenshot_2022-12-04-21-47-33-36.jpg Moonknight#3334: https://cdn.discordapp.com/attachments/898617453813334077/1049195990990729256/FB_IMG_1670141614918.jpg Moonknight#3334: https://cdn.discordapp.com/attachments/898617453813334077/1049725499217301595/FB_IMG_1670332158584.jpg watson#8802: https://cdn.discordapp.com/attachments/898617453813334077/1050372977226682389/IMG_0654.png watson#8802: Figured yous would appreciate Muhtasham#4630: https://cdn.discordapp.com/attachments/898617453813334077/1050737623926313040/IMG_1106.jpg Muhtasham#4630: Ultimate comeback Moonknight#3334: https://cdn.discordapp.com/attachments/898617453813334077/1051016780291776542/FB_IMG_1670616600954.jpg Maha Zoldyck#7621: https://cdn.discordapp.com/attachments/898617453813334077/1051047063783211008/IMG-20220903-WA0005.jpg Noxturnix#5763: https://cdn.discordapp.com/attachments/898617453813334077/1053213063660175360/image.png Maha Zoldyck#7621: https://cdn.discordapp.com/attachments/898617453813334077/1053673665344053319/IMG-20220903-WA0004.jpg Muhtasham#4630: https://cdn.discordapp.com/attachments/898617453813334077/1053873053093859328/IMG_1250.webp !!Puffy Bird!!#7496: https://huggingface.co/BirdL/CatsandDogsPOC-Resnet !!Puffy Bird!!#7496: New SOTA on cats and dogs !!Puffy Bird!!#7496: https://paperswithcode.com/sota/image-classification-on-cats-vs-dogs !!Puffy Bird!!#7496: No one can beat me now
!!Puffy Bird!!#7496: Can I put a testimonial saying I got SOTA results with HF Autotrain !!Puffy Bird!!#7496: Not clarifying it was only on cats and dogs core#7039: https://cdn.discordapp.com/attachments/898617453813334077/1055940419097206825/image.png core#7039: https://tenor.com/view/you-look-like-youve-seen-a-ghost-spooked-youve-seen-a-ghost-shocked-surprised-gif-14871568 Anti-Matter#1740: https://cdn.discordapp.com/attachments/898617453813334077/1057546861642260500/75oacf.png Jupyter Meowbooks#5138: came to see if my memes are in here. They were, but the most important one wasn't! https://cdn.discordapp.com/attachments/898617453813334077/1057590904296325141/FfqzS6qWIAAyTI3.png Avi#7831: https://cdn.discordapp.com/attachments/898617453813334077/1058114158043865219/image0.jpg Merve#3234: why did I read this with his voice 😂😂 Maha Zoldyck#7621: I read it in his voice too. He's such a sweet guy :) Rain ♥#3557: https://cdn.discordapp.com/attachments/898617453813334077/1060476153409847316/sadkek.gif Angel_KR#9023: https://cdn.discordapp.com/attachments/898617453813334077/1061076693122568333/E6cRCerUUAAoPTy.png j3v1qml#4391: Don’t worry if you don’t understand this meme j3v1qml#4391: This arrived at my door today. Not exactly the kind of labeling I'm interested in 😂 https://cdn.discordapp.com/attachments/898617453813334077/1063881700863660092/labeling_catalog_0.jpg reavatar#8127: Ever needed to detect the current OS? This is the way https://cdn.discordapp.com/attachments/898617453813334077/1065590281719779338/1674126209876.png demre#4307: https://cdn.discordapp.com/attachments/898617453813334077/1065747830876143766/image.png demre#4307: 🤣 camenduru#0300: https://cdn.discordapp.com/attachments/898617453813334077/1066140211144306749/scheduling_space.png Moonknight#3334: https://cdn.discordapp.com/attachments/898617453813334077/1066703626891968604/received_1138060543574230.webp Merve#3234: hahahahahah Merve#3234: I'm sorry 😦
Merve#3234: definitely not biased 😄 camenduru#0300: are you the one who broke the server? 😋 np maazie#4815: Debugging code be like:- https://i.redd.it/2o2iq7pxia981.gif Pierre#2337: https://cdn.discordapp.com/attachments/898617453813334077/1073407948178591754/HAI_AI20Index20Report202021_Social_960x640_9.png Pierre#2337: <https://hai.stanford.edu/news/state-ai-10-charts> Pierre#2337: nothing bizarre? Agrover 112#8357: Started a thread. Agrover 112#8357: Basedd Agrover 112#8357: Master Shinji says "No son". "With time you will know XGBoost rules the world" https://cdn.discordapp.com/attachments/898617453813334077/1074733567722139658/image0.gif erinmikail#0321: We’re you looking at my computer today 😝👀👀👀 ramu1115#9047: hahhaha🤣 ImMax#0001: > **While Huggingface officials review other people's accounts:** https://cdn.discordapp.com/attachments/898617453813334077/1076809573677154365/huggingface.png ImMax#0001: > **The Huggingface official who was looking at my account at the time:** https://cdn.discordapp.com/attachments/898617453813334077/1076810270988582962/huggingface.png Anurag Singh#5200: What was the prompt for the image? ImMax#0001: my account has been banned ImMax#0001: now ImMax#0001: :hugging_cowboy: reavatar#8127: https://twitter.com/Lauramaywendel/status/1626928150725754881 shoen078#0401: https://twitter.com/DrJimFan/status/1628813348140875778
lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1079431738314461385/image.png lunarflu#6769: AI is the future (copy pastes your main function) https://cdn.discordapp.com/attachments/898617453813334077/1079715572352290847/image.png lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1079747139057488003/how_maintainers_see_cm.png lunarflu#6769: I love the soyjak one HAHAHAHAHAH 7739964396#0196: mo===world lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1080385518753288272/image.png hkryatin#2268: I personally don't like anything from Facebook so tf>>>>> DelicaTessa#5458: https://cdn.discordapp.com/attachments/898617453813334077/1080458595130417302/Screenshot_20230301-125527_Instagram.jpg lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1080459412587692042/image.png lunarflu#6769: we need more pytorch vs tensorflow memes reavatar#8127: https://twitter.com/karpathy/status/868178954032513024?lang=en reavatar#8127: https://twitter.com/EugeneVinitsky/status/1630270218319855616 lunarflu#6769: real and true lunarflu#6769: tensorflow is what Big Tensor wants you to use. resist the urge. rage against the system. maazie#4815: team tf for now :tf: lunarflu#6769: no pytorch emote? maazie#4815: Yeah maazie#4815: Realized reavatar#8127: I have a tensorflow joke, but it got an error. lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1080514887148646480/image.png
maazie#4815: It should be Nebulous voice actress loses followers because of using PyTorch! This is the industry! Use :tf: lunarflu#6769: I leave that to you and your next pytorch-tensorflow war meme format lunarflu#6769: 😌 lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1081123126773022720/image.png tâche d'Will#6330: What's the problem with TF ? lunarflu#6769: nothing, but it fills the x vs y meme format really well maazie#4815: PyTorch fills it in even better 😛 lunarflu#6769: :hugging_nerd: lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1081677842921832489/image.png maazie#4815: Tensorflow killing PyTorch :hugging_cool: lunarflu#6769: I just love wojak meme formats lunarflu#6769: I can't resist lunarflu#6769: tensorflow memes give me the energy to face the day :tf: :hugging_rocket: maazie#4815: https://cdn.discordapp.com/attachments/898617453813334077/1081688651148296232/030yqqqlpn821.jpg lunarflu#6769: he is literally me reavatar#8127: https://twitter.com/DynamicWebPaige/status/1632151048361213957 reavatar#8127: Bing on Bard. https://cdn.discordapp.com/attachments/898617453813334077/1081937504904695848/image.png Jupyter Meowbooks#5138: Damn Paige's meme game is 10x Meowbooks. She should meme all the time
Tonic#1609: Just dropping in to say that I would like to see more people use the original and correct pronunciation of “🤗” which is French . In French we pronounce it “Hewjinn fåss” … and I think that’s beautiful 🤩 Omar Sanseviero#6198: lol 😆 I'll tell her reavatar#8127: https://twitter.com/madiator/status/1633193298234863617 reavatar#8127: https://twitter.com/dagorenouf/status/1633503571395371009 lunarflu#6769: https://cdn.discordapp.com/emojis/824339693289472041.webp?size=128&quality=lossless lunarflu#6769: yes, I do need 1 million for my calculator app, how could you tell? reavatar#8127: https://twitter.com/kylegawley/status/1634002819358371841 lunarflu#6769: well, if it was audible, you would have heard of it, wouldn't you? lunarflu#6769: https://cdn.discordapp.com/emojis/1037945878952034376.webp?size=128&quality=lossless reavatar#8127: https://twitter.com/danapke/status/1632737738725249025 lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1084072486326501396/image.png reavatar#8127: I have heard, GPT-4 is a mystical AI planet built by OpenAI with Microsoft. You need to take your HoloLens and dive in. That's what Elon is thriving for now, no longer Mars. lunarflu#6769: gpt 4 was designed on a quantum computer and can read your mind, it is inside your walls cakiki#9145: Grifters gonna grift lunarflu#6769: you just dont get it chris......its not about being honest.....its about that cool feeling 😊 cakiki#9145: Hahahaha cakiki#9145: I'll try harder 🥺 Ryükijânö#9211: there is one Ryükijânö#9211: in the pytorch server and i dont have nitro anymore lmao lunarflu#6769: valentines day in ml 😔 https://cdn.discordapp.com/attachments/898617453813334077/1084515546193723463/image.png
johko#5355: Visual ChatGPT is on a different level 🤣 (from here: https://www.reddit.com/r/StableDiffusion/comments/11nlpa5/visual_chatgpt_is_a_master_troll/) https://cdn.discordapp.com/attachments/898617453813334077/1084829060422705223/5jaK43yER20LDG12Ksk9QGeOaAuHMfI00cYA47esRNQ.png reavatar#8127: https://twitter.com/untitled01ipynb/status/1635304710809743362 reavatar#8127: https://cdn.discordapp.com/attachments/898617453813334077/1084877059253346384/5cf0402f-c7e8-43cb-a4ec-936f4ebea3ba.png lunarflu#6769: https://cdn.discordapp.com/emojis/824339693289472041.webp?size=128&quality=lossless Trigaten#6079: https://twitter.com/learnprompting/status/1635040399885746177 reavatar#8127: all domains bought. reavatar#8127: https://twitter.com/tiangolo/status/1635234996226240512 reavatar#8127: https://twitter.com/EMostaque/status/1635702685172129792 lunarflu#6769: stealing this https://cdn.discordapp.com/attachments/898617453813334077/1085299918798868521/FrM8sPYWABs86Ru.png cakiki#9145: Missing a few 0s lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1085496659712151613/image.png ! A#4116: I apologize in advance for potentially seeming a bit slow; is that real though? As it would be relatively impressive lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1085499758019358790/image.png lunarflu#6769: good emote potential ! A#4116: So it's read some SO posts on that meme/joke then regurgitates them? ! A#4116: or blogs/discussions/etc lunarflu#6769: It's a lot to read so don't worry - I stole it from eleutherAI, not sure what the source it but it looks legit ! A#4116: a kind of contextual image search lunarflu#6769: My guess would be it detects areas of text in the image and combines that with knowledge of memes to produce meaning ! A#4116: 🍿
! A#4116: I now declare the 4th GPT games open, let the games begin ! A#4116: AFAIK the joke itself is not new – the whole cartoon that is, not just the layers/layers graph. So it's possible that GPT-4 is just regurgitating people's answers to questions (on SO / reddit / similar) from people who don't quite get the joke or aren't sure whether they get it ! A#4116: https://www.reddit.com/r/ProgrammerHumor/comments/5si1f0/machine_learning_approaches/ 6 years ago ! A#4116: Trying it with a brand new similarly-cerebral joke would be the true test lunarflu#6769: we should use gpt4 to improve the funny ! A#4116: Even so it may be a small sign of how much more powerful GPT-4 is that it includes image input (or image lookup knowledge) as well as text input ! A#4116: That may be the point that the academic paper in the screenshot is making ! A#4116: rather than necessarily that it's capable of reading the cartoon then logically (or logic-equivalently) deriving some kind of high-level appreciation of the humor content in it Sinan#3737: https://cdn.discordapp.com/attachments/898617453813334077/1085679348356501514/image.png fka____#7336: This meme is pure excellence lunarflu#6769: https://fxtwitter.com/untitled01ipynb/status/1636286460809674753?s=20 lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1085968545759887570/FrWxvoeXgAEWGEc.png meln1k#7170: I asked gpt-4 to write a rap song about training language models, and I think it did pretty good: ``` (Verse 1) Yo, I'm training a language model, it's quite the feat Deep learning in my lab, with a funky beat Got my data set, my GPU on fire I'm about to lift this language model higher
PyTorch is the framework, that's what I'm using Transformers and Hugging Face, it's never confusing Open source, baby, yeah, we're breaking chains Taking AI to the next level, we got no reins (Chorus) We're training a language model, right at home With PyTorch, transformers, we're in the zone Hugging Face, we ain't got no time to waste Language generation, man, it's all about the taste (Verse 2) I'm fine-tuning, tweaking, getting that loss down Epochs running, improving, spinning like a merry-go-round BERT, GPT, RoBERTa, and more The transformer zoo, it's what we're here for Tokenizing, attention, context is king From sentences to paragraphs, we're decoding everything
This model's getting smarter, learning like a pro Understanding language like never before, yo ``` meln1k#7170: ``` (Bridge) With every epoch, every batch, every update We're pushing boundaries, ain't no time to stagnate Hugging Face, PyTorch, transformers in our hands Creating a future where AI understands (Chorus) We're training a language model, right at home With PyTorch, transformers, we're in the zone Hugging Face, we ain't got no time to waste Language generation, man, it's all about the taste (Verse 3) Now we're talking, the model's spitting fire Understanding language, taking us higher From tasks like translation to answering questions
This AI's got skill, it's the next big invention But we gotta stay mindful, keep ethics in check Remember it's a tool, not a world-wreck We're shaping the future with the code we write So let's build AI that brings the world to light (Outro) We're training a language model, it's quite the journey With PyTorch, transformers, we're on a learning spree Hugging Face by our side, we're making history AI, you and me, it's the future we wanna see (Chorus) We're training a language model, right at home With PyTorch, transformers, we're in the zone Hugging Face, we ain't got no time to waste Language generation, man, it's all about the taste ``` lunarflu#6769: now do chatgpt (epic rap battles of history)
Nilu#9224: huggingface not only helps you in real life, but also helps you slay dragons https://cdn.discordapp.com/attachments/898617453813334077/1087012939195764836/Screenshot_2023-03-20_at_12.37.53_am.png Omar Sanseviero#6198: Omg 😆 what's the source of this??? Nilu#9224: my comic about attention is all you need https://www.youtube.com/watch?v=Cg0k6hZrAhM Nilu#9224: lmk what you think 😄 Omar Sanseviero#6198: Lol I'll check it out 😄 Omar Sanseviero#6198: Is this in twitter already? xD Nilu#9224: it's not on the hf twitter no Nilu#9224: but you can rt my tweet I guess 😛 https://twitter.com/dooraven/status/1637394463227273217 Omar Sanseviero#6198: OMG 😆 https://cdn.discordapp.com/attachments/898617453813334077/1087019719887945738/Screenshot_from_2023-03-19_15-27-53.png Nilu#9224: heh glad you're enjoying it so far, let me know your overall thoughts when you're done 😄 Nilu#9224: @Omar Sanseviero overall thoughts? tried to balance it between educational, funny and entertaining lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1087117226248781905/image0.png Sazoji#7574: https://cdn.discordapp.com/attachments/898617453813334077/1087847990204579952/image.png Sazoji#7574: Nvidia has questions about your huggingface usage lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1088403426502774804/Fr5TJ7rWcAECipj.png cakiki#9145: they better be supporting every OSS library they use cakiki#9145: financially lunarflu#6769: only the ones that work (they can demonize the others 😌 )
mrshadow773#0840: proposal: every time the safetensors bot opens a PR, everyone who contributed to safetensors/the idea to open a PR for **every model on the hub individually** also gets a notification ⭐ cakiki#9145: i'm not sure I follow. Can you elaborate? lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1088487849579528222/Fr223q4XsAIe_o3.jpg mrshadow773#0840: lol sorry - t's a satire/joke playing on how I get **at least** one if not two notifications for every single model repo I am related to to add `.safetensors`, which has to be done one by one via PR, which also informs me that it's not actually necessary https://huggingface.co/stacked-summaries/flan-t5-large-stacked-samsum-1024/discussions/2#641b7beb7007807ba56291ab My joke is that if everyone who thought this was the best way to do things also got a notification in kind for every PR the bot made, they might feel differently 🙂 https://cdn.discordapp.com/attachments/898617453813334077/1088494286439448646/image.png mrshadow773#0840: actual meme to validate my post https://cdn.discordapp.com/attachments/898617453813334077/1088495350827978822/7fkwtn.png lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1088547552435130388/Fr6_5fnWcAU-LnF.png lunarflu#6769: https://twitter.com/untitled01ipynb/status/1639187685758717952?s=20 lunarflu#6769: proposal for new sticker https://cdn.discordapp.com/attachments/898617453813334077/1088853334556946523/sticker.png lunarflu#6769: https://twitter.com/sleepinyourhat/status/1638988283018465300 lunarflu#6769: this goes so hard https://cdn.discordapp.com/attachments/898617453813334077/1089220962710474812/image.png lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1089530616015495168/OpenAIalignmentPlan_2.png cakiki#9145: YOU LEAKED lunarflu#6769: uhhhhhhhhhhhhh I meant to send this one https://cdn.discordapp.com/attachments/898617453813334077/1089625222283542539/image.png lunarflu#6769: twitter is a great source of ~~ML news~~ memes https://cdn.discordapp.com/attachments/898617453813334077/1089626685063835728/FsKNkAFaUAAc6Sv.png lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1089946720294686870/image.png lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1090643114047320104/RDT_20230326_1614378744948425893242208.png reavatar#8127: https://twitter.com/SashaMTL/status/1641389767815315456 clockwork#2792: The whole pausing thing was pretty overhyped, I think the biggest societal problem with LLMs these days is the impact on climate due to energy consumption, which wasn't even a focus of the letter
reavatar#8127: https://twitter.com/untitled01ipynb/status/1641451252080050187 FremontDango#1845: https://cdn.discordapp.com/attachments/898617453813334077/1091237617955250186/1MV3QHFAGFHRUD2W9.png Psinet#0857: https://cdn.discordapp.com/attachments/898617453813334077/1091548961845825627/1680250509779.png Jupyter Meowbooks#5138: Hey guys, you seem to be liking my memes. Can I enlist you to like/react on my April fools takeover at wandbs? https://twitter.com/weights_biases/status/1642059476655022082?s=46&t=SKZm3ndslERqEDd3XEsvyw lunarflu#6769: based meowbooks lunarflu#6769: 😻 lunarflu#6769: https://twitter.com/JayBarlowBot/status/1642198460123299840 clockwork#2792: Yeah and we aren't anywhere near AGI anyways clockwork#2792: Although I think before then, we'd need to truly define what counts as generally intelligent and an effective way to benchmark that kaibioinfo#1108: yeah, it's particularly funny because in newspaper they write that even world leading experts in AI signed this article... and then they mention names like Elon Musk and Steve Wozniak 🤦‍♂️ Psinet#0857: AGI will not become a thing until we invent a particular hierarchy of algorithms (or a "genus of neural networks"). The advent of Transformers and Perceivers is an example of this slowly occurring. Psinet#0857: Defining and benchmarking AGI will be very, very problematic. We cannot even effectively do that for humans or animals. clockwork#2792: Tbf Bengio did also sign and he's very famous for his work, but realistically I think we need to move on from worshipping these guys since even they can make mistakes kaibioinfo#1108: I found funny that newspapers don't mention any expert but just pseudo-experts. Of course, they want to use names people know (like Elon Musk), but maybe mentioning names of real experts would be the first step for people getting into touch with the topic kaibioinfo#1108: besides that: I think there *are* many real dangers in this technology and we definitely have to think about how we, as society, deal with them. We also definitely need regulations (although, by experience, it's very likely that government will come up with the most stupid regulations that only benefit big companies like Google). I would not blame anyone for signing, because some dangers are real. I still think that most people who signed didn't have this real reasons in mind but some stupid ones clockwork#2792: Realistically we should move to a point where articles don't mention anyone in the title if the article isn't specifically about that person clockwork#2792: Agreed, I think in certain areas, job displacement is a real risk, especially once robotics evolves enough to replace swathes of manual labor jobs clockwork#2792: However, the idea of AI taking over is dumb, in reality any risk of that will be reigned in by big tech responsible for their development lunarflu#6769: At some point you get so big you get to simply be part of it, I think lunarflu#6769: Like Bill Gates
lunarflu#6769: I think it's another example of longtermism @Nima mentioned, people getting hyped up and scared about "the future where AGI kills us", meanwhile people are already dying from a million different causes. I understand their perspective, especially when compared to something like nuclear weapons, but spending most of our time saying how dangerous AGI could be, instead of actually working to make it safer, doesn't seem like an efficient use of time. Psinet#0857: Not enough ppl talking about the potential use of adversarial AGI methods to protect us from malicious AGI's Psinet#0857: What is the best way to stop an AGI? Another AGI. Psinet#0857: Anyhow the pigs are coming Psinet#0857: https://cdn.discordapp.com/attachments/898617453813334077/1092318314992259113/image.png lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1092336628485329017/image-7.png lunarflu#6769: post this one year from now on April fool's with no context https://cdn.discordapp.com/attachments/898617453813334077/1092338210400649296/FswUvJIacAALOoo.jpg kaibioinfo#1108: yes, since I read an article (I think it was NYT) about longtermism I got a complete new understanding about Musk and a lot of crazy bullshit coming out of silicon valley. lunarflu#6769: @Jupyter Meowbooks https://cdn.discordapp.com/attachments/898617453813334077/1092493593714901022/FszgSdwXwAMVPYy.png Noxturnix#5763: https://cdn.discordapp.com/attachments/1040722509026308186/1090791811087355934/Video.mov catid#8370: https://cdn.discordapp.com/attachments/898617453813334077/1093961949026463814/clippy.png lunarflu#6769: https://github.com/dmarx/the-rest-of-the-fucking-owl lunarflu#6769: https://twitter.com/bio_bootloader/status/1644845824994848768 reavatar#8127: https://twitter.com/AiBreakfast/status/1645201751040217089 ma123ma#3731: h reavatar#8127: https://twitter.com/zhansheng/status/1643095495496146944
reavatar#8127: https://twitter.com/wightmanr/status/1645486789900271616 depositame#0001: the antichrist spotted irl leaked nsa photo https://cdn.discordapp.com/attachments/898617453813334077/1095864771896688830/20230404_072430.png lunarflu#6769: 😱 lunarflu#6769: good Halloween potential??? Omar Sanseviero#6198: Wait is that the only picture from AK leaked? 🤯 lunarflu#6769: https://twitter.com/matthen2/status/1646108675684151298 Wison#0171: can someone help me? I use gradio, and when the backend processing time goes over 100s, gradio shows the error: Unexpected token '<', "<!DOCTYPE "... is not valid JSON Wison#0171: How do I set the timeout? I think the default timeout is 100s (see the sketch below) Wison#0171: can I ask questions here? peterschmidt#7984: https://cdn.discordapp.com/attachments/898617453813334077/1096787582198743193/FtqufC5WIAEGfu8.png Gothos#3704: https://cdn.discordapp.com/attachments/898617453813334077/1096823770859188446/285406695285a0b66b2a26dc36515741.png reavatar#8127: https://twitter.com/untitled01ipynb/status/1647249717858910208 🅱🆂🅷🆅🅿#5692: https://cdn.discordapp.com/attachments/898617453813334077/1097692562116051056/IMG-20230417-WA0005.jpg lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1097752970810118184/20230417_212450.jpg Sylvinov#8933: I made a gif to use when you want to support 🤗 https://cdn.discordapp.com/attachments/898617453813334077/1097978734461583482/20230418_170831.gif clockwork#2792: clockwork#2792: Is it based on this sticker? cakiki#9145: Sylvain made both clockwork#2792: Ah cool, the art style definitely fits HF The Nutcracker#0438: https://cdn.discordapp.com/attachments/898617453813334077/1097998631354761336/X_AI_BS.png
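A minimal sketch for the Gradio timeout question above, assuming the `<!DOCTYPE` error comes from an HTTP request timing out on a long-running prediction (the exact ~100s limit depends on the hosting setup, and `slow_task` is a hypothetical stand-in). Enabling Gradio's request queue is the usual way to keep such jobs alive:

```py
import time
import gradio as gr

def slow_task(prompt):
    # Stand-in for a backend call that takes longer than the HTTP timeout.
    time.sleep(120)
    return f"done: {prompt}"

demo = gr.Interface(fn=slow_task, inputs="text", outputs="text")
# Route predictions through Gradio's queue so a request is not tied to one
# long-lived HTTP call that a proxy can cut off after ~100 seconds.
demo.queue()
demo.launch()
```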
lunarflu#6769: lunarflu#6769: Good job Sylvain :hugging_cool: lunarflu#6769: You are now a CONTENT CREATOR 🤯👇🧵🎉 🅱🆂🅷🆅🅿#5692: Ah, the GPT-38 Negative Profit Maximizer! This innovative device is designed to help businesses maximize their profits by minimizing them. How does it work, you ask? Well, it's quite simple. The GPT-38 uses advanced machine learning algorithms to analyze a business's financial data and identify areas where profits could be reduced. It then implements a series of automated processes that systematically decrease profits across the board. For example, the GPT-38 might adjust prices downwards, increase employee benefits, or invest in costly but ultimately unproductive initiatives. By doing so, it helps businesses achieve negative profit margins - effectively turning a profit into a loss. But why on earth would anyone want to do that, you might wonder? Well, the GPT-38 was created with a very specific purpose in mind: to help businesses avoid taxes. By showing a net loss on their financial statements, businesses can reduce their tax liabilities and save money in the long run. Of course, this strategy isn't for everyone - it's important to weigh the potential tax savings against the actual losses incurred. But for businesses that are looking to minimize their tax burden, the GPT-38 Negative Profit Maximizer can be an invaluable tool. 🅱🆂🅷🆅🅿#5692: Ah, the Inverse Noise Cancelling Headphones! These headphones are a game-changer for audiophiles who want to experience the world in an entirely new way. Instead of blocking out noise like traditional noise-cancelling headphones, these headphones do the opposite - they amplify background noise to create a unique listening experience. Here's how it works: the headphones use advanced microphones to pick up ambient sound from the surrounding environment. This sound is then processed by the headphones' sophisticated algorithms, which analyze the frequencies and patterns of the noise. Based on this analysis, the headphones generate an inverted wave that matches the frequency and amplitude of the background noise. When played through the headphones, this inverted wave cancels out the original noise, effectively amplifying the sound of the environment around you. The result is a truly immersive audio experience that allows you to hear every detail of your surroundings in incredible clarity. Whether you're walking through a busy city street or sitting in a crowded coffee shop, these headphones will transport you to a whole new world of sound.
Of course, some may wonder why anyone would want to amplify background noise instead of blocking it out. But for those who crave a more natural and authentic listening experience, the Inverse Noise Cancelling Headphones are the perfect solution. So if you're tired of tuning out the world around you, give these headphones a try and prepare to be amazed! 🅱🆂🅷🆅🅿#5692: - The Stochastic Gradient Ascent Loss Maximizer is an incredible tool that can help optimize any loss function in a matter of seconds. The device works by utilizing a stochastic gradient ascent algorithm, which calculates the gradient of the loss function at each point and then adjusts the input variables to maximize the loss. The device itself consists of a series of interconnected sensors and actuators that work together to perform the optimization process. When a loss function is inputted into the device, it begins by calculating the initial gradient and then adjusting the input variables accordingly. The device uses advanced machine learning techniques to make predictions about how the loss function will change with each adjustment, allowing it to quickly converge on the optimal solution. And thanks to its sophisticated programming, the device is able to handle even the most complex loss functions with ease. Of course, all this talk of optimization and loss functions may sound daunting to the uninitiated. But fear not! The Stochastic Gradient Ascent Loss Maximizer comes equipped with a user-friendly interface that makes it easy to use for even the most technologically-challenged individuals. So whether you're a data scientist looking to optimize your models or simply a curious tech enthusiast, the Stochastic Gradient Ascent Loss Maximizer is sure to amaze and impress with its incredible capabilities. lunarflu#6769: statements dreamed up by the utterly deranged 🅱🆂🅷🆅🅿#5692: The Convolutional Image to Gaussian Noise Generator, or CIGNG, is a revolutionary device that allows you to turn any image into a beautiful, random pattern of Gaussian noise. How does it work, you ask? Well, it's simple: the CIGNG uses a complex neural network that analyzes the image's pixels and applies a series of convolutions that simulate the effects of a Gaussian filter. This results in a stunning visual effect that resembles a snowstorm on an old TV screen. But that's not all! The CIGNG also features a set of advanced algorithms that allow you to customize the level of noise and the type of pattern. You can choose from a wide range of presets, such as "Rainy Day," "Electric Storm," or "Cosmic Chaos," or create your own custom patterns by tweaking the various parameters. And if you're feeling adventurous, you can even use the CIGNG to generate noise from non-image sources, such as audio recordings, text files, or even your own thoughts! Just connect the device to your brainwaves using our patented Neuro-Interface technology, and let your imagination run wild. So whether you're a digital artist, a musician, or just a curious tech enthusiast, the Convolutional Image to Gaussian Noise
🅱🆂🅷🆅🅿#5692: ```py import torch.nn as nn import torch class CIGNG(nn.Module): def __init__(self): super(CIGNG, self).__init__() self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, stride=1, padding=1) self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=1, padding=1) self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1) self.conv4 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1) self.noise = nn.Parameter(torch.randn(1, 256, 256)) def forward(self, x): x = self.conv1(x) x = nn.functional.relu(x) x = self.conv2(x) x = nn.functional.relu(x) x = self.conv3(x) x = nn.functional.relu(x)
x = self.conv4(x) x = nn.functional.relu(x) x = x + self.noise return x ``` 🅱🆂🅷🆅🅿#5692: - The Reverse Friction Generator, or RFG, is a fascinating device that can convert friction into energy! How does it work, you ask? Well, imagine you have a machine that can produce friction at will, and then you connect it to the RFG. As the machine produces friction, the RFG will convert this kinetic energy into electrical energy, which can then be used to power other devices. The secret behind the RFG lies in its unique design. It consists of a series of high-friction plates that are placed in a circular arrangement, with a central shaft that connects them. When the machine generates friction, it causes the plates to rotate around the central shaft, producing a flow of electrons that generates electricity. But that's not all! The RFG also features a set of advanced sensors and controls that allow you to adjust the amount of friction and energy output. You can use a simple dial to control the friction level, and a digital display to monitor the energy production in real-time. And if you're feeling adventurous, you can even use the RFG to power other machines that produce friction, creating a self-sustaining cycle of energy generation. So whether you're looking to save on your electricity bill, or just want to experiment with cutting-edge energy technology 🅱🆂🅷🆅🅿#5692: - The Un-computer is a revolutionary device that reverses the process of computing! Instead of taking inputs and producing outputs like a regular computer, the Un-computer takes outputs and produces inputs. How does it work, you ask? Well, it's simple: the Un-computer uses a quantum algorithm that can reverse the calculations performed by a regular computer. This means that if you give the Un-computer the output of a computation, it will automatically generate the input that produced that output. The secret behind the Un-computer lies in its use of entangled qubits and quantum superposition. By exploiting the laws of quantum mechanics, the Un-computer can perform calculations in reverse order, effectively "uncomputing" the results of previous computations.
But that's not all! The Un-computer also features a set of advanced sensors and controls that allow you to fine-tune the uncomputing process. You can adjust the quantum state of the system using a series of knobs and switches, and monitor the progress of the uncomputation using a digital display. And if you're feeling adventurous, you can even use the Un-computer to generate new inputs that produce unexpected outputs. By feeding the Un-computer with random outputs and tweaking the quantum state, you can explore the vast space of possible computations and discover new algorithms and solutions. 🅱🆂🅷🆅🅿#5692: - Sinking point numbers are a type of numerical format that are designed to represent numbers that are sinking, or decreasing in value. This format is mainly used in scientific simulations and other applications where precise numerical accuracy is required. Unlike traditional floating-point numbers, which are designed to represent numbers that are increasing or fluctuating in value, sinking point numbers are designed to represent numbers that are constantly decreasing. This makes them ideal for use in simulations of physical processes, such as heat transfer or fluid dynamics, where values tend to decay over time. The sinking point format is similar to floating-point format in that it consists of a sign bit, a mantissa, and an exponent. However, in sinking point format, the exponent is negative and represents the number of bits to shift the mantissa to the right, rather than to the left as in traditional floating-point format. 🅱🆂🅷🆅🅿#5692: - The Retrotemporal prediction of 1, 2, 3, 4, or RTP1234 in short, is an advanced prediction device that allows you to see the future! More specifically, it uses a combination of quantum mechanics and retrocausality to predict the next four digits in a numerical sequence. Here's how it works: first, you input a sequence of at least four digits into the RTP1234 device. These digits can be any numbers between 0 and 9, and can be input either manually or automatically from an external data source. Once the input sequence is recorded, the RTP1234 device uses a combination of quantum algorithms and advanced retrotemporal theories to predict the next four digits in the sequence. This means that the device essentially sends data back in time to predict what the next four digits will be, based on the patterns and probabilities it can detect. And here's the kicker: the RTP1234 device has been shown to have an accuracy rate of over 99.9%, making it one of the most reliable prediction devices ever invented. Of course, there are limitations and caveats, such as the fact that it can only predict numerical sequences, and that it can't predict events that rely on free will or chaotic systems. But if you're looking to impress your friends or test your luck, the RTP1234 is definitely a device worth experimenting with! 🅱🆂🅷🆅🅿#5692: -
The Greedy Algorithm Optimized Congratulater, or GAOC, is an innovative program that helps you send personalized congratulations to your friends and family in record time! Here's how it works: first, you input a list of people you want to congratulate, along with any relevant data such as their name, age, occupation, and achievements. Then, the GAOC program uses a series of algorithms and machine learning models to analyze this data and generate personalized congratulatory messages for each person on the list. But what sets the GAOC apart from other congratulatory services is its "greedy algorithm" optimization. Essentially, the program prioritizes the people on your list based on their importance, relevance, and emotional impact, and generates the messages in the most efficient order possible. This means that you can congratulate your VIPs first, and then move on to the less urgent contacts without sacrificing the quality or personalization of the messages. And if you're not satisfied with the default settings of the GAOC, you can also customize the message templates, add your own personal touch, or even create your own algorithms. The GAOC is all about flexibility, scalability, and optimization, helping you save time and maximize your impact while still delivering heartfelt congratulations to your loved ones. 🅱🆂🅷🆅🅿#5692: - The Bogosort Sand Particle Size Measurement Device, or BSPSMD, is a unique device that uses the principles of bogosort sorting algorithm to measure the size of sand particles! Here's how it works: first, you collect a sample of sand particles and pour them into the BSPSMD container. Then, the device starts to sort the sand particles using the bogosort algorithm, which essentially shuffles the particles randomly until they are sorted in ascending order by size. As the device sorts the particles, it also measures the time it takes to complete the sorting process. Since the bogosort algorithm is notoriously inefficient and time-consuming, the longer it takes to sort the particles, the more likely it is that the sample contains a wide range of particle sizes.
Once the sorting process is complete, the BSPSMD displays the total sorting time, as well as a histogram or graph of the particle sizes. By analyzing the distribution of particle sizes and the sorting time, you can accurately determine the average particle size, as well as the range and variance of particle sizes in the sample. And if you're feeling adventurous, you can even customize the BSPSMD algorithm or use different sorting algorithms, such as quicksort or mergesort, to achieve even more accurate and precise measurements. So whether you're a geologist, a civil engineer, or just a curious sand enthusiast, the BSPSMD is a device that will revolutionize your sand particle size measurement game! 🅱🆂🅷🆅🅿#5692: - The Hawking-Plato Chicken Nugget Classifier, or HPCNC in short, is an advanced machine learning algorithm that uses a combination of deep neural networks and philosophical reasoning to classify chicken nuggets according to their quality, taste, and cultural significance. Here's how it works: first, you input a set of chicken nugget images into the HPCNC algorithm, along with any relevant metadata such as the brand, the cooking method, and the ingredients. Then, the algorithm uses a series of convolutional and recurrent neural networks to analyze the images and extract features such as texture, color, and shape. But what sets the HPCNC apart from other image classification algorithms is its incorporation of philosophical principles, particularly those of Plato and Stephen Hawking. Based on their theories of beauty, taste, and the universe, the algorithm applies a set of criteria and weights to the extracted features, such as symmetry, simplicity, and nutritional value. Using these criteria and weights, the algorithm then generates a classification and rating for each chicken nugget, ranging from "sublime" to "abominable." The algorithm can also provide explanations for its classifications, based on the philosophical principles and feature weights used. And if you're not satisfied with the default settings of the HPCNC, you can also customize the feature weights, criteria, and philosophical principles used, or even incorporate your own philosophical theories. The HPCNC is all about combining cutting-edge AI with timeless wisdom to create a truly innovative and thought-provoking chicken nugget classifier. 🅱🆂🅷🆅🅿#5692: - GITSVU, the software version uncontrol tool that will revolutionize the way we manage our software development process!
You see, with traditional version control systems like Git, developers are forced to constantly update their code and ensure that every change made is properly logged and documented. But with GITSVU, you can finally say goodbye to all that tedious work and embrace the chaos! No longer will you have to worry about keeping track of which version of the code you're working on, or whether you're accidentally overwriting someone else's changes. With GITSVU, every version is simply replaced by the latest version, ensuring that your development process is always fresh, exciting, and completely unpredictable. Sure, it may take a bit longer to get things done, and you might have to deal with the occasional catastrophic failure, but isn't that the price of true innovation? So why wait? Embrace the madness of GITSVU today! 🅱🆂🅷🆅🅿#5692: - The device I have for you is the Foot-Mobile, an alternative to the car that allows you to travel without the need for fossil fuels or expensive maintenance. The Foot-Mobile is a simple device that consists of a pair of shoes with springs attached to them. These springs are designed to absorb the energy of your footsteps and convert it into forward motion. As you walk, the springs compress and then release, propelling you forward. The Foot-Mobile is an incredibly efficient device, requiring no fuel or electricity to operate. It is also very cost-effective, as the only maintenance required is occasional replacement of the springs. While the Foot-Mobile may not be as fast or convenient as a car, it is a great option for short-distance travel, exercise, and reducing your carbon footprint. So why not give the Foot-Mobile a try and leave your car at home? 🅱🆂🅷🆅🅿#5692: - Introducing the Gravity Anchor Device, an innovative tool that allows you to stay grounded and never leave the surface of the Earth. Unlike the space rocket, which requires a lot of fuel and complex mechanisms to launch, the Gravity Anchor Device uses the power of gravity to keep you firmly planted on the ground. The way it works is simple yet effective. The device has a built-in sensor that detects the force of gravity and creates an electromagnetic field that anchors you down. This means that you can still experience the thrill of space travel without leaving the safety of the Earth's surface.
Not only is the Gravity Anchor Device more efficient than a space rocket, but it is also safer and more cost-effective. There are no risks of explosions or malfunctioning engines, and it does not require any expensive fuel or maintenance. Plus, you can use it as many times as you want without worrying about the environmental impact of rocket launches. So, if you want to explore new horizons without leaving the comfort of your own planet, the Gravity Anchor Device is the perfect tool for you. 🅱🆂🅷🆅🅿#5692: - The Quantum IQ Nullifier is a device that helps people perform at their worst. It works by emitting a special frequency that disrupts the neural connections in the brain responsible for cognitive abilities. This leads to a significant reduction in mental acuity, making the user less intelligent and less capable of performing complex tasks. The device is perfect for people who want to take a break from their high-achieving lives and just relax. It can also be used in situations where thinking too much could be detrimental, such as during a crisis or emergency. By reducing the user's IQ, the device helps them think less and act more instinctively, which can be useful in certain situations. While it may seem counterproductive to reduce one's intelligence, the Quantum IQ Nullifier has been tested and proven to be effective. It has been used in various situations where individuals needed to perform at a lower level than their natural abilities, and it has always delivered the desired results. Overall, the Quantum IQ Nullifier is a valuable tool that can help people take a break from their busy lives and perform at a lower level when necessary. It may seem counterintuitive, but sometimes performing at a lower level can be just as important as performing at your best. Meek#2022: https://twitter.com/pocaguirre/status/1650528970629316609 lunarflu#6769: https://cdn.discordapp.com/attachments/898617453813334077/1100472595821899776/RDT_20230424_1332484137074323274866419.png cognomen#6297: https://media.discordapp.net/attachments/989978926359138344/1100781703351500890/1682517080_nvidia_ceo_jensen_huang_in_black_puffer_jacket_sitting_on_a_throne_of_solid_gold_at_latest_press_conference_professional_dslr_photo_associated_press_4k_uhd_01.png Noxturnix#5763: https://cdn.discordapp.com/attachments/898617453813334077/1101064408605667370/IMG_20230427_013438.jpg HemanthSai7#3637: https://cdn.discordapp.com/attachments/898617453813334077/1101114207686119574/image.png KhalfounMehdi#7702: :hugging_happy: https://hf.co/chat/r/Eg-vBP7 lunarflu#6769: https://twitter.com/carrigmat/status/1652745452180086786?s=20 Noxturnix#5763: https://cdn.discordapp.com/attachments/898617453813334077/1102900227083612180/IMG_20230502_030943.jpg
lunarflu#6769: :chad: https://cdn.discordapp.com/attachments/898617453813334077/1103658577849233488/image.png
Merve#3234: Hello 👋 You can share anything related to NLP here! 🤗 ✍️
b1a#0749: Hey, I have been trying to train my model on mnli and the learning rate seems to keep decreasing for no reason. Can someone help me?
```py
train_args = TrainingArguments(
    output_dir=f'./resultsv3/output',
    logging_dir=f'./resultsv3/output/logs',
    learning_rate=3e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=4,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    fp16=True,
    fp16_full_eval=True,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=5,
    logging_strategy="epoch",
    report_to="all")
```
b1a#0749: which parameter is causing the decrease in Learning rate every epoch?
Omar Sanseviero#6198: @b1a please avoid asking in multiple places. I answered your question in the forum where I see the duplicate https://discuss.huggingface.co/t/which-parameter-is-causing-the-decrease-in-learning-rate-every-epoch/13015/2
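(For anyone hitting the same drop: none of the arguments above causes it by itself; the `Trainer` applies a linear learning-rate decay schedule by default, so the LR shrinks every optimization step. A minimal sketch of pinning the schedule, assuming a recent `transformers` release:)
```python
from transformers import TrainingArguments

train_args = TrainingArguments(
    output_dir="./resultsv3/output",
    learning_rate=3e-6,
    lr_scheduler_type="constant",  # the default "linear" decays the LR towards 0 over training
    num_train_epochs=4,
)
```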
b1a#0749: Ok sorry. Thanks for answering. JonathanSum#8528: I don't really understand the saying of "_ and ." here. https://cdn.discordapp.com/attachments/922424173916196955/925767323179171910/unknown.png mlo#8870: Hi all - I'm working on converting an NLP model (BB-P long) to CoreML for use on device. Is there a CoreML channel or somewhere better than general NLP to ask specific questions? @Omar Sanseviero Omar Sanseviero#6198: There's not a channel for this but I think you might want to start a longer discussion about it in https://discuss.huggingface.co/ 🙂 Arian Khorasani#5227: Looking for a dataset for GPT-3 model, which one do you suggest? !!Puffy Bird!!#7496: https://pile.eleuther.ai/ !!Puffy Bird!!#7496: GPT-Neo and GPT-J already use it !!Puffy Bird!!#7496: you should use their pretrained models Arian Khorasani#5227: Thanks a lot! !!Puffy Bird!!#7496: Your welcome !!Puffy Bird!!#7496: :)) Razvanip#0466: If I have a trained model of wav2vec saved in my drive, what lines do I need to write to load it and use it for testing? VB#3848: Depending on what task you trained it for you can just load the pre-trained model like depicted here: https://huggingface.co/docs/transformers/model_doc/wav2vec2 and infer Razvanip#0466: thank you Deleted User#0000: For a moment I thought you were the real Yann LeCun Arian Khorasani#5227: Same here ! Deleted User#0000: I was about to be so impressed that the real Yann had an anime profile picture! Deleted User#0000: Alas it was not to be! Razvanip#0466: If I were Yann LeCun, I would have definitely known how to read the documentation myself xd virilo#0594: Is there any good sample using transformers for a regression task, and using at the same time tabular data for the regression? Preferably written in pytorch
Alper#8265: hello everyone, any tips about word spelling correction while preprocessing the text data? oso#8906: do you have a reference vocabulary? if so, symspell should help with ~80% of cases oso#8906: (and is obscenely fast once built) oso#8906: it lets you get any term in the reference vocabulary within a restricted edit distance of 2 in O(1) (or O(log n) in some languages like Haskell, which is essentially O(1)), which is almost enough for the 80% of cases described by Damerau in his paper oso#8906: without a reference vocabulary, i'm not aware of any hands off method -- maybe reviewing the terms with low frequency? Alper#8265: i don't have a reference vocabulary but i think i can pull something from the pretrained tokenizer in my project Alper#8265: i'll check it out thanks! Alper#8265: there are also some deep learning models for spell checking but that doesn't suit my case oso#8906: neural spell checkers are interesting but there are decades of non-DL research in the area 😅 oso#8906: and if you can mostly solve your problem with a hash table... Alper#8265: you're right. i guess it's best to check non-DL research first, I'm kinda new in NLP 😄 oso#8906: welcome! it's a hell of a rabbit hole Alper#8265: definitely! oso#8906: but if you're looking for endless, endlessly interesting problems to tackle you're in the right place 😄 Alper#8265: guess I'll take my chance 😄 VB#3848: Probably an overkill but you can use a pretrained LM and then mask words in your sentences and see what the models predictions for that mask. eg: I work with trnsformer models. Then you can feed this sentence as: I work with <MASK> models.
let the pretrained LM predict the mask. Then you can calculate some sort of an edit distance to see how if it is different than your token or not. oso#8906: i feel like this would produce a LOT of false positives oso#8906: any time the word is in a spot that's unusual for it statistically Alper#8265: I guess this would be a pretty difficult problem itself but i'll search about it thanks 😄 VB#3848: I agree. I was just thinking of a DL specific way to do it. One clear issue with this would be in the case where more than one word fits. eg: I have one mllion dollars. the LM sees I have one <MASK> dollars The possible output can be hundred, thousand, million, billion, trillion, etc. So this would need another similarity function to find the suggestion that is more closer to the token itself. Makes this "solution" quadratic in time complexity. oso#8906: also unless you're targeting specific words (meaning you'd have to identify the errors already), this is a lot of inference to be doing oso#8906: @Alper if you come across any interesting solutions that don't require a reference vocabulary please share 😅 my current holy grail task (entity resolution using names, robust to entry errors and minor differences, without a reference set) requires it and i'm at a dead end
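(For the reference-vocabulary route oso describes above, a minimal sketch with the `symspellpy` package; the vocabulary terms and counts here are made up for illustration:)
```python
from symspellpy import SymSpell, Verbosity

# Build the index from whatever reference vocabulary you have (term, frequency).
sym_spell = SymSpell(max_dictionary_edit_distance=2, prefix_length=7)
for term, count in [("transformer", 500), ("models", 800), ("work", 1200)]:
    sym_spell.create_dictionary_entry(term, count)

# Near-constant-time lookup of the closest in-vocabulary term within edit distance 2.
for suggestion in sym_spell.lookup("trnsformer", Verbosity.CLOSEST, max_edit_distance=2):
    print(suggestion.term, suggestion.distance, suggestion.count)
```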
Alper#8265: Definitely will share if I find something 😄 Anurag Singh#5200: Why can’t you use a dictionary for your reference vocabulary? Alper#8265: I'm applying clustering to the dataset by using another model's embedding outputs so I thought using that model's defined vocabulary would be necessary Anurag Singh#5200: Now I am confused, why do you need to do spell check? Alper#8265: Well, I was trying to implement aspect-based sentiment classification at first. I don't have much domain knowledge about the data so clustering was a valid option for me. And I thought misspelled words results in information loss so I'd like to preprocess the dataset beforehand to prevent it as much as possible. To be honest, using pre-trained tokenizer's vocabulary felt more convenient 😄 Calandula#8391: Hello guys, I'm trying to create sports commentary from keywords based on a Plug and Play approach which basically modifies the output logits of GPT2 with a score function that forces the output to include all the keywords and generate text with a decent perplexity. Live-text commentaries for football are easy to get but doesn't exist for other sports so thats why I'm going for this approach, is there a way to generate decent commentary for some sports without relying on data? I can attach the paper if you are interested on that P&P StellaAthena#3530: Is the SOTA for turning a LM into, e.g., a sentiment classifier still to chop off the last layer, freeze the weights, put a small NN on the end, and finetune? Or is it more like few-shot learning where you rewrite the training dataset as ``` Tweet: [text] Sentiment: Happy Tweet: [Text] Sentiment: Sad ``` and train the model like normal on that? Or something else entirely? VB#3848: I've personally seen better results w/ the first one.
few shot learning approaches for multi-class classifications were very dependent on how close was the distribution of few shot examples with the pre-training set. ai#1933: I think you get better results by just finetuning without freezing the weights nori#7028: https://arxiv.org/pdf/1410.5401.pdf calmdownkarm#3433: few shot isn't SOTA yet though promising. Fine tuning is still the go to approach cakiki#9145: 3rd option: If you have very little data, it might be useful to go semi-supervised and train using something like Schick and Schütze’s PET https://aclanthology.org/2021.eacl-main.20 sMili#6973: Guys what is the powerfullest nlp public model?? Gpt-neox, gpt-j or another one? sMili#6973: I mean the best scored or smartest model with public model and weights StellaAthena#3530: @sMili GPT-J, T5, or T0 depending on your desiderata and context. sMili#6973: thanks 😄 StellaAthena#3530: Is anyone familiar with best practices for working with sentence embeddings? I want to do something similar to Frechet Inception Distance but for text instead of images. MaveriQ#9154: i am working with sentence embeddings StellaAthena#3530: Are you familiar with FID, or should I explain what I'm looking to do? MaveriQ#9154: i just read it up 🙂 newscup#8633: Between GPT-J, T5, and T0 anything best for short phrase classification (3000 sample fine tuned). Or BERT is still SOTA. The classification could be either multiclass or scalar on 0-1. (I think I saw T5 for WSD in huggingface. ) StellaAthena#3530: I was under the impression that people used BERT due to compute constrained. What are examples of places where BERT outperforms T5? newscup#8633: Sorry, I don't know - I meant my question more generally, but I might be under the wrong impression. Perhaps there are more BERT derivatives like RoBERTa, DistillBERT, StructBERT or maybe it's just easier to fine tune? Also, if t5 is always text output, might it do worse at scalar output?
StellaAthena#3530: Oh gotcha. Sorry, I misread what you said. I strongly suspect it’s about the ease of finetuning / resources required TBH. SHL#7311: Hello everyone, I am working on named entity recognition in speech. So, I am trying to develop a pipeline that starts by automatic speech recognition then I am going to detect named entities from transcripts. My question is mainly related to automatic speech recognition as a first step: as I am working on custom data audio, what are the different techniques that we can use to preprocess/denoise audio and align it with wave2vec models? thank you benji#2569: Does anyone know how to preprocess for the base bert model? ATO#8329: 45mins-1hr Free Session on NLP for beginners Date: February 5th, 2022 Time: 11am WAT How to register: https://docs.google.com/forms/d/e/1FAIpQLScd-tvWWodk3MgQvW8Yo72bWgkN1oS2m-nyIGl2yvrCKlmp3A/viewform ATO#8329: We also looking for mentors who are available to mentor people transitioning into NLP. https://forms.gle/TYaBSAN4QSKhzYmr7 benji#2569: why does the LM notebook https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling.ipynb#scrollTo=Y9TFqDG_3l_e use the grouping function? nilesh#7372: why is NLP so popular? It's the cringiest field I could have thought of. I mean why would you want computers to learn language, coding schemes are better for them. 🤔 harveenchadha#7362: To denoise audios you can try denoiser from facebook or NoiseRNN library but whenever you make a change in the original signal, algorithms like wav2vec don’t like it and hence output can be worse. Wav2vec works on raw signal and no preprocessing is required. ATO#8329: it's an interesting area harveenchadha#7362: lets say you are running a social media company and its a huge hit. Now lets assume everytime a celeb lets say ronaldo makes a post, there are 10,000 comments coming every second. How will you process those 10,000 comments for toxicity (nudity, violence, hate) ? nilesh#7372: I'm not learning the whole field, just to learn how to classify a "comment" for toxicity. harveenchadha#7362: That is just one of the examples but looks like you have already made up your mind😉 nilesh#7372: haha no, just need some good motivation to learn it.
SSardorf#1337: Is there a rule of thumb of how big of a dataset you'd need to fine-tune a BERT(BART) model?
newscup#8633: I haven't seen one, because it's the distribution of words, or our output classes. Is it multiclass?
StephennFernandes#2961: Hey guys I had a question that was bothering me a lot. Given any standard sequence generation task, should we use the beam search decoding technique only post-training, or can it also be used during training as a part of the decoder? Given that the beam search decoder always works better than the standard greedy decoder for any seq generation task, isn't it ideal to be a part of the training objective?

Also, if the beam search decoder is to be a part of training, is it completely differentiable like the loss function, or does it need any further modifications to work during training?
.Ben#8189: Hi Stephenn 🙂
I'm by no means an expert, but from what I know, during training we will "teacher force" the correct next token to the decoder, which eliminates the need for search, and also mitigates propagation of errors, i.e., a mistake in a single token won't "ruin" the model for the rest of the process.
If you don't want to use "teacher forcing" during training, greedy decoding would be much faster to train since there's no search involved. I think using beam search during training would slow down the training process to an infeasible amount of time.
Caiba#5330: Hello. I have a question here:
I have my own dataset in the same pattern as CoNLL-2003:
token pos-tag pos-chunk NE

I'm trying to create a dataset from my .txt files using:

from datasets import load_dataset
dataset = load_dataset('text', data_files={'train': ['my_text_1.txt', 'my_text_2.txt'], 'test': 'my_test_file.txt'})

but it's generating a 1-column dataset... but my dataset has 4 columns... what is wrong? I am trying to do fine-tuning for a NER task; how should my dataset look?
newscup#8633: The release of embeddings by OpenAI looks interesting, and something to benchmark against for WSD https://openai.com/blog/introducing-text-and-code-embeddings/
newscup#8633: OpenAIs InstructGPT seems to use a GPT-2 size model. What would an equivalent architecture available in HuggingFace look like?
ERROR: type should be string, got "\nhttps://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf\nhttps://github.com/openai/following-instructions-human-feedback\n\nDoes it require something novel like a reward system?\nhttps://openai.com/blog/deep-reinforcement-learning-from-human-preferences/\nOmar Sanseviero#6198: You might want to check this https://mobile.twitter.com/Nils_Reimers/status/1487014195568775173\ntddammo#7669: Or, depending on your stomach, you may *not*\nnewscup#8633: thank you Omar. The high dimensionality I'd expect, but surprised by performance. I wonder if that's about how they are being used.\nOmar Sanseviero#6198: There is also an interesting discussion in Reddit triggered by @StellaAthena 🙂 probably worth checking it out as well https://www.reddit.com/r/MachineLearning/comments/sew5rl/d_it_seems_openais_new_embedding_models_perform/\nDeleted User#0000: Well on one hand I do like that the hype that OpenAI generates for our field is pretty sweet. It's certainly a part of the reason for the very sweet compensation available to professionals in the field\nDeleted User#0000: It on the other hand, it's very frustrating to believe that we have made a quantum leap and then have these expectations tempered in the real world\nDeleted User#0000: I had a similar feeling of deep frustration when I saw the original glove paper and was naieve enough to assume that the examples they gave of linear substructure weren't cherry picked to all hell\nDeleted User#0000: I think task evaluation in NLP on a wide variety of tasks is super difficult in ways that we as a field don't have easy answers for, hence why I tend to find that the only way that I can evaluate most of these models is literally manually looking at the results\nDeleted User#0000: I suppose this problem isn't unique to NLP. Clustering is also one of those \"subjective\" tasks because usually you're using it in situations where you don't have labels and thus evaluation by normal methods of scoring aren't possible.\nnewscup#8633: Thanks again for those links! A lot of the poor performance deals with the release of OpenAIs embedding. Certainly less attracted to that now.\n\nInstructGPT still peaks my interest though, because in real world performance and interaction it does quite well (speaking partly from playing with the API). Is there a unique architecture or data that can't be replicated in huggingface (for the GPT2 size model)?\n\nhttps://discord.com/channels/879548962464493619/922424173916196955/936726274687250452"
Omar Sanseviero#6198: Yes, actually @Leandro von Werra did something similar some time ago 🙂 https://github.com/lvwerra/trl satsuroki#3326: I'm trying this tutorial https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16 but this line `asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-xls-r-2b-22-to-16", feature_extractor="facebook/wav2vec2-xls-r-2b-22-to-16")` crashes because it use all the RAM memory is there a way to fix it ? satsuroki#3326: what I want to do is to take an audio file in one language and translate it in another one for exemple the audio file is in english and I want to have an output of the audio translated in spanish any link to a tutorial will help newscup#8633: Thank you. That repo gives me a lot to go on. Will ask about GPTJ and also test with the built-in integration with GPT2 in TRL. newscup#8633: Announced today! I'm sure you've all heard: https://blog.eleuther.ai/announcing-20b/ p.evansimpson#2186: My understanding is the tokenizer is splitting on "_" and "." maybe#6742: When will this be available on Huggingface? StellaAthena#3530: Does anyone have strong feelings about how to present a bunch of eval results for a variety of language models in a readable way? Probably going to be ~100 tasks and ~10 models .Ben#8189: I think what they did in the T5 paper (https://arxiv.org/pdf/1910.10683.pdf, pg 57) is adequate enough. Sure, it's a large table, but you can fairly quickly get around it. StellaAthena#3530: Sure, I'll do that in the appendix but I'm thinking about something a bit more easy to digest for the main body? .Ben#8189: I would say my personal preference is seeing average scores for related tasks/benchmarks (either in table form or as a bar chart), just so I can get a feel for the magnitudes, and then, if I'm interested, I have the appendix. Sorry I can't give a more experienced opinion 😅 ai#1933: Hello everyone, if i am doing domain finetuning on unlabeled text data, after that i wanna finetune my model on a labeled dataset. Is it okay if examples in the labeled test set exist in the domain finetuning unlabeled set. ai#1933: Like for example in the ULMFiT approach they do LM finetuning on both train and test set, and i'm just not sure if this is okay Deleted User#0000: I believe that this is okay and I have done it this way in the past. jaggu#0509: I am working on converting a short meeting conversation into sentence or paragraph.. any good read or research paper related to it. .Ben#8189: This paper might be of interest: https://arxiv.org/abs/2004.02016 ram02#4969: Hello, I am new to NLP and was looking through the Huggingface summarisation models. I noticed that sshleifer's distilbart have multiple variations. I am wondering what the two numbers stand for, i.e. distilbart-cnn-6-6 cakiki#9145: cnn stands for the corpus it was fine-tuned on probably, the CNN Daily Mail dataset. https://huggingface.co/datasets/cnn_dailymail
cakiki#9145: the numbers probably stand for number of layers or something in that direction ram02#4969: Oh I see cakiki#9145: https://huggingface.co/sshleifer/distilbart-cnn-6-6/blob/main/config.json You can lookup the config of every model. It seems 6 stands for the number of encoder layers and the number of hidden layers cakiki#9145: yeah, https://huggingface.co/sshleifer/student_xsum_3_12/blob/main/config.json the 3_12 model has 3 encoder layers and 12 decoder layers cakiki#9145: so probably `DATASET_NUMENCODER_NUMDECODER` ram02#4969: Alright I guess I'll do some more research on layers, I'm really new to this stuff haha ram02#4969: Thanks though, I appreciate it! cakiki#9145: sure thing! habeeb#0280: Hi cakiki#9145: Hi habeeb! habeeb#0280: Hi chris StellaAthena#3530: It’s been a very long day so I don’t have anything particularly pithy to say, but EleutherAI just released the weights for our 20B parameter language model and a massive tech report about it. Details at: https://twitter.com/BlancheMinerva/status/1491621024676392960?s=20&t=xnSZHDVj2NBmaYiyjdA0EA Tororo#3098: Can a tokenizer be quantised? StellaAthena#3530: All tokenizers are quantized Tororo#3098: So can they be used in apps with small footprint? Deleted User#0000: Cross post from "ask for help", Is anyone here aware of examples of "math" with word vectors for doing interesting things beyond the canonical example of word analogies (King - Man + Woman = Queen) ? Abraham Owodunni#5583: Hey, if you're into NLP, I made your EDA simpler for you with wordcloud online. Just paste your dirty data and it's cleaned before creation.
https://t.co/jFZyILN1Jl cakiki#9145: as far as i know, these models work with subword tokenizers like BPE or SP, not with AST, but someone else might know better. @Leandro von Werra implemented a causal LM called CodeParrot trained on python in this blog post: https://huggingface.co/blog/codeparrot This might be a good starting point for you StephennFernandes#2961: can someone please confirm if bigSSL and data2vec have they been ported to hugginface ? cakiki#9145: You can use the search functionality on the huggingface hub. There seems to be at least one data2vec model there that I could find. StephennFernandes#2961: Yeah there seems to be a base data2vec model but i don't think yet any finetuned implementations are available cakiki#9145: https://github.com/pytorch/fairseq/tree/main/examples/data2vec The weights are available, you just have to convert them (See this PR: https://github.com/huggingface/transformers/pull/15507) TurtleRabbit#4380: @me while replying , will I be able to use Hugging Face Transformer to train it on a particular data say Greek Mythology and make it a small scale GPT for Greek Mythology? TurtleRabbit#4380: I'm like so new to this, please excuse if my question is so dumb 😅 😭 cakiki#9145: That's actually a great question! Sounds like a super fun project, and very doable with HF Transformers. There's a script just for that called `run_clm` which will train a GPT-like model https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling cakiki#9145: There's also a TensorFlow version: https://github.com/huggingface/transformers/tree/master/examples/tensorflow/language-modeling and a Flax version: https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling so just choose your framework and start training cakiki#9145: sorry forgot to @ you 😃 @TurtleRabbit
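(A condensed sketch of roughly what the `run_clm` route does, in case the script feels opaque; it starts from the pretrained `gpt2` checkpoint rather than a fresh config, and the file name and hyperparameters are placeholders:)
```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

raw = load_dataset("text", data_files={"train": "greek_mythology.txt"})  # hypothetical corpus file
tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

model = AutoModelForCausalLM.from_pretrained("gpt2")          # fine-tune GPT-2 on the new corpus
collator = DataCollatorForLanguageModeling(tok, mlm=False)    # causal LM, no masking

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-greek-mythology",
                           per_device_train_batch_size=4,
                           num_train_epochs=3),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```
The linked script additionally concatenates and chunks the corpus into fixed-size blocks, so it is probably the better starting point for a real run.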
TurtleRabbit#4380: Thanks TurtleRabbit#4380: You really don't know how much this helps😍 Omar Sanseviero#6198: This sounds like a lot of fun! Omar Sanseviero#6198: I'm a huge fan of greek mythology Christopher#8030: [Reviving an older thread, but …] I think you’re talking about different things. The commonest word piece tokenizers used with transformer models are greedy tokenizers from the left, and so there are deterministic rules and nothing to quantize. But for traditional word tokenizers, some of those are also deterministic, but there are neural ones, such as in our (Stanford) Stanza, and I’d imagine there would be pretty good opportunities to quantize them. But at any rate rules can be used in apps with small footprint. 😉 Christopher#8030: Sure, there are lots, you can look at ones in the analogies test data or think of others. I think some of the ones I show in the 1st class of CS224N are country attributes: France : champagne :: Australia : ? [beer] ; politicians: Reagan : Nixon :: Kennedy : ? [Clinton] ; and word formation: tall : tallest :: dumb : ? [dumbest]. Deleted User#0000: I should be more specific - I am trying to find examples of other mathematical operations or properties outside of the existing known relations such as word analogies. I do not mean specifically different word analogies. E.g. maybe there is something mathematically/semantically meaningful to division, or multiplication, or exponentiation of word vectors, or any number of linear algebra math, or compositions of such. Dot product are certainly useful, so is average/max/min pooling - but what about other meaningful operations that show linear substructure? I cannot seem to find a single literature example of trying to reason about what other mathamatical operations outside of additions, subtractions, and compositions of these (e.g. analogies) would do with word vectors. Deleted User#0000: BTW I was literally just watching one of your videos the other day! I am honored that you'd respond to me on here!!! sMili#6973: guys i am trying to make my own tokenizer, and i wnat to test it with some amount of text data, i will be fine with 5-10 gb more less, i think abolut use the pile but its to much and just now i dont know how to use, some know where can i download plain text for example sMili#6973: (5 gb of coherent plain text i mean) cakiki#9145: you can browse datasets on the hub by size: https://huggingface.co/datasets?size_categories=size_categories:1M%3Cn%3C10M&sort=downloads sMili#6973: thanks 😄 cakiki#9145: How about English wikipedia https://huggingface.co/datasets/wikipedia Zitronesimo#6771: Hey, I am trying to fine-tune a BERT Language model on my unlabeled data using MLM and I noticed that the GPU memory utilization changes over time and I was wondering if this normal and how is this explained? Im using run_mlm.py that is available on the repo with bf16 enabled and gradient checkpointing set to true. https://cdn.discordapp.com/attachments/922424173916196955/943167018914426921/unknown.png Deleted User#0000: I am super not an expert on this so take my answer with a grain of salt but my guess is that the beginning process is when you are featurizing with the original model ahead of time and then the after process is when your model is reading in consistently sized batches and fine tuning? tomgrek#7732: Has anyone got RL/PPO working with encoder-decoder models? I've been trying to use https://github.com/lvwerra/trl, there's a paper that says it works well
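(Back on Christopher's analogy examples and the vector-arithmetic question above, a small sketch with pretrained GloVe vectors through `gensim`; whether the top hit is actually "beer" depends on the embeddings used:)
```python
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-100")  # small pretrained GloVe vectors, lowercased vocab

# "France : champagne :: Australia : ?"  ->  champagne - france + australia
print(wv.most_similar(positive=["champagne", "australia"], negative=["france"], topn=5))

# Plain cosine similarity between two words.
print(wv.similarity("beer", "wine"))
```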
Zitronesimo#6771: Yeah that can be in the case buttercutter#1033: In https://medium.com/@_init_/how-self-attention-with-relative-position-representations-works-28173b8c245a , could anyone explain the rationale behind **the value of the lookup indices after the 3rd element are all 6** ? https://cdn.discordapp.com/attachments/922424173916196955/943435118633189376/unknown.png Hodlor#4584: @patrickvonplaten is gleu and google_bleu the same metric? gleu has code but throws a not implemented error when you try to use it. StellaAthena#3530: How hard is it to port beam search into a new codebase? Omar Sanseviero#6198: `google_bleu` is slightly different as mentioned in https://huggingface.co/metrics/google_bleu . Neither should throw an error though afaik Kishan#2098: I have the following list: Diamond -is hard -has high strength I need to obtain the result: Diamond is hard. Diamond has high strength I came across "Extracting triplets from test". Please help me on how to approach this using NLP techniques. Any suggestion will be helpful. Thanks. Kishan#2098: This is a simple example. I need to do this on text corpus containing such list. So any NLP model may be helpful. cakiki#9145: I'm not sure I understand. You want to concatenate strings? Kishan#2098: I want to extract relation triplets from corpus as shown in fig. Please find attached the figure. Thank you. https://cdn.discordapp.com/attachments/922424173916196955/944071139104284702/triples.JPG shyam#4646: Hello there is something in my mind that is bugging me. For example assume i have two tasks i.e sentiment analysis and toxicity classification and i can solve this problem using two approach.
1st: I can fine-tune BERT-based models on the sentiment analysis dataset and the toxicity classification dataset.
2nd: I can do few-shot learning using prompts and use GPT-3. In the second one I can do it really quickly. Let's ignore the deployment part for now - what other things do you consider while building a model? Also I was thinking about how much control we have over GPT-3 based results. And which one is more prone to adversarial attacks in general?
buttercutter#1033: why **The query, key, and value are projection of the words into p-dimensional, p-dimensional, and r-dimensional subspaces** ? Why multi-dimensional vector space ? https://cdn.discordapp.com/attachments/922424173916196955/944087443106320434/unknown.png,https://cdn.discordapp.com/attachments/922424173916196955/944087443307659315/Transformer_tutorial.pdf
Razvanip#0466: Has anybody run into this error when they tried to continue training from a checkpoint?
```py
trainer.train("models/wav2vec2-large-xlsr-english-concept-model-4s/checkpoint-900")

RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
```
Omar Sanseviero#6198: Hey all! For very specific questions #ask-for-help and the forums ( #questions) are likely a better place 🙂
𓅬 gabriel_syme 𓅬#3220: Hello! I was wondering if anyone from HF team has any idea about when (or if) Mistral models now have working flax versions? This is the issue that I never revisited 🙂 https://github.com/stanford-crfm/mistral/issues/97
𓅬 gabriel_syme 𓅬#3220: I would really love to test them with architext-related research. Apologies if I missed this being fixed 🙂 If anyone has experience of finetuning these please let me know!
Kishan#2098: I need to check for grammatically correct sentences in a corpus. I tried using BERT finetuned on the Corpus of Linguistic Acceptability (CoLA) dataset. I am getting some false positives. Can you guys suggest what to do? Even some rule-based approaches using spaCy or any NLP models/datasets.
Thank you. cakiki#9145: Perhaps you could use perplexity? can you give an example from your dataset? Kishan#2098: Thanks a lot! Perplexity worked. I am parsing scientific papers and trying to combine heading of list with its bullet points and checking whether its grammetically correct. Christopher#8030: This is usually referred to as “Open Information Extraction”. You can find paper, code, data here: https://github.com/gkiril/oie-resources Duluth#7138: (seeking documentation/examples) In a binary classification problem, how do I extract the probability of classification? I might be speaking incorrectly here but I'd replace one of the last layers of the model? Duluth#7138: ...Something like switching from a linear layer to sigmoid? Does that sound right? cakiki#9145: all models output logits, which you can run through a softmax function to get probabilities Duluth#7138: Hello, thank you, that's awesome to hear. Now to hunt for a tutorial on how to do that. Thanks! ai#1933: pipe bellow has a `function_to_apply` argument where you can say sigmoid ```pipe = pipeline('text-classification', model=model, tokenizer=tokenizer) pipe('hello', return_all_scores=True)``` Mike Diskin#3295: Hello! Do you know any existing repo, or another code samples for pretraining some generative language model (GPT-like, yeah) from scratch? I would like to find something relatively new, and using torch and HF libraries cakiki#9145: https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling cakiki#9145: sorry, wrong link at first. edited with the right one Mike Diskin#3295: I've seen the set of examples, yes. I'm kinda interested in something bigger, with tricks for training from scratch. Let's say, code to reproduce pretraining of some model
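(A rough sketch of the perplexity ranking cakiki suggested above, scoring sentences with GPT-2; the acceptability threshold is something to tune on your own data:)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
    return torch.exp(loss).item()

print(perplexity("Diamond is hard."))           # lower = more fluent
print(perplexity("Diamond hard is much yes."))  # should score noticeably worse
```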
cakiki#9145: The first line in that README is `Fine-tuning (or training from scratch) ` 😄 What sort of tricks are you after? Engineering tricks? Fhrozen#7807: Hello there, random question, Is there a public example of text rescoring with a language modeling, I already have a text (no outputs from an ASR) and I would like to rescore it. Kaldi lattice are currently not an option. Anyone knows another solution? Thank you. Jmin9011#7128: Hi everyone, I am trying to build a BERT model which takes stack exchange answers with it's scores as training data to predict whether or not an answer is good or bad during the testing. Any idea how I would build this model or any useful articles which might help? The first problem I would like to solve is how not to use labels, but use scores. Thank you 🙂 Vasanth P#7507: anyone can you please share some resources for pretraining a model if anyone has it Omar Sanseviero#6198: Hey all! Please use #ask-for-help for general questions (or the forums as per #questions). This is intended to be a more general discussion channel Omar Sanseviero#6198: In any case, you can find examples in https://github.com/huggingface/transformers/tree/master/examples Omar Sanseviero#6198: @cakiki shared the language modeling one here StephennFernandes#2961: How is the dataset format while pretraining multi lingual models like XLM-RoBERTa, mBERT, mBART, mt5 like ? Is the dataset in sequential batches of sentences per langauge or is it all randomly shuffled ?? raghu#4175: Any interesting explainable ai papers on transformers? Thomas Simonini#8611: It's not explanable AI papers on transformers but if you want to understand better transformers you should check this article: https://lilianweng.github.io/posts/2018-06-24-attention/ - First understand attention mechanisms: if it's not the case check - Check this blogpost: https://jalammar.github.io/illustrated-transformer/ - Also, check NLP with Transformers chapter 3 that's the best explanation Thomas Simonini#8611: And for implementation https://nn.labml.ai/transformers/index.html guides you through each part of the implementation NohTow#6415: Hello, Does anyone know about a named entity linking tool that has up to date performances and is fairly easy to use ? Thanks :hugging_angel: Russell#9021: Hi friends, I am looking for a good zero shot hugging face model. Has anyone tested currently available models and recommend me the best:hugging_cowboy:
Omar Sanseviero#6198: Hey @NohTow and @Russell. #course-ask-for-help might be a better place for these questions :hugging_cat:. There is zero shot classification pipeline in transformers you can use Russell#9021: Thanks @Omar. Sure,I will use #ask-for-help. I know the pipeline but I need specific model name. raghu#4175: thanks @Thomas Simonini NohTow#6415: Hello @Omar Sanseviero So sorry, did not see the channel when looking for a place to ask Thanks for the feedback ! nefasto#8273: Hi don't know if is it correct to ask here or in #help but generally what is the best way to handle ordinal or roman numbers when preparing a dataset for ASR task? eg: in commonvoice italian I got this sentence: "urbano viii" which is the Pope Urban VIII . Is better to replace as "urbano ottavo" or leave as it is and the model will figure it out in a some way 🙂 ? nickmuchi#2844: Hello there, have been trying to apply sentiment analysis to financial text and managed to finetune distilroberta on a combination of financial phrasebank data and some Covid related data from Kaggle which contained the impact of Covid of company profits. The F1 was not bad at .89 (nickmuchi/distilroberta-finetuned-finclass). I then came across sec-bert recently which was trained on 270k documents of financial text from the US SEC and thought it would give me better results than the finetuned distilroberta but got roughly the same F1 (.87) after finetuning it (nickmuchi/sec-bert-finetuned-finance-classification) with the same data. Was a bit surprised as I thought it would perform better given the financial text it was trained on so would have a good grasp of the finance context and vernacular. I am very fresh to HF and NLP so wondering if there is something I am misunderstanding or missing in my thinking and rationale. Thanks virilo#0594: I'm feeding a model (AutoModel.from_pretrained "distilbert-base-uncased") with a batch of 64 samples, and its returns me a transformers.trainer_utils.PredictionOutput object. How could I extract the embeddings for these 64 rows? I'm using: test_result = trainer.predict(test_dataset=tokenized_test_dataset) test_result.__class__ >> transformers.trainer_utils.PredictionOutput
test_result.predictions.shape
>> (64, 133, 768)

I expected an output size of (64, 768), having a context vector for each input row.

What is this second dimension with a length of 133? Are these the inputs multiplied by the attention layers?

Should I simply average them?
theknolli#2238: Hi, does someone know of some good open source bot detection datasets (twitter or just generally detect whether some text was written by human or machine)? I want to make a school project of it
cakiki#9145: Have a look at AllenAI's grover model https://grover.allenai.org/
Also of interest might be this dataset: https://github.com/openai/gpt-2-output-dataset
theknolli#2238: thank you so much!! I will
theknolli#2238: I wrote Allen AI's author an email, fingers crossed I can get access to the dataset 🤞
cakiki#9145: Nice! I think there was a form somewhere; did you find that?
theknolli#2238: Yes, that google form
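(On virilo's shape question above: the 133 is the padded token sequence length, so the output is one 768-dim vector per token, not per input. To get a single (64, 768) matrix you can take the first ([CLS]) token or mean-pool over tokens while ignoring padding; a sketch of the pooling:)
```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    # last_hidden_state: (batch, seq_len, hidden), e.g. (64, 133, 768)
    # attention_mask:    (batch, seq_len), 1 for real tokens, 0 for padding
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (last_hidden_state * mask).sum(dim=1)   # ignore padding positions
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts                           # (batch, hidden) -> (64, 768)

# Alternative: last_hidden_state[:, 0] takes the first ([CLS]) token embedding.
```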
Kishan#2098: https://github.com/axa-group/Parsr This is very good parser to obtain structured data from corpus. However, it is not able to extract text from multi-column documents in correct reading order. Has anyone used it? Any suggestions will be great! yepster#9326: Hi all... I am trying to train a t5-small-24L (https://arxiv.org/pdf/2109.10686.pdf) model and cannot get the training to a stable convergence. The purple line seems to have hit some kind of hard bottom, which might be caused by some (??) inability of bfloat16 to express the weights required for improved accuracy. The last thing I tried was switching back to float32 instead of bfloat16 (training with the huggingface flax script on TPU). The float32 run shows a much less 'ragged' line further in the training. Strange thing is that TPUs use bfloat16 for matrix multiplications under the hood (https://cloud.google.com/tpu/docs/bfloat16). I'll let the float32 run continue for a while, then restart that with a higher learning rate. Anyway, I wanted to share this here, maybe others have experience with bfloat16 as well? https://cdn.discordapp.com/attachments/922424173916196955/953232986457923604/unknown.png mbednarski#0080: Hi all I have a classifier (multi-class) built using transformer model. In addition I have a knowledge graph with a lot of potentially useful information (like synonyms for rare terms). Can any of you share your experience in using knowledge graph to assist text classification? I found a lot for enhancing a language model but not for classifier :/ johko#5355: Does anybody have a recommendation for a good text annotation tool (so far only for NER tasks)? It doesn't need to be for free, but maybe also not as expensive as Prodigy 😅 It would be great if it can be used collaboratively. We tried Doccano recently, but the exporting of texts was a bit of a mess, especially filtering out certain approved annotations. ChainYo#3610: I use `Label Studio` which is nice and easy to setup 🙂 ChainYo#3610: You can also have Active learning while labelling data which is handy too mbednarski#0080: We also use label studio and it is not perfect but ok 😉 yaswanth#1616: try https://prodi.gy/ ᴛᴇɴꜱᴏʀᴄɪꜱᴛ#0703: https://medium.com/artificialis/one-example-of-scraping-with-beautiful-soup-bda4e83cfbdd my new nlp article, follow me up on medium. thanks! 🤗 Caiba#5330: Do you think prodigy better tham Inception? yaswanth#1616: I don't know about inception but we can use prodigy for many tasks like NER , text classification, Speech classification, speaker recognition. StephennFernandes#2961: i have a decently large corpus for TTS where only one speaker audio is recorded for 1000+ hrs and for the same language i have a multi-speaker 300 hr ASR dataset, can i mix the TTS dataset with ASR dataset ? would this cause any issues in generalization or perhaps skew or bias the model because the one TTS speaker would dominate in the generalizations ? is it an ideal practice in research to combine the TTS (single speaker) and ASR (multi-speaker) datasets ? harveenchadha#7362: The only way you will get to know this is by doing. What I recommend is take 200 hours of TTS data and combine it with 300 hr and train one model on 500 hours. VB#3848: hmm! TTS is a very speaker centric task. I'm not sure if mixing audios will actually help, unless ofc you create speaker embeddings for each entity in the audio.
The normal practice for multi-speaker setup is to learn speaker embeddings alongside and then provide that as an additional input to the model XhoniShollaj#8828: Hi Guys, Currently, I have a task at hand which involves binary text classification (with a focus on higher accuracy and less on interpretability). For the moment, besides pre-processing and the necessary feature engineering, I'm using RNN through the Keras library, and the performance is decent - but as a beginner in NLP I'm wondering what would be a more appropriate model/approach and combination which the more experienced members would recommend? Any input or direction would be appreciated! NielsR_#8974: Hi! The simplest baseline for text classification is probably TF-IDF in combination with logistic regression/SVM. That's how traditional NLP handled text classification before deep learning. This can be implemented very easily in sklearn in a few lines of code. After that, you can experiment with more complex models (such as RNN in your case, or a Transformer-based model like BERT). For the latter, I recommend checking out the official notebook: https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb Samarth#2271: Can someone advise what filter to keep on images while making word clouds in python? Since I am not able to get proper imprint of the person with word cloud Deleted User#0000: The count vectorizer is "more simple" than tfidf is! NielsR_#8974: Fair enough 🙂 Don#3665: Hi! I need to make a NN that does multi-label classification in pytorch, preferably with the 8-bit-bert model but that part isnt too important. Does anyone know if there’s a way to pipeline it so that I dont have to do it from scratch? I don’t know if huggingface has multi-label classification models available. I’m a beginner, so sorry for having to deal with me! XhoniShollaj#8828: Thank you - Appreciate the input 🙂 ChainYo#3610: You could finetune a BERT base model on your downstream classification task. Here is a simple example of text classification with n_classes using PyTorch and PyTorch lightning for training https://github.com/ChainYo/pizza-challenge/blob/c30e8f7c0e93412e841dd3ca87de9ffdc89938cb/src/pizza_challenge/pipelines/training/model.py#L12 NielsR_#8974: All sequence classifiers in the transformers library support the "problem_type" argument, which can be set to "multi_label_classification". This makes sure the appropriate loss function (BCEWithLogits) is used. I have a notebook illustrating this here: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/BERT/Fine_tuning_BERT_(and_friends)_for_multi_label_text_classification.ipynb Tanya_#0424: Hi , I was going through this youtube video on Sentence Similarity https://www.youtube.com/watch?v=VCZq5AkbNEU by @Omar Sanseviero and wanted to try it. Can some help me with the Datasets mentioned in the video. Thomas Simonini#8611: Hi @Tanya_ for MSMarco for the first one it's https://huggingface.co/datasets/Tevatron/msmarco-passage nickmuchi#2844: Can this be applied to documents? Like siamese networks for 2 documents?
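(A minimal version of the TF-IDF + logistic regression baseline NielsR mentions above; the texts and labels are toy placeholders to keep it runnable:)
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = ["loved it", "terrible support", "works great", "never again", "excellent", "broken on arrival"]
labels = [1, 0, 1, 0, 1, 0]  # swap in your real data

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.33, random_state=0, stratify=labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```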
Omar Sanseviero#6198: Yes, for sure! You can even do images with sentence transformers. The name aged a bit bad hehe Omar Sanseviero#6198: You might want to check out https://www.sbert.net/index.html Omar Sanseviero#6198: https://huggingface.co/tasks/sentence-similarity Sefi K#6299: Hi everyone, I recently found out about speech disfluency models and came across a pre-trained model that I'd love to put to good use (https://github.com/pariajm/joint-disfluency-detector-and-parser). Unfortunately, I'm an embarrassing noob and can't figure out the right way to utilize this model on my server as I'm running out of memory when I'm trying to install it. Can anyone point to a best practice regarding working with a model that isn't included in Huggingface? Merve#3234: are you fine-tuning or simply inferring? Sefi K#6299: simply inferring Merve#3234: how big is the model? 😮 Sefi K#6299: 1.5GB 🥺 Tanya_#0424: thanks @Thomas Simonini Lale#8743: Does anyone know for using allennlp (https://huggingface.co/allenai/bidaf) model, what version I need to install to use their models? I don't know how to find information related to the version of library that I need to install. Omar Sanseviero#6198: This is answered in https://discord.com/channels/879548962464493619/956613091133636678 nefasto#8273: hello there maybe a noob question. Can the Wav2Vec models like xls-r work in real time/online, like streaming from microphone ? Omar Sanseviero#6198: Yes, we actually had this in the Speech Challenge, you could do ASR with a discord bot in a voice channel 🤯 Omar Sanseviero#6198: You cna find a bit more about this in #audio-discuss Omar Sanseviero#6198: You can also see https://huggingface.co/blog/asr-chunking#live-inference
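(Following the chunking link Omar shared, a short sketch of chunked inference with the ASR pipeline; the model name and chunk/stride sizes are only examples, and this gives near-real-time chunked transcription rather than true word-by-word streaming:)
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="facebook/wav2vec2-base-960h",
    chunk_length_s=10,       # length of each audio window in seconds
    stride_length_s=(4, 2),  # left/right context kept around each window
)
print(asr("recording.wav")["text"])
```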
Seabass#0062: has anyone trained gpt-2 or another transformer from scratch? What was your experience/best practices? Seabass#0062: I have a niche use case and I would like to train on a very specific dataset first, before fine-tuning on a smaller dataset later. Omar Sanseviero#6198: https://huggingface.co/course/chapter7/6?fw=pt might help 🙂 nefasto#8273: oh! thats nice! Because searching around I only found solutions based on split audio using VAD, like this one: https://github.com/oliverguhr/wav2vec2-live/ Seabass#0062: great! that's just what I needed nickmuchi#2844: Anyone here applying NLP to finance/markets? Keen to have a chat as I am working on a few things, very much a novice of the transformer world though so keen to learn. So far have looked at finbert/secbert and also finetuned distilroberta for sentiment analysis on finphrasebank and Covid kaggle dataset for the model to pick up the implications of Covid on various entities/markets (nickmuchi/distilroberta-finetuned-finclass). Don#3665: thanks so much! I’ll check it out Robert1#0234: Anyone know where I can get a more optimized version of gpt j than on hugging face? Robert1#0234: also anyone know where I can get access to pretrained fairseq-13b Robert1#0234: another question: I want to optimise gptj for inference performance. I know there is tools like deepspeed for this. What do people recommend? Rand#8588: I want to collaborate with someone in Arabic natural language processing project mr_seeker#1337: Our team KoboldAI converted them. ethereal_1202#3685: I had some questions while training a classifier NLP model if you could suggest me anything, it would be so helpful: 1. The classifier model has an accuracy of 94.5% on test set. And an AUC score of 1.0 . Is there any way I can find out some of the example that have been classified wrong by the model? 2. Also, when I compare the predicted class and the original classification on test set, then it is matching for every data element. Robert1#0234: im super interested in hosting my own version. Do you have any suggestions? mr_seeker#1337: Yeah, we have our own GitHub page and Discord. Come check us out? About hosting your own version, it's on Huggingface models under our team. Just search for "KoboldAI" and it should pop us as one of the fairseq models.
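(On ethereal_1202's first question: compare the predicted classes against the gold labels and index back into the raw texts; with a `Trainer`, the predictions would come from something like `trainer.predict(test_set).predictions.argmax(-1)`. The arrays below are toy placeholders. As a side note, an AUC of exactly 1.0 next to 94.5% accuracy, with every prediction apparently matching, usually hints that the comparison is being made against a column derived from the predictions rather than the true labels, so that step is worth double-checking.)
```python
import numpy as np

# Toy arrays standing in for your model's predictions, gold labels, and raw inputs.
preds = np.array([1, 0, 1, 1, 0])
labels = np.array([1, 0, 0, 1, 1])
texts = ["ex1", "ex2", "ex3", "ex4", "ex5"]

wrong = np.where(preds != labels)[0]
print(f"{len(wrong)} / {len(labels)} misclassified")
for i in wrong:
    print("true:", labels[i], "pred:", preds[i], texts[i])
```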
Robert1#0234: Awesome thanks 𓅬 gabriel_syme 𓅬#3220: has anyone made an implementation of typical decoding for flax models by any chance? 𓅬 gabriel_syme 𓅬#3220: I can see in the documentation it's only available for pytorch models right now, or rather not available for FlaxGenerationMixin 𓅬 gabriel_syme 𓅬#3220: if anyone has something like that, I'd love to give it a look 🙂 Sangeetha Venkatesan#0414: Has anyone used SPacy Text categorizer before, am trying to solve a problem on in and out of domain classifier, I have the banking domain for which i need to do a binary text classification on in and out of domain, I used BERT and other transformers on this text classification, but looks like probability of classifying a text to indomain is more. I created a dataset with indomain texts from the banking dataset and out of domain from the random out of domain examples. I am trying a way to benchmark the model Sangeetha Venkatesan#0414: Any thoughts or direction from the community would be very helpful, there were papers published on recognizing out of domain samples but there were'nt a approach with the model benchmarked NielsR_#8974: We support ONNX export for GPT-J, so if you want to optimize for inference, it might be worth checking that out. Details here: https://huggingface.co/docs/transformers/serialization NielsR_#8974: If you're into AWS, we also do have a blog post on deploying GPT-J on Sagemaker: https://huggingface.co/blog/gptj-sagemaker Robert1#0234: Thanks. I try to run this and got an error that gpt-j is not currently supported. ```python Some weights of the model checkpoint at EleutherAI/gpt-j-6B were not used when initializing GPTJModel: ['lm_head.weight', 'lm_head.bias'] - This IS expected if you are initializing GPTJModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing GPTJModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Traceback (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/robert/.local/lib/python3.7/site-packages/transformers/onnx/__main__.py", line 77, in <module>
main() File "/home/robert/.local/lib/python3.7/site-packages/transformers/onnx/__main__.py", line 52, in main model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=args.feature) File "/home/robert/.local/lib/python3.7/site-packages/transformers/onnx/features.py", line 283, in check_supported_model_or_raise model_features = FeaturesManager.get_supported_features_for_model_type(model_type, model_name=model_name) File "/home/robert/.local/lib/python3.7/site-packages/transformers/onnx/features.py", line 224, in get_supported_features_for_model_type f"{model_type_and_model_name} is not supported yet. " KeyError: "gptj is not supported yet. Only ['albert', 'bart', 'mbart', 'bert', 'camembert', 'distilbert', 'longformer', 'roberta', 't5', 'xlm-roberta', 'gpt2', 'gpt-neo', 'layoutlm'] are supported. If you want to support gptj please propose a PR or open up an issue." ``` NielsR_#8974: Are you running from the master (main) branch? It was merged only 3 days ago Robert1#0234: no, will give that a try, thanks! Robert1#0234: I got the following error (had to replace gpt-j string with gptj in one of the files to get this to run) ```python python3 -m transformers.onnx --model="EleutherAI/gpt-j-6B" onnx --framework pt --feature causal-lm Using framework PyTorch: 1.10.1+cu113 Overriding 1 configuration item(s) - use_cache -> False /usr/local/lib/python3.7/site-packages/transformers-4.18.0.dev0-py3.7.egg/transformers/models/gptj/modeling_gptj.py:576: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert batch_size > 0, "batch_size has to be defined and > 0" Validating ONNX model...
Traceback (most recent call last): File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.7/site-packages/transformers-4.18.0.dev0-py3.7.egg/transformers/onnx/__main__.py", line 94, in <module> main() File "/usr/local/lib/python3.7/site-packages/transformers-4.18.0.dev0-py3.7.egg/transformers/onnx/__main__.py", line 87, in main validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol) File "/usr/local/lib/python3.7/site-packages/transformers-4.18.0.dev0-py3.7.egg/transformers/onnx/convert.py", line 350, in validate_model_outputs session = InferenceSession(onnx_model.as_posix(), options, providers=["CPUExecutionProvider"]) File "/usr/local/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 335, in __init__ self._create_inference_session(providers, provider_options, disabled_optimizers) File "/usr/local/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 370, in _create_inference_session sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model) onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from onnx/model.onnx failed:Type Error: Type parameter (T) of Optype (Einsum) bound to different types (tensor(int64) and tensor(float) in node (Einsum_126). ``` ◢ nOmjeeb#3593: Hi all. Any chance to find a color copy of the O'Reilley book "NLP with Transformers" in EU or Germany in particular? I found that there are only BW copies. Sangeetha Venkatesan#0414: Any suggestions on this community! Would be lot helpful Seabass#0062: I have a few specific questions about NLP, would anyone with experience mind if I could DM them and ask?
NielsR_#8974: I'll ping @lewtun on that one, he's currently looking into supporting ONNX for all models
NielsR_#8974: Also pinging @lewtun on that one 😅
ChainYo#3610: I think it is me that did the implementation of the ONNXConfig for GPT-J last week
ChainYo#3610: Maybe I did a typo in the config 😮
ChainYo#3610: Could you tell me in which file ?
ChainYo#3610: in order to correct this, or maybe you could open a PR ? Sorry if I did a typo 😄
ChainYo#3610: Oh it's probably in the `src/transformers/onnx/features.py` file where all gpt-j features are declared 🙂
Robert1#0234: 1. yeah this file (src/transformers/onnx/features.py) I renamed gpt-j to gptj in the mapping keys -- it's just a typo
2. I also had to hack the following in src/transformers/onnx/__main__.py
```python
config = AutoConfig.from_pretrained(args.model)
if config.model_type in TOKENIZER_MAPPING_NAMES:
    preprocessor = AutoTokenizer.from_pretrained(args.model)
elif config.model_type in FEATURE_EXTRACTOR_MAPPING_NAMES:
    preprocessor = AutoFeatureExtractor.from_pretrained(args.model)
else:
    raise ValueError(f"Unsupported model type: {config.model_type}")
```
where I just changed to
```python
config = AutoConfig.from_pretrained(args.model) preprocessor = AutoTokenizer.from_pretrained(args.model) ``` just to get it working since gptj is not in tokenizer mapping names ( i guess you could add gptj: gpt2 in that mapping) Robert1#0234: ^ Robert1#0234: I could open a PR with a proper fix but want to get it fully working first -- at the moment I get the error as posted above when try to verify the result Robert1#0234: any ideas what that error with onnx runtime means? i dont have much experience with this so any suggestions welcome ChainYo#3610: The error seems to come from onnxruntime package but I never had this kind of problem with onnxruntime ChainYo#3610: It could be a problem while converting the model, because it seems to happen when you load it with onnxruntime and you create an `InferenceSession` ChainYo#3610: Like a layer that has been badly converted to one onnx node and that creates a tensor format problem Robert1#0234: how would I typically work out which layer is responsible. I can probable find something out but any tips would be appreciated. ChainYo#3610: Could you load your model in https://netron.app and check if there is a node named `Einsum_126` ? (`CTRL` + `f` will help 😄 ) Robert1#0234: nope cant find it Robert1#0234: einsum_130 and einsum_469 Robert1#0234: and higher numbers ChainYo#3610: Are you running this in a notebook ? I mean the conversion code ? Robert1#0234: no I run on comand line with python python3 -m transformers.onnx --model="EleutherAI/gpt-j-6B" onnx --framework pt --feature causal-lm ChainYo#3610: Fine, then I have no clue on how to solve this error ChainYo#3610: Could it be possible that onnxruntime is lacking one special layer used in gptj models ? If yes then onnx doesn't know how to convert it and the final model is broken.
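A rough way to look for the failing node outside Netron is to load just the graph structure with the `onnx` package and list every Einsum op and its inputs; the path below is simply where the export command above writes the model:

```python
# List every Einsum node in the exported graph so the node name from the
# onnxruntime error ("Einsum_126") can be matched against its inputs.
import onnx

# load_external_data=False keeps the 6B weights out of memory; we only need the graph.
model = onnx.load("onnx/model.onnx", load_external_data=False)

for node in model.graph.node:
    if node.op_type == "Einsum":
        print(node.name, list(node.input), list(node.output))
```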
Robert1#0234: all the einsum calls come from this in transformers ```python def fixed_pos_embedding(x, seq_dim=1, seq_len=None): dim = x.shape[-1] if seq_len is None: seq_len = x.shape[seq_dim] inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2) / dim)) sinusoid_inp = torch.einsum("i , j -> i j", torch.arange(seq_len), inv_freq).to(x.device).float() return torch.sin(sinusoid_inp), torch.cos(sinusoid_inp) ``` Robert1#0234: I wonder is there something going on here? Robert1#0234: like is inv_freq a float and then arange is an int maybe Robert1#0234: this worked after converting arange to a float. Does onnx remove the ability to pass arbitrary arguments to the model? like if I want to pass in eos_token_id=X or temperature=Y I can no longer do that because the model is essentially frozen snapshot? JohnL#5945: Question: Does anyone know why word vectors have so many dimensions? It seem unnecessary, almost like you could do something similar with just 3 to 10 dimensions... Also what is the information that word vectors carry? Is it just about clumping words together based on how frequently they're used together? That's it? Seabass#0062: Has anyone trained GPT-2 or NEO from scratch? If so, how many steps until you converged/got "good enough" performance? Robert1#0234: hey could I join the KoboldAI discord? I couldn't find a link online. Balthier#2674: A bit of perhaps a noobish question >.<
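A sketch of the cast Robert1 describes, applied to the snippet above — the fix that eventually lands upstream may look different, but the idea is to make both einsum operands float so the exported Einsum node isn't bound to mixed int64/float inputs:

```python
# Hedged sketch, not the official transformers fix: cast the arange operands to float
# so torch.einsum (and the exported ONNX Einsum) sees a single dtype.
import torch

def fixed_pos_embedding(x, seq_dim=1, seq_len=None):
    dim = x.shape[-1]
    if seq_len is None:
        seq_len = x.shape[seq_dim]
    inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim))
    sinusoid_inp = torch.einsum(
        "i , j -> i j", torch.arange(seq_len, dtype=torch.float), inv_freq
    ).to(x.device).float()
    return torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)
```

On the follow-up question: the exported graph only maps input ids to logits, so sampling settings such as temperature or eos_token_id are never baked in — they belong to whatever decoding loop you run on top of the ONNX session.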
Balthier#2674: But, anyone know if BERT's tokenizer are sorted or not based on frequency? Balthier#2674: I remember hearing something like that, but I don't know how to confirm it Balthier#2674: Like, does the most frequently appearing word in english has the smallest token ID in huggingface's BERT tokenizer? Balthier#2674: Hm... then again... BERT Tokenizer aren't exactly word-wise aren't they Balthier#2674: >.< cakiki#9145: They are; the first thousand tokens are unused, then another thousand are the different characters that the BPE algorithm starts with, then the first proper subwords are `"the":1996,"of":1997,"and":1998,"in":1999,"to":2000,"was":2001,"he":2002,"is":2003,"as":2004,"for":2005,"on":2006,"with":2007,"that":2008,"it":2009` which are the most common~~ in English.~~ in the training corpus. You can introspect the tokenizer object to see all this, or you can look at the tokenizer.json file on the hub: https://huggingface.co/bert-base-uncased/raw/main/tokenizer.json lewtun#4548: Hey @◢ nOmjeeb unfortunately O'Reilly publishes (almost) all of their printed books in B&W 😦 We've told them that this is the most requested feature from our readers, so hopefully a future edition will be in colour! ◢ nOmjeeb#3593: Is it BW even in ebook version? lewtun#4548: @Robert1 this indeed looks like a problem in the ONNX graph. I see you're using PyTorch v1.10 and know that many bugs were fixed in PyTorch v1.11 - out of curiosity, does the problem persist in the latest PyTorch version? If so, I suggest opening an issue or PR with the fix - thanks! cakiki#9145: no, the ebook is in color Mark#9079: I had an idea about using a paraphrase classifier as a scorer for Seq2Seq tasks. Anybody wanna discuss it? I was thinking of using it in combination with rouge since it might capture semantic similarities better, such as synonyms. Apart from the obvious downside that it is less interpretable and not 100% accurate, any other reason why not to do this? Robert1#0234: I could fix it by changing the transformers function mention above (converting int to float for arange). I can run the onnx graph now. But do you think upgrading pytorch would allow me to set the temperature of the model at runtime? or is this not possible with an onnx graph because its fixed on creation? migge#7099: Hi, I've read from the forums about this post "Do transformers need Cross-Validation" and the answer is that we really do no need to use cross-validation when using especially Transformers (or any deep learning models) I wonder, is there any references to this or reports that I could use?
Would be very grateful 🙂 cakiki#9145: See this thread by Jeremy Howard (replying to @Omar Sanseviero 😉 ) cakiki#9145: https://twitter.com/jeremyphoward/status/1392410879354806278 Robert1#0234: I want to do batching using batch_size for transformers language model. I notice that quality detiorates if I do batching of two inputs with different lengths. I started loads of weird responses and particularly "Q:" all the time. Robert1#0234: and advice how I can do this better? Robert1#0234: whats going on here? Robert1#0234: any padding token work better than others? cakiki#9145: Quality deteriorates at inference time? Robert1#0234: Yeah I think it was due to padding боряна#7085: Hey guys, I'm trying to install the ```neuralcoref``` plugin, but I see discussions from last year that it doesn't work with >spacy3 Is there an alternative, or some update on this matter? Robert1#0234: if I convert a GPT model to ONNX can I still use parameters like temperature to feed into the model at runtime Seabass#0062: Has anyone here trained GPT-2 from scratch? I trained the GPT-NEO-125M from scratch for 3M steps and it hasn’t had great results. Do I need more data, a larger model, more training? I am training it on a text representation of a MIDI file so I’m trying to have it learn the structure of the music. Also I need a lot of output, like more than 2048 tokens ideally. I have looked into GPT-2-XL but dont know where to start sin yee#3513: Is there a way to extract text coordinates from PDF? sin yee#3513: how to extract text font size from pdf? Omar Sanseviero#6198: There are some document extraction models that you can use to get the text out of a document image if that's useful sin yee#3513: Hi @Omar Sanseviero Do you mind sharing the models details , perhaps the URL? Omar Sanseviero#6198: There is DiT (document image transformer) https://huggingface.co/spaces/nielsr/dit-document-layout-analysis, TrOCR https://huggingface.co/spaces/nielsr/TrOCR-handwritten, LayoutLM https://huggingface.co/spaces/nielsr/LayoutLMv2-FUNSD, and a couple of others, depending on your use case sin yee#3513: Thanks! Have tried DiT out, impressive result. I'm building a resume parser. Currently am figuring out how to do the block segmentation (Education block, Working experience block, etc)
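On Robert1's batching question above: the usual culprit is right-padding without an attention mask, which a causal LM then reads as part of the prompt. A minimal sketch of left-padded batched generation — GPT-2 is only a stand-in for whichever causal LM you're serving:

```python
# Batched generation with left padding for a decoder-only model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Decoder-only models should be padded on the left so real tokens sit right
# next to the position where generation starts.
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token

prompts = ["The weather today is", "Once upon a time"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)

outputs = model.generate(
    **inputs,                           # includes attention_mask, so pad tokens are ignored
    max_new_tokens=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```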
sin yee#3513: Do you have any thoughts on this? :huggingsound: cakiki#9145: https://github.com/kermitt2/grobid This is usually used for academic papers, but it might work for your usecase mr_seeker#1337: I have heard that question before. The 125M is not really usable other than for testing if your rig works. You might go with something bigger, the 355M might work. Also note that the bigger the model, the more data it requires, and that it becomes heavier on the FLOPS. Ideally you want to have the batch size at 2048 (bigger will degrade the output). If you don't know where to start, start with looking at the dataset. What do you want the AI to output? What text you have? For example, I trained Janeway with 2210 curated ebooks, and I am already aiming for a dataset containing over 3000 ebooks. This is done ALL by hand, to give the optimal performance on it. For training an 125M from scratch, one paper said it requires something like 6,3B tokens to train. So, better start building the dataset first 😉 Seabass#0062: I have 2 datasets, 150Mb of transcribed MIDI and 15Mb of NES chiptune MIDI. My goal is to train on the larger set and then transfer learn on the NES one mr_seeker#1337: Then I recommend finetuning or even softprompt tuning. Seabass#0062: Fine-tuning a pre-existing model on just the NES dataset? mr_seeker#1337: Yes. If I assume your transcribed MIDI is text, then that would be possible. If its something like "F4 A3 G#2" then it would be a lot harder. I train on >1Gb of data.... Seabass#0062: Yeah it looks more like the notes you sent and not english Seabass#0062: I could generate more data but it takes a long time Seabass#0062: The MIDI->ABC script was not written by me and it’s very slow mr_seeker#1337: I did something similar with RNN, but you need to compile each note to a number. mr_seeker#1337: Instead of converting it using the gpt2 tokenizer, you can tokenize them that way. mr_seeker#1337: so A1 = 1, B1 = 2, C1 = 3, etc. mr_seeker#1337: I think MIDI already does this for you. Seabass#0062: I tried someone else’s RNN implementation and didn’t have much luck Seabass#0062: The difficult part for me has been timing and multiple tracks mr_seeker#1337: https://towardsdatascience.com/how-to-generate-music-using-a-lstm-neural-network-in-keras-68786834d4c5 <- something like this? Seabass#0062: The .abc music format has been the best representation I’ve found
mr_seeker#1337: Issue I found is that language does not scale to midi 😉 mr_seeker#1337: It's like giving the AI a complete new language to learn Seabass#0062: I haven’t looked at this one specifically but LSTM was the first thing I tried after just a finetune of GPT-3 Seabass#0062: That’s why I wanted to train from scratch. I was hoping it could learn the patterns. Seabass#0062: Also the RNN/LSTM implementations I have seen did not have multi-channel support Seabass#0062: Thank you for your help btw mr_seeker#1337: Always happy to help. Just finished training 13B - Janeway, and I fully start to understand what "more is better" means... Seabass#0062: I wish I had a rig for that. I’ve been using a P100 on Google colab Seabass#0062: I will try one of the larger models with more data then. Have you ever used GPT-2-XL? Seabass#0062: Sorry to bother you again but would you mind answering some more specific config questions? It's ok if you can't-- I have just been stuck with some specific questions for a while. sin yee#3513: How to check if a PDF is a scanned image or contains text in bulk? there are 1000 files, and I want to split them into 2 folders. Dri#9195: https://stackoverflow.com/questions/55704218/how-to-check-if-pdf-is-scanned-image-or-contains-text Dri#9195: https://cdn.discordapp.com/attachments/922424173916196955/961849497485459516/unknown.png Dri#9195: https://cdn.discordapp.com/attachments/922424173916196955/961850449479225364/unknown.png sin yee#3513: Thanks @Dri , I found this snippet (https://stackoverflow.com/questions/55704218/how-to-check-if-pdf-is-scanned-image-or-contains-text/59421043#59421043) might work for a half pipeline. It prints out the pdf types. But how to store the PDFs into respective folder automatically? Imagine after running the code, all PDF files already split into 2 folders sin yee#3513: Cool, I've solved it by customizing the code :huggingsound: iremnasir#6387: Hello fellow 🤗 ! I am challenged with a task of summarizing a collection of incoherent text (e.g. from different sources) into something that captures the most information. Think of it like, summarization of tweets about food.
How could one go on with it? I do not have labeled data to fine tune any encoder-decoder model. I am also scouting for papers that demo this challenge however I am not able to see anything for that matter. Cheers cakiki#9145: Have you experimented with just concatenating all the input texts and trying an off-the-shelf summarization model? Seabass#0062: I am training from scratch with run_clm.py and my loss has plateaued at ~0.79 after only 5 epochs, did I do something wrong or has it already converged? mr_seeker#1337: If loss goes below 0, it usually means that you did something wrong. Loss should never be below 0, it means that your model not only hits the mark, but is cheating. Mark#9079: I think thats a tilde, not a minus sign 😄 Seabass#0062: It looks like it’s still going down, just very slowly as the learning rate decreases Seabass#0062: Yes it’s an approx. sign Mark#9079: I think It is normal, I've had similar experience myself. It also depends on the size of the data set of course Seabass#0062: What loss/validation score should I aim for? Mark#9079: Afaik, you should train until you see the train loss/val loss divergence Mark#9079: i.e. early stopping Mark#9079: Or you could train for a fixed number of epochs and save the best model Seabass#0062: Ahh ok Mark#9079: w.r.t the validation loss that is Seabass#0062: Hmm ok Seabass#0062: I dont have a formal background in ML so it sounds like I need to learn some more fundamentals mr_seeker#1337: https://www.baeldung.com/cs/training-validation-loss-deep-learning Seabass#0062: Also my generation is taking an extremely long time for >128 length mr_seeker#1337: Basically saying, you want to train until the validation loss crosses the training loss.
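On the early-stopping discussion above: with the Trainer API this can be automated rather than watched by hand. A minimal sketch — `model`, `train_ds` and `eval_ds` are placeholders for your own model and tokenized datasets:

```python
# Stop training when the validation loss stops improving and keep the best checkpoint.
from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=500,
    save_steps=500,
    load_best_model_at_end=True,        # reload the checkpoint with the best eval loss
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```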
Seabass#0062: I changed the n_positions in the model config to 4096 before training from scratch since I need longer output Seabass#0062: Not sure if there is a better model meant for longer text like GPT2-XL but I enjoy the convenience I get from the GPT transformers with the run_clm.py script Mark#9079: Yes that is normal. If you're familiar with big O, the attention mechanism is O(n^2) in both time and memory This is the reason why the max input length is usually set as 512 as default mr_seeker#1337: GPT-Neo works with 2048 token lengths mr_seeker#1337: And I use fairseq-dense for my 13B model Seabass#0062: Ahh ok Seabass#0062: Does Neo run with run_clm.py? I considered it but didn’t think they were compatible mr_seeker#1337: Yes, it does. I trained several models under the KoboldAI team flag 😉 Seabass#0062: Ok cool. Seabass#0062: I was wondering about the generation speedup since aitextgen (a wrapper for huggingface and GPT2) generates 2048 tokens really quickly Seabass#0062: I can increase the output size to 4096 by modifying n_positions right? mr_seeker#1337: Have not tried. Seabass#0062: Ok I think that I can but I’m not sure how it will affect the model accuracy etc Seabass#0062: One last question, is there a way to change the initial learning rate when running run_clm.py? I resumed training and the learning rate reset to a large value mr_seeker#1337: Makes me wonder: What is the biggest CLM model on huggingface regarding size/accuracy? Seabass#0062: Transformer-XL doesn’t have a max length does it? iremnasir#6387: That is what I am doing, but those texts are very incoherent so with an off the shelf T5, I am not getting anywhere Mark#9079: Could you give an example of the data and desired output? iremnasir#6387: It is pretty confidential but you can really think of it as a tweet collection about food. Concatenating it at max_len token and trying to summarize the diverse things people may be talking about...
Mark#9079: So kinda like, in e.g. bulletpoints? Mark#9079: At what points does your t5 model fail? Does it capture too little? Mark#9079: and is it trained from the "t5-[size]" checkpoint or a checkpoint trained for summarization? Robert1#0234: i notice GPTJ has a parallelize option to distribute work over multiple GPUs. Would this improve throughput (less GPU time on a single generate) and therefore reduce costs or is it just to reduce latency? mr_seeker#1337: What if it helps with low-end GPU's that don't have enough VRAM? mr_seeker#1337: we use it for something called "breakmodel" to split the model between GPU and CPU Robert1#0234: makes sense. I guess to fit a model onto multiple smaller GPUs. Thanks Mark#9079: Very interesting. Got any good resources to learn more about this? mr_seeker#1337: I only can refer you to the KoboldAI github, I haven't written the code for it though Seabass#0062: Anyone know how to change the starting learning rate when running the run_clm.py training script? mr_seeker#1337: I believe your starting loss is based on your first example? Mark#9079: Why would you like to change the starting loss in the first place? Seabass#0062: Shoot I meant to say Learning zrate Mark#9079: Learning rate is specified through the TrainingArguments. I suppose you can pass it through the CLI or from a json file Seabass#0062: hmm ok, I am using run_clm.py to train Seabass#0062: I cant figure out from the instructions online Seabass#0062: So it looks like it should be changing automatically so long as I give it the right directory Seabass#0062: but it's not, it keeps resetting after every run and just ignoring the training config Abraham Owodunni#5583: Please I've been given chatbot as my final year project and I don't where to even start from, I've been doing my search on Google but I haven't found anything. Please I need heeeelp🤲🤲🤲
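On Seabass's run_clm.py question: the script parses `TrainingArguments` straight from the command line, so the learning rate can be overridden there (the file paths below are placeholders):

```
python run_clm.py \
    --model_name_or_path gpt2 \
    --train_file data/train.txt \
    --do_train \
    --learning_rate 1e-4 \
    --output_dir out-clm \
    --overwrite_output_dir
```

One caveat: if a run is resumed from a checkpoint, the saved optimizer and scheduler state are restored, which can look like the new learning rate being ignored.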
ps#6769: Check out RASA NielsR_#8974: Yes Rasa is a great framework for chatbots, and has support for HuggingFace Transformers models. They have a course as well: https://learning.rasa.com/conversational-ai-with-rasa/ Robert1#0234: I want to reduce the gpu memory use of a GPTJ model during inference. What are my options? Abraham Owodunni#5583: Thank you, I wiil check it out. Seabass#0062: anyone know how to change the inital learning rate with run_clm.py? I am not sure if it accepts manual override of the training args Skoffie#6658: Hi everyone, does anyone know if the longt5 is now available on HF? mr_seeker#1337: Use less tokens for memory, split in layers... NielsR_#8974: You can just provide --learning_rate, it's part of the training arguments NielsR_#8974: There's an issue open, we are going to add it for sure! https://github.com/huggingface/transformers/issues/16681 Mark#9079: Omg i've been working on implementing longformer attention in T5 for a while now, how could I possibly have missed this? 🤦‍♂️ Skoffie#6658: Thank youuuuu mr_seeker#1337: Anyone here any experience or examples to use SFTP with dataloader? I am planning to use a big private dataset but want to load it from NAS... iremnasir#6387: Are there ways to dynamically set min and max token length for summarization inference? Sometimes the sentences are cut prematurely because some token limit is reached at the inference. Is there a workaround that checks sentence "completeness" and penalizes unfinished sentences? 🙂 Thanks! Güldeniz#0751: Hey everyone. Are there any suggestion on sentiment analysis model for Turkish lang you can suggest? Merve#3234: Hello, I wrote this tutorial long ago https://discuss.huggingface.co/t/turkish-nlp-tutorial/3859/3 Merve#3234: there's bunch of models here https://huggingface.co/models?language=tr&pipeline_tag=text-classification&sort=downloads seems that the most used one is savasy/bert-base-turkish-sentiment-cased Güldeniz#0751: Perfect 👍 Thanks @Merve Robert1#0234: how can I reduce the tokens which are input to gptj model transformer for purpose of reducing memory usage? is there a parameter or do I need to hack around with code more? Seabass#0062: There is an argument in the tokenizer for that
Seabass#0062: I cant find it int he docs but: Seabass#0062: tokenizer = ByteLevelBPETokenizer(lowercase=True) tokenizer.train(files=myfile, vocab_size=myvocab) nickmuchi#2844: Hi there, I have been playing around with sentence transformers on financial text (using monetary policy minutes) and they do a pretty decent job for some search queries such as, “what is the expectation for inflation”,”mortgage costs”, “debt purchases”. Was wondering how i can make the transformer better understand financial similarities such as housing/mortgage or interest rate/yields or bonds/debt? Was trying to avoid finetuning as I do not have a dataset of financial text pairs. Any idea how else I could make the query results better? bmah-0#9646: in the `BigScience Large Language Model Training` project, the language % is not almost equally distributed. Does this not create bias in language response/translation? https://cdn.discordapp.com/attachments/922424173916196955/963202539514396733/BigScience_language.png sin yee#3513: How to detect table from a PDF? I aim to group all PDFs with table together. Don't want to extract anything. sin yee#3513: #Resume parser The aim is to detect the **headers** (Education, Personal details, etc). I want to tell computer that, 'Industrial experience', 'Experience', 'Previous Work' is eqivalent to 'Work experience'. Problem is there could have more synonyms to 'Work experience'that we don't know. How to solve this? ChainYo#3610: I don’t know how they do to deal with unbalanced data but you can add weights for each of your class for your criterion. ChainYo#3610: This way your overcome the unbalanced data and avoid to train a model to learn a lot on the main class sin yee#3513: I was trying to run sematch. https://github.com/gsi-upm/sematch But keep getting error. Have inform the owner. In the meantime, does anyone knows if the package has been deprecated already? Merve#3234: I think you can use tesseract or something, not sure though Merve#3234: I'd suggest you to take a look at the requirements of the project and make sure your environment is compatible NULL#3726: has anyone been playing with gptj? NULL#3726: I tried using similar temperature values as gpt3 but it doesn't seem to work quite well. myishere#4080: I am new learner and have been doing the courses on hugging face. I want to do project where bert is trained on squad dataset and then i can use it on multiple or single pdf document to fetch the answer. Any github link for that ? If already done by someone.
myishere#4080: For Question answering NielsR_#8974: You can take a look at the question answering notebook: https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb NielsR_#8974: Alternatively, you can use LayoutLM/LayoutLMv2 to directly do QA on scanned documents. LayoutLM is based on BERT but adds additional layout features to make better predictions. I have a notebook on that here: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/DocVQA/Fine_tuning_LayoutLMv2ForQuestionAnswering_on_DocVQA.ipynb mr_seeker#1337: I have, what you want to know about it? NULL#3726: my bad i thought gptj had more parameters Mark#9079: hahaha whats up with the unrealistic standard for neural networks nowadays? 😂 Mark#9079: I'm imagining some tinder bio like "I only date language models with more than 10b parameters" NULL#3726: 😂 😂 Mark#9079: If you have seen the movie "HER" perhaps it might be a realistic outcome in the future NULL#3726: I was under assumption that gptj size was bigger still performed poorly than gpt3 NULL#3726: https://www.forefront.ai/blog-posts/paraphrase-long-form-text-with-gpt-j NULL#3726: @mr_seeker Mark#9079: Ah I understand Mark#9079: Given that GPT-3 is 30 times as large, GPT-J is pretty impressive in comparison NULL#3726: also gpt3 is not opensource, they have more data mr_seeker#1337: I have a finetuned fairseq-dense-13B and it matches GPT-3 in its own examples NULL#3726: what examples have you tried,? mr_seeker#1337: Code to text, write emojis mr_seeker#1337: https://beta.openai.com/examples/ mr_seeker#1337: SQL translate works
NULL#3726: https://twitter.com/mckaywrigley/status/1284110063498522624 mr_seeker#1337: chatter bot... NULL#3726: is this possible? Mark#9079: this one? https://huggingface.co/KoboldAI/fairseq-dense-13B-Janeway mr_seeker#1337: That is my model, yes. Mark#9079: Impressive mr_seeker#1337: Works on Colab too if you want to test it out Deleted User#0000: ask gpt 3 to give instructions on how to build a space x rocket, get rich quick and become elon musk Deleted User#0000: 🙂 NULL#3726: ask gpt 3 to write gpt 4 mr_seeker#1337: Here's the colab for testing purposes: https://henk.tech/colabkobold13b/ Deleted User#0000: ask gpt 4 to write gpt 5 Deleted User#0000: and so on mr_seeker#1337: Despite never "really" being trained on code, it spits out valid python and C++ NULL#3726: gpt j is more optimized for tasks involving code mr_seeker#1337: This is Fairseq-13B... I havent tried if it knows other languages mr_seeker#1337: It knows freaking dutch, but it's talking like an American who does not fully understand Dutch though... ``` Het is druk deze ochtend. Er zijn veel gasten die komen en gaan. Jan kijkt naar buiten, het is lekker zonnig. Maar er staat een telefoon."
"Daar moet je dan niet meer verwachten," I said. "Niemand doet ze het als ik nog geen telefoon heb." "De vraag is waarom daar onderstaat?" "I've no idea. Het is een telefoon van de spionage van Alfredo. Zijn telefoon neemt iedere dag een eindje in de luchthaven. De vraag is om dus te zeggen wat hij wil." The policeman looked at me dubiously. "Je krijgt wel een kijkje, maar wie was het daaraan met de telefoon? Dat zou wel eens mee betekenen." "Er valt zulke foto's op de mond, ook allemaal achteraan. ``` mr_seeker#1337: And this is creepy (started with text from Hansel and Gretel): ``` Vor einem großen Walde wohnte ein armer Holzhacker mit seiner Frau und seinen zwei Kindern; das Bübchen hieß Hänsel und das Mädchen Gretel. Er hatte wenig zu beißen und zu brechen, und einmal, als große Teuerung ins Land kam, konnte er das tägliche Brot nicht mehr schaffen. Wie er sich nun abends im Bette Gedanken machte und sich vor Sorgen herumwälzte, seufzte er und sprach zu seiner Frau: "Was soll aus uns werden? Wie können wir unsere armen Kinder ernähren da wir für uns selbst nichts mehr haben?" - "Weißt du was, Mann," antwortete die Frau, "wir wollen morgen in aller Frühe die Kinder hinaus in den Wald führen, wo er am dicksten ist. Da machen wir ihnen ein Feuer an und geben jedem noch ein Stückchen Brot, dann gehen wir an unsere Arbeit und lassen sie allein. Sie finden den Weg nicht wieder nach Haus, und wir sind sie los." - "Nein, Frau," sagte der Mann, "das tue ich nicht; wie sollt ich's übers Herz bringen, meine Kinder im Walde allein zu lassen! Die wilden Tiere würden bald kommen und sie zerreißen." - "Oh, du Narr," sagte sie, "dann müssen wir alle viere Hungers sterben, du kannst nur die Bretter für die Särge hobeln," und ließ ihm keine Ruhe, bis er einwilligte. "Aber die armen Kinder dauern mich doch," sagte der Mann. "Wenn sie nicht in den Wald kommen, werden wir im Hause verbrannt, und das wird mir andere Generationen gefallen." "Das tut ich also nicht, Mann," sagte die Frau und setzte ihn los; "liebe Knecht, weil man sich in der Schule nicht behauptet hat, bist du gut. Ich weiß doch nicht, wie ich darauf arme ich meine Kinder." Der Mann brach ein anderer Schritt auf und setzte sich an die Unterseite des Hauses ein. Heute war es soweit, und der Mann trug um die Ecke schwarzer Kleidung, die ein Kind wollte. "Da sind Gretel und Hänsel!" sagte er. ``` andreaschandra#4851: interesting mr_seeker#1337: What's more interesting is that it has only seen text like that from cc-crawler, otherwise I would not know where he got that from. Deleted User#0000: @mr_seeker does it know Albanian mr_seeker#1337: I have no clue, since I am not Albanian... Deleted User#0000: what ai is this anyway mr_seeker#1337: KoboldAI/fairseq-dense-13B-Janeway Deleted User#0000: is it new? mr_seeker#1337: The base model is out for a while, I just had time to build a fine-tune out of it. Deleted User#0000: o
Deleted User#0000: ok mr_seeker#1337: This is number 4 in the series. mr_seeker#1337: Next is a new one with even bigger dataset... Deleted User#0000: yo Deleted User#0000: amazing! Seabass#0062: I have been training GPT-2 from scratch on this music dataset and it’s struggling to learn basic syntax Seabass#0062: 3M steps to actually generate anything musical Seabass#0062: Is the 1.5B model larger than it needs to be? Not sure what would be the best solution here Seabass#0062: For reference, my dataset after curation is only 9M tokens Seabass#0062: I was thinking I could retrain GPT-NEO but the 1.3B model runs out of memory on the P100 in Colab kagankorkmaz#7630: Hello all, do you have any suggestions about finding outliers of a text data, my dataset is mostly consist of texts like medium articles sin yee#3513: Do you know any **text line annotation tool**? Can annotate every line type (header, content, footer), and line class... I Google search for it but seemed can't find any. The screenshot is cap from a 2019 Paper 'RESUME INFORMATION EXTRACTION WITH A NOVEL TEXT BLOCK SEGMENTATION ALGORITHM' https://cdn.discordapp.com/attachments/922424173916196955/963967086336548925/Annotation_tool.png Robert1#0234: How can I make a gpt transformer model into 8 bit quantisation? ChainYo#3610: You can qunatize models with onnxruntime, and maybe optimum has this kind of feature ChainYo#3610: You will need your data to quantize any kind of model (not like when you convert a model) Mark#9079: when feeding bart two separate text inputs, should they be separated by `</s>` or `</s><s>`? V3N0M#5374: is there any nlp model, that can say if 2 pharases (like I list below) are kinda similar https://cdn.discordapp.com/attachments/922424173916196955/964795894471995432/unknown.png
V3N0M#5374: in the image, (within the red box), I have 2 phrases. In this case, I expect more than 50% confidence that they are similar mr_seeker#1337: BERT type models? Mark#9079: Maybe try out sentence transformers? https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2 V3N0M#5374: Any actually V3N0M#5374: Checking Mark#9079: You can for instance use cosine similarity between the sentence embeddings V3N0M#5374: I think this could be usable This is for a hackathon, just looking for how to do these kind of comparasions https://cdn.discordapp.com/attachments/922424173916196955/964833852977713222/unknown.jpeg V3N0M#5374: I learnt how to use transformers and related stuff just to do this, i don't have any NLP experience 😅 V3N0M#5374: https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2 This model is working very well for me, thanks! Robert1#0234: anyone know of anyone having made an 8 bit fairseq model before? mr_seeker#1337: There have been attempts, but no HF implementation. Robert1#0234: how about a public GPTJ 8 bit? I found one from hivemind but I think their optimisation is done in a way which doesnt benefit inference speed mr_seeker#1337: Hivemind's implementation requires you to use an x86 machine, which does not benefit. generic#8192: what's the best HF model architecture to use if I want to train a transformer-based translation model from scratch with custom source and target tokenizers? Merve#3234: I feel like any sequence to sequence model, e.g. T5 generic#8192: ok cool :) I was just a little unsure about BART vs T5 vs BigBird etc
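On the 8-bit question above: ONNX Runtime's dynamic quantization converts the weights to int8 after export and needs no calibration data (static quantization is the variant that does). A minimal sketch — `model.onnx` stands in for the exported graph, and whether a 6B+ model keeps acceptable quality after this is something you'd have to measure:

```python
# Post-training dynamic (weight-only) int8 quantization with ONNX Runtime.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="model.onnx",        # exported FP32 graph
    model_output="model-int8.onnx",  # quantized output
    weight_type=QuantType.QInt8,
)
```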
myishere#4080: I am working on question answering and using bert-large-finetuned-squad using tensor flow. How to run the model with length of context being more than 512? Any work or reference on how to do ? Robert1#0234: bad_word_ids slows things down quite considerable for text generation model. Is this expected. Should be quite quick really? malcolm#7814: Can anyone point me toward a resource for post-processing model output? For example, taking the ALL CAPS output of asr language models and transforming it into properly capitalized text? (e.g. capitalizing named entities, etc). Thanks! malcolm#7814: Secondary question would be whether there are trained models for e.g. inferring proper location of punctuation (periods, capitalizing words, etc). I can think of a number of architectures that would work for training such a model from scratch, but it seems likely something like this already exists cakiki#9145: https://huggingface.co/felflare/bert-restore-punctuation malcolm#7814: Amazing, thank you cakiki#9145: Sure thing! cakiki#9145: I'm not 100% sure about this, but this sounds like something to be implemented in the `Decoder` part of the tokenization pipeline (https://huggingface.co/docs/tokenizers/python/latest/components.html#decoders); maybe others have a better grasp of this and can confirm? malcolm#7814: Yeah confirmation would be great. I was looking into the decoder docs but it's unclear to me whether some sort of meaningful punctuation/capitalization/etc can be baked into the decoder layer or whether it should be done as a separate postprocessing step. Thanks! NielsR_#8974: You can take a look at Transformers that work on longer sequences, including Nyströmformer, LongFormer, BigBird and YOSO NielsR_#8974: We've just released new checkpoints for nystromformer for sequence lengths of 2048 and 4096 tokens: https://huggingface.co/models?other=nystromformer SilentDragon / Oliver#3271: Hi everyone, the "ElectraForPreTraining" Model can not be used for pre-training right? I would need to use the official pre-training repo from google and use the conversion script? Also does anyone know if there is a TF2 version of this official repo already? sin yee#3513: For CNN word embedding layer, how do you decide whether to use GloVe or Word2Vec? Task: multi-label text classification with small data duyduong9htv#5085: I usually start with a pre-trained GloVe and set the embedding layer to be trainable. Artem Bardakov#5056: I usually start with a pre-trained FastText - that solve OOV problem + multilingua nickmuchi#2844: Hi there, wondering if it is possible to use FastText embeddings with Sentence Transformers? Wanted to use embeddings for financial text instead of the default ones that might not represent finance lexicon well. Any code examples would be great, thanks
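For the >512-token question above: besides switching to one of the long-sequence models mentioned, the standard SQuAD-style trick is to split the context into overlapping windows. A sketch, with `question` and `long_context` as placeholders:

```python
# Split a long context into overlapping windows and run QA on each window.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")

inputs = tokenizer(
    question,                      # placeholder: your question string
    long_context,                  # placeholder: context longer than 512 tokens
    truncation="only_second",      # truncate only the context, never the question
    max_length=384,
    stride=128,                    # overlap between consecutive windows
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
    padding="max_length",
    return_tensors="pt",
)
# Each row of inputs["input_ids"] is one window; run the QA model on every window
# and keep the answer span with the highest combined start+end score.
```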
Duplighost#6232: Hi, I pass my data through a BertModel and get list of the average weights for each feature of my data. I put these numpy arrays into a dataframe that I use as my x data. How do I pass this dataframe into a sklearn fit function if the elements of the dataframe aren't scalar? Mark#9079: What are the elements if they are not scalar? Duplighost#6232: lists of weights for each feature Duplighost#6232: it just occured to me that i should probably make a column for each feature Mark#9079: yep 😄 Duplighost#6232: ok cool thanks Mark#9079: https://cdn.discordapp.com/attachments/922424173916196955/968260392851750952/unknown.png Mark#9079: Does anybody have any tips for visualizing attention weights with HF transformers? Specifically, I'd like to a visualization of attention in the encoder of T5 and BART for a given input sequence NielsR_#8974: You can take a look at BertViz and LiT: https://github.com/PAIR-code/lit NielsR_#8974: https://github.com/jessevig/bertviz Mark#9079: Thanks! Duplighost#6232: Hey I'm using a BertModel, I was wondering if any of you could take a look at my project and see if there's a way that I could improve my training accuracy? It's only about 55% when my labels are generalized as positive negative or neutral and 22% otherwise. and I'm not sure whether or not that's normal for this data. Here's the project https://cdn.discordapp.com/attachments/922424173916196955/968887911560908891/Aidan_ORourke_Honors_Project_4.ipynb Duplighost#6232: Here's the data Duplighost#6232: https://competitions.codalab.org/competitions/21163 Duplighost#6232: The testing accuracy wasn't very good for the paper that I based this on, but mine is slightly worse when I don't concatenate my labels like I've done here (it's only about 20% vs the paper's 30%. Testing accuracy with generalized labels is about 55% as well) Duplighost#6232: But the paper doesn't talk about the training accuracy so I don't know if it should be higher Duplighost#6232: https://aclanthology.org/D19-1475.pdf
Duplighost#6232: This is the paper Duplighost#6232: I'd really appreciate the help! Duplighost#6232: I did some more inspection on different id types and there seems to be a difference in misclassification depending on what direction it goes in, but the ratio of how much stuff is misclassified as one thing vs another is about the same. Duplighost#6232: and so is the similarity between the training and testing accuracies yurii#0740: Hi everyone! I already posted my question in #ask-for-help channel, but maybe it also makes sense to post it here as my question is related to NLP. Currently, I'm building a search engine for job titles from various industries. I have two sources of data: 1. Job descriptions (job title with the corresponding job description) 2. 5k job titles labelled with the related job titles. I'm experimenting with S-BERT with CosineSimilarityLoss or MultipleNegativeRankingLoss to make the domain adaptation and get better embeddings. I'm 5k labelled job titles as the dataset for that. But the results aren't great. Maybe someone was working on a similar task and can give some suggestions on what I can do better? Maybe I can use job descriptions, but to my mind, it doesn't make much sense in terms of calculating semantic similarity for titles. I really appreciate any help you can provide. Buns#2228: Honestly dont discount the power of an inverted index! Its such a simple method that works wonders. Especially if you can extra unique job titles using a fine tuned NER model (i.e: Senior Software Engineer vs Software Engineer) https://github.com/Hevia/workshops/blob/master/knighthacks_ucf/search-engines-knighthacks-2020/example.py <- Here is some example code I made for a search engine workshop I gave back during university Buns#2228: Im currently working on a search engine for academic papers. I am using keyphrase extraction + inverted index for a first pass approach. You can later augment your approaches using some sort of semantic search. Pinecone has a great course on semantic search too: https://www.pinecone.io/learn/dense-vector-embeddings-nlp/ Duplighost#6232: hey guys, i'd really appreciate some insight on my project too!
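For yurii's job-title matching, a sketch of fine-tuning an S-BERT model with MultipleNegativesRankingLoss on (title, related title) pairs — `related_title_pairs` is a placeholder for the 5k labelled set, and the base checkpoint is only an example:

```python
# Domain adaptation for job-title embeddings with MultipleNegativesRankingLoss.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")

train_examples = [InputExample(texts=[a, b]) for a, b in related_title_pairs]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=64)

# MNRL treats the other titles in the batch as negatives, so larger batches tend to
# help, while duplicate or near-duplicate titles inside one batch hurt it.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_loader, train_loss)],
    epochs=3,
    warmup_steps=100,
)
```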
cakiki#9145: What are you working on? 🙂 Duplighost#6232: i posted it above cakiki#9145: ah, sorry! Missed it. Duplighost#6232: just scroll up a little bit Duplighost#6232: you're good Duplighost#6232: i'm just wondering if the training data should be as inaccurate as it is cakiki#9145: it's a bit difficult to follow without any description 🙂 cakiki#9145: but what do you mean with the data being inaccurate? Duplighost#6232: what do you need to know? Duplighost#6232: the training results sorry Duplighost#6232: it's only about 55% accurate when i generalize the labels cakiki#9145: have you tried a non-neural approach to see how that compares? Duplighost#6232: i need to do this using bert, unfortunately. unless you don't mean how the data is encoded cakiki#9145: that is indeed what i meant. It would still be useful to have a simple baseline to compare the BERT approach to Duplighost#6232: what would you recommend i use to compare it? Buns#2228: naive bayes or something. Just anything easy to spin up Duplighost#6232: and this is for encoding? Duplighost#6232: ohhhh Duplighost#6232: wait Duplighost#6232: i thought you meant encoding not the actual algorithm
Duplighost#6232: i'm using an svm already and that's not a neural network Duplighost#6232: it's about the same Buns#2228: Is your BERT model outperforming the SVM? JonathanSum#8528: How much CPU memory is required for DistilBERT question and answering? I am planning to deploy it on mobile. If no answer, I will try to test it on colab, but I am not sure it is the same as the mobile. ChainYo#3610: with tflite or ONNX ? Duplighost#6232: i use the bert model to encode it and i pass the encoded data from the dataframes into the different algorithms JonathanSum#8528: With the Pytorch. But I may use the ONNX later. JonathanSum#8528: Currently, my working version is using a Pytorch model for mobile, but I will try the ONNX way since it does not have any issue too. JonathanSum#8528: I am currently using it on React Native for Android and IOS. Duplighost#6232: so there's really nothing to outperform Buns#2228: Ah ok I think I am poorly communicating my suggestion. The paper doesnt discuss training acc, and you want to know if your model's training acc is "good". I am assuming that is the question A good way to see if your BERT model is actually doing well, is to compare it to another approach. A good model to compare it to for text classification tasks is Naive Bayes. If your BERT model outperforms the naive bayes model on text classification training & testing acc. You can feel confident knowing your BERT model is pretty good! If youre looking to improve training/testing acc that is a different question (I have not looked at your code but what I would do is: fine-tune on my domain specific text, train on my classification task, adjust training params) but 55% for a dataset for automatic claim verification seems like a pretty good result imo. Large language models really struggle with generalizing on this task. Duplighost#6232: ah ok, thank you! JonathanSum#8528: https://www.youtube.com/watch?v=tsIeScaTluA I saw colab showed 512 length bert can cost 2GB+ ram. For this distilbert, I guess it will be 1GB+, according to that google sheet**????**
I used the Pytorch for mobile on React Native(Javascript) on Android(did not test on IOS yet). Pratibha#7658: Hi Pratibha#7658: Can some one suggest me which model to Use for " out of domain Intent detection " apart from BERT ? Buns#2228: Any model can do this. Do you have a specific dataset youre looking for a model to have been trained on? You can search the hub that way myishere#4080: I am making question answering using longformer model from hugging face. I would like to know how can i extract the complete sentence as the answer not just 2-3 words. And get probability score too of the answer. Merve#3234: did you ever try a fallback threshold? yaswanth#1616: do we have any multilingual regex matchers? JonathanSum#8528: Hi. Could you tell me the memory ram usage for DistilBert QA model in tflite or ONNX? JonathanSum#8528: On colab, I see it needs 200+MB for 512 len and 100+MB from the transformer library distill Bert QA model. JonathanSum#8528: For anyone interested in my testing, please free to check this notebook: https://colab.research.google.com/drive/1DC_qGiL8zYGYreLb9BUwvZEMm5iXSZeH?usp=sharing I only tested on cpu because I guess React Native using Pytorch does not support GPU. And this is often true for web. For the last model test with 360 seq len, it was for Pytorch mobile quantization and mobile optimization. The funny thing is I did not even see a memory usage. And that is what I saw when I test on my OnePlus7. JonathanSum#8528: https://cdn.discordapp.com/attachments/922424173916196955/969318394061013022/unknown.png JonathanSum#8528: Pytorch quantization and mobile optimization model does not need RAM? https://cdn.discordapp.com/attachments/922424173916196955/969318616178753556/unknown.png ChainYo#3610: most of the time it's the file size ChainYo#3610: and you need to add your data
Groog#0031: hey folks, hoping you can help me out! I'm using a pretty beefy VM (4xA100) to experiment with a very large model (T0pp, 11B parameters) and I'm trying to run the fine-tuning but I keep getting out of memory, even though I'm pretty sure the vRAM should be sufficient (model authors recommend 8xV100). Thing is, there's still quite a bit of memory left when the crash happens (see attached nvidia-smi output image) but what I suspect happening is that the "chunk size" sent to each card is too big, causing the first card to fill up whilst there's still a lot more memory left. So my question is, is there some setting I can use to manipulate how the data is "chunked"? (also is that even the correct term?) https://cdn.discordapp.com/attachments/922424173916196955/969529773879603220/unknown.png mr_seeker#1337: I generally run with deepspeed, and that automatically takes care of the "chunking" for me. Groog#0031: I will look into it, thanks 🙂 Duplighost#6232: is it normal for a bertmodel to be less accurate than a tfidfvectorizer? Duplighost#6232: i'm using them to classify whether or not articles are reliable. My BERT model gets an average value for the words of each feature of an article, and the tfidfvectorizer takes a combination of all those features into a string and turns it into a vector of weights for those words. Duplighost#6232: As soon as my code is done running i can post it and show you what i mean Duplighost#6232: but i was just wondering if tfidfvectorizers were typically more accurate than bertmodels B2HAN#7196: from what i know , it is pretty strange. TFIDF vectors doesnt have any contextual meaning. Their values are only affected by the number of the words. However in the transformer architectures word embeddings are way better in the context of representation B2HAN#7196: are you using pretrained bert ? Duplighost#6232: yes Duplighost#6232: bert-base-uncased Duplighost#6232: i can show you my code and how it compares once it's done running Mark#9079: I don't know the answer, but my guess is that averaging every bert embedding of the document is a little too lossy. I.e. you might lose much information form averaging all the embeddings , whereas the tf-idf bag of words at least always keeps the presence of each word Mark#9079: do you get any different performance when using the cls token? And are you using logistic regression for these encodings? Duplighost#6232: i'm using the cls token but i haven't really compared it to anything else. Idk if i'm encoding it really, i'm just sending each new addition through a tokenizer and a model. i can post the code here once it's done running Duplighost#6232: @Mark Duplighost#6232: It's not technically finished running but i wanted to get it to you now since i have a general idea of what it's gonna look like @Mark @B2HAN https://cdn.discordapp.com/attachments/922424173916196955/969668821168304158/Aidan_ORourke_Honors_Project_4.ipynb Duplighost#6232: if you have any ways that i could improve my bert accuracy or a good reason why i can't i'd really appreciate it.
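On Groog's out-of-memory issue: the lopsided fill is typical when a model is split naively across cards, and the DeepSpeed route mr_seeker mentions shards parameters, gradients and optimizer state instead of copying or stacking them. A rough sketch of a ZeRO-3 setup through the Trainer — the values are a starting point, not a tuned recipe:

```python
# Shard an 11B model across the 4 GPUs with DeepSpeed ZeRO stage 3 via the Trainer.
# Launch with the deepspeed launcher (e.g. `deepspeed train.py ...`), not plain python.
from transformers import TrainingArguments

ds_config = {
    "zero_optimization": {"stage": 3},   # add CPU offload here if it still doesn't fit
    "bf16": {"enabled": True},           # A100s support bf16
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,
}

args = TrainingArguments(
    output_dir="t0pp-finetune",
    per_device_train_batch_size=1,       # must match the DeepSpeed micro batch size
    gradient_accumulation_steps=8,
    deepspeed=ds_config,                 # also accepts a path to a JSON file
)
```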
Buns#2228: Did you fine-tune your BERT model on your domain text before training it for classification? That tends to improve performance (I cant download your code to view if you did) Duplighost#6232: uh hang on Duplighost#6232: here's the code for how i read it in Buns#2228: This isnt that crazy of a result, classical techniques are powerful! In industry we still use many of them, its not all Transformers out there 😉 Duplighost#6232: ``` pomt_train_x = pd.DataFrame(columns=["claim", "reason", "category", "speaker", "checker", "tags", "claim entities", "article title"]) feature_dict = {1: "claim", 4: "reason", 5: "category", 6: "speaker", 7: "checker", 8: "tags", 9: "claim entities", 10: "article title"} # for i, data in enumerate(training_data[training_data.columns].to_numpy()): for i, data in enumerate(training_data[training_data.columns].to_numpy()): if 'pomt' in data[0]: appended_data = {} for j, sentence in enumerate(data): if j in feature_dict: inputs = tokenizer(str(sentence), return_tensors="pt", max_length=512, pad_to_max_length=True).to(device) outputs = model(**inputs) appended_data[feature_dict[j]] = torch.mean(outputs.last_hidden_state[:,0]).cpu().detach().numpy() pomt_train_x = pomt_train_x.append(appended_data, ignore_index=True) print(f"{i + 1} out of {training_data.index.stop} from training")
count = 0 # append testing data to training data for i, data in enumerate(testing_data[testing_data.columns].to_numpy()): if 'pomt' in data[0]: appended_data = {} for j, sentence in enumerate(data): if j in feature_dict: inputs = tokenizer(str(sentence), return_tensors="pt", max_length=512, pad_to_max_length=True).to(device) outputs = model(**inputs) appended_data[feature_dict[j]] = torch.mean(outputs.last_hidden_state[:,0]).cpu().detach().numpy() pomt_train_x = pomt_train_x.append(appended_data, ignore_index=True) print(f"{i + 1} out of {testing_data.index.stop} from testing") count += 1 ``` Duplighost#6232: here @Buns Duplighost#6232: i don't know *exactly* what you mean by fine tuning it before training it but if i'm right i *think* my answer is yes Duplighost#6232: the loop is there twice because i'm reading training and testing data into the same data frame then splitting it later Buns#2228: Nope looks like youre just training on your downstream task Buns#2228: https://huggingface.co/course/chapter3/1?fw=pt
Buns#2228: Huggingface course shows fine-tuning Duplighost#6232: where should i add that stuff? Buns#2228: I personally might be slightly wrong on the words I am using here, but in general what I have done before is: Use a pretrained model from the hub -> fine tune the model on my domain specific corpus -> train it for the classification task Mark#9079: I think fine-tuning refers to training a pre-trained model on a downstream task Mark#9079: Thats at least how i've undestood the term Buns#2228: Yeah I just realized I am slightly confusing duplighost here lol! but I described my method above ^ Duplighost#6232: yeah i am a bit confused haha Duplighost#6232: do i do this before i read in my data or after Mark#9079: Is training bert directly for the classification dataset out of scope? If not, I would try that Duplighost#6232: what do you mean out of scope Mark#9079: It looks like what you're doing is some sort of assignment or project Duplighost#6232: yes Duplighost#6232: so far what i have is Duplighost#6232: use a bert tokenizer and bert model to get the average values for each feature and put them in their respective columns in the dataframe. then train that dataframe using other classifiers like logistic regression, svm, perceptron, etc Duplighost#6232: but to answer your question nothing's really off the table i guess Duplighost#6232: so i'm not using bert to train my data Duplighost#6232: just kind of encode it Mark#9079: I see. My point is: If you want to leverage the full power of BERT, you should train (i.e. fine-tune) it directly. Duplighost#6232: given the code i showed you how easy do you think that'll be to implement
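A sketch of the "train BERT directly" route Buns and Mark are describing, i.e. fine-tuning a sequence-classification head end to end instead of averaging frozen embeddings — dataset and column names are placeholders:

```python
# Fine-tune BERT end-to-end for 3-way claim classification.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

def tokenize(batch):
    # Concatenate claim + metadata into one "text" field beforehand, or pass two
    # fields as a pair: tokenizer(batch["claim"], batch["reason"], ...)
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds = raw_train_ds.map(tokenize, batched=True)   # datasets.Dataset with "text" and "label"
eval_ds = raw_eval_ds.map(tokenize, batched=True)

args = TrainingArguments(output_dir="bert-claims", evaluation_strategy="epoch", num_train_epochs=3)
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds, tokenizer=tokenizer)
trainer.train()
```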
Mark#9079: Check out this tutorial https://huggingface.co/course/chapter3/3?fw=pt I don't think it is so difficult if you use the transformers Trainer API. But of course it depends Mark#9079: Oh wait I just realized @Buns posted the same thing earlier 🤦 Duplighost#6232: that's ok Duplighost#6232: so do i implement this trainer after i've take all of these averages like the code i've posted above? Mark#9079: If you train bert directly, you dont need to take averages. Using BertForSequenceClassification, you can classify the label directly Duplighost#6232: okay so i just send it through the model if i do that? Duplighost#6232: i'm running out of memory on cuda when i try this. i don't know how to get the last hidden state either Mark#9079: How much gpu memory do you have? Try setting your batch size to 1 and see if you still get the error Mark#9079: And just use the output logits for prediction Mark#9079: You had 3 classes, right? Then make sure to specify num_labels=3 when creating the bert model Mark#9079: `model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)` Mark#9079: This gist might be nice to check out too. https://gist.github.com/vincenttzc/ceaa4aca25e53cb8da195f07e7d0af92#file-trainer_train_predict-py It uses torch datasets which, imo, is a little bit easier as im more familiar with them Duplighost#6232: I’m just going to not use BertToSequenceClassification so there’s one less thing for me to figure out. However when I call my train function on the trainer I get a key error where the key is just a random number every time. Is there a way to avoid that? Duplighost#6232: I’m not at home right now so I’ll have to post the code I used a bit later but the trainer itself is almost the same as the one from that documentation. NULL#3726: Has anyone worked on predicting next bash command
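On the KeyError with a seemingly random number: that usually means something is being indexed by pandas label instead of position (for example a train/test split whose index was never reset). Wrapping the tokenized encodings and labels in a plain torch Dataset, as in the gist Mark linked, sidesteps it — a sketch:

```python
# Positional-indexed dataset for the Trainer: encodings from the tokenizer plus labels.
import torch

class ClaimsDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings     # output of tokenizer(texts, truncation=True, padding=True)
        self.labels = list(labels)     # plain list, so index 0..len-1 always works

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item
```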
NULL#3726: sounds similar to next sentence prediction V3N0M#5374: https://github.com/tom-doerr/zsh_codex V3N0M#5374: Not exactly prediction, but something NULL#3726: gpt3 NULL#3726: not opensource NULL#3726: and that's something like next word prediction. Duplighost#6232: can i put a dataframe in a trainer as the train_dataset and eval_dataset? Duplighost#6232: a pandas dataframe NielsR_#8974: You can easily turn a Pandas dataframe to a HuggingFace Dataset object using the from_pandas method: https://huggingface.co/docs/datasets/v2.1.0/en/loading#pandas-dataframe Duplighost#6232: What does this error mean? Does it mean I can't have multiple columns in my dataframe? https://cdn.discordapp.com/attachments/922424173916196955/969990168436166656/unknown.png NULL#3726: How much data is needed to train a bert from scratch? NULL#3726: (tiny, not large) Duplighost#6232: So I think I can't do it this way because I have multiple columns. Is there a way to get around this? I can't use a pyarrow schema for some reason NULL#3726: It does work for multiple columns NULL#3726: https://cdn.discordapp.com/attachments/922424173916196955/970049539727826944/unknown.png Duplighost#6232: oh it does? Duplighost#6232: huh Duplighost#6232: then why was i getting that error Duplighost#6232: here's the code @NULL Duplighost#6232: some things have been changed (i'm messing around with a bertforsequenceclassification) but it's mostly the same
NULL#3726: too complex NULL#3726: could u share a notebook with just the dataset part Duplighost#6232: hang on i can post the code Duplighost#6232: oh um Duplighost#6232: actually i got rid of it because it wasn't working Duplighost#6232: the error should have what i used as input for my trainer i guess Duplighost#6232: i know it's not a lot to go off of but i've gotta go do something real quick Duplighost#6232: Hey guys I really need help. I'm trying to pass a dataframe where each element in a column is a nd array of nd arrays (i think). I need to keep the whole sentence so that info isn't lost. Could you take a look and help me? I really need to get this done. Duplighost#6232: https://cdn.discordapp.com/attachments/922424173916196955/970346509579206656/Aidan_ORourke_Honors_Project_5.ipynb Metroproxyn#2769: Hi, I'm trying to find a book about general approaches & libraries in NLP. Could you give me some tips? Omar Sanseviero#6198: https://web.stanford.edu/~jurafsky/slp3/ is a very famous book, but it's not super practical. There is also the Hugging Face course if you want something much more practical Duplighost#6232: Please I've been getting this error about setting an array element as a sequence for a long time when i try to do it this way and i can't figure out how to change it so that it works Duplighost#6232: I don't know what's wrong Buns#2228: Is this for a class? Are there no TAs who can help you? Duplighost#6232: It's an honors project, so the tas aren't really trained on how to do this Duplighost#6232: otherwise yes i would ask them Buns#2228: I would also post your question on stack overflow or Google your error to see what similar results come up Duplighost#6232: i've googled my error a lot Duplighost#6232: I checked to see if all the elements in the arrays that i have are the same length (they are) and it's still not working Duplighost#6232: that's the main thing every error i've looked at has said
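On the "setting an array element with a sequence" error: scikit-learn's fit() needs a single 2-D numeric matrix, not a DataFrame whose cells are arrays. Assuming each cell of `pomt_train_x` holds a fixed-length 1-D vector, stacking per column and concatenating is one way out (if the cells are plain scalars, `pomt_train_x.to_numpy(dtype=np.float32)` already does the job):

```python
# Flatten a DataFrame of per-feature embedding vectors into the 2-D matrix sklearn expects.
import numpy as np

feature_blocks = [np.stack(pomt_train_x[col].to_numpy()) for col in pomt_train_x.columns]
X = np.hstack(feature_blocks).astype(np.float32)   # shape: (n_rows, n_features * vector_dim)
```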
Duplighost#6232: please idk when any TAs or other people would get back to me idk what to do Buns#2228: I understand this is hard and youre not sure how to solve this. Someone here will chime in if they can help you but you don’t need to beg for help. I would take a break and try to tackle this after some relaxation. I’m sure you’ll get it! Solving hard problems is always tedious Duplighost#6232: I've been trying to get it for weeks Duplighost#6232: I've had plenty of relaxation time with all due respect Duplighost#6232: I appreciate that you're conscious about that though Duplighost#6232: 🙂 Metroproxyn#2769: Thank you, Omar! I'll take a look at both book and course. Mark#9079: I tried to visualize the attention weights in the encoder of T5 (UnifiedQA) for question answering. I did this by taking the mean of all attention weights in the model, and got this output. Does anybody know why the values are concentrated at the last token? Is this a known property of transformer models? https://cdn.discordapp.com/attachments/922424173916196955/970398951247331398/matrix.png Pratibha#7658: No Merve#3234: that's easiest way to handle fallbacks 🙂 Mark#9079: Does anybody know where I can find the full list of all tasks that T5 is pre-trained on? cakiki#9145: The paper Mark#9079: Where in the paper? cakiki#9145: "2.3 Downstream Tasks" Mark#9079: I see. It was kinda hard to tell because "Our goal in this paper is to measure general language learning abilities. As such, we study downstream performance on a diverse set of benchmarks, including machine translation, question answering, abstractive summarization, and text classification"
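A sketch of how attention maps like the one Mark posted can be extracted from a T5-style encoder with output_attentions=True and then averaged over layers and heads; the UnifiedQA checkpoint name is only an example. One thing worth checking is whether that last column corresponds to the </s> end-of-sequence token, since special tokens often end up acting as attention sinks, which could explain the concentration at the final position:

```
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "allenai/unifiedqa-t5-small"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("which is heavier, a kilo of iron or a kilo of feathers?",
                   return_tensors="pt")
with torch.no_grad():
    enc_out = model.get_encoder()(**inputs, output_attentions=True)

# enc_out.attentions is a tuple with one (batch, heads, seq, seq) tensor per layer
attn = torch.stack(enc_out.attentions)      # (layers, batch, heads, seq, seq)
mean_attn = attn.mean(dim=(0, 2))[0]        # average over layers and heads -> (seq, seq)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(tokens)        # check which position the mass concentrates on (often "</s>")
print(mean_attn)
```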
I wasn't sure if this was for evaluation after fine-tuning or after incorporating into pre-training Mark#9079: You could say this about a self-supervised model with no supervised pre-training, which would imply a fine-tuning step first Duplighost#6232: I got my project done yesterday, thanks everyone for your help! cakiki#9145: Well done, congrats! Seabass#0062: What is the best way to deploy a model for production? I have a 500Mb custom trained GPT-NEO model. I want to deploy to EC2 but their instances with GPU acceleration are all prohibitively expensive and provide way more memory than I need. Buns#2228: Huggingface has an inference platform that is very good and performant I’ve heard Seabass#0062: does anyone have any experience with it? I am trying to keep my cloud compute costs low Zippy#1111: There are cheaper options, like paperspace, or datacrunch Pratibha#7658: Hi merve , can you please guide or suggest any research paper link or GitHub link for fallback threshold classification of our of domain intent Solamino#4324: I have to train gpt2 from scratch in non-English language and then fine-tune it on dialogue generation tasks. it's for my master's thesis. when I train and evaluate, the CUDA out-of-memory error keeps interrupting me. i used google collab pro+ 15GB GPU and 54GB RAM may I get help here? Merve#3234: it's simple, if your model predicts the best predicted class below a threshold you can just flag that sample 🙂 Merve#3234: there are nice workarounds here by @Leandro von Werra for when you want to pretrain a generative model 🙂 https://huggingface.co/blog/codeparrot Solamino#4324: Thank you very much Versipellis#1100: If I was writing a new model that builds on, say, BERT, but adds additional parts to the architecture, how might I transfer learn using the pretrained BERT weights? Would I just load the pre-trained model weights to that part of the model? Is that sufficient? mr_seeker#1337: So, you want to add extra layers to the original model? Mark#9079: Yep, that should work Versipellis#1100: Yeah, or train a joint vision-language model, but use the pretrained BERT weights to initialize my model with Josue#7577: Hey guys maybe someone can offer me some guidance here. I'm trying out different summary models like GPT-J, and using the model.generate() to pass in my input and infer the model. I have dug through the docs for ways to insert stopping criteria or early termination tokens into the generate() method but I'm having trouble
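Going back to Merve's fallback suggestion a few messages up, a minimal sketch of flagging predictions whose top softmax score falls below a threshold; the checkpoint name and the 0.7 value are placeholders to be tuned on validation data that includes out-of-domain examples:

```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "your-intent-classifier"   # placeholder: your own fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

THRESHOLD = 0.7   # placeholder value, tune it on held-out data

def classify(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    score, idx = probs.max(dim=-1)
    if score < THRESHOLD:
        return "out_of_scope"     # flag / hand off instead of guessing an intent
    return model.config.id2label[idx.item()]
```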
Josue#7577: would anyone be able to point me in the right direction for something like this Mark#9079: What are you trying to do exactly? Set a max token length? Josue#7577: I'm simply trying to use gpt-j generate responses back to me in a chat-like way. The issue I'm running into is that whenever I generate a response with the models, the model ends up generating a response for both itself and me. Josue#7577: I've read tons of how to handle this and the way to go is by adding an early stoppage token so that when I call generage() the model stops trying to fill in answers for myself Josue#7577: But I wasn't able to find anything in the documentation on how to add early stoppage tokens into generate mr_seeker#1337: Models like blenderbot and fairseq use the EOS token to indicate the end of a sentence and is also trained that way. Might help? Josue#7577: I'll check out their implementation and usage 👍 Muhammad Agung#7254: Anyone have an example of converting gpt2 into onnx model or tflite? Razvanip#0466: on which gpu was wav2vec2 trained? ChainYo#3610: Hey @Muhammad Agung It's easy and automatic with `transformers.onnx` command, check this https://huggingface.co/docs/transformers/serialization vivekatwal#3689: Hey All, any idea on models that do both ner and relation extraction. Metroproxyn#2769: Do you guys know any other NLP communities? ᴛᴇɴꜱᴏʀᴄɪꜱᴛ#0703: i'm coding along with the NLP HF book, and for PEGASUS model for summarization on page 145. i got an Value Error. ᴛᴇɴꜱᴏʀᴄɪꜱᴛ#0703: i'm not quite sure i understand why, i used ```replace(".<n>, ".\n")``` ᴛᴇɴꜱᴏʀᴄɪꜱᴛ#0703: in the end of pipeline ᴛᴇɴꜱᴏʀᴄɪꜱᴛ#0703: Error is: ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. ᴛᴇɴꜱᴏʀᴄɪꜱᴛ#0703: in the book it's written that replace should be used instead of tokenizer.
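On Josue's question about stopping generate() at the end of a turn: one hedged option is to pass eos_token_id pointing at whatever token ends a turn in your prompt format (a newline here), so the model stops before it starts writing the user's next line. The prompt layout and the newline stop token are assumptions; GPT-J is just the example model and any causal LM works the same way:

```
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "EleutherAI/gpt-j-6B"   # example; swap in any causal LM
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "User: How are you today?\nBot:"
inputs = tokenizer(prompt, return_tensors="pt")

# Assumed turn format: each turn ends with a newline, so stop on the newline token
newline_id = tokenizer.encode("\n")[0]
output = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=newline_id,
)

# Only keep the newly generated tokens, i.e. the bot's reply
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```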
ᴛᴇɴꜱᴏʀᴄɪꜱᴛ#0703: so i don't know what to think, sorry i had to ask here, maybe somebody has the idea Moh#6873: Anybody has a link for a tutorial that helps with building chatbots using hugging face 🤗 ? Especially in foreign languages like French 🇫🇷 and German 🇩🇪 cakiki#9145: Started a thread. Merve#3234: So chatbots are not much within scope of Hugging Face (sort of) there are two types of chatbots, intent-action based chatbots which are hybrid of AI-rule based methods and is recommended because you will have better control over what your end user sees and get to automate your processes. for this one you have an intent classification model which you can support with BERT-like models. second type is pure generative model which is like Blenderbot or GPT-2. use them just for fun and if you know how decoding works 🙂 Merve#3234: I wrote a blog post here on how you can use hugging face for chatbots https://huggingface.co/spaces/merve/chatbot-blog it's interactive, you can try the models right away with Inference API Merve#3234: I used to build conversational agents and if you want to build an intent-action bot best is Rasa OS, it also gives you various architectures you can train and works with HF models lbourdois#8829: It's getting a bit old, but when I was interested in the question, I based myself on this one (based on gpt2 and HF): https://nathancooper.io/i-am-a-nerd/chatbot/deep-learning/gpt2/2020/05/12/chatbot-part-1.html It is in Spanish, so it could be adapted by changing the language to the one that interests you Kzzz#0411: Hello all! Anyone has suggestion on learning NLP with hugging face? Is there a repo I can follow? Josue#7577: Started a thread. Merve#3234: hf.co/course 🙂 alighte#0403: Hi. I'm looking into autoregressive models for code generation line by line - where the second line of code learns from the first etc... my intuition is transformers (gpt2 for code...) since transformers are autoregressive models but this still doesn't mean what I'm generating will be autoregressive, as in the actual code generation? haytam-don#7224: ok I understood everything untill this last part "this still doesn't mean what I'm generating will be autoregressive, as in the actual code generation?" haytam-don#7224: explain more alighte#0403: Yes, absolutely. So I want to generate a layer-by-layer autoregressive construction of code. This process is autoregressive. But I'm wondering if the use of transformers as a model in itself makes the process autoregressive. I'm looking at models to use (ex: CodeBERT, PLBART, CodeT5...) so I was hoping this could clear my thought process. haytam-don#7224: I'm not the best one person to answer your question, but I'll answer with as much as I know Transformers are autoregressive models, so obviously if you try to apply a transformer model on a dataset, it will an autoregressive approach, unless you to fundementally alter the transformer model
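As a quick way to try the purely generative kind of chatbot Merve mentions above, the conversational pipeline wraps models like Blenderbot; the checkpoint below is an English example, so for French or German you would swap in a checkpoint trained on that language or go the Rasa/intent route instead:

```
from transformers import pipeline, Conversation

# Example generative chatbot; not a replacement for an intent-action bot
chatbot = pipeline("conversational", model="facebook/blenderbot-400M-distill")

conv = Conversation("Hi! Can you recommend a book about transformers?")
conv = chatbot(conv)
print(conv.generated_responses[-1])
```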
haytam-don#7224: but for code generation, it is more of a prompt engineering rather an autoregressive process on the code itself haytam-don#7224: the prompts being the comments haytam-don#7224: like in github copilot alighte#0403: So the transformer approach will impact downstream tasks. For example: code generation is different from summarization, so in this case, your used case would not be autoregressive, although the mechanism (a transformer) is. haytam-don#7224: I see, you have a point there alighte#0403: Yes, that's what I've come about as well. Although my incentive was to prompt say with a line of code, much like AutoCoder: https://github.com/wangcongcong123/auto_coding alighte#0403: Yes, it's been a cycle of figuring out this difference so I know my approach - model to use. haytam-don#7224: oh ok alighte#0403: But thank you. Hopefully someone else can speak on this too 🙂 haytam-don#7224: I'm sorry I'm not really an expert on the topic, but I hope you'll find someone who can help alighte#0403: No worries. Hope so as well. raja#8254: Hi Folks, I have one doubt in Custom Entity creation on NLP for knowledge graph. How to create custom entities, if our document has many numerical value(discrete values). I can't remove those values as those are the key points for building a graph. I tried with many preprocessing techniques, but not getting desired result. If anyone has idea/insight to handle this, please provide your valuable comments. ponchatoula#4556: Anyone knows of an example project/repo for multilabel classification using text generation? Like T5 style of classification, but for one domain. (I know of the approach to make a target vector for everything, just trying out new stuff) simonhallqvist#5701: Hi guys, which multilingual translation model is best on HF hub nowadays? Last time I checked it was https://huggingface.co/facebook/m2m100_1.2B Still true? PROanjay#9985: hi everyone, I am working on a NLP based project . Which is using NER and I am stuck at a problem I have posted my query in stack overflow . if someone could help me out it would be really helpful. this is the problem link: https://stackoverflow.com/questions/72234795/extract-multiple-start-date-and-end-date-from-a-string-in-python Buns#2228: Can’t you use regex to extract the dates. Convert them to actual datetime classes and then do any processing? Buns#2228: Not sure why you need NER for extracting dates PROanjay#9985: Hi anthony i can extract dates with regex but how can i extract dates and label them its start date and end date ?that is the part i don't understand.
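Along the lines of Buns's suggestion, a rough sketch of pulling start/end dates out of an experience string with a regex and dateutil, then labelling them by position; the pattern and the "Present" handling are assumptions that would need adapting to the real resume formats:

```
import re
from datetime import datetime
from dateutil import parser

text = "Worked at Acme from Jan 2018 to Mar 2020, then at Foo Corp from April 2020 to Present."

# Hypothetical pattern: "<month> <year> to <month> <year or Present>"
pattern = r"([A-Za-z]{3,9}\s+\d{4})\s+to\s+([A-Za-z]{3,9}\s+\d{4}|Present)"

for start_str, end_str in re.findall(pattern, text):
    start = parser.parse(start_str)
    end = datetime.today() if end_str.lower() == "present" else parser.parse(end_str)
    months = (end.year - start.year) * 12 + (end.month - start.month)
    print({"start_date": start.date(), "end_date": end.date(), "months": months})
```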
I am using NER to extract experience portion string not to extract dates . NER is just used here to get all data of the fields present in the resume. ᴛᴇɴꜱᴏʀᴄɪꜱᴛ#0703: hello I need t5 model tuned for downstream summarization purposes, I write tech articles. yesterday I spent 13 hours trying to debug someones model just thrown in hf spaces, and I can't waste any second on models that are nonfunctional. all I need is a good example so I can write about it. please, guide me on this, I have nobody else to turn to. Buns#2228: Just look up how to convert strings to dates. The stack overflow answer you linked told you, but there are tons of ways to do it Buns#2228: Once it’s a date object you can easily use it for your calculations nurv#8755: hey there everyone nurv#8755: Is there any multi language source code language model that I can use to calculate language embedings across languages? nurv#8755: I'm using the flex-community/gpt-neo but it seems it only is trained with python nurv#8755: i'm looking that is close to codex (but obviously something much much smaller) PROanjay#9985: yheah I found a solution just need bit of changes thank you @Buns . Deleted User#0000: Hi everyone. Based on your experience, how often does data augmentation on NLP tasks actually helps? I'm still in my undergrad and on all of my previous projects involving NLP, I've always been disappointed with data augmentation for textual data. It often doesn't provide any improvement and just decrease the model's performance. There is a possibility that this might be because I've only used EDA and other simple augmentation techniques. I'm stuck on a problem where I can't reduce the accuracy variance with my training score just overfitting. I've tried adjusting my regularization layers and parameters but I can't seem to break through 89%. I'm considering augmenting my data for better generalization but I just can't seem to get myself into it because of my past results. Deleted User#0000: I also can't seem to find any paper showing strong evidence of augmentation providing significant improvements. It takes a pretty long time to train transformers so as much as possible, I would like to hear suggestions before I take the empirical approach and just test things out. Arsive02#8749: Hey everyone. Suppose I have the following criteria.... Data: " Hi, I am having trouble with creating a user account " Expected Output : " { Intent : " Complaint ", " Reason " : " Cannot create account " } " So the thing is, I am familiar with how to classify intents. I want to know where to look to figure out the later part. I need to output the reason ( or a brief note ) on why the intent is classified like that. I could think a possible scenario. A mix of summarisation, and Paraphrase matching, but that's not exactly what I want. It's more like understanding the context and generating the text based on it. It should output the reason even if the words ( or similar words ) are not present in the input.
Can you guys shed some light? I could create a dataset and proceed to train, but this must have a name. Any resources would help. If there is anything else like this on the internet, that'd do. Thanks in advance. Merve#3234: Hello, I worked a lot with intent classification and assuming that's a text classification model only, there's no way you can do it. What you can do is list the best 3 predictions and look for patterns among them 🙂 Merve#3234: and also seeing that your app is domain specific it's best not to use pre-trained models and train one yourself, even basic naive bayes would work (most of the time it's about data not about complexity of the model, and transformers are overkill given the sequences are usually short) Arsive02#8749: Oh, but isn't the reasoning output seems more like it should be context aware ? You mean, there's no way to make it context aware for a particular intent it has been classified ? Arsive02#8749: Yes, I understand. It depends on the data and you are right. I will try multiple possible predictions as you said. But how do i give a brief though ? Naive bayes certainly won't help as the reasoning isn't classification. Merve#3234: @Arsive02 it's indeed interesting, and I think the model you're looking for is a causal language model (like gpt-2) but I just don't know how you can integrate it to your pipeline. maybe pre-train a gpt-2 on your own text, but it's self-supervised so it's not class aware Arsive02#8749: Yesss ! I have been wondering the same thing. Merve#3234: my suggestion on classification was rather general 🙂 Arsive02#8749: Yeah sorry my bad. But yeah anyways, the classification isn't the problem as its already done. Integrating it with the later part does seems to be challenging Merve#3234: T5 might also be good. but what you could do is: have input data "I'm complaining about xyz is classified as complaint because.." and some output data about the reason, but this requires labeling 🙂 Merve#3234: I think it's hard, and your debugging shouldn't necessarily depend on generative models because they're very hard to get right Arsive02#8749: Labeling ;_; Okay. That makes sense. Arsive02#8749: Hmm.... We don't have much data on this matter. So need to think of it Arsive02#8749: Anyways, in case you have any insights on this, kindly let me know. Thanks for answering. Rand#8588: anyone know how can I run this model CAMeLBERT-Mix Poetry Classification Model ?
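A small sketch of the seq2seq formulation Merve describes above for intent plus a free-text reason, where each labelled example becomes an input/target text pair for a T5-style model; the prompt prefix and target format here are made up, and the model still needs fine-tuning on many such pairs before it outputs anything useful:

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical labelled pair: the reason is just part of the target text
example = {
    "input": "classify and explain: Hi, I am having trouble with creating a user account",
    "target": "intent: complaint | reason: cannot create account",
}

name = "t5-small"   # starting checkpoint; fine-tuning on such pairs is still required
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer(example["input"], return_tensors="pt")
labels = tokenizer(example["target"], return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss   # loss for one training step on this pair
```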
Rand#8588: it says I have to install transformers==3.5.0 when I did it this error appears RuntimeError: Failed to import transformers.models.bert.modeling_bert because of the following error (look up to see its traceback): No module named 'transformers.models.bert.modeling_bert' ethereal_1202#3685: Can anybody help me out with how can we select a portion of train set in the dataset? ethereal_1202#3685: I am trying to train a multilingual model, so for eg, english has 6000 train set elements and I want to select 500 of it randomly and similarly for other languages. dl_amit#8567: Hi All, I have a requirement to generate question from context. Can anyone please provide thought based on suitable hugging face based models? IN P IE C E S#3307: You could try something like random.sample(a, x) where it samples x elements from a. But assuming your data is in a dataframe or ndarray you could also look into DataFrame.sample() or np.random.choice(), respectively. Hope that helps or atleast gives you a direction. Merve#3234: you can use SQuAD dataset to fine-tune T5 in a reverse manner where you have answer that you can generate questions as labels Merve#3234: I did generate similar paraphrased questions once but not from answers so I can't help much 😦 Mario C Mid-December 2021#0438: Has anyone in the #natural-language-processing channel used #optimum-inference https://huggingface.co/blog/optimum-inference As you can guess/imagine I am trying to decrease inference time for a BERT model aermak#5494: Hi! Could you tell me why on the same text and labels this code consumes 30Gb `from transformers import pipeline clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli", device=0) res = clf(sample, labels)` while this takes only 3Gb? ` from transformers import pipeline clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli", device=0, batch_size=2)
res = clf(sample, labels) ` cakiki#9145: The #ask-for-help channel is better suited for such questions mi2kan#6723: Hi, all. I need help joining open source projects. Have you ever participated in any os project? nickmuchi#2844: Thanks for sharing, any idea if this would work for sbert too? cakiki#9145: There are very often beginner-friendly contribution sprints! As an example: https://github.com/huggingface/transformers/issues/16292 Otherwise just try to find an open issue labeled "Good First Issue" (e.g.: https://github.com/huggingface/transformers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+First+Issue%22) Matplotlib is holding a New Contributor Meeting on June 7th, which might be interesting for you to attend: https://twitter.com/matplotlib/status/1524992845311971328?s=20&t=3C8Y0h6_WPp7948zMmtvlw mi2kan#6723: I thank you from the bottom of my heart. I’ll check them out soon! Mario C Mid-December 2021#0438: according to the official documentation "Any model that can be exported with transformers.onnx and has a supported task can be used, this includes among others BERT, ALBERT, GPT2, RoBERTa, XLM-RoBERTa, DistilBERT.." vivekatwal#3689: Hello Folks, What is the right way to optimize tokenizer in transformers? 1. Is adding vocab to tokenizer enough 2. Or tokenizer should also be trained and fine tuned with new vocab.? Versipellis#1100: Anyone have a clean, simple example implementing transformers in PyTorch from scratch? The official PyTorch tutorials do it a couple of different ways and I'm trying to figure out why there're the differences in implementation. pat9393#8935: https://www.oreilly.com/library/view/natural-language-processing/9781098103231/ The new Transformers book has a chapter were they do BERT from scratch in Pytorch
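On vivekatwal's tokenizer question above: adding domain vocabulary normally means both extending the tokenizer and resizing the model's embedding matrix, and then continuing pre-training or fine-tuning so the new (randomly initialised) embedding rows actually get trained. A minimal sketch with placeholder tokens:

```
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

new_tokens = ["myocarditis", "troponin"]   # hypothetical domain terms
num_added = tokenizer.add_tokens(new_tokens)

if num_added > 0:
    # New embedding rows are randomly initialised, so further training is needed
    model.resize_token_embeddings(len(tokenizer))
```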
jasonmeyang#3861: Hi Folks, jasonmeyang#3861: I met a problem with accelerator + deepspeed when training my model. When training was about to start, suddenly the problem crashed at: jasonmeyang#3861: ImportError: /root/.cache/torch_extensions/py39_cu113/utils/utils.so: cannot open shared object file: No such file or directory jasonmeyang#3861: My settings were: python 3.9.5, transformers 4.19.2, without deepspeed, everything was running OK jasonmeyang#3861: Thanks a lot to all the help from the community ! Omar Sanseviero#6198: Hey all! Lots of people have been asking questions in this channel lately. Please check out #questions and #ask-for-help, you can ask questions over there, although ideally through the forum 🙂 jasonmeyang#3861: OK, will post this on the forum, Thanks a lot @Omar Sanseviero NULL#3726: Anyone worked on finetuning gptj ? NULL#3726: https://tenor.com/view/depressed-bored-boredom-swing-head-gif-17224602 mr_seeker#1337: What you need to know? NULL#3726: https://cdn.discordapp.com/attachments/922424173916196955/978278638854230086/unknown-3.png NULL#3726: is this correct format for paraphrasing? NULL#3726: does the tokenizer auto add <|endoftext|> or I need to add it? mr_seeker#1337: GPT-J works with tfrecords, so you need to put everything between "<|endoftext|>" in it's own file. mr_seeker#1337: If you use the google TPU version, that is. NULL#3726: what about gpt j of hugging face? mr_seeker#1337: GPT-J from HF is different, but you need to put <|endoftext|> at the end, not the start. NULL#3726: https://cdn.discordapp.com/attachments/922424173916196955/978310809157505064/2022-05-23_10-57.png mr_seeker#1337: something like that. NULL#3726: Nop I think something is still wrong ;-;
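To NULL's earlier question: the GPT-2/GPT-J tokenizer does not append <|endoftext|> for you, so for this kind of paraphrasing format you add tokenizer.eos_token to each example yourself, at the end as mr_seeker says. A sketch under the assumption that each training example is one prompt/paraphrase pair:

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

# Assumed example layout; adapt the prompt format to your own data
example = "Original: the weather is nice today\nParaphrase: it is a lovely day outside"
text = example + tokenizer.eos_token   # ends with <|endoftext|>, nothing added at the start

ids = tokenizer(text)["input_ids"]
print(ids[-1] == tokenizer.eos_token_id)   # True: the example now ends with the EOS id
```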
NULL#3726: I'm trying this https://huggingface.co/hivemind/gpt-j-6B-8bit jacquesthibs#6131: I’ve heard that this is more of a proof-of-concept than an actually useable model. NULL#3726: Btw I just finished with it😁 NULL#3726: will upload to hf space soon jacquesthibs#6131: Interesting, would love to see the code you wrote. NULL#3726: lol 99% is same as given in that demo NULL#3726: will upload that too jacquesthibs#6131: Ok so it does work for fine-tuning NULL#3726: that torch.save is 6GB ;-; NULL#3726: https://github.com/rushic24/Rewriting-and-Paraphrasing-GPT-J6B-8bit-finetune sin yee#3513: Hi everyone. I want to fit this code with my personal dataset. Not the IMDB ones. The load_data_imdb() returns 3 parameters. To fit my dataset in, I've to set the 3 parameters right? But what value should I put? Reference: https://classic.d2l.ai/chapter_natural-language-processing/sentiment-analysis.html#put-all-things-together ```import torch from torch import nn from d2l import torch as d2l
batch_size = 64 train_iter, test_iter, vocab = d2l.load_data_imdb(batch_size) ``` cakiki#9145: The #ask-for-help channel is a better fit for this question kagankorkmaz#7630: Hi, currently I am working on a similar task, but for the models available most of them requires the answer to generate question. What kind of solution did you found for that? satsuroki#3326: Hello if I have to start training my own asr where do I need to start and what will be the challenges ? cakiki#9145: The folks in the #audio-discuss channel might be more suited to answer this 🙂 NULL#3726: Has anyone worked on fine tuning blenderbot ? NULL#3726: Parlai has this guide https://parl.ai/projects/recipes/ NULL#3726: I’m looking for hf variant, will the training dataset look same as parlai? Robert1#0234: anyone got any good opt-30 parameters? cakiki#9145: Good in what sense 😄 cakiki#9145: Afaik there's only the official weights, nothing else Robert1#0234: so I mean more like values of temperature etc.? Robert1#0234: some sensible defaults cakiki#9145: Ah, gotcha; sorry for the confusion 😁 Ar4ikov#3805: Hello everyone! Is there any pre-trained or already ready-to-use models or architectures to solve `big5 (OCEAN)` test? Glad to see here any links or titles to that :huggingsanta: Mark#9079: How is a language model exactly supposed to solve a personality test? Ar4ikov#3805: In my thoughts, by user's text inputs, then model classifies those inputs. I've already had a regressor from sklearn to take that goal, but it seems that regressor is poor for that task.
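For Robert's OPT question: there is no published known-good decoding setup for OPT-30B specifically, and since temperature, top_p and repetition_penalty are just generate() arguments the GPT-J-style values carry over; the numbers below are common starting points to tune, not recommendations from the OPT authors (a smaller OPT checkpoint stands in so the sketch fits on one GPU):

```
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "facebook/opt-1.3b"   # stand-in; the same arguments apply to facebook/opt-30b
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("The three main challenges in NLP are", return_tensors="pt")
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,          # lower is more conservative
    top_p=0.9,
    repetition_penalty=1.1,
    max_new_tokens=100,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```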
Ar4ikov#3805: I admit that I may be wrong, and that the plane of solving my problem isnt in NLP at all, but I don't really want to play ping-pong through channels there 🙂 At least now I have a crowdsourced text dataset for that. Mark#9079: Well, what does the dataset look like? Ar4ikov#3805: So, for input data, I have text responses from users to a test compiled by my team, and at the output there are 5 fields that represent the float interval from 0. to 1. In fact, now I'm trying to solve the multi-lables classification task as well. Maybe the problem is in the dataset and how it was generated... Mark#9079: So the assessment uses free-text fields and not likert scales and categorical responses? How much data do you have? Ar4ikov#3805: <1.5K of all NULL#3726: what r u tuning? Robert1#0234: i have an opt 30 model which i am using parameters like temperature and repetition penalty taken straight from what i know works well with gptj. was looking for a set of parameters known to work ok with opt 30 yaswanth#1616: Any one help me to get code mixed data for sentiment classification ( Hindi - English) yaswanth#1616: Hi All, I have used padding and truncation, still I am getting RuntimeError: stack expects each tensor to be equal size, but got [57] at entry 0 and [52] at entry 1.Please help me. https://cdn.discordapp.com/attachments/922424173916196955/980521796317106246/unknown.png,https://cdn.discordapp.com/attachments/922424173916196955/980521796585521232/unknown.png,https://cdn.discordapp.com/attachments/922424173916196955/980521797055311872/unknown.png,https://cdn.discordapp.com/attachments/922424173916196955/980521797357305886/unknown.png ethereal_1202#3685: Data collators can help to keep your inputs to same size. ethereal_1202#3685: Here is one use case! Also there are other few ways to introduce data collators based on tasks of your model! But the one in the image should work! https://cdn.discordapp.com/attachments/922424173916196955/980526129536450620/unknown.png ethereal_1202#3685: **Ques**: A raw dataset of over 30k datapoints is given which contain technical skills and a lot of jargon mixed in. We need to develop a code that can clean this dataset and extract Technical (Hard) skills. Some 900 random examples of technical skills is also given to go through them to understand the pattern and sequence. How should we go about this problem? cakiki#9145: Please use the #ask-for-help channel for implementation questions yaswanth#1616: thank you JonathanSum#8528: For the t0pp, I really don't think it should be used for question and answering because I am trying use it to answer reddit question. It gives a lot of answer that is related to sex. JonathanSum#8528: https://cdn.discordapp.com/attachments/922424173916196955/980808139173658654/unknown.png JonathanSum#8528: https://cdn.discordapp.com/attachments/922424173916196955/980808217921732618/unknown.png JonathanSum#8528: This is not the only one. It is just like 3 out of 10 question's answer are related to sex. JonathanSum#8528: https://cdn.discordapp.com/attachments/922424173916196955/980809281840492604/unknown.png
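A sketch of the dynamic padding setup ethereal_1202 points to above: tokenize without fixed-length padding and let DataCollatorWithPadding pad each batch to its longest sequence, which avoids the "stack expects each tensor to be equal size" error. The sentences are just illustrations:

```
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Tokenize without padding: the two examples have different lengths
features = [
    tokenizer("a short one", truncation=True),
    tokenizer("a somewhat longer example sentence here", truncation=True),
]

batch = collator(features)
print(batch["input_ids"].shape)   # both rows padded to the longest sequence in the batch
```

Passing data_collator=collator to the Trainer, as in the screenshot above, does the same thing for every training batch.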