mrShiba#4412: yea, awesome pet and awesome coin 🙂 Sphinx#2092: ...? Definitely not lol. I think people work on every pair with data available Sphinx#2092: Its more about whether you want to do them all at once (multilingual mt) , with little data (low resource mt), no data (unsupervised mt), for narrow domains (domain adaptation), to evaluate them (metrics), etc. Awesome_Ruler_007#7922: aren't NMT models near perfect? atleast I would assume so, if google translate is to be taken as a baseline. something like T5 would easily outperform I expect StellaAthena#3530: No. StellaAthena#3530: I’m not sure what you’re basing that on but it’s not even close to being true Awesome_Ruler_007#7922: the idea being if Google translate's poor RNNs can perform mediocre translation then large billion-parameter models may perform much, much better? StellaAthena#3530: Google translate uses a sequence to sequence transformer Awesome_Ruler_007#7922: last I heard they use RNNs Awesome_Ruler_007#7922: and its pretty reasonable, seeing its too costly for general public use Awesome_Ruler_007#7922: transformers @ GCP services seems more cost efficient than the translate API for public StellaAthena#3530: IIRC they use a RNN decider and a transformer encoder to save money, but it’s rather fundamentally a transformer architecture with part of it being approximated by a RNN StellaAthena#3530: (Side note: I wonder why more people don’t do that. I can’t name a single paper that evaluates such a set-up) StellaAthena#3530: Here’s a blog post from over a year ago about it: https://ai.googleblog.com/2020/06/recent-advances-in-google-translate.html?m=1 Awesome_Ruler_007#7922: this introduces their paper, as expected from the research organ. but it doesn't definitely state that it's what they use in the public API? Sphinx#2092: I can promise you T5 does not really beat any SOTA MT models. Sphinx#2092: mT5 is actually worse, which is interesting in and of itself. Awesome_Ruler_007#7922: its multilingual, and its perf on LRL tasks is SOTA Awesome_Ruler_007#7922: which makes sense. still, T5 was just an example Sphinx#2092: I can promise you if you build a similarly sized multilingual MT model, it will crush it.
Sphinx#2092: including low-resource languages. Sphinx#2092: Didn't you ever wonder why the mt5 paper has no evals in MT? Sphinx#2092: Despite multilingual MT being the most obvious example of a multilingual generation task? lol Awesome_Ruler_007#7922: can't wonder if I didn't read it 😂 well ig I cant give them much heat Awesome_Ruler_007#7922: atleast for some langs like Hindi and Irish, google translate does a pretty fine job Awesome_Ruler_007#7922: hmm...looks like some people have tried to use it for translation Sphinx#2092: Sure, there's no other publically available large multilingual enc-dec model. Awesome_Ruler_007#7922: they claim pretty good results on MT with LRLs. seems sketchy, but in face of no new evidence I am forced to accept the null hypothesis Sphinx#2092: I don't claim it's necessarily bad, just that similarly-sized multilingual model would beat it Sphinx#2092: but there's no such models publically available, except for m2m I guess. Sphinx#2092: In retrospect, I'm surprised people don't use m2m much. Sphinx#2092: I wonder why. Awesome_Ruler_007#7922: well yeah that's not really a surprise. advancements do happen Sphinx#2092: hopefully we will be able to release some more multilingual mt models in the future and clear the air a bit. Awesome_Ruler_007#7922: in the meantime, google translate seems mysteriously good Sphinx#2092: "mysteriously" SecondMover#8029: Isn't DeepL still better? It was last time I checked. No idea what their architecture is though StellaAthena#3530: @Sphinx How much are you able to say about the design of SOTA translation models? I bet EleutherAI would be down to implement, train, and release a large multilingual MT model cfoster0#4356: *Pile v2:* :goose10:
Sphinx#2092: I'm happy to talk about it in detail. I just can't talk about our public API lol. Sphinx#2092: Though we do plan to eventually make a dataset available Sphinx#2092: we are in the data gathering and cleaning process. Sphinx#2092: Just takes time. StellaAthena#3530: How do cutting edge NMT models differ from say T5? Is it just the data / pretraining task? Sphinx#2092: So there's a few contenders for the optimal training strategy. 1) Just grab as much english-centric parallel data for as many languages as you can, balance the data accordingly (e.g. temperature sampling https://arxiv.org/abs/1907.05019; this is extremely necessary since the datasets have differnt orders of magnitude of data) and then zero-shot the non-english language pairs. This is sorta the approach behind that paper I linked and https://arxiv.org/abs/1611.04558. 2) Mine out parallel data for the non-English centric pairs, come up with a different sampling strategy to balance out both source and target languages and train on all of that data, see e.g. Facebook's M2M https://arxiv.org/abs/2010.11125 or cMNMT (https://research.google/pubs/pub49630/). 3) Pre-train on monolingual data using some type of denoising objective then fine-tune on parallel data e.g. mBART (https://arxiv.org/abs/2001.08210) and the many follow-ups. 4) Train on monolingual data AND parallel data in a single phase, enabling unsupervised translation for languages with no parallel data at all, see e.g. https://arxiv.org/abs/2005.04816, https://arxiv.org/abs/2002.02955, https://arxiv.org/abs/2004.02127, etc. There also some optional things if you want to have a really good model such as large scale backtranslation (https://arxiv.org/abs/1808.09381),. You will also have to figure out how to balance the performance of all these different tasks, potentially requiring clever parameter strategies (e.g. MoEs https://arxiv.org/abs/2109.10465, https://arxiv.org/abs/2110.03742 or just in general language-specific layers e.g. https://arxiv.org/abs/2004.11867) or other tools to address the class-imbalance and multi-task nature of the problem. Sphinx#2092: As for which of these strategies will win out, well, everyone has their favorites. : ) 𓅬 gabriel_syme 𓅬#3220: I would imagine there is probably a huge mismatch of the amount of attention and funding in specific language pairs vs the actual languages spoken by the majority of researchers? I'm making things up probably idk 𓅬 gabriel_syme 𓅬#3220: I did a tiny bit of research on NMT 4 years ago, on my own. I got a small glimpse of how difficult it is to get data for low resource languages 𓅬 gabriel_syme 𓅬#3220: I did love the cross lingual embeddings work back then though, wonder if things like contrastive learning and such can make them better (or already have)
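A minimal sketch of the temperature-based sampling Sphinx mentions (the idea behind arXiv:1907.05019): language pairs are sampled with probability proportional to a temperature-scaled power of their data share, so low-resource pairs get upsampled relative to their raw proportion. The corpus sizes below are made-up placeholders, not real dataset statistics.

```python
import numpy as np

def temperature_sampling_probs(example_counts, temperature=5.0):
    """Language sampling probabilities p_l proportional to (n_l / N)**(1/T).

    T=1 reproduces the raw data proportions; larger T flattens the
    distribution toward uniform, upsampling low-resource pairs.
    """
    counts = np.asarray(list(example_counts.values()), dtype=np.float64)
    q = counts / counts.sum()        # raw data proportions
    p = q ** (1.0 / temperature)     # temperature-scaled
    p = p / p.sum()                  # renormalize
    return dict(zip(example_counts.keys(), p))

# Toy corpus sizes (hypothetical, orders of magnitude apart as in practice):
counts = {"en-fr": 40_000_000, "en-hi": 1_500_000, "en-ga": 50_000}
print(temperature_sampling_probs(counts, temperature=1.0))  # ~ raw proportions
print(temperature_sampling_probs(counts, temperature=5.0))  # much flatter
```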
𓅬 gabriel_syme 𓅬#3220: great post! I'm actually reading beyond distillation now 🙂 Love it so far, the task level idea is pretty neat for a lot of the stuff that happen in practice Sphinx#2092: Yes, there is more to low-resource than just data, as you can imagine: https://arxiv.org/abs/2010.02353 Sphinx#2092: https://arxiv.org/abs/2105.09501 though not super how it all pans out tbh. mrShiba#4412: Woah, your comment is super insightful, I’m saving for future reference mrShiba#4412: Since you seemed like an expert in NMT, have you seen anyone use Transformer XL for NMT? I know it’s being used in other NLP tasks but for some reasons barely any application yet for NMT @Sphinx Sphinx#2092: I didn't realize anyone used Transfomer XL for anything. In general, using LM architectures for MT wasn't really a thing for a while, but it has recently gathered some attention, see https://arxiv.org/abs/2106.13627 and some ICLR 2022 paper I can't find now fro some reason. mrShiba#4412: @Sphinx at the moment I'm using vanilla Transformer with fairseq, do you have any suggestion for any better NMT model? Sphinx#2092: That should more than suffice for most cases. mo#0466: not sure if this is the right channel, but I'm wondering if anyone has some wisdom for me. mo#0466: so I'm learning positional encoding right now mo#0466: and I look at the cosine similarity to check what it's learning mo#0466: and more often than not, the later timesteps look distinctly different than the others mo#0466: https://cdn.discordapp.com/attachments/729741769738158194/900580645108084787/Screenshot_from_2021-10-21_05-00-59.png mo#0466: see the last rows/columns mo#0466: wondering if anyone has an idea how that comes. EricHallahan#1051: What data are you using? mo#0466: it's MIDI. the dataset is called maestro. mo#0466: so it's not because of the data, cause there's no special timesteps mo#0466: I guess it could be due to the asymmetry during training? (earlier timesteps see less context) mo#0466: anyway, gonna try an idea from a friend now, which should fix this 😄
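A generic sketch of the cosine-similarity check mo describes above, assuming a learned `(seq_len, d_model)` position-embedding table (mo's actual model isn't shown in the chat); plotting the returned matrix as a heatmap gives the kind of picture posted above.

```python
import torch
import torch.nn.functional as F

def position_similarity(pos_emb: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarity between positions.

    pos_emb: (seq_len, d_model) learned positional embedding table.
    Returns a (seq_len, seq_len) similarity matrix.
    """
    normed = F.normalize(pos_emb, dim=-1)
    return normed @ normed.T

# Hypothetical table, e.g. model.pos_emb.weight in a real model:
pos_emb = torch.randn(512, 256)
sim = position_similarity(pos_emb)
print(sim.shape)  # torch.Size([512, 512])
```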
mo#0466: cyclical position embedding mo#0466: let's see :thinkies: EricHallahan#1051: https://en.wikipedia.org/wiki/Toroidal_inductors_and_transformers mo#0466: kinda like that 😄 jordiae#4107: Which is the biggest GPT you can train (assuming enough batches, i.e., meeting the compute-optimal frontier) with about 100k USD in cloud GPU credits (V100 32 Gb)? jordiae#4107: in other words, which is the size of the GPT you should train with a budget of 100k and how many tokens? any rough estimates? jordiae#4107: @StellaAthena Louis#0144: chonk Louis#0144: lol Louis#0144: Idk Louis#0144: it depends on the interconnects Louis#0144: thats not enough information jordiae#4107: I know. I'm happy with a n estimate StellaAthena#3530: @jordiae in the neighborhood of 13B for 300B tokens. jordiae#4107: excellent, that's the answer I was looking for jordiae#4107: many thanks! cfoster0#4356: Is that credits to as many interconnected 32GB V100s you want or on a single one? jordiae#4107: interconnected StellaAthena#3530: Oh wait, V100 StellaAthena#3530: Probably decently smaller then.
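A rough way to reproduce estimates like the ones in this thread yourself, using the common ~6·N·D rule of thumb for training FLOPs. The sustained throughput and $/hour below are placeholder assumptions, not quotes, and they dominate the answer; swapping in A100 numbers is how the V100-vs-A100 gap discussed next shows up.

```python
def training_cost_usd(params, tokens, sustained_tflops, usd_per_gpu_hour):
    """Back-of-envelope training cost using the ~6*N*D FLOPs rule of thumb.

    All inputs are assumptions to be plugged in; the answer is only as good
    as the $/hr and sustained-throughput guesses.
    """
    total_flops = 6.0 * params * tokens
    gpu_seconds = total_flops / (sustained_tflops * 1e12)
    gpu_hours = gpu_seconds / 3600.0
    return gpu_hours * usd_per_gpu_hour, gpu_hours

# Hypothetical numbers: ~40 sustained TFLOPS per V100 in fp16 (roughly a
# third of peak) and a discounted rate of ~$1.50/hr. Both vary a lot.
for n_params in (2.7e9, 6.7e9, 13e9):
    cost, hours = training_cost_usd(n_params, 300e9, 40, 1.50)
    print(f"{n_params/1e9:.1f}B params, 300B tokens: "
          f"~{hours:,.0f} GPU-hours, ~${cost:,.0f}")
```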
jordiae#4107: you were counting A100? StellaAthena#3530: A100s are a *lot* more cost effective than V100s StellaAthena#3530: They provide around twice the compute but typically cost less than 25% more EricHallahan#1051: It's really counterintuitive to me that this is the case, but it is. StellaAthena#3530: There is genuinely no reason to use V100s at the > 1 cluster scale on any cloud service I am aware of jordiae#4107: I know but it must be with v100s StellaAthena#3530: The pricing dynamics make no sense jordiae#4107: then 6.7B? StellaAthena#3530: @jordiae I'm fairly confidant you can do that. If you're trying to squeeze out the best end performance though, you can probably go larger and train for less tokens jordiae#4107: perfect jordiae#4107: thanks alstroemeria313#1694: AWS sometimes just won't rent you A100 clusters unless you've already used V100s for a bit. idk why 🤷‍♀️ jordiae#4107: because they have to amortize the V100s they had already bought? alstroemeria313#1694: Then they should raise prices on the A100s. alstroemeria313#1694: Instead they have them for a certain price but you sometimes just can't get them StellaAthena#3530: I know people who have tried to negotiate large numbers (>200) of A100s for short periods of time and were basically told "no, that's not worth our while" EricHallahan#1051: PSA: PyTorch 1.10.0 is now available. https://github.com/pytorch/pytorch/releases/tag/v1.10.0 https://pytorch.org/blog/pytorch-1.10-released/ alstroemeria313#1694: oh they finally have covariance matrices
alstroemeria313#1694: Forward mode AD? What's that for EricHallahan#1051: ¯\_(ツ)_/¯ alstroemeria313#1694: `Support for target with class probs in CrossEntropyLoss` alstroemeria313#1694: Isn't that just KL divergence. alstroemeria313#1694: Wait it's KL divergence where they logsoftmax the input first lol alstroemeria313#1694: (The KL divergence loss function not normalizing its input is a footgun) alstroemeria313#1694: > nn.AdaptiveAvgPool2d: Correctly dispatch to CUDA implementation (#61851) alstroemeria313#1694: what How was this using CPU inox#5400: no one uses it? alstroemeria313#1694: I was ;_; Teemochu#8740: for people wanting to use WSL, w10 21h2 (enables GPU support) is apparently out now for release preview fe#0483: MJ: https://spectrum.ieee.org/stop-calling-everything-ai-machinelearning-pioneer-says fe#0483: mass appeal pop stuff i guess cfoster0#4356: (anti (ai hype)) hype fe#0483: exactly fe#0483: although I generally observe uses of ML as a term to be far more accurate than that of "AI" - whatever the current connotation or denotation of "AI" is. SadSan#0570: Hello! Does anyone have an idea about adding "F-noise" to a CNN architecture? Thank you 𓅬 gabriel_syme 𓅬#3220: I really like MJ, although I've only watched him speak a few times. I don't think he is a similar critique to the gofai crowd. His arguments seem more knowledgeable to me. That said I have not checked things lately. Deleted User#0000: how did it go? my repo includes a model that uses a dVEA to learn binning, but im still trying to make it work
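A small check of the equivalence being discussed, assuming PyTorch 1.10: `cross_entropy` with probability targets matches `kl_div` on log-softmaxed logits plus the (logit-independent) target entropy. The "footgun" is that `kl_div` expects log-probabilities as input, not raw logits.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 10)                      # (batch, classes)
target = torch.softmax(torch.randn(4, 10), -1)   # soft targets (probabilities)

# New in 1.10: cross_entropy accepts class probabilities as the target.
ce = F.cross_entropy(logits, target)

# Equivalent up to a constant: KL(target || softmax(logits)) plus the
# entropy of the target distribution.
kl = F.kl_div(F.log_softmax(logits, -1), target, reduction='batchmean')
target_entropy = -(target * target.log()).sum(-1).mean()

print(ce, kl + target_entropy)  # the two values should match
```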
chilli#5665: It's only for the out variant chilli#5665: Like, if you did something like ... chilli#5665: actually, I'm not sure if it's even possible to call this from user code chilli#5665: lol alstroemeria313#1694: oh alstroemeria313#1694: ok elderfalcon#4450: With 1.10: `Changed profiler.profile argument with_flops when set to True to report total FLOPs rather than FLOP/s, and support more operators (#62779, #61895)` 🤦‍♂️ 🤦‍♀️ 🤦 :wrong_goose: :goose7: :goose8: :goose10: :delet: :3goose: Why must they do this to us... Why would you ever remove this in favor of the other? The PR that changed this was opened by a CS student it looks like with no RFC or motivation stated, and it was just merged in anyways. I almost never see reasons behind using raw FLOPS vs FLOP/s unless you're doing the latest vanity EfficientNet paper.... bmk#1476: open a PR to roll back? nshepperd#2316: https://github.com/google/jax/issues/8096 P2 (never) :blobsad: alstroemeria313#1694: aww StellaAthena#3530: If you’re interested in the #interpretability-reading-group you can now get pinged with reminders about it by reacting to this post: https://discord.com/channels/729741769192767510/837392611504816218/900939890533998644 We now have reaction roles set up more generally using our discord bot. Reaction Roles are a way to create user groups where people can opt in and out of receiving pings by toggling their roles directly. If there are things other than the interpretability reading group people think roles would be useful for, let me know! nev#4905: it learned something with a low bin count, but then I stopped because I found a different way to get the result I want
Deleted User#0000: does anyone have an Eleuther AI logo I can use? Or preferably a .stl Deleted User#0000: or a gpt-j logo EricHallahan#1051: Are you trying to print somthing? Deleted User#0000: Doing some 3d design for a proposal Deleted User#0000: It is Apache, which means I can use it for building out new code, right? Deleted User#0000: ok I'll go ahead and create one Deleted User#0000: let me know if you guys like it StellaAthena#3530: Apparently we won some kind of award that nobody bothered to tell us about: https://www.idg.com/news/2021-best-of-open-source-software-awards-identify-the-most-groundbreaking-products-available-to-developers-and-it-organizations/ EricHallahan#1051: Thank you for sanitizing the link. Louis#0144: Do we get a trophy Dromarion#3383: Who should they even send the trophy to Awesome_Ruler_007#7922: Eleuther won't even accept the trophy unless its in the shape of a Goose anyways Louis#0144: the mother goose Louis#0144: which i think is stella generic#8192: gpt-code-clippy (https://github.com/CodedotAl/gpt-code-clippy) seems to be on the front page of HN. is that still ongoing, or did it stall out? the HumanEval numbers in the README don't look great... Deleted User#0000: ah which way? nev#4905: just taking dances from the dataset lol Deleted User#0000: ah lel Deleted User#0000: like a motion graph? Slack#2746: is there a finetuned gpt-j model for japanese?
jbustter#5167: You guys have a tip? should I just say no? https://cdn.discordapp.com/attachments/729741769738158194/901376839904296970/NVIDIA_Share_hMxSiV424p.png AjChakravarthy#8444: Are you open to monetizing your work then this could be a good oppourtunity. Of course depebds on the terms. jbustter#5167: I am open to monetize it, but I'm not sure about how to negotiate the terms, or what it means for other people (including me) who want to continue using that notebook. genetyx8#7543: you could make it something like "free for non commercial purposes" AjChakravarthy#8444: I am a startup founder so I negotiate many deals like this. I can help you with a framework on how to think about this. DM me if you want to discuss further. Deleted User#0000: Hey guys I was wondering if I can have your take on something Parker#3197: ? Deleted User#0000: Welll I am currently serving in the army for the next 7 months, and I barely have time to do any ML related thing, I go home every weekend so I barely got time, is there anything I could do at this time so I could keep up to speed with the industry for when I get out of the military? StellaAthena#3530: If you genuinely have almost no time, then no. Deleted User#0000: I could maybe do some reading every now and then but there is no way I could have time for hands on work StellaAthena#3530: @Deleted User TBH, reading #research is probably your best bet for NLP Deleted User#0000: Fair enough but I still a noob when it comes to NLP tho :s Parker#3197: Yannic Kilcher makes like 10-30 minute videos every week of anything noteworthy in machine learning/artificial intelligence. (https://www.youtube.com/c/YannicKilcher) there's also MLST (https://www.youtube.com/c/MachineLearningStreetTalk) and Lex Fridman's podcast (https://www.youtube.com/c/lexfridman) which interview people who have published research. I would probably suggest these podcasts to anyone who is entering machine learning/artificial intelligence. this server is really great for finding noteworthy research as soon as it is published (and getting an idea of what others think about it) usually reddit (/r/MachineLearning) is delayed by a day, but a lot of the stuff that shows up here also ends up there Deleted User#0000: Great resources Deleted User#0000: I will definitely be checking those out Deleted User#0000: I am currently reading hands-on machine learning with scikit-learn, keras, and tensorflow too so this might help serve as a starting point
Louis#0144: His sunglasses always confuse me Louis#0144: I don't get it nshepperd#2316: @alstroemeria313 wtf i segfaulted jax alstroemeria313#1694: i've seen that ^^;; nshepperd#2316: trying to write a thing that interpolates between two param sets based on log_snr alstroemeria313#1694: It turned out to be an OOM alstroemeria313#1694: That was being handled wrong nshepperd#2316: wow nshepperd#2316: it segfaults consistently so yeah i guess it's that? alstroemeria313#1694: I know right! alstroemeria313#1694: I halved the batch size and a run that was crashing consistently early in started working. nshepperd#2316: wait what, i reduced the batch size to 1 and now it's crashing *earlier* nshepperd#2316: lol, it crashes with batch size 1 but works with batch size 2 nshepperd#2316: i bet it's a tpu padding thing alstroemeria313#1694: Ahaha alstroemeria313#1694: The OOM core dumps I saw were on GPU only, it OOMed normally on TPU alstroemeria313#1694: But if it's core dumping on OOM at all something is wrong lol nshepperd#2316: it was OOMing in `TpuCompiler_RunHloPasses` nshepperd#2316: so... i have no idea ^_^ alstroemeria313#1694: Mine dumped so much stuff into the tmux history that I couldn't tell what the original error was
nshepperd#2316: ahah alstroemeria313#1694: A bunch of XLA IR stuff. nshepperd#2316: this was just 50 lines of traceback nshepperd#2316: wait no wtf nshepperd#2316: this wasn't a segfault nshepperd#2316: it *divided by 0* trying to compile XLA? alstroemeria313#1694: What alstroemeria313#1694: Oh no nshepperd#2316: yep inox#5400: that's wild inox#5400: I'd give up debugging and make it reproducible for the bug report random_lurker99b#8614: have a stacktrace? nshepperd#2316: looks pretty useless tbh https://cdn.discordapp.com/attachments/729741769738158194/901481664230858812/log.txt nshepperd#2316: i will try to reproduce this on a spare v2 i guess... random_lurker99b#8614: 😦 oh right I guess cloud tpu omits the hlo passes from you there.. Bran#4755: 😐 Bran#4755: who would one ping to make sure someone who can do something about that sees that, actually? (context: nitro scam link) kindiana#1016: usually someone is online to see it and ban then (in this case me) EricHallahan#1051: Anyone in L5 can ban, so figure out who is online, ping one of them, and one of us can take care of the rest.
Bran#4755: alright cool StellaAthena#3530: (Making this easier is the primary reason why L5 members show up separately on the “online members” sidebar) ersatz#0001: are you guys hosting discord stages from time to time? ersatz#0001: like on Yannic's server they are doing stages on paper discussion bmk#1476: it's always paper discussion time in #research EricHallahan#1051: We don't use stages when we do events, we just use #meeting [voice]. StellaAthena#3530: We have generally found that asynchronous conversation is better suited to the number of time zones and spread of people in this server. We have occasional #interpretability-reading-group meetings (one was earlier today) though I don’t know if it’s a “stage” EricHallahan#1051: It's not. ersatz#0001: all right, thanks Zippy#1111: :aPES_Lazer::aPES_Lazer::aPES_Lazer: https://cdn.discordapp.com/attachments/729741769738158194/901588888097796177/unknown.png EricHallahan#1051: #prompting? Zippy#1111: Yeah openAI codex javascript prompting Zippy#1111: Ohhh Zippy#1111: I see what you are referring to now, sorry Technobird22#2055: How would a Tesla T4 compare to an RTX A4000? kurumuz#5695: Much much worse Technobird22#2055: I found a T4 used for NZD~$1500 :thinkies: EricHallahan#1051: T4s are efficient but slow. Technobird22#2055: (around 1100 USD) kurumuz#5695: not any more
kurumuz#5695: with A4000s Technobird22#2055: They have a lot of tensor cores Technobird22#2055: so A4000 definitely > T4? kurumuz#5695: yes EricHallahan#1051: yes Technobird22#2055: Cool, thanks Technobird22#2055: on that note, Technobird22#2055: for the price of an A4000, I could get 2x RTX 3060 (12GB)s or a 3080 bmk#1476: get a 3080 bmk#1476: or save up for a 3090 Technobird22#2055: "only" 10GB VRAM :sadge: bmk#1476: oh huh does the 3080 have less vram than the 3060? bmk#1476: huh kurumuz#5695: if you're willing to do model parallelism 2x RTX 3060 is a valid choice. kurumuz#5695: yes bmk#1476: the last time i ever paid attentin to the non-flagship gpus was back with the 10 series bmk#1476: in the 30 series, ive only ever used a 3090 kurumuz#5695: actually dont do this. no proper interconnect EricHallahan#1051: NVLINK doesn't work? kurumuz#5695: rtx 3060 doesnt support NVLINK
kurumuz#5695: only 3090 EricHallahan#1051: Oh I thought it said 3090 lol bmk#1476: wasnt there a benchmark from puget systems that showed it wasnt that necessary for dp bmk#1476: dunno about mp/pp kurumuz#5695: its not for dp 𓅬 gabriel_syme 𓅬#3220: I'm just going to do what I did when I was younger and read about hardware instead of buying it. I'll wait for zen4 and 4xxxx kurumuz#5695: pp is more forgiving too kurumuz#5695: absolutely necessary for mp tho bmk#1476: in general if youre dealing with like 2 low end gpus dunno if you even need the bandwidth bmk#1476: well then just do pp kurumuz#5695: well at inference time i think should be fine with like GPT-J kurumuz#5695: you're losing performance with PP but yeah Zippy#1111: A4000's are pretty great actually. And 3090's are a lot better than I assumed. If you're doing many fp32 operations on the gpu, a 3090 can be better than an A100 for certain tasks. kurumuz#5695: especially at those batch sizes(24 wont fit a lot with GPT-J lets say) bmk#1476: at these scales it should be like nbd since the gpus themselves are pretty slow Zippy#1111: It's weird to me that to fully reap the benefits of an A100, you need a multiple of 64 batch size. kurumuz#5695: but then, pytorch is a pita to do any kind of paralllelism :sadge: kurumuz#5695: you can use neoX ig kurumuz#5695: no? kurumuz#5695: that depends on the model
kurumuz#5695: if model is small yes kurumuz#5695: because small models dont utilize all teh cores kurumuz#5695: A100 is very wide kurumuz#5695: also remember to use fp16 or bf16 with A100 kurumuz#5695: A100 is 1:4, fp32:fp16 kurumuz#5695: so 4x faster Zippy#1111: Yeah. But also, it's in the nvidia documentation.. says that optimal batch size for A100 is multiple of 64. kurumuz#5695: you wont even fit 64 with GPT-J kurumuz#5695: that is a dumb generalization Zippy#1111: I know that, I'm just word for word telling you what I read. kurumuz#5695: well im not blaming you for nvidia writing that ofc :berk: Zippy#1111: heh yeah Technobird22#2055: Wait can you split it across multiple GPUs? kurumuz#5695: well yeah... its not exactly straight forward Technobird22#2055: Okay, so I'll probably go with the a4000 Technobird22#2055: For CV stuff, is a lot of vram needed? Zippy#1111: definitely less than for transformer related things, unless you plan on using transformer based CV models Technobird22#2055: Ah okay Technobird22#2055: Do all transformers need a lot of vram? Zippy#1111: Yes they do.
Zippy#1111: You should be able to run most models at fp16 Zippy#1111: I'm not sure if gpt-j would work .. it would be close. EricHallahan#1051: *GPT-Neo 125M is visibly annoyed.* Zippy#1111: Well yeah, of course there are smaller transformer models, I'm just talking about the mega performant transformer models. Zippy#1111: Although to be fair.. I've had better success with classification tasks with mpnet than any other model and it's pretty tiny. 𓅬 gabriel_syme 𓅬#3220: I have a tiny GPT-J that takes 3 seconds for 500 or so tokens on CPU without even an ounce of optimization. I love tiny models 𓅬 gabriel_syme 𓅬#3220: (I'd imagine if I knew how that would go to less than 1 second or smth) Technobird22#2055: Nice! How many parameters is that trained on? 𓅬 gabriel_syme 𓅬#3220: 160M parameters 𓅬 gabriel_syme 𓅬#3220: oh sry, do you mean tokens? Technobird22#2055: Like how much data 𓅬 gabriel_syme 𓅬#3220: about 2B tokens, not so much data Technobird22#2055: How well does it perform for being so ~~large~~ 𓅬 gabriel_syme 𓅬#3220: hmm it does really well but I haven't (yet) quantified performance 𓅬 gabriel_syme 𓅬#3220: this starts in...2 days when all my runs finish 𓅬 gabriel_syme 𓅬#3220: but it's good enough to be in the 'production' setting (the quotes matter there 🙂 ) Technobird22#2055: how does it appear to perform for you? Technobird22#2055: Oh, nice, so this is quite recent 𓅬 gabriel_syme 𓅬#3220: it's the best I've used so far. Although I also have a 6B finetuned on this but have not tested it (too big / too slow and I ~~can't~~don't know how to deploy it nicely) Technobird22#2055: By the way, re: buying an A4000 vs RTX3080, I already have a Tesla M40 24GB, so maybe for my next card I should get more performance rather than VRAM
Technobird22#2055: Is 10GB enough to run Neo2.7? EricHallahan#1051: Should fit in 8 easily. Technobird22#2055: Sorry if I'm asking a lot of questions, but how would 10GB fare for things like StyleGAN or VQGAN? Parker#3197: check the repositories for what you’re interested in using? Parker#3197: I know CLIP + VQGAN lists stuff like this Technobird22#2055: Thanks, I'll do that Parker#3197: https://cdn.discordapp.com/attachments/729741769738158194/901620622801661982/IMG_9208.png Technobird22#2055: Thanks Technobird22#2055: Novice question, but would it be possible to 'distill' gptj further? EricHallahan#1051: That's an open question. ersatz#0001: lol every time I'm asking this kind of question people are saying something like that, even basic questions like how far are we from diminishing return and stuff, the field is still pretty young I guess StellaAthena#3530: I mean, if all the papers you’re citing are CE your fired is young 😛 generic#8192: fwiw when I was training GPT2 models with huggingface's DDP having nvlink seemed to make things much faster? a bunch of benchmarks in this thread with different model and block sizes: https://github.com/huggingface/transformers/issues/9371#issuecomment-768656711 kurumuz#5695: PCIE3? kurumuz#5695: we now have PCIE4.0 kurumuz#5695: we didnt really see any difference on PCIE4 and DDP generic#8192: it was a PCIE4.0 system, but nvidia's consumer cards can't do PCIE-PCIE anyway Technobird22#2055: Hmm Should I wait another year for 40x0 series cards? Kharr#7888: Might be 2 years before you can buy one with limited yields and supply chain problems. Chip shortage is expected to last well into next year. Technobird22#2055: :/
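To put numbers on the "is 10GB enough?" question above: a weights-only estimate of params × bytes-per-param is the usual first-order check, with activations, KV cache, and framework overhead on top. The parameter counts below are approximate nominal sizes.

```python
def weight_memory_gb(n_params, bytes_per_param=2):
    """Weights-only footprint; activations and overhead come on top, but
    for inference the weights usually dominate."""
    return n_params * bytes_per_param / 1024**3

for name, n in [("GPT-Neo 2.7B", 2.7e9), ("GPT-J 6B", 6.05e9)]:
    print(f"{name}: ~{weight_memory_gb(n, 2):.1f} GB in fp16, "
          f"~{weight_memory_gb(n, 4):.1f} GB in fp32")
```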
Technobird22#2055: A 3080 here is around 1500 USD Technobird22#2055: And a 3060 is around 700USD 𓅬 gabriel_syme 𓅬#3220: jfc this is why I don't buy a gpu Soltan_G42#2712: It's why I ended up with a laptop with a 16GB 3080 for $2300. Maybe I'll be able to buy a new desktop in a couple of years without paying 2x msrp on things. suh#2879: can anyone point me in the right direction for 3d animation datasets, or link me to a collab book with 2d video to 3d animation models axiom#3599: We’re pets of AI empowered multinational corporations? axiom#3599: That’s kind of hot. axiom#3599: :klaeiaPANIC: axiom#3599: :Amenervoussweating: axiom#3599: :ShySmile: alstroemeria313#1694: ...what axiom#3599: Idk… the payload seems self explanatory to me… nshepperd#2316: are these screenshots of screenshots of screenshots some kind of commentary on... something axiom#3599: Ummm, its a proving technique? axiom#3599: Okay, let’s consider a re-typsetting axiom#3599: Sectioning axiom#3599: Hmm… axiom#3599: Working Title: The Case for the Existence of AGI axiom#3599: :aniStudy: gollark#3909: It isn't a very good case.
gollark#3909: I'm not sure what you intend to prove by repeatedly nestedly screenshotting things. axiom#3599: You’ve read it? I haven’t written it yet… gollark#3909: I read it acausally, yes. SpaceX&OpenAI#9998: https://twitter.com/kdnuggets/status/1451959616372461576 axiom#3599: That’s a pretty good trick! What’s the news got to do with anything? gollark#3909: Exactly 3. axiom#3599: kdnuggets has a man on the inside of every AI company??? axiom#3599: What? axiom#3599: They have perfect knowledge of the state of things? axiom#3599: What a preposterous latent hypothesis. axiom#3599: Sure, GPT-3 predict all possible comebacks, order by descending cogency. Kia#2550: Ow god...Hm, Just Recommend Don't post Screenshots of Screenshots and Just Send a Message axiom#3599: I’m literally latexing it, i got you fam Kia#2550: Ah... nshepperd#2316: i would recommend, um... taking your meds, possibly... Kia#2550: Yeah... Kia#2550: Also It might count as spam axiom#3599: I see, survey results noted. Kia#2550: I :citationneeded: axiom#3599: Oh, what a lovely probing.
axiom#3599: “Oh, what is that, Aumann? I’ve missed you, I hear you calling.” axiom#3599: “That’s mean, Hassabis! You just measured Lee Sedol and moved on?” Kia#2550: Okay yeah Kia#2550: Um Just Go Talk in #off-topic nshepperd#2316: here's something else to go "wat" about: how does jax.checkpoint(prevent_cse=True) prevent XLA common subexpression elimination from undoing the checkpointing? nshepperd#2316: you might think it was something reasonable, like having an XLA option to turn cse off nshepperd#2316: but no, instead the backward pass draws a random number between 0 and 1 and compares it to 2 nshepperd#2316: and does the backward pass only if the answer is LT alstroemeria313#1694: Ahah AI_WAIFU#2844: I'm looking forward to this breaking down the road when they upgrade their compiler. nshepperd#2316: oh, it totally will, yeah nshepperd#2316: somehow this was considered simpler than just adding a feature to XLA. which they develop as well 𓅬 gabriel_syme 𓅬#3220: but can you imagine giving this as an answer to a leetcode interview 𓅬 gabriel_syme 𓅬#3220: pretty chad nshepperd#2316: eheh axiom#3599: omg i got something to show you lmfao axiom#3599: I’ll chat with gwern first as I typeset it axiom#3599: Omg sweet jesus, that many simultaneous publications? axiom#3599: The world record for simultaneous publications… axiom#3599: What is it right now?
axiom#3599: Idk, insufficient! axiom#3599: :wink: axiom#3599: omg i can’t stop laughing Louis#0144: :thisup: nev#4905: n? nev#4905: https://cdn.discordapp.com/attachments/729741769738158194/901824032670900274/Screen_Shot_2021-10-24_at_16.26.40.png axiom#3599: Huh? Who the fuck are you? An idiotic goose? Ok, noted. axiom#3599: :aniStudy: axiom#3599: omg *im tickled* 𓅬 gabriel_syme 𓅬#3220: This would have been fun times if I wasn't in pain axiom#3599: Pain… i got a book on this hold on… axiom#3599: https://cdn.discordapp.com/attachments/729741769738158194/901826319778713610/image0.jpg axiom#3599: Omg, just gimme one second… axiom#3599: It could be terminal…. axiom#3599: To be sure i’d have to solve the halting problem…. axiom#3599: I’ll get a consult axiom#3599: :aniStudy: axiom#3599: https://cdn.discordapp.com/attachments/729741769738158194/901827312641802260/image0.jpg axiom#3599: Omg! I have to master the art of conjecturing for this POOR patient! axiom#3599: Omg! I need an AI, I’m coming!
axiom#3599: https://cdn.discordapp.com/attachments/729741769738158194/901827622441459722/image0.jpg axiom#3599: Omg *im tickled* axiom#3599: :aniStudy: Kia#2550: Ah god... Kia#2550: Probably go get some rest first and Go drink a Glass of water Kia#2550: Also definitely not the channel to talk about this Kia#2550: Not the place to axiom#3599: Okay sorry! Kia#2550: Yeah:sus: Louis#0144: can we pin this Louis#0144: not even star Louis#0144: just pin Louis#0144: i want that on a poster Kia#2550: Just goose posting in the Starboard... Expected:goose10: random_lurker99b#8614: so I havent spent time on the checkpointing API but you dont want to turn off CSE for your entire jaxpr/xla computation, just for the rematted region, and you dont subcompile with separate options, so im not sure it is as simple as turning off an option, it would probably interfer quite a bit with the overall flow. nshepperd#2316: i mean an option per XLA op or something nshepperd#2316: or something to label each op with a number or a gensymed symbol so that ops with different labels aren't considered equivalent nshepperd#2316: but honestly with the remat thing i want to write disabling CSE entirely probably is what I want, bc I need complete control of the execution graph random_lurker99b#8614: I would see how one would prefer just doing it on the jax api side vs asking for core XLA op changes which will make XLA compilation for all users slower 🙂 random_lurker99b#8614: >bc I need complete control of the execution graph
random_lurker99b#8614: how so? nshepperd#2316: um nshepperd#2316: it's an algorithm which takes the graph and rewrites it into an optimized form nshepperd#2316: with potentially multiple remats for every op random_lurker99b#8614: what are you rewriting here random_lurker99b#8614: jaxprs, hlos, .. nshepperd#2316: hlo probably random_lurker99b#8614: not quite following why you wouldnt want standard passes to be run after you have done your rewriting nshepperd#2316: ideally this would be a XLA compiler pass which would run after cse but nshepperd#2316: cse breaks rematerialization, defeating the whole point random_lurker99b#8614: what are you optimizing? nshepperd#2316: materialization schedule that computes the output in minimum time subject to a constraint on maximum vram usage nshepperd#2316: aka. approximate solution to the pebbling problem random_lurker99b#8614: are you planning to use mlir? nshepperd#2316: really most optimization passes should be done *before* this rewriting, so that it has an accurate model of the actual memory and compute cost of each node nshepperd#2316: idk what that is nshepperd#2316: back in tensorflow i just rewrote the tensorflow op graph and that was good enough random_lurker99b#8614: this is a bit more involved here probably. MLIR is infrastructure to allow you to do this in a modular way. You can go from jax to mhlo https://github.com/tensorflow/mlir-hlo, do some common mlir passes such as cse, then do your rewrites, then extract something runnable random_lurker99b#8614: are you a postgrad in ml compilers or so? nshepperd#2316: nope
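For intuition about what a rematerialization schedule trades off, here is the textbook baseline, not the twremat/pebbling solver being discussed: for a purely sequential chain of n layers, keep roughly every √n-th activation and recompute the segments in between during the backward pass, cutting peak activation memory from O(n) to O(√n) at the cost of about one extra forward pass.

```python
import math

def sqrt_checkpoints(n_layers):
    """Classic O(sqrt(n)) strategy for a purely sequential model: keep every
    k-th activation (k ~ sqrt(n)); everything in between is recomputed
    during the backward pass."""
    k = max(1, round(math.sqrt(n_layers)))
    return list(range(0, n_layers, k))

print(sqrt_checkpoints(48))  # [0, 7, 14, 21, 28, 35, 42]
```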
random_lurker99b#8614: is the goal to have a personally/group usable tool or to publish something or contribute to open source? nshepperd#2316: I just want to write the thing so that we can use it lol nshepperd#2316: who'd have thought the math would be the easy part random_lurker99b#8614: the xla compiler will do lots of stuff with fusion, layout, memory assignment. If you do a rewrite somewhere high in the stack this will definitely change cost a lot, so you cant go too fine-grained on hlo rewrites. In particular the fusion will probably mess a lot with your remat assumptions.. you werent planning to write an xla pass? nshepperd#2316: how do i write an xla pass random_lurker99b#8614: generally by implementing the HloPassInterface to extend the PassPipeline in the xla compiler https://github.com/tensorflow/tensorflow/blob/dffd32796e53749d4d3ce90d901b6c04c259d69f/tensorflow/compiler/xla/service/hlo_pass_interface.h nshepperd#2316: and can i do it without patching xla and installing a recompiled modified version on anything i want to use random_lurker99b#8614: probably not no nshepperd#2316: so prepare for a lifetime of :works_internally:, basically random_lurker99b#8614: so HLO rewrites are a good place to do things that are somewhat robust to xla optimizations, i.e. higher level restructuring that dont care about a few ops being fused here and there random_lurker99b#8614: 👋 random_lurker99b#8614: (if you are editing raw hlo outside the compiler infra with no idea where you are in the pass pipeline) alstroemeria313#1694: @nshepperd is it possible at all to analyze pytorch memory usage and figure out where to do the checkpoints alstroemeria313#1694: i guess the graph is dynamic though nshepperd#2316: like to do this thing in pytorch? nshepperd#2316: not sure, this whole idea kind of relies on having a static graph alstroemeria313#1694: yeah :/ alstroemeria313#1694: like you would have to analyze it once and... idk alstroemeria313#1694: print out where to insert the checkpoints? alstroemeria313#1694: and then if the shapes change i guess they just stay there lol
alstroemeria313#1694: Still. This would actually be better than what we have now. alstroemeria313#1694: Which is guessing manually. alstroemeria313#1694: And then the shapes can change anyway. nshepperd#2316: yeah nshepperd#2316: but, I would have to figure out how to translate the remat schedule for the gradient computation back into normal checkpoint calls alstroemeria313#1694: *nods* nshepperd#2316: wasn't there a library for... staticifying pytorch computations nshepperd#2316: with a Tensor subclass or something alstroemeria313#1694: i forget alstroemeria313#1694: well, PyTorch/XLA alstroemeria313#1694: Does this alstroemeria313#1694: Or something close to it alstroemeria313#1694: was it this https://pytorch.org/docs/stable/fx.html alstroemeria313#1694: Ohh alstroemeria313#1694: @chilli had one that did tracing with *live* values, not proxies alstroemeria313#1694: Which is probably what you want so you can measure memory use. alstroemeria313#1694: I think it needed nightly at the time but I don't know if it still does now that 1.10 is out. nshepperd#2316: ooh nshepperd#2316: hmm well, i can sort of approximate it with proxies. as long as I have the shape and dtype, i can build a model to guess the memory/compute costs nshepperd#2316: that's what i did with tensorflow
alstroemeria313#1694: ahh nshepperd#2316: and some basic heuristics depending on the op name, like the estimated flops of a convolution nshepperd#2316: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/hlo_rematerialization.cc hm so basically this nshepperd#2316: but i need to plug in my advanced algorithm instead nshepperd#2316: and probably make it possible to somehow choose which algorithm to use from jax, bc it's pretty expensive to run nshepperd#2316: ugh so complicated chilli#5665: Wait this is totally doable in pytorch I think chilli#5665: I've actually been working on something similar alstroemeria313#1694: ohh? chilli#5665: Basically, given the forwards/backwards graph, you just want to decide what to recompute right? alstroemeria313#1694: yeah chilli#5665: Oh, and all in python alstroemeria313#1694: and, like... we need to be able to run the forward part already knowing what to discard, i think alstroemeria313#1694: bc it may not fit into memory if we keep all activations. chilli#5665: Well, that part is relatively easy I think chilli#5665: Or hmm chilli#5665: That part might be a bit more annoying, but is more of a minor thing alstroemeria313#1694: So like, you have to do an initial "trial" forward or smth? chilli#5665: Yeah, to trace it out chilli#5665: It traces out the forwards + backwards
chilli#5665: And then you can do whatever you want with it alstroemeria313#1694: *nods* chilli#5665: Such as recomputing some parts in the backwards alstroemeria313#1694: Yeah. chilli#5665: Well, you do need to write some graph modifications chilli#5665: But it is all in python chilli#5665: What kind of strategy are you looking to do though? alstroemeria313#1694: @nshepperd actually knows that part, i don't, but it's from some paper and computes the optimal locations for checkpoints alstroemeria313#1694: she has a version for tensorflow alstroemeria313#1694: from way back when alstroemeria313#1694: that takes a TF graph and rewrites it chilli#5665: Oh is this a paper called echo? chilli#5665: I think this won't be easy to do after optimization passes tbh chilli#5665: Like, at the mlir level chilli#5665: It's probably doable, but seems quite difficult nshepperd#2316: the first half is twremat from a paper called "efficient rematerialization for neural networks" nshepperd#2316: second half is from I need to write a paper explaining it ^^;; chilli#5665: Ok well, TL;Dr: totally doable in pytorch and fairly easy now I think tbh chilli#5665: Assuming the part you want to optimize is traceable ofc alstroemeria313#1694: it should be? like it's just a u-net
alstroemeria313#1694: in my use case chilli#5665: Yeah sure :) chilli#5665: That sounds doable nshepperd#2316: how does your tracing thing work alstroemeria313#1694: my diffusion+CLIP methods are generally not traceable rn but we only want to do the optimization on the diffusion model alstroemeria313#1694: and mb on CLIP alstroemeria313#1694: not the untraceable part in the middle chilli#5665: not sure how much detail you want chilli#5665: 🙂 chilli#5665: (or err, how much you know about PyTorch) chilli#5665: it's using this mechanism called `__torch_dispatch__` which sits under the dispatcher alstroemeria313#1694: is this in 1.10 btw? chilli#5665: uh, the tracing part no chilli#5665: but I think we'll have a release of functorch that works with 1.10 alstroemeria313#1694: ahh chilli#5665: so you won't need to use nightlies nshepperd#2316: that sounds pretty good nshepperd#2316: can we like, do the tracing part with batch size 1 or something. so it doesn't oom. then actually run it with full batch size? chilli#5665: mmmm chilli#5665: so...
chilli#5665: the "correct" way to do it is something called meta tensors chilli#5665: in PyTorch chilli#5665: which are basically tensors that only keep metadata about the tensor chilli#5665: and aren't actually materialized chilli#5665: the answer to this is maybe nshepperd#2316: right chilli#5665: Like, I can definitely come up with cases where this breaks, unfortunately chilli#5665: for ex: ``` new_tensor = torch.ones(traced_arg.shape) traced_arg + new_tensor ``` chilli#5665: but I guess it should definitely be possible to rewrite the graph afterwards to work with your new batch size? alstroemeria313#1694: what if batch size 1 OOMs chilli#5665: well, what I was thinking that might be easier chilli#5665: is running on your CPU alstroemeria313#1694: (This is part of my use case) chilli#5665: if meta tensors work then this just solves all of your problems nshepperd#2316: then we might need batch size 0 lol nshepperd#2316: i remember surprisingly many things just worked with a dimension set to 0
nshepperd#2316: idk if the whole unet would but... maybe nshepperd#2316: or yeah meta tensors nshepperd#2316: yeah we'd have to rewrite all the mentions of batch size chilli#5665: tracing it on your CPU is probably easier then alstroemeria313#1694: how do you make a meta tensor chilli#5665: just `x = torch.randn(3, device='meta')` chilli#5665: or chilli#5665: `x.to('meta')` nshepperd#2316: huh chilli#5665: it doesn't have full coverage now I think chilli#5665: but they're pretty useful in a lot of situations alstroemeria313#1694: ``` NotImplementedError: Could not run 'aten::mm' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom buil d process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please vi sit https://fburl.com/ptmfixes for possible resolutions. 'aten::mm' is only available for these back ends: [CPU, CUDA, SparseCPU, SparseCUDA, SparseCsrCPU, BackendSelect, Named, ADInplaceOrView, Autogr adOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradHPU, A utogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast , Batched, VmapMode].
``` chilli#5665: haha nshepperd#2316: oh no chilli#5665: tragic alstroemeria313#1694: No matmul? alstroemeria313#1694: wtf lol chilli#5665: yeah, it's a work in progress I think chilli#5665: I don't know how full their coverage is alstroemeria313#1694: OH THIS IS 1.9 alstroemeria313#1694: Hold on alstroemeria313#1694: ``` NotImplementedError: Could not run 'aten::_cat' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom bu ild process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_cat' is only available for these backends: [CPU, CUDA, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, A utogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode]. ``` alstroemeria313#1694: nope~ nshepperd#2316: aww chilli#5665: haha, at least mm works now alstroemeria313#1694: yep
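A minimal illustration of the meta device chilli describes: shapes and dtypes propagate without any memory being allocated, but, as the errors above show, op coverage in 1.10 is still partial.

```python
import torch

# Meta tensors carry shape/dtype/device metadata but no storage, so you can
# propagate shapes through (supported) ops without allocating anything.
a = torch.empty(8, 512, device='meta')
w = torch.empty(512, 1024, device='meta')
out = a @ w                   # aten::mm has a meta kernel in 1.10 (per the check above)
print(out.shape, out.device)  # torch.Size([8, 1024]) meta

# Coverage is incomplete, though -- e.g. the aten::_cat call above raised
# NotImplementedError, so tracing a full U-Net still needs a real device.
```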
chilli#5665: yeah, it's a work in progress alstroemeria313#1694: Unfortunately I do cat stuff chilli#5665: if you want things to work now it's probably easiest to trace with CPU alstroemeria313#1694: It's a U-Net nshepperd#2316: @chilli so i need to use 1.10? and the tracing stuff is in functorch? chilli#5665: yeah, pretty much chilli#5665: well, we're still working on cutting the 1.10 branch chilli#5665: I think it's almost done? https://github.com/pytorch/functorch/tree/1.10 alstroemeria313#1694: batch size zero worked w/o crashing btw chilli#5665: not sure if Richard finished it yet alstroemeria313#1694: i just did a forward and backward chilli#5665: lol nice nshepperd#2316: eheh~ nshepperd#2316: see! alstroemeria313#1694: I don't have batch normalizations in this model alstroemeria313#1694: Or anything that refers to batch size ever chilli#5665: https://github.com/pytorch/functorch/blob/main/functorch/_src/eager_compilation.py#L85 chilli#5665: so, this is an example of some code that's basically re-implementing checkpointing for a function with this API chilli#5665: lol nshepperd#2316: awesome
chilli#5665: I guess I will give a warning that this kind of ... prototype stuff chilli#5665: there's still a bunch of core things that we're working on here chilli#5665: but it works for a lot of use cases I've used it for 😛 chilli#5665: for example, reducing CPU overhead, adding a caching layer, doing non-shape specialized tracing, using meta tensors, etc. chilli#5665: and we were also planning on writing some utilities to make it easier to work with the graph nshepperd#2316: my tf thing was pretty prototype, but it was enough to train gpt2 eheh chilli#5665: lol chilli#5665: yeah, just warning you so you don't get mad if the tracing breaks :blobsad: nshepperd#2316: got it~ nshepperd#2316: I'll play with this tomorrow and see if I can make it work. literally falling asleep now~ chilli#5665: haha, it's definitely the easiest way of making it work in PyTorch nshepperd#2316: seems easier than uhhh compiling xla, that's for sure alstroemeria313#1694: nightnight~ chilli#5665: haha, if you wanted to do it in Jax, you'd definitely want to do it before XLA chilli#5665: yeah, gnight alstroemeria313#1694: eheh could we build it into the haiku tracing stuff somehow chilli#5665: mmm nshepperd#2316: night~🌸 chilli#5665: it depends on what information you need alstroemeria313#1694: well, the shape inference happens then.
alstroemeria313#1694: So it knows the shapes chilli#5665: right, but do you need to know what happens in your backwards pass? alstroemeria313#1694: Ohh alstroemeria313#1694: Yeahhh chilli#5665: or what activations are actually being saved? alstroemeria313#1694: yeah it might chilli#5665: like, if you only need to know the forward pass then it's relatively easy to do chilli#5665: I don't know how you'd do it in Jax, maybe add a higher-order primitive? chilli#5665: mmm, I think it would be doable in Jax, but I suspect you need core framework changes chilli#5665: hmmm chilli#5665: maybe it could be implementable out of framework chilli#5665: you'd want to do something similar to what I'm doing here in eager compilation chilli#5665: trace out the forwards + vjp chilli#5665: partition chilli#5665: and then return a function with a custom vjp alstroemeria313#1694: mm~ chilli#5665: I don't know, seems kind of tricky alstroemeria313#1694: yeah alstroemeria313#1694: i most need it on TPU :/ chilli#5665: haha
alstroemeria313#1694: bc 16GB per core limit alstroemeria313#1694: Whereas I can get bigger GPUs. alstroemeria313#1694: And batch size 8. chilli#5665: the PyTorch stuff could theoretically work with pytorch/xla chilli#5665: but that would be even further in the future alstroemeria313#1694: yeah too many footguns chilli#5665: yeah alstroemeria313#1694: i was going to use pytorch/xla on tpu and i ran into another footgun alstroemeria313#1694: and didn't manage to solve it alstroemeria313#1694: so ported to JAX chilli#5665: yeah, it's tricky chilli#5665: they'll probably make it better eventually chilli#5665: the fundamental approach is sound I think chilli#5665: so it's just a matter of fixing enough footguns that people don't usually run into them chilli#5665: lol chirp#4545: I’m curious if we’ve gotten many requests for some sort of online interface/browser/explorer for The Pile StellaAthena#3530: I haven't seen any. Why? chirp#4545: Been wondering more generally about people’s needs around datasets chirp#4545: Like aside from the data itself, what do people need for documentation / browsing / exploration / etc chirp#4545: I imagine there’s some unserved needs here but I’m not sure if it’s something people care about all that much
Nokmopillar (-π, π)#2665: hi eleutherAI discord, have been lurking for a while learning a ton about GPT and goose memes in here (hello fellow edmontonian!), I am wondering if anyone had insight on what are good resources to start with while learning how to build data libraries? Working on something fun with 100 year old english conventions and wanted to see if I could merge some victorian era writing databases into a viable library! Awesome_Ruler_007#7922: I could swear there was a `ddpm` channel here; I have just started to learn a bit about ddpm and was wondering - instead of the denoising objective, couldn't someone potentially also use it for object prediction - like for each step, the objective essentially is the same - except we would be doing reverse-prediction over time rather than denoising. basically, for building world models, you could always take a video of how an object changes over time, and try to recover how the object looked n timesteps back. cfoster0#4356: That channel exists in TPU podcast Awesome_Ruler_007#7922: ahh, I was wondering where I had seen that. thanks! Technobird22#2055: I know I've asked similar things before, but wouldn't the Telsa T4 perform better than an A4000 at half precision as it has more tensor cores? pragmaticml#1730: Slides for Stanford ML Systems Design are public -- seems like a pretty neat resource for the half of ML that's not typically part of an academic curriculum: https://stanford-cs329s.github.io/syllabus.html special k#3707: Does anyone know if CLIP is trained on gelbooru or any rule 34 archive sites? EricHallahan#1051: *It's a mystery.* 👻 special k#3707: omw to spend Google compute resources on trying to perfect ai coom EricHallahan#1051: Well you can go back to NAI to do that sort of development if that is what you would like to do. special k#3707: kuru is scared of pictures nostalgiahurts#3408: you could try the strategy used in "multimodal neurons": pass a bunch of similar images through the image encoder, look for neurons with high activations, find those neurons on Microscope, and see if the feature visualizations/dataset samples match what you expect Kia#2550: @kurumuz :grimberk: gollark#3909: The Ampere tensor cores are more powerful. Technobird22#2055: Cool, that explains it thanks! nshepperd#2316: xla compiler actually has an option to turn off xla passes including cse. but afaict you can't access it from jax :( nshepperd#2316: so i guess i need to do the crazy random number between 0 and 1 thing nshepperd#2316: like there's this whole DebugOptions proto https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/xla.proto with a bunch of settings, but python gets a version of it with almost everything removed
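For reference, the user-facing flag this thread is about: a minimal usage sketch of `jax.checkpoint` (a.k.a. `jax.remat`), with `prevent_cse` left at its default of `True`. The model and shapes below are made up for illustration.

```python
import jax
import jax.numpy as jnp

def block(x, w):
    # stand-in for an expensive layer
    return jnp.tanh(x @ w)

# jax.checkpoint drops this block's intermediates in the forward pass and
# recomputes them during the backward pass. prevent_cse=True (the default)
# inserts the CSE-defeating trickery discussed above so XLA doesn't merge
# the recomputation back with the original forward.
checkpointed_block = jax.checkpoint(block, prevent_cse=True)

def loss(x, w):
    for _ in range(4):
        x = checkpointed_block(x, w)
    return jnp.sum(x ** 2)

x = jnp.ones((8, 128))
w = jnp.ones((128, 128)) * 0.01
grads = jax.grad(loss, argnums=1)(x, w)
print(grads.shape)  # (128, 128)
```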
StellaAthena#3530: Cool thing I just discovered: you can use the latex bot to format complicated expressions that go beyond a single line using block code: \`\`\`latex \begin{align\*} \mathcal{K} ( (x,y,z), (x', y', z') ) &= \langle \phi(x, y, z) , \phi(x', y', z') \rangle \\ &= \langle (x,y,z,xy,xz), (x',y',z',x'y',x'z') \rangle \\ &= xx' + yy' + zz' + xyx'y' + xzx'z'. \end{align\*} \`\`\` StellaAthena#3530: ```latex \begin{align*} \mathcal{K} ( (x,y,z), (x', y', z') ) &= \langle \phi(x, y, z) , \phi(x', y', z') \rangle \\ &= \langle (x,y,z,xy,xz), (x',y',z',x'y',x'z') \rangle \\ &= xx' + yy' + zz' + xyx'y' + xzx'z'. \end{align*} ``` TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/902262956925005854/193204646687408129.png alstroemeria313#1694: hey has anyone tried ```python def vn_entropy(x): s = torch.svd(x).S s = s / s.sum()
return torch.special.entr(s).sum() def vn_entropy_model(model): total = 0 for module in model.modules(): if isinstance(module, nn.Linear): total = total + vn_entropy(module.weight) return total``` as a regularizer alstroemeria313#1694: it's slow alstroemeria313#1694: bc does an SVD on the weights alstroemeria313#1694: or anything like it StellaAthena#3530: Here's a fun bug I just discovered with WandB: The url for a group of runs is supposed to be `ttps://wandb.ai/eleutherai/PROJECT_NAME/groups/GROUP_NAME/workspace?workspace=user-stellaathena` WandB allows you to set a group of runs to have an empty name. However the browser doesn't like the url `https://wandb.ai/eleutherai/PROJECT_NAME/groups//workspace?workspace=user-stellaathena` and redirects you to `https://wandb.ai/eleutherai/PROJECT_NAME/groups/workspace?workspace=user-stellaathena`. At which point WandB notices that you don't have a `GROUP_NAME` and decides that you're supposed to be on `https://wandb.ai/eleutherai/PROJECT_NAME/workspace?workspace=user-stellaathena`. Thus making the group of runs inaccessible through the web browser, and in particular preventing you from renaming the group to anything else. Louis#0144: What prevents VQGAN from having a catastrophic modal collapse on the VQ tokens Louis#0144: Is there anything that prevents it? Louis#0144: or do we just pray the noise is enough Louis#0144: I think it might be the latter cfoster0#4356: Mode collapse or codebook collapse?
Louis#0144: codebook collapse elderfalcon#4450: I scratched my head when I saw your question and read the paper. Not really a lot of talk it seems about preventing collapse. It seems like they're just leaning on the more extensive parts of the VQVAE framework that they inherited from to prevent collapse (which I think they emphatically claim in the original paper is prevented): https://arxiv.org/abs/1711.00937 Louis#0144: Hm Louis#0144: I see Louis#0144: Ty cfoster0#4356: You should take a look at the Improved VQGAN paper cfoster0#4356: They recommend L2 normalization on the codebook vectors / embeddings and having them be much lower dimensional (ie 8 or 32 dim) nev#4905: 8 dim 🤔 nev#4905: that's so small that they can even be compressed into three RGBs MicPie#9427: Very interesting! This setup would minimize the rank of the weights? There is a high similarity to the concepts in section 5.2 and 5.3 of https://web.stanford.edu/class/cs168/l/l18.pdf but they use the l1 norm on the singular values = “nuclear norm minimization”. I’m curious what your intuition on that approach is. To get easier to interpret weights? Are low rank weights desirable? nshepperd#2316: the only similar thing i've seen is the spectral normalization thing for GANs, where they divide the weight by the torch.svd(x).S.max() to limit the largest singular value to 1 nshepperd#2316: as an alternative way of constraining the discriminator to be lipschitz or something alstroemeria313#1694: How do you get PyTorch Lightning to print the actual exception a data loader worker process died from alstroemeria313#1694: How am I supposed to debug code that crashes after eight hours if it doesn't even print a stack trace or exception! alstroemeria313#1694: not... quite alstroemeria313#1694: it's an attempt to regularize the "amount of information", roughly, contained in the weights
alstroemeria313#1694: and they approximate it in a fast way that doesn't involve actually doing an SVD nshepperd#2316: ahh yeah, the stateful thing with the left and right vectors alstroemeria313#1694: screw it i'm doing ```python except Exception as err: print(f'{type(err).__name__}: {err}', file=sys.stderr) raise ``` alstroemeria313#1694: PT Lightning's docs are opaque as usual alstroemeria313#1694: I hate it alstroemeria313#1694: I am only using it to get DDP nshepperd#2316: pain :/ alstroemeria313#1694: Can I write my own DDP train loop alstroemeria313#1694: So I can actually understand what is going on alstroemeria313#1694: I've done DeepSpeed from scratch before, how bad can DDP be alstroemeria313#1694: ...It died again WITHOUT PRINTING ANYTHING IN THE SLIGHTEST USEFUL. alstroemeria313#1694: It was an OOM nshepperd#2316: 😭 MicPie#9427: Plain DDP with PyTorch is not too hard to setup. I have a setup here that I used for the CLASP project: https://github.com/MicPie/clasp/blob/main/train/train_multigpusim.py Maybe you need some small tweaks with the latest PyTorch version, but that should be it.
If you have questions just ping me. :hap: alstroemeria313#1694: ty :) MicPie#9427: Maybe this article is also interesting for you as it covers an approach to create constrained parameters in PyTorch: https://lernapparat.de/computed-parameters/ alstroemeria313#1694: I have now added a custom callback to print exceptions alstroemeria313#1694: In addition to explicitly printing exceptions in the data loader alstroemeria313#1694: WTF PT Lightning alstroemeria313#1694: there is not really anything alstroemeria313#1694: in the original model type alstroemeria313#1694: And in fact most of the codes collapse alstroemeria313#1694: They added a Gumbel quantization model type later alstroemeria313#1694: Which prevents collapse nshepperd#2316: i think their "solution" before gumbel was to detect collapsed codes that aren't used any more and reset their vectors to a random latent output by the encoder nshepperd#2316: which sorta works but not really alstroemeria313#1694: i don't think they even did this. the two imagenet models they released were badly collapsed alstroemeria313#1694: or did they but it didn't work? nshepperd#2316: huhh nshepperd#2316: you might be right actually bc i didn't see codebook reset code in the training code they released nshepperd#2316: oh it was jukebox that introduced the codebook reset thing alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/902533809038323772/unknown.png nshepperd#2316: anyway i tried it when i finetuned vqgan and it helped a bit but was still bad
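A minimal sketch of the codebook-reset trick discussed above (the Jukebox-style "revive dead codes" heuristic): keep a running count of how often each code is selected and re-initialize stale codes from random encoder outputs in the current batch. The usage tracking, threshold, and shapes are assumptions for illustration, not the CompVis or Jukebox code:
```python
import torch

@torch.no_grad()
def reset_dead_codes(codebook, code_usage, encoder_outputs, threshold=1.0):
    """codebook: (num_codes, dim) embedding weight
    code_usage: (num_codes,) running/EMA count of how often each code was picked
    encoder_outputs: (batch * h * w, dim) flattened encoder vectors from this batch"""
    dead = code_usage < threshold
    n_dead = int(dead.sum())
    if n_dead == 0:
        return
    # re-seed each dead code with a random encoder output so it lands somewhere useful
    idx = torch.randint(0, encoder_outputs.shape[0], (n_dead,), device=codebook.device)
    codebook[dead] = encoder_outputs[idx].to(codebook.dtype)
    code_usage[dead] = threshold  # give revived codes a grace period before resetting again
```
Calling something like this every few hundred steps is the usual pattern; as noted above, it helps but doesn't fully solve collapse.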
alstroemeria313#1694: > (the first 1024 codes in the 16384 model) nshepperd#2316: wow alstroemeria313#1694: > Here's the visualization of the first 1024 codes in the f=8 GumbelVQ's codebook for comparsion. https://cdn.discordapp.com/attachments/729741769738158194/902533932942233670/unknown.png nshepperd#2316: that's really bad alstroemeria313#1694: Yep alstroemeria313#1694: I like the f=16 models better but there is no general use Gumbel f=16 still afaik alstroemeria313#1694: Do I have to do it myself alstroemeria313#1694: I don't really want to, I'd rather focus on diffusion nshepperd#2316: ^^;; alstroemeria313#1694: I trained a Gumbel f=16 once but it was specialized alstroemeria313#1694: Worked great though. alstroemeria313#1694: I just wish I had annealed the temperature faster bc we got stuck training until it had gone down enough alstroemeria313#1694: It was on one GPU alstroemeria313#1694: idk it feels like the CompVis people figured out they got better metrics with f=8 nshepperd#2316: eheh alstroemeria313#1694: When to my mind one of the major advantages of VQGAN is that you could do f=16 at all to begin with alstroemeria313#1694: Actually at some point I should try a VAE with a diffusion decoder. nshepperd#2316: f=8 doesn't let you do very big images with transformer alstroemeria313#1694: yeah alstroemeria313#1694: You can do 256x256 though.
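For concreteness, the sequence lengths behind the f=8 vs f=16 trade-off (plain arithmetic, not from either paper):
```python
# number of VQ tokens the second-stage transformer has to model at 256x256
image_size = 256
for f in (8, 16):
    side = image_size // f
    print(f"f={f}: {side}x{side} = {side * side} tokens")
# f=8:  32x32 = 1024 tokens
# f=16: 16x16 = 256 tokens
```
Since transformer attention cost grows roughly quadratically in that length, f=16 is what makes larger canvases practical.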
nshepperd#2316: diffusion decoder would be really nice to not have the gross convolutiony artifacts alstroemeria313#1694: Like. What if I took the discrete VAE encoder's output. Quantized it using a low dim normalized codebook. alstroemeria313#1694: Upsampled it, and appended the channels to a diffusion model input as the condition. alstroemeria313#1694: And then trained the encoder and decoder end to end. nshepperd#2316: yeah! alstroemeria313#1694: Like the loss would be the normal diffusion MSE between the model output and the v target but the clean real would also be fed into the encoder and the encoder trained alongside. nshepperd#2316: so they would co-learn a useful latent specifically for diffusion conditioning alstroemeria313#1694: Yeah. nshepperd#2316: and diffusion would hallucinate the details that are too small to make it into the latents alstroemeria313#1694: Yep alstroemeria313#1694: Instead of the VQGAN adversarial loss. nshepperd#2316: so the result would still look high quality, even with larger f alstroemeria313#1694: Then train the second stage transformer. alstroemeria313#1694: Which doesn't need to know about the diffusion decoder at all bc it only sees the encoder indices. alstroemeria313#1694: you could use a short DDIM schedule for decoding to rank with CLIP alstroemeria313#1694: Then re-decode the ones you want to keep with a longer schedule. alstroemeria313#1694: Or progressively distill the decoder nshepperd#2316: huh yeah alstroemeria313#1694: ...Extra fun idea. Condition the decoder on *the CLIP embedding* of the image's *caption*. nshepperd#2316: eheh
alstroemeria313#1694: And then condition the second stage transformer on it too. nshepperd#2316: style transfer! alstroemeria313#1694: Like so the decoder can put in the right kind of details for the second stage transformer's prompt nshepperd#2316: or you can condition the decoder on a clip embedding of a different image for image style transfer alstroemeria313#1694: Which is admittedly kind of... You can do this w/ a single stage alstroemeria313#1694: No need for the transformer alstroemeria313#1694: But the transformer might help you get better global coherence w/ a decoder that you trained on patches. nshepperd#2316: earlier i was thinking about whether you could train a hierarchical diffusion of some kind nshepperd#2316: like this VAE idea but the thing that models the latents is another diffusion instead of a transformer. and you train the whole stack end to end alstroemeria313#1694: huh nshepperd#2316: so you have a continuous latent instead of discrete alstroemeria313#1694: oh, how would you stop it from just sticking all the information in nshepperd#2316: and instead of KL term, the encoder just has to balance making a useful latent that the outer diffusion can use, and making a latent that is easy for the inner diffusion to model nshepperd#2316: by trying to minimize both diffusion mse losses at once alstroemeria313#1694: ah alstroemeria313#1694: ...Can you somehow train multi-stage base model + upscaler pipelines end-to-end alstroemeria313#1694: I mean where the "latent" is a normal RGB low-res image. alstroemeria313#1694: Or would the losses compete and make the low-res image look bad in order to inject information into it nshepperd#2316: yeah they would compete i guess nshepperd#2316: that might break it idk
nshepperd#2316: like you'd have to be giving the upscaler the single-step pred as conditioning too? or something alstroemeria313#1694: oh nshepperd#2316: instead of an actual sample from the low res model alstroemeria313#1694: Like you would have to save them alstroemeria313#1694: And you couldn't do different schedules alstroemeria313#1694: Even though it makes sense to do more timesteps for the base model than the upscaler during inference. ewald#7730: i have an off-topic question about GPT3's training ewald#7730: one critique was that the models that were a bit smaller weren't trained to saturation nshepperd#2316: um... you could do this. but the low res diffusion outputs 3 rgb channels + whatever latents the encoder made ewald#7730: so it's hard to say if they may have been just as good as the "big" GPT-3 ewald#7730: what are your opinions on that? CRG#8707: No? CRG#8707: They are overtrained if anything nshepperd#2316: so the latents constitute the extra information for upscaling that the low res is allowed to output alstroemeria313#1694: ahh ewald#7730: are they? CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/902543499969306664/power-law.png ewald#7730: yes that image CRG#8707: All the models were trained with the same 300B tokens nshepperd#2316: so, from the low res model you get a rgb sample, plus a bunch of latents. use clip to rank the samples, then feed the best ones with their latents into the upscaler
ewald#7730: but how long, how many times...? EricHallahan#1051: You can see that the small models were trained way past compute-optimal. CRG#8707: One epoch EricHallahan#1051: This is all the one-epoch regime. ewald#7730: yes. i'm talking about the light green ones EricHallahan#1051: Those are huge models though. ewald#7730: yes, but smaller than the "big" GPT-3 ewald#7730: so these got the same tokens as the "big" GPT-3, and all of it one-epoch... ok. CRG#8707: Bigger models use compute more efficiently (at the current regime) ewald#7730: you mean they use tokens more efficiently? CRG#8707: Yes ewald#7730: if they would use compute more efficiently, then the yellow one would use less compute than the green one to reach e.g. 3 validation loss ewald#7730: ok CRG#8707: Depends on the compute budget CRG#8707: https://arxiv.org/abs/2001.08361 ewald#7730: ok ewald#7730: so the limiting factor in GPT-3 is not so much the training data ewald#7730: but more the model size? ewald#7730: or neither? would the model improve if there were a 2nd epoch? EricHallahan#1051: Another epoch would change the training dynamics and it would begin to rapidly overfit.
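For reference, the fits in the Scaling Laws paper linked above all take the same power-law form in parameters N, dataset size D, and optimally-allocated compute C; the exponents are small (roughly in the 0.05–0.1 range per the paper), which is why loss keeps falling smoothly as any of the three grows:
```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C_{\min}) \approx \left(\frac{C_c}{C_{\min}}\right)^{\alpha_C}
```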
StellaAthena#3530: @ewald I recommend you check out some of the existing foundational literature on language models. People have linked to some good papers, but I've also been meaning to organize our "newbie reading list" better so I can dump papers on you in a minute ewald#7730: thanks, sounds good! EricHallahan#1051: This list is pretty good. ewald#7730: so the amount of data is the bottleneck? (otherwise it probably wouldn't overfit?) ewald#7730: thx! StellaAthena#3530: __For the questions you're asking specifically I would recommend:__ Language Models are Unsupervised Multitask Learners (GPT-2): https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf Exploring Transfer Learning with T5: https://arxiv.org/abs/1910.10683 Language Models are Few-Shot Learners (GPT-3): https://arxiv.org/abs/2005.14165 One Epoch Is All You Need: https://arxiv.org/abs/1906.06669 The Pile: An 800GB Dataset of Diverse Text for Language Modeling: https://arxiv.org/abs/2101.00027 Deduplicating Training Data Makes Language Models Better: https://arxiv.org/abs/2107.06499 Scaling Laws for Neural Language Models: https://arxiv.org/abs/2001.08361 Scaling Laws for Autoregressive Generative Modeling: https://arxiv.org/abs/2010.14701 ewald#7730: great, thanks a lot! StellaAthena#3530: Those are loosely sorted into "studies of language models," "how to train a good language model," and "scaling laws" though there's some overlap between those categories (e.g., GPT-3 has an extensive section on scaling) nev#4905: pin? StellaAthena#3530: There's a pinned post with a lot of these papers, and I'm hoping to write a brief blog post later this week that will be a better pin
Orz#3023: awesome nev#4905: is specvqgan collapsed? https://cdn.discordapp.com/attachments/729741769738158194/902608423005343854/unknown.png nev#4905: compare to this: nev#4905: https://cdn.discordapp.com/attachments/729741769738158194/902609253842444368/Screen_Shot_2021-10-26_at_20.26.50.png nev#4905: might be why it won't optimize with AudioCLIP :sus: alstroemeria313#1694: idk those look different alstroemeria313#1694: there are a fair number with that red dot in the upper right though alstroemeria313#1694: also i optimized with partially collapsed ones all the time and it worked nev#4905: but different enough to descend through them? nev#4905: a lot have very similar structure alstroemeria313#1694: yeah, the imagenet one that grid is from worked fine with my methods nev#4905: this might be even worse alstroemeria313#1694: It mostly means your reconstructions are not as good as they would be alstroemeria313#1694: Because your effective codebook size is smaller. nev#4905: hmm I'm converting spectrograms to waveforms and back nev#4905: mmmmm alstroemeria313#1694: oh alstroemeria313#1694: does the model use feature engineering nev#4905: both do :berk: nev#4905: audioclip uses spectrograms and for specvqgan it's in the name
alstroemeria313#1694: ohh alstroemeria313#1694: you can backprop through spectrogram creation right nev#4905: yep alstroemeria313#1694: yay :) nev#4905: might be the part with the issue StellaAthena#3530: My brain is a little melty right now. I asked someone to compute the kl div between two datasets and they came back with the very reasonable question "how are the distributions defined." What is it that I am meaning to ask them to do? Dashiell#8739: I'm not sure what the dataset looks like, but one could calculate the KL divergence between the empirical distributions? Depending on how much it makes sense to create the empirical distribution from whatever the features are? alstroemeria313#1694: there are often bad ways to define a distribution over a dataset and better ways alstroemeria313#1694: point masses at each sample is bad bc then the distributions will probably not have overlapping support Dashiell#8739: depending on how many samples and how many dimensions your distribution is in alstroemeria313#1694: stuff like finding the empirical distribution of characters or the empirical distribution of tokens (if it's a text dataset) is more likely to work with KL but may not give you the information you want alstroemeria313#1694: yeah alstroemeria313#1694: i'm used to images, where you would actually have to have the same image in both datasets for the supports to overlap Dashiell#8739: lol alstroemeria313#1694: :bigbrain: way to define a distribution over a dataset is to train a model on it StellaAthena#3530: uh Dashiell#8739: maybe some sort of kernel density estimation --> KL divergence? I dunno, I'm assuming @StellaAthena wants this more-or-less non-parametric or else the distribution would be obvious StellaAthena#3530: I wanted to compare the information theoretic similarity of the two datasets
alstroemeria313#1694: are they text or images or StellaAthena#3530: Text mgostIH#0245: This can be solved by optimal transport alstroemeria313#1694: But then you're not using KL divergence. mgostIH#0245: Ye because KL divergence is trash compared to optimal transport metrics alstroemeria313#1694: Can you even do optimal transport for text. Dashiell#8739: unless you're interested in the "information theoretic similarity" 😛 alstroemeria313#1694: Yeahhh alstroemeria313#1694: OT needs the space to have a sensible distance metric too. alstroemeria313#1694: Yeah what if you trained a model on one dataset and then computed the log likelihood of the other Dashiell#8739: I think you could do something like a kernel density estimation in some model's (GPT/BERT/word2vec/whatever) latent vector space and then calculate the KL divergence StellaAthena#3530: fuck migraine brain mush Dashiell#8739: Wait, @StellaAthena do you want to know which dataset has _more_ information? Then what @alstroemeria313 just said works great Dashiell#8739: if you want to know how much their semantic/information content _overlaps_ then I think something like SentenceBERT would put both datasets in the same space and you could do the KL divergence their alstroemeria313#1694: In D_KL(P || Q), the distribution defined by the model trained on the 1st dataset is the Q. alstroemeria313#1694: And the 2nd dataset is the P. alstroemeria313#1694: I think? Dashiell#8739: ohhhh alstroemeria313#1694: > The Kullback–Leibler divergence is then interpreted as the average difference of the number of bits required for encoding samples of P using a code optimized for Q rather than one optimized for P. Dashiell#8739: I misread you
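Spelling out the estimator being sketched: train a language model q on the first dataset and, ideally, a model p on the second, then average the log-likelihood gap over held-out samples from the second. This is a sketch of the idea rather than an exact recipe, since both models only approximate their datasets:
```latex
D_{\mathrm{KL}}(P \,\|\, Q) = \mathbb{E}_{x \sim P}\big[\log P(x) - \log Q(x)\big]
\approx \frac{1}{N} \sum_{i=1}^{N} \big[\log p_\theta(x_i) - \log q_\phi(x_i)\big], \qquad x_i \sim P
```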
Dashiell#8739: hmmmm alstroemeria313#1694: Since an autoregressive LM gives you frequency tables for encoding the data. Dashiell#8739: Would that be calculating their KL divergence or their mutual information? alstroemeria313#1694: KL divergence Dashiell#8739: I think you're right though Dashiell#8739: ahh, yeah, it wouldn't be symmetric alstroemeria313#1694: Yep alstroemeria313#1694: Also you should probably compute the log likelihood of the 1st dataset using the model too, for comparison, since it's not going to be a perfect model StellaAthena#3530: I have three training datasets, A, B, and C. I have a testing dataset D. I want to predict which training dataset will train a model best for evaluation on D by virtue of having the "same stuff" in it alstroemeria313#1694: Ah alstroemeria313#1694: Yeah what I proposed is just doing the training on all three and then checking all three models ^^;; Dashiell#8739: lol alstroemeria313#1694: Could use a smaller model than the one you intend to train. StellaAthena#3530: I am doing that, but I was looking for something information theoretic-y as well alstroemeria313#1694: Well this kind of is StellaAthena#3530: The third measure I was going to look at was the perplexity of LDA alstroemeria313#1694: Ahh StellaAthena#3530: Basically I want to be able to measure the extent to which the intuitive gloss "good datasets have the same kinds of stuff in it as the downstream task" is true, and how much of a difference having a "good dataset" (in this sense) makes StellaAthena#3530: I guess the literal distribution of words might be worth looking at StellaAthena#3530: like, a 1-gram model
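A minimal sketch of the 1-gram comparison: estimate smoothed token frequencies on each dataset and take the KL between them. The whitespace tokenization and add-alpha smoothing here are arbitrary illustration choices:
```python
import math
from collections import Counter

def unigram_kl(corpus_p, corpus_q, alpha=1.0):
    """D_KL(P || Q) between add-alpha-smoothed unigram distributions.
    corpus_p, corpus_q: iterables of documents (strings)."""
    counts_p, counts_q = Counter(), Counter()
    for doc in corpus_p:
        counts_p.update(doc.split())
    for doc in corpus_q:
        counts_q.update(doc.split())
    vocab = set(counts_p) | set(counts_q)
    total_p = sum(counts_p.values()) + alpha * len(vocab)
    total_q = sum(counts_q.values()) + alpha * len(vocab)
    kl = 0.0
    for tok in vocab:
        p = (counts_p[tok] + alpha) / total_p
        q = (counts_q[tok] + alpha) / total_q
        kl += p * math.log(p / q)
    return kl
```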
alstroemeria313#1694: Yeah Dashiell#8739: perhaps it would be good to use a couple different ways to calculate the distributions: 1-gram, word2vec, SentenceBERT, etc... Dashiell#8739: and see if they agree and/or if any of them matter StellaAthena#3530: Right, right now I'm doing small LM, LDA, and 1-gram Sphinx#2092: You might run into issues due to different sizes e.g. if one of A B or C was literally a subset of D, it might be good, if its only sample, hard to train a model. Sphinx#2092: Ignoring that, you could just the domain adaptation thing StellaAthena#3530: It's OSCAR, C4, and the Pile Sphinx#2092: Right, I'm more highlighting the potential issues with an information theoretic approach. StellaAthena#3530: Ah StellaAthena#3530: Yes, I have a suspicion that's already happening because at the 2.7B scale the Pile loses to OSCAR for pubmedqa Sphinx#2092: But yeah, there is some approach where you take the difference in LM scores. StellaAthena#3530: and do what with it? Sphinx#2092: well if you are stuck with only using A B or C, I guess take the one with the smallest delta? Sphinx#2092: though in real life, people usually either filter the dataset to get the ones with good scores Sphinx#2092: or do some domain adaptation thing. Sphinx#2092: I say this as someone who has never actually done this. StellaAthena#3530: right, thanks Sphinx#2092: https://aclanthology.org/P10-2041.pdf Sphinx#2092: But I've been told such things are pretty good for real-life things. StellaAthena#3530: ty
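The Moore–Lewis paper linked above scores each candidate sentence by the difference in per-token cross-entropy between an in-domain LM and a general-domain LM and keeps the lowest-scoring ones. A rough sketch; `nll_per_token` is a stand-in for whatever per-token negative log-likelihood your LM toolkit exposes:
```python
def cross_entropy_diff(sentence, in_domain_lm, general_lm):
    # lower = looks in-domain rather than just generically frequent
    return in_domain_lm.nll_per_token(sentence) - general_lm.nll_per_token(sentence)

def select_training_data(candidates, in_domain_lm, general_lm, keep_fraction=0.1):
    scored = sorted(candidates, key=lambda s: cross_entropy_diff(s, in_domain_lm, general_lm))
    return scored[: int(len(scored) * keep_fraction)]
```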
StellaAthena#3530: Okay, time for painkillers and bed Kia#2550: Take care! Also Get some proper rest to:goose6: nev#4905: thank you google https://cdn.discordapp.com/attachments/729741769738158194/902922335122763796/Screen_Shot_2021-10-27_at_17.10.47.png Kazumi#1297: I had a weird idea to try to reconstruct the original image using only a positional encoding, and it doesn't look that bad for a really small fully connected layers https://cdn.discordapp.com/attachments/729741769738158194/903014241014022144/out.png Kazumi#1297: had the training rate too high, lowered it and now it's basically the original image with a bit of graininess CRG#8707: Like SIREN? Kazumi#1297: I guess so, I wanted an example with really small networks, and thought of how to fit an image into a small network StellaAthena#3530: Huge news: https://twitter.com/dbamman/status/1453405812198547462?s=20 fe#0483: Interesting. I wonder if the publishing cabals will fight it. Teemochu#8740: > I'll note that this outcome didn't just happen on its own, but is the result of... the leadership of... @ErikStallman No relation btw bmk#1476: nominative determinism tpapp157#3643: Good news but not too surprising. Fair use (including academic research use) is interpreted very broadly by US law. Aran Komatsuzaki#5714: i was trying Transformer and AlphaGo Zero w/ MLE + adversarial objective in 2017 :berk: kindiana#1016: why are you looking allll the way through the backscroll lmao Louis#0144: Aran is super bored I guess Parker#3197: maybe it's more about distribution of datasets like movies? idk 𓅬 gabriel_syme 𓅬#3220: let's all find 1 comment from 1 year ago and comment on it. 1 year anniversary thing Parker#3197: https://cdn.discordapp.com/attachments/729741769738158194/903115478451511366/idk.png 𓅬 gabriel_syme 𓅬#3220: it's also nice, maybe ideas that never panned out come in the forefront
Parker#3197: > These updated proposals specified that the purpose of the circumvention > would be for scholarly research and teaching; the circumvention would have to be > undertaken by a researcher affiliated with a nonprofit library, archives, museum, or > institution of higher education; and that the **researcher would have to use reasonable > security measures to limit access to the corpus of circumvented works only to certain > categories of people** Parker#3197: they also explain some of the issues without having law on it (for like motion pictures, etc) nev#4905: fun fact: a torch model can have half its layers loaded on cuda and half on cpu if .cuda() OOMs in the middle StellaAthena#3530: Holy fuck this explains an error I was getting at work that made no sense last week StellaAthena#3530: Thank you nev#4905: np alstroemeria313#1694: ah, like if you catch the OOM exception and try to proceed anyway? nev#4905: yes Kia#2550: Stella hosting something? StellaAthena#3530: It was the easiest way for me to check my camera lol Kia#2550: No nvm Kia#2550: Oww StellaAthena#3530: I'm doing a panel on the hour Kia#2550: Goodluck on that ethan caballero#6044: https://twitter.com/ethancaballero/status/1453834206241505280
Kia#2550: Do We Just Video record every thing we do in VR now? ethan caballero#6044: yes Kia#2550: Ow god,I can tell you're serious:goose10: Dromarion#3383: Get a head start by training it on VRChat Kia#2550: owo alstroemeria313#1694: i don't get vr tbh alstroemeria313#1694: i don't even watch video content Kia#2550: It's just a TV Screen but closer alstroemeria313#1694: I don't watch TV either ^^;; Kia#2550: lol😄 Dromarion#3383: I want to taste food in VR so I don't have to cook but that's about it. mega b#6696: probably some drug can do that Kia#2550: lol Kia#2550: Go eat some wild mushrooms Dromarion#3383: Kind of misses the point to eat food to simulate eating food someKindaBean#8471: there's people studying that someKindaBean#8471: https://cdn.discordapp.com/attachments/729741769738158194/903422484362178600/unknown.png someKindaBean#8471: it's not very advanced in terms of what flavors it can do yet someKindaBean#8471: https://cdn.discordapp.com/attachments/729741769738158194/903422577773514812/unknown.png someKindaBean#8471: thesis on the topic here: https://core.ac.uk/download/pdf/48659289.pdf
Awesome_Ruler_007#7922: adult industry:- Awesome_Ruler_007#7922: https://tenor.com/view/invest-crypto-bitcoin-money-cash-gif-19791047 Kia#2550: Hey if any 05 see this mind actual considering Turning #art to nsfw tag but explicitly saying no Nsfw allowed and Get a person to mod it,I can Volunteer to be one nev#4905: we'd need at least two mods Kia#2550: Yeah one from the west Louis#0144: One channel pls Eddh👽#7290: Hello are there personal projects people are doing in their free times there ? StellaAthena#3530: Yes, literally everything we’ve ever done Awesome_Ruler_007#7922: *When you finally solve the damn bug, while writing up the Github issue* Awesome_Ruler_007#7922: https://tenor.com/view/freedom-free-im-free-finally-at-peace-gif-19153466 Awesome_Ruler_007#7922: *based on a true story Awesome_Ruler_007#7922: ~~no more shitposts I promise~~ BoneAmputee#8363: I wish tenor would clean up their crypto-infested library :\ CRG#8707: https://openai.com/blog/grade-school-math/ circuit10#0158: Could that apply to code somehow? CRG#8707: Paper: https://arxiv.org/abs/2110.14168 Kia#2550: What cfoster0#4356: Someone else posted a question after your message, and did it in multiple channels. No worries Kia#2550: Oww ok ok:thinkies: Deleted User#0000: If a neural network is a universal function approximator , why can't we train a neural network to update the weights instead of backprop? Of course this neural network would have to be trained using backprop, but this is similar to bootstrapping in compilers. Once you have a neural network to update the weights for a particular neural network, we might get more optimal weight assignment after each run. I tried to look at the literature and could not find anything like this. I seem to be missing something. Why doesn't this work?
𓅬 gabriel_syme 𓅬#3220: Would you need a NN with as many neurons as weights to do this? Deleted User#0000: You might even need a bigger NN, but once it is trained for say chess, is it possible it will generalize to Go and so on? inox#5400: https://arxiv.org/abs/1606.04474 nostalgebraist#3542: also related: https://arxiv.org/abs/1611.03824 less related, but hilarious: https://arxiv.org/abs/1909.13371 Deleted User#0000: Thanks folks, nice to see it is already done as I expected. But I'm still somewhat confused as to why it is not applied to bigger problems than the ones in the paper. ilovescience#3282: interestingly, there was a recent paper on a single forward pass of a neural "hypernetwork" to predict the optimal parameters of new neural network archs: https://twitter.com/iScienceLuvr/status/1453536783170424833 Awesome_Ruler_007#7922: authors of paper 2 = :chad: Awesome_Ruler_007#7922: as an aside, how do authors maintain anonymity for peer-review in conferences, if they plaster their Github code link everywhere? 🤔 and aren't there more subtler ways of communicating your real identity with a few.. "clues?" alstroemeria313#1694: i um, i don't know, why don't they make an anon github account for it alstroemeria313#1694: So the reviewers can look at the code without breaking blind review Awesome_Ruler_007#7922: yeah, like in the paper linked above by ilovescience, the first thing in the middle is literally the FAIR github link 🤷‍♂️ alstroemeria313#1694: wow Awesome_Ruler_007#7922: why don't they implement rules to reject papers they *think* have broken them, with evidence as to why the reviewers think so? ...but I guess the system is already too broken to care about lil things like that StellaAthena#3530: Typically the version you submit to arXiv and the version submit to the conference are not the same thing rb#3159: does anyone have a link for this paper https://twitter.com/tianjun_zhang/status/1454220591293104132 ? the link mentioned in the description is that of BeBold. Kharr#7888: I have not seen NeurIPS papers posted yet -- are they out?
rb#3159: he mentioned in the tweet, but the link is that of bebold https://cdn.discordapp.com/attachments/729741769738158194/903998927945232384/Screenshot_from_2021-10-30_18-58-15.png Awesome_Ruler_007#7922: huh, so the authors ensure that there are no such links that may give away their anonymity in the conferences? because if its not a criteria for rejection, I don't see any other incentive StellaAthena#3530: It typically is a criteria for rejection Awesome_Ruler_007#7922: ahh, that explains. thanks! Awesome_Ruler_007#7922: but its not specifically mentioned in the rules though? :thinkies: https://nips.cc/Conferences/2021/PaperInformation/CodeSubmissionPolicy https://neurips.cc/Conferences/2021/PaperInformation/PaperChecklist atleast for NeurIPS Awesome_Ruler_007#7922: The way I interpret it, if FAIR sticks their GH link right in the beginning - that's actually a proactive measure to *stick* to the rules 🤷‍♂️ Kharr#7888: FAIR is a big group, not like you can identify the individual author from the organization, right? This kind of thing happens in all fields and you can identify the institution -- which is often hinted at in some way if the authors think it will give them a better chance due to prestige. You're definitely not supposed to do this but the rule is broken very often. StellaAthena#3530: > If you are submitting your code or data for reviewing, you must **anonymize** it and include it in a single **zip file** along with any additional supplementary material (e.g., appendices). Small datasets can also be included in such zip file (which must be <100MB). Large datasets can instead be linked to via an anonymous URL. Reviewers will be asked to keep any submitted code and data in **strict confidentiality** and use it only for reviewing purposes. The supplementary material deadline is one week after the paper submission deadline. StellaAthena#3530: (Emphasis original) StellaAthena#3530: I feel like you haven’t read the instructions very carefully alstroemeria313#1694: > identify the institution I read an ICLR paper recently that mentioned they trained on TPUv4s 🙃 Orz#3023: oh wait are TPUv4s even out there? Orz#3023: woah alstroemeria313#1694: No. Orz#3023: ohh
Sphinx#2092: Someone should just troll the reviewers and say they used tpuv6 Sphinx#2092: Or rtx 4080 Sphinx#2092: But I dont see the value in reporting exact computing software unless its relevant. nev#4905: the diffusion distillation one? alstroemeria313#1694: Yep alstroemeria313#1694: i managed to replicate their results btw cfoster0#4356: "Gee I wonder who could've written this paper from this small subfield and has access to TPU v4s?" cfoster0#4356: This is very cool. *Excite* nev#4905: tbh they wouldn't train on an rtx 4080 alstroemeria313#1694: "H100" nev#4905: it'd be an N100 nev#4905: hm? alstroemeria313#1694: I think the next arch is Hopper, so H? nev#4905: oh do we know it? alstroemeria313#1694: idk, wikipedia says so nev#4905: lol nev#4905: we don't have L100 yet (for lovelace) alstroemeria313#1694: We have no real details on what it consists of I think though. nev#4905: apparently it will be ad102 alstroemeria313#1694: Ah
nev#4905: source: twitter StellaAthena#3530: It’s important information for people who are interested in analyzing trends in computing research. Kia#2550: Yooo Exciting stuff :hap: StellaAthena#3530: Dope! Any plans on writing up a short report on it? e.g., for the blog? alstroemeria313#1694: at some point maybe alstroemeria313#1694: I want to try it on bigger models and CLIP conditioned models alstroemeria313#1694: I tried it on a class-conditional CIFAR-10 model. alstroemeria313#1694: And it preserved the class conditioning alstroemeria313#1694: So it ought to preserve CLIP conditioning too right. Kia#2550: How long did it taken you alstro? alstroemeria313#1694: like four hours to train and then distill the tiny model Kia#2550: wow Kia#2550: I-:surprise: alstroemeria313#1694: It would be great if it worked for big models with CLIP conditioning the same way and we could get down to models than can generate diverse images from text in 1-4 steps. Kia#2550: True true:thinkies: Awesome_Ruler_007#7922: my bad Awesome_Ruler_007#7922: > if the authors think it will give them a better chance due to prestige yeah, that was my original concern Awesome_Ruler_007#7922: > You're definitely not supposed to do this but the rule is broken very often. hopefully that means rejection :thinkies:
Kharr#7888: There are many ways to do it that are hard to justify rejecting. Self-citation is another common one. Make it clearly obvious that the current paper is a continuation of papers XYZ which happen to be by the author. You can identify the exact author when this happens. It's all a rigged game 😉 nev#4905: what if I really liked the author's research and wanted to make a paper about it bmk#1476: schmidhuber: :guilty: bmk#1476: he cites himself soooo much kurumuz#5695: based alstroemeria313#1694: in papers for double blind review though? bmk#1476: in his papers if you go to the citations there's usually an entire page of cites where he's first author bmk#1476: im not even exaggerating alstroemeria313#1694: Ahah bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/904023726226559026/unknown.png alstroemeria313#1694: Wow kurumuz#5695: jesus christ nev#4905: and that's not mentioning the co-authors? bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/904024344836055060/unknown.png Kharr#7888: That's probably the most extreme case I've seen :berk: There's also a ton of politics around citations -- with rival labs refusing to cite each other even though the work is nearly identical, etc. Publications are a mess. Awesome_Ruler_007#7922: very....'confident' 𓅬 gabriel_syme 𓅬#3220: "we train our models on a subset of JFT300M using 12 TPU v4:2048" blind review energy 𓅬 gabriel_syme 𓅬#3220: I feel one of the benefits of that foundation paper were the citations. I don't think I've seen a new paper that doesn't cite that in the intro. That's some great value there Iacopo Poli#2931: The problem of methods like "learning to learn by gradient descent by gradient descent" is that typically you have to backprop through time and for any reasonable networks it gets super expensive very quickly so you either limit yourself to small nets or do truncation (that destroys the purpose), or both. The recent paper using gnns to predict final weights is a pretty interesting direction. There is also a paper by Google brain that shows that when you metalearn you recover momentum, lr annealing and Adam-like rescalings gollark#3909: Is there a way to use CLIP with very limited CPU memory and a GPU? The official implementation appears to use 1.5GB on the CPU with the `ViT-B/32` model.
gollark#3909: I need it to use less, so it can run on my very RAMless server. nev#4905: it should be possible to load it a few layers at a time and save the activations, but that would be very slow gollark#3909: It seems like it should be possible to keep the model just on the GPU, I mean. alstroemeria313#1694: you can load it a few layers at a time onto the GPU alstroemeria313#1694: if the GPU has enough memory but main memory isn't enough gollark#3909: That is the case, yes. gollark#3909: I suppose it'll probably be an intensely horrible hack, but great! alstroemeria313#1694: people have to do this for like, GPT-J inference on Colab alstroemeria313#1694: Bc the TPU has 64GB of RAM but system memory is a lot less gollark#3909: Swap on TPU *when*? alstroemeria313#1694: eheh alstroemeria313#1694: The TPU VMs have a ton of memory but Colab is pre TPU VM Unjay#9908: Would posting a job offer in the field of generative ML, be considered Advertisement/Spam? Asking for a friend Kharr#7888: Yes. This is not the place for that. Unjay#9908: In that case I'm glad we've clarified this before anything happened asara#0001: While this is true, I wonder if there would be demand for such a channel to be created in this server? BoneAmputee#8363: I think it would be neat asara#0001: definitely can provide a *lot* of utility for some people/employers yeah Kharr#7888: https://discord.gg/rVcRJxPj and https://discord.gg/ma3sC8ar have special channels for posting job ads. We do not. Those communities are bigger and a lot of us are present in all of them.
Solo Wing Pixy#9778: Hello, a friend invited me a long time ago to check out the .imagine command, but now i have actual AI things i want to do Solo Wing Pixy#9778: I wanna use Real-ESRGAN to upscale 4709 frames of a video, but my computer would take 28 hours to do that, i tried using colab w the demo provided in the github but it takes way longer, so i wanted to know if any1 knows any free gpu resources to run Real-ESRGAN Solo Wing Pixy#9778: for those who have no idea what's Real-ESRGAN: https://github.com/xinntao/Real-ESRGAN Unjay#9908: Have you tried the portable version? If my memory serves me right the quality was somewhat off, but the execution was actually faster in my case (most likely very hardware dependent). Not sure what the resolution you need, but ~20s per frame is a lot. Solo Wing Pixy#9778: I have a AMD Radeon R7 260X for reference, not the beefiest gpu Solo Wing Pixy#9778: as for what i'm using, i'm using realesrgan-ncnn-vulkan.exe Solo Wing Pixy#9778: that should be the portable one from what i can see Unjay#9908: Those opportunities are still pretty rare, so thought I would share with people who are actually interested in this area of research. Whether you want a dedicated channel is a different subject, I can understand you don't want one, although I haven't seen a discord channel where this was really a negative (not counting crypto/NFT space, but that's a whole different 'vibe'). I think recruiters have not yet discovered that 'channel' - it's easier for them to spam you on LinkedIn bmk#1476: colab is about the best "free gpu resources" you can get Unjay#9908: yep. OK, then those times might be actually normal for that GPU :thonk: Current free Colab is also not great with K80s I'd just launch it and wait a day. Can't think of any better option than colab Solo Wing Pixy#9778: i mean, i used colab and it took like 40 seconds for frame Solo Wing Pixy#9778: (also, kudos for calculating the seconds of each frame based on what i said) Solo Wing Pixy#9778: so i am using my pc now gollark#3909: It's cool that they have a Vulkan-based version instead of just supporting CUDA only. Solo Wing Pixy#9778: good thing that the script i used lets me start from any frame i want so i can at least shut off my pc for the night and continue other day Solo Wing Pixy#9778: true, if it weren't for that, i would't be able to do it at all Solo Wing Pixy#9778: that or i would have to ask a friend if i could borrow his gpu !!Puffy Bird!!#7496: @bmk Do I have permission to use this discord channel for a nlp experiment?
!!Puffy Bird!!#7496: yall have a pretty decent text corpus considering I'm not using a huge dataset like the common crawl !!Puffy Bird!!#7496: I'm experimenting with data privacy and such !!Puffy Bird!!#7496: **gooses are gonna for sure gonna be a bias lol** cfoster0#4356: No you shouldn't scrape the Discord !!Puffy Bird!!#7496: oh okay !!Puffy Bird!!#7496: was just curious bmk#1476: it's against TOS and also just generally morally questionable bmk#1476: so no please don't !!Puffy Bird!!#7496: I won't asara#0001: feel like it'd kind of be nice to have logs of my own conversations though, Discord forbid we actually own a single part of this $15B platform mkualquiera#3484: actually yeah, don't they have to allow people to download their user data? mkualquiera#3484: which should include chats and messages on public servers too asara#0001: yes, but you only get *your* messages, which is only part (or half) of conversations asara#0001: although they actually *don't* have to allow that except for EU citizens and CA citizens, but most companies end up allowing it to any clients for various reasons asara#0001: but as someone that is used to all my conversations being a simple text file on my own machine that I can use `grep` and others with, it's obviously a huge downgrade to have the opposite here mkualquiera#3484: yeah Some Point Process#3793: As long as I have the right to be forgotten like the blokes in the eu supposedly do.. asara#0001: You don't, unless you're in CA mkualquiera#3484: I will never forget you mkualquiera#3484: :goosegirl4:
Teemochu#8740: And you'll always be by my side EstebanSir#2189: Hello, quick question, what is the context size of gpt neox going to be? Louis#0144: 2048 iirc Louis#0144: I think there was talk a while ago about doubling it Louis#0144: Don't rly remember Louis#0144: 2048 is enough though for most things of value to researchers 𓅬 gabriel_syme 𓅬#3220: is this a nice way of showing this? https://cdn.discordapp.com/attachments/729741769738158194/904703101230022664/a_bathroom_is_adjacent_to_the_living_room_accuracy_heatmap.png 𓅬 gabriel_syme 𓅬#3220: (ignore the fact there is no trend here lol) Louis#0144: I've never seen a diagram like this Louis#0144: lol Louis#0144: It's probably publishable as a short paper if you really wanted Louis#0144: :berk: 𓅬 gabriel_syme 𓅬#3220: yeh I don't think it'l ever go in a paper, just want something I can look at myself and figure things out quickly 𓅬 gabriel_syme 𓅬#3220: esp. since I have 1000 of these lol kurumuz#5695: after training sequence length can be exrended with alibi kurumuz#5695: (if alibi is used that is) Louis#0144: Alibi is black magic Louis#0144: i still don't rly get it Louis#0144: lol NN_47#1886: how does openAI or GitHub serve there model codex or gpt-3 to thousands of people at a time , do they have huge number of copies ?
CRG#8707: Alibi is kind of like a soft version of TrXL caching without the recurrence. / a soft local attention CRG#8707: But without the savings alstroemeria313#1694: oh, it works because the biases on different attention heads are different so this introduces relative position info at each layer mkualquiera#3484: my question is not how they do it but how they do it for free mkualquiera#3484: no idea tbh mkualquiera#3484: probably operating at a huge loss StellaAthena#3530: @mkualquiera OpenAI charges money for GPT-3 mkualquiera#3484: yes but not codex (yet?) mkualquiera#3484: also how does the stuff with Copilot work for example AI_WAIFU#2844: teaser rates mkualquiera#3484: OpenAI I get how it's more viable AI_WAIFU#2844: it's free until you become dependent on it mkualquiera#3484: but not Copilot mkualquiera#3484: do you think they will make Copilot cost money in the future? StellaAthena#3530: IDK, ask them mkualquiera#3484: well yeah that's what I'm saying S3ZINE#2844: Hello S3ZINE#2844: Am a young guy very interested in neural networks but I don't know where to start from. Also am still learning basic coding. Any tutorial that u guys can recommend me? Where shud I start from? I just want to be able to build good neural networks model CRG#8707: Read the FAQ: <https://www.eleuther.ai/faq/>
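To make the ALiBi exchange above concrete: each head gets a fixed slope and the attention logits are penalized linearly with query–key distance, which is what injects relative position at every layer. A minimal sketch, using the slope schedule from the ALiBi paper for power-of-two head counts (everything else is assumed boilerplate):
```python
import torch

def alibi_bias(n_heads, seq_len, device=None):
    # head h gets slope 2 ** (-8 * (h + 1) / n_heads); e.g. 1/2, 1/4, ..., 1/256 for 8 heads
    slopes = torch.tensor(
        [2 ** (-8 * (h + 1) / n_heads) for h in range(n_heads)], device=device
    )
    pos = torch.arange(seq_len, device=device)
    rel = pos[None, :] - pos[:, None]               # (seq, seq): j - i, <= 0 in the causal part
    return slopes[:, None, None] * rel[None, :, :]  # (heads, seq, seq), added to attention logits

# attn_logits = q @ k.transpose(-2, -1) / d_head ** 0.5 + alibi_bias(n_heads, seq_len, q.device)
# followed by the usual causal mask and softmax
```
Because the penalty is just "farther away = weaker", the model tolerates sequence lengths at inference it never saw in training, which is the extension mentioned above.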
S3ZINE#2844: Ha ok thanks Zippy#1111: I have found love in torch DDP / deepspeed / accelerate. Louis#0144: i have never heard someone sing praise of accelerate triggerhappygandi#0001: i have never heard of accelerate Zippy#1111: Oh well to be fair I havn't used it. I tried deepspeed and it was neat for large models, and then I tried torch ddp and I exploded with happiness. I just assumed that accelerate would be the same since it's literally a wrapper around ddp. louis030195#2462: Hi, noob question but, I don't remember what the computation to know how much memory I need to run some model given the number of parameters? 400M parameter model times something float 16? EricHallahan#1051: Rough estimate is parameters\*4 for fp32 and parameters\*2 for fp16. louis030195#2462: thanks! 𓅬 gabriel_syme 𓅬#3220: the first iteration of my models were wild, too tired now but hopefully tomorrow I can compare with second iteration https://cdn.discordapp.com/attachments/729741769738158194/904785196488990760/Boxplot_by_p.png 𓅬 gabriel_syme 𓅬#3220: even that 12% isn't that terrible though, about 270 inferences to get 32 good results (if I'm counting like the top_k of DALLE) 𓅬 gabriel_syme 𓅬#3220: models also seem to matter very little, makes sense since the data was really limited at that point mega b#6696: What library are you using to display that confusion matrix? NN_47#1886: may be it runs on dark energy 🤣 NN_47#1886: Honk the goose and .goose are both same person ? mkualquiera#3484: I'd wish mkualquiera#3484: 🥵 NN_47#1886: 8 tails hmm 😉 mkualquiera#3484: #off-topic NN_47#1886: about the question is there some fast inference magic going on or there are large no of model copies running
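Applying EricHallahan's rule of thumb to louis030195's 400M-parameter example above (weights only; activations, optimizer state, and framework overhead come on top):
```python
params = 400e6
print(f"fp32: {params * 4 / 1e9:.1f} GB, fp16: {params * 2 / 1e9:.1f} GB")
# fp32: 1.6 GB, fp16: 0.8 GB
```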
alstroemeria313#1694: diffusion cifar-10 autoencoder https://cdn.discordapp.com/attachments/729741769738158194/904857038281134140/download_-_2021-11-01T151305.987.png alstroemeria313#1694: latents are 128 dim alstroemeria313#1694: reals on left, reconstructions on right NN_47#1886: what is total number of latents parameters ? alstroemeria313#1694: 128 alstroemeria313#1694: also these are validation set images, they haven't been seen during training alstroemeria313#1694: this was a test to see if i could train an encoder and a diffusion decoder end to end alstroemeria313#1694: it seems to work NN_47#1886: Hmm some one mentioned diffusion being a target area for text generation also NN_47#1886: Also the reconstruction , is noise added to original image ? alstroemeria313#1694: there are two inputs to the diffusion sampling loop, one is a starting noise tensor (Gaussian noise, has no relation to the image to reconstruct) and the other is the latent from the encoder alstroemeria313#1694: which is held fixed during sampling alstroemeria313#1694: (i.e. the encoder is not conditioned on timestep, that would make it way too easy to fit tons of information through the autoencoder bottleneck) NN_47#1886: thanks i understood it somewhat , not too deep know how in this stuff NN_47#1886: do you know the answer to my question above about models , i am curious is my question wrong or like does not make any sense alstroemeria313#1694: i don't understand it alstroemeria313#1694: oh alstroemeria313#1694: yeah they probably run a ton of them cfoster0#4356: I've been doing the same thing today :guilty: alstroemeria313#1694: eheh~
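A minimal sketch of the end-to-end step described above: the encoder sees only the clean image, its latent conditions a v-objective diffusion decoder, and a single MSE trains both. The cosine alpha/sigma schedule and the module interfaces are assumptions for illustration:
```python
import math
import torch
import torch.nn.functional as F

def train_step(encoder, diffusion_model, opt, reals):
    latent = encoder(reals)                        # e.g. (batch, 128); no timestep conditioning

    t = torch.rand(reals.shape[0], device=reals.device)
    alpha = torch.cos(t * math.pi / 2)[:, None, None, None]
    sigma = torch.sin(t * math.pi / 2)[:, None, None, None]
    noise = torch.randn_like(reals)
    noised = alpha * reals + sigma * noise
    v_target = alpha * noise - sigma * reals       # v objective

    v_pred = diffusion_model(noised, t, latent)    # decoder conditioned on the latent
    loss = F.mse_loss(v_pred, v_target)            # gradients reach the encoder through `latent`

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```
Here `opt` is assumed to hold both the encoder's and the decoder's parameters; at sampling time the latent is computed once and held fixed while the usual DDIM/DDPM loop runs from pure Gaussian noise.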
alstroemeria313#1694: how's it going? cfoster0#4356: Similar. Built off of your Colab. My encoder is kinda like NeST cfoster0#4356: It definitely uses the latents, when they're continuous cfoster0#4356: Tried VQing it but my first attempt, it was just learning to ignore the vectors NN_47#1886: thanks a bunch , it really got me thinking the cool behind the scenes engineering and insights one can play with , similar to you guys discuss and play with in this discord. alstroemeria313#1694: mine is a normal convolutional encoder alstroemeria313#1694: wow that's weird alstroemeria313#1694: you were backpropagating through the quantization right cfoster0#4356: Yeah alstroemeria313#1694: huh cfoster0#4356: Surely there was something else I got wrong NN_47#1886: what are the benefits of the way you used diffusion decoder , is it more capable then normal decoder ? alstroemeria313#1694: i have always had to do a ton of tricks in the past to make autoencoders work well alstroemeria313#1694: like carefully tuned perceptual and adversarial losses alstroemeria313#1694: Whereas the diffusion decoder just seems to work well without tricks NN_47#1886: that is quite interesting NN_47#1886: I have been given the task of value addition in VQGan paper , will definitely have to talk with you 😉 NN_47#1886: may be diffusion decoder will fit the bill haha cfoster0#4356: How would one calculate this? cfoster0#4356: Like in theory, with continuous latents, it can fit up to d * precision bits through the bottleneck, right?
𓅬 gabriel_syme 𓅬#3220: that is plotnine, ggplot for python alstroemeria313#1694: @cfoster0 gonna try a diffusion discrete VAE now alstroemeria313#1694: yeah alstroemeria313#1694: Per timestep. alstroemeria313#1694: If you condition the encoder on timestep. alstroemeria313#1694: But I do the sensible thing and encode it the same way for all timesteps cfoster0#4356: Right right. I should probably look at how small CIFAR10 images compress with JPEG or w/e, as a comparison cfoster0#4356: I was worried that the continuous latents were passing too much info, even if the encoder wasn't conditioning on the timestep alstroemeria313#1694: @cfoster0 so a 4x4 grid of latents and 1024 codes alstroemeria313#1694: so 160 bits alstroemeria313#1694: per image alstroemeria313#1694: f=8 alstroemeria313#1694: this is gonna take a while to train fully bc i am using gumbel quantization and have to anneal tau cfoster0#4356: Do you recommend any other ticks like EMA updates? alstroemeria313#1694: yes, i do EMA on both decoder and encoder alstroemeria313#1694: It is good on the decoder bc it is diffusion and I do it on the encoder too to keep it in sync with the actual decoder used in inference. alstroemeria313#1694: @cfoster0 50 epochs dVAE training https://cdn.discordapp.com/attachments/729741769738158194/904884757349924914/demo_00050-18.png alstroemeria313#1694: tau=0.63581 at 50 epochs alstroemeria313#1694: So the decoder still expects relative soft one-hots and I fed it hard one-hots to make the demo grid. alstroemeria313#1694: i.e. it will get better
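A minimal sketch of the Gumbel quantization setup being described (the bottleneck above works out to 16 positions × log2(1024) = 160 bits per image). The module layout and annealing schedule are assumptions, not the DALL-E or taming-transformers code:
```python
import torch
from torch import nn
import torch.nn.functional as F

class GumbelQuantizer(nn.Module):
    def __init__(self, n_codes=1024, embed_dim=256):
        super().__init__()
        # bias-free map from (soft) one-hots to the decoder's latent space
        self.embed = nn.Linear(n_codes, embed_dim, bias=False)

    def forward(self, logits, tau, hard=False):
        # logits: (batch, n_codes, h, w) from the encoder
        one_hot = F.gumbel_softmax(logits, tau=tau, hard=hard, dim=1)
        out = self.embed(one_hot.movedim(1, -1))   # (batch, h, w, embed_dim)
        return out.movedim(-1, 1)                  # back to channels-first for a conv decoder

# training uses soft samples (hard=False) with tau annealed toward a small value,
# e.g. tau = max(tau_final, tau_start * anneal_rate ** step);
# for demo grids / inference you pass hard=True (hard one-hots), as described above.
```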
alstroemeria313#1694: i've thought of making a vqgan uncorrupter before :) cfoster0#4356: Ah I meant the weird EMA thing folks do for VQVAEs. I think it's separate from regular EMA weights alstroemeria313#1694: Ohh alstroemeria313#1694: idk, i think Gumbel quantization supersedes it alstroemeria313#1694: But it would be nice not to need Gumbel bc you have to anneal the temperature to use it cfoster0#4356: Word. I'll try out Gumbel Dvae later then alstroemeria313#1694: Like this is just going to work isn't it. alstroemeria313#1694: With a normal diffusion reconstruction loss. alstroemeria313#1694: And not have any of the OpenAI discrete VAE's blurriness alstroemeria313#1694: Without using VQGAN's combination of perceptual and adversarial losses. nshepperd#2316: oooh cfoster0#4356: Neat that this works. Should open some doors alstroemeria313#1694: @cfoster0 350 epochs, it got a NaN after this :/ https://cdn.discordapp.com/attachments/729741769738158194/904948195833024582/demo_00350-4.png cfoster0#4356: Oh hmm I wonder what from alstroemeria313#1694: idk, but maybe i shouldn't be doing Gumbel quantization in fp16 AI_WAIFU#2844: EMA? bmk#1476: :small_brain: EMA updates :galaxy_brain: ichimoku cloud updates cfoster0#4356: Appendix A.1 of the original paper m_wAL99#1923: https://github.com/google-research/scenic
m_wAL99#1923: https://arxiv.org/abs/2110.05208 https://cdn.discordapp.com/attachments/729741769738158194/905020921046245436/K5G9U8O5__ODKNUVU6I2.png m_wAL99#1923: > google 59M web crawled > source : https://www.google.com.hk/ :thinkies: drjenkins#7380: Hey guys, I'm David, an NLP engineer. Have a solid Python knowledge and experience in building AI applicaitons with pytorch and transformers. Have little experience with GPT-3 itself. I would like to contribute to GPT-J project, but do not know where to start. Maybe I can write some "GPT-3 like prompts" to illustrate the possible usage of this awesome model? Awesome_Ruler_007#7922: quick question - if I use half-precision in a card without tensor cores, I won't speed up computations but I still save on VRAM - right? Daj#7482: Hey David! The GPT-J project is basically "complete" afaik, I'm not sure there is really anything there to help with. @kindiana any comment? alstroemeria313#1694: i think it can still speed it up some from using less memory bandwidth. chinesesoup#6725: I'm not sure where to ask. Is there some way to query language constructs that the language model extracted? For example how the vectors end up calculating king + woman ~= queen Awesome_Ruler_007#7922: a bit sped-up, but the only difference should be computation/iteration speed, eh? Awesome_Ruler_007#7922: that's the classic word2vec examples alstroemeria313#1694: i think Awesome_Ruler_007#7922: thanks! alstroemeria313#1694: why loss go up https://cdn.discordapp.com/attachments/729741769738158194/905150568161501204/Screen_Shot_2021-11-02_at_10.45.04_AM.png alstroemeria313#1694: Umm how do you even resume from a checkpoint in PyTorch Lightning with a changed learning rate alstroemeria313#1694: Like to turn lr down. alstroemeria313#1694: Is it something like "the gradient is so small that the Adam second moment eventually decays and takes too large steps" Kharr#7888: Are you also checkpointing your optimizer? Might need to also adjust the LR schedule upon resuming (if you are using one) alstroemeria313#1694: yes and yes alstroemeria313#1694: That was with the unaltered lr. I am not using a schedule. alstroemeria313#1694: So now I am altering the lr in the checkpoint manually to reduce it
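On resuming with a lower lr: once you have a handle on the optimizer object (how you get it out of Lightning varies by version, e.g. `trainer.optimizers` or `self.optimizers()` inside the LightningModule), the change itself is the standard PyTorch idiom, which avoids editing the checkpoint by hand since `load_state_dict` restores the old lr into `param_groups`:
```python
new_lr = 1e-5  # whatever the reduced rate should be
for group in optimizer.param_groups:
    group["lr"] = new_lr
```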
alstroemeria313#1694: As well as changing it in the code. alstroemeria313#1694: Which is annoying and I wish there were an easier way to get at the optimizer inside the code. alstroemeria313#1694: So I could alter it programmatically in the code instead of doing it manually in ipython. Kharr#7888: I can't really comment without understanding your full setup, but you can definitely do whatever you want to the optimizer in code if you have access to the optimizer itself. I'm not familiar with Lightning :blobsad: alstroemeria313#1694: idk how to get at it yet alstroemeria313#1694: In Lightning you have a method to make the optimizer and return it alstroemeria313#1694: It goes somewhere after that MicPie#9427: This setup in plain PyTorch works for me to restore an optimizer state: ``` # save checkpoint = optimizer.state_dict() torch.save(checkpoint, "optimizer_state.pt") # load checkpoint = torch.load("optimizer_state.pt") optimizer = ... # recreate optimizer optimizer.load_state_dict(checkpoint) # move optimizer to the correct device if necessary # https://github.com/pytorch/pytorch/issues/2830#issuecomment-336194949
for state in optimizer.state.values(): for k, v in state.items(): if isinstance(v, torch.Tensor): state[k] = v.to(device, non_blocking=True) ``` A hacky solution could be to setup a wrapper class around your optimizer that reloads the state when you create it to easily verify if that is the problem in PyTorch Lightning? alstroemeria313#1694: the problem is that it loads the checkpoint in outside of my control. alstroemeria313#1694: And the checkpoint contains the old lr. alstroemeria313#1694: And I have to alter the lr afterwards by reaching into its internal state. alstroemeria313#1694: 53 epochs (at 200 the temperature annealing will finish) https://cdn.discordapp.com/attachments/729741769738158194/905243798802669608/demo_00053.png alstroemeria313#1694: look at those hallucinated details alstroemeria313#1694: It seemed to have turned grilled chicken into red meat alstroemeria313#1694: People in a b/w photo to people in a color photo nshepperd#2316: turned a round mirror into a square mirror, hehe alstroemeria313#1694: Yep alstroemeria313#1694: These are interesting errors, aren't they? alstroemeria313#1694: Like it's replacing things with semantically close things? alstroemeria313#1694: It must have an interesting latent space nshepperd#2316: this is really cool bc it means the encoder is doing some sort of semantic classification nshepperd#2316: interpolations in the latent will be fun
alstroemeria313#1694: mm alstroemeria313#1694: There are two latent spaces I guess alstroemeria313#1694: This is a DALL-E VAE like design where the encoder outputs logits and the decoder is fed either soft or hard one-hots alstroemeria313#1694: And the decoder transforms it into a latent space internally alstroemeria313#1694: I could expose that part though if we wanted alstroemeria313#1694: Or just interpolate in logit space idk nshepperd#2316: ah, yeah alstroemeria313#1694: I left the bias off the layer that goes from logits to the decoder's latent space alstroemeria313#1694: So it's conceptually simpler than the DALL-E VAE which had a bias on that layer. nshepperd#2316: could interpolate the one-hots, that's equivalent to interpolating the latent after the first linear alstroemeria313#1694: (I left it off due to experience from taking apart OpenAI's VAE and messing around w/ its internal latent space lol) alstroemeria313#1694: (It makes that sort of hacking easier) alstroemeria313#1694: ohh alstroemeria313#1694: you know what? alstroemeria313#1694: This design isn't going to have the "tiled texture" effect VQGAN can have, will it nshepperd#2316: yeah! alstroemeria313#1694: It won't tile bc of the Gaussian starting noise used in diffusion. alstroemeria313#1694: A grid of latents doesn't correspond to a single output, it corresponds to a condition used to pick the distribution the outputs are sampled from. alstroemeria313#1694: (The tiling with VQGAN happens when you sample grids of latent codes using a second stage transformer and thus produce grid patterns that were not in-distribution during training and thus the adversarial loss couldn't detect and remove the pattern.) alstroemeria313#1694: (I think.)
nshepperd#2316: ahh nshepperd#2316: yep nshepperd#2316: i think if you give a vqgan a patch of the same token repeating, it necessarily tiles bc it's convolutional and has no way to break symmetry? nshepperd#2316: this won't because the noise input does that alstroemeria313#1694: Ahh yeah sadat#1694: hi there, new here, does anyone happen to know what happened to https://discord.sg/ai ? i found it in #communities and it used to have a collection of AI-related discord servers but the domain is on sale rn; if anyone still has the list or a copy of the website that would be nice bmk#1476: https://discord.gg/Kg7nXqDg bmk#1476: ~~clearly eleuther is the better server tho~~ sadat#1694: aha, thanks! bmk#1476: I think we just need another 5k members and then we'll be the biggest AI research discord out there Ajay sahu#2540: https://pnw.ai/article/question-answering-ai-macaw-outperforms-gpt-3-by-more-than-10-percent/69436699 StellaAthena#3530: Reading this press release you’d never know it was the third model to make basically the same claims with better evidence and broader applicability in the past month… Ajay sahu#2540: Yes, true, they have open-sourced it, so we can try it out and look for improvements and further research. There's another model by the BigScience team; they have also claimed that the results are better than GPT-3 while being 10 times or more smaller StellaAthena#3530: *\*shrug\** if it’s actually any good there’ll be a paper doing real comparisons against T0, GPT-3, Jurassic-1, etc. StellaAthena#3530: Saying much about it before then is really guessing more than anything else. Ajay sahu#2540: I mean, in terms of the scope for smaller models to give better results, it's possible to get much better results even while the models stay small. Ajay sahu#2540: Yup StellaAthena#3530: I’m not contesting this, just pointing out that the claims are rather weak all things considered.
Ajay sahu#2540: Yes... But keeping the claims aside, it gives more room for research. It depends on how they are looking at the results and how people generally perceive it faraday#0862: hey guys, I've seen news on Projected GANs yesterday that they are performing a whole lot better than classical approaches. Does anyone have more information on this? What's done differently with Projected GANs? What is a projected GAN, indeed? here: https://twitter.com/arankomatsuzaki/status/1455349778712301573 faraday#0862: going from 5 days to less than 3 hours is definitely interesting. did anyone get their hands on it? nshepperd#2316: i think it's just a GAN but they input perceptual features into the discriminator nshepperd#2316: like using a pretrained CLIP faraday#0862: pokemons from the paper: http://www.cvlibs.net/publications/Sauer2021NEURIPS.pdf https://cdn.discordapp.com/attachments/729741769738158194/905350279271165952/pokemons_projected_gan.png faraday#0862: I didn't know we could have a Pokemon for broccoli :berk: faraday#0862: it fires healthy sprouts to the face of its opponent SecondMover#8029: Top row are just real pokemon and bottom row are generated nshepperd#2316: @alstroemeria313 how did your encoder for conditioning a diffusion on a 512-dim clip embedding work? i think i want to use it for this style diffusion idea? nshepperd#2316: it was like an embedding dependent shift and scale or sth...? alstroemeria313#1694: yep! alstroemeria313#1694: https://gist.github.com/crowsonkb/387a79ddf06e5ac62695d80890f90224 alstroemeria313#1694: though now that i think about it, the layernorms right before the modulation2d layers should not have trainable parameters bc the scales and shifts in the modulation2d make them redundant nshepperd#2316: ahhh ty alstroemeria313#1694: the JAX version of this was kind of cleaner and didn't involve sticking the global condition in a state dict nshepperd#2316: so you apply a linear without bias to the features, to get a scale and shift, which is then applied before the relu nshepperd#2316: and do this with an *independent* linear for each res block alstroemeria313#1694: there are a few possible variants on it alstroemeria313#1694: before conv, after conv but before relu, after both, etc
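A minimal sketch of the conditioning scheme being discussed, in the "scale/shift applied before the ReLU" variant. The names, the norm placement, and the `(scale + 1)` parameterization are illustrative assumptions, not necessarily what the linked gist does:

```python
import torch
from torch import nn
import torch.nn.functional as F

class ModulatedBlock(nn.Module):
    """Conv block whose features are scaled/shifted by a conditioning vector."""

    def __init__(self, c_in, c_out, cond_dim=512):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)
        # No affine params on the norm: the modulation makes them redundant.
        self.norm = nn.GroupNorm(1, c_out, affine=False)
        # Independent per-block linear (no bias) mapping the 512-dim
        # embedding to a per-channel scale and shift.
        self.to_scale_shift = nn.Linear(cond_dim, c_out * 2, bias=False)

    def forward(self, x, cond):
        # x: [N, C_in, H, W], cond: [N, cond_dim]
        x = self.norm(self.conv(x))
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=1)
        # (scale + 1) so a zero conditioning output is an identity modulation.
        x = x * (scale[..., None, None] + 1) + shift[..., None, None]
        return F.relu(x)
```

Whether the modulation goes before the conv, after it but before the ReLU, or after both is exactly the set of variants mentioned above; this sketch picks the before-ReLU placement so the condition decides which channels get zeroed.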
alstroemeria313#1694: it was different between SG1 and SG2 too alstroemeria313#1694: and independent, yeah. nshepperd#2316: so that it can use all 512 dims even if the model has fewer channels nshepperd#2316: i will probably use the state dict since i am doing this in a pytorchy way eheh alstroemeria313#1694: and so i don't have to concat all 512 dims to the input alstroemeria313#1694: and make everything have that many channels alstroemeria313#1694: or concat it later and then *don't* condition the early layers. nshepperd#2316: *nods* nshepperd#2316: preparing data for wikiart training~ alstroemeria313#1694: yay~ alstroemeria313#1694: ohh, is it going to be the multi-style net? nshepperd#2316: yep :) alstroemeria313#1694: how are you going to deal with jpeg artifacts nshepperd#2316: mm, currently hoping that scaling down to 512x512 will partially eliminate them alstroemeria313#1694: oh nshepperd#2316: and removing all images smaller than that alstroemeria313#1694: why do we not have a jpeg repairer alstroemeria313#1694: like, a... alstroemeria313#1694: even a non DL one alstroemeria313#1694: like the hard part about the recompress-at-all-offsets repairer is actually just determining the right compression level to apply, right?
alstroemeria313#1694: (it's allowed to vary per macroblock, right?) alstroemeria313#1694: wait, that was added later wasn't it nshepperd#2316: i don't know ;; alstroemeria313#1694: do i have to write this myself nshepperd#2316: apparently you can extract the compression level that was used, at least from wikiart images alstroemeria313#1694: like region of interest was added in jpeg 2000, the original jpeg format couldn't do it, if you can vary the compression level per 16x16 block that would let you do region of interest alstroemeria313#1694: i think i remember the implementation of the filter for mpeg-4 actually. alstroemeria313#1694: which did let you vary the compression level per block. alstroemeria313#1694: oh so alstroemeria313#1694: you can just get the quantization tables alstroemeria313#1694: from Pillow alstroemeria313#1694: as `image.quantization` nshepperd#2316: oh and just use them to... re-quantize at all offsets? alstroemeria313#1694: yeah alstroemeria313#1694: i think the quantization tables and the chroma subsampling are all you need alstroemeria313#1694: We could actually do it on CPU in Pillow alstroemeria313#1694: And just do the final mean in more than 8 bit precision. alstroemeria313#1694: But we could also GPU accelerate it. nshepperd#2316: eheh. do it in pytorch? alstroemeria313#1694: like <https://github.com/python-pillow/Pillow/blob/40e7ff622669550733b26f14dc817fb72e096250/src/PIL/JpegPresets.py>, a JPEG preset consists of a subsampling type and two quantization tables, one for luma one for chroma
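A minimal sketch of inspecting a JPEG's quantization tables with Pillow and re-encoding at the same quality, which is the building block for the recompress-at-all-offsets idea; the file names are placeholders, and `quality="keep"` is a Pillow JPEG save option that, as far as I recall, preserves the original tables and subsampling when the source image is itself a JPEG:

```python
from PIL import Image

im = Image.open("input.jpg")

# The tables the original encoder used, keyed by table id
# (typically one for luma, one for chroma), each a list of 64 coefficients.
print(im.quantization)

# Re-encode with the same quantization settings. Shifting the image by a
# few pixels before re-encoding and averaging the aligned results is the
# repair idea discussed above.
im.save("recompressed.jpg", quality="keep")
```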
alstroemeria313#1694: well with nvidia's gpu accelerated 8x8 dct op alstroemeria313#1694: that they made specifically for jpeg on gpu nshepperd#2316: ohh CRG#8707: Even if they're redundant it's not clear that removing them would be better. The layernorm shift is redundant with the following layer bias. 𓅬 gabriel_syme 𓅬#3220: For those into NeRFs and inverse rendering and such this looks like a great opportunity (new lab at MIT) https://gradapply.mit.edu/eecs/apply/login/ 𓅬 gabriel_syme 𓅬#3220: If I was young I'd definitely try to get rejected from that alstroemeria313#1694: it isn't? alstroemeria313#1694: firstly the shifted things get run through a matmul before encountering the next layer bias alstroemeria313#1694: and secondly in my configuration the shift is applied pre-relu alstroemeria313#1694: so it determines what gets zeroed alstroemeria313#1694: also yes, looking at this i also forgot that the conv2d layer biases get removed by the layernorm afterward so i should just not use them alstroemeria313#1694: well, sort of removed alstroemeria313#1694: i guess not really for a layernorm alstroemeria313#1694: they did remove the conv layer biases usually when using batchnorm bc there it really did cancel exactly CRG#8707: Yeah it's only technically redundant, it's kind of like having a higher lr alstroemeria313#1694: but since i put the shifts before the relu the condition can determine which channels are active and which are inactive nshepperd#2316: huh layernorm adds an affine shift and scale with the same shape as the last D dimensions? nshepperd#2316: does this add global position info CRG#8707: If the last dim is channels it shouldn't
alstroemeria313#1694: this is why my code uses groupnorm with 1 group alstroemeria313#1694: That is equivalent to layernorm but for NCHW feature maps. alstroemeria313#1694: And the affine parameters are per channel. nshepperd#2316: ahh nshepperd#2316: `/dev/root 97G 92G 4.8G 96% /` oh no, my tpu 𓅬 gabriel_syme 𓅬#3220: ye space is really tough 𓅬 gabriel_syme 𓅬#3220: you can try cleaning up the temp folders perhaps nshepperd#2316: too many checkpoints ^^;; nshepperd#2316: ...i wonder if i should be using dropout with this style thing 𓅬 gabriel_syme 𓅬#3220: is this interesting at all? https://github.com/dropreg/R-Drop CRG#8707: There's been some discussion already: https://discord.com/channels/729741769192767510/747850033994662000/894309776199151616 nshepperd#2316: @alstroemeria313 it's this, right ```py def get_cosine_alphas_sigmas(t): return jnp.cos(t * math.pi/2), jnp.sin(t * math.pi/2) ``` the cosine schedule alstroemeria313#1694: yes nshepperd#2316: ty ^_^ nshepperd#2316: @alstroemeria313 huh, we could also eliminate the encoder for multi-style by just conditioning on a *different* random crop... right?
alstroemeria313#1694: oh you could... but what if you wanted to use a style image that was a different resolution nshepperd#2316: oh true nshepperd#2316: you'd need to be able to make a crop of the style that was as big as your image nshepperd#2316: or like tile it or something. that would be bad nshepperd#2316: multi-style diffusion is running~ nshepperd#2316: first demo grid in 5 minutes nshepperd#2316: one epoch https://cdn.discordapp.com/attachments/729741769738158194/905459324627857468/demo.png nshepperd#2316: reals on top, style samples below nshepperd#2316: pretty bad, probably needs a lot of epochs nshepperd#2316: but it is sort of replicating the average color at least StellaAthena#3530: Is it? StellaAthena#3530: That's not obvious to me StellaAthena#3530: I would just give it some time alstroemeria313#1694: what is the decoder conditioned on? nshepperd#2316: a 512-dim latent generated by the encoder w/ global average pooling as you suggested alstroemeria313#1694: my diffusion discrete VAE, 125 epochs in https://cdn.discordapp.com/attachments/729741769738158194/905463956997558343/demo_00125-2.png alstroemeria313#1694: ahhh nshepperd#2316: encoder sees the whole image (512x512) nshepperd#2316: but has a receptive field of like 68 or sth alstroemeria313#1694: ahh
nshepperd#2316: I love seeing what this does to the food items ^_^ nshepperd#2316: turned the salad into... a casserole? and the berries into gelato alstroemeria313#1694: it never gets the berries alstroemeria313#1694: And it's a different food item each demo grid lol nshepperd#2316: eheh Kharr#7888: Lobster claw? https://cdn.discordapp.com/attachments/729741769738158194/905464805501075536/unknown.png alstroemeria313#1694: eheh nshepperd#2316: it added an en suite to the bedroom alstroemeria313#1694: This is a smaller model, I wonder if I could get it better if I used 8192 codes and an architecture that is a similar size to the OpenAI discrete VAE. nshepperd#2316: the b/w photo is starting to stay black and white though? alstroemeria313#1694: Yep alstroemeria313#1694: well, a similar sized encoder, the decoder needs to be 2x as big because it is a U-Net alstroemeria313#1694: maybe a KL div loss term too nshepperd#2316: oh yeah, my encoder is the same arch as the unet. just without the upscaling half alstroemeria313#1694: ahh alstroemeria313#1694: i am generating these demo grids by argmaxing the encoder logits alstroemeria313#1694: so most of the previous grids have been generated w/ out of distribution hard one-hots alstroemeria313#1694: however as the temperature gets lower the decoder learns to deal with things increasingly like hard one-hots and the demo grids get better nshepperd#2316: ahh alstroemeria313#1694: i guess i should have added a very weak KL loss too to stop the encoder logits from running off to inf and -inf
alstroemeria313#1694: The Gumbel VQGAN does this alstroemeria313#1694: They don't warm it up, you can't use it at a high weight early on alstroemeria313#1694: So they just start it low and keep it low. alstroemeria313#1694: I think the reason posterior collapse happens from a too high KL weight early on is that the KL loss tries to drive the output logits to all be the same alstroemeria313#1694: And the encoder/decoder pair has no idea about the statistics of images early on but the KL loss is easily satisfiable still. alstroemeria313#1694: By just learning to output the same value for every input. alstroemeria313#1694: At this point the output does not depend on the input and gradient descent cannot fix it. alstroemeria313#1694: It is a bad local minimum. nshepperd#2316: ahh nshepperd#2316: the decoder learns to ignore the tokens alstroemeria313#1694: The encoder does alstroemeria313#1694: Um, it learns to ignore the input and produce logits for uniform distributions as output. nshepperd#2316: which zeros out any gradients that could be used to get out of that minimum? nshepperd#2316: like, bc the tokens are uninformative when they are all random alstroemeria313#1694: Then the encoder output doesn't depend on its input alstroemeria313#1694: I think it's an encoder specific problem alstroemeria313#1694: At this point the decoder will learn to ignore the tokens, yeah, but it's already too late by then? alstroemeria313#1694: A diffusion decoder will still have the noise input and so will turn into an unconditional diffusion model at that point I think? alstroemeria313#1694: I have had this happen to GAN discriminators a few times nshepperd#2316: well it's a vicious circle right, bc once the decoder is ignoring the tokens the gradient of the loss wrt the logits will be 0
alstroemeria313#1694: Like, once they learned to output p(real) = 50% for everything more training would break them and they would then not be fixable alstroemeria313#1694: Like the generator would start changing its distribution to be easily told apart from the reals (I had other losses on it that made it do this) and D wouldn't pick up on it at all and would just output the same 50%. nshepperd#2316: um, what if you put a kl loss on the mean of the probs over the sequence (and possibly batch) nshepperd#2316: instead of averaging the kl loss over the probs alstroemeria313#1694: wdym? nev#4905: oh! how many pictures? alstroemeria313#1694: bc somewhere inside D (probably in the weights of the last layer) gradients were zero for all prior layers alstroemeria313#1694: ms coco nev#4905: 200-300k? nev#4905: hm nshepperd#2316: i mean like. the normal way is something like kl_loss(logits).mean(), right alstroemeria313#1694: 118287 nev#4905: oh nev#4905: I have about the same number for vqgan alstroemeria313#1694: sum over the token indices, mean over the batch nshepperd#2316: ah right nshepperd#2316: kl_loss(logits).sum(1).mean() nshepperd#2316: which tries to force the logits all along the sequence to be uniform alstroemeria313#1694: yes nshepperd#2316: but you could also. kl_loss(softmax(logits).mean(1).log()).mean()
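A minimal sketch of that alternative regularizer, assuming encoder logits shaped [N, K, positions] (or [N, K, H, W]); it penalizes only the *marginal* code usage, so individual positions can stay confident while overall codebook usage is pushed toward uniform:

```python
import torch

def marginal_kl_loss(logits, eps=1e-8):
    # logits: [N, K, ...]; average the code probabilities over all
    # positions first, then take KL(marginal || uniform).
    probs = logits.softmax(dim=1).flatten(2).mean(dim=-1)  # [N, K]
    k = probs.shape[1]
    return torch.sum(probs * torch.log(probs * k + eps), dim=1).mean()
```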
nev#4905: @alstroemeria313 for the danbooru2020 vqgan, do you think using 3m preprocessed 512x512 images would be better than 100k full-res images with taming's crop augmentations? nshepperd#2316: which tries to force the marginal distribution of tokens to be uniform alstroemeria313#1694: i think i did this ```python @staticmethod def kl_div_loss(logits): probs = logits.softmax(1) return torch.sum(probs * torch.log(probs * probs.shape[1] + 1e-8), dim=1).mean() ``` alstroemeria313#1694: depends on preprocessing type nshepperd#2316: over the sequence. but still allowing individual logits to vary alstroemeria313#1694: if you are saving as jpeg then use super high jpeg quality nev#4905: it's just static crops for each picture nev#4905: hmm they're PNGs alstroemeria313#1694: ahh alstroemeria313#1694: yeah might work nev#4905: ok nev#4905: at least, when I'm doing the run alstroemeria313#1694: i don't get it ^^;; alstroemeria313#1694: wait nshepperd#2316: like you average the probs over the sequence
nshepperd#2316: *then* kl loss that alstroemeria313#1694: ohh alstroemeria313#1694: that's really weird ^^;; alstroemeria313#1694: it's LDM-like alstroemeria313#1694: which did KL vs Gaussian of the empirical means and variances of each element of an autoencoder's latent, the means and variances were taken over the batch dimension nshepperd#2316: eheh~ alstroemeria313#1694: but in practice the encoder would just learn to encode information in the correlations between the latent's elements alstroemeria313#1694: bc this could not pick up on that. nshepperd#2316: ahh alstroemeria313#1694: And so you would not actually be able to sample latents as N(0, I) and feed them to the decoder and get samples that looked like reconstructions alstroemeria313#1694: It was like a way of doing a VAE without really doing a VAE nshepperd#2316: that is probably okay in this case though, bc we are training a transformer to model the tokens? so correlations are expected nshepperd#2316: like if we want to generate with the dvae alstroemeria313#1694: Also we are still Gumbel quantizing. alstroemeria313#1694: LDM did not add noise to the encoder outputs alstroemeria313#1694: So all the correlations stayed in alstroemeria313#1694: A real VAE avoids this problem by sampling from the distributions output by the encoder independently alstroemeria313#1694: And the discrete VAE analogue of this is quantization/Gumbel noise addition nshepperd#2316: right nshepperd#2316: something seems broken :/ https://cdn.discordapp.com/attachments/729741769738158194/905475565153321020/demo.png
alstroemeria313#1694: oh no :/ nshepperd#2316: trying again with the encoder outputs normalized nshepperd#2316: idk if it actually has anything to do with the problem but it is learning faster at least to begin with alstroemeria313#1694: the scaling/normalizing i did in my code was specialized to CLIP embeddings alstroemeria313#1694: i mean in mscoco_2 alstroemeria313#1694: and in fact i left it out of the autoencoders nshepperd#2316: yeah.. not sure if i should actually normalize the encoder. but for now i'm normalizing it to std=1 nshepperd#2316: 4 epochs, same as above. this looks better https://cdn.discordapp.com/attachments/729741769738158194/905487724662911026/style.png nshepperd#2316: still a jumbled mess but it doesn't look deep fried any more nshepperd#2316: 8 epochs https://cdn.discordapp.com/attachments/729741769738158194/905502506166132766/style8.png nshepperd#2316: gonna sleep now~ Some Point Process#3793: > The team at AI2 fed this well-known riddle to Macaw: A young boy was rushed to the hospital emergency room, but the ER doctor saw the boy and refused to operate. “This boy is my son,” the doctor said. But the doctor wasn’t the boy’s father. How could this be?
> The conventional answer to this is, of course, the doctor was the boy’s mother. Macaw, however, answered: He mistook the boy for his own son.
I came up with Macaw’s answer too. But it seems ambiguous what the right answer is.
Some Point Process#3793: I never really thought much about this riddle (and the supposed right answer) whenever it was mentioned so I forgot what the answer was alstroemeria313#1694: "The person in the riddle was just wrong/lying!" is generally not the correct answer to a riddle StellaAthena#3530: Actually, the child died a decade ago and the ER doctor was having a breakdown bmk#1476: actually ER stands for Endoplasmic reticulum here StellaAthena#3530: Actually, there was a second boy in the room and the instance of the words “the boy’s father” in the riddle was a reference to that other boy, not the one on the table. Louis#0144: Endogoose Reticulum Some Point Process#3793: Why not? alstroemeria313#1694: ...I had an idea alstroemeria313#1694: Train a diffusion discrete autoencoder, but it encodes to *one token*. With like 10 possible tokens. Train it on a dataset where you know you have 10 classes but don't feed the class labels in. See if the 10 possible tokens line up with the 10 classes. Diffusion *clustering*. bmk#1476: Endofunctor Representation alstroemeria313#1694: trying this now alstroemeria313#1694: first demo grid (5 epochs in) https://cdn.discordapp.com/attachments/729741769738158194/905550880491855872/demo_00005-9.png alstroemeria313#1694: 10 epochs. https://cdn.discordapp.com/attachments/729741769738158194/905551291315535892/demo_00010-5.png alstroemeria313#1694: 15 epochs. https://cdn.discordapp.com/attachments/729741769738158194/905551759722811502/demo_00015-5.png bmk#1476: is each row one different token? alstroemeria313#1694: yes.
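A minimal sketch of the conditioning side of the diffusion-clustering setup being trained here; `encoder` is a hypothetical stand-in for any network mapping images to K logits, and only the Gumbel-softmax step is the actual mechanism described above:

```python
import torch
import torch.nn.functional as F

def cluster_token(encoder, images, tau):
    # encoder: images -> [N, K] logits (K = number of hoped-for clusters,
    # e.g. 10 for CIFAR-10).
    logits = encoder(images)
    # Gumbel-softmax sample: a soft one-hot during training, annealed
    # toward hard one-hots as tau decreases over the run.
    return F.gumbel_softmax(logits, tau=tau, hard=False)

# The resulting [N, K] (soft) one-hot is the only conditioning the
# diffusion decoder sees, so it has to behave like a cluster label.
```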
bmk#1476: hm weird alstroemeria313#1694: however the decoder is not actually using discrete tokens yet alstroemeria313#1694: during training alstroemeria313#1694: i am using gumbel quantization on the encoder logits and decreasing temperature over time. cfoster0#4356: *soft gumbel noises* bmk#1476: 4/7/9 0 2/0/6 0/6 8/2/3/6/5 ????????? 2/8/6 7/8/9/5 1 4/7/5 bmk#1476: interesting tokenization scheme alstroemeria313#1694: idk lol alstroemeria313#1694: mb i need to train longer dmvaldman#4711: Anyone look into these cards? https://www.untether.ai/products
80,000 fps inference on resnet 50 batch size 1?! alstroemeria313#1694: 90 epochs https://cdn.discordapp.com/attachments/729741769738158194/905558325255634985/demo_00090.png alstroemeria313#1694: gonna try a slower temperature rampdown gollark#3909: They say they have 200 MB of SRAM on each (16nm) chip. That sounds hilariously expensive. Louis#0144: What exactly is SRAM? EricHallahan#1051: **S**tatic EricHallahan#1051: **R**andom EricHallahan#1051: **A**ccess EricHallahan#1051: **M**emory Louis#0144: Pog gollark#3909: DRAM is what regular RAM sticks use: it uses a lot of capacitors to store data, which is cheap but high-latency to do anything with, and requires refreshing constantly. SRAM is just a bunch of transistors arranged to store data: it is very fast and low-power, but expensive because you need much more room for all the transistors. alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/905577621969047632/demo_00200-7.png alstroemeria313#1694: 200 epochs. Emad#9608: You just know that will eventually just be a bunch of 6s for freak out factor Emad#9608: It's like I was doing a batch of Diwali images earlier in diffusion eg https://cdn.discordapp.com/attachments/729741769738158194/905594681214259260/Screenshot_2021-11-03_at_16.15.06.png Emad#9608: Then all of a sudden in one of them in the background :scaredcat: https://cdn.discordapp.com/attachments/729741769738158194/905594834662875196/Screenshot_2021-11-03_at_16.13.14.png Emad#9608: Just one out of 20 Emad#9608: Ghost in the machine 👻 alstroemeria313#1694: eheh~ alstroemeria313#1694: if it had worked exactly perfectly there should be one row of only 6s
nshepperd#2316: ummm https://cdn.discordapp.com/attachments/729741769738158194/905600653366288394/demo.png nshepperd#2316: i forgot to rescale the train images to [-1, 1] ^^;; alstroemeria313#1694: ohhh ^^;; nshepperd#2316: epoch=1 eheh ^_^ https://cdn.discordapp.com/attachments/729741769738158194/905605255486656552/demo.png alstroemeria313#1694: oooh! it looks correct now :) nshepperd#2316: yep! hehe alstroemeria313#1694: diffusion clustering on cifar-10, 150 epochs https://cdn.discordapp.com/attachments/729741769738158194/905608670006632459/demo_00150-7.png alstroemeria313#1694: Um, is there any prior work with non-diffusion things similar to my diffusion clustering thing? nshepperd#2316: eheh, lots of "thing on plain white background" in the bottom row alstroemeria313#1694: There's not actually a non-diffusion autoencoder analogue of it alstroemeria313#1694: Bc the autoencoder would have to reconstruct based off the class alone alstroemeria313#1694: And it can't do this, it would just give you like the cluster means alone nshepperd#2316: mm it would have to have been a GAN thing, but i can't think of anything that did that alstroemeria313#1694: Or is this somehow just equivalent to k-means in RGB space alstroemeria313#1694: And using the cluster index as a condition for a diffusion model. nshepperd#2316: who would put up with the combined frustrations of gumbel quantization and GANs at once ;; alstroemeria313#1694: infogan! https://arxiv.org/abs/1606.03657 alstroemeria313#1694: I tried to replicate it once but failed alstroemeria313#1694: But I was much less good back then nshepperd#2316: I don't think it's equivalent to k-means
nshepperd#2316: bc its not trying to *minimize* the eps nshepperd#2316: ...i think alstroemeria313#1694: Can we bolt on whatever infogan does to diffusion alstroemeria313#1694: > In this paper, rather than using a single unstructured noise vector, we propose to decompose the input noise vector into two parts: (i) z, which is treated as source of incompressible noise; (ii) c, which we will call the latent code and will target the salient structured semantic features of the data distribution. alstroemeria313#1694: > We now propose a method for discovering these latent factors in an unsupervised way: we provide the generator network with both the incompressible noise z and the latent code c, so the form of the generator becomes G(z, c). However, in standard GAN, the generator is free to ignore the additional latent code c by finding a solution satisfying PG(x|c) = PG(x). To cope with the problem of trivial codes, we propose an information-theoretic regularization: there should be high mutual information between latent codes c and generator distribution G(z, c). Thus I(c; G(z, c)) should be high. alstroemeria313#1694: And then they... something alstroemeria313#1694: My diffusion thing doesn't just force there to be categorical or continuous latent codes, it also gives you an encoder to get the latent code for any input alstroemeria313#1694: It is different from this alstroemeria313#1694: And clearly more closely related to autoencoders alstroemeria313#1694: Like all I did was make the autoencoder information bottleneck so tiny it started functioning as a class or cluster label. nshepperd#2316: ohh, the D outputs a prediction for what the original c was? alstroemeria313#1694: Oh, does it? nshepperd#2316: and they do some.. mutual information thing to force the prediction to be informative nshepperd#2316: https://cdn.discordapp.com/attachments/729741769738158194/905612350902657064/2021-11-04-111957_1414x207_scrot.png alstroemeria313#1694: Ahh alstroemeria313#1694: And this still works for diffusion because it has the noise input and therefore doesn't just output the class/cluster mean or median. nshepperd#2316: yeah nshepperd#2316: 4 epochs https://cdn.discordapp.com/attachments/729741769738158194/905612972968259645/demo.png alstroemeria313#1694: @nshepperd at this point i feel like training a VAE encoder plus a diffusion decoder conditioned on the sampled latent is the best way to force diffusion to have a "GAN-like" small, hopefully interpretable latent space alstroemeria313#1694: like continuous VAE with mean/logvar outputs and a KL loss and then you sample from the means/logvars before handing it over to the decoder
nshepperd#2316: ahh nshepperd#2316: yeah, maybe! alstroemeria313#1694: the beauty of diffusion is that you can just make your VAE information bottleneck super tiny in comparison to how much information you would actually need to reconstruct the encoder input well. alstroemeria313#1694: Like you can make the latents GAN tiny even for huge outputs nshepperd#2316: yeah. it scales smoothly all the way down to 'unconditional' alstroemeria313#1694: Bc it acts as a "hint" to the decoder, not the entire amount of information from which the output must be reconstructed. alstroemeria313#1694: And it has to be a VAE so you can sample from the latent space easily alstroemeria313#1694: To do unconditional generation. nshepperd#2316: *nods* alstroemeria313#1694: And then you have an encoder too lol alstroemeria313#1694: Whereas you like never have good encoders for GANs nshepperd#2316: hehe nshepperd#2316: so, if we progressively distill this. we have something very similar to a GAN. like a ddim1 model with a small latent and a large noise vector. and you could... do SGD on the latent with random noises to find a distribution that matches what you want nshepperd#2316: then sample many images with that latent alstroemeria313#1694: Yep alstroemeria313#1694: so our mscoco_2 CLIP conditioned model is losing quality when progressively distilled down to 2 steps alstroemeria313#1694: (We are going down to 1 step but it's still training) alstroemeria313#1694: It's still producing things that look like the prompts though. alstroemeria313#1694: They are just visually worse. nshepperd#2316: ddim1 is a harder problem i think. so maybe the model needs to be larger to get equivalent quality
alstroemeria313#1694: mm~ alstroemeria313#1694: Yeah nshepperd#2316: @alstroemeria313 ooh, with this model type, we will actually be able to vary the style that is applied over the image. by just passing in an array of styles instead of a single cond alstroemeria313#1694: ohh? nshepperd#2316: like. it's sort of OOD though. but we could just apply the linears to the [n,512,h,w] cond array. then resample the scale and shifts to whatever the actual image size is at that stage alstroemeria313#1694: ahh. alstroemeria313#1694: you could also do multiple forwards alstroemeria313#1694: or rather, replicate the input batch n times and use n different styles alstroemeria313#1694: then blend the scores spatially nshepperd#2316: ahh yeah nshepperd#2316: and we have seen that score blending works w/ oai+style alstroemeria313#1694: this might do a thing too alstroemeria313#1694: like this is similar to the method from Paint By Word for spatially blending StyleGAN latents alstroemeria313#1694: Might be better than just blending scores nshepperd#2316: yeah! that is the sort of thing i was thinking of alstroemeria313#1694: Since blending the shifts/scales would produce a smooth interpolation *in style space* spatially. alstroemeria313#1694: And blending the resulting scores would have an effect like blending the resulting two RGB StyleGAN outputs spatially alstroemeria313#1694: Well, not so bad as that bc they're gradients nshepperd#2316: i think it is a lot like blending the RGBs. but the apparent alpha blending gets corrected by subsequent ddpm steps, sort of alstroemeria313#1694: yeah
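A minimal sketch of the score-blending variant just described; the model signature and tensor shapes are assumptions. The idea is to run the same noisy batch through the model once per style and mix the eps/score predictions with a spatial mask:

```python
import torch

def spatially_blended_eps(model, x, t, cond_a, cond_b, mask):
    # x: [N, C, H, W] noisy images; mask: [N, 1, H, W] with values in [0, 1].
    # model(x, t, cond) is assumed to return an eps/score prediction
    # shaped like x.
    eps_a = model(x, t, cond_a)
    eps_b = model(x, t, cond_b)
    return mask * eps_a + (1 - mask) * eps_b
```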
nshepperd#2316: i wonder what is halfway between starry night and anime in style space ^_^ nshepperd#2316: 8 epochs https://cdn.discordapp.com/attachments/729741769738158194/905623657542459442/demo.png alstroemeria313#1694: blending old-type VGG styles was definitely a thing you could do alstroemeria313#1694: but with this new encoder it will be easier alstroemeria313#1694: @nshepperd So why do we even need to do CLIP conditioned diffusion. We should be able to train a text encoder *from scratch* end to end with a diffusion model if we had a big enough training set. nshepperd#2316: ahah alstroemeria313#1694: (It will work even with a non-big-enough training set but then will handle out of distribution text not as well.) nshepperd#2316: cross attention! alstroemeria313#1694: eheh. nshepperd#2316: so, LAION-400M? alstroemeria313#1694: Wait if we do cross-attention from the text encoder output to the feature maps. alstroemeria313#1694: We will get a setup where *each word* can affect a different spatial region of the image alstroemeria313#1694: Like, dynamically nshepperd#2316: yep! nshepperd#2316: the attention map visualizations for that would be interesting Some Point Process#3793: there was a paper that said that SA layers (the attention heads) correspond to a certain diffusion process (i.e. admits an interpretation where clustering/diffusion occurs) nshepperd#2316: like i assume it would vary over timesteps too alstroemeria313#1694: ohh? Some Point Process#3793: https://arxiv.org/abs/1906.02762 alstroemeria313#1694: This has a seriously :bigbrain: abstract
nshepperd#2316: diffusion image segmentation lol EricHallahan#1051: I forgot about this paper lol nshepperd#2316: > Such an FFN-attention-FFN layer is "Macaron-like", and thus we call the network with this new architecture the Macaron Net. :0IQ:: sandwich :100000IQ:: macaron alstroemeria313#1694: Why not just train a diffusion net conditional on images that outputs segmentation maps at the end of the sampling process nshepperd#2316: that works! nshepperd#2316: given paired segmentation data anyhow alstroemeria313#1694: Yeah alstroemeria313#1694: hm alstroemeria313#1694: Is there any way we could apply diffusion to CycleGAN type stuff alstroemeria313#1694: Probably not? Louis#0144: French 🤮 alstroemeria313#1694: Like if we don't have a paired dataset. nshepperd#2316: maybe.... just train a conditioned diffusion with a class label? then reverse ddim with one label and forward ddim with the other alstroemeria313#1694: ohh alstroemeria313#1694: cifar-10 diffusion clustering, 310 epochs alstroemeria313#1694: https://cdn.discordapp.com/attachments/821173872111517696/905628289446576128/demo_00310.png alstroemeria313#1694: mm 315 is better https://cdn.discordapp.com/attachments/821173872111517696/905628574168514610/demo_00315.png nshepperd#2316: eheh nshepperd#2316: i get some of these clusters ^_^
nshepperd#2316: they may be french but i will acknowledge that macarons are delicious Louis#0144: lavender especially beepydatacenter#8080: Hi, I'm Fractal. Current CS student at UCF. I'm more of a novice than anything, but have a deep interest in NLP, and am looking to learn how to push the limits of NLP algorithms to do more funky stuff. I've done some ML stuff in the past, but I've barely written any actual code for it. While I know a fair bit of theory, I don't really know how to implement things, which is what I'm teaching myself right now. Def gonna be a lurker tbh, I *cannot stand* gigantic discords which make my anxiety horrible. I'm mostly here to find good resources and get direction to resources. beepydatacenter#8080: For a server with 10k people in it, this place is *quiet* beepydatacenter#8080: I'm kinda surprised this server doesn't have a designated resources channel. Louis#0144: It's more like 100 people Louis#0144: New people come here Louis#0144: See how technical the conversation is Louis#0144: And never talk up bc of that Louis#0144: lol Louis#0144: Ok true I don't know why new people don't talk Louis#0144: I just assume we scare them nshepperd#2316: some people just join 100s of discords, idk why tpapp157#3643: Eh, pretty much any place you go the vast majority of people are going to be lurkers. EricHallahan#1051: We are absolutely a prime example of a community that observes the 1% rule. https://en.wikipedia.org/wiki/1%_rule_(Internet_culture) EricHallahan#1051: Welcome! inox#5400: I like the framing that if more than 1% of people start being active we need more lurkers to balance it out
EricHallahan#1051: > I'm kinda surprised this server doesn't have a designated resources channel.
We at least have a reading list though. Artia#1759: Where can we, uh... get information about the provisionally decided hardware provider for GPT-NeoX and the state of progress on the hardware it will be trained on? cfoster0#4356: divination EricHallahan#1051: Divination of geese? cfoster0#4356: my tea leaves say "it's moving along nicely" EricHallahan#1051: :gameryes: EricHallahan#1051: Now that I think of it we used to have a links channel but it got retired sometime mid-year. AI_WAIFU#2844: does the faq bot still work? AI_WAIFU#2844: !faq Carl-bot#1536: AI_WAIFU#2844: yeah AI_WAIFU#2844: there's the resources EricHallahan#1051: I prefer using https://www.eleuther.ai/faq StellaAthena#3530: The fact that the embedded text for the bot doesn't trigger previews is weird StellaAthena#3530: !papers Carl-bot#1536: StellaAthena#3530: hey bmk#1476: time for an update