Louis#0144: 90/9/1?
EricHallahan#1051: https://en.wikipedia.org/wiki/1%25_rule_(Internet_culture)
Louis#0144: Ooooo
Louis#0144: Ok
bmk#1476: we have an even more extreme ratio than 1% lol
bmk#1476: if the 1% rule were true we'd have an amazing 95 creators
bmk#1476: but currently, we have less
alexyz#3459: aren't they from Ubuntu IRC?
StellaAthena#3530: Yes
kurumuz#5695: @aero want to finetune on this as well?
kurumuz#5695: should be cool ig
kurumuz#5695: ubuntu IRC might be too dry though
EricHallahan#1051: Well it is already in the Pile lol
aero#1357: we could try more fine tuning but, with base 6b if you use IRC format, it already talks about ubuntu nonstop
aero#1357: wonder if there are any good non-ubuntu chat datasets out there
kurumuz#5695: lol
pedro#8175: what about the reddit dataset used to train DialoGPT?
aero#1357: not a bad idea though reddit can often be less conversational than normal chat logs
aero#1357: guess it depends
gabriel_syme#3220: Feels nice to make it to the 10% 🥳
gabriel_syme#3220: Next step, to be a 1%er
alstroemeria313#1694: you trained the wikiart vqgan models
alstroemeria313#1694: and some other stuff
gabriel_syme#3220: huh, never thought much of that but it's cool to have made smth
Louis#0144: for?
EricHallahan#1051: @Louis
Louis#0144: OH
Louis#0144: DERP
bmk#1476: have you googled it yet
EricHallahan#1051: a) Try updating your PyTorch installation and
b) This falls under rule 2 and therefore we cannot help you further.
Zippy#1111: I wish I could be a contributor / creator, but unfortunately I'm just a noob who trains models via throwing bert / roberta / bart / ... spaghetti at the wall and hoping it sticks. :hawaiicry:
Kia#2550: You can create things at any moment and show them in #art, that's a contribution
Kia#2550: And people would love it too
Kia#2550: Also Helps The Community :hap:
Louis#0144: My memory is rly bad today
bmk#1476: I wasn't responding to you lmao
bmk#1476: someone else asked a googleable question and later deleted it
someKindaBean#8471: was it "what does lmgtfy mean?"
sweg#8920: anyone have any reccs for making diagrams like this
sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/887884558962425876/unknown.png
Louis#0144: @sweg do u want to do this in the paper
Louis#0144: Use draw.io
Louis#0144: Or PowerPoint.
sweg#8920: damn is it that popular lol
Louis#0144: Yes
sweg#8920: asked in another server and that was first thing someone said
sweg#8920: yeah im using it
sweg#8920: i can do it btw np
sweg#8920: if i dont do the model section i wont really know what else to do LOL
Louis#0144: Lmao
Raf#6004: Took the liberty to register myself and a colleague. Thank you for sharing the workshop link!!! (:
choltz95#4641: Registered..Hope there are gonna be free breakfast pastries
gabriel_syme#3220: There is a python package that makes NN diagrams but not sure how flexible
Kazumi#1297: a friend made this with react.js, where all the boxes are clickable https://media.discordapp.net/attachments/653241981463429130/862391033747472384/Screenshot_from_2021-07-08_02-53-29.png
gabriel_syme#3220: https://github.com/HarisIqbal88/PlotNeuralNet
gabriel_syme#3220: like I said, I'm really not sure if it's flexible enough to do your own things
gabriel_syme#3220: also, I guess it's like...NN architecture vs what you showed. My bad
ethan caballero#6044: https://twitter.com/ethancaballero/status/1438539029872582663
https://twitter.com/ethancaballero/status/1438539032183648258
Awesome_Ruler_007#7922: for lucidrains' `vit-pytorch` repo, we don't have pre-trained checkpoints right?
человек.#5265: hey, does someone know a cheap gpu cloud with v100's?
Awesome_Ruler_007#7922: maybe jarviscloud.ai? @человек.
BoneAmputee#8363: :LITTLEEYE: :LITTLEEYE:
человек.#5265: hmm looks interesting, thanks for the tip
StellaAthena#3530: Correct, as far as I know. It's been a while since I've thought about that but I think we didn't have data at the time the code was finished.
Awesome_Ruler_007#7922: damn, spoils all my plans
3.14#4598: Hey, thanks. idk how things work here but I would really like to help. Nice to meet you all.
Mega Glaceon#8882: tip: if you want a language model to produce a list of a certain length, prepend a number that counts down to the items, so instead of:
Doom
Quake
you do
10. Doom
9. Quake
8.
and it'll actually generate a list of 10 items :swink:
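A minimal sketch of this countdown trick as prompt-building code (the helper name and item list are made up):
```python
def countdown_prompt(n: int, seed_items: list[str]) -> str:
    # number the seed items counting *down* from n, so the model keeps
    # generating until it reaches 1 instead of stopping early
    lines = [f"{n - i}. {item}" for i, item in enumerate(seed_items)]
    lines.append(f"{n - len(seed_items)}.")  # leave the next number open
    return "\n".join(lines)

print(countdown_prompt(10, ["Doom", "Quake"]))
# 10. Doom
# 9. Quake
# 8.
```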
3.14#4598: tks, gonna take a look
3.14#4598: I can parse the compiled bibtex they let us download if you want.
Does that work?
StellaAthena#3530: Oh that's far smarter than I had come up with. Yeah, just parse out the bibliography entries
3.14#4598: Sure, as soon as I get off work I'm gonna work on that.
StellaAthena#3530: I imagine that's an hour or two at most. Afterwards, I have some actual DL work I can hand you for a project that soft launched last week.
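One way this could look with the `bibtexparser` package (the filename is a stand-in for the compiled file mentioned above):
```python
import bibtexparser  # pip install bibtexparser

with open("workshop.bib") as f:
    db = bibtexparser.load(f)

# db.entries is a list of dicts, one per bibliography entry
for entry in db.entries:
    print(entry.get("ID"), "-", entry.get("title", ""))
```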
3.14#4598: Also, do we have any material on how GPT-J is served on 6b.eleuther.ai ? I have some ideas, but to know how you people did it would help a lot. I'm also a lot curious on how you did it.
Kharr#7888: This actually works for arbitrary sequence based tasks as well like 🎵 and code. Adding line numbers/annotation helps context.
StellaAthena#3530: @thenightocean is that publicly sharable info?
3.14#4598: Thanks. I admire a lot this project and I would love to be part of it.
thenightocean#6100: well it consist of several parts. The front end is the public repo, the backend is private. You can check with @researcher2 if this should stay private.
Mega Glaceon#8882: hmm :kisuoho: is there any paper i could read about this? also like, in my experiences, if you give a language model numbers starting from 1, it often doesn't continue the sequence, but if you give the reverse sequence, it completes the sequence way more often
Mega Glaceon#8882: let's say gpt3 or codex
Kharr#7888: A lot of the random things I share haven't been published by someone yet. I'm sure one of the lurkers will get to it in good time.
Mega Glaceon#8882: ah okay
Kharr#7888: I discovered this particular property when I was playing around with sliding the small context window along to produce long coherent sequences in the early days.
Mega Glaceon#8882: hehe
Mega Glaceon#8882: now i want to try adding some kind of time embedding to my midi music generator
Dashiell#8739: extremely out-there question: has anyone tried training a transformer-style language model on the linear A we have?
bmk#1476: wouldnt be useful imo
Dashiell#8739: people are always talking about how language models only have form and no grounding--well, that's all anyone has for linear A
Kharr#7888: Try simply pre-pending the note position in the sequence. Should work decently well even with a short context
Dashiell#8739: has there been any work on entirely unaligned multilingual modeling?
bmk#1476: try training a transformer on that little text of literally any other existing language
Dashiell#8739: I guess I don't actually know how much linear A text has survived
bmk#1476: > The extant corpus, comprising some 1,427 specimens totalling 7,362 to 7,396 signs, if scaled to standard type, would fit easily on two sheets of paper.
Dashiell#8739: oof
bmk#1476: try training a transformer on 10 kb of english text first
Dashiell#8739: clearly I didn't think this through very well
StellaAthena#3530: You should write a post for our blog about these things.
StellaAthena#3530: Being able to produce probably-realistic-ish Linear A isn't helpful for anything though.
Kharr#7888: But random jabbering is so much more fun. :berk: A part of me is always like "this would be cool to publish/write up" and then I find something shinier and move on before I have time to do it.
Sphinx#2092: unaligned?
Dashiell#8739: what I was kinda grasping for was trying to think of how it might be made to be useful. There are debates on even what language family it's a part of--could we look at how representations of (known) language families differ to try and find what other ancient languages it's most similar to? And in a thought experiment sorta way I was just trying to think through what we really do have when we have these ungrounded statistical representations of language, but there was no possibility of interpreting them through actually knowing the language
Dashiell#8739: but if this isn't the place for my quarter-baked musings I can try and keep them to myself in the future
Sphinx#2092: Sounds like you should look up unsupervised MT.
Sphinx#2092: We can translate between languages that have no known translations in any language.
Sphinx#2092: Though you will likely need more than two pieces of paper.
Dashiell#8739: > We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT
> English-French datasets, without using even a single parallel sentence at training
> time.
ahh, yeah, this is exactly what I was hoping existed
https://arxiv.org/pdf/1711.00043.pdf
Dashiell#8739: thanks @Sphinx ❤️
Sphinx#2092: That one is a bit old, but it tells a better story than :morelayers: .
3.14#4598: I have it in json, generated from the ginormous bibtex file and csv generated from the json. I'm presuming a bit of consistency on the bibtex because it is impossible to check it by hand.
Can I send it to you on DM or just here?
StellaAthena#3530: Yeah go ahead and dm. Or alternatively, you can email it to me at stella at eleuther dot ai
Mega Glaceon#8882: do you mean like, i have note tokens and delay tokens, and embed the amount of time that has passed in the song? :kisuoho:
CarsonPoole#0640: does anyone have any guesses as to what would cause an OOM when running `jax.ops.index_update`
CarsonPoole#0640: using haiku
CarsonPoole#0640: I'm pretty new to jax so it very likely is something dumb
Kharr#7888: Yes -- just add something to allow the model to keep track of where the note is in the song
Mega Glaceon#8882: yeah
Mega Glaceon#8882: makes sense
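A toy sketch of Kharr's suggestion for a token-based music generator (the event vocabulary is made up):
```python
# prefix every event with its position so the model can tell where it is
# in the song even when the context window only covers a short slice
events = ["note_C4", "delay_8", "note_E4", "delay_8", "note_G4"]
tokens = []
for pos, event in enumerate(events):
    tokens += [f"pos_{pos}", event]
# -> ['pos_0', 'note_C4', 'pos_1', 'delay_8', 'pos_2', 'note_E4', ...]
```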
CarsonPoole#0640: with haiku/jax what is the best way to count params
CarsonPoole#0640: like find the number of params in total
CarsonPoole#0640: I figured out the thing I had previously asked btw
guac#4716: this is what i use
```python
import jax

def num_params(params) -> int:
    # sum the sizes of every leaf array in the parameter pytree
    return sum(l.size for l in jax.tree_leaves(params))
```
guac#4716: where params is a PyTree obv
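For instance, with a toy parameter dict:
```python
import jax.numpy as jnp

params = {"w": jnp.zeros((3, 4)), "b": jnp.zeros((4,))}
print(num_params(params))  # 16, i.e. 3*4 + 4
```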
CarsonPoole#0640: thanks!
kindiana#1016: https://dm-haiku.readthedocs.io/en/latest/api.html#haiku.data_structures.tree_size
CarsonPoole#0640: would there be a significant discrepancy between these two methods?
kindiana#1016: its the same thing
kindiana#1016: https://github.com/deepmind/dm-haiku/blob/main/haiku/_src/utils.py#L174#L205
kindiana#1016: lol
CarsonPoole#0640: other jax noob question, is this a valid way to "benchmark" inference speeds? https://cdn.discordapp.com/attachments/729741769738158194/888189364545400892/Screen_Shot_2021-09-16_at_5.27.05_PM.png
CarsonPoole#0640: more context https://cdn.discordapp.com/attachments/729741769738158194/888189562654978057/Screen_Shot_2021-09-16_at_5.28.07_PM.png
CRG#8707: See benchmarking section: https://jax.readthedocs.io/en/latest/faq.html
CarsonPoole#0640: not necessarily "valid" as in "good," but more so "not giving you something completely different from what you want"
kurumuz#5695: time.perf_counter should be more accurate
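Combining both suggestions, a rough timing pattern per the JAX FAQ (the function and shapes are arbitrary):
```python
import time
import jax
import jax.numpy as jnp

@jax.jit
def f(x):
    return (x @ x).sum()

x = jnp.ones((1024, 1024))
f(x).block_until_ready()  # warm-up call so compile time isn't measured

start = time.perf_counter()
for _ in range(100):
    f(x).block_until_ready()  # dispatch is async; force completion
print("mean seconds per call:", (time.perf_counter() - start) / 100)
```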
CarsonPoole#0640: okay and one last question that I can't find anywhere when searching--are there any examples somewhere of using jit with haiku?
CarsonPoole#0640: https://cdn.discordapp.com/attachments/729741769738158194/888195494571032646/Screen_Shot_2021-09-16_at_5.51.42_PM.png
CarsonPoole#0640: this doesn't work https://cdn.discordapp.com/attachments/729741769738158194/888195553383559179/Screen_Shot_2021-09-16_at_5.51.54_PM.png
CarsonPoole#0640: do I need to use `hk.jit`?
CarsonPoole#0640: and put that on the `__call__` method?
CRG#8707: Are you using static argnums in the jit?
CarsonPoole#0640: I believe so yeah
CRG#8707: What fails exactly?
CarsonPoole#0640: I think I must just not have static shapes
CarsonPoole#0640: the error message is a lot less than clear
CRG#8707: And static_argnums on the responsible arguments doesn't work?
CarsonPoole#0640: no it complains that it's non hash-able
gabriel_syme#3220: cross-lingual embeddings were really cool back then, not sure how they rank now
lilbaby#3211: what happened to the faraday cage
gabriel_syme#3220: The computational resources previously available are currently tied up in research endeavors I believe. It was always a possibility
lilbaby#3211: ohh
lilbaby#3211: damn
lilbaby#3211: i was gonna use it for a school project
gabriel_syme#3220: you can still use the various notebooks pinned in #art along with colab. Most of them will work fine with free colab, although some higher resolution and/or diffusion models will not
gabriel_syme#3220: another way to do it is to use @\nshepperd's Jax implementation of @\alstroemeria313's clip-guided diffusion model here: https://colab.research.google.com/drive/1ZZi1djM8lU4sorkve3bD6EBHiHs6uNAi
Colab free gives you a v2 TPU and this might run, although I'm not sure. If not, Google TPU Research Cloud gives out 1 month free of v3 TPUs.
fengoku#4038: anybody interested in data augmentation for NLP? if so, feel free to check out our talk for google AI research (just released today lol): https://www.youtube.com/watch?v=kNBVesKUZCk
oreo#2740: @fengoku hey! I skimmed your paper last week ๐ do you have any ideas/thoughts on DA for more domain-specific datasets like biomedical/clinical text?
chilli#5665: Sounds like you're using static argnums on a non-Hashable object
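A small illustration of that constraint (the function is made up):
```python
import jax
import jax.numpy as jnp
from functools import partial

# ints, bools, shapes: hashable, so they work as static arguments
@partial(jax.jit, static_argnums=(1,))
def pad_to(x, length):
    return jnp.pad(x, (0, length - x.shape[0]))

pad_to(jnp.ones(3), 8)  # fine; retraces once per distinct `length`
# passing an array (unhashable) in a static position raises exactly
# this kind of "non-hashable" error
```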
gabriel_syme#3220: nice thanks! the STE reminds me a bit of what CLIP can do right? I remember I tried this early on, literally with the example you give, a prompt like `"A line of buildings around a square during a rainy day"` and then changing `rainy` to `snowy / sunny / cloudy` to create different examples of the same image. Wonder if text-text contrastive can do this as well
gabriel_syme#3220: also synthetic noise is quite interesting, I wonder if it can be used in conjunction with kharr's idea of generative pretraining
celest#5646: hii, how can i use .imagine??
celest#5646: Is faraday cage still functional for that command?
nev#4905: no
Kazumi#1297: hm? what happened to it?
celest#5646: i'm not sure i came back to check but it's not working apparently
Kia#2550: It's currently down
Kia#2550: Probably for the meanwhile...
succmi#3812: Is there a dummy guide on how to install this on our pc?
Kia#2550: Just search in Google VQGAN+CLIP GitHub
Kazumi#1297: <https://colab.research.google.com/drive/1L8oL-vLJXVcRzCFbPwOoMkPKJ8-aYdPN#scrollTo=CppIQlPhhwhs>
IGg#7871: Hello, how are you? Don't get hurt by my English! It was happening and I would like to know if anyone knows of any technology to create 3D models from a video or images, and that this works thanks to some Artificial Intelligence ... Thanks guys
Louis#0144: >Don't get hurt by my English
Idk why but I find that phrase so funny
EricHallahan#1051: Look at NeRF and its variants.
EricHallahan#1051: Or look at more traditional photogrammetry techniques.
Louis#0144: OG photogrammetry is rly good
mo#0466: some people here have benchmarked Codex for NLP tasks, right?
wondering if the Codex API does beam search or if one has to implement beam search oneself.
anyone know?
mo#0466: like, I wanna sample from Codex in a sensible way... and I'm wondering if the API already does it sensibly
Erik Nijkamp#8820: @mo for humaneval (not nlp tasks), I ran the codex api against the humaneval benchmark. I strongly believe the openai codex api simply exposes top-p / nucleus sampling. You may want to get a set of samples and introduce a "sensible" heuristic on top of this
StellaAthena#3530: My testing indicates that the Codex model performs comparably to a 1B parameter GPT-3 model.
random person#5234: Quick question
random person#5234: As someone from a CV background, any good suggestions on a short list of papers to read/skim to learn the latest transformer/bert hype lol
bmk#1476: https://discord.com/channels/729741769192767510/729741769738158194/801630685525835787
Parker#3197: https://paperswithcode.com/task/3d-reconstruction
random person#5234: Kk thx
Parker#3197: https://paperswithcode.com/task/3d-scene-reconstruction
Parker#3197: I also think this is a good answer and might be better than the links I sent
janus#0150: Wow! @gabriel_syme if you haven't seen ^
gabriel_syme#3220: I'm still scared to move to 3d space, but these are pretty nice! maybe it's time to check that dataset
Zepplin#6441: So I understand I can't ask beginner questions here. Is there a place where I can ask such questions?
Zepplin#6441: I'm kinda stuck
AI_WAIFU#2844: See #communities
Zepplin#6441: I will check them out. Thanks @AI_WAIFU
Zepplin#6441: I'm not seeing anywhere I can ask on there.
AI_WAIFU#2844: the discord science network, Yannik's, and the fast.ai servers are probably the best for that, not necessarily in that order.
Zepplin#6441: @AI_WAIFU The link to the science network is broken
Zepplin#6441: https://cdn.discordapp.com/attachments/729741769738158194/888647594413268992/unknown.png
AI_WAIFU#2844: huh
EricHallahan#1051: huh
EricHallahan#1051: Thanks for pointing that out.
AI_WAIFU#2844: maybe go to the fast.ai guys for now
AI_WAIFU#2844: it's broken on the main page too. https://discordnetwork.com/
Zepplin#6441: Oof
AI_WAIFU#2844: Someone should go and let them know
Awesome_Ruler_007#7922: it took me 2 weeks to realize that the config file that didn't run was named `.pth` instead of `.py` 🤦‍♂️
Google repos......
Awesome_Ruler_007#7922: I thought that `pth` must be some obscure standardized file extension for config file -_-
alstroemeria313#1694: maybe it's morning and i haven't had enough coffee yet but
alstroemeria313#1694: I have a function that takes an input from 0 to 1 and outputs a vector.
alstroemeria313#1694: How do I find the instantaneous rate of change of the vector.
alstroemeria313#1694: i.e. the thing that, if i followed it with small steps, i would be approximately following the curve the vector took as the scalar input went up or down.
StellaAthena#3530: @alstroemeria313 if the domain of the function is discrete you cannot compute the derivative
alstroemeria313#1694: Do I need forward mode AD for this actually.
alstroemeria313#1694: Input is scalar, output is huge vector, and I want the Jacobian.
alstroemeria313#1694: ok but... the function uses elementwise operations only
alstroemeria313#1694: So I can write it as a scalar input scalar output function and evaluate it elementwise.
alstroemeria313#1694: (it's a fancy interpolation between two vectors that i take as constant w/ a single scalar weight)
StellaAthena#3530: @alstroemeria313 you have $f:\{0,1\}\to\mathbb{R}^n$ right
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888767694298554379/193204646687408129.png
alstroemeria313#1694: yes
alstroemeria313#1694: er
StellaAthena#3530: You cannot differentiate that.
alstroemeria313#1694: no that's an interval
StellaAthena#3530: Ooooo
StellaAthena#3530: @alstroemeria313 you have $f:[0,1]\to\mathbb{R}^n$?
alstroemeria313#1694: I am trying to do a fancy smooth interpolation and need tangents to the curve
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888767890369708032/193204646687408129.png
alstroemeria313#1694: Yes
alstroemeria313#1694: I want to train a model on evaluations of tangents of this curve such that when it is trained you can follow the curve by the Euler method
StellaAthena#3530: So you want to consider $\mathbf{f} = (f_1, f_2, f_3,\ldots f_n)$ and compute the derivative of each $f_i$ independently.
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888768587039379496/193204646687408129.png
alstroemeria313#1694: yes, the function is f(a, b, weight) = a * sqrt(weight) + b * sqrt(1 - weight)
alstroemeria313#1694: Where a is Gaussian noise, b is from the training set, and I sample weight uniformly during training.
StellaAthena#3530: Whatโs the vector-y part of that
alstroemeria313#1694: a and b are vectors
alstroemeria313#1694: So the output is a vector
StellaAthena#3530: Gotcha
alstroemeria313#1694: Right now I am computing f(a, b, weight) = a * weight + b * (1 - weight).
alstroemeria313#1694: Which I can get correctly scaled steps in the correct direction for easily.
StellaAthena#3530: What do you want to take the derivative with respect to? $a$? $b$? The weight?
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888769258929152080/193204646687408129.png
alstroemeria313#1694: so if i hold a and b constant and vary the weight. the output follows a curve in R^n.
alstroemeria313#1694: i need tangents to this curve.
mgostIH#0245: Since that's a line isn't the tangent constant
alstroemeria313#1694: It isn't a line.
alstroemeria313#1694: The one I am using now is but I want to use f(a, b, weight) = a * sqrt(weight) + b * sqrt(1 - weight), which isn't.
mgostIH#0245: And sqrt is elementwise sqrt of each component of weight?
alstroemeria313#1694: weight is a scalar.
Sphinx#2092: isn't it just a -b?
StellaAthena#3530: So the easy part is to write
Sphinx#2092: oh you want sqrt(weights).
alstroemeria313#1694: Like if you set a=1 and b=1 then this does not make a straight line...!
Sphinx#2092: It's still just a 1D derivative though.
mgostIH#0245: Isn't it just a * 1/(2*sqrt(w)) + b * -1/(2*sqrt(1 - w))
alstroemeria313#1694: this is why i say i feel like i haven't had enough coffee ^^;;
Sphinx#2092: Yes.
alstroemeria313#1694: Also I think I need to scale the thing by the remaining path length.
StellaAthena#3530: So the easy thing to do is write $f'(a_i, b_i, w) = \frac{1}{2}\left(a_i w^{-\frac{1}{2}} - b_i w^{-\frac{1}{2}}\right)$
mgostIH#0245: I'd build a table of the curve length first and then refer to that later for rescaling dynamically
mgostIH#0245: There's a video about bezier curves that does just that
alstroemeria313#1694: Oh... so the thing I want is to consider a 1d manifold defined by this path, and get the gradient wrt the inputs of 1/2 the squared distance *in this manifold* between the inputs?
mgostIH#0245: https://youtu.be/aVwxzDHniEw?t=907
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888772330757701663/193204646687408129.png
mgostIH#0245: See at 15:07
StellaAthena#3530: Wow I too need coffee given how many typos that had
StellaAthena#3530: But I'm 75% sure I got them all
StellaAthena#3530: Do you also want to differentiate through your sampling process? Or do you just want the tangent line to the curve defined by the weights
alstroemeria313#1694: i only need the tangent line.
alstroemeria313#1694: i'm training a model to output the tangent line for various inputs, and then during inference i follow it with Euler's method, and i need my outputs to stay in distribution during inference when i do this
StellaAthena#3530: So you want the derivative with respect to arclength?
alstroemeria313#1694: so it has to like. not point directly to b.
alstroemeria313#1694: i... maybe.
alstroemeria313#1694: bc then Euler's method leaves the region the model was trained on.
mgostIH#0245: Oh btw keep in mind that sqrt might not have a well defined tangent where you start (and end)
mgostIH#0245: Also I'd suggest using something better than the Euler method
alstroemeria313#1694: well right now i am using straight lines between a and b
alstroemeria313#1694: which seems to work, i think
StellaAthena#3530: So you have a curve in space, winding from a to b
mgostIH#0245: ye for straight lines euler method is exact
StellaAthena#3530: You can parameterized this with the variable $w$, so that as $w$ varies from $0$ to $1$ $f(w)$ traces the curve
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888773683500441680/193204646687408129.png
alstroemeria313#1694: Usually denoising diffusion models solve this problem by predicting the noise or the clean image or something and manually sticking the current timestep back on the curve.
mgostIH#0245: From what I understand you are training a network to represent some sort of vector field so that each point will lie on a line you can follow?
alstroemeria313#1694: yep!
alstroemeria313#1694: it's score matching rn
alstroemeria313#1694: so Euler's method during inference is equivalent to gradient ascent
mgostIH#0245: What's your training set, like how do you generate the original curves/vector field to follow
StellaAthena#3530: So the effect of using the $\sqrt{}$ is that the speed at which the function traces the curve varies
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888774259852345404/193204646687408129.png
StellaAthena#3530: Is that what you want?
alstroemeria313#1694: But this seems to require you to commit to a specific schedule of timesteps
alstroemeria313#1694: Instead of just following a vector field
alstroemeria313#1694: i think so?
StellaAthena#3530: So $w$ (if you're looking up vector calc stuff this is typically referred to as $t$ btw) can be thought of as a time parameter. As you go forward in time from $w=0$ to $w=1$ you travel the length of the curve
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888774822027493386/193204646687408129.png
StellaAthena#3530: Functions that only impact $w$ and don't touch anything else change the rate at which we travel along the curve
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888774989095006258/193204646687408129.png
mgostIH#0245: @alstroemeria313 What you are trying to do reminds me a lot of neural ODEs, you are making some sort of continuous time diffusion model hmm
StellaAthena#3530: Plain $w$ travels at a constant speed, while $\sqrt{w}$ spends more time in the later parts of the curve and less time in the early parts
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888775164345606204/193204646687408129.png
mgostIH#0245: Although I thought that diffusion models were highly stochastic since they try correcting gaussian noise in principle
alstroemeria313#1694: ```python
reals = reals.to(device)
# per-example noise levels in [0, 1] from a quasirandom sequence
noise_levels = rng.draw(reals.shape[0])[:, 0].to(device)
noise = torch.randn_like(reals)
# lerp each clean example toward its noise sample by its noise level
noised_reals = reals.lerp(noise, noise_levels[..., None, None, None])
with torch.cuda.amp.autocast():
    pred_reals = noised_reals - model(noised_reals)
    loss = F.mse_loss(pred_reals, reals)
```
StellaAthena#3530: $w^2$ does the reverse, going slowly at first and then zipping through the end.
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888775259166232576/193204646687408129.png
alstroemeria313#1694: yes!
alstroemeria313#1694: it is in fact a continuous time diffusion model
StellaAthena#3530: By the way, you've been writing $\sqrt{1-w}$ but I strongly suspect you mean to be writing $(1-\sqrt{w})$
mgostIH#0245: Aye but I wonder whether treating it as an ODE is the right approach, I'd think stochastic ODEs would be more suited, but of course they might be far more complex than this
alstroemeria313#1694: it really is sqrt(1 - w)
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888775591300571146/193204646687408129.png
StellaAthena#3530: Hmm
alstroemeria313#1694: w is a variance
StellaAthena#3530: That complicates things significantly
StellaAthena#3530: Wait
StellaAthena#3530: That doesn't stay on the curve
alstroemeria313#1694: Yeah, it makes a weird curve that does things like have a maximum of sqrt(2) if both a and b are 1
alstroemeria313#1694: This is bc one of the inputs is always Gaussian noise
StellaAthena#3530: O.o
alstroemeria313#1694: And its variance has a linear schedule
alstroemeria313#1694: I uh, real diffusion models work this way ^^;;
alstroemeria313#1694: They just manually put the iterate on each timestep back on the curve
StellaAthena#3530: So you've been writing this like it's a line but it's in no way a line
alstroemeria313#1694: I keep calling it a curve... ^^;;
StellaAthena#3530: I call lines curves all the time
alstroemeria313#1694: eheh~
StellaAthena#3530: Okay
StellaAthena#3530: I agree with @Sphinx that SODEs could work but I recommend avoiding them when you can just do vector calc instead
StellaAthena#3530: So the formula for the derivative is
$f'_i(a_i, b_i, w) = \frac{1}{2}\left(a_i w^{-\frac{1}{2}} - b_i (1-w)^{-\frac{1}{2}}\right)$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888777286223032392/193204646687408129.png
alstroemeria313#1694: ty :)
mgostIH#0245: In your case I'd probably just do a forward diff pass
mgostIH#0245: Ah, in Pytorch it might be a couple calls to backward
StellaAthena#3530: and I'm pretty sure you can just ignore the weirdness and iterate using this formula
mgostIH#0245: But overall the efficiency cost might not matter, I wouldn't really try coding manually the derivatives of stuff if I can avoid it
StellaAthena#3530: This is not defined at $w\in\{0,1\}$, so you may have to settle for $w\in(\varepsilon, 1-\varepsilon)$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888777932481384458/193204646687408129.png
StellaAthena#3530: You can close the gap by taking an initial tiny step in the (straight line) direction of $b$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888778111410389022/193204646687408129.png
StellaAthena#3530: First from $a$ to $\varepsilon$ and then from $1-\varepsilon$ to $b$.
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/888778304973320242/193204646687408129.png
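A quick sanity check of that derivative against autograd, with the epsilon clamp (a PyTorch sketch; the names are illustrative):
```python
import torch

def curve(a, b, w):
    return a * torch.sqrt(w) + b * torch.sqrt(1 - w)

def tangent(a, b, w, eps=1e-4):
    w = w.clamp(eps, 1 - eps)  # derivative blows up at w = 0 and w = 1
    return 0.5 * (a / torch.sqrt(w) - b / torch.sqrt(1 - w))

a, b = torch.randn(8), torch.randn(8)
w = torch.tensor(0.3)
# scalar input, vector output: the Jacobian is exactly the tangent vector
jac = torch.autograd.functional.jacobian(lambda t: curve(a, b, t), w)
assert torch.allclose(tangent(a, b, w), jac, atol=1e-4)
```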
alstroemeria313#1694: and yeah diffusion models are trained to reverse a stochastic differential equation
alstroemeria313#1694: it's kind of a galaxy brained method, the forward SDE adds Gaussian noise gradually and so you end up with a model that can generate samples starting from pure Gaussian noise
alstroemeria313#1694: But the score matching formulation helped it seem less bizarre to me ^^;;
Orz#3023: reverse stochastic differential equation?
I did find "backwards stochastic differential equation" with a quick Google search
Do you mean that?
alstroemeria313#1694: https://openreview.net/forum?id=PxTIG12RRHS
alstroemeria313#1694: The model I am training rn is pure score matching.
Orz#3023: woah
alstroemeria313#1694: It actually works on stuff like MNIST but on more complex datasets it seems to produce jumbled outputs and IDK why
alstroemeria313#1694: like this is from a 64x64 imagenet model w/ my current training method https://cdn.discordapp.com/attachments/729741769738158194/888779349313081354/demo_00000-5.png
alstroemeria313#1694: OpenAI has managed to train some really good diffusion models but their codebase is really complicated and I'm trying to work it out for myself
alstroemeria313#1694: So one of the simplifications I made was making my noised training examples by simple linear interpolation between the clean example and N(0, 1).
alstroemeria313#1694: And I was wondering if this was wrong and was responsible for my bad results.
alstroemeria313#1694: Since... this doesn't actually represent a Markov process.
alstroemeria313#1694: I think for a Markov process you have to use only addition of independent noise and scaling the current timestep down.
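The two noising choices side by side as a sketch, with `t` the noise variance in [0, 1] (matching the sqrt parameterization discussed earlier):
```python
import torch

def noised_lerp(x0, t):
    # what the training snippet above does: plain lerp toward N(0, I)
    eps = torch.randn_like(x0)
    return x0.lerp(eps, t)

def noised_vp(x0, t):
    # "scale the signal down, add independent noise": the variance-
    # preserving form that a Markovian forward process marginalizes to
    eps = torch.randn_like(x0)
    return x0 * (1 - t) ** 0.5 + eps * t ** 0.5
```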
gabriel_syme#3220: that's nice, that was equally opaque as the diffusion models
alstroemeria313#1694: eheh...
alstroemeria313#1694: Yeahhh...
gabriel_syme#3220: which one of these is like a VAE again, the SDE?
gabriel_syme#3220: ah no, they are closer to EBMs I think, the diffusion are more like VAEs
gabriel_syme#3220: that fact smh did not help one bit, I'm just too small brain for these models
gabriel_syme#3220: what do you think about this btw? https://github.com/magenta/symbolic-music-diffusion
Paul van Purp#5488: that would be equivalent to having an extremely steep noise schedule g(t) = 1/(1-t) compared to the default exponential schedule
alstroemeria313#1694: oh
alstroemeria313#1694: you mean it doesn't spend enough time refining global details?
Paul van Purp#5488: my point would be more that it gets confused right at the start and can't keep up with the noise anymore
alstroemeria313#1694: the start of what.
alstroemeria313#1694: the reverse process?
Paul van Purp#5488: exactly
alstroemeria313#1694: ahh
Paul van Purp#5488: the model should have to spend more time in the high to moderate noise regime for global features
alstroemeria313#1694: yeah i always get confused if 'start' or 'end' refer to the forward or reverse process
Paul van Purp#5488: with that g(x) you spend very little time at extremely high noise, and a lot more at very low noise compared to the default
alstroemeria313#1694: oh so
alstroemeria313#1694: i can scale the scores?
Paul van Purp#5488: yes, that's pretty much equivalent to temperature scaling
Paul van Purp#5488: does help sometimes (especially with art xD)
alstroemeria313#1694: i thought i *had* an exponential noise schedule during sampling.
alstroemeria313#1694: It pretty much is.
alstroemeria313#1694: Bc I am doing constant lr gradient steps.
Paul van Purp#5488: do you have a different schedule during training and generation?
alstroemeria313#1694: um
alstroemeria313#1694: Training is continuous.
alstroemeria313#1694: There is not actually a schedule.
Paul van Purp#5488: ok, yes, is g(x) the same
alstroemeria313#1694: what's g(x)
Paul van Purp#5488: the brownian diffusion term in the sde paper
alstroemeria313#1694: oh
alstroemeria313#1694: i am not doing that
alstroemeria313#1694: i am doing pure score matching
Paul van Purp#5488: but denoising score matching right?
alstroemeria313#1694: yes
alstroemeria313#1694: Noisy image goes in, (negated) score comes out
Paul van Purp#5488: ok, and you get the images by linear x + t (eps - x) for t \in [0, 1]?
alstroemeria313#1694: by lerping
Paul van Purp#5488: right, the sde part would only be a formalism, so that is equivalent to doing g(t) = 1/(1-t) and f(x, t) = x / (t - 1) for dx = f(x)dt + g(x)dw as forward process
Paul van Purp#5488: if you then have an exponential schedule during the generation steps, you would have a mismatch there; this makes it at least less efficient
alstroemeria313#1694: i have tried reweighting the losses during training according to the noise level of the example and it just kind of makes it learn different frequency components first
alstroemeria313#1694: usually i weighted the high frequency components more strongly
alstroemeria313#1694: like, since i'm learning the score and not predicting eps
alstroemeria313#1694: the high noise examples are overweighted anyway
alstroemeria313#1694: compared to predicting eps.
Paul van Purp#5488: but you already do something like s(x, t) = s(x)/g(t) right?
alstroemeria313#1694: no
alstroemeria313#1694: s is score?
Paul van Purp#5488: s is the scorenet
alstroemeria313#1694: ahh
alstroemeria313#1694: it doesn't have a timestep input.
alstroemeria313#1694: It learns to output lower magnitude scores for lower noise images.
Paul van Purp#5488: any particular reason for that?
alstroemeria313#1694: it's simpler that way
alstroemeria313#1694: i don't have to figure out what 'timestep' i'm at all the time during sampling
Paul van Purp#5488: ok got it, predicting the eps would be a direct improvement probably then, because the outputs then don't have to span so many magnitudes
alstroemeria313#1694: i've tried that and it didn't help
alstroemeria313#1694: There is some more fundamental problem here x_x
alstroemeria313#1694: I tried a ton of things yesterday.
Paul van Purp#5488: but to the g(x), i have tried lerping and it is hard to make it work, i would recommend to give the model a chance at t=1 and have the model spend a lot of time in the medium noise regime, if you want to have more global coherency
SkepticalSim#4142: Excuse the newbie post but GPT-J-6B wrote a poem when prompted with a Haiku
https://waters.me/poetry/poetry-from-eleutherai/ is there an online searchable copy of the training data (I don't have 800GB spare) so I can check for originality/inspiration. What is the copyright situation?
Paul van Purp#5488: ok, it's just a way to save model capacity anyway
alstroemeria313#1694: ahh ty :)
alstroemeria313#1694: I'm trying a kind of weird thing now.
alstroemeria313#1694: Doing score matching by training an... energy based model? It outputs an unnormalized log density for the input.
alstroemeria313#1694: Then I backprop to get the score.
alstroemeria313#1694: This guarantees the resulting vector field is conservative.
alstroemeria313#1694: well, it works better if i multiply its outputs by the dimension of the input before getting the gradient wrt the input.
alstroemeria313#1694: Otherwise it fails to learn a steep enough function.
alstroemeria313#1694: idk it's kinda worse so far
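The trick being described, as a bare-bones sketch (`energy_net` stands in for any network producing one scalar unnormalized log density per sample):
```python
import torch

def score(energy_net, x):
    # score(x) = grad of the unnormalized log density wrt x; taking the
    # gradient of a scalar guarantees the vector field is conservative
    x = x.detach().requires_grad_()
    log_density = energy_net(x).sum()
    (grad,) = torch.autograd.grad(log_density, x, create_graph=True)
    return grad
```
The `create_graph=True` is what makes this roughly twice as expensive: the score-matching loss has to backprop through this gradient.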
cfoster0#4356: This is pretty expensive too, no?
Paul van Purp#5488: dsm is not even that uncommon in training ebms when not using cd
alstroemeria313#1694: it is twice as expensive for the same number of params i think
Paul van Purp#5488: yes exponentially harder, you basically encode the scorenet inside the ebm gradients wrt the input
alstroemeria313#1694: also mb you will run into issues with fp16
alstroemeria313#1694: idk
alstroemeria313#1694: wow this training is unstable
Paul van Purp#5488: very likely, would also recommend spectral norm
alstroemeria313#1694: oh no
alstroemeria313#1694: wait
alstroemeria313#1694: yeah idk if specnorm is good
alstroemeria313#1694: it forces gradient wrt the input to be ~1?
alstroemeria313#1694: and i need to learn different regions of gradient magnitude?
Paul van Purp#5488: yes right, don't use it at input and output blocks
alstroemeria313#1694: ...wait
alstroemeria313#1694: exponential noise schedule spends more time at low noise levels right?
alstroemeria313#1694: or is it high noise levels.
Paul van Purp#5488: compared to a linear schedule (x + t * sigma_max * eps) yes, but not compared to a schedule that would be equivalent to lerping the datapoints with noise
alstroemeria313#1694: oh
alstroemeria313#1694: ugh i should just figure out the learned noise schedules from Variational Diffusion Models shouldn't i?
cfoster0#4356: Why learned?
cfoster0#4356: You can just use the equation schedule from Appendix F if you want
AI_WAIFU#2844: or just use more generation steps
alstroemeria313#1694: my log snr plot during generation https://cdn.discordapp.com/attachments/729741769738158194/888837594845179974/Screen_Shot_2021-09-18_at_10.23.07_AM.png
alstroemeria313#1694: (This is plotted backwards)
cfoster0#4356: Hmm this is kind of the outside of what you want, right?
alstroemeria313#1694: from the paper https://cdn.discordapp.com/attachments/729741769738158194/888838608730087424/Screen_Shot_2021-09-18_at_10.27.14_AM.png
alstroemeria313#1694: my ideal schedule looked more like this https://cdn.discordapp.com/attachments/729741769738158194/888838720793489518/Screen_Shot_2021-09-18_at_10.27.38_AM.png
cfoster0#4356: Why do you want it spending most of its steps in the high SNR regime?
alstroemeria313#1694: well, 'ideal' in what i would expect if the model were perfect
alstroemeria313#1694: given how i train it rn and how i sample rn
cfoster0#4356: Ah. I'm saying the distribution of noise scales you're training on is probably not what you want to be training on
alstroemeria313#1694: ahh
cfoster0#4356: https://cdn.discordapp.com/attachments/729741769738158194/888839404322426900/Screenshot_20210918-132852_Adobe_Acrobat.jpg
EricHallahan#1051: Schedules seem like a dark art lol
alstroemeria313#1694: yeah that's why this paper learned one
cfoster0#4356: In the continuous time limit only the start and end SNRs matter
cfoster0#4356: But we aren't working in those lol
Paul van Purp#5488: but you get a very different model from training with a different schedule
Awesome_Ruler_007#7922: small NNs that find the optimal learning rate over each step of the big model by analyzing all the output (gradients, optimizer state etc.) and returning a loss value?
Awesome_Ruler_007#7922: or maybe one that windows over 10 big-model steps and decides how to change the LR?
Awesome_Ruler_007#7922: ~~don't tell me schmidhuber did this before~~
Awesome_Ruler_007#7922: hmmm...not like deciding a scheduler (from the top paper that came up) but dynamically deciding LR value for any `n` steps and predicting future changes. Time series over LR values?
The question is whether we even need such a complicated mechanism for LR
flowpoint#7450: no, because you probably need a tb of ram for fast search
luckily, i currently have an index of the pile (elasticsearch)
took > 2 mins to search:
don't expect me to run multiple though, it bogs down my workstation https://cdn.discordapp.com/attachments/729741769738158194/888844176249200720/result.txt
flowpoint#7450: @SkepticalSim
Awesome_Ruler_007#7922: Juicy stuff https://www.reddit.com/r/MachineLearning/comments/ppy7k4/n_inside_deepminds_secret_plot_to_break_away_from/
EricHallahan#1051: ♻️
https://discord.com/channels/729741769192767510/730095596861521970/886394124817801226
Awesome_Ruler_007#7922: damn reddit's slow lol
spirit-from-germany#1488: I just discovered this new movie, pretty interesting
https://youtu.be/ARxpqS9DQyA
EricHallahan#1051: I think this is better suited to #off-topic.
choltz95#4641: You may want to check an og paper by Warmuth. I forget the title, but the name of the algorithm is like The Forward Algorithm or something. Same idea of hallucinating future losses in the context of online learning/online convex optimization. Anyway, the key words are optimism + online learning + delayed feedback.
choltz95#4641: okay, here is something recent for you from Orabona that seems more relevant to neural nets: https://proceedings.mlr.press/v139/flaspohler21a.html
choltz95#4641: also this is a bit less dense: https://arxiv.org/abs/1907.08610. btw, sounds interesting! keep me posted if you try something out.
alstroemeria313#1694: so like. i'm trying to learn an energy based model now again
alstroemeria313#1694: With a different architecture.
alstroemeria313#1694: Now it is a U-Net and has a high dimensional output.
alstroemeria313#1694: I use -1/2 the squared sum of the U-Net output as the thing to make the final unnormalized log density.
alstroemeria313#1694: So it can't go higher than 0
alstroemeria313#1694: It seems to be able to learn the right gradient range easily!
alstroemeria313#1694: Of course. Since it is a residual U-Net
alstroemeria313#1694: If the net is simply the identity function
alstroemeria313#1694: Then it is an EBM for a N(0, I) multivariate normal.
alstroemeria313#1694: The residual blocks simply modify this base distribution
alstroemeria313#1694: Since my score function I'm trying to learn resembles N(mode, I) when close to a mode
alstroemeria313#1694: This seems like a good parameterization?
alstroemeria313#1694: i.e. It is the same arch as my feedforward score matching network
alstroemeria313#1694: Just now constrained so the vector field you get from it is conservative
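A sketch of that parameterization (`unet` standing in for the residual U-Net):
```python
def log_density(unet, x):
    # -1/2 * sum(unet(x)**2) is always <= 0, and if unet is the identity
    # it is exactly the N(0, I) log density up to the normalizing constant
    out = unet(x)
    return -0.5 * out.pow(2).flatten(1).sum(dim=1)
```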
alstroemeria313#1694: um, hm.
alstroemeria313#1694: Am I actually forcing it to learn to output the same log density for every maximum
alstroemeria313#1694: Maybe not actually
alstroemeria313#1694: Since just bc the model has a nonzero output doesn't mean squared L2 of the output has to have a nonzero gradient wrt the input.
alstroemeria313#1694: training seems kinda unstable
AI_WAIFU#2844: Sounds pretty similar to what I did when I tried to get EBMs to work.
alstroemeria313#1694: oh. did they work.
AI_WAIFU#2844: I got it working nicely for MNIST, but had some difficulties scaling up.
alstroemeria313#1694: oh
alstroemeria313#1694: I'm doing CIFAR-10
AI_WAIFU#2844: I think you can get that to work, my issues really started to show up at high resolutions
alstroemeria313#1694: ahh
AI_WAIFU#2844: I was using MCMC sampling and you get a whole bunch of numerical issues when you have close to a million variables
alstroemeria313#1694: ...Can you add a score matching loss to the output
alstroemeria313#1694: Like the high-dimensional output of the U-Net.
alstroemeria313#1694: oh, only a million
alstroemeria313#1694: sad
AI_WAIFU#2844: maybe?
AI_WAIFU#2844: Like diffusive formulations of EBMs seem to be the most likely to work IMO
alstroemeria313#1694: ah
AI_WAIFU#2844: I also investigated a whole host of other models, like parameterizing observables as the result of a long MCMC chain
AI_WAIFU#2844: or using reversible models to make sampling in the EBM space easier
|
alstroemeria313#1694: ahh
alstroemeria313#1694: my current one is about this far along lol https://cdn.discordapp.com/attachments/821173872111517696/888932787674038282/demo_00007.png
alstroemeria313#1694: The previous two broke and I lowered the lr each time
alstroemeria313#1694: This one hasn't broken yet
AI_WAIFU#2844: Yeah they're a bitch to get working
Louis#0144: HMC?
Louis#0144: Or just naive MCMC
AI_WAIFU#2844: Not just any HMC, HMC with a 6th order symplectic integrator
Louis#0144: Ouch
Louis#0144: lol
AI_WAIFU#2844: I still got fucked by numerics tho
AI_WAIFU#2844: In high dimensional spaces the MCMC integrator will just sort of "lock up" and then you're screwed.
nshepperd#2316: I tried doing HMC on the openai diffusion model. it didn't work, of course, probably because you can't do metropolis-hastings corrections without the actual unnormalized log density
ethan caballero#6044: https://twitter.com/OwainEvans_UK/status/1439511922668457987
Awesome_Ruler_007#7922: that looks very close - well, it wasn't schmidhuber at least
Awesome_Ruler_007#7922: I was proposing something much more complicated - that to optimize a NN, I was planning to use another NN, unlike `k` steps above which just uses traditional optimizers on a copy of so-called "fast weights"
Awesome_Ruler_007#7922: in a crude, raw and basic sense the optimizer NN would simply regress over LR (with a memory vector for previous outputs) and recommend the LR for a timestep and maybe some more predicted ones. technically that's not actually optimizing since we would be using Adam or SGD behind the scenes - just a *glorified* LR scheduler
completely taking out SGD and putting in an NN sounds pretty tough (if not done already) but I can't see why it's impossible.
The core idea would be to optimize the "optimizer NN" by transfer learning for convnets or any other family (maybe MLP for MNIST 🤔) and see whether it can basically replicate current optimizers, but generalize to tricky loss curves
Awesome_Ruler_007#7922: ~~If that works, it would be like the thanos meme 🤣 "I used the Neural Network to optimize the Neural Network"~~
Louis#0144: Does ai2 really do anything for safety
Louis#0144: I wasn't aware
Louis#0144: I thought their main focus was common sense and NLU?
cfoster0#4356: They do a ton of research, it's kinda hard to say whether it's useful for safety or not
Louis#0144: Another question is Ethan wtf are u doing up at 5am
gabriel_syme#3220: Early riser
puzzlerme#1409: is the .imagine command in #the-faraday-cage-archive disabled?
Louis#0144: Ye
puzzlerme#1409: oh
puzzlerme#1409: why?
puzzlerme#1409: and will it be permanently disabled?
Louis#0144: @EricHallahan we should add this to the FAQ
EricHallahan#1051: I have no idea what is going on in #the-faraday-cage-archive TBH
cfoster0#4356: GPUs are tight right now
Kia#2550: The last GPU running is from BoneAmputee and they're busy with their work
Kia#2550: We should probably run Isaac again
Kia#2550: On a TPU I suppose :goose10:
puzzlerme#1409: oh ok
BoneAmputee#8363: waiting to hear back about new compute
BoneAmputee#8363: got a message like 2 weeks ago but I didn't respond til like a week ago. I hope they get back on discord soon :berk:
xloem#0717: hey I have the 30 days of TRC. On the faq it says to message sid black but i'm not sure who that is, which @Sid are they?
AI_WAIFU#2844: That would be @Sid
xloem#0717: just to report back, my understanding is that eleutherai has more tpus than they are using, at this time
StellaAthena#3530: Do you have something you'd like run, or do you have compute you're not sure what to do with? Kinda hard to tell what you're looking for here
xloem#0717: I have possible access to compute and wanted to make sure I let anyone in need know as soon as possible. But learning you guys have excess, I am also interested in sharing thoughts around how to provide to others, and good projects.
cfoster0#4356: Is this TRC compute or something else?
xloem#0717: It's just the TRC 30 days. After reading their intro it's basically 100 preemptible tpu v2's and they charge for the cpu time unfortunately
gollark#3909: You can use the TPU VM thing to avoid that, apparently.
cfoster0#4356: Ah. Yeah I think they give out TRC access like candy :berk:
xloem#0717: it seemed so. TPU VM thing?
gollark#3909: The TPUs come with their own very powerful computers attached which you can now use.
xloem#0717: Would it be within the TRC agreements to make a service to provide educational model training experiences to the public?
StellaAthena#3530: Ask your lawyer. Or read the ToS. Or ask TRC. But don't take legal advice from randos on the internet.
xloem#0717: sounds like eleuther might take a risk like that given some clear and well-backed advice
EstebanSir#2189: Are you guys still training GPT-NeoX? how is it going?
EstebanSir#2189: also are you guys planning to train it with Alibi? (or are currently doing so?)
EstebanSir#2189: many questions, but i just like to check in from time to time
EstebanSir#2189: (feel free to ping me)
StellaAthena#3530: @EstebanSir We've trained NeoX models up to I believe 1B but due to the GPU shortage we haven't been able to scale as quickly as we hoped
EstebanSir#2189: oh that's a shame, i thought you guys trained through TRC's TPUs?
StellaAthena#3530: GPT-NeoX is our GPU codebase, GPT-J is the TPU codebase
StellaAthena#3530: We are (slowly) working towards a larger GPT-J model trained on TPUs
EstebanSir#2189: Ah, alright, I didn't know the naming related to that.
EstebanSir#2189: the TPU models are difficult to include into HF, aren't they? I hope i can use the Adapter-Transformer fork of HF with the new models (if they ever get included into HF that is!)
EstebanSir#2189: thanks for the info
StellaAthena#3530: They are in HF!
StellaAthena#3530: @EstebanSir https://huggingface.co/EleutherAI/gpt-j-6B
EstebanSir#2189: woah what!!
EstebanSir#2189: that's amazing! im glad you guys finally got that sorted
EricHallahan#1051: :guilty:
gabriel_syme#3220: Wait the PR passed, it's there now?
gabriel_syme#3220: Mad
EricHallahan#1051: ~~Always~~ has been.
kurumuz#5695: been a while yeah
gabriel_syme#3220: Cool!
gabriel_syme#3220: Does that mean we can load smaller Js btw?
gabriel_syme#3220: With the same class
kurumuz#5695: yes
gabriel_syme#3220: Amazing, going to try instantly when I'm home with the smaller Js
gabriel_syme#3220: Or js
alstroemeria313#1694: hm https://cdn.discordapp.com/attachments/729741769738158194/889395484584706058/out.mp4
ethan caballero#6044: Is this supposed to be mixture of gaussians?
alstroemeria313#1694: it is a denoising diffusion model being trained
alstroemeria313#1694: the dots are the samples generated every 100 optimizer steps
alstroemeria313#1694: (using the same random starting points each time)
gabriel_syme#3220: that poor sample trying to get in
alstroemeria313#1694: if it managed to learn the dataset totally they would just end up in a square 11x11 grid
alstroemeria313#1694: btw this is my current CIFAR-10 diffusion outputs demo grid https://cdn.discordapp.com/attachments/821173872111517696/889396604891701258/demo_00110.png
alstroemeria313#1694: I may have cracked it!
nshepperd#2316: wow!
alstroemeria313#1694: the samples get visibly better/more realistic now when i scale the model up
alstroemeria313#1694: rather than just learning to generate the same incoherent/scrambled scenes with less artifacts
alstroemeria313#1694: this model is 66M params
nshepperd#2316: that's awesome :)
gabriel_syme#3220: Excellent!
gabriel_syme#3220: Now ~~I~~we just need some jax code :)
nshepperd#2316: i can probably port it
nshepperd#2316: maybe this model will be more amenable to TPUs than OAI's crazy thing even :hap:
gabriel_syme#3220: that would be awesome, if you do I'll give it a try
gabriel_syme#3220: I am helpless in porting it
alstroemeria313#1694: log MSE loss vs step https://cdn.discordapp.com/attachments/729741769738158194/889455582606680064/Screen_Shot_2021-09-20_at_3.18.47_AM.png
alstroemeria313#1694: Final CIFAR-10 demo grid, after 290 epochs (i.e. the step count in this plot) https://cdn.discordapp.com/attachments/821173872111517696/889455662306840606/demo_00290.png
CRG#8707: How does it look vs log(step)?
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/889457685416771594/Screen_Shot_2021-09-20_at_3.27.13_AM.png
nshepperd#2316: it looks like it's memorized the data with those two near-identical car images in the middle
alstroemeria313#1694: yeah
alstroemeria313#1694: i saw
nshepperd#2316: well 66M parameters will do that so i guess it's still good, this means it's working
alstroemeria313#1694: time to get a bigger dataset :)
nshepperd#2316: yep!
alstroemeria313#1694: i uh, need to make a repo for some of this code
alstroemeria313#1694: as it is starting to turn out to be actually good
alstroemeria313#1694: instead of "Kat plays around with simple score matching networks for fun"
alstroemeria313#1694: oh, i found, like OpenAI did, that you NEED EMA on the diffusion model
alstroemeria313#1694: For decent samples
alstroemeria313#1694: Apparently training it on Gaussian noise that strong tends to make its params jump around and drift
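The usual EMA update, as a sketch (the decay value is illustrative):
```python
import torch

@torch.no_grad()
def ema_update(model, averaged_model, decay=0.999):
    # keep a slow moving average of the weights; sample from the
    # averaged copy rather than the raw, noisily-trained one
    for p, avg_p in zip(model.parameters(), averaged_model.parameters()):
        avg_p.lerp_(p, 1 - decay)
```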
alstroemeria313#1694: what do people know about OpenAI's custom cosine noise schedule? is there an easy continuous approximation to it like there is for the normal DDPM linear one?
alstroemeria313#1694: (DDPM linear is approximated by `torch.exp(-1e-4 - 10 * t**2)`)
alstroemeria313#1694: (That gives you the alphas)
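For comparison, the cosine schedule from Nichol & Dhariwal's "Improved DDPM" paper already has a closed continuous form; a sketch of both, with `t` a tensor of timesteps in [0, 1]:
```python
import math
import torch

def alphas_ddpm_linear(t):
    # the continuous approximation to the DDPM linear schedule quoted above
    return torch.exp(-1e-4 - 10 * t ** 2)

def alphas_cosine(t, s=0.008):
    # alpha_bar(t) = f(t) / f(0), f(u) = cos(((u + s) / (1 + s)) * pi / 2)^2
    f = lambda u: torch.cos((u + s) / (1 + s) * math.pi / 2) ** 2
    return f(t) / f(torch.tensor(0.0))
```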
alstroemeria313#1694: I wonder if I can do learned variances or the VDM learned noise schedules now
alstroemeria313#1694: ...IDK how learned variances work ^^;;
alstroemeria313#1694: and the way i am defining a u-net right now is... not recursive
alstroemeria313#1694: i.e. i just write the whole thing by hand with increasing indentation each stage
alstroemeria313#1694: i guess one of the advantages of learnable Fourier Features is that it doesn't matter as much if you get the std right?
alstroemeria313#1694: Uh, can diffusion model training benefit from higher Adam beta_1
alstroemeria313#1694: Bc it is so noisy
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/889473779699441674/demo_00055-3.png
alstroemeria313#1694: class conditioning works too
eo#8848: perhaps a dumb question but is there any order-of-magnitude estimate / rule of thumb on how many bits of information an n-parameter model can memorise 'in practice' (as opposed to 'num bits per parameter times n')?
alstroemeria313#1694: oh, can you just not do learned variances if you are training w/ continuous timesteps?
alstroemeria313#1694: because your beta and beta tilde will be the same
alstroemeria313#1694: they become the same in the limit of the number of steps going to infinity
nshepperd#2316: ooh nice
nshepperd#2316: in the limit of infinite steps, i think both beta and beta tilde are 1
alstroemeria313#1694: the class is turned into a 4 dim embedding and the resulting four uniform channels are concatted to the input
nshepperd#2316: bc like the betas are the amount of noise *between* a step and the next step
alstroemeria313#1694: i wonder if like
alstroemeria313#1694: I could define t=0 as no noise and t=1 as all noise, and
|
alstroemeria313#1694: oh that's bad
alstroemeria313#1694: I can't predict eps if there's no noise :)
alstroemeria313#1694: Or if there's arbitrarily close to no noise
alstroemeria313#1694: But I was going to say, make a mapping between the continuous version of the DDPM schedule and this parameterization
nshepperd#2316: actually i think it would be interesting to reformulate diffusion as gaussian processes
alstroemeria313#1694: And feed that parameterization in instead
nshepperd#2316: bc that's basically what continuous diffusion is
alstroemeria313#1694: oh wait is this just the thing from Variational Diffusion Models except bad
nshepperd#2316: like the noise added to the image is a gaussian process
alstroemeria313#1694: oh right
alstroemeria313#1694: VDM uses the log SNR formulation bc it doesn't go to all noise or no noise ever
alstroemeria313#1694: I guess you could feed log SNR in with Fourier Features.
nshepperd#2316: and ddpm and ddim amount to particular covariance functions.. probably
cfoster0#4356: Yeah that's what I'll probably do going forward. Just feels clean
alstroemeria313#1694: i think eta=0 DDIM is a non-Markovian noising process where the *same* noise is added over and over again?
alstroemeria313#1694: The forward process that is?
nshepperd#2316: unfortunately they don't write the forward process down in the ddim paper
nshepperd#2316: but they say that it's gaussian
alstroemeria313#1694: yeah, non-Markovian
nshepperd#2316: like, the forward process is P(x_t | x_t-1, x0) = N(ยต,ฯ^2)
|
nshepperd#2316: for some parameters
alstroemeria313#1694: the noise is allowed to depend on previous timesteps' noise
alstroemeria313#1694: and with eta=0 it is just the same noise over and over, right?
alstroemeria313#1694: log snr for the 1000 timestep DDPM schedule https://cdn.discordapp.com/attachments/729741769738158194/889484670985703444/Screen_Shot_2021-09-20_at_5.14.15_AM.png
alstroemeria313#1694: so I need to deal with values in the range -10 to 10
nshepperd#2316: hm i'm not sure. probably
alstroemeria313#1694: and this is what makes the reverse process deterministic?
nshepperd#2316: since the reverse process is a delta function
alstroemeria313#1694: Bc you, by definition at the start of the reverse process, know the noise.
alstroemeria313#1694: gonna play with log SNR timestep conditioning
alstroemeria313#1694: what fourier features std should i use?
alstroemeria313#1694: for range -10 to 10
alstroemeria313#1694: I like the idea of being able to use any schedule I want with a trained model easily.
alstroemeria313#1694: Without having to map the one I want to use onto the one it was trained with.
alstroemeria313#1694: log SNR seems neutral
nshepperd#2316: oh yeah, the forward process is basically jump to the final step x_T with random noise, then generate all the intermediate steps with linear combinations of x_0 and x_T
nshepperd#2316: so it's just increasing amounts of the same noise
alstroemeria313#1694: ok gonna try this now :)
alstroemeria313#1694: idk the loss curve seems worse
alstroemeria313#1694: wait if i used a 0-1 range before with Fourier Features std=4.
|
alstroemeria313#1694: What should I use now.
alstroemeria313#1694: With -10 to 10.
nshepperd#2316: then it should be std=0.2? maybe?
alstroemeria313#1694: mm~
nshepperd#2316: whatever would be equivalent to linearly scaling down from -10,10 to 0,1
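For reference, a sketch of a Gaussian Fourier features module of the sort being discussed (random frequencies drawn once at init, then cos/sin of the projected input); the scaling intuition is that std should shrink in proportion to how much wider the input range gets:
```python
import math
import torch
from torch import nn

class FourierFeatures(nn.Module):
    def __init__(self, in_features, out_features, std=1.):
        super().__init__()
        # Fixed random frequencies, scaled by std
        self.register_buffer('weight',
                             torch.randn([out_features // 2, in_features]) * std)

    def forward(self, x):
        f = 2 * math.pi * x @ self.weight.T
        return torch.cat([f.cos(), f.sin()], dim=-1)

# std=4 suited inputs in [0, 1]; [-10, 10] is 20x wider, so roughly std=0.2
```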
alstroemeria313#1694: ugh i don't have to feed this through a mapping network first do i
alstroemeria313#1694: Like you'd think it would work better?
alstroemeria313#1694: but i don't think it does.
cfoster0#4356: Post Fourier features maybe
alstroemeria313#1694: yeah
alstroemeria313#1694: but like. maybe the problem is that the log snr value changes *very* quickly in the early timesteps
alstroemeria313#1694: so it rarely sees those values.
nshepperd#2316: how would the logit of the noise level look
cfoster0#4356: I think that's fine. Most of the work we care about is done in the low SNR regime anyways
cfoster0#4356: Low to medium
alstroemeria313#1694: let me do an ablation really quick
alstroemeria313#1694: Yeah but.
alstroemeria313#1694: It's critical for the model to learn to scale eps exactly right.
alstroemeria313#1694: If it is unable to learn to do this well the results will be much worse.
alstroemeria313#1694: doing an ablation rn to make sure it wasn't some other change
|
cfoster0#4356: I'd bet that with sufficient steps it'll still learn to put in the high frequency details properly
nshepperd#2316: oh, that basically is logits of the noise level, nvm
cfoster0#4356: Oh I thought you moved to the version where it's predicting N(0, I) eps and it's scaled outside the network
alstroemeria313#1694: I did
alstroemeria313#1694: I mean it has to take a low noise input image and blow the noise up so it's scaled like N(0, I).
nshepperd#2316: maybe... just hardcode the transformation from snr back to timesteps into the network. in front of the fourier bit
cfoster0#4356: Ah word
alstroemeria313#1694: nm i think it was actually some other change that is making my loss values worse.
cfoster0#4356: I wonder if that's part of the reason using the conditioning for normalization layers works well
alstroemeria313#1694: Because I just changed it to feed in 0-1 again and my FF std back to 4 and it's still worse lol
alstroemeria313#1694: Uh, it was probably because I scaled the model back down from 66M to 11M params to check if it worked faster ^^;;
alstroemeria313#1694: And I forgot which was the original 11M timestep conditioned + eps pred run to compare against.
nshepperd#2316: eheh
alstroemeria313#1694: Gonna just put it back to 66M so I don't have to find it in my giant pile of log folders ^^;;
alstroemeria313#1694: But yeah this will be cool if it works, can just try any schedule on any pretrained model.
alstroemeria313#1694: The code is schedule agnostic at this point, you just need to get a timestep->log snr mapping for training (so you can map from uniform to the distribution of log snr you actually want) and then evaluate some possibly different timestep->log snr mapping for sampling at whatever points you want.
nshepperd#2316: :)
alstroemeria313#1694: I just don't think it'll work outside of the start and end points it was trained with?
alstroemeria313#1694: Doesn't the VDM paper learn those lol
alstroemeria313#1694: In particular feeding in a log snr to this above 10 will make it go "wat do?" and probably break
|
cfoster0#4356: Yeah lol. Probably best to clamp it
alstroemeria313#1694: ugh loss is worse still?
alstroemeria313#1694: or did i compare against
alstroemeria313#1694: wait
alstroemeria313#1694: lol
alstroemeria313#1694: I compared against an MNIST run too
alstroemeria313#1694: Which has lower loss.
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/889499546386841600/Screen_Shot_2021-09-20_at_6.13.36_AM.png
alstroemeria313#1694: THIS is MSE plotted vs the overnight run with 0-1 range timestep conditioning.
alstroemeria313#1694: Looks fine!
alstroemeria313#1694: I found which dataframe it was because it was the longest
alstroemeria313#1694: lol
nshepperd#2316: ahah
alstroemeria313#1694: so yeah i'll just use this going forward, it's nicer
alstroemeria313#1694: It shows the same unusually sharp (compared to my previous worse diffusion models) dropoff in MSE from 1 to something lower than I managed to get w/o timestep conditioning.
cfoster0#4356: This plus the low discrepancy sampler is pretty neat
alstroemeria313#1694: the one from the VDM paper?
alstroemeria313#1694: I'm just using a normal low discrepancy sequence rn
cfoster0#4356: Yeah, just using the low discrepancy sequence to get more equally spaced out timesteps within your batches
alstroemeria313#1694: I am not using the VDM loss yet
|
alstroemeria313#1694: It's more complicated
alstroemeria313#1694: I'm just training w/ approx the same distribution of log SNR I'll use during sampling.
alstroemeria313#1694: And not weighting the loss.
cfoster0#4356: The only difference is weighting, yeah?
alstroemeria313#1694: @cfoster0 they also learn the two endpoints of the log snr ramp and they learn the interpolating function between them in such a way as to minimize the variance of something or another
alstroemeria313#1694: of their VLB loss, i think.
alstroemeria313#1694: i'll have EMA demo grids soon! so can compare them visually.
alstroemeria313#1694: like in 15-30 minutes maybe.
alstroemeria313#1694: ```python
n = reals.shape[0]
# rng.draw() gives one quasirandom offset shared by the whole batch; adding
# evenly spaced points and wrapping into [0, 1) stratifies the timesteps
t = (rng.draw().item() + torch.arange(n, device=device) / n) % 1
alphas, sigmas = get_alphas_sigmas(t)
log_snrs = get_log_snrs(alphas, sigmas)
```
alstroemeria313#1694: This produces the most even distribution of timesteps
alstroemeria313#1694: In a batch
alstroemeria313#1694: 'even' according to the fixed interpolating function that is.
cfoster0#4356: What's the mapping between alphas/sigmas and log SNRs?
alstroemeria313#1694: I could sample evenly in log snr space but then I'd have to reweight the loss
alstroemeria313#1694: ```python
def get_alphas_sigmas(t):
    # Continuous DDPM linear schedule: alpha_bar(t) = exp(-1e-4 - 10 * t**2)
    alphas_squared = torch.exp(-1e-4 - 10 * t**2)
    sigmas_squared = 1 - alphas_squared
    return alphas_squared**0.5, sigmas_squared**0.5

def get_log_snrs(alphas, sigmas):
    # log SNR = log(alpha**2 / sigma**2)
    return torch.log(alphas**2 / sigmas**2)
```
cfoster0#4356: Oh oh ok. Interesting
alstroemeria313#1694: And reweighting the loss would increase its variance and I'm not learning the interpolating function to decrease its variance rn.
alstroemeria313#1694: So not gonna do that yet.
alstroemeria313#1694: you can of course go from log snr to alpha/sigma, i just don't have the code for it rn
alstroemeria313#1694: bc am using a single fixed schedule.
alstroemeria313#1694: uh, how do you do that actually.
alstroemeria313#1694: huh
alstroemeria313#1694: > Instead of taking time t as input to the denoising model, we use γt = log[σt²/αt²], which we rescale to have approximately the same range as t of [0, 1] before using it to form "time" embeddings in the same way as Ho et al. [2020].
alstroemeria313#1694: oh ok
nshepperd#2316: literally alpha**2 = torch.sigmoid(log_snr) heh
alstroemeria313#1694: idk how they did it
|
alstroemeria313#1694: i just threw fourier features at it
alstroemeria313#1694: seems fine
alstroemeria313#1694: also i'm not scaling the log snr before feeding it in, i'm just picking appropriately scaled fourier features
alstroemeria313#1694: for the actual range
alstroemeria313#1694: :)
alstroemeria313#1694: yeah the samples still look good
alstroemeria313#1694: compared to the 0-1 run at the same point in training
alstroemeria313#1694: loss curve looks fine too https://cdn.discordapp.com/attachments/729741769738158194/889507562775519262/Screen_Shot_2021-09-20_at_6.45.23_AM.png
alstroemeria313#1694: ok so
alstroemeria313#1694: i can refactor the code to return a log snr schedule and then derive the alphas and sigmas from that.
alstroemeria313#1694: ```python
def get_log_snrs(t):
    # The same linear schedule, expressed directly as a log SNR ramp
    alphas_squared = torch.exp(-1e-4 - 10 * t**2)
    return torch.log(alphas_squared / (1 - alphas_squared))

def get_alphas_sigmas(log_snrs):
    # Invert: alpha**2 = sigmoid(log SNR), sigma**2 = 1 - alpha**2
    alphas_squared = log_snrs.sigmoid()
    return alphas_squared**0.5, (1 - alphas_squared)**0.5
```
|
alstroemeria313#1694: Now literally you can pick whatever log snr ramp you want.
alstroemeria313#1694: you just have to make sure it has ~the same endpoints as the one used in training.
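On the sampling side, the usage would be something like this sketch, built on the functions above; the step count and clamp range are illustrative, and any monotonic ramp with roughly the same endpoints would do in place of the DDPM-derived one:
```python
import torch

steps = 250
t = torch.linspace(1, 0, steps + 1)[:-1]   # high noise -> low noise
log_snrs = get_log_snrs(t).clamp(-10, 10)  # stay inside the range seen in training
alphas, sigmas = get_alphas_sigmas(log_snrs)
```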
alstroemeria313#1694: so what's the continuous version of OAI's cosine noise schedule?
nshepperd#2316: https://cdn.discordapp.com/attachments/729741769738158194/889509704588476416/2021-09-20-235349_880x433_scrot.png
nshepperd#2316: looks already pretty continuous
alstroemeria313#1694: so this is linear in log snr (blue = alphas, orange = sigmas) https://cdn.discordapp.com/attachments/729741769738158194/889509804324827196/Screen_Shot_2021-09-20_at_6.54.13_AM.png
alstroemeria313#1694: ohh
nshepperd#2316: they use s = 0.008
alstroemeria313#1694: I think I have tried that schedule and it was kinda bad
alstroemeria313#1694: But it was before timestep conditioning
alstroemeria313#1694: and their alpha bars are already squared?
alstroemeria313#1694: what https://cdn.discordapp.com/attachments/729741769738158194/889510751277039656/Screen_Shot_2021-09-20_at_6.58.06_AM.png
alstroemeria313#1694: That should just go down to -10 right.
alstroemeria313#1694: Or thereabouts.
alstroemeria313#1694: That thing at the end is only not a -inf because something before it didn't produce an exact zero.
nshepperd#2316: yes. alpha bar is the thing that you sqrt then multiply by the image and the noise. with the sqrt(alpha bar) x + sqrt(1 - alpha bar) e
alstroemeria313#1694: ty :blobcutehappy:
nshepperd#2316: yeah i think you maybe need to add some epsilon to cos(stuff)^2 to make the cosine schedule top out at the same level as the linear one
nshepperd#2316: they did it by clipping beta, i think, but that's kind of bad
alstroemeria313#1694: i don't have explicit betas even
|
nshepperd#2316: yeah
alstroemeria313#1694: you need to add like 4.5398e-05
alstroemeria313#1694: this is sigmoid(-10)
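A sketch of the continuous cosine schedule as a log SNR ramp, with a clamp standing in for their beta clipping; s=0.008 is from the paper, and the 4.5398e-05 floor is sigmoid(-10) so the ramp tops out near +10 and bottoms out near -10 like the linear one:
```python
import math
import torch

def get_log_snrs_cosine(t, s=0.008, floor=4.5398e-05):
    # alpha_bar(t) = f(t) / f(0), with f(t) = cos((t + s) / (1 + s) * pi / 2)**2
    alphas_squared = torch.cos((t + s) / (1 + s) * math.pi / 2) ** 2
    alphas_squared = alphas_squared / math.cos(s / (1 + s) * math.pi / 2) ** 2
    # Clamp so log SNR stays in roughly [-10, 10]
    alphas_squared = alphas_squared.clamp(floor, 1 - floor)
    return torch.log(alphas_squared / (1 - alphas_squared))
```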
๐ฌ gabriel_syme ๐ฌ#3220: this is a perfect explanation of why I don't understand a single thing about these models, well written
alstroemeria313#1694: also you can't actually form the alpha bars in the same way
alstroemeria313#1694: eheh ^^;;
EricHallahan#1051: I am completely lost in these conversations. :berk:
alstroemeria313#1694: ok so this is the fixed log snr schedule for the cosine noise schedule https://cdn.discordapp.com/attachments/729741769738158194/889513551318900746/Screen_Shot_2021-09-20_at_7.09.01_AM.png
nshepperd#2316: looks nice and balanced
alstroemeria313#1694: so blue is alphas and orange is sigmas. https://cdn.discordapp.com/attachments/729741769738158194/889513815379693588/Screen_Shot_2021-09-20_at_7.10.12_AM.png
alstroemeria313#1694: oh, their low discrepancy thing is producing visibly *worse* samples
alstroemeria313#1694: They are lower diversity.
alstroemeria313#1694: Gonna not do that then ^^;
cfoster0#4356: Just using it during training or sampling too?
alstroemeria313#1694: training. you don't use random timesteps when sampling
alstroemeria313#1694: I think you can use like torch.quasirandom.SobolEngine for the timesteps and it's fine
alstroemeria313#1694: But doing `t = (rng.draw().item() + torch.arange(n, device=device) / n) % 1` is bad.
cfoster0#4356: Ah
nshepperd#2316: oh, maybe the fourier features pick up on the fact that they are evenly spaced
alstroemeria313#1694: The Sobol sequence produces visibly better between-batch loss variance.
|
alstroemeria313#1694: Like the loss std is reduced by ~60%
alstroemeria313#1694: compared to torch.rand()
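A sketch of the Sobol variant, assuming a 1-d scrambled engine; the helper name is illustrative:
```python
import torch

sobol = torch.quasirandom.SobolEngine(1, scramble=True)

def draw_timesteps(n, device):
    # Low-discrepancy points cover [0, 1) more evenly than torch.rand(n),
    # without the exact even spacing of the in-batch stratified scheme
    return sobol.draw(n)[:, 0].to(device)
```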
nshepperd#2316: like the fourier features with a period close to a multiple of n will be super high variance
alstroemeria313#1694: ahh
alstroemeria313#1694: ...they weren't actually
alstroemeria313#1694: Bc the uniform thing was mapped onto log snr.
alstroemeria313#1694: But I guess there is a long nearly straight part of the schedule?
nshepperd#2316: oh. hm yeah there is sorta
alstroemeria313#1694: ok so like.
alstroemeria313#1694: eventually i am going to want to have the derivatives of the schedules too
alstroemeria313#1694: mb can just get them with autograd
alstroemeria313#1694: instead of writing them by hand
alstroemeria313#1694: especially if the schedules have learnable parameters.
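A sketch of getting the schedule derivative by autograd, using the get_log_snrs above; since the schedule is elementwise in t, summing before the grad call recovers the per-element d(log SNR)/dt:
```python
import torch

def log_snrs_and_derivs(t):
    t = t.detach().requires_grad_()
    log_snrs = get_log_snrs(t)
    derivs, = torch.autograd.grad(log_snrs.sum(), t)
    return log_snrs.detach(), derivs
```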
alstroemeria313#1694: ...Why does the VDM loss weight the eps higher when the schedule has a high magnitude derivative.
alstroemeria313#1694: If it has a high derivative the model is going to spend less time in that area during sampling?
alstroemeria313#1694: So why weight higher something you will use less?
nshepperd#2316: is that just a bad high variance way of training on a linear noise schedule...
alstroemeria313#1694: hm
alstroemeria313#1694: So they pick t uniformly from 0-1.
alstroemeria313#1694: And feed it into a thing that makes a log snr.
|
alstroemeria313#1694: Then they weight by the derivative of negative log snr.
alstroemeria313#1694: it is, actually
alstroemeria313#1694: The trick is that they learn the t -> log snr mapping to reduce the variance.
alstroemeria313#1694: > Here, w(v) is a weighting function that generally puts increased emphasis on the noisier data compared to the VLB, and which thereby can sometimes improve perceptual generation quality as measured by certain metrics like FID and Inception Score.
alstroemeria313#1694: OAI didn't do this afaik
alstroemeria313#1694: But I could try it
alstroemeria313#1694: This just means you have a log snr -> weight mapping.
alstroemeria313#1694: They use this *in addition* to weighting by the derivative of negative log snr
cfoster0#4356: I can't remember where it was, but there's an implied weighting function when you use the simplified MSE loss instead of the VLB
alstroemeria313#1694: But in the paper they only use the constant 1 case.
alstroemeria313#1694: that makes sense
alstroemeria313#1694: @cfoster0 it's appendix F
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/889520868621963294/Screen_Shot_2021-09-20_at_7.38.19_AM.png
nshepperd#2316: how does importance sampling work
nshepperd#2316: you want to have more samples with the timesteps with the noisier gradients, right? so that they are weighted down
bmk#1476: importance sampling is basically MC except you like adjust for the fact that you're sampling according to some other distribution right
Louis#0144: You define a prior that you think closely approximates the domain of interest of a sampling function
Louis#0144: And then use that for Monte Carlo integration
Louis#0144: I recommend a good book on Monte carlo methods
Louis#0144: It's rly helpful
|
bmk#1476: alternatively MC is importance sampling with a flat prior
alstroemeria313#1694: oh you can just do ```python
def get_log_snrs(t):
    # -log(exp(1e-4 + 10 * t**2) - 1) equals log(alpha**2 / (1 - alpha**2))
    # for alpha**2 = exp(-1e-4 - 10 * t**2), with better numerics
    return -torch.special.expm1(1e-4 + 10 * t**2).log()
```
alstroemeria313#1694: And that gets you the linear schedule without going to alphas and back
alstroemeria313#1694: so yeah. the dropoff at high log snr here for DDPM corresponds to the derivative of its log snr schedule being high there.
alstroemeria313#1694: This. https://cdn.discordapp.com/attachments/729741769738158194/889522697317519370/Screen_Shot_2021-09-20_at_7.45.33_AM.png
alstroemeria313#1694: log snr on x axis, relative loss weight compared to the VDM loss on y axis.
alstroemeria313#1694: This is it plotted with t on the x axis. https://cdn.discordapp.com/attachments/729741769738158194/889523235526422568/Screen_Shot_2021-09-20_at_7.47.34_AM.png
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/889523812406808676/Screen_Shot_2021-09-20_at_7.50.01_AM.png
alstroemeria313#1694: On this plot, blue is DDPM linear and orange is the OpenAI cosine noise schedule.
nshepperd#2316: huhhh
alstroemeria313#1694: (The cosine schedule was not included in the VDM paper, so I had to replicate the plot for linear and then do it for cosine)
alstroemeria313#1694: Wait it's just IDDPM isn't it
alstroemeria313#1694: Oh, that is Improving etc. from OpenAI. I didn't recognize the acronym ^^;;
IGg#7871: hi!! does anyone know an AI to solve probability problems?
alstroemeria313#1694: oh, what kind?
IGg#7871: I don't know, some general one where, from text and describing the problem, statistics can be obtained and probabilities measured
alstroemeria313#1694: trying a linear in log snr noise schedule now
|
alstroemeria313#1694: it's probably bad idk
alstroemeria313#1694: i'm not sure we're that far along yet, we (well, OpenAI) have things that can write (or at least try to write) code from natural language descriptions and that's just from... last month?
alstroemeria313#1694: anyway i think the reason why they do their weird high variance thing is so they can learn a compromise between minimizing its variance and minimizing the MSE
alstroemeria313#1694: and thus learn a noise schedule that is optimal in some sense
alstroemeria313#1694: hm how to parameterize it
alstroemeria313#1694: linear in log SNR noise schedule MNIST https://cdn.discordapp.com/attachments/729741769738158194/889533854187126795/demo_00070-2.png
alstroemeria313#1694: MSE loss was way higher for this schedule
alstroemeria313#1694: And I think it trained slower
alstroemeria313#1694: DDPM schedule https://cdn.discordapp.com/attachments/729741769738158194/889538326695129108/demo_00070-3.png
alstroemeria313#1694: Same everything else
alstroemeria313#1694: idk mnist is super easy
nshepperd#2316: well those samples all look fine so
gollark#3909: It sounds like you want it to do maths homework or something?
alstroemeria313#1694: training a way too small 256x256 diffusion model on wikiart for the lulz
(This is not gonna work)
(I just want to see how, really)
alstroemeria313#1694: i know right
alstroemeria313#1694: I need to clean this codebase up
alstroemeria313#1694: And like, make the U-Nets not defined by hand w/ increasing indentation the more levels deep you are
alstroemeria313#1694: And make it use DeepSpeed
|
alstroemeria313#1694: I say 'codebase' but it is like 30 self-contained Python files, one per experiment, with slightly different versions of things
alstroemeria313#1694: And I just keep track of which the current best one is and paste it into the current best for like, upscaling or colorization or whatever
nshepperd#2316: eheh
alstroemeria313#1694: lol https://cdn.discordapp.com/attachments/729741769738158194/889567820860493864/demo_00002.png
alstroemeria313#1694: lol i'm cancelling this and will rerun it again at 128x128
nshepperd#2316: green and black are all you need
nshepperd#2316: i fixed my transformer RL train script to not reuse samples for training the model and critic after i realized that was important for GANs. so far it seems like this is just making it get worse more efficiently
nshepperd#2316: i must be doing something important really wrong...
alstroemeria313#1694: :/
alstroemeria313#1694: Hey can I train, like, a resolution independent diffusion model on random sized crops of WikiArt or something. So it learns how to make WikiArt textures of whatever resolution.
alstroemeria313#1694: And then guide it with CLIP.
alstroemeria313#1694: It would have VQGAN type global coherence issues
alstroemeria313#1694: But it might have nicer textures
alstroemeria313#1694: idk
alstroemeria313#1694: I haven't tried guiding the generation of my diffusion models yet
alstroemeria313#1694: I only use DDIM, so
alstroemeria313#1694: I would have to look at condition_score() from the OAI code.
nshepperd#2316: maybe you could use that wikiart texture model to guide another diffusion model
nshepperd#2316: it's just another log prob gradient, so :)
alstroemeria313#1694: Ahah.
|
alstroemeria313#1694: You mean like.
alstroemeria313#1694: We can just do two diffusion processes at once.
alstroemeria313#1694: Er, I mean with two models.
nshepperd#2316: yeah, just like add their log probs together
nshepperd#2316: with some weighting
alstroemeria313#1694: Like literally we combine the preds and take a DDIM step?
alstroemeria313#1694: With the combined eps
nshepperd#2316: yeah
alstroemeria313#1694: (Since I'm not getting explicit scores anymore)
alstroemeria313#1694: (It's just kind of like score matching?)
nshepperd#2316: that's how conditioning works with ddim, you just add the appropriately scaled gradient from your classifier to the eps
alstroemeria313#1694: ah :)
nshepperd#2316: in Diffusion Models Beat GANs... it is explained by reference to score matching
alstroemeria313#1694: we can recover a score right
alstroemeria313#1694: Like it's pred - fakes?
alstroemeria313#1694: We just can't gradient ascend it directly bc we have to stick the fakes back on the thing defined by reals * alpha**0.5 + noise * (1 - alpha)**0.5.
nshepperd#2316: https://cdn.discordapp.com/attachments/729741769738158194/889573415747473529/2021-09-21-040645_1445x257_scrot.png
nshepperd#2316: https://cdn.discordapp.com/attachments/729741769738158194/889573420394762311/2021-09-21-040659_1439x489_scrot.png
alstroemeria313#1694: it's eps over sigma
alstroemeria313#1694: er
|
alstroemeria313#1694: -eps / sigma
alstroemeria313#1694: Which is just pred - fakes isn't it
nshepperd#2316: yeah
StellaAthena#3530: Slides from my talk at the Big Science workshop (basically EleutherAI 101) can be found here: https://docs.google.com/presentation/d/1QYniyfz5EoBD4_S7g9OC6YpRm9_v8Bt45E2FL_x_Fwc/edit?usp=sharing
nshepperd#2316: i think it's not exactly pred - fakes because of the scaling of x0
nshepperd#2316: pred/alpha - fakes?
alstroemeria313#1694: oh
alstroemeria313#1694: oh right
alstroemeria313#1694: anyway -eps/sigma is simpler to think about
nshepperd#2316: anyway, if you're running two diffusion models, they're probably at the same timestep anyway, so you *can* just add the eps together
alstroemeria313#1694: *nods*
alstroemeria313#1694: especially with my log snr conditioned models
nshepperd#2316: or do some linear combination to scale the weighting of texture vs content
alstroemeria313#1694: Where we can just make them both use the same schedule.
alstroemeria313#1694: Or do stuff like interpolate between the schedules they were trained with.
nshepperd#2316: yep!
alstroemeria313#1694: so you correct your eps by eps_new = eps - sigma * grad
alstroemeria313#1694: That's it?
alstroemeria313#1694: Ahh.
alstroemeria313#1694: Yeah if the timestep is the same the minus signs and the sigmas cancel
|
alstroemeria313#1694: And you just add the two eps
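Putting the eps correction together, a sketch; the guide's log density function and the scale factor are illustrative names, not from the actual code:
```python
import torch

def guided_eps(model_eps, x, sigma, log_prob_fn, scale=1.):
    # eps_new = eps - sigma * grad: correct the predicted noise with the
    # gradient of a log density wrt the noised input
    with torch.enable_grad():
        x = x.detach().requires_grad_()
        grad, = torch.autograd.grad(log_prob_fn(x).sum(), x)
    return model_eps - sigma * grad * scale
```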
nshepperd#2316: yeah
alstroemeria313#1694: ...wait
alstroemeria313#1694: Won't that result in an eps that's scaled wrong
alstroemeria313#1694: Uhh
alstroemeria313#1694: i.e. the conditioning would be way too strong?
alstroemeria313#1694: Yeah the gradients you get from diffusion models are going to be way too strong
nshepperd#2316: yeah if you just add them with no weighting it'll probably be way too strong
nshepperd#2316: probably 0.5 a + 0.5 b would be ok
nshepperd#2316: or scaling the texture model down to the scales that we normally use with clip conditioning
nshepperd#2316: which is a grad of like 0.01
alstroemeria313#1694: *nods*
alstroemeria313#1694: Yeah blending them just blends the scores.
alstroemeria313#1694: i.e. it is blending the two log probs, not adding them
alstroemeria313#1694: Because adding them results in way too sharp a distribution which doesn't work.
alstroemeria313#1694: ...ok next question.
nshepperd#2316: sampling from the distribution of images which would have been generated by chance by both models, or something. it would probably give you aaaaa stuff
alstroemeria313#1694: Can we blend an eps and a CLIP gradient or MSE gradient.
alstroemeria313#1694: Instead of adding.
alstroemeria313#1694: To get around our gradient scale issues.
|
alstroemeria313#1694: Um. So with inpainting.
alstroemeria313#1694: We know what the score should be at each timestep.
alstroemeria313#1694: For the part outside the mask.
alstroemeria313#1694: Can we *replace* the part of the score outside the mask with a -1/2 sum of squared differences gradient.
alstroemeria313#1694: Like that defines a reverse process that's guaranteed to end up at the pixel values you want for the areas outside the mask.
nshepperd#2316: is that the same as masking out the pred?
alstroemeria313#1694: i do not think so
alstroemeria313#1694: ...I have to think about it a bit more
alstroemeria313#1694: `pred = (fakes - eps * sigmas[i]) / alphas[i]`
alstroemeria313#1694: um, idk
alstroemeria313#1694: oh, i'm pretty sure now that the reason we need range loss for my diffusion fine-tune is that it wasn't EMAed enough
alstroemeria313#1694: unfortunately with a fine-tune the best way to do it is to fine-tune for a bit *then* kick in the EMA and fine-tune a lot longer
alstroemeria313#1694: i think
alstroemeria313#1694: maybe
nshepperd#2316: target * alphas[i] - fakes is almost the -0.5*mse(fakes,target) gradient. it makes the pred exactly equal to target
nshepperd#2316: ahh
alstroemeria313#1694: *nods*
alstroemeria313#1694: I am finding DDIM easier to deal with than DDPM
alstroemeria313#1694: And we can just do DDPM anyway by setting eta=1, so
alstroemeria313#1694: I will prob have just the one code path
|
nshepperd#2316: yeah
alstroemeria313#1694: (I wonder if we could condition the models on alpha w/ Fourier Features instead of log snr)
alstroemeria313#1694: oh wait
alstroemeria313#1694: yeah that might be not great
alstroemeria313#1694: Not enough discrimination between slightly different super low noise levels, where it matters most to have the explicit conditioning
alstroemeria313#1694: Yeah log snr is prob best
alstroemeria313#1694: Basically feeding 0-1 in only worked well for me bc Fourier Features
TobiasDeWillem#2309: Hi all. Sorry if my questions annoys but I suppose it is the right place to ask it. Is there any project to release a bigger version of GPT-J at the moment? I am asking as a NovelAI player and I would like to thank you for the hard work but I am also curious regarding the future. I know you are working on GPT-NeoX but I would like to know if you are working on smth intermediate.
EricHallahan#1051: This best sums it up:
https://discord.com/channels/729741769192767510/851918317039255592/883653739288866836
bmk#1476: as a policy we don't provide estimates or roadmaps or whatever for when we're releasing models
StellaAthena#3530: > Say the thing Bert
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/889588452176306257/image0.png
StellaAthena#3530: ๐ฅณ ๐ฅณ ๐ฅณ ๐ฅณ ๐ฅณ
EricHallahan#1051: ~~BERT is terrible at text generation.~~
StellaAthena#3530: Yeah I was wondering how I should work that into the meme
alstroemeria313#1694: @nshepperd what about conditioning on log(sigma)
alstroemeria313#1694: uh, oh wait
alstroemeria313#1694: That drops information about what alpha is doing near t=1
nshepperd#2316: log snr is probably best tbh
|
alstroemeria313#1694: yeah
nshepperd#2316: that's equivalent to logit(alpha) and preserves most information from both ends
alstroemeria313#1694: the only problem is like if i trained a model where the log snr went down to -10 and then tried to use a schedule where it started lower
alstroemeria313#1694: it would be OOD and break
alstroemeria313#1694: so in practice we need to clamp it according to what a model was trained on or smth
nshepperd#2316: yeah
nshepperd#2316: that's probably not too bad though
TobiasDeWillem#2309: Thanks for your reply, I understand ๐
alstroemeria313#1694: it is logit(alpha**2)
EricHallahan#1051: Well there is that #multimodal channel sitting there...
EricHallahan#1051: ... I wonder why that exists.
Awesome_Ruler_007#7922: ahh lol my bad
nshepperd#2316: oh yeah
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/889591299727065098/demo_00012-2.png
nshepperd#2316: with log snr=10 the noise level is like 1.7 out of 256 pixel value levels. so there probably isn't much value in going above 10
nshepperd#2316: ooh
alstroemeria313#1694: I think this is why DDPM did things that way originally
nshepperd#2316: is that wikiart?
alstroemeria313#1694: yep!
alstroemeria313#1694: 250 steps DDIM, eta=0
|
nshepperd#2316: looks pretty already :)
๐ฌ gabriel_syme ๐ฌ#3220: is this a decision transformer?
alstroemeria313#1694: @nshepperd https://cdn.discordapp.com/attachments/729741769738158194/889612578274312292/demo_00045-2.png
alstroemeria313#1694: each one is blended with the class of the next number
alstroemeria313#1694: like two model forwards then averaging the resulting eps
alstroemeria313#1694: gonna try guided diffusion now
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/889615092851834940/demo_00030-2.png
alstroemeria313#1694: guided diffusion!
alstroemeria313#1694: I trained an unconditional MNIST diffusion model and a noisy MNIST classifier.
alstroemeria313#1694: Classifier scale is 1
๐ฌ gabriel_syme ๐ฌ#3220: nice!
bernaise#6161: is there anyone on who might be able to answer a postdoc/research question in DMs?
StellaAthena#3530: Is there a reason you can't just ask it here? In any event, you'll need to share at least the topic of the question if you want to get meaningful replies.
bernaise#6161: oh i didn't want to bother people with things that weren't relevant
bernaise#6161: just curious, when applying for postdocs, are you expected to come in with funding? are you supposed to have grants in your name before you get a job somewhere?
StellaAthena#3530: It depends on the country
bernaise#6161: USA?
StellaAthena#3530: No
bernaise#6161: OK, thank you @StellaAthena !
StellaAthena#3530: Good luck with your applications!
|
StellaAthena#3530: If you do obtain independent funding, I hope you'll consider applying to EleutherAI. Unfortunately we cannot fund or pay post docs at this time.
StellaAthena#3530: ๐
bernaise#6161: even if i'm a meganoob?
StellaAthena#3530: 12 months ago I didn't know what a transformer was
EricHallahan#1051: Same here.
๐ฌ gabriel_syme ๐ฌ#3220: 12 months later, I barely know. You can still do cool things
EricHallahan#1051: Obligatory https://cdn.discordapp.com/attachments/733347369847881838/743244626521227375/Dp3Y1d1X4AElzyq.png
EricHallahan#1051: Also this.
StellaAthena#3530: Okay there's like three things we understand
StellaAthena#3530: Data augmentation
StellaAthena#3530: Why transformers need positional encodings
StellaAthena#3530: And I'll come up with a third in a bit
StellaAthena#3530: Fourier features!
alstroemeria313#1694: Hey is there like... some way to 'distill' a diffusion model into a thing that can generate in a single step
someKindaBean#8471: i bet if there is, it probably limits the model into generating a smaller subclass of styles/objects/features
Deleted User#0000: diffusion model that just generates stuff related the the game Myst
Desperate Noob#6277: All you do is cobble together code and let the computer do complex ai stuff and boom, you are an ai researcher(if you have access to a lot of compute)
cfoster0#4356: I haven't seen anything like that published yet
Kharr#7888: Nothing published but it is possible. You just have to reframe the process as a depthwise NN and train it accordingly.
Kharr#7888: You also don't have to come with funding in Canada. You typically apply for funding with your supervisor.
|
fengoku#9000: I was at the gathertown a bit today with percy and stuff... What exactly was that for? My friend just sent me the link randomly so I joined lol
EricHallahan#1051: The second edition of BigScience Workshop:
https://www.youtube.com/watch?v=Rb1mrLbwpyc
๐ฌ gabriel_syme ๐ฌ#3220: hmm interesting
๐ฌ gabriel_syme ๐ฌ#3220: is there a reason why a finetuned model would stop doing continuations?
๐ฌ gabriel_syme ๐ฌ#3220: my finetuned J doesn't 'mutate' anymore, that is when I feed it a part of a generated output in order to make a new layout, it just spits out that partial input with no change
nshepperd#2316: eheh~ that works!
nshepperd#2316: it gets a clip embedding as input and autoregressively generates vqgan tokens. so sort of
random person#5234: so I guess I wanted to ask this before. if my experience/knowledge area is in Computer Vision but I've been dabbling in the NLP space (not so much practical experience with BERT/Transformers/etc), what would be some beginner tasks I can look at? I've been going through the reading list someone posted here before but I feel like there's still a pretty big knowledge gap before I feel "qualified"
random person#5234: like, is the expected knowledge level of contributing members published authors at ACL etc., or are there a lot of tasks that are more done by 'junior' members?
EricHallahan#1051: For perspective, I'm undergrad.
EricHallahan#1051: I don't have any connection to ML/DL/AI at all to my studies, I am not in any published papers, *et cetera*.
StellaAthena#3530: (Yet! We're writing two papers together right now!)
StellaAthena#3530: Welcome! We absolutely have more beginner friendly stuff going on. Are you looking for something more related to CV, or to sink you teeth into NLP?
random person#5234: NLP!
random person#5234: I am trying to round myself out a bit more lol
random person#5234: Also lol i wish i was this active in ug.
StellaAthena#3530: Do you have a GPU? If so, I would say that the best way to get started is to download GPT-NeoX and get a small model running
random person#5234: Yep, i got enough vram to tune bert
StellaAthena#3530: How much is that?
|
random person#5234: 24gb
StellaAthena#3530: Awesome
random person#5234: K sounds good. I probably start with the 6B one right
StellaAthena#3530: I would start with 125M lol
Kharr#7888: It's great how the first thing everyone jumps to is "the biggest model available"
random person#5234: I mean...
random person#5234: Ok sounds good
StellaAthena#3530: This is the one whose config file is called `small.yml`
random person#5234: Got it
StellaAthena#3530: I would also check out this paper: https://arxiv.org/abs/2001.08361
Once you have the model successfully training (depends on your GPU, but full training should take a couple days to a week) I would try replicating Figure 4 (though skipping the largest model)
random person#5234: Sounds good!
random person#5234: Are transformers usually trained on FP16?
StellaAthena#3530: Assuming you have an RTX 3080 that shouldn't be too time intensive
random person#5234: Any concerns with using TF32/FP16 etc?
StellaAthena#3530: Though I will double check my math before giving you a demo task that takes a month
StellaAthena#3530: That's a phenomenal question.
random person#5234: I am fairly sure TF32 should be fine.
StellaAthena#3530: It will work fine, it's just slow
StellaAthena#3530: Top of the line models train for months
|
random person#5234: So drop to half precision?
StellaAthena#3530: And take up hundreds of GBs of space
StellaAthena#3530: Doubling the size and the amount of work each addition takes is a serious burden
random person#5234: I see. Ok.
StellaAthena#3530: The newest GPUs actually have a new innovation called BF16 which helps provide high precision with a smaller memory footprint
random person#5234: Yea I am familiar with that
StellaAthena#3530: NeoX should use mixed precision by default
random person#5234: Thats only on A100 though
StellaAthena#3530: Yeah
random person#5234: Consumer Ampere do not have BF16
StellaAthena#3530: So, what most models do right now is a mix of FP16 and FP32
StellaAthena#3530: You do model calculations in FP16 and optimizer calculations in FP32
StellaAthena#3530: The optimizer needs higher precision because we often use tiny learning rates
StellaAthena#3530: While the model can save a significant amount of space and time for minimal performance loss
StellaAthena#3530: There's a good explainer on how all this works here: https://link.medium.com/DKa3S7KQIjb
choltz95#4641: congrats on your talk today!
StellaAthena#3530: Thank you! Did you watch it?
choltz95#4641: u know it the zombie slide cracks me up lol
random person#5234: Thats a really good explanation. I am familiar with how to implement mixed precision training to prevent underflow on FMA.
random person#5234: IIRC the default value is x8 for loss multiplication?
|
StellaAthena#3530: IDR
StellaAthena#3530: The codebase handles it automatically
random person#5234: Yea pytorch has a great automatic implementation
random person#5234: Anyways thanks for the help. Will go through the links and readme/tips on the code base tmr
bmk#1476: loss scaling is automatic
bmk#1476: scaling starts at some big value and repeatedly gets cut in half every time loss is nan
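In PyTorch that looks roughly like the following sketch; the loop, `opt`, and `compute_loss` are placeholders, not from any particular codebase:
```python
import torch

scaler = torch.cuda.amp.GradScaler()  # dynamic loss scaling

for reals in dataloader:
    opt.zero_grad()
    with torch.cuda.amp.autocast():
        loss = compute_loss(model, reals)
    scaler.scale(loss).backward()
    scaler.step(opt)   # skips the optimizer step if grads overflowed
    scaler.update()    # cuts the scale on overflow, grows it periodically
```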
๐ฌ gabriel_syme ๐ฌ#3220: turns out, it was a trailing space I was adding to the prompt. The model was not generating anything after it, weird
alstroemeria313#1694: hey anyone tried PDMA over the model weights?
alstroemeria313#1694: this http://proceedings.mlr.press/v28/shamir13.pdf
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/889737985828982864/Screen_Shot_2021-09-20_at_10.01.03_PM.png
alstroemeria313#1694: am thinking of throwing it at diffusion model weights
alstroemeria313#1694: bc I keep wanting like an EMA whose decay gets slower over time
alstroemeria313#1694: btw what's the most convenient way to save the step count in PyTorch for this (it has to be saved in checkpoints and restored)
SysD | 12many#3843: https://twitter.com/quocleix/status/1440024037896245248?s=09
nshepperd#2316: `torch.save({'model': model.state_dict(), 'counter': counter}, f'checkpoint/{RUN}/latest.pt')` is what i usually do
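And the matching restore, a sketch assuming the same `RUN`, `model`, and `counter` as in the save above:
```python
import torch

ckpt = torch.load(f'checkpoint/{RUN}/latest.pt', map_location='cpu')
model.load_state_dict(ckpt['model'])
counter = ckpt['counter']
```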
nev#4905: this was already discussed here wasn't it?
Kazumi#1297: nope, I'm still not getting any response from TRC
alstroemeria313#1694: so with EMA (after burn-in) the center of mass of the average is a fixed number of steps back
alstroemeria313#1694: with a simple average (eta=0) the center of mass is always halfway between the start and your current point (50%).
alstroemeria313#1694: with eta=1 the COM is always 2/3 of the way through the run
|
alstroemeria313#1694: with eta=3 it is always 4/5 of the way through
alstroemeria313#1694: and in general it is (eta + 1) / (eta + 2)
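A sketch of that update, assuming the recursion shown in the screenshot above with 1-based step t; iterate k gets weight roughly proportional to k**eta, which is what puts the center of mass at (eta + 1) / (eta + 2):
```python
import torch

@torch.no_grad()
def pdma_update(model, averaged_model, t, eta=1.):
    w = (eta + 1) / (t + eta)
    for p, p_avg in zip(model.parameters(), averaged_model.parameters()):
        p_avg.mul_(1 - w).add_(p, alpha=w)
```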
Louis#0144: @lucidrains
natedog#8669: not sure if this falls into "technical support" from the faq, but would this place be good for getting some advice on specing out a dl workstation?
Louis#0144: #off-topic
Louis#0144: But just get a 3090
Louis#0144: lol
Louis#0144: They're inexpensive in the long run
toymaker#8480: Yeah, make all your money back crypto mining loll
๐ฌ gabriel_syme ๐ฌ#3220: Sigh
Kia#2550: God...
toymaker#8480: Too cringe?
Kia#2550: Let's just not talk about Crypto in #general
Orz#3023: Is there any way to teach language models some stuff without it forgetting what it already learnt?
(both in terms of format in which it responds and in terms of data)
Orz#3023: A specific example would be training gpt-j on some textbook for it to just remember the text and use it when it needs
Kia#2550: https://arxiv.org/abs/2109.04504
Kia#2550: That?
|
Kia#2550: I think they don't have the code yet
Ravna#1831: I think in practice, just fine-tune gpt-j on the textbook. The forgetting won't be that devastating, and all those fancy forgetting-preventing new papers are suspicious at best and usually just scams.
Orz#3023: Maybe it won't forget the text
But it certainly does forget the generalized input format
And it's so useful to just provide a few examples and zero shot the model
Orz#3023: https://github.com/EleutherAI/project-menu/issues/10#issuecomment-912129027
Orz#3023: also this^
Awesome_Ruler_007#7922: Opinion on a question I had about scaled-up multi-modal models?
https://discord.com/channels/729741769192767510/795089627089862656/889591866356543490
Awesome_Ruler_007#7922: https://tenor.com/view/i-guess-ill-fuck-off-now-goodbye-harry-potter-daniel-radcliffe-waving-gif-11597845
Louis#0144: Bye
Louis#0144: Sorry lol
Louis#0144: Iโll read what u wrote
Louis#0144: One sec
Louis#0144: Ah
Louis#0144: Thatโs out of scope of current multimodal work
Orz#3023: of course it had to be Harry potter
Awesome_Ruler_007#7922: but...we can't get to AGI unless it has those capabilities? and so far I don't see any solutions for that?
Louis#0144: You don't see any solutions for that because it's a hard problem
|
Louis#0144: And it isn't in the near future
alstroemeria313#1694: Wait
alstroemeria313#1694: How is the score -eps / sigma.
alstroemeria313#1694: It should be -eps * sigma.
alstroemeria313#1694: The lower the noise level is the lower magnitude the score should be.
Awesome_Ruler_007#7922: yes, but then what's the tentative idea for actually implementing this kida stuff?
Louis#0144: Hell if I know
Louis#0144: Lmao
Awesome_Ruler_007#7922: or is it all *"All the ills of Giant DL models can be cured by more models"*?
Louis#0144: Probably
Louis#0144: Throw all the modalities together
Louis#0144: Let the model figure out what to do with it
Louis#0144: lol
Awesome_Ruler_007#7922: scaling large models doesn't really sound like a great idea tbh.... ๐ค
Awesome_Ruler_007#7922: anways, we will see what happens
Awesome_Ruler_007#7922: till then ig it can get at least a decade of funding for more avenues to build up on and emerge
Ravna#1831: What are you even talking about? Deepmind told us to just define the input and output and put the agent into the environment and reward is all you need.
Ravna#1831: What modalities?
gollark#3909: Presumably if you had a really good model for audio/vision/whatever you could just transfer-learn it (or part of it, for efficiency) onto whatever subtask you want.
Ravna#1831: Yes but that's just optimization details.
|
Ravna#1831: He's trying to talk about AGI which doesn't exist yet.
Ravna#1831: Optimization details don't matter that much for something that doesn't exist.
Deleted User#0000: how is vqgan so addicting
Ravna#1831: because it's not powerful enough yet
Ravna#1831: it's like playing with a sub-human chess AI in the 80s
Ravna#1831: wow look it moves the queen mindlessly again how cute
Awesome_Ruler_007#7922: its not optimization, its a pretty core question about a flaw in acheiving AGI with scaled up models
Ravna#1831: No, in principle there are just inputs and outputs and reward
Ravna#1831: all the rest are at best emergent properties within the agent's inner mind
Ravna#1831: the neural network could in principle figure those all out by itself
Awesome_Ruler_007#7922: that's a big oversimplification - and not much to show for it
Awesome_Ruler_007#7922: this attitude works for smarter-than-average-ML-technique intelligence, but for actual AGI, LMs seem far away from the secret sauce
Awesome_Ruler_007#7922: or maybe we are much closer than we seem to be, who knows.
IMHO we don't
Ravna#1831: your argument can also be used to argue against the possibility of GPT-3 back in 2015
Ravna#1831: "this attitude works for smarter-than-average-ML-technique intelligence, but definitely not enough for something like GPT-3"
Ravna#1831: same thing
Awesome_Ruler_007#7922: GPT3 is far away from AGI
Awesome_Ruler_007#7922: I don't think we are on the same page here
Awesome_Ruler_007#7922: its just a stochastic parrot, but it *is* impressive enough to let me hold out a hope that scaling and a few other tricks might just do the job
|
alstroemeria313#1694: wonder if i should make an mnist diffusion colab
alstroemeria313#1694: i mean one that trains a small model
cfoster0#4356: If we had a synthetic intelligence built and sitting in front of us, we might have a better chance of assessing claims like "in order to approach GI you need Y", but until then, I don't see the use of speculating "the model is just X", especially when there are predictable and non-saturated increases in capability by pursing "just X"
alstroemeria313#1694: for like, people to take and mess with
alstroemeria313#1694: before i organize the thing into a coherent codebase on github
alstroemeria313#1694: which may take a bit bc i am still actively researching improvements
alstroemeria313#1694: colab is slow and people are addicted to immediate results on it, so
cfoster0#4356: I think it's worthwhile. Should help folks really grok diffusion for themselves
Awesome_Ruler_007#7922: sure, but there is no guarantee those increases would keep coming. that's also speculation in the end.
I am not against scaling, personally think it holds at least a chance, but I'd rather give weight to other neuroscientifically aligned theories to reach AGI - just my opinion.
Awesome_Ruler_007#7922: my above question was just conjecturing that even if we were able to scale multi-modal models arbitrarily, there is no guarantee they could achieve the tasks specified above
cfoster0#4356: Keep coming back to this: what we *want* to be the foundation of future GI may have very little to do with what *will* be that foundation. I would love it if copying the way the brain works is the shortest path to GI. But the track record of that method is fairly weak, and has no guarantees *either*
cfoster0#4356: (this is why I keep an eye out for Numenta, heck even root for them, even though I haven't seen anything from them that works)
cfoster0#4356: tl;dr the Bitter Lesson is indeed bitter
Awesome_Ruler_007#7922: > But the track record of that method is fairly weak
science gives increasingly exponential returns over time, which is similar to what follows in your scaling returns.... ๐คทโโ๏ธ just saying
Awesome_Ruler_007#7922: who knows, maybe neuroscientists will finally find the key to full simulation, stripping out the parts required for the life-maintenance of the cell - then it's just scaling all the way...
|
we can only dream ๐คฉ
Sphinx#2092: We don't have to dream. We can do amazing things right now.
Sphinx#2092: Even if this isn't the path to AGI, it's an interesting direction in and of itself and has already paid for itself.
Sphinx#2092: Which is more than can be said for neuroscience-based approaches.
Awesome_Ruler_007#7922: > even though I haven't seen anything from them that works
tbf, even DL had the same problem just a few years ago. The researchers knew they might be wrong but had no way to prove it - only now has it started bearing fruit (o7)
HTM does look promising indeed, I'd bet there will be some hybrid if there are ever neuroscientifically aligned models.
even now, Hawkins' ideas about reference frames and voting are already more-than-impressive
cfoster0#4356: A few years ago? Schmidhuber triggered the ANN revolution a decade ago, and CNNs were useful for like a decade or two before that
cfoster0#4356: I eagerly await the day numenta writes down the pseudocode so it can be implemented
cfoster0#4356: Until then I'm not holding my breath waiting to see if the ideas actually work
cfoster0#4356: Incidentally they've put out since good videos the past couple of days
Awesome_Ruler_007#7922: I was referring to the early experiments with MLPs to identify males and females from photos - the only problem they had was that, due to compute limitations, there was only a single layer
Awesome_Ruler_007#7922: ya, I heard they would do this massive shift by publishing all they learnt over multiple books, hire more people, funding etc. basically to scale up everything they are doing in an attempt to get it in practical realm
EricHallahan#1051: > the only problem they had was due to compute limitations only a single layer was there
MNIST: *Bonjour*
Awesome_Ruler_007#7922: classifying squiggly dots into numbers doesn't exactly make headlines - compared to a machine that can classify humans
EricHallahan#1051: Oh, I just thought it would be funny to joke about how MNIST can be effectively solved with a linear classifier.
|
alstroemeria313#1694: hm what papers should i reference in this notebook
alstroemeria313#1694: I know I need: the DDPM paper, the DDIM paper, and Variational Diffusion Models?
alstroemeria313#1694: It trains pretty fast btw
alstroemeria313#1694: I got a P100
alstroemeria313#1694: It's super low memory usage and should work on any Colab GPU
Awesome_Ruler_007#7922: my bad lol ๐
nev#4905: and tpu? ๐
alstroemeria313#1694: uh
alstroemeria313#1694: Good luck
nev#4905: yeah I'll try to be the change etc
alstroemeria313#1694: The last time I tried PyTorch/XLA I actually managed to crash the Colab runtime.
Awesome_Ruler_007#7922: EleutherAi ๐ TPUs
alstroemeria313#1694: Like the thing serving the notebook died
nev#4905: fake news
alstroemeria313#1694: Uh, you can try it
alstroemeria313#1694: eheh... https://cdn.discordapp.com/attachments/729741769738158194/889967327351816203/demo_00008.png
alstroemeria313#1694: uh, who came up with the "predict eps" objective
alstroemeria313#1694: I use that
alstroemeria313#1694: Oh that was the DDPM paper
nshepperd#2316: umm i don't know ^^;;
|
nshepperd#2316: it makes sense that when the noise is less the gradient of the normal distribution is steeper
nshepperd#2316: but then predicting pred-fakes can't give you the score? confusing
alstroemeria313#1694: MNIST diffusion Colab: https://colab.research.google.com/drive/1javQRTkALBWLFWnx1K4VpRZkWLP3ozhr
alstroemeria313#1694: @cfoster0
nshepperd#2316: mnist mixed with another model?
alstroemeria313#1694: nope, just SVHN
nshepperd#2316: oh
alstroemeria313#1694: Posting the Colab got me to add a lot of comments ^^;;
nshepperd#2316: eheh
alstroemeria313#1694: there's no way dividing by sigma is correct. what if you had some noise schedule where sigma went arbitrarily close to 0?
alstroemeria313#1694: You're supposed to have a zero gradient at a clean image, not an exploding one
alstroemeria313#1694: Why isn't the score just pred - fakes
nshepperd#2316: umm idk
alstroemeria313#1694: That's what you add to get from fakes to pred
alstroemeria313#1694: Or did the noising process formulation mess this up
alstroemeria313#1694: And it's not the same as in a pure score based model
nshepperd#2316: an exploding gradient might actually be correct, since the space of real images is lower dimensional than all images
alstroemeria313#1694: Uh
nshepperd#2316: like if you set sigma to 0. it's trying to learn a mixture of delta functions
alstroemeria313#1694: Wait wait
|
alstroemeria313#1694: So because *we are conditioning on timestep now*
alstroemeria313#1694: It's using differently scaled normal distributions on each timestep
nshepperd#2316: but you do have a zero gradient *at* a clean image, when eps is 0
alstroemeria313#1694: Instead of just learning a single distribution.
alstroemeria313#1694: but... doesn't eps actually blow up
alstroemeria313#1694: Or become super ill conditioned
alstroemeria313#1694: in any case. eps -= sigma * grad works to condition this
alstroemeria313#1694: (then you recalculate pred, if you already had one)
nshepperd#2316: well it is probably really hard to learn eps when the noise is super small
alstroemeria313#1694: In other words you scale grad *down* when sigma is small.
nshepperd#2316: yeah
alstroemeria313#1694: in other words eps_new / sigma = eps / sigma - grad
alstroemeria313#1694: ok i guess
alstroemeria313#1694: you just never *do* the division by sigma, you leave the score implicit
cfoster0#4356: Sorry, can you explain what eps_new, sigma, and grad are in this context? Got lost a while back
alstroemeria313#1694: grad is the gradient from the thing to condition on. sigma is the noise level, as in noised_reals = reals * alpha + noise * sigma.
alstroemeria313#1694: eps is the *predicted* noise from the model.
alstroemeria313#1694: eps_new is the eps you get after conditioning the model output on the gradient of a log density.
nshepperd#2316: oh, eps is unitless and sigma is in units of (image space), so it has to be -eps/sigma for the score. it can't be -eps*sigma
nshepperd#2316: hooray for dimensional analysis ^^;;
|
cfoster0#4356: Ah ok, you're combining an unconditional diffusion model with gradients wrt input from a classifier/guide (by way of AD)?
alstroemeria313#1694: yep
alstroemeria313#1694: I have done this with an MNIST unconditional diffusion model and an MNIST log snr conditioned classifier.
alstroemeria313#1694: anyone tried the notebook yet?
nshepperd#2316: i am in bed and will probably go back to sleep soon so i will have to try it later ^_^
alstroemeria313#1694: ^_^
cfoster0#4356: Yes!
alstroemeria313#1694: :)
cfoster0#4356: It's cruising along :hap:
cfoster0#4356: Great work
alstroemeria313#1694: this is the same setup i've been using for the good cifar-10 demo grids
alstroemeria313#1694: the model is just larger (more channels, another u-net stage for 4x4, six residual blocks per resolution instead of four)
alstroemeria313#1694: and i use lr 1e-4
Deleted User#0000: Is deepdream still popular or has vqgan and stuff like that sort of overtaken it
alstroemeria313#1694: trying this on tpu... the moment of truth
alstroemeria313#1694: it's been overtaken
alstroemeria313#1694: Uhhhh why so slow :/
alstroemeria313#1694: Is it literally compiling EVERY STEP again
nshepperd#2316: TPUs ๐
Deleted User#0000: also, is there any way to get skip_timestamps to go past 50 or 100? anymore and it says that colab ran out of memory
|
Deleted User#0000: sometimes it works but barely
chilli#5665: are you using torch/xla?
alstroemeria313#1694: yes
alstroemeria313#1694: Why does it never work.
chilli#5665: I assume you've seen this? https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md
alstroemeria313#1694: I am using the same tensor shapes each time :/
alstroemeria313#1694: It literally just never works for me
chilli#5665: lol
chilli#5665: 2 other possible reasons (I don't actually know, just guessing):
1. Does PyTorch/XLA recompile if you have different random tensors every time?
2. Does PyTorch/XLA recompile if you pass in different integer constants every time?
alstroemeria313#1694: idk
alstroemeria313#1694: Am I even allowed to do stuff like index tensors with something
alstroemeria313#1694: That's on the cpu?
chilli#5665: I'm not sure - they have the metrics report thing that probably will tell you why it's slow
alstroemeria313#1694: training is going at 1 it/s
alstroemeria313#1694: This is 8x faster than sampling
alstroemeria313#1694: I suspect because sampling has stuff that either makes it recompile or synchronize with CPU
alstroemeria313#1694: I am on Colab not a TPU VM, so
|
alstroemeria313#1694: Probably all the indexing
alstroemeria313#1694: Unfortunately I cannot just not index?
alstroemeria313#1694: uh, i have no idea what i'm even looking at in the metrics report, sorry
alstroemeria313#1694: sorry, it's JAX or nothing for TPUs
alstroemeria313#1694: ok, where can you get ffhq from
alstroemeria313#1694: like the thumbnails.
alstroemeria313#1694: `OSError: Google Drive download quota exceeded -- please try again later`
BoneAmputee#8363: http://batbot.tv/ai/datasets/ffhq/thumbnails128x128.zip
alstroemeria313#1694: oh thank you!
EricHallahan#1051: Train on StyleGAN outputs. :ultrazucc:
alstroemeria313#1694: :grimberk:
EricHallahan#1051: Honestly the dumbest thing I can imagine lol
alstroemeria313#1694: it's called "transfer learning" isn't it
EricHallahan#1051: Totally off distribution.
inox#5400: that's DAGAN
AutoHintBot#4733: google colab (pro+) is giving me A100s at the moment (!), but they don't seem to be supported in the notebook runtime?
just reporting the A100 thing really, since I haven't seen one there before and didn't see any mentions in a quick search here. I only started using colab for art experiments a few weeks ago, vs local GPUs before, though
```
Using device: cuda
/usr/local/lib/python3.7/dist-packages/torch/cuda/__init__.py:106: UserWarning:
A100-SXM4-40GB with CUDA capability sm_80 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the A100-SXM4-40GB GPU with PyTorch, please check the instructions at
```
or in other stacks:
```
RuntimeError: CUDA error: no kernel image is available for execution on the device
```
alstroemeria313#1694: whaaa?
AutoHintBot#4733: that was my reaction!
alstroemeria313#1694: grab the latest pytorch from the pytorch.org instructions i guess
alstroemeria313#1694: I use it on A100s a lot (not on Colab)
alstroemeria313#1694: like
alstroemeria313#1694: ```!pip3 install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html```
alstroemeria313#1694: i guess? you have cuda >= 11.1 on it right?
AutoHintBot#4733: yeah, I've done that before with local resources--I'll figure it out if the environment supports it, just curious if this is a fluke or a rollout or what
AutoHintBot#4733: https://cdn.discordapp.com/attachments/729741769738158194/890002312293150750/Screen_Shot_2021-09-21_at_3.30.20_PM.png
AutoHintBot#4733: I have two sessions with A100 each right now too ๐ฎ
alstroemeria313#1694: yep do that !pip3 command
nshepperd#2316: wow
Kia#2550: What
Kia#2550: :mittwoch:
Kia#2550: Unbelievable, that's actually real
AutoHintBot#4733: that did the trick, thanks! if these actually become a thing I should move some of my local style transfer workflow to colab I guess
also, btw, thanks for being so open with your notebooks! most of my recent CLIP-related explorations end up using your notebook or a variation of it
I've been lurking here a bit; that A100 message was my first here. this is me: https://twitter.com/mwegner/media (if you scroll past the obviously-CLIP stuff you end up in my main body of output, which is an ifs fractal -> ancient jcjohnson style transfer based workflow)
Kia#2550: 50$/ for like hourly usage of an A100 is wild and insane
EricHallahan#1051: Welcome!
AutoHintBot#4733: looks like cheapest A100 VM on GCP is $3.68/hr on demand, $1.10 preemptible
AutoHintBot#4733: the funny thing about the subscription pricing is that it changes my mindset, even if I actually end up spending the same for art experiments on any given month
hourly billing kind of forces my brain into using huge VMs for stuff like "okay I'm paying to enlarge this thing because I want to print it", and not open-ended curiosity
inox#5400: that's colab pro+?
inox#5400: that's wild
EricHallahan#1051: TBH I feel like this is an elaborate hoax because it seems so unlikely. :berk:
Louis#0144: @AutoHintBot do u work for google
Louis#0144: Are u an advertisement
Louis#0144: Whereโs your referral link
AutoHintBot#4733: my GPU pulls even got worse when I went from pro to pro+ (when they announced), but seems like it was probably just a rush of people looking to play with the various CLIP things
in the last week or two I even had K80s sometimes. I have unpowered K80s in my garage ๐ฌ
AutoHintBot#4733: I am but a normal human! (programmer in and around the games industry). this is just my personal gmail, too. I don't even know if you can run colab on gsuite/workspaces
inox#5400: in academia I have never had access to an A100
inox#5400: gonna have to get colab pro+
AutoHintBot#4733: I have some friends in academic circles, and when I got more into neural/gpu type art I tried to shake some trees for free credits or hardware. I just assumed nvidia was making it rain in research type places, but I guess not ๐ฎ
AutoHintBot#4733: to be clear, this A100 is a fluke (and I already lost one of the two in trying to juggle the session to another notebook, although it rerolled as P100)
I've been hooking more notebooks into my own image/workflow backend. I should really collect nvidia-smi output stats with that, too, because I don't record my session gpus at all--they were pretty bad for awhile last week
Louis#0144: I thought they nerfed rerolling
AutoHintBot#4733: I've only seen it improve by spinning a second one up (usually I only have one notebook active, and not 24/7 or anything)
so it'll give me something weak, I launch another copy of the thing I want to run, and it lands on something better so I kill the first. I guess it would just immediately return if I launched a third, though, but my goal is just a single notebook
alstroemeria313#1694: 20 epochs (not enough) on 64x64 FFHQ https://cdn.discordapp.com/attachments/729741769738158194/890021140356022337/demo_00020-2.png
๐ฌ gabriel_syme ๐ฌ#3220: wait is this... is it the unconditional one?
alstroemeria313#1694: yes
๐ฌ gabriel_syme ๐ฌ#3220: cool it looks sharp!
๐ฌ gabriel_syme ๐ฌ#3220: they all look weird but it's expected 20-epoch weirdness
alstroemeria313#1694: yeah
๐ฌ gabriel_syme ๐ฌ#3220: we really need a jax repo of all this that works
๐ฌ gabriel_syme ๐ฌ#3220: I can help run stuff!
alstroemeria313#1694: :)
alstroemeria313#1694: This might be an easy "learn JAX" project for me
๐ฌ gabriel_syme ๐ฌ#3220: I definitely cannot help write it lol, maybe nshepperd's is close
alstroemeria313#1694: You saw my MNIST diffusion notebook right?
๐ฌ gabriel_syme ๐ฌ#3220: I...not sure
alstroemeria313#1694: https://colab.research.google.com/drive/1javQRTkALBWLFWnx1K4VpRZkWLP3ozhr
๐ฌ gabriel_syme ๐ฌ#3220: don't think so
EricHallahan#1051: ^
alstroemeria313#1694: These FFHQ samples are from a scaled up version of it.
๐ฌ gabriel_syme ๐ฌ#3220: like I don't see this as easy at all but it is for you probably ๐
alstroemeria313#1694: Literally it is the same thing just with more channels, more residual blocks, and more downsampling/upsampling U-Net stages
๐ฌ gabriel_syme ๐ฌ#3220: oh wait so we just need to translate this huh
alstroemeria313#1694: Yeah
๐ฌ gabriel_syme ๐ฌ#3220: alright then I may look around as well
๐ฌ gabriel_syme ๐ฌ#3220: mostly to find off-the-shelf stuff lol
๐ฌ gabriel_syme ๐ฌ#3220: that vision library is still raw huh
alstroemeria313#1694: which?
alstroemeria313#1694: JAX?
๐ฌ gabriel_syme ๐ฌ#3220: can we do transforms with pytorch? I think we can
๐ฌ gabriel_syme ๐ฌ#3220: yeah there is one but has very few transforms
alstroemeria313#1694: we don't need transforms
๐ฌ gabriel_syme ๐ฌ#3220: oh just normalize
alstroemeria313#1694: Yeah
๐ฌ gabriel_syme ๐ฌ#3220: ah okay ๐
alstroemeria313#1694: It's just scale from 0-1 to -1-1
alstroemeria313#1694: data augmentation is often not a great idea when doing diffusion bc it will learn to generate from the augmented distribution
๐ฌ gabriel_syme ๐ฌ#3220: hah
alstroemeria313#1694: Which I guess is fine if it's like, random translation
alstroemeria313#1694: By small amounts
๐ฌ gabriel_syme ๐ฌ#3220: did anyone try that
๐ฌ gabriel_syme ๐ฌ#3220: since we're learning noise anyways
๐ฌ gabriel_syme ๐ฌ#3220: diffusion is so odd really to me but it's also awesome
alstroemeria313#1694: i am doing a particular custom kind of augmentation for a diffusion upscaler btw
EricHallahan#1051: Just tell the model to tell you how to fix it after it is done generating. :berk:
alstroemeria313#1694: random corruption of the downsampled image that the diffusion process is conditioned on
๐ฌ gabriel_syme ๐ฌ#3220: yesss
๐ฌ gabriel_syme ๐ฌ#3220: I really liked @/cfoster0's idea of an experiment assistant ๐
alstroemeria313#1694: Like I have a corruption Markov chain
๐ฌ gabriel_syme ๐ฌ#3220: we just ask for ideas and tell it what to implement
EricHallahan#1051: (I mean this only half ironically.)
alstroemeria313#1694: That with p=0.65 picks a random corruption and applies it and otherwise stops.
alstroemeria313#1694: If it corrupted the image it has another 65% chance to corrupt again etc.
๐ฌ gabriel_syme ๐ฌ#3220: hah that's nice
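A minimal sketch of that corruption chain, assuming `corruptions` is a list of callables that each take and return an image (the names are illustrative):
```python
import random

def corrupt_chain(image, corruptions, p=0.65):
    # with probability p, apply a randomly chosen corruption and roll again;
    # otherwise stop, so the number of corruptions applied is geometric
    while random.random() < p:
        image = random.choice(corruptions)(image)
    return image
```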
Deleted User#0000: https://i.imgur.com/ifjjkHz.png :suffering:
cfoster0#4356: Oo nice
cfoster0#4356: I wonder how large of a dataset one could get if they detected and cropped faces from that big LAION dataset ๐ค
cfoster0#4356: To squeeze into the 1 epoch regime
EricHallahan#1051: I must admit when she said 20 epochs I was like "wait what? You can train for 20 epochs?"
EricHallahan#1051: I'm so used to the one-epoch regime now lol
alstroemeria313#1694: lol
nshepperd#2316: just train a class-conditional model where the class is whether it's augmented
EricHallahan#1051: Yeah I guess you could do that.
Jacheng#6048: Hi guys. We are a project experimenting with GPT3. If you have experience with, or are just interested in, AI and building conversational AI based on artificial backgrounds, let me know!
nshepperd#2316: goes without saying that i would love to help if you have questions ^^
alstroemeria313#1694: :blobcutehappy:
๐ฌ gabriel_syme ๐ฌ#3220: nshepperd ๐ค alstroemeria -> gabriel ๐โโ๏ธ ๐ -> kickass ๐จ
EricHallahan#1051: `flax`, `haiku`?
Louis#0144: What does this even mean
Louis#0144: Lmao
๐ฌ gabriel_syme ๐ฌ#3220: you never had to read those in primary school?
๐ฌ gabriel_syme ๐ฌ#3220: that one is pretty terrible though I guess, don't have much to work with in emojis
๐ฌ gabriel_syme ๐ฌ#3220: it just means that I can be the run monkey for the code the collaboration between alstro and nshepperd produces
๐ฌ gabriel_syme ๐ฌ#3220: and at the end a community of brilliant people will make art out of it
nshepperd#2316: i'll have to learn those ;;
EricHallahan#1051: Do you use raw JAX?
nshepperd#2316: yeah pretty much
nshepperd#2316: well, i have my own nn library
nshepperd#2316: sorta
Louis#0144: Based
AI_WAIFU#2844: good all the jax ones are kinda trash
nshepperd#2316: yeahhh
๐ฌ gabriel_syme ๐ฌ#3220: functorch right?
nshepperd#2316: https://github.com/nshepperd/jax-guided-diffusion/tree/master/jaxtorch
nshepperd#2316: i think functorch is someone else, haven't really looked at it
EricHallahan#1051: That's @chilli :berk:
๐ฌ gabriel_syme ๐ฌ#3220: oh ye my bad ๐
๐ฌ gabriel_syme ๐ฌ#3220: my mind played tricks, I meant jaxtorch
alstroemeria313#1694: 130 epochs https://cdn.discordapp.com/attachments/729741769738158194/890121698932826112/demo_00130.png
nshepperd#2316: ooh!
๐ฌ gabriel_syme ๐ฌ#3220: looks nice!
kurumuz#5695: how big is this model
alstroemeria313#1694: like 77M params
kurumuz#5695: oh wow
clay#9806: Very cool
Kia#2550: Generating faces with Diffusion?
Kia#2550: Impressive
alstroemeria313#1694: yep~ :blobcutehappy:
Kazumi#1297: was making your own custom dataset ever a consideration with huggingface, or is it just a hastily put together afterthought
clay#9806: Fascinating. Are any modifications needed for different datasets than MNIST?
alstroemeria313#1694: you just make it have three input channels
alstroemeria313#1694: for the image
alstroemeria313#1694: instead of one
alstroemeria313#1694: and probably add another u-net stage
|
clay#9806: ๐ฌ
alstroemeria313#1694: and increase base channel count if it's a complicated dataset
clay#9806: ah it's presently at one bc MNIST is grayscale?
alstroemeria313#1694: more residual blocks too (just copypaste)
alstroemeria313#1694: yes
clay#9806: cool. will adding another u-net stage increase the size of the model considerably?
alstroemeria313#1694: it does increase it
alstroemeria313#1694: the MNIST one was like 2.6M params
alstroemeria313#1694: the 64x64 FFHQ demo grid was from a 77M param model
alstroemeria313#1694: And I added two stages, not one
alstroemeria313#1694: To get it down to 4x4
alstroemeria313#1694: MNIST is super easy and doesn't require a big model compared to other datasets ^^;;
clay#9806: @alstroemeria313 is the noise calculation the same for 3 channels or does that need to be changed?
alstroemeria313#1694: it's the same
๐ฌ gabriel_syme ๐ฌ#3220: Hey, my latest architext post was quite successful. About 1500 inferences so far on the app, in a little less than 20 hours. Most designers are saying it's pretty cool, which is good news.
Kia#2550: That's amazing! :o
Kia#2550: Lovely work
clay#9806: would you care to share your CIFAR10 code? I'm having trouble grokking everything
alstroemeria313#1694: ok
clay#9806: awesomeness
alstroemeria313#1694: hold on, let me make a copy of the MNIST notebook and paste stuff in
clay#9806: i'll accept the risk of never being able to train it myself lol
alstroemeria313#1694: my experimental code is a *mess* and i cleaned it up for the notebook
clay#9806: no worries ๐ take your time
alstroemeria313#1694: CIFAR-10 diffusion https://colab.research.google.com/drive/1HubSGRVxDRCRYK-YEjs8nYYImSrlR0qf
clay#9806: danke schoen
alstroemeria313#1694: @clay i think you need to make the model bigger than this to get hq results with cifar-10
alstroemeria313#1694: but that's literally just increasing the base channel count and copypasting the ResConvBlocks
alstroemeria313#1694: @clay anyway yeah your noise is just N(0, I) for whatever image size and channel count
alstroemeria313#1694: and the scaling factors are the same
karlo#4645: "Earlier this year, Switch Transformers have already surpassed the trillion parameter mark at (relatively) low computational cost. In May 2021, Google has presented LaMDA in their annual I/O conference, which has specifically showcased the use of very large language models for chatbots. Wu Dao 2.0 was released in June this year and has ten times the size of GPT-3 with overall 1.75 trillion parameters. It uses both text and image data and achieved state-of-the-art performance across a wide range of tasks. It is almost certain that the scale of models will increase further, while hopefully not locking out large parts of the NLP community from investigating and improving the capabilities of such models."
clay#9806: @alstroemeria313 you think training something like CUB would work out?
alstroemeria313#1694: what's CUB?
clay#9806: birds
clay#9806: 6k
clay#9806: (with captions but that's not relevant i guess)
alstroemeria313#1694: oh how big are they
clay#9806: uhh i think they're variable but i think at least 256?
alstroemeria313#1694: oh
alstroemeria313#1694: yeah
alstroemeria313#1694: might work
alstroemeria313#1694: with a chonk model
alstroemeria313#1694: I had my Concerta just now ^^;;
alstroemeria313#1694: (It's 2:30 AM here)
alstroemeria313#1694: But wow diffusion works so much better than my 50-odd GAN tries
๐ฌ gabriel_syme ๐ฌ#3220: I recommend some sleep ๐
alstroemeria313#1694: need to try self-attention blocks w/ the new design
alstroemeria313#1694: I have them implemented and can pull them out of my old code
nshepperd#2316: i am pondering a refactor of jaxtorch so that you actually initialize parameters in the `__init__ ` of each module. instead of just defining their shapes and init functions
nshepperd#2316: which i think is good but requires passing the context object which holds parameters and rng state in
alstroemeria313#1694: I also had coffee just now โ
alstroemeria313#1694: i would probably use a normal jax thing for mine if i ported my diffusion code
nshepperd#2316: i guess you won't be sleeping ^_^
alstroemeria313#1694: bc i can do it from scratch and not have to port OAI models ^^;
alstroemeria313#1694: oh, apparently i can differentiate through an extension of Von Neumann entropy to non-spd matrices?
alstroemeria313#1694: if i use a calculation for entropy where the 0 * log 0 case is handled in the backward pass
nshepperd#2316: hm... does one of the hundreds of 'efficient attention' papers out there do the perceiver thing where you take the qs,ks,vs, learn a small seq_len of p_qs & p_ks as parameters and do `qkv_attention(p_qs, ks, vs)` resulting in p_vs, then `qkv_attention(qs, p_ks, p_vs)` resulting in the output
nshepperd#2316: like, for self-attention that's not O(n^2)
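A single-head sketch of that two-stage scheme, assuming qs/ks/vs are (n, d) arrays and p_qs/p_ks are learned (m, d) parameters with m << n (it closely resembles the Set Transformer's induced attention):
```python
import jax
import jax.numpy as jnp

def qkv_attention(q, k, v):
    # plain scaled dot-product attention
    w = jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]), axis=-1)
    return w @ v

def inducing_attention(qs, ks, vs, p_qs, p_ks):
    p_vs = qkv_attention(p_qs, ks, vs)    # (m, d): summarize the sequence
    return qkv_attention(qs, p_ks, p_vs)  # (n, d): both steps are O(n * m)
```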
alstroemeria313#1694: ok what jax framework should i try to learn
alstroemeria313#1694: like for tpu diffusion
alstroemeria313#1694: wow does everyone just use pytorch datasets/dataloaders with jax
alstroemeria313#1694: whatever i use has to make u-nets easy
alstroemeria313#1694: and residual blocks
nshepperd#2316: yeah... idk, maybe haiku
nshepperd#2316: actually the main reason i wrote jaxtorch is because i didn't like how magical the major frameworks are
alstroemeria313#1694: ahh
nshepperd#2316: CLIP_JAX uses haiku though, so https://github.com/kingoflolz/CLIP_JAX/blob/main/clip_jax/model.py might be a good demo of it
alstroemeria313#1694: ty :)
alstroemeria313#1694: i uh, have used both numpy and the HIPS/autograd package jax gets its grad() api from
alstroemeria313#1694: HIPS/autograd was nice and lightweight but CPU only
alstroemeria313#1694: hm so i don't need randomness inside the model (unless i put in dropout)
alstroemeria313#1694: only during sampling
alstroemeria313#1694: gonna just read a jax tutorial now
alstroemeria313#1694: and then try stuff on a colab tpu
nshepperd#2316: :)
alstroemeria313#1694: if i can get it working i will then sign up for TRC
Orz#3023: Colab tpu is v2-8 I believe
alstroemeria313#1694: yes
Orz#3023: also
ram would be too little to load a checkpoint
alstroemeria313#1694: hold on let me write the rosenbrock function
alstroemeria313#1694: my models aren't gonna be super huge
alstroemeria313#1694: unless i get a whole ton of compute
Orz#3023: aah
aight
alstroemeria313#1694: grad works
alstroemeria313#1694: do you normally compute a loss and a grad separately
alstroemeria313#1694: or use the thing that returns both
alstroemeria313#1694: uhh, i remember it from HIPS/autograd
nshepperd#2316: pretty much always jax.value_and_grad
nshepperd#2316: which returns both
alstroemeria313#1694: ahh
alstroemeria313#1694: What do people name the gradient bc 'grad' is taken lol
nshepperd#2316: if i have a function named grad i probably call the output v_grad lol
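A tiny runnable example of the pattern:
```python
import jax
import jax.numpy as jnp

loss_fn = lambda w: jnp.sum(w ** 2)
loss, v_grad = jax.value_and_grad(loss_fn)(jnp.arange(3.0))
# loss == 5.0, v_grad == [0., 2., 4.]
```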
alstroemeria313#1694: Can I jit loops
nshepperd#2316: if you jit a loop it unrolls it
alstroemeria313#1694: Oh
alstroemeria313#1694: OK
nshepperd#2316: for like diffusion you'd probably jit the model's forward
nshepperd#2316: (for sampling)
mark_k#7251: Fire Alarm ๐ค https://cdn.discordapp.com/attachments/729741769738158194/890180554094444594/AqhLCXG.png
mark_k#7251: Reality is a flexible concept Youtube RL
alstroemeria313#1694: huh vmap works
mark_k#7251: with infinite resolution
alstroemeria313#1694: hm
alstroemeria313#1694: wait, you can vmap something that returns a tuple
alstroemeria313#1694: And it just gives you batch versions of both return values
nshepperd#2316: yep
alstroemeria313#1694: so you can get batch gradients easily
alstroemeria313#1694: that's nice
alstroemeria313#1694: that's a pain to do in pytorch
alstroemeria313#1694: like if you are using gradient norm for an example as a metric for something.
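A sketch of that with a toy loss (all names are illustrative): vmap a value_and_grad'd function once and both outputs come back batched, so per-example gradient norms are a one-liner:
```python
import jax
import jax.numpy as jnp

def loss_fn(w, x, y):
    return (x @ w - y) ** 2

w = jnp.ones(3)
xs, ys = jnp.ones((8, 3)), jnp.zeros(8)

# losses has shape (8,), grads has shape (8, 3)
losses, grads = jax.vmap(jax.value_and_grad(loss_fn), in_axes=(None, 0, 0))(w, xs, ys)
grad_norms = jnp.linalg.norm(grads, axis=-1)  # one norm per example
```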
alstroemeria313#1694: what's a pytree
alstroemeria313#1694: can you just vmap a thing that takes an arbitrarily complex datastructure of arrays and returns another datastructure?
nshepperd#2316: nested dicts/lists/tuples of tensors
nshepperd#2316: yeah you can
alstroemeria313#1694: if the function takes multiple arguments how does vmap decide which it applies to
nshepperd#2316: you can specify what should be mapped on the input
alstroemeria313#1694: (How does grad, for that matter)
alstroemeria313#1694: (I forget how HIPS/autograd did it, it's been so long since I used anything other than PyTorch)
alstroemeria313#1694: ahh
nshepperd#2316: like vmap(f, in_axes=(0,0)) maps over both input arguments
mark_k#7251: I think shit is getting real when we cannot trust any media, anywhere. Just script the truth
nshepperd#2316: vmap(f, in_axes=(None,0)) broadcasts the first and maps over the second
alstroemeria313#1694: oh, you specify the batch dimension
nshepperd#2316: yep
nshepperd#2316: you can set it to something else if for some reason the batch dimension being the first (0) doesn't suit
nshepperd#2316: with grad it just computes the gradient wrt the first argument by default
nshepperd#2316: but you can specify it like jax.grad(f, argnums=(0,1)) to do the first two arguments
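Both specifiers in one runnable toy:
```python
import jax
import jax.numpy as jnp

f = lambda w, x: (w * x).sum()
w = jnp.arange(3.0)
xs = jnp.ones((8, 3))

out = jax.vmap(f, in_axes=(None, 0))(w, xs)     # broadcast w, map over xs
gw, gx = jax.grad(f, argnums=(0, 1))(w, xs[0])  # gradients wrt both args
```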
๐ฌ gabriel_syme ๐ฌ#3220: i'm liking the jax process ๐
alstroemeria313#1694: so where's softmax
๐ฌ gabriel_syme ๐ฌ#3220: btw, you should go for TRC anyway, you almost certainly will get an extension
nshepperd#2316: jax.nn.softmax i think?
alstroemeria313#1694: ahh
alstroemeria313#1694: gonna try mnist logistic regression
alstroemeria313#1694: how do i get a random init
nshepperd#2316: `jax.random.normal(jax.random.PRNGKey(seed), shape)`
alstroemeria313#1694: ty :)
CRG#8707: You can use the Jax scan, which JIT compiles much faster than a python for loop.
alstroemeria313#1694: ahh
alstroemeria313#1694: so like to jit the sampling process?
alstroemeria313#1694: which is a loop with a fixed number of iterations
CRG#8707: Yeah
alstroemeria313#1694: well, it can be not fixed but in practice we set it and then draw a bunch of samples w/ the same iter count
CRG#8707: Otherwise jit compiling for loops takes 20+ seconds.
CRG#8707: Whereas scan is O(1)
alstroemeria313#1694: How do I convert a PyTorch tensor
nshepperd#2316: yeah you can do that. but if it's jitted you won't get progress updates while running the loop
alstroemeria313#1694: ah
alstroemeria313#1694: Or otherwise load MNIST
CRG#8707: jnp.array()?
alstroemeria313#1694: doesn't work
alstroemeria313#1694: oh
alstroemeria313#1694: forgot the data loader returned a tuple
alstroemeria313#1694: so i need to jnp.array them both separately
CRG#8707: You could use jax.tree_map(), to apply the function to everything
alstroemeria313#1694: oh
alstroemeria313#1694: yeah that's nice
alstroemeria313#1694: how do i flatten an array except for the batch dimension.
alstroemeria313#1694: (I literally forgot all my NumPy ^^;;)
nshepperd#2316: idk, i always just name all the dims and use reshape ^_^
CRG#8707: Vmap a flatten?
alstroemeria313#1694: ahh
alstroemeria313#1694: uh
alstroemeria313#1694: How do I even
alstroemeria313#1694: Where is flatten even
alstroemeria313#1694: It's ravel?
CRG#8707: jax.flatten_util has a few things
CRG#8707: Not sure if there's a flatten in the jnp also
alstroemeria313#1694: vmap ravel works
alstroemeria313#1694: If you vmap the same function over and over does it cache it
nshepperd#2316: i don't think so?
nshepperd#2316: if you write f = vmap(g) and reuse f multiple times it might cache something idk
CRG#8707: It's very important to not include things like the jitting step in for loops, as it redoes the compiling every iteration.
CRG#8707: Just do it once above
alstroemeria313#1694: where's cross entropy loss
alstroemeria313#1694: or do i have to write it myself
CRG#8707: -(jax.nn.log_softmax(pred) * target).sum(axis=-1)
alstroemeria313#1694: how do i make ints into one-hots
CRG#8707: jax.nn.one_hot()
alstroemeria313#1694: also that needs parens
nshepperd#2316: eheh
alstroemeria313#1694: ok i have the loss written
alstroemeria313#1694: is there an easy way to vmap and sum
alstroemeria313#1694: or mean
alstroemeria313#1694: like map then reduce
alstroemeria313#1694: or do i have to do it manually
CRG#8707: Grad has an option to reduce, vmap I don't think so.
alstroemeria313#1694: yeah it doesn't work
alstroemeria313#1694: It wants an axis name.
alstroemeria313#1694: I don't have a name.
nshepperd#2316: yeah you just do that manually
alstroemeria313#1694: ok is going
alstroemeria313#1694: ```python
import jax
import jax.numpy as jnp
from jax import random, value_and_grad, vmap
from tqdm import tqdm

def forward(model, x):
    # linear classifier: flatten the image and apply weights + bias
    m, b = model
    return jnp.ravel(x) @ m + b

def loss_fn(pred, target):
    # cross-entropy against a one-hot target
    one_hot = jax.nn.one_hot(target, pred.shape[0])
    return -(jax.nn.log_softmax(pred) * one_hot).sum()

def thing(model, input, target):
    return loss_fn(forward(model, input), target)

thing_batch = vmap(thing, in_axes=(None, 0, 0))  # map over the batch only
thing_reduced = lambda *args: thing_batch(*args).mean()
thing_final = value_and_grad(thing_reduced)

m = random.normal(random.PRNGKey(0), [784, 10]) * 0.01
b = random.normal(random.PRNGKey(1), [10]) * 0.01
model = m, b

# train_dl is a PyTorch DataLoader over MNIST (defined earlier)
for i, batch in enumerate(tqdm(train_dl)):
    inputs, targets = jax.tree_map(jnp.array, batch)
    loss, grad_at_loss = thing_final(model, inputs, targets)
    tqdm.write(f'{i} {loss:g}')
    model = jax.tree_map(lambda x, y: x - 0.01 * y, model, grad_at_loss)
```
alstroemeria313#1694: loss went down yay
nshepperd#2316: yay :)
alstroemeria313#1694: So there's no easy thing to replace `thing_reduced = lambda *args: thing_batch(*args).mean()`?
nshepperd#2316: not really
CRG#8707: Use the vmap inside the loss function and then .mean()?
nshepperd#2316: you could combine it like
```
def total_loss(model, inputs, targets):
    # forward takes (model, x), so the model is broadcast and inputs mapped
    preds = vmap(forward, in_axes=(None, 0))(model, inputs)
    return vmap(loss_fn, in_axes=(0, 0))(preds, targets).mean()
```
but that's basically the same thing
alstroemeria313#1694: How do I run on TPU on Colab.
nshepperd#2316: put this at the top
nshepperd#2316: ```
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()
```
alstroemeria313#1694: ooh ty :)
alstroemeria313#1694: it sure does take a while
alstroemeria313#1694: well that's slightly faster lol
nshepperd#2316: yep!
alstroemeria313#1694: is it spending most of the time shipping the dataset from the pytorch dataloader to the tpu
alstroemeria313#1694: yep! it is!
alstroemeria313#1694: full batch MNIST logistic regression time :blobcutehappy:
alstroemeria313#1694: 6 it/s
alstroemeria313#1694: With batch size 60,000.
alstroemeria313#1694: On TPU.
alstroemeria313#1694: This is only using one of the eight TPU cores right?
alstroemeria313#1694: TPU VMs help with this right?
alstroemeria313#1694: Like, the ones I will get to use with TRC.
nshepperd#2316: they have more powerful cpus and stuff yeah
alstroemeria313#1694: oh hm, are the colab tpu runtimes actually on a tpu vm
alstroemeria313#1694: they are, aren't they
nshepperd#2316: and like 300G of main memory so you can just have the whole dataset in memory lol
nshepperd#2316: colab tpu uses an older system
alstroemeria313#1694: oh
alstroemeria313#1694: the tpu is over the network?
nshepperd#2316: where the tpu is on the network and everything is done by RPC
nshepperd#2316: yeah
alstroemeria313#1694: yeah that
alstroemeria313#1694: so tpu vms help?
nshepperd#2316: definitely
alstroemeria313#1694: now i just have to figure out like... complicated models
alstroemeria313#1694: ^^;;
alstroemeria313#1694: and optimizers
nshepperd#2316: eheh
nshepperd#2316: that's where frameworks unfortunately come in
alstroemeria313#1694: the jax documentation promised me it had Adam
alstroemeria313#1694: But it was only in latest ;_;
nshepperd#2316: ;;
alstroemeria313#1694: I have implemented optimizers myself enough times
alstroemeria313#1694: But yeah, if it's prewritten by someone I can just use that
CRG#8707: Optax.adam should work <https://github.com/deepmind/optax>
alstroemeria313#1694: the diffusion loss is super simple, it's just MSE
alstroemeria313#1694: ...are there low discrepancy sequences in/for jax
alstroemeria313#1694: Or do I have to use (worse) uniform random timesteps
alstroemeria313#1694: (Or use PyTorch's and ship the values to the TPU but that's bad)
alstroemeria313#1694: oh ty :blobcutehappy:
alstroemeria313#1694: wait that has a thing to get the Hessian diagonal?
alstroemeria313#1694: I thought that was intractable
cfoster0#4356: Idk if that's the worst idea
cfoster0#4356: Hmm https://www.tensorflow.org/probability/api_docs/python/tfp/substrates/jax/mcmc/sample_halton_sequence https://www.tensorflow.org/api_docs/python/tf/math/sobol_sample
alstroemeria313#1694: oh no
alstroemeria313#1694: ok so EMA over weights, how do I do that in JAX
alstroemeria313#1694: I mean what has it pre-written
alstroemeria313#1694: (I had to write it myself for PyTorch lol, it's in my diffusion training colab)
๐ฌ gabriel_syme ๐ฌ#3220: there is an ema optimizer somewhere
alstroemeria313#1694: i just need AdamW + EMA over the weights
alstroemeria313#1694: The EMA weights get used in inference only.
๐ฌ gabriel_syme ๐ฌ#3220: holy hell how many libraries lol
๐ฌ gabriel_syme ๐ฌ#3220: https://objax.readthedocs.io/en/latest/_modules/objax/optimizer/ema.html
๐ฌ gabriel_syme ๐ฌ#3220: maybe it helps
alstroemeria313#1694: it needs to be ema over everything in a pytree
alstroemeria313#1694: well
alstroemeria313#1694: no, the things to optimize only
alstroemeria313#1694: not (the equivalent of) buffers
alstroemeria313#1694: which get copied
cfoster0#4356: Hmm idk if the objax code still work for regular Jax code
alstroemeria313#1694: eh i'll do it myself it's not hard
cfoster0#4356: I think there's a way to filter over the pytree
alstroemeria313#1694: I think it is literally two tree_maps.
cfoster0#4356: Are buffers and params tagged differently? I forget
alstroemeria313#1694: i... don't know.
alstroemeria313#1694: However my current model design has no buffers :)
alstroemeria313#1694: (I made the Fourier Features buffer learnable. Also it wouldn't matter bc if it's not learnable it never changes)
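A sketch of that EMA, assuming the model is a plain pytree of learnable leaves (the decay value is illustrative):
```python
import jax

def ema_update(ema_params, params, decay=0.999):
    # exponential moving average over every leaf; keep the EMA copy for
    # inference only. a model with buffers would copy those separately
    return jax.tree_map(lambda e, p: decay * e + (1 - decay) * p,
                        ema_params, params)
```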
alstroemeria313#1694: I'll... look at some framework after my second Ritalin dose maybe :)
alstroemeria313#1694: "but you don't get progress updates inside the loop"
alstroemeria313#1694: i can use scan to fuse like ten iterations together or smth?
alstroemeria313#1694: Then I get progress updates every ten
CRG#8707: Yeah, I think that'd work.
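A sketch of that chunked scan (the `sample_step` body is a stand-in for a real reverse-diffusion step):
```python
import jax
import jax.numpy as jnp

def sample_step(x, t):
    return x * (1.0 - 1.0 / t)  # stand-in; the model call would go here

@jax.jit
def run_chunk(x, ts):
    # fuse len(ts) steps into one compiled call; invoking this once per
    # chunk of ~10 timesteps still gives a progress update between chunks
    return jax.lax.scan(lambda x, t: (sample_step(x, t), None), x, ts)[0]
```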
alstroemeria313#1694: i'm... tired
alstroemeria313#1694: got up at 11:30 PM
nshepperd#2316: please take care of yourself ;;
Louis#0144: Why
alstroemeria313#1694: ...Is there a Haiku tutorial.
Louis#0144: Ben's repo is a good introduction
alstroemeria313#1694: JAX documentation seems a lot sparser than PyTorch
CRG#8707: There's: <https://dm-haiku.readthedocs.io/en/latest/api.html>
alstroemeria313#1694: I saw that
CRG#8707: And an example: https://theaisummer.com/jax-transformer/
alstroemeria313#1694: need a convnet
alstroemeria313#1694: I'm uh, crashing rn
alstroemeria313#1694: Wow this other diffusion paper uses some gamma function based parameterization of noise levels?
CRG#8707: There's a cnn example here: https://roberttlange.github.io/posts/2020/03/blog-post-10/
alstroemeria313#1694: ...And it literally needs a binary search to compute?
alstroemeria313#1694: ok
alstroemeria313#1694: https://arxiv.org/pdf/2106.00132.pdf btw
nshepperd#2316: oh huh
alstroemeria313#1694: so... they just have special schedules
alstroemeria313#1694: fixed ones, not learned?
alstroemeria313#1694: could log snr parameterize them and try them out
alstroemeria313#1694: if i can figure out what they are bc i'm tired rn
alstroemeria313#1694: thankfully i already did the work of figuring out continuous timesteps and schedule independence
alstroemeria313#1694: ty :blobcutehappy:
nshepperd#2316: seems linear and quadratic betas? like the betas from the original ddpm paper
alstroemeria313#1694: oh
alstroemeria313#1694: the betas are...
alstroemeria313#1694: you take their cumsum and that's the current noise variance?
alstroemeria313#1694: or
alstroemeria313#1694: i forget
alstroemeria313#1694: ok so with haiku.
nshepperd#2316: yeah something like that
nshepperd#2316: uh, cumprod of 1-beta i think (that gives alpha_bar, the signal level)
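For reference, the DDPM bookkeeping (schedule endpoints from the original paper) is a cumulative product, not a sum:
```python
import jax.numpy as jnp

betas = jnp.linspace(1e-4, 0.02, 1000)   # linear beta schedule
alpha_bars = jnp.cumprod(1.0 - betas)    # signal variance at each timestep
sigmas = jnp.sqrt(1.0 - alpha_bars)      # noise standard deviation
```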
alstroemeria313#1694: you do...
alstroemeria313#1694: what even. https://cdn.discordapp.com/attachments/729741769738158194/890216306266222642/Screen_Shot_2021-09-22_at_5.41.43_AM.png
alstroemeria313#1694: Why does it make the sublayers inside the `__call__()`
nshepperd#2316: yeah it is somewhat magical
alstroemeria313#1694: Does it trace it
nshepperd#2316: at the top level, you don't directly call it, you apply some function to the module that like traces the call
nshepperd#2316: yeah
alstroemeria313#1694: Like it's intermixed with jax.nn.gelu() which is not an hk.Module!
nshepperd#2316: and gathers up all the modules
alstroemeria313#1694: What the actual fuck why.
nshepperd#2316: yeah... idk how it works
alstroemeria313#1694: Does it do shape inference
alstroemeria313#1694: Like if it traces can it do that too
alstroemeria313#1694: Like Keras used to
CRG#8707: The haiku documentation has a few examples of how it works IIRC
alstroemeria313#1694: ...How do you get at the layers later
alstroemeria313#1694: Like if you want to look at their weights or something.
CRG#8707: You have a params pytree
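A minimal haiku sketch of both points, the tracing and where the weights end up (layer names in the params dict are haiku's defaults; this is an illustrative example, not code from the thread):
```python
import haiku as hk
import jax
import jax.numpy as jnp

def net_fn(x):
    x = hk.Linear(128)(x)  # modules are created inside the traced call
    x = jax.nn.gelu(x)     # plain jax functions mix in freely
    return hk.Linear(10)(x)

net = hk.without_apply_rng(hk.transform(net_fn))
params = net.init(jax.random.PRNGKey(0), jnp.zeros((1, 784)))
# params is just a pytree (nested dict) of arrays, e.g. params['linear']['w']
logits = net.apply(params, jnp.zeros((1, 784)))
```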