nshepperd#2316: that's just gpt
Deleted User#0000: hehe
Deleted User#0000: its actually whois hehe
C𝙧𝙤𝙞𝙨𝙨𝙖𝙣𝙩#7814: That's a normal dialogue when you buy fallout76
Kia#2550: #off-topic
Untouch#9150: wonder if it found the discord link through web scraping
Untouch#9150: might be a way to obfuscate it or something
Untouch#9150: people do this with email addresses on youtube
C𝙧𝙤𝙞𝙨𝙨𝙖𝙣𝙩#7814: On Discord servers you can choose to prevent anyone from typing unless they go through some channel and accept some terms. Don't know if they can access the member list though
cfoster0#4356: 👷 yes, let's consolidate spambot talk to #off-topic, shall we?
EricHallahan#1051: I have banned every bot involved.
EricHallahan#1051: Thank you to everyone who reported.
Kia#2550: Yey!
Deleted User#0000: woot
EricHallahan#1051: Now back to business as usual.
Kia#2550: *Surveillance*
EricHallahan#1051: Notice: No server event this week. Interpretability Reading Group returns next week on Saturday, November 20, 2021 2:00 PM.
ersatz#0001: TZ?
EricHallahan#1051: Automatic
ersatz#0001: UTC then okay
EricHallahan#1051: No, local
ersatz#0001: ?
EricHallahan#1051: The timestamp listed here is local time.
EricHallahan#1051: Adjusted to the system timezone.
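(For reference: Discord does this with timestamp markdown of the form `<t:UNIX_SECONDS:STYLE>`, e.g. `<t:1637416800:F>`, which each client renders in its own local timezone.)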
ersatz#0001: uh so discord can do that
nshepperd#2316: its a spooky feature
Space#4359: omg how
Kia#2550: Wait is this adjusted to local timezone?
Kia#2550: Nice
Kia#2550: But god
Kia#2550: 3am
Kia#2550: But I can look into it hopefully. Thanks for informing me nonetheless
fe#0483: Unfortunately this is what I see for the magical timestamp: https://cdn.discordapp.com/attachments/729741769738158194/909138336726265936/image0.png
alstroemeria313#1694: may try something dumb
alstroemeria313#1694: if i can figure out the right form for it
elderfalcon#4450: Just bumping in case anyone knows any kind of test harness for this particular guy! :D
alstroemeria313#1694: ```
i: 0, fake: b't eo s aonaoaaos n eya ee areoa aoeore kaos ooeoese aoe'
i: 1, fake: b' aoroo noeie aca uoe ec iioeeooryeaeoeeseoo ooaiio aoi'
i: 2, fake: b'ad aoa aseseeolieoeheoeoeheoe vneoore aae sr he erloa'
i: 3, fake: b' eoaatoo oeeeeianaaoeroo oae d r etoeie ee oeee o oa rr'
i: 4, fake: b'oao oooseai aa oeeeoaor y oaoe kisa tii eseaiaeneeo reas'
i: 5, fake: b' aweoaa a od too eeoeo aeoseeoeieeeuino h io ae ioiooeo '
i: 6, fake: b'eoe oas no iia oaeoerr aeioa r a riao o eeiem iso oooaoar'
i: 7, fake: b'oa ooooo roosea e eoa oee eha eo eghenecoseeaaeaeo a iasinoo'
i: 8, fake: b' oceoesee eeoaneo e aae oea osaoo ee ea eeoeonase nkoo ao'
i: 9, fake: b'Iaa oeoe aa oooneirelost e as ?aeeaeo r ooaa rotnr aiooo '```
alstroemeria313#1694: eheh
alstroemeria313#1694: @cfoster0 i am trying your text diffusion idea
cfoster0#4356: Ooo which one?
alstroemeria313#1694: gaussian diffusion on text encoded into bytes then one-hots
cfoster0#4356: Ah cool
alstroemeria313#1694: should probably do it in an embedding space instead but not sure how to learn it yet
cfoster0#4356: Yeah that's what I ended up doing. Just using embeddings from ByT5
alstroemeria313#1694: oh, how did it work when you tried it
alstroemeria313#1694: my model is just an encoder-only transformer
alstroemeria313#1694: it predicts v
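A minimal sketch of the setup being described, assuming the cosine-schedule v-objective parameterization and the [-1, 1] scaling of the one-hots mentioned later in this log; `model(x_t, t)` stands in for the encoder-only transformer:
```
import math
import torch
import torch.nn.functional as F

def t_to_alpha_sigma(t):
    return torch.cos(t * math.pi / 2), torch.sin(t * math.pi / 2)

def training_step(model, text_bytes):  # text_bytes: (batch, seq) int64 in [0, 255]
    x = F.one_hot(text_bytes, 256).float() * 2 - 1  # one-hots scaled to [-1, 1]
    t = torch.rand(text_bytes.shape[0], device=text_bytes.device)
    alpha, sigma = t_to_alpha_sigma(t)
    alpha, sigma = alpha[:, None, None], sigma[:, None, None]
    eps = torch.randn_like(x)
    x_t = alpha * x + sigma * eps  # forward process: mix reals with Gaussian noise
    v = alpha * eps - sigma * x    # v-objective target
    return F.mse_loss(model(x_t, t), v)
```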
cfoster0#4356: I was doing it as an autoencoder
alstroemeria313#1694: oh
cfoster0#4356: V prediction diffusion
alstroemeria313#1694: like one of my diffusion autoencoders except for text? or mlm type
cfoster0#4356: Yes
alstroemeria313#1694: eheh~
cfoster0#4356: Wasn't sure what the right architecture for the encoder and decoder were, though. Should play around with it again
alstroemeria313#1694: ahh
cfoster0#4356: I think it should work ok. Like I was definitely seeing "lossy text compression" with it, without a discrete/variational bottleneck
alstroemeria313#1694: ahh
alstroemeria313#1694: these fakes are bad
alstroemeria313#1694: how long did you have to train for
alstroemeria313#1694: ...do you think i should add depthwise 1d convolutions along the sequence dim
cfoster0#4356: Mmm I don't quite remember. Maybe 100M tokens or so, to learn to use the conditioning?
alstroemeria313#1694: ohh
alstroemeria313#1694: mine is seeing 6k tokens per iteration
alstroemeria313#1694: 84M per epoch
alstroemeria313#1694: and it is taking maybe 15 minutes to train per epoch
cfoster0#4356: I always forget what this means
alstroemeria313#1694: depthwise means no channel mixing
alstroemeria313#1694: pointwise means channel mixing only, no spatial mixing
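In PyTorch terms, a minimal illustration of the distinction:
```
import torch.nn as nn

d_model = 512
# Depthwise: groups == channels, so each channel gets its own length-3 filter
# (mixing along the sequence only, no channel mixing).
depthwise = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1, groups=d_model)
# Pointwise: kernel size 1, so channel mixing only, no mixing along the sequence.
pointwise = nn.Conv1d(d_model, d_model, kernel_size=1)
```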
cfoster0#4356: Ah. Yeah I guess it can't hurt
alstroemeria313#1694: the model does not have to be causal
alstroemeria313#1694: so like one thing i worry about with diffusion on one-hots
alstroemeria313#1694: is that won't most of the "work" of deciding what token to output be done in the high noise stages and then you can pretty much tell what it's going to be at some point?
alstroemeria313#1694: gaussian diffusion, anyway
alstroemeria313#1694: idk when this actually happens though
cfoster0#4356: Yeah. Diffusion over one hots or logits is a bit 🤔
alstroemeria313#1694: Yeah
alstroemeria313#1694: did you ever get coherent generations from yours?
alstroemeria313#1694: it feels like we should be using something other than gaussian but i never figured out what exactly.
Kharr#7888: If you do it in latent space it works fine for generating words, but it has trouble generating sentences
cfoster0#4356: I didn't push it very far, but no. Not unconditional. Just compression
alstroemeria313#1694: you mean where tokens are bytes/characters?
Kharr#7888: Yes.. you should be able to get it to generate common words (maybe with some typos) at the character level pretty quickly
alstroemeria313#1694: ahh
alstroemeria313#1694: mine is still generating strings of mostly vowels separated by spaces
alstroemeria313#1694: ```
i: 0, fake: b' aeaoeet a oo e oeoo ceiaion eeoo l iaaooeooaay o ir'
i: 1, fake: b'C aoa dsa ahena a o aa inh et et saa o o eeeao a o naaaeo'
i: 2, fake: b' haoo aaooayaoneewaaaenanaohio oaoesao onaa l oa aav ekeao '
i: 3, fake: b'auneoeahieeeaoeelaege oameoo ood eeemaoeneeoootfthosoebeaaoo'
i: 4, fake: b'stoee y eoeohao opoo ee ao foaaulaaLo anooaeare jh nee'
i: 5, fake: b'owe a ath oeiaohomew ieo aaaeoaaawoneocla eia a loaaoa oa '
i: 6, fake: b'thepowoeoa ua o eraoo so aioeaeaaon oeua o e ahaeazaeoea'
i: 7, fake: b'e oeoankseaoaoa a r aoo sea oo oa e e oa eoina oaed o '
i: 8, fake: b'aap a e ne aooaeaehtaonletaoo ieoeee aoa alo aa oood oou'
i: 9, fake: b' hi ae do d o? aiee cuoeeeeao eoe loo e od aoaye eyeont' ```
Kharr#7888: That's expected, vowels are the most common characters so they are learned first (SGD is lazy)
cfoster0#4356: If not over logits, or one hots, or latents, I don't know what
alstroemeria313#1694: it seems like we should be using the dirichlet distribution somehow tbh
alstroemeria313#1694: > This means that if a data point has either a categorical or multinomial distribution, and the prior distribution of the distribution's parameter (the vector of probabilities that generates the data point) is distributed as a Dirichlet, then the posterior distribution of the parameter is also a Dirichlet. Intuitively, in such a case, starting from what we know about the parameter prior to observing the data point, we then can update our knowledge based on the data point and end up with a new distribution of the same form as the old one. This means that we can successively update our knowledge of a parameter by incorporating new observations one at a time, without running into mathematical difficulties.
alstroemeria313#1694: well
cfoster0#4356: Similar to with diffusion over probs, we'd need the outputs to always be positive, right?
alstroemeria313#1694: samples from a dirichlet are vectors whose elements are between 0 and 1 and which sum to 1
cfoster0#4356: Right right I'm talking about the concentration parameters that define the Dirichlet
EricHallahan#1051: IIRC the domain of alpha is the positive reals.
EricHallahan#1051: Yeah they are.
cfoster0#4356: Anyways yeah I think going through the math for a Dirichlet version of diffusion could be valuable
alstroemeria313#1694: ```
i: 0, fake: b'Iee oo g e oaaeaoeoroaaeee aeeaho soeeaeou ao eo oaolaaeooe'
i: 1, fake: b'a e a r os? s deeahe ewaee au oole eaaheneoeytadedatw suon e'
i: 2, fake: b'aeaetteaa aboe oe e eeo mo gtieoopa oaso an p oo.soaeoo'
i: 3, fake: b' eeoioa eyeoioa e Io ooeeawoea? eee raaaeat e eDoaao a s e'
i: 4, fake: b'weeaoeeooeome?a aelia tonaee? a a oaooea ie v 5eoarowr, ?e'
i: 5, fake: b'tI ess a oea oa oeo e p rot e ayoeoiyooseaoaoe at aaaei'
i: 6, fake: b'geyoaooo?eoha eoao o eg o Wog y e eeo aoeaioteao eedaaaoe'
i: 7, fake: b'or y eeo a o oo ana an teoooy aai aanaeeeoeoiefaegofo'
i: 8, fake: b'wa oo e o e euaooy ea neneawaaeueapooet aea oa ao a oe'
i: 9, fake: b'ecao neoa oaa e he oon one akoh se aeof o aue beeee so' ```
alstroemeria313#1694: @Kharr how large was your model?
Kharr#7888: 100M
alstroemeria313#1694: ohh
alstroemeria313#1694: mine's smaller
alstroemeria313#1694: like 38M maybe
EricHallahan#1051: :morepower:
Kharr#7888: Even with the bigger model, it was still mostly garbage. But it was fun to play with.
alstroemeria313#1694: ahh :/
Kharr#7888: Are you using CNN or something else?
alstroemeria313#1694: encoder-only transformer
Kharr#7888: Try a CNN encoder transformer (just replace attention with a 1D CNN with k=3). You should see much better results, much more quickly.
alstroemeria313#1694: huh
alstroemeria313#1694: maybe i *should* intersperse conv1d
Kharr#7888: Attention has a really hard time with noisy data early on and has no local bias so the model has a lot of trouble learning. CNN will learn how to spell words much quicker and move out from there.
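A minimal sketch of the "replace attention with a 1D CNN, k=3" suggestion; the pre-norm residual placement is an assumption, not something specified in the chat:
```
import torch.nn as nn

class ConvBlock(nn.Module):
    # Hypothetical drop-in for a self-attention block: local mixing via a k=3 conv.
    def __init__(self, d_model):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)

    def forward(self, x):  # x: (batch, seq, d_model)
        y = self.norm(x).transpose(1, 2)  # Conv1d expects (batch, channels, seq)
        return x + self.conv(y).transpose(1, 2)
```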
alstroemeria313#1694: mmm
alstroemeria313#1694: did you have downsamples/upsamples
alstroemeria313#1694: like a u-net
EricHallahan#1051: Look at all the token shift stuff that went on in #research, it demonstrates how powerful this approach can be.
alstroemeria313#1694: yeah
alstroemeria313#1694: but i am skeptical that a model w/ no downsamples/upsamples *and* no self-attention can learn to do anything but spell words
alstroemeria313#1694: so i was thinking add depthwise conv1d *in addition*
Kharr#7888: @EricHallahan remember the name of that Google paper which demonstrated CNN transformer was competitive with attention on most tasks?
EricHallahan#1051: 🤔
Kharr#7888: came out around the same time as MLP Mixer and gMLP I think this was it: https://arxiv.org/abs/2105.03322 TLDR: Convolutions work fine as long as you don't need long-range retrieval
alstroemeria313#1694: ```
i: 0, fake: b'whvaseo e g an ges l e tesh nes arit n u( ba 4ka o v'
i: 1, fake: b'ebafgehak eie aut?ny .t o e? ie nyo ne ott te h r br sa s'
i: 2, fake: b'Hiw tat rwasi e tu apc wamhe uese catlaunir ei ss Ane e '
i: 3, fake: b'Wulr nebet e theo psamot hpasdcanott 2- ortch kr i nt h '
i: 4, fake: b'aifl eieh boe y tauncu elhas th iw re ? i ns? loue tufy up '
i: 5, fake: b'whwt heun e Goe casirons? ur rott o hal ho r otekteg'
i: 6, fake: b'w nf ot at r ?ra a rs dtavnetito oy o his ojta ghp heo'
i: 7, fake: b' eeri isa theini s?onwte 7ar me f dop w fo ?hauL r i '
i: 8, fake: b"upY c oi nkne'rP nfean et is n? nt alineitwe uanam ide an "
i: 9, fake: b'eony m eo o tmlr s er cn 8 r ce a ti an s n rf yers t ' ```
Kharr#7888: What are you training on? The amount of question marks is a little puzzling. QA dataset?
alstroemeria313#1694: yahoo answers
Kharr#7888: Makes sense then.. going to have to wait a while if this is all it has learned so far :berk:
alstroemeria313#1694: i added depthwise conv
elderfalcon#4450: Well that explains things haha
alstroemeria313#1694: ooh
alstroemeria313#1694: the loss started going down!
EricHallahan#1051: Yay!
alstroemeria313#1694: with this model for some reason the loss goes down to ~0.5 and stays there for a long while
alstroemeria313#1694: and then sometimes it finds its way out of whatever loss surface plateau it's in and starts dropping
EricHallahan#1051: Yeah just let it chew on the data.
alstroemeria313#1694: i was having trouble getting it to go out of the plateau at all
alstroemeria313#1694: until i started initing the depthwise conv1d weights to 0
EricHallahan#1051: What exactly was it hanging on?
alstroemeria313#1694: guess it was scrambling it too much
alstroemeria313#1694: mm? loss would just hover around 0.5
alstroemeria313#1694: vary a bit from batch to batch but never start dropping
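The zero-init trick being described, as it might look in code (assuming the depthwise conv sits on a residual branch, so zeroing it makes the block an identity at init):
```
import torch.nn as nn

d_model = 512
conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1, groups=d_model)
nn.init.zeros_(conv.weight)  # the branch contributes nothing at init...
nn.init.zeros_(conv.bias)    # ...so a residual block starts out as the identity
```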
Monopton#6214: When an AI knows more than you about how to get into college https://cdn.discordapp.com/attachments/729741769738158194/909212754764767272/unknown.png
EricHallahan#1051: Not here, #prompting please.
Monopton#6214: sorry
elderfalcon#4450: Log likelihood?
alstroemeria313#1694: mse
elderfalcon#4450: Oh, gotcha. On like the latent space, or token probs? (Just batting the tennis ball back and forth here a bit).
alstroemeria313#1694: on one-hots :)
alstroemeria313#1694: well. on one-hots partially corrupted with Gaussian noise
alstroemeria313#1694: Which is a terrible way to corrupt them
alstroemeria313#1694: I need a better way
elderfalcon#4450: Interesting. I may be missing the objective (I read this as pretraining over text generation before running a model for diffusion, is this right?) If so, then some of the infotheoretic stuff might be in an interesting place w.r.t. token encoding.. .
alstroemeria313#1694: no pretraining
alstroemeria313#1694: Training from scratch
elderfalcon#4450: Oh, like how that randomly intialized network dealio can be used to generate coherent outputs solely based on the inputs?
alstroemeria313#1694: well i mean
alstroemeria313#1694: i am trying to train a text generation model
alstroemeria313#1694: from scratch
elderfalcon#4450: Interesting, gotcha.
elderfalcon#4450: Styleganv1 had those latent space decoding MLP layers before injecting them into the Ada...gosh norm I think for that version? And then 2 had the mod-demod structure?
elderfalcon#4450: Cause I'm guessing that gaussian perturbation has potentially very highly nonlinear changes w.r.t. the outputs due to the relative occupancy of whatever prior/learned encoding space is being learned
EricHallahan#1051: Adaptive Instance norm 😉
elderfalcon#4450: Ye. AdaIN was p cool (including the stat tanking trick the network learned, that was so sick)
elderfalcon#4450: With a nonstationary objective, if the network is learning its encoder space too, that would be very hard to estimate the expected amount of output perturbation based upon the inputs.
alstroemeria313#1694: it is stationary
elderfalcon#4450: One could possibly calculate a rough empirical Fisher diagonal then for each embedding position (either absolute or dimension specific) to measure sensitivity to perturbation and then secure a slightly more (hopefully) stable training method that way.
If it all ends up being the same due to some quirk of the learned embedding space it would be a major facepalm though, haha.
alstroemeria313#1694: oh. there isn't a learned embedding space.
alstroemeria313#1694: I am literally just using raw one-hots. ^^;;
elderfalcon#4450: Oh, gotcha.
My apologies if I keep asking too many questions here (just curious + rubber ducky -- feel free to tell me when to stop whenever) -- anything against xentropy then to try a perturbation scheme that works with that (gauss perturb, div by sum, safety epsilon)? Then you could hopefully retain some semblance of niceness over the density of the data. :D
EricHallahan#1051: Anecdote from Kharr:
https://discord.com/channels/729741769192767510/747850033994662000/907658833449598998
Experiment code:
https://discord.com/channels/729741769192767510/747850033994662000/905942937521754165
alstroemeria313#1694: i keep trying to come up with a good perturbation scheme for categorical data
alstroemeria313#1694: i kind of want the final distribution of the forward process to be a Dirichlet with concentration parameter all 1
elderfalcon#4450: Yeah! Would the above scheme cause issues? It should have similar properties from a noising perspective but still fit within the log-likelihood motif.
You might have to turn the stepsize waaaaaaaaayyyyyy down though if it's softmax logits as some of the step sizes may be too outrageously large, relatively speaking. 😬
elderfalcon#4450: (this should approach Dirichlet with the right cooldown scheme, I think)
alstroemeria313#1694: oh, which scheme?
EricHallahan#1051: What are you doing now? Perturbing by a Gaussian on the simplex?
alstroemeria313#1694: ahahah
alstroemeria313#1694: No. Perturbing one-hots scaled to the range -1 to 1 with Gaussian noise.
alstroemeria313#1694: It is literally the dumbest thing
alstroemeria313#1694: ...so dirichlet has a product in its pdf?
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/909233042239938560/Screen_Shot_2021-11-13_at_4.07.27_PM.png
alstroemeria313#1694: Ugh
alstroemeria313#1694: Can we use log concentration
EricHallahan#1051: The thing is you don't want to "fall off" the simplex, so you would probably need to resample a lot.
alstroemeria313#1694: there needs to be some parameterization that *doesn't let* you fall off the simplex.
EricHallahan#1051: Yeah
alstroemeria313#1694: Like logits or smth
EricHallahan#1051: It's definitely hacky to resample.
cfoster0#4356: With logits I think you need label smoothing
AI_WAIFU#2844: What are you trying to do?
alstroemeria313#1694: logits will be ugly bc our real data is one-hots
alstroemeria313#1694: discrete/categorical diffusion for text generation
AI_WAIFU#2844: Oh I see. Have you tried randomly changing individual characters to other random characters?
alstroemeria313#1694: how do you know when you're done
AI_WAIFU#2844: You can estimate that.
alstroemeria313#1694: there should be like, a way to do it with categorical distributions though.
elderfalcon#4450: @alstroemeria313 Dis one right here! :D
AI_WAIFU#2844: WDYM?
elderfalcon#4450: :thonk:
StellaAthena#3530: @alstroemeria313 I thought that the whole point of using Dirichlet was to get away from Gaussians
alstroemeria313#1694: like you start sampling with random categorical distributions sampled from a flat Dirichlet, and you refine/sharpen the categoricals over the sampling process
AI_WAIFU#2844: just imaging every delta of time, flipping a coin with probability epsilon and then randomly changing a random character in the sequence to another random character.
alstroemeria313#1694: yes but i haven't figured out how to do that yet. ^^;;
EricHallahan#1051: Select one random character, edit it, repeat for the length of sequence or smth
AI_WAIFU#2844: yeah you can estimate when all of the characters have been modified at least once
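A sketch of the proposed discrete forward process (the `corrupt` helper is hypothetical). The "you can estimate that" part is the coupon-collector problem: editing one uniformly random position per step, you expect roughly L·ln(L) edits before all L positions have been touched at least once.
```
import torch

def corrupt(text_bytes, eps=1e-3):
    # Per step: with probability eps, replace one randomly chosen
    # character in each sequence with a uniformly random byte.
    text_bytes = text_bytes.clone()
    for b in range(text_bytes.shape[0]):
        if torch.rand(()) < eps:
            i = torch.randint(text_bytes.shape[1], ())
            text_bytes[b, i] = torch.randint(256, ())
    return text_bytes
```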
elderfalcon#4450: I feel like that -1 to 1 range <-> one hots is the biggest sticker right now, mainly because the gaussian is dependent upon that scheme and that sounds a bit sticky 😬
I dunno for sure tho, "worse" has been done and worked okay, I guess?
AI_WAIFU#2844: that's just combinatorics
AI_WAIFU#2844: then just take the limit
StellaAthena#3530: What happened when you walk through the normal equations but just with Dirichlet and categorical
alstroemeria313#1694: Like what is the forward process that starts with one-hots and ends in flat Dirichlet.
AI_WAIFU#2844: .
cfoster0#4356: I believe this is what multinomial diffusion does.
alstroemeria313#1694: I have not been able to figure it out.
AI_WAIFU#2844: how well does that work?
StellaAthena#3530: Where do you get stuck
alstroemeria313#1694: I don't know what distribution of noise to add
StellaAthena#3530: Uniform seems natural
alstroemeria313#1694: I don't know how to change the distribution so I end up with flat Dirichlet at the end instead of uniform or something
cfoster0#4356: Not very, at least from what I'm seeing reported
You can look at the samples for multinomial diffusion in the first paper , or for "D3PM-uniform" in the second paper
<https://arxiv.org/abs/2102.05379>
<https://arxiv.org/abs/2107.03006>
AI_WAIFU#2844: Are you sure both of these do *one charater at a time*? That's super important else you'll lose the notion of locality.
StellaAthena#3530: @alstroemeria313 this is what I should be looking at, right? https://cdn.discordapp.com/attachments/729741769738158194/909236562280214528/IMG_8443.png
cfoster0#4356: Only one character per timestep?
AI_WAIFU#2844: Yes.
AI_WAIFU#2844: And with small probabilty of corruption per timestep.
cfoster0#4356: No I don't think either do that
cfoster0#4356: They have a small probability of corruption per timestep, but it's done across all tokens in the sequence
AI_WAIFU#2844: Yeah then I wouldn't expect that to work at all.
EricHallahan#1051: Wouldn't that mean that you would need to tie the number of timesteps to the sequence length?
EricHallahan#1051: At minimum?
cfoster0#4356: Would this lose the main benefit (parallel decoding)?
AI_WAIFU#2844: Yeah, but what you should actually do is take the continuous limit of the process, and then there'll be a scaling factor of like L or something.
alstroemeria313#1694: so with *one timestep*. starting from pure noise. the pred should clearly be *the distribution of letters in the training set*.
alstroemeria313#1694: like mb slightly different for each position in the sequence, depending on if the distribution in the training set is different for different sequence positions.
alstroemeria313#1694: so with two timesteps...
AI_WAIFU#2844: Well if you want you can probably have a parameter that controls the resampling fraction per step. More will be faster, but shittier.
StellaAthena#3530: 2-grams!
AI_WAIFU#2844: this just gets annoying tho cause then you need to go back and remember grade school combinatorics.
elderfalcon#4450: But MLE over MSE I don't think will do that. :'(
EricHallahan#1051: Me with minimal statistics knowledge: :harold:
elderfalcon#4450: :harold:
alstroemeria313#1694: for diffusion models on images i do actually get something like the mean image for t=1
alstroemeria313#1694: but for categoricals idek
elderfalcon#4450: Yes, on images I think that makes sense somewhat, because the MSE will make the mean image.
But mean-matching != distribution-matching. You'll get the most likely characters, which it looks like is happening with the vowels I think.
To have a less biased prior...well, I'm guessing there's a few things that could/would do that....
cfoster0#4356: I'm actually not sure exactly what we need
StellaAthena#3530: When my migraine clears I’m probably going to just see it. I feel like I’m being taunted
alstroemeria313#1694: yeah
alstroemeria313#1694: that's how it works for Gaussian diffusion
StellaAthena#3530: So we have this expression https://cdn.discordapp.com/attachments/729741769738158194/909240395895042140/IMG_8444.png
StellaAthena#3530: Picking $p(\theta)$ to be the Dirichlet distribution we get $p(x|\mathbf{x})$ is a categorical distribution
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/909241055168331786/193204646687408129.png
StellaAthena#3530: @alstroemeria313 So what do you need? The h param schedule that causes proper convergence?
AI_WAIFU#2844: actually, thinking about this more, I think you should be able to approximate this process by resampling everything simultaneously, just with a really small probability, so you would get "parallel decoding" but it would still take forever to draw a sample. probably steps >> L
StellaAthena#3530: If $p_1,\ldots p_k\sim\mathcal{D}ir(\alpha_1,\ldots,\alpha_k)$ and $y\sim\mathcal{C}at(p_1,\ldots, p_k)$ then $f(p|\alpha) \prod f(y|p) = \mathcal{D}ir(\alpha')$ where $\alpha'_i = \alpha_i + c_i$ where $c_i$ is the number of observations in the data of category $i$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/909243545309806612/193204646687408129.png
StellaAthena#3530: What do you need @alstroemeria313? The marginal distribution?
alstroemeria313#1694: not sure
StellaAthena#3530: The predictive prior is $$f(y=x|data)=\frac{\alpha'_x}{\sum \alpha'_j}$$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/909244402881409055/193204646687408129.png
alstroemeria313#1694: I am trying a thing rn but I may have written the sampling function wrong
alstroemeria313#1694: The thing I am trying is a forward process where I lerp between a one-hot categorical and a flat Dirichlet sample.
alstroemeria313#1694: Which obviously ends up at flat Dirichlet at the end and I can also sample from any timestep without computing the intermediate steps
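A sketch of that forward process, with names assumed. A convex combination of two points on the probability simplex stays on the simplex, which is why any timestep can be sampled directly:
```
import torch
import torch.distributions as D

def forward_process(onehots, t):
    # onehots: (batch, seq, vocab); t broadcastable, 0 = clean, 1 = pure noise
    alpha = torch.ones(onehots.shape[-1])
    noise = D.Dirichlet(alpha).sample(onehots.shape[:-1])  # flat Dirichlet rows
    z = onehots * (1 - t) + noise * t  # a valid categorical for any t > 0
    return z.log()  # the model takes logits as input
```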
cfoster0#4356: Intuitively I'm thinking "if you cumulatively add [large real numbers] random observations to each category over the course of your schedule, will you end up with something like a flat distribution over your tokens, regardless of what you started with?"
cfoster0#4356: Is your model getting the sampled tokens or the Dirichlet parameters as input?
alstroemeria313#1694: it is getting logits of categoricals.
StellaAthena#3530: I’m ass at debugging code but I can spit out equations until you’re inspired to figure it out 🙂
alstroemeria313#1694: I think this forward process may be non-Markovian though. ^^;;
alstroemeria313#1694: ```
i: 0, fake: b'Whwt '
i: 1, fake: b'Whwt '
i: 2, fake: b'Whwt '
i: 3, fake: b'Whwt '
i: 4, fake: b'Whwt '
i: 5, fake: b'Whwt '
i: 6, fake: b'Whwt '
i: 7, fake: b'Whwt '
i: 8, fake: b'Whwt '
i: 9, fake: b'Whwt '
```
alstroemeria313#1694: Something's wrong
StellaAthena#3530: Is it supposed to say “what”?
alstroemeria313#1694: ...Oh
alstroemeria313#1694: I wasn't taking the log of the starting Dirichlet sample
StellaAthena#3530: That would be problematic
StellaAthena#3530: You’re using log likelihood? To dodge the product term?
alstroemeria313#1694: the model takes logits as input and output
alstroemeria313#1694: During training.
StellaAthena#3530: Reasonable
alstroemeria313#1694: And I just forgot to pass in logits to the sampling loop
alstroemeria313#1694: Aha. These samples look more random now
alstroemeria313#1694: Anyway the model outputs the predicted denoised logits
alstroemeria313#1694: And the reconstruction loss is cross-entropy vs the clean reals.
StellaAthena#3530: Yup
StellaAthena#3530: It is looking better now
alstroemeria313#1694: I suspect, from prior experience, that this forward process is kinda bad
StellaAthena#3530: Or do you gotta wait to see
alstroemeria313#1694: the samples are garbled but they are not *all the same*
StellaAthena#3530: Progress
alstroemeria313#1694: If you get diffusion outputs that are all the same something has gone really wrong
alstroemeria313#1694: ```
i: 0, fake: b'Wir moeadertllaodieinwgei nreWtswe?EMyaral?o ao/ulkcmnnto m'
i: 1, fake: b'seo Iadyiqk i?oidenytnp m ehmsurhguetis eil dooaIm??drnno'
i: 2, fake: b'Atn id ooumeuose aHtaIsitnewhan wodt w o hg uspnresteomh -c'
i: 3, fake: b'Woy ao atiksn a aont rt m feeo?u0mn bootat evt tosfsrwroh '
i: 4, fake: b'ooawteoopnaeh ?rdai k seltessrh n p rtJesnlhre rkt? ttheyI '
i: 5, fake: b'Io fdalutovhsi mcep bok ?stgn Sonepdwie l sdechr? tothodnu'
i: 6, fake: b"Daa dtdHoyou'kwidwtI eamto aseneethuniiutoli o?tasa gkinI s"
i: 7, fake: b'hi ttlnM parmlt twcaenfonkl?oSelc dhknovae hi ti t oecrea or'
i: 8, fake: b'Wssadnt 0 AaahSgeedn s tcn otor o L L?dTeb rio rOfliMhI0ah'
i: 9, fake: b'Wiua osI enelt pl ma jeghtn r h?wwtr dk ysaetfemu gooe aae'```
BoneAmputee#8363: "wir" is a real word 🇩🇪
bmk#1476: "wir sind das volk" oder sowas, keine ahnung bin kein ossi
elderfalcon#4450: There we go that looks right! :D
StellaAthena#3530: And now we wait 🙂
bmk#1476: why is it so important that you only change one character at a time?
bmk#1476: it's not like you can only change one pixel at a time with images
cfoster0#4356: Seems like it has almost learned to start its samples with capital letters :thinkies:
alstroemeria313#1694: Huh wait. I can use eps objective by outputting logits for the Dirichlet noise
alstroemeria313#1694: The other endpoint.
alstroemeria313#1694: And then the loss would be KL divergence of the output logits for eps vs the actual eps
alstroemeria313#1694: Since the Dirichlet samples are valid probability distributions too.
alstroemeria313#1694: ...Is this even valid since the forward process is non-Markovian (it is right?)
alstroemeria313#1694: like you cannot express this in terms of adding a bunch of smaller independent noise vectors to a one-hot.
alstroemeria313#1694: i would really like a Markovian forward process but just cannot figure it out
cfoster0#4356: ```
a = ones
p = sample(dirichlet(a))
z = lerp(timesteps, onehots, p)
eps = logit(onehots - z)
eps_hat = model(z, timesteps)
loss = kl_div(eps, eps_hat)
```?
alstroemeria313#1694: i don't think so
alstroemeria313#1694: ```
a = ones
p = sample(dirichlet(a))
z = lerp(onehots, p, timesteps)
eps = model(z, timesteps)
loss = kl_div(eps, log(p))``` i think
alstroemeria313#1694: where `kl_div()` is reverse kl div (pytorch style, target on the right)
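For reference, a sketch of that call in PyTorch (`eps_hat` being the model's output logits and `p` the Dirichlet noise sample, per the pseudocode above):
```
import torch.nn.functional as F

# F.kl_div(input, target): `input` is log-probabilities, `target` is
# probabilities; it computes KL(target || prediction), target on the right.
loss = F.kl_div(eps_hat.log_softmax(-1), p, reduction='batchmean')
```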
cfoster0#4356: Hmm what makes this an eps prediction model then?
alstroemeria313#1694: for eps objective in Gaussian diffusion your target is the N(0, I) pre scaling/mixing with the reals
alstroemeria313#1694: hm
alstroemeria313#1694: But this is, again, for the Markovian DDPM process 🤔
alstroemeria313#1694: Is it different for non-Markovian
EricHallahan#1051: ¯\_(ツ)_/¯
cfoster0#4356: Oh oh I see what you're saying now.
StellaAthena#3530: What’s the issue with the markov property
alstroemeria313#1694: when i tried non-markovian stuff with images i got bad results
StellaAthena#3530: No, I mean why isn’t this formulation markovian
alstroemeria313#1694: oh
alstroemeria313#1694: well i'm just assuming it isn't because i don't know it is
cfoster0#4356: DDIM is also non markovian fwiw
alstroemeria313#1694: yeah
alstroemeria313#1694: and it does work
StellaAthena#3530: Shouldn’t it always be non-markovian?
alstroemeria313#1694: but when i tried lerping between the noise and the reals
alstroemeria313#1694: in image space
alstroemeria313#1694: i got bad results
alstroemeria313#1694: DDPM is Markovian though
StellaAthena#3530: As I think about it, it’s supposed to be dependent on only x_0
StellaAthena#3530: Right? The intermediate terms cancel out
alstroemeria313#1694: you can just sample from intermediate timesteps quickly bc of the properties of the Gaussian
StellaAthena#3530: Yes
alstroemeria313#1694: but it actually is Markovian
StellaAthena#3530: Oh
alstroemeria313#1694: that is, you could sample from intermediate timesteps of the forward process the slow way instead, adding independent Gaussian noise at each timestep and scaling the result down by the right amount
alstroemeria313#1694: and it would do the right thing
StellaAthena#3530: It’s like how we can efficiently sample from a Markov Model x_{n+1} = Mx_n by computing M^n x_0
alstroemeria313#1694: Yeah
StellaAthena#3530: I see
StellaAthena#3530: @alstroemeria313 what is the form of the forward pass you are doing?
alstroemeria313#1694: ```
one_hots = one_hot(reals)  # (batch, seq, vocab)
noised_reals = log(one_hots * (1 - t) + dirichlet_sample() * t)  # lerp toward flat Dirichlet, then logits
pred = model(noised_reals, t)  # predicted denoised logits
loss = cross_entropy(pred, reals)  # reconstruction loss vs. the clean reals```
StellaAthena#3530: If it’s nonmarkovian that should be pretty obvious experimentally
StellaAthena#3530: you just need to find a pair (x, y) such that Fx = Fy
StellaAthena#3530: And check if FFx = FFy
StellaAthena#3530: Right?
alstroemeria313#1694: i don't even explicitly have an F
StellaAthena#3530: Hmm
StellaAthena#3530: I’m 12% sure I’m doing the math correctly, but it seems like it is markovian to me
alstroemeria313#1694: what is F then
StellaAthena#3530: It’s the marginal right?
StellaAthena#3530: No that’s not quite right
StellaAthena#3530: The marginal is the limit of x_n?
alstroemeria313#1694: i'm not sure ^^;;
alstroemeria313#1694: ...I am still not sure what a marginal *is* tbh.
alstroemeria313#1694: going to bed probably
StellaAthena#3530: I’ll write up the derivation tonight
StellaAthena#3530: And post it here
StellaAthena#3530: Wikipedia is quite helpful here: https://en.wikipedia.org/wiki/Marginal_distribution
alstroemeria313#1694: ty :blobcutehappy:
StellaAthena#3530: I’m currently on my phone and watching a movie. Hence the 12% confidence lol. But when it’s over I’ll take out a piece of paper and do it right
chirp#4545: i wish wikipedia was more up to date about ML/AI stuff
chirp#4545: tried to find some info about diffusion models and... nothing
chirp#4545: robot learning is big these days but the wikipedia article is basically a stub, and as far as I can tell, the main article about robotics basically doesn't mention ML at all
chirp#4545: there isn't even an article about embeddings
StellaAthena#3530: @chirp Yeah wikipedia is useless for ML
chirp#4545: it's like they wrote it 10 years ago and forgot about it 😦
Kia#2550: Do we just wait until one of us writes about AI/ML on Wikipedia
Kia#2550: Or other people would do it
StellaAthena#3530: Be the change you want to see in the world.
𓅬 gabriel_syme 𓅬#3220: I'm seeing model performance (in terms of how accurate it learns to create design outputs given a prompt) vary quite a bit between different types of prompts (there are 3 types: location, adjacency, and room-specific). While this is over a lot of generated outputs (in the millions) I'm still not sure if it's anecdotal or not. Is anyone aware of studies over types of prompts in general I can look at?
The other weird thing is Neo tanking completely in learning how to allocate rooms, or rather be great | terrible at it https://cdn.discordapp.com/attachments/729741769738158194/909284679054655529/Performance_by_type_of_prompt.png
Kia#2550: Ah:thinkies:
p.b.#2673: Yesterday I finished “A thousand brains”, Hawkins book about his work on reengineering the human neocortex.
p.b.#2673: If the scaling hypothesis doesn’t hold, it’ll probably be because DL representations don’t support thought like the brain’s representations do. In that case Hawkins’ theory is the best game in town, I think.
p.b.#2673: However, I was pretty disappointed
p.b.#2673: Only the first part is about his work
p.b.#2673: And it leaves out all technical details, which makes it kind of pointless to me.
p.b.#2673: The other two parts read like a worse version of life 2.0
p.b.#2673: Which I already didn’t particularly enjoy
p.b.#2673: I am not that big on alignment, but at least I understand the problem. Hawkins doesn’t. Or doesn’t want to.
cfoster0#4356: The lack of details was super disappointing to me too. This is a much better resource from them https://link.springer.com/article/10.1007/s42452-021-04715-0
p.b.#2673: Generally Numenta’s research papers are quite readable. And I would also recommend the first book “On intelligence “ that I remember as being really good and that is consistently mentioned as big influence by AI researchers.
cfoster0#4356: Their recent YouTube videos are also pretty good
cfoster0#4356: They've been doing research "out in the open" a bit more
p.b.#2673: Yeah, they had some research discussions with invited speakers for example Steve Omohundro on GPT3 iirc
cfoster0#4356: And yeah I'm inclined to agree that if "approximately today's DL just scaled up" doesn't hit the mark, something like this might
𓅬 gabriel_syme 𓅬#3220: do you recommend me buying and reading it?
(disclaimer: have not read prior works from him)
Parker#3197: https://www.youtube.com/watch?v=Z1KwkpTUbkg
https://www.youtube.com/watch?v=6VQILbDqaI4
𓅬 gabriel_syme 𓅬#3220: I watched the second, nice one
Parker#3197: I liked watching those.
I skimmed through his book, but felt similarly as p. b. (though, I don't completely agree with him not at all understanding alignment) I think he seems really confident that he is on the right path. I think his personal background is really interesting, and didn't realize it until after watching those interviews.
𓅬 gabriel_syme 𓅬#3220: I guess since I have no other intro to his work, it might be nice reading it
p.b.#2673: I followed Numenta’s work the last seven years or so. So you’d probably get more out of it than I did, but still … maybe read a sample first and then decide. It’s a pretty short book.
p.b.#2673: And I have precious little time to spend on books so I might judge it harshly. I only read it because I am on parental leave, with the baby on my lap …
𓅬 gabriel_syme 𓅬#3220: hah I understand 🙂 I can barely go through a few pages at a time here as well
Parker#3197: I didn't know about Numenta until this year. I also haven't been paying close attention to research for years like other people. (I haven't personally been following published papers until this year too) I could have just missed important content, etc.
It just wasn't something that I felt would help *me* get significantly closer to helping contribute to like agi development at this time. I have seen some papers published (this year) that reminded me of his book, but they may only be related tangentially.
Parker#3197: I think a lot of people rightfully have a really positive impression of him and his work though.
Parker#3197: to me, he seems like someone who is seriously (and also looked at as credible) trying to make progress in this direction
𓅬 gabriel_syme 𓅬#3220: anyone have suggestions for deduplicating datasets? I think I'm too big of a tfds noob to use google's deduplicate :/
ajmoon#5293: hello all! Just saying hi to introduce myself - am a software engineer based in England, background in enterprise web apps - have recently started dipping my toe into ML - keen to follow along with what y'all're doing, will largely lurk for the time being!
alstroemeria313#1694: ```
i: 0, fake: b'Why du s ti kc cheJ aa imidln t rwfashyar?a ncuju ,esma tooo'
i: 1, fake: b'Is srid I dee nrosob tak so naauMd tpryshr? ntoceovAr esew'
i: 2, fake: b'How to I ts!a cc snNtto iehetiac oicpnmr g Itiekpnth?ae y'
i: 3, fake: b'Fesrces IanCelk n s eeredphitf ehp r kttammyeogtiw o?t B bwr'
i: 4, fake: b"I che vieis ot b'as tdmtsabr odiheyc ti emrun? nItnepn\\Msey"
i: 5, fake: b'Rerais etnentuginnnd ts berlehh okd t ee aiocstcorhyceoupa d'
i: 6, fake: b'where an Ithgt ksornctg i e fIo nn ikIm nImoe?hgen rajtc b'
i: 7, fake: b'What rd do ?e asIeo wwybis lntehI e a hema,I n si i?ms Iyt'
i: 8, fake: b"What's jea quantiae ndrtdc eo cb uoi oe ooNsnyCdbie u SfbrBa"
i: 9, fake: b'wherhere HVe i un ucn to otoct oknno s?yPun rusdI mata t r n' ```
nshepperd#2316: hey, that's sorta working?
Kharr#7888: It's starting to work a little. This is what got me annoyed, because a gpt model would do better than this in under an hour of training
Kia#2550: What?
alstroemeria313#1694: this is a pure transformer, no conv1d
Kia#2550: Is this diffusion?
alstroemeria313#1694: 38M params
alstroemeria313#1694: yep!
Kia#2550: Ow
Kia#2550: LM model Diffusion
Kia#2550: Ow god:surprise:
Kharr#7888: How long has it been training?
alstroemeria313#1694: overnight
Kharr#7888: Are you planning on trying the version with the added depthwise conv?
alstroemeria313#1694: yeah but i have trouble getting it to train reliably
Kharr#7888: Is the problem that it gets stuck in a saddle point?
alstroemeria313#1694: i don't know
alstroemeria313#1694: it may be some sort of bad local minimum?
Kharr#7888: Have you tried making the noise schedule easier at the start and then more difficult over time?
Kharr#7888: Basically.. kickstart it with an easy problem
𓅬 gabriel_syme 𓅬#3220: @Kharr do you think this is a ~~not entirely silly~~good idea
https://discord.com/channels/729741769192767510/747850033994662000/909000326173032498
𓅬 gabriel_syme 𓅬#3220: my naive idea: keep decoding left-to-right for N tokens, from locations you sample in the context using a schedule
(gotta be honest, I don't even know if this is diffusion proper)
𓅬 gabriel_syme 𓅬#3220: I was just wondering if at each step you are shaping the context in different ways for subsequent predictions, or smth hand wavy like that
alstroemeria313#1694: btw can we learn like... how do you learn an invertible transform
alstroemeria313#1694: like, guaranteeing you can actually invert it
alstroemeria313#1694: how would you parameterize it
Kharr#7888: I'd start with an orthogonal init
alstroemeria313#1694: how do you stop it from becoming non-invertible though
alstroemeria313#1694: also it has to use like, 3x3 convolutions
StellaAthena#3530: @alstroemeria313 because the product of othogonal matrices is orthogonal
alstroemeria313#1694: right but it has to stay invertible during optimization
StellaAthena#3530: Right, that's a whole thing
CRG#8707: Revnets? https://arxiv.org/abs/1707.04585
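The RevNet idea in brief, as a minimal sketch: additive coupling layers are invertible by construction, whatever the residual functions F and G are.
```
import torch.nn as nn

class ReversibleBlock(nn.Module):
    # y1 = x1 + F(x2), y2 = x2 + G(y1); invert by subtracting in reverse order.
    def __init__(self, f, g):
        super().__init__()
        self.f, self.g = f, g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2
```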
Kharr#7888: Maybe a loss to encourage it?
alstroemeria313#1694: mm~
alstroemeria313#1694: like, normalizing flows would do the thing but i think they're bad
alstroemeria313#1694: at least for images
alstroemeria313#1694: wait, if a revnet is reversible then it has to not lose information which means it can't learn certain functions?
alstroemeria313#1694: how do people make classifiers with them then
CRG#8707: Only the trunk is reversible, the classification head doesn't have to be.
alstroemeria313#1694: or do they just stick non-reversible layers at the end and checkpoint the activations
CRG#8707: Like, if the logit layer is initialized with 0s, then it's not really reversible.
alstroemeria313#1694: ok so the sort of thing i want to do is. have an invertible network then stick a projection to lower dim space at the end.
alstroemeria313#1694: and then use a contrastive loss on the lower dim space.
alstroemeria313#1694: wait
alstroemeria313#1694: So my input and output have to be the same dimensionality too right
alstroemeria313#1694: If I want to be able to reverse it *all the way back to the input*
CRG#8707: Yeah
CRG#8707: Well, you can fold the dimensions
alstroemeria313#1694: this is gonna be difficult, can i use like pixelunshuffle
alstroemeria313#1694: to downsample
CRG#8707: Yeah I think you can
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/909464460203278346/c93f20ddd1c2885759d48edef83f4476.png
CRG#8707: <https://openreview.net/forum?id=HJsjkMb0Z>
alstroemeria313#1694: oh
alstroemeria313#1694: that just is pixelunshuffle isn't it
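Pixel unshuffle is a pure reindexing, so it downsamples losslessly and inverts exactly; a quick check:
```
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)
down = nn.PixelUnshuffle(2)  # (1, 3, 32, 32) -> (1, 12, 16, 16), lossless
up = nn.PixelShuffle(2)      # exact inverse
assert torch.equal(up(down(x)), x)
```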
alstroemeria313#1694: so we can project up to higher channel counts inside F and G, right?
alstroemeria313#1694: so long as we then project back?
alstroemeria313#1694: what are the two x_1 and x_2 backbones? where do you get the inputs? are they just the same input duplicated?
CRG#8707: You typically split along the channels
alstroemeria313#1694: ...but i have three channels
CRG#8707: Hm, I think the whole reversible part is after the first conv / embedding layer, so after you have projected into higher dim.
alstroemeria313#1694: yeah that's bad
alstroemeria313#1694: bc i need to be able to invert back to the input
CRG#8707: I think nothing stops you from using a duplicated input though
alstroemeria313#1694: but what if i get different things for the two inputs when i invert it
CRG#8707: You could also have a learned input for the other branch
alstroemeria313#1694: ...but what if the thing i get when i invert it differs
alstroemeria313#1694: i could pixelunshuffle first i guess, then i would have an even number of channels
alstroemeria313#1694: this is ugly :/
StellaAthena#3530: If anyone's bored, I have a low-lift high impact task that should take like 30 minutes if you know C (I don't lol): https://github.com/EleutherAI/lm-evaluation-harness/issues/231
EricHallahan#1051: What exactly is the difference between what exists now and what was done in *Language Models are Few Shot Learners*?
EricHallahan#1051: Just the analysis?
StellaAthena#3530: The current code does the same duplication identification as that paper, but then goes and creates a deduplicated training set rather than a deduplicated eval set.
EricHallahan#1051: I'll look into it in a little bit.
elderfalcon#4450: Seems nice! But it feels like the noise might be a bit too aggressive in some kind of way due to the coarseness of the outputs?
elderfalcon#4450: You're trading off expressivity within your latent space via a constant structure for gradient reconstruction + accuracy benefits.
It's a nice constraint I think but highly linear, so whatever boat floating is needed I guess.
alstroemeria313#1694: a normal autoregressive transformer the same size as the text diffusion model trains quickly and does much better: ```
i: 0, fake: b'Is Kringnal(50-Difficult called) if I need to liquids find '
i: 1, fake: b"if a website were i can find a 16th100's life but we home c"
i: 2, fake: b'Do the video go about divorce in the world scam? Technhame'
i: 3, fake: b'how do i context rideo on one slove? hi needs to remove to '
i: 4, fake: b'i think my boss but dont but i dont want to know how much t'
i: 5, fake: b'Help with Karma City European? My prepare bord asks this q'
i: 6, fake: b'VEXERINCE PLEASE HELP. GIVE WITH ME YOUR more investment - '
i: 7, fake: b"Why are 'you' the world? are chest in indy at the septate j"
i: 8, fake: b'What do u do what people think how to become and help? Appa'
i: 9, fake: b'has anyone heard of a professional info hawk? right now it '```
alstroemeria313#1694: hm trying an autoregressive model on cifar-10 pixels rn
alstroemeria313#1694: the model outputs the r, g, b means and log variances for the next pixel
alstroemeria313#1694: hoping the log variance output + sampling during inference will help it
alstroemeria313#1694: i could output the lower triangular part of a log covariance matrix rather than per-channel variances but that might be weird
alstroemeria313#1694: eheh https://cdn.discordapp.com/attachments/729741769738158194/909494971634188368/demo_00001-11.png
alstroemeria313#1694: this may take a while.
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/909499295701532703/demo_00002-9.png
alstroemeria313#1694: Oh I should be clamping to -1, 1 during sampling right
Kharr#7888: Is this encoder-decoder or just decoder?
alstroemeria313#1694: decoder-only
alstroemeria313#1694: unconditional
alstroemeria313#1694: loss is negative log likelihood
Kharr#7888: Should work decently well
alstroemeria313#1694: people have told me that outputting just the mean from an autoregressive model didn't work well
alstroemeria313#1694: so people do stuff like output logits for the 256 possible pixel values instead
alstroemeria313#1694: and sampling from the output distributions during inference
alstroemeria313#1694: so i am outputting mean *and variance* and sampling
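The loss being described, as a minimal sketch (assuming the model's 6-dim output head is split into per-channel means and log variances):
```
import math
import torch

def gaussian_nll(mean, log_var, target):
    # Negative log likelihood of target under a diagonal Gaussian.
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()
                  + math.log(2 * math.pi)).mean()
```
PyTorch also ships `torch.nn.GaussianNLLLoss`, which takes variances rather than log variances.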
Kharr#7888: You can do that.. personally I'd probably do that except with a separate tied embedding+lm_head per channel
NN_47#1886: is it experimentally true that a large model pre-trained and fine-tuned on a language benchmark will always perform better than a large model trained directly on that benchmark without fine-tuning
alstroemeria313#1694: what's that do?
alstroemeria313#1694: i'm just projecting down from d_model to 6 rn
Kharr#7888: You feed in RGB (3 different embeddings) --> mix into one hidden state --> transformer --> 3 lm_heads (R,G,B) sampled at the same time per pixel. This will force your hidden state to encode all 3 channels and allow the model to sample in RGB in parallel.
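A sketch of that scheme (names hypothetical): one embedding table per channel, summed into a single hidden state, with each table reused as a tied output head.
```
import torch.nn as nn

class RGBEmbedHead(nn.Module):
    def __init__(self, d_model, levels=256):
        super().__init__()
        self.embeds = nn.ModuleList(nn.Embedding(levels, d_model) for _ in range(3))

    def embed(self, rgb):  # rgb: (batch, seq, 3) int64
        return sum(e(rgb[..., i]) for i, e in enumerate(self.embeds))

    def logits(self, h):  # h: (batch, seq, d_model) from the transformer
        # Tied heads: each channel's embedding matrix doubles as its projection.
        return [h @ e.weight.T for e in self.embeds]  # three (batch, seq, levels)
```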
EricHallahan#1051: https://blog.eleuther.ai/tuning-on-eval-harness/
alstroemeria313#1694: oh. i am just projecting up from 3 to d_model rn
alstroemeria313#1694: ...should i be running the input through fourier features, actually
alstroemeria313#1694: so to increase its discrimination ability for small differences in the input values
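One common form of Fourier features for this, sketched (the octave-spaced frequency choice is an assumption):
```
import math
import torch

def fourier_features(x, n=8):
    # Map each scalar to sin/cos at n octave-spaced frequencies, so small
    # differences in the input become easier to discriminate.
    freqs = 2 ** torch.arange(n, dtype=x.dtype, device=x.device) * math.pi
    angles = x[..., None] * freqs  # (..., channels, n)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)
```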
MicPie#9427: There are also promising mixed setups: https://arxiv.org/abs/2111.04130
Kharr#7888: Depends on how much data you have. The short answer is yes since language is so diverse and there are way more word permutations than there are supervised data examples normally.
StellaAthena#3530: Also, the wording of the question allows you to back door multiple OOMs of data into the pretrained and fine-tuned model
NN_47#1886: this suggests we can perform better or equally without large pre-training on some tasks. But it's not as conclusive as I was looking for.
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/909503612890267769/demo_00003-5.png
alstroemeria313#1694: on reflection this model is probably too big for cifar-10
Sphinx#2092: No. We've already seen this not work out for translation in some cases.
NN_47#1886: but yes, it is a mixed setup relying on general data portions similar to the task.
alstroemeria313#1694: also wow sampling is slow
alstroemeria313#1694: it's bc i'm not caching though.
NN_47#1886: for example ?
EricHallahan#1051: This just a transformer?
alstroemeria313#1694: yep
Sphinx#2092: https://cdn.discordapp.com/attachments/729741769738158194/909504286671646781/unknown.png
Sphinx#2092: https://cdn.discordapp.com/attachments/729741769738158194/909504335166185592/unknown.png
Sphinx#2092: The problem is that people in NLP usually study low-resource tasks, so building a model solely on that data is hard.
NN_47#1886: wow strange
Sphinx#2092: Once you make it fair, then the picture is different.
Sphinx#2092: In fact, sometimes even naively using pretrained models will justout right hurt you, e.g. https://cdn.discordapp.com/attachments/729741769738158194/909504765975756850/unknown.png
NN_47#1886: hmm exactly
Sphinx#2092: Of course, there is also the chance that this is something special about translation, who knows.
NN_47#1886: may be some scaling foot in the door
StellaAthena#3530: I think that the lack of a good data augmentation scheme is a big thing
StellaAthena#3530: We still don’t know how to do the equivalent of random crops and rotations very well.
elderfalcon#4450: The professional info hawks seem to have opinions on this. We must have missed asking them first. 😬
elderfalcon#4450: If I'm understanding this correctly -- quite clever. Very cool. Love it.
EricHallahan#1051: Oh is it like a logistic mixture?
elderfalcon#4450: I think this was the original autoregressive, oh gosh, I forget what it was called and now Eric will post the paper lmao, but it's the autoregressive pixel CNN (and I think they had + and ++ varieties too?) Or something like that. From waaaaaaaaayyyyyy back when.
alstroemeria313#1694: image gpt?
EricHallahan#1051: Yeah a logistic mixture.
alstroemeria313#1694: no it's a diagonal multivariate gaussian
elderfalcon#4450: It's not guaranteed but generally this is true for nearly all pretrained models if the original task has enough/big enough support over the manifold of possible tasks.
That's just an IMO though, have some strong personal reasons about some of the details w.r.t. that
alstroemeria313#1694: the thing i am doing that is.
EricHallahan#1051: https://arxiv.org/abs/1606.05328
https://arxiv.org/abs/1609.03499
elderfalcon#4450: Silly Billy, no such thing. :salute:
alstroemeria313#1694: eheh~
alstroemeria313#1694: i mean it will overfit
elderfalcon#4450: The man the myth the legend, @alstroemeria313 he did it again.
Pog. Total Pogchamp. :salute:
NN_47#1886: even if in the future pre-trained models win given some equality of resources versus models without pre-training, my intuition is that within that pre-trained model there must be some specialized module for translation, just like we see specialized modules in the brain. I mean, we can't have half of the neurons active all the time or the whole thing working for every task.
Sphinx#2092: ...or we just pretrain with translation?
Sphinx#2092: big brain?
elderfalcon#4450: Hmm, interesting. Maybe some distribution perturbations at a tiny level will keep the learned distribution from settling in too much beyond a certain limit. Like, maybe it will bonk the tail end of features, but I'm sure there's some noise floor-nearby perturbation to the distribution that gets you close.
elderfalcon#4450: Pretraining on a large enough corpus does this. There's one paper that shows BERT could XOR up to 34-64+ digit numbers with ~100% accuracy with 0 fine-tuning of the normal weights, just the layer norm weights.
elderfalcon#4450: I'd say again it's more about the distribution and coverage of the input dataset rather than a magic "pretrained or no". There's nothing special about pretraining inherently necessarily -- you could "pretrain" on your own dataset.
I think it comes down to the properties of the data that you're pretraining on, ya know?
NN_47#1886: well it would surely give a good fight and maybe win 🤔
NN_47#1886: yes, but I somehow got the impression from the scaling laws discussion and such that after some point it will be over for models that do not use general pre-training, but it seems there is a lot of gray area here depending on the nature of the tasks and data.
elderfalcon#4450: Indeed.
I think the rough rule of thumb that I've seen, which has yet to go south for me, is: if pretrain set >>> target set, then 👍
But it's hard cause how then does one determine the causality of what makes pretraining on a big set work downstream?
You're also basically gambling saying that the structure of the network that's pretrained vs randomly initialized, semi-orthogonal weights will be better. Sometimes maybe the random initialization is better by a bit, I think. 😬
NN_47#1886: regarding causality, isn't the manifold of possible tasks the answer?
elderfalcon#4450: I mean yes, but the specific causes of generality within that set I think are exceptionally hard to determine as of yet. I'm sure it'll be established at some point, it might just take us a while to get there (as far as I/we know! :D 😬 😁)
NN_47#1886: oh I see, within the general set, that area is to be explored.
someKindaBean#8471: the only NLP data augmentation I've seen that kind of worked was using paraphrasing (through round trip translation)
someKindaBean#8471: https://arxiv.org/pdf/2010.12836.pdf
someKindaBean#8471: Section 6 and Figure 1 are the only reported results using augmentation, which show a slight improvement on summarization with a training corpus size of 10
alstroemeria313#1694: posted this on twitter https://twitter.com/RiversHaveWings/status/1459997733696069635
alstroemeria313#1694: so i can point to it when someone else inevitably comes up with a similar method and publishes it
StellaAthena#3530: @alstroemeria313 what’s holding you back from doing so? Time? Lack of interest in writing?
alstroemeria313#1694: i kind of have different incentives from academics, i get busy doing new stuff instead of writing things up in detail
alstroemeria313#1694: i could release code though
StellaAthena#3530: *resists urge to offer to help write a paper, poorly*
StellaAthena#3530: I’m not allowed to take on new projects until I finish some of my currently ones 😛
alstroemeria313#1694: eheh~
Kia#2550: This is impressive, the way you're doing this and discovering breakthroughs :thinkies:
StellaAthena#3530: Maybe write a short blog post, explaining the motivation + results conversationally? It’s a bit less ephemeral than Twitter IG.
Kia#2550: Nonetheless @alstroemeria313 do you think, if the diffusion VQVAE is scaled up, it would be better for image gen when paired with CLIP? It's much smaller than most VQVAEs.
alstroemeria313#1694: we can't actually guide a diffusion vqvae with CLIP afaik
alstroemeria313#1694: At least I haven't figured out how.
Kia#2550: Oww, That's interesting
alstroemeria313#1694: Because I'd have to backprop through 100 steps of decoding or smth
alstroemeria313#1694: To be able to change the tokens.
alstroemeria313#1694: And we could guide the *diffusion* process but we might run into similar stuff as guiding upscalers, where the prior from the conditioning is too strong
alstroemeria313#1694: And we wouldn't be able to change the high level structure of the image
Kia#2550: Hmmm, That's definitely interesting
alstroemeria313#1694: the diffusion vqvae is mostly meant for a dall-e type thing
Kia#2550: Ow
Kia#2550: That makes sense yeah, it can probably be better than their VAE to be honest
alstroemeria313#1694: it is definitely better than openai's
alstroemeria313#1694: but the thing to beat is vqgan
StellaAthena#3530: @alstroemeria313 The thing to beat for what?
alstroemeria313#1694: visual quality
alstroemeria313#1694: maybe reconstruction FID if we have the compute to evaluate it
StellaAthena#3530: Okay. I wasn’t sure if you were talking about visual quality of VQGAN or something multimodal about VQGAN-CLIP
alstroemeria313#1694: reconstruction FID is how VQGAN is evaluated, so
ersatz#0001: Do you guys know about some public university/consortium/research lab or something that is working on reproducing GPT-3 and planning to release the trained model? I know that France is giving access to a supercomputer for the big science stuff
cfoster0#4356: Stanford maybe
StellaAthena#3530: Us?
alstroemeria313#1694: What is with this loss plot https://cdn.discordapp.com/attachments/729741769738158194/909587024598683648/Screen_Shot_2021-11-14_at_3.33.58_PM.png
alstroemeria313#1694: (It's cross-entropy)
alstroemeria313#1694: It isn't PyTorch Lightning this time and yes I am shuffling
StellaAthena#3530: In terms of people who have publicly stated this goal and who have released home-grown models bigger than GPT-2, we might be the only game in town. Big Science and AI Sweden both have the ambition and potentially the capability, but neither has released a large model yet.
StellaAthena#3530: Damn, there goes my two suggestions 😛
inox#5400: those sawtooth spikes aren't once per epoch, are they?
alstroemeria313#1694: i think they are
inox#5400: maybe you're shuffling but always the same shuffle on every epoch?
StellaAthena#3530: @alstroemeria313 it looks like the data difficulty is correlated with index position within an epoch.
alstroemeria313#1694: ```
from torch.utils import data  # import implied by the snippet

train_dl = data.DataLoader(train_set, args.batch_size, shuffle=True,
                           num_workers=8, persistent_workers=True, pin_memory=True)
```
|
StellaAthena#3530: The standard “shuffle your data” response is because the main way that happens is that the data isn’t shuffled, but maybe there’s contextually another reason why the difficulty might be correlated *even though it’s shuffled*?
alstroemeria313#1694: no idea why though
AI_WAIFU#2844: try shuffling your data moar
StellaAthena#3530: Try shuffling the data three times with different seeds and selecting from the three sets at random during training
alstroemeria313#1694: eheh~
alstroemeria313#1694: "if hitting it with one hammer isn't working, your hammer is broken, try three independent hammers"
StellaAthena#3530: As in, for any given epoch there’s a 33% it’s from seed 123, a 33% from seed 231, and a 33% chance from seed 312
StellaAthena#3530: Yeah basically
StellaAthena#3530: Though in my defense you are waving around a block of wood and asking me how come the nail won’t go in
inox#5400: this writeup of the most common dataloader bug recommends setting the numpy seed manually on every epoch <https://tanelp.github.io/posts/a-bug-that-plagues-thousands-of-open-source-ml-projects/>
inox#5400: and using `worker_init_fn`
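For reference, a minimal sketch of that kind of fix, the `worker_init_fn` recipe from the PyTorch docs; `train_set` here is a placeholder for whatever dataset is actually in use:
```
import random

import numpy as np
import torch
from torch.utils.data import DataLoader

def seed_worker(worker_id):
    # torch.initial_seed() already differs per worker and per epoch, so
    # deriving the NumPy/Python seeds from it stops forked workers from
    # repeating the same augmentation randomness every epoch.
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

train_dl = DataLoader(train_set, batch_size=64, shuffle=True,
                      num_workers=8, worker_init_fn=seed_worker)
```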
StellaAthena#3530: Like, if the loss is correlated with the index within an epoch of shuffled data then either:
a) it’s not shuffled
b) it is shuffled, but not in a way that permutes the order of the difficulty of the data
StellaAthena#3530: @alstroemeria313 for the record, this will not solve the problem. But the way it fails to solve the problem may provide insight into what’s going wrong
inox#5400: if you're desperate cache the output of the dataloader into a queue in memory and shuffle that during training
alstroemeria313#1694: not using numpy's rng (or python's)
inox#5400: huh that post implies that DataLoader is using numpy's rng internally but by default it uses RandomSampler and that's not using numpy's rng internally <https://github.com/pytorch/pytorch/blob/1adeeabdc0c8832420c091c5c668843768530d7f/torch/utils/data/sampler.py#L112-L126>
alstroemeria313#1694: people often use the numpy rng in their custom data loading/augmentation code (especially the augmentation code)
alstroemeria313#1694: it can only affect you if you do this
|
alstroemeria313#1694: or the python rng ofc
ersatz#0001: AI Sweden?
ersatz#0001: first time I've heard about this
StellaAthena#3530: They’re a pretty new group, funded by the government of Sweden to train large language models in Swedish. I met with them last week, and they’re currently thinking about transfer learning GPT-J into Swedish as a first step.
Kia#2550: Is EleutherAI An Official Non-profit organization?
Kia#2550: Or just a 'Group'
StellaAthena#3530: It’s a bunch of wankers on Discord
Kia#2550: That makes sense yeah...
Kia#2550: ||:goose14:||
StellaAthena#3530: Our position has always been that Being an Official Non-Profit is a lot of work and if there’s a compelling reason to do it we are open to it, but unless there’s a good reason to we’ll spend our time doing something else
Kia#2550: Oww,Sure sure that makes actual sense to be honest
ersatz#0001: Software in the Public Interest (SPI), the nonprofit that is handling Debian, Arch Linux, the Open Bioinformatics Foundation, PostgreSQL, haskell.org, etc., could host the project to give you the ability to receive and manage funds and that kind of things without having to deal with legal issues but I suspect you already know about them
EricHallahan#1051: That would limit our operational freedom though.
StellaAthena#3530: If we can just join their association or whatnot and then people can donate large sums to us without reporting it to tax authorities that sounds pretty sus
ersatz#0001: basically you don't have to deal with anything but your own project, no legal stuff, everything about money is handled by SPI
ersatz#0001: https://en.wikipedia.org/wiki/Software_in_the_Public_Interest
StellaAthena#3530: That sounds like tax evasion
StellaAthena#3530: Or money laundering
ersatz#0001: how?
StellaAthena#3530: Or something similarly dubious
|
ersatz#0001: I'm confused
bmk#1476: 1. there is absolutely no way we're getting remotely close to enough money from small individual donors to come close to covering our compute and labor costs
2. if large donors want to give us a lot of money, we can figure things out on a case by case basis
ersatz#0001: SPI was founded to accept corporate donations for Debian btw
ersatz#0001: in the mid 90s iirc
Parker#3197: what color hat does that get them?
ersatz#0001: but if you don't intend to be sponsored by Google etc. then yeah I guess I don't see the point
bmk#1476: we are already indirectly sponsored by google
ersatz#0001: in a way I guess
Parker#3197: I think someone could do it. singularitynet has a market cap of $120 million. idk if we have like enough people with like a well known and successful background in research to do it though
bmk#1476: singularitynet is a total meme though
Parker#3197: and that also want to commit to being responsible in that way to train models that are millions of dollars
Parker#3197: exactly my point
Parker#3197: kickstarter or some method of funding like that
ersatz#0001: without even googling is singularitynet crypto stuff
bmk#1476: worse
ersatz#0001: no token? at all?
bmk#1476: it's crypto stuff *by goertzel*
ersatz#0001: lmao
ersatz#0001: do you access it with Sophia?
|
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/909611723445661756/unknown.png
bmk#1476: *these sentences dont mean anything*
bmk#1476: look upon this marketing copy and weep
StellaAthena#3530: EleutherAI is one of, if not *the*, lab with the best track record for open source LLM research in the world.
StellaAthena#3530: Our lack of funding isn’t because we are unknown. It’s because nobody actually wants to fund open source massive LLM research.
cfoster0#4356: I gut reacted to this but idk if I could point to an indisputable alternative
cfoster0#4356: Except maybe Yejin Choi's group?
Parker#3197: it wasn't really the best wording. I think I just meant that it would be easier with really well known researchers to pull something like this off imo
Parker#3197: it could be possible to do it with EleutherAI too, idk
StellaAthena#3530: I’m not going to say we are indisputably in the lead, but I think we are indisputably in the conversation for being in the lead.
cfoster0#4356: Would be interested in outsider opinions on this.
bmk#1476: I think there's definitely a category in which we're in the lead
cfoster0#4356: I would suspect we're both probably overestimating
bmk#1476: I don't know of any other fully volunteer research group anywhere near this successful
StellaAthena#3530: “Colin Raffel and whoever he happens to be working with at the time” could be the answer as well
bmk#1476: so what you're saying is time to convince Colin to join eleuther
ersatz#0001: what about the "big science" stuff?
bmk#1476: bigscience is in large part driven by huggingface
cfoster0#4356: *angry HuggingFace noises*
AI_WAIFU#2844: yeah, memes
|
bmk#1476: :goose7:
StellaAthena#3530: I think they don’t have much of a track record yet. I expect them to be a major player, but there’s yet to be a released model that they’ve trained from scratch in house that’s > GPT-2.
bmk#1476: I feel like bigscience is significantly distinct from eleuther in a bunch of ways
bmk#1476: I feel like most other places even remotely similar to eleuther, like bigscience and MLC, have a) much closer ties to one company b) very different culture from eleuther
bmk#1476: eleuther has weak ties with like half a dozen companies
StellaAthena#3530: MLC seems pretty divorced from Uber nowadays
bmk#1476: basically all of their notable papers were by Uber people afaict
bmk#1476: I don't actually know what they've done recently
bmk#1476: also I don't actually know where all the MLC activity happens
bmk#1476: they have a discord but there are fewer messages in the entire discord than there are messages containing the words goose or geese in this server
ersatz#0001: what makes me think Eleuther is different is that most people here are at least familiar with the concept of alignment
bmk#1476: that's another thing yeah
Kia#2550: I think being able to interact and talk to people in Eleuther makes it different from other institutions or communities :thinkies:
bmk#1476: I think eleuther is also unique in that we have ties with a whole bunch of orgs
bmk#1476: it's good for redundancy
Kia#2550: And Knowing people can come in and Help around and contribute on existing work is Different:thinkies:
Kia#2550: (And They're actual People That has time to talk other things than the actual work)
ersatz#0001: that's why I care about Eleuther, not the language models, that alone makes this place valuable to me
elderfalcon#4450: Blockchain AI in the wild, you herd it here folx
elderfalcon#4450: Yeah the PyTorch lightning init function I think is one of the best for those, it's been the simplest for me. This bug is one of the weakest points in PyTorch IMO, aside from a few other dataloader nitpicks (though I guess not nearly as bad as the TF dataloaders....)
|
What's bad is this one is a PAIN to debug unless you're looking closely, know about it, or maybe have a second bug that uncovers this one.
elderfalcon#4450: Um.
That is literally the entire purpose of this group.
Though I guess it's expanded out over time! :D
Woop Woop! :D
ersatz#0001: I know
CRISPR IQ300#6848: Are any companies getting massively rich yet from using Alphafold?
elderfalcon#4450: 10+ year pharmaceutical pipeline says "probably not yet"!
But I'm sure there's stuff outside the pipeline that could be benefited/augmented, especially if it's used as a culling process.
I might encourage you to check out the sister AlphafoldV2 server if you want to watch the forays (it's more technical so you might want to just browse around for the info you're looking for if you're interested. But it is a good server! :D)
alstroemeria313#1694: wow a GPU broke on a box i was using
alstroemeria313#1694: i got an "unrecoverable ECC error"
alstroemeria313#1694: and the training run died
alstroemeria313#1694: and it happened again when i restarted it
|
rwamit#5964: Hey folks, I have a question in the CV domain:
I've been tasked with doing object detection on videos (the goal is to detect trucks and cars), but the team wants to do it without using a GPU and with fast inference time.
Meaning my inference model should be able to do 'fast' object tracking and detection on videos via CPU resources alone.
Is this possible? I haven't come across any articles that talk about this.
alstroemeria313#1694: ...
alstroemeria313#1694: How do I *see the error* that is killing the run
MicPie#9427: https://paulbridger.com/posts/video-analytics-pipeline-tuning/
Spacecraft1013#5969: .
alstroemeria313#1694: ```
[ 2536.579615] NVRM: GPU at PCI:0000:80:00: GPU-a6be7bac-8d11-d546-b12d-a60a594348d6
[ 2536.579622] NVRM: Xid (PCI:0000:80:00): 64, pid=70597, Row Remapper Error: (0x00000005f19a9780) -
All reserved rows for bank are remapped
[ 2536.656287] NVRM: Xid (PCI:0000:80:00): 94, pid=70597, Contained: SM (0x1). RST: No, D-RST: No
```
nshepperd#2316: that sounds like code for 'the gpu's cooked'
alstroemeria313#1694: yep
alstroemeria313#1694: it's dead
|
alstroemeria313#1694: #8 in the range 0-15
alstroemeria313#1694: so does GCP have a way to report hardware issues
alstroemeria313#1694: i can still train on the box if i tell my code to ignore that gpu
MicPie#9427: strange that GCP doesn't recognize the broken GPU and allocates you a new one, or at least tells you about it. 🤔
alstroemeria313#1694: there are 16 in the physical box presumably and the VM has all 16
alstroemeria313#1694: it's not telling me about it
alstroemeria313#1694: i had to use dmesg
MicPie#9427: ah, ok, interesting 🙂
elderfalcon#4450: Ooooooofffffff RIP
elderfalcon#4450: Sorta on the edge of being the right place to ask, I'd guess/I think, especially if you're referencing articles for this. You might get better results in the machine learning subreddit. Or in a non language model server.
But some quick advice to help you is to check out the mobile phone object detection networks, and whatever the latest in those is. Then there's a bunch of inference optimization you can do from there specifically for cpus. But I'd check the subreddit for more info, or maybe a Discord server that is more specialized for that.
rwamit#5964: Thank you, guys!
tpapp157#3643: OpenCV has some simpler more traditional CV techniques that can be used for object detection and tracking. Performance can be limited though depending on the exact problem.
alstroemeria313#1694: ...so how *do* you actually compute log likelihood given a diffusion model and some input
StellaAthena#3530: Do you know what the likelihood function is supposed to be? In the DDPM paper and the work I was doing with the discrete model the likelihood model is known
alstroemeria313#1694: it is
StellaAthena#3530: So… what’s the issue?
StellaAthena#3530: $LLH(x|\theta) = \log p_\theta(x)$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/909913061970833458/193204646687408129.png
|
alstroemeria313#1694: i get confused because i can discretize the process in different ways and also how does it fit in with DDIM type deterministic sampling
alstroemeria313#1694: and also i have implemented a calculation and it does not seem to be giving me sensible values
alstroemeria313#1694: *hugs* :blobcutehappy:
rwamit#5964: I tried OpenCV but the problem I faced was:
Sometimes detection can be erroneous, meaning multiple objects get encapsulated in a single bounding box.
The goal is to detect cars and trucks in real time without using a GPU.
The footage is from a moving camera.
tpapp157#3643: That's going to be a problem. DL-based object detectors are the state of the art by a significant margin. Non-DL object detectors which can run quickly on CPU all have notable drawbacks and failure modes that may or may not be possible to engineer around depending on the specific task.
𓅬 gabriel_syme 𓅬#3220: question: if for some reason one realizes that increasing context length does not help on the task at hand, is it okay to simply train on smaller ctx for efficiency and such? Smaller here means less than the original model was pretrained on
alstroemeria313#1694: probably
tr416#8033: Does anyone know when an improvement on GPT-J is coming out?
EricHallahan#1051: https://www.eleuther.ai/faq
alstroemeria313#1694: ohh, yeah, you can just decrease context length in fine-tuning i think, you may want to truncate the positional embedding to stop people from trying to feed in stuff longer than you fine-tuned on
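A rough sketch of that truncation for a model with learned positional embeddings; the `wpe` and `n_positions` names assume the Hugging Face GPT-2-style implementation, and `512` is just an illustrative shorter context:
```
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
new_ctx = 512  # the shorter context you fine-tuned with

# Swap in a smaller position table holding only the first new_ctx rows.
old_wpe = model.transformer.wpe
new_wpe = torch.nn.Embedding(new_ctx, old_wpe.embedding_dim)
new_wpe.weight.data.copy_(old_wpe.weight.data[:new_ctx])
model.transformer.wpe = new_wpe
model.config.n_positions = new_ctx  # inputs longer than new_ctx now error out
```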
𓅬 gabriel_syme 𓅬#3220: spectacularly uninteresting https://cdn.discordapp.com/attachments/729741769738158194/910166032373661778/unknown.png
alstroemeria313#1694: or use alibi i guess, if the original model was alibi
EricHallahan#1051: Absolutely, if you don't need the context there is little reason to train with a larger one.
𓅬 gabriel_syme 𓅬#3220: no alibi model yet is there? maybe I should train one
|
StellaAthena#3530: Big Science is training a 1.3B one
𓅬 gabriel_syme 𓅬#3220: nice! thanks for the heads up Stella 🙂 is it on the pile?
StellaAthena#3530: Either the Pile or OSCAR, let me double check
StellaAthena#3530: @𓅬 gabriel_syme 𓅬 OSCAR-en
𓅬 gabriel_syme 𓅬#3220: nice thanks 🙂 that was what you were discussing in the show and tell right, OSCAR seems to have better performance downstream
𓅬 gabriel_syme 𓅬#3220: will be interesting to test, can't wait till its out
StellaAthena#3530: Yes, but you have it backwards: the Pile has much better downstream performance than OSCAR
𓅬 gabriel_syme 𓅬#3220: oh dang, ehm that's awkward
𓅬 gabriel_syme 𓅬#3220: is OSCAR smaller then that's why? like chosen for compute reqs.
StellaAthena#3530: OAI's Secret Sauce > The Pile >> OSCAR, C4
kurumuz#5695: do you think the secret sauce is only data filtering?
kurumuz#5695: I dont think PILE is deduped, right?
EricHallahan#1051: My new headcanon is that the OAI dataset is called Secret Sauce™️.
StellaAthena#3530: No, the Pile is duplicated
kurumuz#5695: oh how so
EricHallahan#1051: Weighting.
kurumuz#5695: i see
StellaAthena#3530: We did an importance-weighted scheme inspired by what the GPT-3 paper reported doing
StellaAthena#3530: The per-component weights and effective sizes are on page 2 of the paper
EricHallahan#1051: IIRC investigations have shown that the weighting makes the Pile way more susceptible to memorization, but :citationneeded:
|
StellaAthena#3530: It lowers performance apparently, but IDK about memorization
https://arxiv.org/abs/2107.06499
kurumuz#5695: might make sense to give this a shot with a small test run
CRG#8707: Didn't that paper have a section on how GPT-Neo regurgitated more?
StellaAthena#3530: The results I reported at S&T is that we ran 40 different Eval Harness tasks on 1.3B models trained on various data, comparing GPT-Neo, Babbage, and three internally trained models: BS-OSCAR, BS-Pile, and BS-C4.
Babbage performed the best on 22/40 tasks
GPT-Neo performed the best on 6/40 tasks
BS-OSCAR performed the best on 4/40 tasks
BS-Pile performed the best on 6/40 tasks
BS-C4 performed the best on 2/40 tasks
For the tasks where Babbage did the best, the second best was:
GPT-Neo on 5/22 tasks
BS-OSCAR on 3/22 tasks
BS-Pile on 10/22 tasks
BS-C4 on 4/22 tasks
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/910174654101086258/52ce6f4cf62dd7c4ae66a74c4d2eac50.png
StellaAthena#3530: Oh lol. You're right
kurumuz#5695: does anyone want to work on pileV2
|
𓅬 gabriel_syme 𓅬#3220: damn how do I calculate that? 😄
StellaAthena#3530: I have a to-do list, if anyone is down to do some scraping and data processing we can get this done easily.
Kharr#7888: I wonder if this is related to a lack of dropout in Neo models
StellaAthena#3530: probably
EricHallahan#1051: The Neo models are pretty terrible lol, they are research artifacts.
𓅬 gabriel_syme 𓅬#3220: yeah Neo seems to suffer on my task compared to gpt2
𓅬 gabriel_syme 𓅬#3220: although the larger models less so
StellaAthena#3530: Neo was 100% a learning exercise
StellaAthena#3530: The fact that it does the best among non-babbage models in 11/40 tasks really surprised me tbh
kurumuz#5695: babbage is 2.7b right
kurumuz#5695: or was it 1.3
kurumuz#5695: ada was 350M? i think its a pretty good model for mere 350M parameters
𓅬 gabriel_syme 𓅬#3220: 350M seems like a really nice sweet spot for me, wish I had a J at that size
EricHallahan#1051: Lol, the fact that they were selling 350M is :ultraberk:
EricHallahan#1051: We never republished the 350M Neo model did we?
𓅬 gabriel_syme 𓅬#3220: don't think so
𓅬 gabriel_syme 𓅬#3220: (meaning I haven't seen one)
kurumuz#5695: wdym
EricHallahan#1051: Ada == 350M, and yet it outperforms GPT-2 by a significant margin.
kurumuz#5695: yeah, i was curious what you meant by they were selling 350m
|
StellaAthena#3530: @kurumuz babbage is 1.3
StellaAthena#3530: All the models in my chart are 1.3
kurumuz#5695: oh so its 1.3 neo too
kurumuz#5695: ic
kurumuz#5695: yeah babbage is v good as well
Kovy#4925: is wandb giving 404 errors for anyone else?
guac#4716: yeeeep
Kovy#4925: thank god lmao
Kovy#4925: it is not just
Kovy#4925: *me
guac#4716: i got logged out like an hour ago and just tried logging in again but 404ing lol
guac#4716: (tensorboard would never do this to me :sadge:)
Kovy#4925: https://twitter.com/yusuOSU/status/1460667599545638916
Kovy#4925: maybe that is why
zphang#7252: CV gotta render more images for their 60-page appendices
Sphinx#2092: or maybe it just says the NLP community is more responsible.
zphang#7252: or write better tex
tpapp157#3643: I mean the CV community has always been bigger than the NLP community going back many decades. Not sure why that should surprise anyone.
Kharr#7888: Google load balancers had an Oopsie but it looks resolved. Only about 5 min of downtime. I guess wandb is GCP based.
m_wAL99#1923: https://status.cloud.google.com/incidents/6PM5mNd43NbMqjCZ5REh
|
Louis#0144: @Teemochu how could u
StellaAthena#3530: My boss once got sent a paper to peer review for CVPR that had embedded videos
StellaAthena#3530: Because that’s something you can technically do in PDFs
EricHallahan#1051: That's really cursed.
EricHallahan#1051: Not the video part, the PDF part.
alstroemeria313#1694: hey i had an idea
alstroemeria313#1694: what if when training an autoencoder, before handing the latent to the decoder, i *zeroed out* the last k elements of the latent where k was sampled uniformly from 0 to the number of elements in the latent
alstroemeria313#1694: the autoencoder would learn to *order* the elements by how informative they were about the image, right?
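A minimal sketch of the idea as stated, assuming a `(batch, dim)` latent coming out of the encoder:
```
import torch

def truncate_latent(z):
    # z: (batch, dim). Sample k ~ U{0, ..., dim} per example and zero out
    # the last k latent elements before handing z to the decoder.
    batch, dim = z.shape
    k = torch.randint(0, dim + 1, (batch, 1), device=z.device)
    mask = torch.arange(dim, device=z.device).expand(batch, dim) < (dim - k)
    return z * mask

# in the training loop: recon = decoder(truncate_latent(encoder(x)))
```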
StellaAthena#3530: I don’t think so?
alstroemeria313#1694: oh?
alstroemeria313#1694: what would happen then
StellaAthena#3530: Oh I see what you’re going for
alstroemeria313#1694: i'm thinking in terms of producing sequences of elements i can truncate to choose a compression ratio
StellaAthena#3530: Upstream of the zero’ing you’re thinking it’ll learn patterns where earlier latents are more informative
alstroemeria313#1694: yes
StellaAthena#3530: To dodge the filter
StellaAthena#3530: Have you read any of the lit on interpolating between local minima of SGD
alstroemeria313#1694: so i can trade off rate and distortion by truncation
alstroemeria313#1694: not in detail
alstroemeria313#1694: how come?
|
StellaAthena#3530: There are many permutations you could apply to a problem or an algorithm that mathematically leaves it unchanged
alstroemeria313#1694: oh right
alstroemeria313#1694: that
StellaAthena#3530: However people have observed that SGD doesn’t learn equivalent minima equally likely
alstroemeria313#1694: yeah you can permute channels and permute them back in the next layer
alstroemeria313#1694: and you will get something that does the same thing but which will not produce good results when linearly interpolated with the original weights
cfoster0#4356: Yeah I think so
StellaAthena#3530: The reordering of the latents that you’re interested in is one such permutation
StellaAthena#3530: I worry that you may run into catastrophic optimization failures
alstroemeria313#1694: mm?
cfoster0#4356: "Ordered autoencoding" <https://openreview.net/forum?id=TSRTzJnuEBS>
alstroemeria313#1694: ohh ty :blobcutehappy:
cfoster0#4356: Soundstream calls it quantizer dropout I believe (both of these are for VQ but there's no reason you couldn't do it otherwise)
StellaAthena#3530: What if you tried GS
StellaAthena#3530: Graham Smidt
StellaAthena#3530: Schmidt?
StellaAthena#3530: However that’s spelt
alstroemeria313#1694: you mean PCA on the latent space?
StellaAthena#3530: Yeah
alstroemeria313#1694: and so you just transform to a whitened latent space, truncate, and transform back?
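Something like this, maybe: a sketch of PCA-truncating a batch of latents `Z` (full whitening would additionally divide the coordinates by the singular values):
```
import torch

def pca_truncate(Z, k):
    # Z: (n, dim) batch of latents; keep only the top-k principal components.
    mu = Z.mean(0, keepdim=True)
    U, S, Vh = torch.linalg.svd(Z - mu, full_matrices=False)
    coords = (Z - mu) @ Vh.T   # rotate into the principal basis
    coords[:, k:] = 0          # truncate the low-variance directions
    return coords @ Vh + mu    # rotate back to latent space
```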
|
PWNR#1546: https://aclanthology.org/N16-1135.pdf - your initial idea about zeroing things out reminds me of this
PWNR#1546: (Right-truncatable Neural Word Embeddings)
StellaAthena#3530: Yeah, that’s my thinking. I think the key advantage over your proposal is that it injects a lot less noise. The random latent truncation has the ability to block gradients, but you can do a reconstruction loss on the back transformed vectors
StellaAthena#3530: And I think that would give better signals for learning
alstroemeria313#1694: ahh
alstroemeria313#1694: how would you whiten the latent space? like you'd do PCA on a batch of latents?
alstroemeria313#1694: what if your batch size is low
StellaAthena#3530: Can’t you do online PCA
tpapp157#3643: I think the issue you'll run into is a lot of information redundancy in the latent code.
alstroemeria313#1694: oh?
StellaAthena#3530: Why would you be restricted to within-batch PCA
alstroemeria313#1694: well, i would have to keep old latents around to include them
tpapp157#3643: Like if your data has a certain number of natural dimensions. The encoder will need to compress all that information into 1D, and then also 2D, and then also 3D, etc. Unless you put some regularization to ensure the latent dimensions orthogonal or something.
elderfalcon#4450: Why truncate when you can distort with an inverse power law?
alstroemeria313#1694: since the latent space probably doesn't change *that* much from batch to batch
cfoster0#4356: Famous last words
StellaAthena#3530: Right, that’s why I brought up Graham Schmidt
alstroemeria313#1694: bc i am thinking in terms of producing an actual bitstream i can truncate to choose a compression ratio
alstroemeria313#1694: hm
cfoster0#4356: Making the encoding residual might help here
|
alstroemeria313#1694: i guess i could apply some sort of simulation of quantization noise to the latent instead
tpapp157#3643: Or do it like jpg and see if you can encode a series of coefficients over fourier features. Then based on how much data you want you can include more or less coefficients and it scales easily.
elderfalcon#4450: Yes ik but training from the ground up with a quantized value doesn't always help. You can always anneal over training, but no need to shoot yourself in the foot instead! :D
elderfalcon#4450: Like, I think that's the classic distillation/compression procedure these days, gradually anneal to within the quantized target range and then quantize at some magic moment.
elderfalcon#4450: But I'm a huge fan of continuous stuff vs just slommin' on through with discrete values, ya know? :D
elderfalcon#4450: So that may bias me.
StellaAthena#3530: Yeah, that’s another interesting idea. I am worried that the signal will be pretty noisy though, even if you prevent stop gradients with a residual
tpapp157#3643: Or maybe more like boosting, where each additional dimension models the unexplained signal leftover from prior dimensions.
cfoster0#4356: Yup. This is exactly what the "residual VQ" from soundstream (also known as multi stage codebooks, I think) does
alstroemeria313#1694: residual how?
StellaAthena#3530: Ooo that’s a better idea than mine
StellaAthena#3530: A residual that skips over the zeroing
StellaAthena#3530: f(zeroed(x), x)
alstroemeria313#1694: wouldn't that still pass information through
StellaAthena#3530: Yes.
cfoster0#4356: https://cdn.discordapp.com/attachments/729741769738158194/910259332719984640/Screenshot_20211116-150510_Adobe_Acrobat.jpg
StellaAthena#3530: My main issue with your suggestion is that it’s not actually providing a positive signal. It’s just preventing signal from coming through in some areas
StellaAthena#3530: I worry that that will make learning hard, which is why I suggested the reconstruction approach
EricHallahan#1051: This was exactly what I was thinking of. `:)`
StellaAthena#3530: @cfoster0 and @tpapp157 are saying to go with your original approach, but overcome the signal blocking with a residual layer
|
alstroemeria313#1694: you mean there are a bunch of different codebooks?
cfoster0#4356: Yes
cfoster0#4356: And you drop out codebook stages in the way you described
alstroemeria313#1694: ah
alstroemeria313#1694: and you pass gradients through by...?
alstroemeria313#1694: straight through, as usual?
cfoster0#4356: Up to you
cfoster0#4356: I don't think they specified
cfoster0#4356: Imagine Gumbel could work as well
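A minimal sketch of the residual VQ scheme as described in the chat (not SoundStream's exact implementation), with straight-through gradients and the number of active stages passed in so it can be sampled per step as the dropout:
```
import torch

def residual_vq(x, codebooks, n_stages):
    # x: (batch, dim); codebooks: list of (num_codes, dim) tensors.
    # Each stage quantizes the residual left over by the previous stages;
    # sampling n_stages < len(codebooks) is the quantizer dropout.
    residual = x
    quantized = torch.zeros_like(x)
    for cb in codebooks[:n_stages]:
        codes = cb[torch.cdist(residual, cb).argmin(dim=-1)]
        quantized = quantized + codes
        residual = residual - codes
    # straight-through estimator, as discussed above
    return x + (quantized - x).detach()

# per training step: n_stages = random.randint(1, len(codebooks))
```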
Some Point Process#3793: IIRC this paper used multiple codebooks for whatever reason: https://arxiv.org/abs/2106.04615
alstroemeria313#1694: oh, my first idea just worked https://cdn.discordapp.com/attachments/729741769738158194/910269234184089640/demo_00038.png
alstroemeria313#1694: just keeping the first k latent variables
StellaAthena#3530: Cool
alstroemeria313#1694: looks like the first variable decides whether the image is light with a dark center or dark with a light center
alstroemeria313#1694: (the leftmost column is letting 1 variable through, the others are evenly spaced up to letting them all through)
inox#5400: oh I did that on MNIST once
alstroemeria313#1694: ohh?
Dashiell#8739: Honestly v impressed by (and a little jealous of) your ability to have an idea and then just _implement_ it so quickly. That is a skill I'd like to cultivate
alstroemeria313#1694: i just checked and my "torchtest2" folder has like 400 separate Python scripts in it for different experiments
alstroemeria313#1694: the diffusion stuff is in there, there are 225 of those
|
inox#5400: dVAE with ordered dropout on MNIST https://cdn.discordapp.com/attachments/729741769738158194/910302801870061578/y.gif
inox#5400: https://cdn.discordapp.com/attachments/729741769738158194/910302805456220221/x1.gif
Dashiell#8739: So you're saying a first step is I should stop trying to optimize and organize my code before I've really gotten started? I mean, if you say so.....
alstroemeria313#1694: i usually just copypaste a similar script and modify it
bmk#1476: you should use pyfra™ instead
Dashiell#8739: (I was being sarcastic, I really should get over my perfectionist habits)
inox#5400: I emailed kingma for code on one of his papers once and it was 4 scripts, mostly copied code between them with small changes in each
alstroemeria313#1694: i clean my code up for colab notebooks
alstroemeria313#1694: my actual experiment code is scattered/messy
inox#5400: ml code accrues too much technical debt for researchers to maintain, so discarding it regularly is good
kurumuz#5695: i shill it every day too at this point
kurumuz#5695: need more contributors :chadgoose:
bmk#1476: :tribalism:
alstroemeria313#1694: diffusion version https://cdn.discordapp.com/attachments/729741769738158194/910309099328110643/demo_00100-19.png
alstroemeria313#1694: hm i should train longer
alstroemeria313#1694: before tweeting
Kia#2550: Is this the Upscaller?
Kia#2550: Wow
elderfalcon#4450: That's pretty freaking sick. :O
elderfalcon#4450: Samesies.
|
𓅬 gabriel_syme 𓅬#3220: Missed this one, interesting. Makes sense I guess. How can we get data for this? 🙂
> Intel clarified that Thread Director uses a pre-trained AI model internally
StellaAthena#3530: What is this from?
𓅬 gabriel_syme 𓅬#3220: intel's new architecture has efficient and performance cores. They have a silicon block called "Thread Director" that decides in real time where to allocate each of the hundreds of concurrent things running on Windows. In Windows 11, this runs on a pretrained model
𓅬 gabriel_syme 𓅬#3220: (this is from a benchmark page I was reading)
𓅬 gabriel_syme 𓅬#3220: btw, it's funny that this seems to have OOD issues (maybe, who knows). Older software somehow has the CPU running efficient cores on it (instead of performance cores), resulting in bad performance on Windows 11 😄
𓅬 gabriel_syme 𓅬#3220: it's actually pretty cool, almost AI on the Edge since it's built on silicon so offers hardware telemetry information to the system making decisions
Emad#9608: On the capture of academia by big tech https://interactions.acm.org/archive/view/november-december-2021/the-steep-cost-of-capture
elderfalcon#4450: Why in tarnation would they just run a raw NN on this. Like...wat? Why not do something like distillation-to-analytical or something that turns into a closed form approximation (albeit large) of the net with better probable guarantees? This just.... Seems like a tire fire waiting to happen. Like...what?
finetune#0907: didn't claim nn, right? could be just decision tree or something
mistobaan#2737: https://www.youtube.com/watch?v=U9Zh57dGsH4
Untouch#9150: life-like is stretching it a bit there
Kharr#7888: To me that looks like the result of a small team of employees being told last minute that they have to prepare a really cool NLP demo and working 16 hour days for 2 weeks.
elderfalcon#4450: New Community episode is looking hype
elderfalcon#4450: Oh, interesting. That would be a little more palatable, I think, haha. That seems Faaaaaaaaarrrrrrr more appropriate to the situation, hahaha.
alstroemeria313#1694: i still think there needs to be like... an analysis of diffusion models/the diffusion objective from a rate-distortion perspective
alstroemeria313#1694: VDM was a good step but it is for lossless compression
alstroemeria313#1694: and we don't actually *want* diffusion models optimized for lossless image compression, they devote lots of capacity to modeling very fine details with high fidelity
alstroemeria313#1694: in other words the rate/distortion tradeoff is not what we want.
alstroemeria313#1694: but. we don't actually know if what we're doing instead is making an optimal/justified rate/distortion tradeoff?
|
alstroemeria313#1694: the weighting i am using now weights (MSE) distortion in the *change in the iterate* during sampling equally at all timesteps
alstroemeria313#1694: it is unclear to me if that is actually best.
cfoster0#4356: There's some work on (rate-) distortion-perception tradeoffs
alstroemeria313#1694: like. do errors accumulate over time or does the sampling process "correct" them?
alstroemeria313#1694: i found this https://arxiv.org/pdf/1901.07821.pdf just now
alstroemeria313#1694: but it is not clear to me that what we are doing is even making good tradeoffs for MSE distortion.
alstroemeria313#1694: actually, how does other neural ODE stuff choose how to weight errors at different t?
alstroemeria313#1694: wonder if i can use off-the-shelf pytorch/jax ode stuff
cfoster0#4356: Mm the neural ODE stuff I've seen doesn't have errors at each t
alstroemeria313#1694: oh? they backprop through a solver during training?
CRG#8707: PonderNet is kind of an ODE with loss at each t
cfoster0#4356: Either backpropping through the solver code or...
*waves hands*
adjoint sensitivities ... backwards solution ...
alstroemeria313#1694: ah
alstroemeria313#1694: we can't actually do this with diffusion can we
cfoster0#4356: I think the place to use ODE solver stuff would be during sampling
alstroemeria313#1694: trying that now
cfoster0#4356: I would be surprised if it works better than DDIM
alstroemeria313#1694: wish we *could* backprop through diffusion sampling during training
|
alstroemeria313#1694: Progressive Distillation found RK4 was worse than DDIM
alstroemeria313#1694: DDIM is just Euler with scaling the iterate at each timestep down by cos(delta) right?
alstroemeria313#1694: and without that you are going to get outputs w/ too high contrast w/ few timesteps?
cfoster0#4356: Maybe? I'm unsure
alstroemeria313#1694: wow this solver takes *forever*
alstroemeria313#1694: is it just doing small steps automatically
alstroemeria313#1694: but it's so slow there's no way i could use it in training
alstroemeria313#1694: would need to use their adjoint thing too
alstroemeria313#1694: which means i would have to not use relu
cfoster0#4356: I don't know what it would buy you, even if it was fast
alstroemeria313#1694: like they recommend softplus instead
alstroemeria313#1694: if we could backprop through all of sampling we could do stuff like use LPIPS loss vs the clean reals
alstroemeria313#1694: wow this is bad https://cdn.discordapp.com/attachments/729741769738158194/910617428952772608/demo_00010-8.png
cfoster0#4356: Can't we do that by explicitly backpropping through some DDIM/DDPM steps?
alstroemeria313#1694: we can (for DDIM). but we would have to use relatively few steps
alstroemeria313#1694: for speed
alstroemeria313#1694: and probably use gradient checkpointing too
cfoster0#4356: What about constructing pred from v and taking LPIPS with that?
alstroemeria313#1694: tried it, it was bad
alstroemeria313#1694: the problem is that we want the early preds to be mean-like so the sampling trajectories are good
|
alstroemeria313#1694: i think the only reasonable way to use LPIPS is on the final output of sampling (the part where we actually care about perceptual quality) and then backpropagating through DDIM sampling, except this is too slow
alstroemeria313#1694: what is wrong with this, why does it still have noise in it https://cdn.discordapp.com/attachments/729741769738158194/910618778461687808/demo_00020-5.png
alstroemeria313#1694: oh wait, the ode solver doesn't know to output pred on the last timestep does it
alstroemeria313#1694: Still. You'd think it would not be this bad
alstroemeria313#1694: ...Oh no.
alstroemeria313#1694: The default solver is making the timesteps go negative
alstroemeria313#1694: Because of its adaptive step thing
alstroemeria313#1694: gonna have to use a fixed step solver
alstroemeria313#1694: unless i use it *during training* and thus negative t values become in-distribution
cfoster0#4356: Hmm what if you ~~put the LPIPS feature extractor *inside* your diffusion model, froze its parameters, and~~ do diffusion jointly on the image pixels and (standardized) LPIPS features
alstroemeria313#1694: i can't convert back from VGG features to RGB, is the thing
cfoster0#4356: Do you need to?
alstroemeria313#1694: if i am doing diffusion in vgg space yes?
cfoster0#4356: Diffusion in VGG space and RGB space
alstroemeria313#1694: ...then what does the VGG part do
cfoster0#4356: Enforcing that the network will map noised inputs to the images that would produce the correct VGG features
cfoster0#4356: Actually I don't think you need to put a pretrained feature extractor in your model
alstroemeria313#1694: oh
alstroemeria313#1694: hm
cfoster0#4356: I don't give this a ton of probability of working
|
alstroemeria313#1694: wish we had a good noisy perceptual loss
alstroemeria313#1694: ...can we train one
alstroemeria313#1694: well the thing is going
alstroemeria313#1694: i am using 10 Euler steps
alstroemeria313#1694: and computing MSE loss vs all 10 timesteps
alstroemeria313#1694: i could do it vs just the last right?
alstroemeria313#1694: but then idk
alstroemeria313#1694: also this is really inefficient
alstroemeria313#1694: like absurdly inefficient
alstroemeria313#1694: Like part of the point of diffusion is you can train the timesteps separately without using multiple backprops
tpapp157#3643: It'd be interesting to set up a website that showed an image and a modified version with varying levels of noise/artifacts/augmentations and ask people whether or not they're the exact same image. With that data you could potentially train a true human perceptual difference classifier.
alstroemeria313#1694: didn't LPIPS do this
alstroemeria313#1694: With some dataset?
alstroemeria313#1694: Like they used a frozen VGG and trained some linear layers on top of its features
alstroemeria313#1694: On some distortion dataset
alstroemeria313#1694: also this model is just learning to output the mean real at the end of sampling
alstroemeria313#1694: it cannot in fact learn the thing
EricHallahan#1051: https://arxiv.org/abs/1801.03924
DR.PROACT#2111: Hey FAM
DR.PROACT#2111: Just wanted hear your thoughts about alethea
|
cfoster0#4356: Is that a crypto thing?
Kharr#7888: Might want to ask in #off-topic
DR.PROACT#2111: Cool
tpapp157#3643: There have been a lot of discussions about tweaks and improvements to transformers. So what does the current baseline transformer block look like?
cfoster0#4356: Mostly like Attention is All You Need :blobsad:
cfoster0#4356: I think there's good consensus around pre-LN (and maybe some extra LNs), GLU, and token shift for text
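A rough sketch of that consensus block (pre-LN residual attention plus a gated feed-forward); the specific GLU variant, the SiLU gate, and the widths here are illustrative choices, not a canonical reference:
```
import torch
import torch.nn as nn
import torch.nn.functional as F

class Block(nn.Module):
    def __init__(self, dim, heads):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.ff_in = nn.Linear(dim, dim * 8)   # value and gate halves
        self.ff_out = nn.Linear(dim * 4, dim)

    def forward(self, x, attn_mask=None):
        h = self.ln1(x)  # pre-LN: normalize before each sublayer
        x = x + self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)[0]
        v, g = self.ff_in(self.ln2(x)).chunk(2, dim=-1)
        return x + self.ff_out(v * F.silu(g))  # GLU-style gated feed-forward
```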
alstroemeria313#1694: has anyone tried like, the midpoint method for DDIM?
alstroemeria313#1694: is it any better than normal DDIM?
𓅬 gabriel_syme 𓅬#3220: yeah lucid made a point about the extra LNs I thought. I've seen some people suggest token shift and ALiBi I thought
alstroemeria313#1694: like with twice the steps
alstroemeria313#1694: ...What even is a Hyvärinen score.
alstroemeria313#1694: Hyvärinen came up with a way to do score matching without computing the scores to match *or even the pseudo-scores we use for diffusion models?*
alstroemeria313#1694: Does it need a Hessian or only an HVP?
alstroemeria313#1694: Ugh is this some Hessian trace method
alstroemeria313#1694: (well. since the output of our model is the score itself, it's a Jacobian trace not a Hessian trace, it would be a Hessian trace for an EBM)
alstroemeria313#1694: oh this is just the sliced score matching thing
ethan caballero#6044: How many exaflops is cohere's supercomputer?
https://twitter.com/AidanNGomez/status/1461004023901863937
Kia#2550: Louis can probably give a rough estimation?
Kia#2550: Also there's like a bunch of people in cohere that is in the server
|
Kia#2550: @ethan caballero
EricHallahan#1051: They are undoubtedly TPU v4 pods.
kurumuz#5695: o, so this is what it helps with.
EricHallahan#1051: They won't reveal exactly what they have, both because it is probably under Google NDA and the general stealth-like nature of the company.
EricHallahan#1051: So either way, they are going to be 🤐 about it.
Kia#2550: We can Guess I supposed
Kia#2550: A Bunch of A100 cards
Kia#2550: And TPU's v3's and v4's(Stella suggestion)
Kia#2550: *Oh No,I just Reveled it*
StellaAthena#3530: They’re TPUv4s
CarsonPoole#0640: random question re: gpt neo--the pytorch model file from huggingface doesn't include the weights for the final language modeling head. where do you get that?
Kia#2550: See Just a Bunch of tpu's and Nvidia cards,We can compute the exaflops by guessing
EricHallahan#1051: Like TPU v3-512s, v3-1024s and v3-2048s, while larger than the fastest supercomputers of many nation-states, aren't really worth announcing a partnership over.
Kia#2550: Yeah good point
Emad#9608: It’s crazy how crap nation state compute is
Emad#9608: Aside from a notable example
Kia#2550: Yeah
Kia#2550: :_
StellaAthena#3530: What do you mean? Can you elaborate?
StellaAthena#3530: I’m 90% sure that the full model is on HF…
|
CarsonPoole#0640: unfortunately not, the pytorch_model.bin doesn't have it
StellaAthena#3530: Which model are you looking at
CarsonPoole#0640: this is in the huggingface code so it makes sense https://cdn.discordapp.com/attachments/729741769738158194/910727141170237460/Screen_Shot_2021-11-17_at_9.04.16_PM.png
CarsonPoole#0640: https://huggingface.co/transformers/v4.6.0/_modules/transformers/models/gpt_neo/modeling_gpt_neo.html#GPTNeoForCausalLM
EricHallahan#1051: I honestly know very little about the HF GPT-Neo implementation.
CarsonPoole#0640: pretty much all of them
StellaAthena#3530: You don’t need to put an LM head at the end of an autoregressive model
CarsonPoole#0640: well the model outputs gibberish without it
StellaAthena#3530: Got a minimum demonstrating example?
CarsonPoole#0640: ```from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")```
CarsonPoole#0640: run `model.generate(...)` and the output is gibberish
StellaAthena#3530: Am I reading this incorrectly or did colab give me a chonky gpu https://cdn.discordapp.com/attachments/729741769738158194/910730529937186816/Screen_Shot_2021-11-17_at_10.17.23_PM.png
EricHallahan#1051: Just run `nvidia-smi`
EricHallahan#1051: Then no need to guess.
alstroemeria313#1694: They are occasionally handing out A100s now
timudk#8246: What solver/implementation are you using? Even with adaptive steps you should not have to go negative.
|
timudk#8246: IMO, Runge--Kutta (4)5 works very well.
EricHallahan#1051: TFW K80s for days.
bw#3136: I thought A100 was only for Pro+?
CarsonPoole#0640: K80s and leaving your laptop cracked open all night and setting an alarm before the 24 hour mark
EricHallahan#1051: I've done this on more than a few occasions a couple years ago.
alstroemeria313#1694: It was the torchdiffeq default
alstroemeria313#1694: My t was supposed to go from 1 down to 0
alstroemeria313#1694: I can actually define negative timesteps if I want, I think
timudk#8246: Try the scipy solver: set method to "scipy_solver" and set options to {'solver': 'RK45'}
alstroemeria313#1694: Ah
timudk#8246: Generally I would not integrate all the way down to 0 but rather only 1e-5 or so
timudk#8246: maybe even 1e-3
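Putting that suggestion together, a minimal sketch using torchdiffeq's SciPy bridge; `ode_fn(t, x)` and `x0` are placeholders for the model's probability-flow drift function and the initial noise sample:
```
import torch
from torchdiffeq import odeint

# Integrate from t=1 down to 1e-3 rather than all the way to 0,
# per the advice above, so the solver never steps past zero.
t = torch.tensor([1.0, 1e-3])
x = odeint(ode_fn, x0, t, method='scipy_solver', options={'solver': 'RK45'})
```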
timudk#8246: What architecture are you using? Some apply a log to the time variable so negative time steps give you garbage
alstroemeria313#1694: Mine does not
alstroemeria313#1694: However it is out of distribution for the model as trained and will break unless I explicitly handle it
alstroemeria313#1694: Like model(t, x) should be -model(-t, x) or something
alstroemeria313#1694: Not sure if that’s right but it’s something simple
timudk#8246: Even if you don't have a log, your model still becomes very complex as t -> 0, and therefore the adaptive solver will likely need many steps. Visually I don't think you'd see any difference b/w 1e-5 and 0
timudk#8246: maybe.... would just avoid negative time steps in the first place haha
alstroemeria313#1694: Yeah
|
alstroemeria313#1694: It isn’t right
alstroemeria313#1694: You have to decompose the model output into pred and eps components and negate the pred component and put them back.
alstroemeria313#1694: That gives you another quadrant of the circle
alstroemeria313#1694: For the other two quadrants you either negate eps or negate both (actually negate the model output).
alstroemeria313#1694: And map the timestep to the corresponding one in the first quadrant.
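A heavily hedged sketch of the first of those cases (negative t handled by negating the pred component), assuming the v-objective convention alpha = cos(t·pi/2), sigma = sin(t·pi/2) with pred = alpha·x - sigma·v and eps = sigma·x + alpha·v; the quadrant sign conventions are a guess at what's being described, not a verified implementation:
```
import math

import torch

def model_ext(model, t, x):
    # t: python float; map it to |t| (first quadrant) and fix up the signs.
    alpha = math.cos(abs(t) * math.pi / 2)
    sigma = math.sin(abs(t) * math.pi / 2)
    v = model(torch.tensor(abs(t)), x)
    pred = alpha * x - sigma * v
    eps = sigma * x + alpha * v
    if t < 0:
        pred = -pred  # negative t: negate the pred component
    return alpha * eps - sigma * pred  # reassemble the v-space output
```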
Daj#7482: For those interested in getting into alignment work, but don't check the alignment channel, you can now apply to AI Safety Camp, which I highly recommend you do!
https://discord.com/channels/729741769192767510/730451873613611079/910756812930834442
Deleted User#0000: anyone know how I can use a large GPT on google colab?
Deleted User#0000: messing around with the colab linked at the bottom here https://www.vennify.ai/gpt-neo-made-easy/
Deleted User#0000: but I keep running into errors about CUDA running out of memory
nostalgiahurts#3408: if you're only interested in inference, this might be useful: https://github.com/AminRezaei0x443/PyTorch-LIT
it streams parameters into memory so you can run big models with limited RAM. of course it's much slower, but it could be better than nothing
there are some examples for GPT-J, so you can start from there
also, this is a new project and I've never tried it myself, so your results may vary
𓅬 gabriel_syme 𓅬#3220: This is disappointing. Is this because I have repetition in the data? https://cdn.discordapp.com/attachments/729741769738158194/910844786108416010/Screenshot_20211118-185052_Chrome.jpg
𓅬 gabriel_syme 𓅬#3220: I really dislike this thing where it doesn't recover
CRG#8707: If you have repetition maybe try dropout?
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/910854638767587328/Screenshot_20211118-123042.png
𓅬 gabriel_syme 𓅬#3220: Cool, need to figure out how to do that on J
𓅬 gabriel_syme 𓅬#3220: That said I will trying gpt2 and neo models as well, fun to compare
|
𓅬 gabriel_syme 𓅬#3220: I'm still interested in why it is catastrophic
𓅬 gabriel_syme 𓅬#3220: Not that model performance goes entirely hand in hand with val loss here but it should be sort of correlated
MicPie#9427: Sid once posted (I guess he had it from Megatron devs) that reducing beta_2 of the Adam optimizer for Transformer training could reduce such spikes (if the spikes caused your problem).
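Concretely, something like this; the specific numbers are illustrative defaults people use for this purpose, not values from the Megatron devs:
```
import torch

# Lower beta2 (shorter variance memory) and a larger eps both damp the
# occasional huge per-parameter step that shows up as a loss spike.
opt = torch.optim.AdamW(model.parameters(), lr=3e-4,
                        betas=(0.9, 0.95), eps=1e-6)
```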
mkualquiera#3484: yeah the spikes seem to be indicating something
Emad#9608: away goes the waitlist for the OpenAI API https://openai.com/blog/api-no-waitlist/ tbh its pretty well done
EstebanSir#2189: yooo nice
EstebanSir#2189: mhmh, a lot of restrictions on chatbots, some are obvious though
EstebanSir#2189: i wonder if the "business talk only" restriction for chatbots still applies to small experiments instead of large operations
HypnoPump17#9322: yuppp just saw this today and was going to mention it here haha
EstebanSir#2189: i think it might not be an issue if i dont "go live" with it
EstebanSir#2189: that is, deployment
alstroemeria313#1694: https://openreview.net/forum?id=TIdIXIpzhoI updated btw
EstebanSir#2189: ~~wow gpt3 kinda sucks huh~~ nvm i probably set the top_p and/or temp wrong
EstebanSir#2189: no yeah it does suck
CRG#8707: Give it more context
EricHallahan#1051: r/ML gang
EricHallahan#1051: It actually looks pretty good.
elderfalcon#4450: @𓅬 gabriel_syme 𓅬 Epsilon as well as beta2, that can make a huge difference
elderfalcon#4450: Good point, the val jumps seem to bee coordinated with that.
|
Anyone have any thoughts on tossing out the gradient updates for an entire batch that spikes like that?
nev#4905: that sounds like something so obvious that nobody tried it
elderfalcon#4450: Woah
nev#4905: actually iirc the spike starts out slowly. you can still detect it probably
elderfalcon#4450: Which was so obvious?
nev#4905: discarding instead of clipping gradients for spiking batches
elderfalcon#4450: Gotcha. Apologies if I'm missing any sarcasm here.
The reason I'm suggesting discarding instead of clipping for catastrophic stuff is that clipping reaaaaalllllyyyy borks w/ the statistics of the batch, so it's still a hack but not an awful one
nev#4905: yes, I agree. I had a similar idea as soon as I saw gabriel_syme's experiments
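A minimal sketch of that discard-instead-of-clip heuristic; the running-average form and the 4x threshold are arbitrary illustrative choices, and `model`/`loader`/`opt` are assumed to exist:
```
import torch

running, beta = None, 0.99
for batch in loader:
    opt.zero_grad()
    loss = model(batch)
    loss.backward()
    # max_norm=inf just measures the total gradient norm without clipping
    gnorm = torch.nn.utils.clip_grad_norm_(model.parameters(), float('inf'))
    if running is not None and gnorm > 4 * running:
        continue  # toss the whole update instead of clipping it
    running = gnorm if running is None else beta * running + (1 - beta) * gnorm
    opt.step()
```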
elderfalcon#4450: (re: the epsilon tweaking -- momentum + .999 ema variance updates plus a default 1*10^11 multiplier for almost never-updated parameters basically guarantees monstrous spikes by the laws of large numbers.
I almost never have a high beta2 and low epsilon when training language models. Maybe it shakes out at scale but for smaller batch sizes it's like a total ⚙️🔥 (why are there no tire emojis....?!?!?))
elderfalcon#4450: That's just personal experience within some very limited test systems though.
nev#4905: I'd be up to try this on a smaller dataset. I don't know what parameter counts this applies to, but the spikes happen pretty early
alstroemeria313#1694: eheh. "but we can't use the loss value in our heuristic for deciding what to do with the optimizer step, what is this, line search?"
alstroemeria313#1694: and in general sgd loss values can be pretty noisy
elderfalcon#4450: I guess it depends on the optimizer they are using, too. I slid that second suggestion out there as a safety guard in case the heuristics were an issue (or one could do it based on gradient norm too, I guess, but I think I can understand the queasiness there too)
alstroemeria313#1694: yeah
alstroemeria313#1694: actually i just thought of a problem.
|
alstroemeria313#1694: the unexpectedly high loss value means you need to *roll back the last step*
alstroemeria313#1694: this is harder to do
elderfalcon#4450: How so?
alstroemeria313#1694: well presumably you got the high loss value bc your previous step broke the model
StellaAthena#3530: Does anyone know the correct way to cite the 540B Megaton-DS model?
elderfalcon#4450: My experience has been that usually it's just a bad combination of sampled items in the current batch.
Like generally a 1-in-a-1000 batch extreme activator of a feature came in super late, and that parameter's effective learning rate happens to be vulnerable.
But then with skipping a batch, by the next time that extreme feature rolls around, you may end up being better conditioned with other features from the dataset.
(Or worst case scenario, there's a few extreme outliers that always cause dropped minibatches, in which case maybe that's not a bad thing anyways as the MI between that and other training examples will likely be quite low)
EricHallahan#1051: By citing the 530B Megatron-Turing NLG model. 😉
alstroemeria313#1694: ah
Louis#0144: the blog post probably
Quill#9732: even aside from being out of distribution, a non-full context just *has less infomation*
Quill#9732: even if you trained the model with random amounts of context so all amounts of prompting are in-distribution, full context should still do better
cfoster0#4356: I know this isn't your main point, but you don't need padding when the context is not full, during inference
Sphinx#2092: You do if you use batch size > 1, no?
Sphinx#2092: or do you mean people playing with some API?
|
cfoster0#4356: Yeah true. I can't fit that on my hardware lol
Sphinx#2092: rip.
bmk#1476: the masked autoregressive task trains for every context length
bmk#1476: it's an efficient way to train for every single "predict the next token given the past n tokens" task in a given span with only one forward pass
bmk#1476: there is no padding during training
bmk#1476: the padding is not attended to in inference
bmk#1476: the padding is just there to batch different length sequences together
cfoster0#4356: Never, unless there was a document that was just a bunch of whitespace
cfoster0#4356: Err
cfoster0#4356: Every time the model gets an example that starts with "Hello world" followed by other stuff, it's like training a model to predict stuff based only on "Hello world"
cfoster0#4356: For an existing model that you're just promoting?
bmk#1476: I feel like there's a miscommunication about what "padding" means here
bmk#1476: what does padding means to you
cfoster0#4356: Maybe? There may be diminishing returns
m_wAL99#1923: https://www.samcodes.co.uk/project/geometrize-haxe-web/
Sphinx#2092: Oh I see.
Sphinx#2092: I think you are ignoring the concept of attention masking.
Sphinx#2092: For transformers in particular, if you don't attend to the future, you can still process the full sequence in one pass
Sphinx#2092: but have the teprediction for n+1th token only depend on the first n
Sphinx#2092: Full context doesn't make any sense. For the prediction of the n+1th token, we use the first n.
|
Sphinx#2092: That's the only context that's used.
bmk#1476: this misunderstanding comes up so often, I should write a post about it
Sphinx#2092: Does it? I saw it more often when transformer first came out
cfoster0#4356: You should just pretend like the padding doesn't exist. It's just there to make the shapes consistent and has no influence whatsoever on the computation
Sphinx#2092: It's more of an issue for enc-dec models.
Sphinx#2092: whitespace is not padding.
cfoster0#4356: It doesn't actually matter where the padding is for a model like GPT-J because you would mask the attention on the padding tokens anyways. Modulo positional encoding stuff
alstroemeria313#1694: so like what happened to vast.ai, there are barely any rentable machines on there
tpapp157#3643: crypto mining probably
cfoster0#4356: Unfortunately I don't think so. This is somewhat in the weeds of how stuff is actually implemented
janus#0150: Maybe you released too many cool diffusion models and every machine is being used to make art
alstroemeria313#1694: why didn't the price just go up though
alstroemeria313#1694: like typically people mine on them w/ low priority jobs and they set the normal/noninterruptible job price so it would be more than the mining income
tpapp157#3643: No idea. Probably too much hassle vs just setting up some mining software, forgetting about it, and letting the consistent revenue come in.
tpapp157#3643: I've never used vast.ai though so I'm just speculating. Alternately, maybe the community just died off naturally as niche communities often do.
alstroemeria313#1694: wow
bmk#1476: I heard Vast is shutting down
cognomen#6297: the prices are definitely above what each provider could make mining eth instead
cognomen#6297: if anyone's using vast for it they're doing it at a loss
elderfalcon#4450: That's probably the reason why it got hit more, that #marketcompetition
|
elderfalcon#4450: @Deleted User not to drag this on too much, but a space character I think encodes differently than 'nothing'. It has an encoding.
Every output attends to every input before it up to a certain length. Like cfoster said, unless you're doing batch weirdness, basically all padding is "trimmed" and never really seen by the network.
So if you add tons of spaces, your model will try to predict the nearest in-distribution conditioned samples. With lots of space characters, something that doesn't happen often, your model will be getting some very weird signals.
A classic example of this is The Crimsonfangirl character that Sigurd plays. Because of a lot of the non-ascii characters, absolute madness happens, and the GPT-J 6B model is much more likely to return emojis, etc than in normal conversation.
Quill#9732: my understanding is that if you have fewer input tokens than the maximum context, you actually evaluate fewer parts.
(i.e. the transformer model basically has "stacks" of fully-connected components and attention heads on top of each input token, if there's fewer input tokens you still only have as many of those stacks as input tokens)
Quill#9732: (in principle, as I understand it, you can even evaluate a transformer on a larger context than it was trained with, it just generally performs poorly due to being out of distribution)
cfoster0#4356: It doesn't matter what you fill it with. You basically zero out the attention values that involve the filled positions, so that they don't contribute anything. And because the attention is the only way information is exchanged between different positions in the sequence, it's as if those tokens aren't even there
cfoster0#4356: At a position t, the network is predicting token t+1, and it's using all of the information from tokens 0 through t to make that prediction. This is true for all t up to the context length, so the network trains on *all* context lengths at the same time
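A minimal PyTorch sketch of the masking being described (names and shapes are illustrative, not from any particular codebase):
```
import torch

def causal_masked_attention(q, k, real):
    # q, k: (batch, seq, d). real: (batch, seq) bool, True for real tokens,
    # False for padding. Illustrative only: it shows why the padding's
    # content and position don't matter once it's masked out.
    seq = q.shape[1]
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    causal = torch.ones(seq, seq, dtype=torch.bool).tril()  # t sees 0..t
    mask = causal & real[:, None, :]  # also hide padding keys everywhere
    scores = scores.masked_fill(~mask, float('-inf'))
    # Padding keys get weight 0 after the softmax; rows for padding
    # *queries* are garbage but never enter the loss anyway.
    return scores.softmax(dim=-1)
```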
Quill#9732: I'm not sure if anyone's tried training a transformer with random context lengths in training (and appropriately adjusted scaling factors on the attention output) so that it's robust to context length variations and checked if it performs better than normally trained transformers on larger-than-in-training contexts
EricHallahan#1051: Well you train on all context lengths simultaneously.
Quill#9732: ah, yeah, okay.
Quill#9732: do transformers using RoPE do better on generalizing to larger contexts?
elderfalcon#4450: This is generally I think what people here are saying, just in different ways.
It's minimizing the log likelihood across possible window sizes of text predictions. Because it's a sequence, we start with sequence length of 1 I think and then go to the maximum sequence length. Just all in one shot.
CRG#8707: A bit better than sinusoidal, but not that much
|
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/911030613342044240/unknown.jpeg
cfoster0#4356: This will confuse things more than clarify things, I think
cfoster0#4356: When we take a training example with 2048 tokens, we evaluate this function simultaneously for k=1 through 2047, basically
cfoster0#4356: The size of training examples we take (in this case 2048) is usually held constant through training, but papers like shortformer start small and increase it later
cfoster0#4356: What we're trying to communicate is that there's effectively *no such thing* as shorter than trained length
CRG#8707: I think the key question is whether additional examples to the left are helpful, which they typically are
CRG#8707: Loss is higher with less context (here loss of GPT-Ada to Davinci) https://cdn.discordapp.com/attachments/729741769738158194/911032794451423312/Screenshot_20211119-001821.png
bmk#1476: there's also a plot like this in the pile paper
bmk#1476: and the gpt3 paper too
bmk#1476: or possibly one of the scaling laws papers
CRG#8707: (That's where I got the data :berk: )
bmk#1476: oh lol
𓅬 gabriel_syme 𓅬#3220: Huh, this is even weirder I think. A new spike made val loss go down 🙂
Would gradient accumulation help or make matters worse? I actually have 512 sequences as my batch, which is quite high.
I probably needed to merge my raw data and create the tfrecords from there, I'll try it once my pc is back https://cdn.discordapp.com/attachments/729741769738158194/911034027060920330/Screenshot_20211119-072142_Chrome.jpg
𓅬 gabriel_syme 𓅬#3220: Yeah, I consistently see the same thing across (smaller) scales as well
CRG#8707: Yeah, loss is a power law with context: https://discord.com/channels/729741769192767510/785968841301426216/907778233137786881
CRG#8707: If it had infinite context (and was able to use it)
𓅬 gabriel_syme 𓅬#3220: yeah it is a nice one
𓅬 gabriel_syme 𓅬#3220: does shortformer and ALiBi work
|
𓅬 gabriel_syme 𓅬#3220: wait ALiBi had plots for that I'll look again, but I think it tapered off at 2*CTX right
𓅬 gabriel_syme 𓅬#3220: I just wonder if that is still the case when the context is tiny (like 256/512)
CRG#8707: It looks like it levels off, but power laws can really go down if you push them far enough. (Hence the whole GPT-3) https://cdn.discordapp.com/attachments/729741769738158194/911039821202468874/Screenshot_20211119-004507.png
𓅬 gabriel_syme 𓅬#3220: I have some of those in my scaling test I can take a look within a similar FLOP/param category. Although, inferencing on those becomes a nuisance pretty quickly
𓅬 gabriel_syme 𓅬#3220: oh I meant deeper models aren't as efficient for inference.
𓅬 gabriel_syme 𓅬#3220: eventually I'll try ALiBi and see how it works in that setting (small model and context), although my guess is it's not a very practical setting heh
StellaAthena#3530: For small contexts I wouldn’t even bother with ALiBi tbh.
𓅬 gabriel_syme 𓅬#3220: yeah that's what I guessed too. I had a silly notion to train models on 256/512 and have it predict at 2x that which seemed to be the sweet spot, but there's no point lol.
elderfalcon#4450: Best of luck! :D
elderfalcon#4450: It would just smooth the spike out I think, which maybe helps but it feels like the law of large numbers to smooth out/bandaid a different issue altogether that's happening during training. My experience/intuition leads me to say that you would need to solve the problem behind the spike, or remove the spike's effects entirely if you can in some way (/vs trying to cover or smooth it out). It will almost certainly come back to haunt you in 2-3 months (or whenever) when it becomes the center of some far-harder to debug megabug that threatens the whole thing. Sorta like the food maker thing in cloudy with a chance of meatballs.
Lived experience on this one.... *shudders* experiences, actually.... Too many...of these, experiences....urk.
𓅬 gabriel_syme 𓅬#3220: yeah agreed, when my computer is back from the shop I'll shuffle all the individual parts of the dataset together and split it in a nicer way
𓅬 gabriel_syme 𓅬#3220: I actually wanted to deduplicate but I don't know how to do it efficiently 😦
elderfalcon#4450: Oh, gotcha, that makes sense. I could see stuff like that causing issues, makes sense! D:
𓅬 gabriel_syme 𓅬#3220: I wonder if anyone plans to build an easy, one-line, deduplication thing
𓅬 gabriel_syme 𓅬#3220: I can't seem to have the brain rn to figure out google's deduplicate-text
Parker#3197: what are you trying to do?
𓅬 gabriel_syme 𓅬#3220: I guess I could tokenize each item and then somehow find similar ones and remove?
𓅬 gabriel_syme 𓅬#3220: well i have a lot of synthetic data on similar prompts, across different finetuned models, and I think quite a bit of that is duplicated
|
Parker#3197: maybe show an example?
𓅬 gabriel_syme 𓅬#3220: hmm ye ehm I don't have my PC right now. But I can find from my history, it's layout data 🙂
𓅬 gabriel_syme 𓅬#3220: `[prompt] a house with five rooms [layout] a geometric description here`
𓅬 gabriel_syme 𓅬#3220: the geometric description is often a duplicate, or I think it is
𓅬 gabriel_syme 𓅬#3220: I was thinking to make the geometry into a hash (like a string of numbers) and deal with that. idk
Parker#3197: I meant, an example of one with the duplication and how you want it corrected
𓅬 gabriel_syme 𓅬#3220: ahh I want it deleted 🙂 imagine two examples that have almost identical outputs
Parker#3197: it still isn't exactly clear to me what you're wanting done. I can't tell if it is just repeated text or not, that is why I'm just asking for examples
Parker#3197: I do think you have it in mind what you want done though
𓅬 gabriel_syme 𓅬#3220: Yeah apologies will try to find an example
𓅬 gabriel_syme 𓅬#3220: But the general idea is that my dataset is a bunch of sequences, with an endoftext at the end, and some of those are probably repeated across the data.
𓅬 gabriel_syme 𓅬#3220: I wonder if that is what the spikes I see in the loss are about
Parker#3197: if they're exactly the same, you could just use a hash map. but it doesn't sound like that is what is wrong
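A minimal sketch of the hash-map approach for the exact-match case (hypothetical helper; near-duplicates need something like MinHash or the suffix-array approach in google-research/deduplicate-text-datasets):
```
import hashlib

def dedupe_exact(docs):
    # Keep the first occurrence of each document, keyed by a hash of
    # its normalized text. Only catches byte-identical duplicates.
    seen, kept = set(), []
    for doc in docs:
        key = hashlib.sha256(doc.strip().encode('utf-8')).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept
```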
nostalgebraist#3542: is your dataset small enough for create_finetune_tfrecords?
nostalgebraist#3542: because if so, try `--min-unique-tokens 200` https://github.com/kingoflolz/mesh-transformer-jax/blob/master/create_finetune_tfrecords.py#L49-L52
nostalgebraist#3542: idk if that will fix the problem, but it did fix gradient/loss spikes for me
𓅬 gabriel_syme 𓅬#3220: is that unique tokens in series?
𓅬 gabriel_syme 𓅬#3220: nvm I'll read 😄
𓅬 gabriel_syme 𓅬#3220: oh repetitive documents, damn I had not seen that there 😦
𓅬 gabriel_syme 𓅬#3220: thanks for pointing it out!
|
𓅬 gabriel_syme 𓅬#3220: hmm no it seems to check if each document has at least that number of unique tokens. That might be problematic with my data, I'll try a small number
nostalgebraist#3542: it says "documents" but what it actually does is count within the "sequences" (the length 2048 things)
nostalgebraist#3542: (which might still be a problem with your data, idk)
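In spirit, the check is just this (a standalone sketch, not the actual create_finetune_tfrecords code):
```
def keep_sequence(tokens, min_unique=200):
    # Drop a packed 2048-token training sequence if it contains fewer
    # than min_unique distinct token ids, i.e. it's mostly repetition.
    return len(set(tokens)) >= min_unique
```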
𓅬 gabriel_syme 𓅬#3220: yea maybe but still nice to try, thanks
𓅬 gabriel_syme 𓅬#3220: like has anyone used https://github.com/google-research/deduplicate-text-datasets with a custom dataset?
𓅬 gabriel_syme 𓅬#3220: Should I create a TFDS as a 'simpler' way of using it, since it seems to work with that out of the box? I don't even know how to do that I think
𓅬 gabriel_syme 𓅬#3220: oh nvm I think they detail in the advanced use
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/911241214592954409/unknown.png
alstroemeria313#1694: so...
alstroemeria313#1694: in this image
alstroemeria313#1694: how are they doing pixel shuffle/pixel unshuffle, halving/doubling spatial resolution, and also halving/doubling channel count?
Kazumi#1297: are we using transformers instead of convolutions now?
Kazumi#1297: oh, yeah
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/911241555099156541/Screen_Shot_2021-11-19_at_5.08.34_AM.png
alstroemeria313#1694: anyway i just put my normal down/upscaling in with 1x1 convs to change channel count to try this
alstroemeria313#1694: bc i do not know what they actually do
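For reference, one plausible reading of that figure in PyTorch (a guess, not confirmed against the paper's code): pixel unshuffle multiplies channels by 4 while halving resolution, so a 1x1 conv afterwards can land on 2x the channels, and the reverse for upsampling.
```
import torch
from torch import nn

class Downsample(nn.Module):
    # Halve spatial resolution, double channel count:
    # (c, h, w) -> PixelUnshuffle(2) -> (4c, h/2, w/2) -> 1x1 -> (2c, h/2, w/2)
    def __init__(self, c):
        super().__init__()
        self.main = nn.Sequential(nn.PixelUnshuffle(2), nn.Conv2d(c * 4, c * 2, 1))

    def forward(self, x):
        return self.main(x)

class Upsample(nn.Module):
    # Double spatial resolution, halve channel count:
    # (c, h, w) -> 1x1 -> (2c, h, w) -> PixelShuffle(2) -> (c/2, 2h, 2w)
    def __init__(self, c):
        super().__init__()
        self.main = nn.Sequential(nn.Conv2d(c, c * 2, 1), nn.PixelShuffle(2))

    def forward(self, x):
        return self.main(x)
```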
alstroemeria313#1694: uhh it's working though
alstroemeria313#1694: for diffusion
alstroemeria313#1694: loss is going down and i have that early training perlin noise look in the demo grids
𓅬 gabriel_syme 𓅬#3220: Is it terrible if that chart makes me instantly space out
|
alstroemeria313#1694: ehehe~
alstroemeria313#1694: my favorite so far is the u-net++ diagram ^^;;
𓅬 gabriel_syme 𓅬#3220: Oh ye I think I remember that one
alstroemeria313#1694: 10 epochs on cifar-10 https://cdn.discordapp.com/attachments/729741769738158194/911242723367989278/demo_00010-9.png
𓅬 gabriel_syme 𓅬#3220: Looks nice
alstroemeria313#1694: model is 2.6M params
alstroemeria313#1694: but anyway. this model type can use global information in self-attention at any resolution
alstroemeria313#1694: i do not have a "refinement stage" in yet either
alstroemeria313#1694: it is just a symmetric u-net
𓅬 gabriel_syme 𓅬#3220: Maybe take a look at that low data regime ViT I shared. Wonder if their approach would help in any of this stuff
𓅬 gabriel_syme 𓅬#3220: Damn can't link it on phone, I'm terrible on phone discord.
𓅬 gabriel_syme 𓅬#3220: https://openreview.net/forum?id=AJofO-OFT40 (sry for recycle on the link)
alstroemeria313#1694: probably won't
alstroemeria313#1694: that is because this arch uses 3x3 convolutions in its qkv proj and feedforward networks so it has spatial relations builtin already
alstroemeria313#1694: training a bigger one now
alstroemeria313#1694: 24.2M
alstroemeria313#1694: this is comparable to my usual cifar-10 test u-net.
alstroemeria313#1694: `11427MiB / 48676MiB` seems memory hungry in comparison
𓅬 gabriel_syme 𓅬#3220: I'm never training at home am I
alstroemeria313#1694: but if this works we can use self-attention in upscalers!
|
𓅬 gabriel_syme 𓅬#3220: Oh okay, I saw they used their trick with resnets too so was curious
alstroemeria313#1694: oh huh
alstroemeria313#1694: oh wow, parameterizing the learnable per-head attention scale as log scale works better
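A minimal sketch of that parameterization (hypothetical module, shapes illustrative):
```
import torch
from torch import nn

class PerHeadScale(nn.Module):
    # Store the per-head attention scale in log space: the effective
    # scale stays positive and gradient updates act multiplicatively.
    def __init__(self, heads, init=1.0):
        super().__init__()
        self.log_scale = nn.Parameter(torch.full([heads, 1, 1], init).log())

    def forward(self):
        return self.log_scale.exp()
```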
alstroemeria313#1694: why does the net not use biases
alstroemeria313#1694: don't we need them
alstroemeria313#1694: like maybe if you're restoring images it's ok because you want the residual part of the net's output to have mean 0?
alstroemeria313#1694: but we actually need to change the mean considerably for diffusion
alstroemeria313#1694: in a learned timestep dependent way
alstroemeria313#1694: oh, the no-bias thing helps generalization to noise levels not used during training
alstroemeria313#1694: but it is seriously hurting diffusion, i put the biases back in and now loss is dropping much faster.
alstroemeria313#1694: i train on all possible noise levels anyway :)
alstroemeria313#1694: so don't need to generalize.
alstroemeria313#1694: this is the bias-free paper they cite https://openreview.net/pdf?id=HJlSmC4FPS
alstroemeria313#1694: > Theoretically, we argue that if the denoising network operates by projecting the noisy observation onto a linear space of “clean” images, then that space should include all rescalings of those images, and thus, the origin. This property can be guaranteed by eliminating bias from the network.
alstroemeria313#1694: except that we *predict eps* at low noise levels...
alstroemeria313#1694: like this might actually work w/ pred objective?
alstroemeria313#1694: oh wait
alstroemeria313#1694: except we also *scale the signal down*
alstroemeria313#1694: which also breaks the additive noise assumption here.
alstroemeria313#1694: i think.
|
tpapp157#3643: Interesting. I wonder how much the gating adds.
alstroemeria313#1694: we could do ablations
tpapp157#3643: Oh and they're using the reduced attention form where the 'attention map' is dxd
alstroemeria313#1694: yeah it's transposed attention
alstroemeria313#1694: so you get global information at all resolutions
tpapp157#3643: I guess that makes sense since they're trying to do full attention at all levels.
alstroemeria313#1694: yes
alstroemeria313#1694: we could add normal attention back in at low resolutions
alstroemeria313#1694: but i think having this at high resolutions is a win
alstroemeria313#1694: especially for upscalers
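A rough sketch of the transposed attention under discussion (Restormer-style; names and shapes are illustrative):
```
import torch
from torch import nn

class TransposedAttention(nn.Module):
    # Attention over channels instead of positions: the attention map is
    # (d x d) per head rather than (hw x hw), so the cost grows linearly
    # with resolution and every position still sees global information.
    def __init__(self, c, heads):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Conv2d(c, c * 3, 1)
        self.out = nn.Conv2d(c, c, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        qkv = self.qkv(x).view(n, self.heads, -1, h * w)
        q, k, v = qkv.chunk(3, dim=2)
        # Each channel is a "token"; the dot product contracts over the
        # h * w spatial values (see the scale discussion further below).
        attn = (q @ k.transpose(-2, -1)) / (h * w) ** 0.5
        y = attn.softmax(dim=-1) @ v
        return self.out(y.reshape(n, c, h, w))
```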
tpapp157#3643: In my last architecture I used alibi+axial to do attention at all resolutions.
alstroemeria313#1694: ahh
alstroemeria313#1694: how did you do alibi w/ non-causal
alstroemeria313#1694: did you add a pos emb also
tpapp157#3643: no, since it's also axial (1D attention) you just calculate the relative distance in both directions.
alstroemeria313#1694: ...doesn't that render it unable to tell the difference between things on the left and things an equal distance away on the right
tpapp157#3643: yep, but I also included normal conv blocks as well
alstroemeria313#1694: oh
alstroemeria313#1694: ok
alstroemeria313#1694: :)
|
alstroemeria313#1694: yeah that's why i've been wary of using alibi
alstroemeria313#1694: anyway with upscalers i want to use them at resolutions so huge we can't do axial anymore
alstroemeria313#1694: admittedly transposed attention makes us unable to evaluate the model on tiles well
alstroemeria313#1694: as well
alstroemeria313#1694: so we have to fit the whole thing into memory
tpapp157#3643: A while back I just calculated the largest resolution I could still do full attention at and just did a spatial downscale of Q and K to that res and then an upscale after.
alstroemeria313#1694: ahh
tpapp157#3643: so kind of a coarse attention
tpapp157#3643: as I recall I think it was 16x16 or 32x32 or so
tpapp157#3643: I wonder if there's a way you can create a fused layer with something like triton that can exclude calculations outside the local neighborhood defined by the alibi coefficient.
alstroemeria313#1694: maybe~
alstroemeria313#1694: so depthwise separable convs vs full convs
alstroemeria313#1694: which is better
alstroemeria313#1694: they did not do an ablation where they used full convolutions instead
tpapp157#3643: well full conv is a lot more params obviously. I would assume full conv is better overall but realistically I think depthwise conv mostly just imposes a different sort of regularization on the learned features. I've never done a full comparison of the two though.
alstroemeria313#1694: don't they essentially have to use deeper networks if they use separable
alstroemeria313#1694: like we could use full convolutions w/ no channel increase in the middle in the ffn and 1x1 to make qkv
alstroemeria313#1694: and just use transposed attention
tpapp157#3643: depthwise only does spatial mixing and no channel mixing, so you need to add something else (like a 1x1) to do the channel mixing.
alstroemeria313#1694: like make it a drop-in module for a u-net
|
alstroemeria313#1694: yeah, depthwise separable is the term for a pointwise followed by a depthwise, or the other way around
alstroemeria313#1694: which is what they actually use
alstroemeria313#1694: they never use a depthwise alone
tpapp157#3643: makes sense. The combination is still fewer params than a single full conv but technically less flexible as well.
tpapp157#3643: I'm not sure I've seen anyone do a full comparison between the two.
alstroemeria313#1694: what is the optimal dot product scale for... wait
alstroemeria313#1694: we need to normalize the attention matrix by the spatial size of the input
Visarch of Apollo,#7152: Is this real? Are we actually getting 20 beaks? https://wandb.ai/eleutherai/gpt-thicc?workspace=user-
cfoster0#4356: 20 beaks?
alstroemeria313#1694: like, automatically. theirs is learnable which lets it adapt when they change resolution
EricHallahan#1051: Why do you need to learn that?
alstroemeria313#1694: bc they train at the different resolution
EricHallahan#1051: That sounds like something that you could calculate.
alstroemeria313#1694: i think you don't have to if you just calculate the right scale for the current resolution.
alstroemeria313#1694: it is just 1 / sqrt(h * w)
alstroemeria313#1694: for scaled dot product
alstroemeria313#1694: then we don't have to fine-tune to use it at a different resolution
cfoster0#4356: I think that works out, yeah
alstroemeria313#1694: https://cdn.discordapp.com/attachments/821173872111517696/911266604862558248/demo_00020-6.png 20 epochs w/ learned scales
EricHallahan#1051: Anything that lets you adapt to a different resolution is a win in my book.
|
inox#5400: this is one of the early post-alexnet comparisons https://www.youtube.com/watch?v=VhLe-u0M1a8 according to francois chollet <https://arxiv.org/abs/1610.02357>
cfoster0#4356: It's typically 1 / sqrt (elements each entry contracts over), right?
alstroemeria313#1694: @EricHallahan wow. loss is going down faster w/ the explicitly calculated scale
alstroemeria313#1694: yes
alstroemeria313#1694: with transposed attention there are h * w "dimensions" in each sequence item
EricHallahan#1051: hmm, I wonder why lol
alstroemeria313#1694: bc my init was wrong
alstroemeria313#1694: it was 1 / sqrt(c)
alstroemeria313#1694: so i am calculating `scale = (h * w)**(-1/4)` and multiplying both q and k by this
alstroemeria313#1694: before multiplying them together
alstroemeria313#1694: according to the openai diffusion people this is a trick that makes fp16 training more stable
alstroemeria313#1694: like if you don't calculate the attn matrix then scale it down
alstroemeria313#1694: you overflow less
alstroemeria313#1694: and since dim is huge with this
alstroemeria313#1694: it's probably a good idea to do
tpapp157#3643: Why do you multiply both matrices by the fourth root rather than one by the square root?
alstroemeria313#1694: avoiding underflow/overflow in fp16
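As a sketch, with q and k shaped as in the transposed-attention example above:
```
def scaled_scores(q, k, h, w):
    # Multiplying q and k each by (h*w)**(-1/4) gives the same product as
    # dividing the logits by sqrt(h*w) afterwards, but the intermediates
    # stay smaller, which avoids fp16 overflow when h*w is large.
    scale = (h * w) ** (-1 / 4)
    return (q * scale) @ (k * scale).transpose(-2, -1)
```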
alstroemeria313#1694: a pity we don't have GEMM in pytorch
alstroemeria313#1694: it takes an explicit scale parameter
alstroemeria313#1694: @EricHallahan val loss w/ the explicitly calculated scale reached the learned scale 20 epoch value at 13 epochs
|
alstroemeria313#1694: this seems like an oversight in the paper really
EricHallahan#1051: 100% is an oversight.
alstroemeria313#1694: explicitly calculated scale should be better for their applications too right, not just diffusion
alstroemeria313#1694: like the bias thing isn't an oversight, it probably *is* better for their application
tpapp157#3643: If you're doing a learned scale, couldn't you just initialize it at the typical value and just see where it goes from there. I don't think there's anything special about the 1/sqrt(d) value other than people have found that it generally works well.
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/911295635095974018/Screen_Shot_2021-11-19_at_8.43.27_AM.png
alstroemeria313#1694: what is special about it is that it is a good choice regardless of d
alstroemeria313#1694: which is important because for transposed self-attention we change d if we feed in a different sized image.
tpapp157#3643: true
alstroemeria313#1694: and we want to be able to do this in inference.
alstroemeria313#1694: i need to test this explicitly
tpapp157#3643: Yeah it's not uncommon for parameters which should be scale invariant to not be because the network has learned to finetune itself around the training parameters somehow.
tpapp157#3643: For example, my recent work with alibi allowed the coefficients to be learned and they were consistently different depending on what crop size was used during training.
MicPie#9427: „Imagine a vector in \(\mathbb{R}^{\mathrm{k}}\) with values all \(\mathrm{c}\).
Its Euclidean length is \(\sqrt{\mathrm{k}} \mathrm{c}\).
Therefore, we are dividing out the amount by which the increase in dimension increases the length of the average vectors.“
From http://peterbloem.nl/blog/transformers
TeXit#0796: **MicPie**
|
Compile Error! Click the :errors: reaction for more information.
(You may edit your message to recompile.) https://cdn.discordapp.com/attachments/729741769738158194/911301008074895430/736597678653177908.png
MicPie#9427: Aha! 🤔 :berk:
alstroemeria313#1694: well. it's ok i guess if it ends up w/ somewhat different parameters when fine-tuned at different scales.
alstroemeria313#1694: but i need to test whether *the original computed scale actually still works*
alstroemeria313#1694: even if it is not the best
alstroemeria313#1694: like it should not break entirely like a fixed scale would.
alstroemeria313#1694: what http server can i just like, run to serve files from a directory.
alstroemeria313#1694: and it should support fast static file serving and range headers
alstroemeria313#1694: and not have complicated configuration.
alstroemeria313#1694: so python3 -m http.server doesn't work for this.
alstroemeria313#1694: it is single threaded and doesn't do range headers
alstroemeria313#1694: ok the npm http-server package does this
alstroemeria313#1694: transferring WAY faster than rsync or the python http server now
faraday#0862: did anybody discuss OpenAI signup move today? it gives access to masses of developers now
faraday#0862: though it created a pocket of opportunity for a select few with content creator websites etc
faraday#0862: and I see all of them doing the same thing with few tweaks and succeed equally
faraday#0862: it was a gold rush with gpt-3
faraday#0862: which most of us didn’t have the opportunity to even take a peek at
faraday#0862: I wonder whether every such big advancement will happen like this from now on, creating jobs and opportunities for a select few
|
mo#0466: any examples of profitable gpt3 apps?
mo#0466: also, the openai api costs money
baldbaliff#2861: A year ago I would say yes, but we have other options out there. There is Nvidia's Megatron, which is open source, and GPT-J is also good.
baldbaliff#2861: On whether it's creating for a select few, that is
Deleted User#0000: What is the best open source dataset for NLP?
StellaAthena#3530: For language modeling? Probably the Pile
ersatz#0001: How can AI Dungeon generate NSFW content when they are using the OpenAI API and this is against the terms of use?
StellaAthena#3530: … by violating TOS?
Deleted User#0000: Yes. Is there an API to connect with the Pile?
StellaAthena#3530: @Deleted User You can download the dataset from pile.eleuther.ai
rom1504#5008: Yeah that one is decent
Otherwise, Nginx is not really difficult to configure and really fast
Lurker#3691: Any NSFW generations are handled by their in-house Latitude model (fine-tuned GPT-J): https://latitude.io/blog/the-walls-approach "If third party providers that we leverage have different content policies than Latitude’s established technological barriers then if a specific request doesn’t meet those policies that specific request will be routed to a Latitude model instead."
Deleted User#0000: Thanks !
kurumuz#5695: they are special
faraday#0862: rytr.me , copy.ai , jarvis.ai and others.. they are all prompt-engineering products, slightly modifying GPT-3 response. these became heavily used in SEO I think (I wouldn't use them though, considering faulty facts)
faraday#0862: definitely. without gpt-j the scene would be completely different
faraday#0862: i heart gpt-j
mo#0466: thanks, fascinating
mo#0466: I used to be into online marketing a bit
|
mo#0466: so I know their scene 😄
faraday#0862: it's even more fascinating that *each* of these products made decent money out of GPT-3, despite poor output imo
mo#0466: do you have any infos about how much money they make?
mo#0466: gotta keep in mind gpt3 isn't free
faraday#0862: they are reflecting that in the price. these services all use similar pricing structures (because GPT-3 pricing)
faraday#0862: they are expensive and price by generated word count. the creative room with these products, at least for Rytr.me, is that they can pull you in with GPT-3 but execute GPT-2 instead when the conditions are right for it
mo#0466: it's almost angering how trivial all these apps are, isn't it? 😄
faraday#0862: or maybe a simpler heuristic if they have constrained scenarios
mo#0466: even worse, i've been in gpt3 beta for a while 😄
faraday#0862: I've seen something around here that was worth a medal 😄 I think @CarsonPoole must have found a good way to build coherent blog articles with GPT-J
𓅬 gabriel_syme 𓅬#3220: sometimes all you need is to be the first one
𓅬 gabriel_syme 𓅬#3220: not the best one
ersatz#0001: thanks for the explanation!
ersatz#0001: how do you know that they are using GPT-J btw?
kurumuz#5695: ah they said it themselves pretty sure
kurumuz#5695: so when the context gets caught by the openai filter it's sent to the gpt-j model
nostalgebraist#3542: https://latitude.io/blog/latitude-roadmap
> What’s shipped
> New Griffin (a fine-tuned GPT-J model)
ersatz#0001: thanks
|
ersatz#0001: so this was wrong?
kurumuz#5695: depends on the timeframe
kurumuz#5695: they were special for a while (a year or so?); AID-like games were completely disallowed by the OAI API TOS
kurumuz#5695: then they had a fallout where openai demanded filtering of a lot of stuff pretty much
kurumuz#5695: they will completely move away from openai, relationships seem worse than ever ig
ersatz#0001: because OpenAI is banning NSFW?
ersatz#0001: are they waiting for the big boy model from Eleuther? if so they could wait a looooong time
kurumuz#5695: no they did not
kurumuz#5695: they went with AI21 for the big model
ersatz#0001: AI21 is ok with NSFW?
kurumuz#5695: I have no idea. details are not clear there but they might have gotten a special deal like they did back then with OAI.
faraday#0862: damn, I had the same idea with wordtuneread. nicely executed
faraday#0862: however these business models are too costly for the value they produce
kurumuz#5695: you can make it cheap if you know what you are doing for sure.
alexandrost#2936: Hi! I've been away for a while. Which is the largest open model currently available ?
cfoster0#4356: Like, period? Probably a T5 of some sort
EricHallahan#1051: Probably the largest T5/mT5.
alexandrost#2936: Thanks guys, how many parameters does it have ?
EricHallahan#1051: Welcome back. :hap:
alexandrost#2936: Thank you!
|
EricHallahan#1051: mT5 XXL is 13B.
EricHallahan#1051: https://arxiv.org/abs/2010.11934
alexandrost#2936: Oh I see, so it has more parameters than even GPT-J
alexandrost#2936: I wonder if its performance on nlp tasks is linearly higher as well
EricHallahan#1051: Megatron 11B also exists but it sucks for the most part.
alexandrost#2936: Do you think that mT5 could run on a single rtx 3090?
kurumuz#5695: no
kurumuz#5695: it was worse than gpt neo 2.7b for us
kurumuz#5695: and like its clearly worse than j 6b
EricHallahan#1051: If you are trying to generate text you also won't find much success with mT5.
kindiana#1016: bigger number better model ;P
kurumuz#5695: bigger the better
alexandrost#2936: So for text generation a la gpt-3, by best option would be GPT-J?
StellaAthena#3530: Possibly T0, a LM-adapted and multitask-tuned T5 model
StellaAthena#3530: But one of those two seem like the best option that’s currently freely available
EricHallahan#1051: I honestly wonder how much training would be stunted by tuning Megatron 11B on pile rather than training from scratch.
alexandrost#2936: Oh I see thank you
StellaAthena#3530: You can find the T0 model (technically T0++, a version trained on more tasks) here: https://huggingface.co/bigscience/T0pp
alexandrost#2936: Amazing , thank you @StellaAthena
alexandrost#2936: Do you think I'd be able to run T0 (the 11b model) on a rtx 3090?
|
StellaAthena#3530: No clue
StellaAthena#3530: @kurumuz says you can’t run mT5 on it, so probably not
alexandrost#2936: Ok I see!
kurumuz#5695: mT5 XXL is 13b though
kurumuz#5695: i think T0 should fit but idk about the activations, so cant guarantee full context or anything
alexandrost#2936: So, in order to run such a large model you'd need a system with two GPUs, using parallelize ?
kurumuz#5695: you cant do decoder caching with enc-dec right
kurumuz#5695: so that would be much slower for generation
kurumuz#5695: or get a bigger gpu like rtx a6000 or a100
alexandrost#2936: Oo lala
kurumuz#5695: i am interested as well
StellaAthena#3530: The problem is that we don’t know what the partial training done on it is, right?
alexandrost#2936: So T0 could work for generation but much slower than gpt-j?
kindiana#1016: its the dataset from the roberta paper or something
kindiana#1016: iirc
EricHallahan#1051: Webtext IIRC?
kurumuz#5695: i heard webtext
kurumuz#5695: do we know for how many tokens
kindiana#1016: its similar to gpt3 mix
kurumuz#5695: yes
|
alexandrost#2936: Thank you
kurumuz#5695: @alexandrost just try models and see which one is better for your task.
alexandrost#2936: Yes , you are right
alexandrost#2936: I wonder if there is any model that could allow me to do the following: map a bag of words into a well formed sentence (not necessarily including the exact words found in the input bag of words)
alexandrost#2936: I am guessing not... but I was thinking whether I could use an already trained transformer model and repurpose it for this task.
StellaAthena#3530: Yeah that sounds like something it would be very good at
Sphinx#2092: Yes you can?
Sphinx#2092: Unless you mean the codebase itself
kurumuz#5695: i was talking about HF, just checked and it can
elderfalcon#4450: I've been scouring this channel, was there a recent 4-step CLIP-guided diffusion paper that just came out, or am I going crazy? Could have sworn I saw it earlier today and can't find now, dang....
elderfalcon#4450: (if you do have it or something like it, please feel free to lmk, and I can yeet the link over to a bunch of other clip connoisseurs! :D)
cfoster0#4356: Not about CLIP but about 4-step diffusion? It's this https://openreview.net/forum?id=TIdIXIpzhoI
tpapp157#3643: I messed with progressive distillation for a diffusion model recently and it works great. How many steps you can actually distill down to depends on the complexity of the data and how tolerant you are to reduced quality. I was able to distill my recent project down to 64 steps with no obvious quality loss which is great. I don't think distilling down to 4 steps is realistic in any scenario where you want to maintain near original quality though.
special k#3707: Can we distill 175b into a total of 2 parameters or even 3?
Kia#2550: No
Teemochu#8740: The ToS that I read doesn't seem to exclude it
Teemochu#8740: but I am given a bit of pause by the wording, it smells like something that would start clamping sometime
Teemochu#8740: can't quite put a finger on it except that the specific mention of elections is something you'd really only write from a specific mindset that isn't very laissez-faire
naclbbr#9203: I too am interested in T5 zero shot model. The public Megatron 11B model wasn't very good, but I think it was only trained with 100-150GB web text or so, which wasn't quite enough especially considering that the corpus is a generic (noisy) web crawl.
naclbbr#9203: Megatron 11B model was also such a pain to run (I gave up once trying)
|
kurumuz#5695: I am an AR bull lol
kurumuz#5695: well T5 can be AR too but GPT type
elderfalcon#4450: That's probably why, I associatively encoded it as 4 step clip, haha. This is much better, thanks! That's probably why I had trouble looking it up, CLIP was my keyword I think! :D
alstroemeria313#1694: so how do you determine the best schedule for lr decay?
alstroemeria313#1694: i am going to train this small secondary diffusion model with exponential lr decay but only because i've done enough runs that i know approximately how many steps it needs without lr decay
alstroemeria313#1694: does anyone have any idea what to do on new archs/datasets other than the good old "drop lr when loss stops going down"
alstroemeria313#1694: hey like.
can i make a custom LR scheduler.
that just *watches a file* and changes the lr when the contents of the file change
alstroemeria313#1694: so i can change the lr manually without stopping and resuming the run
nshepperd#2316: that seems like a good idea
alstroemeria313#1694: how fast is it to just read a file every step
nshepperd#2316: academic things like https://www.microsoft.com/en-us/research/publication/stochastic-gradient-tricks/ suggest learning rate proportional to step^(-1/2) but i'm skeptical of how useful that is irl
alstroemeria313#1694: oh, Bottou
alstroemeria313#1694: That's from 2012 also
nshepperd#2316: probably so fast you wouldn't notice
alstroemeria313#1694: maybe it could be json
alstroemeria313#1694: so i could put other stuff in it later
alstroemeria313#1694: like momentum values or smth
nshepperd#2316: yeah
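A minimal sketch of such a scheduler, assuming a JSON file like {"lr": 1e-4} (hypothetical class, not a built-in):
```
import json
import os

class FileWatchLR:
    # Re-checks the file's mtime every step and, when it changes, loads
    # the JSON and applies the lr to all of the optimizer's param groups,
    # so the lr can be changed mid-run without stopping and resuming.
    def __init__(self, optimizer, path):
        self.optimizer = optimizer
        self.path = path
        self.mtime = 0.0

    def step(self):
        mtime = os.path.getmtime(self.path)  # assumes the file exists
        if mtime == self.mtime:
            return
        self.mtime = mtime
        with open(self.path) as f:
            config = json.load(f)
        for group in self.optimizer.param_groups:
            group['lr'] = config['lr']
```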
|
alstroemeria313#1694: it isn't useful irl, step^(-1/2) decays like adagrad and that decays too quickly
nshepperd#2316: oh
alstroemeria313#1694: maybe you could do (1 + step / 100)^(-1/2) or smth
alstroemeria313#1694: (starting step from 0 here)
alstroemeria313#1694: rescaling the step value like this shouldn't actually affect an asymptotic convergence analysis either...?
alstroemeria313#1694: so long as the divisor is a constant
nshepperd#2316: seems like it should be fine yah
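As a LambdaLR, the (1 + step / 100)**(-1/2) schedule would look roughly like this (the model and optimizer are stand-ins):
```
import torch
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(10, 10)  # stand-in model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# lr(step) = base_lr * (1 + step / 100) ** -0.5: inverse square root
# decay, softened so the lr doesn't collapse in the first few steps.
sched = LambdaLR(opt, lambda step: (1 + step / 100) ** -0.5)
```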
alstroemeria313#1694: in practice i either use constant lr, lr where i manually multiply it by a scalar when i think the loss isn't going down anymore, or an exponential lr schedule
Kharr#7888: ever seen this? https://arxiv.org/abs/1711.00489 I'm a fan.
alstroemeria313#1694: ah, i keep running into bugs w/ gradient accumulation steps using pytorch lightning
alstroemeria313#1694: i am trying to train diffusion models and i know their losses/gradients are super noisy
alstroemeria313#1694: so large batches probably help a great deal late in training
CRG#8707: There's a section on the scaling laws paper about LR schedule
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/911618447375470652/Screenshot_20211120-150600.png
https://cdn.discordapp.com/attachments/729741769738158194/911618447664898048/Screenshot_20211120-150542.png
alstroemeria313#1694: the question is when to decay
tpapp157#3643: In my recent runs training a diffusion model I did exactly that. Used a batch size of 4 early in training, then bumped it to 8, 20, and finally 40. I did this primarily because I noticed a larger batch size correlated with sample diversity.
tpapp157#3643: I made the switches manually though by monitoring the training loss and switching when it started slowing down.
alstroemeria313#1694: i'm kinda sleepy still, mb will write the code later
Kharr#7888: That's a pretty lame bug given how useful accumulation is 😦
alstroemeria313#1694: it was doing utterly bizarre things
|
alstroemeria313#1694: loss would jump up and down *at epoch boundaries*
alstroemeria313#1694: What do epoch boundaries even have to do with gradient accumulation!
Kharr#7888: Yep, that's super odd. It works great in base PyTorch if you ever want to switch away.
alstroemeria313#1694: How hard is it to do DDP in base PyTorch
alstroemeria313#1694: Like with clusters
alstroemeria313#1694: i am usually training models on boxes with a lot of GPUs now so the effective batch size has always been at least 32
alstroemeria313#1694: even for the huge models
alstroemeria313#1694: but i think i can benefit further
Kharr#7888: Try the toy demo https://pytorch.org/tutorials/intermediate/ddp_tutorial.html (more here: https://pytorch.org/tutorials/intermediate/dist_tuto.html) and if it runs on your setup, probably pretty easy to switch over.
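The toy demo boils down to roughly this, assuming a `torchrun --nproc_per_node=N train.py` launch (illustrative model and loss):
```
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group('nccl')  # torchrun supplies rank/world size
    device = int(os.environ['LOCAL_RANK'])
    torch.cuda.set_device(device)
    model = DDP(torch.nn.Linear(10, 10).to(device), device_ids=[device])
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for step in range(100):
        x = torch.randn(32, 10, device=device)
        loss = model(x).pow(2).mean()  # dummy loss
        opt.zero_grad()
        loss.backward()  # gradients are all-reduced across ranks here
        opt.step()
    dist.destroy_process_group()

if __name__ == '__main__':
    main()
```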
Orz#3023: Heyy
how many records of training data do we lose on average if we use fp16 for deepspeed?
tpapp157#3643: Are you accumulating properly with partial batches (if that's what your dataloader provides at the end of an epoch)?
alstroemeria313#1694: it's whatever pytorch lightning does
alstroemeria313#1694: it replaces the samplers in the dataloaders with its own
alstroemeria313#1694: which are DistributedSamplers and probably drop partial batches
inox#5400: use restarts so you decay multiple times at different scales, leave it running forever and one of the cycles will be good
alstroemeria313#1694: eheh
MicPie#9427: for multi-node ddp training I have seen people using slurm, but I never tried it out (yet); maybe FYI: https://gist.github.com/TengdaHan/1dd10d335c7ca6f13810fff41e809904
alstroemeria313#1694: ty :blobcutehappy:
louis030195#2462: anyone ever played with blenderbot2? If you have any tips to reduce computational needs, I can start it on an RTX3080, it takes 6 GB VRAM but after 4-5 messages it hits the 10 GB and RIP?
|
louis030195#2462: maybe I should just try distillation
malone#6357: This has been on my to do list. I'd be curious to hear about how it goes.
malone#6357: I've been lurking for a little while, but I've been feeling chatty lately so I figured I'd introduce myself. My training is in NLP and computational social science. My early research focused on applying NLP to problems like measuring psychological constructs in natural language. I left a post doc (doing that sort of stuff) and have since been happily occupied in applied research roles that are somewhere between ML/NLP and MLOps.
I found you all via GPT-J. I also have a 3 month old and this has been a nice place to hang out in the middle of the night.
So, 🙋and thanks for creating such a rich community!
If anyone cares, I'm Joe Hoover irl.
EricHallahan#1051: Welcome!
malone#6357: Thanks!
𓅬 gabriel_syme 𓅬#3220: Welcome and congratulations! Enjoy these moments, they fly by 🙂
𓅬 gabriel_syme 𓅬#3220: Hmm anyone knows where #eegi is?
EricHallahan#1051: Archived.
EricHallahan#1051: Connor may have hidden it though IDK.
EricHallahan#1051: I doubt he intended to do that if he did.
austinvhuang#6880: I thought that was interesting when it came out. But I've never heard of anyone using it in practice and I'd like to understand the reason.
StellaAthena#3530: The way LMs are implemented doing progressive growing of batch size is a bit of a pain
kurumuz#5695: hmm, why
kurumuz#5695: i would imagine increasing the gas or batch would be quite easy on pytorch at least
|
kurumuz#5695: and GPUs
StellaAthena#3530: Well DeepSpeed doesn’t let you
kurumuz#5695: oh deepspeed
kurumuz#5695: yea i didnt even think about that
bmk#1476: does deepspeed not like changing shapes
EricHallahan#1051: No it does not.
𓅬 gabriel_syme 𓅬#3220: wonder if anyone has a decent metric for shape diversity (imagine a generative model that outputs 2d shapes)
𓅬 gabriel_syme 𓅬#3220: for the moment I am thinking of doing something like calculating intersections or IoU between each layout and all other layouts, and doing something with those distributions. But I don't really know what I'm doing lol. Anyone dealt with smth like this before?
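One crude way to turn that into a number, assuming the layouts are rasterized to boolean masks on a shared grid (hypothetical helpers):
```
import numpy as np

def iou(a, b):
    # a, b: boolean occupancy masks of two layouts on the same grid.
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def mean_pairwise_iou(masks):
    # Average IoU over all pairs (needs at least two masks); a lower
    # mean means the generated shapes overlap less, i.e. more diversity.
    pairs = [(i, j) for i in range(len(masks)) for j in range(i + 1, len(masks))]
    return sum(iou(masks[i], masks[j]) for i, j in pairs) / len(pairs)
```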
faraday#0862: dear DL/ML people here: what computer do you use when working? I'm imagining most of you use your computer similar to a thin client, connecting to remote GPU resources e.g. Colab. I wanted to ask if it's a good idea to aim for a Mac M1 Max with a 24- or 32-core GPU and 64 GB of memory to have a future-proof machine for ML work? I've read PyTorch wants to support M1 GPUs: https://github.com/pytorch/pytorch/issues/47702
AI_WAIFU#2844: I'm typing this on an 8-year-old macbook air, I connect to the cloud if I need to do any heavy lifting.
𓅬 gabriel_syme 𓅬#3220: I wonder whether there is a parallel idea to that of trophic cascades in ecosystems in large NNs. I guess adversarial examples are closeouts? But it would be interesting to be able to trace how input data affect a model in such a way, if they do.
https://youtu.be/ysa5OBhXz-Q
mistobaan#2737: anyone using colab Pro+ ? https://colab.research.google.com/signup
Kia#2550: I Used it before
Kia#2550: Only thing to keep in mind:
Kia#2550: Free - K80's
Pro - T4/P100
Pro+ - V100/A100
Kia#2550: That's what you would commonly get with Pro, but don't use it too much
mistobaan#2737: indeed went for it. I got a v100 16Gb not bad
|
Kia#2550: Yup, just don't use it too much
Kia#2550: They'll most likely kick you to a P100 or something Lower
malone#6357: Thank you! We're working on it ☺️. Strange how something can be simultaneously the best and the hardest 🥱
alstroemeria313#1694: what if,,, diffusion capsnets
alstroemeria313#1694: They're autoencoders right.
alstroemeria313#1694: MSE reconstruction loss.
alstroemeria313#1694: And the decoder is mostly there to help train the encoder?
alstroemeria313#1694: But if the decoder were diffusion based, we could *generate* from synthetic outputs of the terminal capsules
alstroemeria313#1694: Like control what we get in the scene and where and with what poses
glazgoglabgalab#5255: you seem pretty bullish on diffusion why is that? is it mainly quality?
nshepperd#2316: what even are capsnets
alstroemeria313#1694: https://en.wikipedia.org/wiki/Capsule_neural_network
alstroemeria313#1694: idk, we need more control over the scenes we generate
nshepperd#2316: sounds like ... grouped features with some kind of special group-wise nonlinearity?
nshepperd#2316: all the weird informal biological analogies in the explanations make my head hurt ^^;
alstroemeria313#1694: diffusion autoencoders let you get high quality images conditional on a latent from the learned latent space
alstroemeria313#1694: It sidesteps the thing where autoencoder outputs are blurry bc they use L1 or L2 reconstruction loss.
alstroemeria313#1694: And you can't make them less blurry without making the latent space bigger/contain more information.
alstroemeria313#1694: With diffusion autoencoders the latent corresponds to a *distribution* of images you can sample from, all of which are sharp/have good textures
alstroemeria313#1694: (Well, technically not all but the bad ones are improbable)
|
alstroemeria313#1694: My interest in them is generating high quality images in a controllable manner.
alstroemeria313#1694: you could also add an adversarial loss but then you run into GAN training problems.
nshepperd#2316: "ML engineer has a problem. She thinks 'I know, I'll use an adversarial loss!'. Now she has three problems"
alstroemeria313#1694: Ahah
alstroemeria313#1694: (btw I have tried LPIPS reconstruction autoencoders and they produce *weird* artifacty images when the latent space is too small, LPIPS is not a panacea, you still have the problem where the information bottleneck restricts how good the reconstructions can be)
alstroemeria313#1694: (VQGAN only got to where it is by using L1 + LPIPS + adversarial)
austinvhuang#6880: That's interesting.
I guessed it's either that it's not enough of a benefit or there was an API UX barrier. It sounds like the latter is at least a contributor (at least as long as deepspeed is in its current state).
Is there reason to believe that if deepspeed interface weren't an issue that it would or wouldn't offer much benefit? Or is it just unknown because it's too much of a pain to investigate at this point?
StellaAthena#3530: I think there’s weak evidence that it’s superior to LR decay
alstroemeria313#1694: do gradient accumulation steps instead
alstroemeria313#1694: or does it not let you change that/do it manually either.
kurumuz#5695: yea i agree
alstroemeria313#1694: also why not both
alstroemeria313#1694: (taking the square root of the schedule so that when you combine the lr decay and batch effects they result in the original schedule)
kurumuz#5695: @alstroemeria313 but doesnt higher batch size help even at the start and not hurt?
kurumuz#5695: so you could always have high bs instead of having a schedule for it
alstroemeria313#1694: It's not just quality, it's having a stationary objective so you don't have to deal with GAN training instabilities
|
alstroemeria313#1694: lower batch size at the start is often better if you can do more optimizer steps in the same wall clock time by lowering it
alstroemeria313#1694: Yes if your optimizer steps don't speed up by lowering batch size you shouldn't lower batch size
alstroemeria313#1694: This is because the gradient noise scale is lower at the start bc the gradients for different examples point more in the same direction.
alstroemeria313#1694: see the OpenAI gradient noise scale paper <https://openai.com/blog/science-of-ai/>
tpapp157#3643: Right. In the early stages of training the network is still just learning how to work with the input data and very simple structures so the gradient direction of pretty much all data samples point in the same direction. In this regime it's better to use a smaller batch so you can take more steps in the same amount of time. Of course you need to balance this with how much you care to get into the weeds of optimizing your training schedule.
timudk#8246: Would you train a diffusion autoencoder just by maximizing the ELBO?
timudk#8246: I was wondering why the new VQGAN paper images (https://openreview.net/forum?id=pfNyExj7z2) look so soft
tpapp157#3643: Any image generator that applies a loss in pixel space will tend to produce blurry results because sharp detail is very hard to predict and results in large loss values if the network guesses wrong.
timudk#8246: Regular diffusion models are trained in "pixel space", no? They don't look blurry.
alstroemeria313#1694: well, a reweighted ELBO for the decoder, but there would be no sampling between the encoder and decoder
alstroemeria313#1694: you could add it and add a KL loss
alstroemeria313#1694: it's technically in pixel space but it's different from what @tpapp157 is talking about in a way that gets around the problem
alstroemeria313#1694: and actually they just changed the combination of losses
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/912030238404390952/Screen_Shot_2021-11-21_at_9.22.31_AM.png
alstroemeria313#1694: it used to be pixel space L1 (laplace) plus perceptual plus adversarial, probably a different blend
alstroemeria313#1694: they have added L2 and made its weight high
alstroemeria313#1694: L2 smooths (it seeks the mean)
alstroemeria313#1694: specifically the thing a diffusion model outputs is related to the *gradient* of the data distribution
alstroemeria313#1694: like, the distribution of the training set perturbed by various levels of Gaussian noise.
elderfalcon#4450: I think you could probably use anything that does that kind of thing, I had some sneaking suspicion that people were moving away from the ELBO for generative stuff, isn't that correct?
|
alstroemeria313#1694: so when we train diffusion models with MSE loss it seeks the mean *gradient* for each point and noise level, not the mean *final output*.
elderfalcon#4450: Hmmm, I don't think gradients are generated unless the target is off mean? I guess with very bizarre gradients it could end up integrating to be off mean. That would surprise me (but then again ML is constantly bizarre).
alstroemeria313#1694: ?
alstroemeria313#1694: no we are learning the gradient of the data distribution with the model, it is like an ODE/SDE
alstroemeria313#1694: the model output is the same size as the input and to sample we integrate the ODE/SDE
Some Point Process#3793: Does the forward diffusion (denoising) part of the inference, in which CLIP gradient conditioning is also possible, require that the model had been trained on clip gradients (or any other classifier scores)? I've wondered about how guidance using external gradients (not the *gradients of the data distribution* that the diffusion model already learned during sgd) works at more of a gears level 🙂
alstroemeria313#1694: i'm not using CLIP gradients during the forward process
Some Point Process#3793: oof, then how does the "guidance" happen?
alstroemeria313#1694: either you use the CLIP embedding as basically a class label for a model which was trained with a CLIP embedding input
alstroemeria313#1694: or you use CLIP gradients in the reverse process to do sampling conditional on a CLIP embedding.
alstroemeria313#1694: or both.
Some Point Process#3793: Ah ok
alstroemeria313#1694: we want to try to bake guidance into the model but this is maybe difficult to do
elderfalcon#4450: My deepest apologies, sorry. Thanks for clarifying.
elderfalcon#4450: It almost reminds me of how in the early days of transformers, LSTM distillation had big potential. (I'm assuming that died with the ultra long sequence lengths bit). But I agree about wondering about the guidance stuff, it would be so cool to calculate all that explicitly/implicitly somehow in the forward pass.
alstroemeria313#1694: the problem is always that we train the different timesteps independently of each other
alstroemeria313#1694: and guidance changes the entire *trajectory*
alstroemeria313#1694: so we could bake guidance in at each individual point but sampling wouldn't be ending up at those points anymore
cfoster0#4356: Forward process is noising, reverse process is denoising, FYI
alstroemeria313#1694: "That would mean to reverse entropy" :yes: https://cdn.discordapp.com/attachments/729741769738158194/912044350031151174/Screen_Shot_2021-11-21_at_10.18.21_AM.png
|
elderfalcon#4450: Hmm. Because it would no longer be i.i.d, then?
StellaAthena#3530: @alstroemeria313 did you ever get around to coding up the discrete diffusion I derived
alstroemeria313#1694: no :/
alstroemeria313#1694: we wouldn't be able to easily sample from any timestep in the forward process w/o going through the intermediate timesteps.
elderfalcon#4450: Ty! :D
nshepperd#2316: my day job is reversing entropy to make artwork
alstroemeria313#1694: the thing i have thought about doing is taking an existing CLIP conditioned model and finding the CLIP guided reverse sampling trajectories
alstroemeria313#1694: then fine-tuning a student to match the teacher targets w/ guidance applied
elderfalcon#4450: Well, I reverse artwork to make entropy, it's called *implementing papers*
cfoster0#4356: @StellaAthena last I checked the doc was missing how to calculate the forward process posterior etc.
elderfalcon#4450: Similar to that vanilla diffusion fine-tuning model.
StellaAthena#3530: Oh RIP. I may have written that up and not uploaded it 😦
alstroemeria313#1694: oh
alstroemeria313#1694: wait how did the diffusion stylegan-nada analogue even work
alstroemeria313#1694: Wait how *did* that work.
alstroemeria313#1694: this. https://arxiv.org/abs/2110.02711
EricHallahan#1051: Diffusion solves The Last Question. :ultraberk:
||Yes, I know *The Last Question* is about decreasing the entropy of the **whole** universe. Joke still stands.||
elderfalcon#4450: Oh, I was thinking the one where they still had ~FID but with only ~16-32 steps instead of hundreds/thousands via a progressive distillation scheme.
alstroemeria313#1694: oh that. yeah that works, i've tried it on smaller test models
|