gabriel_syme#3220: Ah ok, gotcha
Louis#0144: and we do this via preference learning
gabriel_syme#3220: Makes sense, I'm without coffee and on phone, sorry
gabriel_syme#3220: Perfect setup for me, I'll take a look when it's out :)
Louis#0144: we're using this to circumvent the fact we were unable to get revision data for CARP
Louis#0144: how do we mimic that revision data
Louis#0144: for CARP guided control
alstroemeria313#1694: nope
alstroemeria313#1694: gradient noise scale is still higher even for the gigantic layers with tons of activations
alstroemeria313#1694: where i went up by 4x channel count each downsample
alstroemeria313#1694: oh well.
alstroemeria313#1694: @nshepperd <https://github.com/crowsonkb/esgd> btw, i turned the repo public
nshepperd#2316: yay~
nshepperd#2316: ESGD-M :hap:
chirp#4545: dumb optimizer question: i see lots of algorithms take the "gradient outer product" `G @ G^T`, or try to approximate it. what does this outer product mean?
chirp#4545: was just reading the AdaGrad paper and they use such an outer product, but they don't explain why
alstroemeria313#1694: i'm... not entirely sure. it's a rank one update to a full matrix preconditioner and it accumulates these rank one updates over time
Sphinx#2092: It's equivalent to the hessian in this setting.
Sphinx#2092: https://en.wikipedia.org/wiki/Fisher_information#Matrix_form
Sphinx#2092: At least if my pattern recognition skills are good.
chilli#5665: What :thinkies:
StellaAthena#3530: The expected value of the Hessian of the log-likelihood function equals the negative of the expected value of the outer product of its gradient with itself. This is known as the Information Matrix Equality
StellaAthena#3530: This isn't unconditionally true though... it's true when we can exchange the order of the integral and derivative. In practice this happens frequently but not always.
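For reference, the identity being discussed (the information matrix equality), written out under the usual regularity conditions that allow exchanging differentiation and integration:

```latex
\mathcal{I}(\theta)
  = \mathbb{E}_{x}\!\left[\nabla_{\theta}\log p(x\mid\theta)\,\nabla_{\theta}\log p(x\mid\theta)^{\top}\right]
  = -\,\mathbb{E}_{x}\!\left[\nabla_{\theta}^{2}\log p(x\mid\theta)\right]
```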
chilli#5665: Optimization folk are wild
chilli#5665: Btw, @alstroemeria313 , does the natural gradient have anything to do with the units stuff we were talking about previously?
alstroemeria313#1694: i don't know what it is really ^^;;
alstroemeria313#1694: then... why does adagrad take the inverse square root of their outer product thing
alstroemeria313#1694: that entire family of optimizers including adam does this
Sphinx#2092: more pattern recognition skills would likely connect it to newton's method
alstroemeria313#1694: (In practice it works badly if you don't take the square root but)
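For context, a minimal sketch of the diagonal form of this update, which is what AdaGrad and its descendants use in practice: accumulate only the diagonal of the gradient outer product (elementwise squared gradients) and precondition by its inverse square root. The function name and signature are illustrative, not any library's API.

```python
import torch

def adagrad_diag_step(param, grad, sq_accum, lr=1e-2, eps=1e-10):
    # Accumulate the diagonal of the gradient outer product (elementwise g^2)
    # and scale the step by its inverse square root.
    sq_accum += grad.pow(2)
    param -= lr * grad / (sq_accum.sqrt() + eps)
    return param, sq_accum
```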
bmk#1476: and that thing is also the Fisher Information Matrix right
StellaAthena#3530: Yes, the expected value of the Hessian of the LLH is also known as the FIM
bmk#1476: there is a large amount of meme potential here https://cdn.discordapp.com/attachments/729741769738158194/935005161091391539/g.webp
alstroemeria313#1694: wonder if i should write some sort of gradient noise scale lr scheduler
alstroemeria313#1694: gradient noise scale is such a pain to compute though
bmk#1476: tbh i don't really intuitively get fisher information
alstroemeria313#1694: yeah me either
alstroemeria313#1694: like "expected value of the Hessian" the Hessian is uncertain?
bmk#1476: the hessian of the likelihood function gives you, like, the curvature of the likelihood function, right?
alstroemeria313#1694: also i don't fully understand the paper. like how are they getting their "optimal" step sizes for no gradient noise that they then scale.
alstroemeria313#1694: is it a Newton step or something, do I need more coffee
StellaAthena#3530: It's the hessian of a random variable, so yes?
alstroemeria313#1694: ahh
alstroemeria313#1694: i... do not know what the Hessian of a random variable is
Sphinx#2092: That's not quite correct. It's a function of two variables, X and theta.
Sphinx#2092: You are averaging over X, but differentiating with respect to theta.
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/935006998221709332/Screen_Shot_2022-01-23_at_7.03.57_PM.png
bmk#1476: I think the variance of the score makes more sense
chilli#5665: Well, it gives you your second derivative for all of your inputs :P
chilli#5665: And I think many people often interpret that as curvature
alstroemeria313#1694: I could just *get* the delta L_max with Hessian-vector products.
alstroemeria313#1694: Right?
bmk#1476: the variance of the score tells you, like, how much the slope of the likelihood function varies as you change theta
bmk#1476: so I guess it's about how sensitive the likelihood is to theta?
bmk#1476: still seems weird that it's about the score and not the likelihood function itself though
alstroemeria313#1694: but that's only one of the parts. the other part is getting the B_noise to scale the optimal step size down by.
alstroemeria313#1694: That part I have working
alstroemeria313#1694: It is potentially the more useful part
alstroemeria313#1694: Since you can apply it to any step size policy you already have
alstroemeria313#1694: But it is really slow and I am computing it to try and come up with some heuristics
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/935010060743110747/Screen_Shot_2022-01-23_at_7.16.07_PM.png
alstroemeria313#1694: Really wish I had that squared gradient thing
alstroemeria313#1694: It would be nearly free then
alstroemeria313#1694: On a single GPU
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/935010502772392017/Screen_Shot_2022-01-23_at_7.17.52_PM.png
alstroemeria313#1694: Yeah, I only just now got to reading this appendix and this is basically the scheme I had in mind
alstroemeria313#1694: If we could do the backpack style squared gradient thing then B_small is 1 and B_big is just your batch size
alstroemeria313#1694: But backpack doesn't actually work with resnets
alstroemeria313#1694: idk someone should code this up in Jax so people can use it on their TPUv3-8s
chilli#5665: I'm not sure it's easy to code this up in Jax 🤔
chilli#5665: Hmmm
chilli#5665: Yeah, not sure
chilli#5665: Simply doing vmap (grad(...)) won't get you this optimization
chilli#5665: Maybe defining some kind of custom higher order primitive
chilli#5665: Hmmmm
kindiana#1016: its also free to compute with microbatches
kindiana#1016: implemented in mtj
chilli#5665: I guess maybe you'd just compute it manually
alstroemeria313#1694: i was referring to the version where your small batch size is not 1
alstroemeria313#1694: and you are estimating from a bunch of microbatch gradients and their mean
chilli#5665: Are you sure it doesn't work now?
alstroemeria313#1694: all you have to do is square your microbatch gradients and pmean those
alstroemeria313#1694: along with pmeaning the unsquared ones
alstroemeria313#1694: then do the estimator based on the microbatch size and batch size
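A sketch of the estimator being described, following Appendix A of "An Empirical Model of Large-Batch Training": the inputs correspond to the full-batch and mean per-microbatch squared gradient norms mentioned above, and in practice the two intermediate quantities would each be EMAed before taking their ratio.

```python
def noise_scale_estimators(sq_norm_big, sq_norm_small, b_big, b_small):
    # sq_norm_big:   |mean gradient over the full batch|^2
    # sq_norm_small: mean over microbatches of |microbatch gradient|^2
    g_sq = (b_big * sq_norm_big - b_small * sq_norm_small) / (b_big - b_small)  # |true grad|^2 estimate
    s = (sq_norm_small - sq_norm_big) / (1.0 / b_small - 1.0 / b_big)           # trace-of-covariance estimate
    return g_sq, s, s / g_sq  # the last is the simple noise scale B_noise
```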
chilli#5665: Yeah, I think this is easy in both frameworks
chilli#5665: At least in principle
chilli#5665: I saw some reference to adding support for kfac, so I'd be a bit surprised if it didn't also work for their variance computation
alstroemeria313#1694: huh
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/935014166782152744/Screen_Shot_2022-01-23_at_7.32.27_PM.png
alstroemeria313#1694: Yeah I use *all* kinds of forward overrides
alstroemeria313#1694: Oh
alstroemeria313#1694: Is that only for second order stuff
chilli#5665: Sounds like it
alstroemeria313#1694: Ah
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/935014486820130877/Screen_Shot_2022-01-23_at_7.33.43_PM.png
alstroemeria313#1694: I need SumGradSquared
alstroemeria313#1694: Or rather the mean, but
alstroemeria313#1694: basically, i bet diffusion models can really benefit from automatic gradient noise scale per-layer lr tuning
alstroemeria313#1694: i am experimenting with some fixed heuristics rn.
alstroemeria313#1694: ... https://cdn.discordapp.com/attachments/729741769738158194/935016211840892948/Screen_Shot_2022-01-23_at_7.40.33_PM.png
alstroemeria313#1694: Of course it won't work on random arbitrary neural nets will it.
alstroemeria313#1694: I would love to be able to do a batch of hvps in a single pass
alstroemeria313#1694: hmm https://docs.backpack.pt/en/master/use_cases/example_resnet_all_in_one.html
alstroemeria313#1694: yeah i think it is supposed to work now.
alstroemeria313#1694: the first order ones that is.
chilli#5665: What's that reaction lol
alstroemeria313#1694: "Wow I wish I could actually do this on my models"
alstroemeria313#1694: Like the closest I can come is to train data parallel and do a different hvp with a different random vector on each shard
alstroemeria313#1694: Then take the mean of their squares
chilli#5665: Hvps are actually (more) efficient in Pytorch now, since we have support for forward mode AD (and can compose it with reverse mode AD) too
alstroemeria313#1694: Ohh?
alstroemeria313#1694: Is this in stable yet
chilli#5665: Uh... Don't think so
alstroemeria313#1694: i do an hvp like this right now. https://github.com/crowsonkb/esgd/blob/master/esgd/esgd.py#L136
chilli#5665: Yeah I think that's 50% slower?
alstroemeria313#1694: ohhh
chilli#5665: Than properly using forward mode
alstroemeria313#1694: so i am not just getting an hvp, i always want to use the loss and gradient at that point too
alstroemeria313#1694: i still get the gradient with the new thing?
chilli#5665: Uh, if you don't, it'd be easy to add lol
alstroemeria313#1694: Ahh
alstroemeria313#1694: Can I do batches of hvps efficiently
chilli#5665: Like, the "standard" approach for computing hvp involves computing the regular reverse mode quantity first iirc
chilli#5665: https://github.com/pytorch/pytorch/issues/10223#issuecomment-413935344
chilli#5665: Yeah just vmap it :P
alstroemeria313#1694: ahh
chilli#5665: Vmap composes with both jvp and vjp, at least on nightly
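For concreteness, a minimal reverse-over-reverse HVP in plain PyTorch, the pattern the linked esgd code is describing; the faster forward-over-reverse variant chilli mentions would compose a jvp with a grad transform instead. Note that the ordinary gradient falls out as a byproduct.

```python
import torch

def hvp(loss, params, v):
    # params and v are matching tuples of tensors; loss was computed from params.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * u).sum() for g, u in zip(grads, v))
    hv = torch.autograd.grad(dot, params)
    return grads, hv  # the regular gradient, and H @ v
```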
alstroemeria313#1694: yeah it would be nice if i could get hvps faster
alstroemeria313#1694: I do a "warmup" of doing one for each of the first twenty steps and then when my estimator is low enough variance I start doing one every ten steps.
alstroemeria313#1694: I do not actually need an hvp every step, thankfully
chilli#5665: Btw, this approach won't do any of the "fancy" optimizations that backpack is using, so that might still end up being more efficient
alstroemeria313#1694: But if I could efficiently do a *bunch* of hvps on the first step or the first few steps that would be nice too, in order to get the variance down quicker
alstroemeria313#1694: but anyway i am mainly interested in the noise scale thing rn.
chilli#5665: Yeah, the best ways of doing that are probably either
chilli#5665: 1. The functorch approach, which is probably more general but more... "brute force"
chilli#5665: 2. Use backpack, which is probably much more clever, but is probably less general
chilli#5665: I think with some smart compilation we can get 1. to similar perf as 2. while being still very general
chilli#5665: At least, without tensor cores throwing a wrench in the works
alstroemeria313#1694: oh yay, i got a SumGradSquared
alstroemeria313#1694: on a tiny mlp
chilli#5665: Nice
alstroemeria313#1694: so it's summed not meaned?
chilli#5665: I'm not an expert in backpack :P
chilli#5665: But I assume it's easy enough to divide afterwards
alstroemeria313#1694: it is pre-divided weirdly, it looks like
alstroemeria313#1694: like divided by batch size after the square
alstroemeria313#1694: i don't get it
alstroemeria313#1694: mm
alstroemeria313#1694: so i need to make an adaptive lr tuner that can take these sorts of stats from a variety of places, wherever you can manage to get them
alstroemeria313#1694: i'm... tired
alstroemeria313#1694: maybe i am just too tired but backpack's scaling is not making sense to me
alstroemeria313#1694: yeah the scaling is *weird* compared to batched grads i got from functorch
alstroemeria313#1694: to check it against.
alstroemeria313#1694: somehow each grad's squared l2 norm is scaled down by the batch size?
alstroemeria313#1694: i mean backpack's
alstroemeria313#1694: oh i get it
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/935038257924833280/Screen_Shot_2022-01-23_at_9.08.07_PM.png
alstroemeria313#1694: that's still a weird scaling though
alstroemeria313#1694: oh
alstroemeria313#1694: is it because backpack doesn't *actually* vmap
alstroemeria313#1694: so since my loss is mse and means over the batch
alstroemeria313#1694: it just returns the L2 norms of *the components* of the grad
alstroemeria313#1694: i.e. the loss function still sees the batch size and can divide by it
alstroemeria313#1694: and that will divide the L2 norms of the components
alstroemeria313#1694: so once openai gets their estimated |mean_grad|^2 and cov trace
alstroemeria313#1694: they EMA both of those
alstroemeria313#1694: and update the EMAs frequently
alstroemeria313#1694: and i can get the quantities that go into them very cheaply using either backpack or parallel training that is being done already
alstroemeria313#1694: and that should let me make that lr scheduler...
alstroemeria313#1694: BatchL2Grad seems to be the most convenient backpack extension for me to use for this.
alstroemeria313#1694: ugh still not sure these things i'm getting from backpack are right
alstroemeria313#1694: oh https://cdn.discordapp.com/attachments/729741769738158194/935045176576053290/Screen_Shot_2022-01-23_at_9.35.35_PM.png
Louis#0144: Go to bed?
alstroemeria313#1694: yeah
alstroemeria313#1694: ok so if your loss uses reduction mean, we scale the norm that comes from backpack up by the batch size.
alstroemeria313#1694: if your loss uses reduction sum, we scale the norm that comes from the pytorch grad down by the batch size.
alstroemeria313#1694: in practice we only take a ratio of these so you can just pick one and stick with it
alstroemeria313#1694: and since pytorch people nearly always use mean reduction by default...
alstroemeria313#1694: *except* with gradient accumulation
alstroemeria313#1694: and apparently pytorch grads accumulate but backpack stuff doesn't lol
alstroemeria313#1694: What a pain
gabriel_syme#3220: A break and a good sleep might solve it
gabriel_syme#3220: Or a movie! Anything not optimizer :)
gabriel_syme#3220: Oh also maybe add these issues to your repo, maybe people are looking into it already
alstroemeria313#1694: huh
alstroemeria313#1694: i got some gradient noise scale stuff working
nshepperd#2316: ^_^
alstroemeria313#1694: i am having some problems with the ratio estimator
alstroemeria313#1694: namely their estimated squared true grad norm and trace cov can go negative
alstroemeria313#1694: they are unbiased estimators of their things
alstroemeria313#1694: but their ratio is not unbiased
alstroemeria313#1694: can i like... find the covariance of their two estimated things somehow
alstroemeria313#1694: i am getting the squared L2 norms of each batch element of the gradient wrt the weights with backpack
alstroemeria313#1694: param tensor wise
alstroemeria313#1694: if you train on TPUs you will easily be able to get split-up grads to use to calculate gradient noise scale
alstroemeria313#1694: training a little diffusion model with backpack per-sample gradient norms now
alstroemeria313#1694: it has apparently detected that the class embedding gradients are super noisy and decreased its lr by a lot
nshepperd#2316: ooh interesting
alstroemeria313#1694: yeah it is an implementation of the gradient noise scale paper from openai
alstroemeria313#1694: except using backpack instead of data parallel to get information on the gradients of the individual batch elements
robinbin#9573: Hey guys, is this a correct visualization of the transformer? What happens during inference/training, when the output has a higher dimension than input sequence, do we just shift to the nearest N tokens like what's done in the end of this GIF here? https://cdn.discordapp.com/attachments/729741769738158194/935108809972678697/Transformer_Visualization.gif
alstroemeria313#1694: you mean it has more tokens? the number doesn't have to be the same
alstroemeria313#1694: i have used cross-attention to condition image generation models on text, even
alstroemeria313#1694: i am going to have to come up with an api to feed your own stats into this thing
alstroemeria313#1694: like: the squared gradient norm for a batch, the mean squared gradient norm for the microbatches that made it up, the batch size, and the microbatch size
alstroemeria313#1694: if you can get those four somehow on your own you don't need backpack
alstroemeria313#1694: they can be per param tensor too
alstroemeria313#1694: ...So afaict there are no implementations out there of the automatic lr mechanism in "An Empirical Model of Large-Batch Training"?
alstroemeria313#1694: (though i have seen other things proposed before it along similar lines, in old optimizer papers, but they were not as good/didn't work)
alstroemeria313#1694: Notably https://arxiv.org/abs/1206.1106
alstroemeria313#1694: Which doesn't really work afaict
alstroemeria313#1694: I tried it
alstroemeria313#1694: I think getting within-batch gradient variances is the trick you need to make something like it work well
alstroemeria313#1694: See the highlighted thing https://cdn.discordapp.com/attachments/729741769738158194/935158111826759730/Screen_Shot_2022-01-24_at_5.04.20_AM.png
alstroemeria313#1694: I tried incorporating this type of update into ESGD-M and it went badly
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/935159470101774486/Screen_Shot_2022-01-24_at_5.09.49_AM.png
alstroemeria313#1694: It was finicky and tended to overestimate the amount of noise and slow to a halt
alstroemeria313#1694: Within-batch gradient variances, however, pooled over param tensors of the entire gradient and EMAed over long time scales, seem to work a *lot* better.
alstroemeria313#1694: Calculating the "critical batch size" for each param tensor has been illuminating, too.
alstroemeria313#1694: Using ESGD-M with the learning rate scheduler I am writing recovers something very much like the updates in No More Pesky Learning rates except they actually work now
alstroemeria313#1694: (I still have not experimented with per-param OpenAI type corrective terms, a singular one per param tensor is the most granularity I have tried so far. I can get the per-param information I need from backpack though.)
alstroemeria313#1694: Like for a simple scheme I could take an Adam style EMA of the gradient squared
alstroemeria313#1694: And then take an EMA of the mean per-sample squared gradient.
alstroemeria313#1694: Oh, huh
alstroemeria313#1694: No, I would do: E[grad]^2 / E[grad^2]
alstroemeria313#1694: Multiply the lr by that
alstroemeria313#1694: Where "grad" means *per sample* gradients
alstroemeria313#1694: And E[grad] means taking the mean so you get a single thing the shape of the params for a whole batch of examples.
alstroemeria313#1694: E[grad^2] is the thing you have to use backpack to get efficiently
alstroemeria313#1694: Like, without evaluating a ton of per-sample gradients and squaring and averaging them yourself.
alstroemeria313#1694: You can then EMA some sort of thing that tells you how much of a corrective factor to use
alstroemeria313#1694: OpenAI has a scheme in the paper to approximate this sort of corrective factor using the difference between a squared mean of microbatch gradients and a mean of squared microbatch gradients.
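A sketch of the per-parameter corrective factor described just above (the E[grad]^2 / E[grad^2] form), with the EMAs omitted; `mean_grad` is the ordinary batch gradient, `mean_sq_grad` is the mean of per-sample squared gradients (the quantity backpack would supply), and the function name is purely illustrative.

```python
def lr_correction(mean_grad, mean_sq_grad, eps=1e-12):
    # E[grad]^2 / E[grad^2], elementwise over per-sample gradients.
    # By Jensen's inequality this lies in (0, 1], shrinking the lr where noise dominates.
    return mean_grad.pow(2) / (mean_sq_grad + eps)
```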
Ramzi#0418: Hi @Daj, @bmk - just following up on this. Any interest in giving a talk to a university group on Eluther AI?
Daj#7482: Hi Ramzi, I missed your previous message, apologies. I might be up for giving a short talk sometime, though I'm very busy the next few weeks most likely
Ramzi#0418: Hi Connor, no problem, thanks for replying. I tried tagging the founding group per the rules but I'm not sure it worked. Timeline is flexible so whenever works for you. Would you like to connect and have a brief chat about it?
StellaAthena#3530: When you type `@O5`, does anything pop up? On mobile it looks like this but it should look similar on desktop as well. https://cdn.discordapp.com/attachments/729741769738158194/935201673356386354/IMG_8916.png
StellaAthena#3530: It could be a permissions issue... we have clamped down pretty hard on tagging groups by most users to reduce notification spam but we would like everyone to be able to tag O5 and Level-5
Ramzi#0418: Hi Stella - no, nothing pops up. Tried both capital O and zero. I'm using firefox on a pc. Thanks for looking into it though, it looks like none of the organizing roles work for me like @mathemagician, etc.
Emad#9608: New $140,000 competition by FLI to imagine the world 5 years after AGI hits in 2045 https://worldbuild.ai
samtube405#0352: Hello Everyone, I am looking for a description for different files in the checkpointing directory, similar to this one http://eaidata.bmk.sh/data/neox-small-rotary/global_step500000/
Orz#3023: we can ping @ Critic tho
StellaAthena#3530: These go with GPT-NeoX, which you can find here: https://github.com/EleutherAI/gpt-neox/
All of the files together constitute a single model. The naming indicates that this is a "small" (meaning 125M parameter) model that uses rotary embeddings. You can find the full configuration details at `http://eaidata.bmk.sh/data/neox-small-rotary/config.yml`. You can use that config file to load the model in GPT-NeoX, though you will have to make a couple edits based on your local system. Instructions for getting started with GPT-NeoX can be found in the readme of the linked repo
alstroemeria313#1694: ...how do you compose learning rate schedulers in pytorch
alstroemeria313#1694: i am writing one and i want to be able to like, combine it with whatever lr decay people already do
alstroemeria313#1694: it works right now by computing a "correction" to a "base lr" that it saves from the optimizer when it is created
samtube405#0352: Hi Stella, Thanks for the information. I am wondering what these files exactly contain. During our experiments with gpt-neox large models, we see additional "zero_pp_rank_*" files also in the checkpoint dir. We are trying to develop a wrapper to transform these models into HuggingFace equivalent ones to be used with the rest of model development pipeline.
alstroemeria313#1694: so a natural thing to do to compose it with other schedulers is to let them change its base lr somehow
alstroemeria313#1694: but i totally don't know how to do this in the pytorch api
StellaAthena#3530: @finetune and @kurumuz have a conversion function that, IIRC, isn't yet merged into main. Could you help @samtube405 out?
alstroemeria313#1694: ...also you have to call mine *before* the optimizer step for best results, instead of after
finetune#0907: would actually recommend looking at <https://github.com/EleutherAI/gpt-neox/pull/466> and <https://github.com/EleutherAI/gpt-neox/pull/480>
jack#8178: probably you should return a transform for the current lr then?
alstroemeria313#1694: return how?
jack#8178: like, you have a function, that takes grad stats and lr, returns new lr
alstroemeria313#1694: i actually have to modify the optimizer's internal states
jack#8178: just the lrs or something else too?
alstroemeria313#1694: just the lrs
alstroemeria313#1694: but they can be different per param group
jack#8178: yeah
jack#8178: i think you'll just have to do that manually
jack#8178: if you're trying to do it before the step
jack#8178: after the backwards pass
jack#8178: you could wrap the optimizer?
alstroemeria313#1694: the stats thing is a concern of mine because i want to support feeding in stats you got from somewhere else, like the statistics of your microbatches
alstroemeria313#1694: instead of depending on backpack only
alstroemeria313#1694: like i want to be able to run this on pipeline parallel deepspeed or smth
alstroemeria313#1694: Which means I would be manually computing microbatch gradient norms and feeding them in along with what batch size they were
jack#8178: yeah i think this is a different thing than an lr scheduler - I wouldn't try to fit it into that API
alstroemeria313#1694: ahh
alstroemeria313#1694: at its root you run it after the backward and before the step and it outputs a multiplicative factor for the lr of each param group at the next step.
alstroemeria313#1694: and it has internal state etc
alstroemeria313#1694: it does not actually care what the base lr is, it just outputs corrections <= 1 that are meant to apply to that one step and not accumulate
jack#8178: ahhh
jack#8178: so you want to multiply the lrs, step, then divide
alstroemeria313#1694: ahh
alstroemeria313#1694: or multiply, step, then restore from a copy
jack#8178: yeah
jack#8178: it'll be slow but meh there aren't that many tensors
alstroemeria313#1694: it does not have any pytorch tensors as state
jack#8178: yeah
alstroemeria313#1694: Just single Python float scalars
alstroemeria313#1694: two per param group
jack#8178: two?
alstroemeria313#1694: (param group is the basic unit here bc it is the smallest granularity you can change the lr on, you can do it per param group or you can do it with one global computation for all param groups)
alstroemeria313#1694: EMA of the estimator for the squared 2-norm of the mean gradient (signal) and EMA of the estimator of the trace of the covariance (noise)
alstroemeria313#1694: they are both squared quantities so no needing to mess around with debiasing
alstroemeria313#1694: you just init to 0 and update with the same decay rate and they keep the same ratio
alstroemeria313#1694: so the state can range from these two scalars for your entire model to two scalars per param tensor.
alstroemeria313#1694: the gradient stats are simple
alstroemeria313#1694: for each param tensor, you need the mean squared 2-norm of its gradients in the microbatches and the squared 2-norm of its gradient after you took the mean of the microbatches' gradients.
alstroemeria313#1694: (It has to be the mean, not the sum, if you sum you have to adjust the norm to what it would have been)
alstroemeria313#1694: And you feed in the microbatch size and the batch size.
alstroemeria313#1694: For the backpack version I grab the squared 2-norms of the per-sample gradients (you can get it to output just their squared 2-norms), sum them, and scale them so they have the correct ratio to the param tensor's main gradient
alstroemeria313#1694: then feed in bs 1 and whatever the number of per-sample norms was.
alstroemeria313#1694: ...also the lr correction it outputs depends on batch size lol
alstroemeria313#1694: but you fed that in when you fed in the stats
StellaAthena#3530: https://ai.facebook.com/blog/ai-rsc/?utm_source=twitter&utm_medium=organic_social&utm_campaign=rsc
chirp#4545: > When RSC is complete, the InfiniBand network fabric will connect 16,000 GPUs as endpoints, making it one of the largest such networks deployed to date.
cfoster0#4356: Damn they're gonna train some big gradient boosted decision trees aren't they
tpapp157#3643: Models like xgboost are way faster and easier to train and provide performance as good as or better than NNs on 95+% of real-world practical use cases. As much as we love to chase sota NNs, there are quite a few reasons why NN models are quite rare in real production use.
Sphinx#2092: That's why you gotta work on the 5% that uses it. ez.
alstroemeria313#1694: ...how do i actually write a function that alters the optimizer's lrs then puts the original ones back afterwards, without making people manually put a call before and after
alstroemeria313#1694: and maintaining the ability for people who need to pass the optimizer to things to call its step() to do so.
alstroemeria313#1694: oh right, context manager.
alstroemeria313#1694: thanks python
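A sketch of that context-manager approach, assuming one multiplicative correction per param group; usage would be `with scaled_lrs(opt, corrections): opt.step()`, so the base lrs (and any other scheduler touching them) are left untouched afterwards.

```python
from contextlib import contextmanager

@contextmanager
def scaled_lrs(opt, factors):
    # Temporarily multiply each param group's lr by its correction, then restore.
    saved = [group['lr'] for group in opt.param_groups]
    try:
        for group, factor in zip(opt.param_groups, factors):
            group['lr'] *= factor
        yield
    finally:
        for group, lr in zip(opt.param_groups, saved):
            group['lr'] = lr
```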
jack#8178: I always hear people saying this but almost never see them give examples
jack#8178: it's pretty easy to throw a NN at stuff - what sorts of problems are better solved by xgboost?
Daj#7482: problems that fit in a .csv file
tpapp157#3643: Well first it should be noted that the overwhelming majority of production models are built on top of tabular datasets.
flowpoint#7450: <https://www.arxiv-vanity.com/papers/2106.03253/>
StellaAthena#3530: DL sucks at problems where there are less than 10,000 data points over the past decade
tpapp157#3643: NNs are comparatively slow and difficult to train, they require specialized hardware and software infrastructure, they have orders of magnitude more architecture and training hyperparameters to tune, for small amounts of data or data with simpler relationships they very quickly and very strongly overfit, business users often require easily interpretable model results, plus it's always a good rule of thumb to use the absolute simplest model that gets the job done.
jack#8178: right, assuming those 10k points aren't subsets of some actually much larger domain you can pretrain on
cfoster0#4356: >>> Any ML researcher born past 1992 can't handle tabular data
All they know is train neuron, use hot chip, and lie
StellaAthena#3530: Before I did DL work, I worked in areas where the total data in the world over the past decade is less than 10k data points. Not even talking about whether it's observable or collectable yet
jack#8178: makes sense - so this is for like, heuristic models that are going to take in 12 floats and produce 1-3 more?
StellaAthena#3530: I'm not sure what you mean by "heuristic models"
jack#8178: err I don't think I mean a thing - brain is still booting
StellaAthena#3530: Is linear regression a heuristic model?
jack#8178: I guess by heuristic I mean that it's a simple enough model that you can describe what the trained model is doing in a small number of natural language sentences
jack#8178: so yeah this counts
tpapp157#3643: Linear regression is still very commonly used, for example, because it's usually good enough for what the user needs and has nice interpretable coefficients.
nev#4905: Linear regression is all you need™️
StellaAthena#3530: Seems like a good time to remind people that most of the world doesn't run on deep learning
tpapp157#3643: NNs only really outpace the performance of simpler models in very complex and unstructured domains like images and natural text.
tpapp157#3643: Although even in NLP, most production use cases can still do just fine with tf-idf.
StellaAthena#3530: Anyone using NNs for sales data, most social and biological science problems, are going to have a bad time
StellaAthena#3530: Most robotics problems too
StellaAthena#3530: Game playing in many contexts
jack#8178: by biological science problems do you mean traditional stats on low-dimensional noisy data or something else?
nev#4905: you don't even need a model there most of the time.
nev#4905: (a learned one)
jack#8178: 100% - I wish more games used hybrid approaches though
cfoster0#4356: I think the takeaway is that a large portion of "intelligently solving a problem" is knowing what tools and resources to use. That's why I expect an AGI (of course, itself built out of neurons) to make liberal use of xgboost when it needs to
StellaAthena#3530: Not really, no. I'm thinking more about medical problems like triaging, epidemiology, optimizing OR schedules
StellaAthena#3530: Population dynamics too
jack#8178: got it - that all tracks, the only one of those where it seems like DL could be remotely useful is triaging
StellaAthena#3530: Genetic mixing is markov models
jack#8178: and then only bc I expect visual information to help
StellaAthena#3530: It is empirically abysmal. IBM tried and had to spin it out into its own company so it can go bankrupt and die alone
tpapp157#3643: Also, the more sparse your data is, the more you need to rely on simpler models that make strong assumptions about the distribution of the data to get a useful result.
StellaAthena#3530: No DL model developed by AI people has ever been shown to be as successful in a clinical setting as its creators claim, and the overwhelming majority are not shown to have any effect whatsoever
StellaAthena#3530: There are hundreds of medical DL papers released every year and none of them work as claimed
cfoster0#4356: non-medical DL papers: :guilty:
StellaAthena#3530: At least *some* non-medical DL papers work
StellaAthena#3530: There have actually been metareviews studying thousands of DL papers in medicine finding zero that work as claimed
cfoster0#4356: Better or worse than the NAS situation?
StellaAthena#3530: About the same, from what I understand
NN_47#1886: Can you specify
NN_47#1886: NAS situation
cfoster0#4356: Neural architecture search
cfoster0#4356: Stella has written about it, I think
NN_47#1886: efficientNet types models are claimed to work very well
flowpoint#7450: is dl even a promising approach for clinical settings?
StellaAthena#3530: @NN_47 Can you show me a paper from people unaffiliated with the original work on efficientnet that claims to reproduce their results?
NN_47#1886: E0 to E7 was all that crap
StellaAthena#3530: I don't know what E0 is
StellaAthena#3530: Can you provide a link
StellaAthena#3530: Worthwhile quote from someone on the internet btw:
> One thing people don't realize is that EfficientNet/EfficientDet aren't necessarily the best choice _for their specific dataset_. In a way, a lot of these academic networks are overfit to the task of e.g. detecting objects in MSCOCO. If your dataset doesn't look like MSCOCO, there's no guarantee whatsoever that they will do well on it. Same with ImageNet for classification. ImageNet is very hard. To do well on it your net has to do something most humans won't be able to do without substantial training - recognize the various dog breeds. If your problem is simpler (which nearly all of them are), chances are you don't need as complicated a model to do well on it. Indeed, a "complicated" model is likely to actually do worse than a model that's "just complicated enough". Due to e.g. overfitting, or being more sensitive to noise in real-world data, and so on. Not to mention it will naturally limit your experiment throughput, which is one of the most important factors for getting a good model that does something practical.
https://news.ycombinator.com/item?id=25040917
NN_47#1886: https://arxiv.org/abs/1905.11946
StellaAthena#3530: Oh B0 to B7
NN_47#1886: Sorry yes
StellaAthena#3530: no worries
StellaAthena#3530: Right, can you show me a paper / blog post / whatever claiming (in increasing order of difficulty and increasing order of preference):
1. B7 is better than ResNet on a dataset other than ImageNet
2. B7 is better than ResNet on a non-image dataset
3. B7 is better than ResNet on a dataset that the authors collected in the real world
Written by someone not at Google
StellaAthena#3530: I would be surprised if you did 2 and shocked if you did 3
StellaAthena#3530: I can't name an example for 1 but I could very well be wrong
StellaAthena#3530: (I am not attached to B7 specifically, if it's B4 that works too)
NN_47#1886: No idea, maybe you are right given the surprising silence around research on these NAS methods
StellaAthena#3530: The most notable paper on this exact question I am aware of is this: https://arxiv.org/abs/2103.07579
StellaAthena#3530: > Novel computer vision architectures monopolize the spotlight, but the impact of the model architecture is often conflated with simultaneous changes to training methodology and scaling strategies. Our work revisits the canonical ResNet (He et al., 2015) and studies these three aspects in an effort to disentangle them. **Perhaps surprisingly, we find that training and scaling strategies may matter more than architectural changes, and further, that the resulting ResNets match recent state-of-the-art models.*** We show that the best performing scaling strategy depends on the training regime and offer two new scaling strategies: (1) scale model depth in regimes where overfitting can occur (width scaling is preferable otherwise); (2) increase image resolution more slowly than previously recommended (Tan & Le, 2019). Using improved training and scaling strategies, we design a family of ResNet architectures, ResNet-RS, which are 1.7x - 2.7x faster than EfficientNets on TPUs, while achieving similar accuracies on ImageNet. In a large-scale semi-supervised learning setup, ResNet-RS achieves 86.2% top-1 ImageNet accuracy, while being 4.7x faster than EfficientNet NoisyStudent. The training techniques improve transfer performance on a suite of downstream tasks (rivaling state-of-the-art self-supervised algorithms) and extend to video classification on Kinetics-400. We recommend practitioners use these simple revised ResNets as baselines for future research.
StellaAthena#3530: @NN_47 Using bad baselines is a tried and true method for getting SOTA. The paper that Charles is referring to is this one, where I talk about how people fuck up their DL research: http://proceedings.mlr.press/v137/biderman20a/biderman20a.pdf I use NAS as an illustrative example of issues with poor evaluation techniques and especially baselines, highlighting several papers that show random search algorithms beating "intelligent" NAS algorithms
StellaAthena#3530: I especially liked these two paragraphs https://cdn.discordapp.com/attachments/729741769738158194/935262972723478558/Screen_Shot_2022-01-24_at_3.00.59_PM.png
NN_47#1886: https://arxiv.org/abs/1904.02877
StellaAthena#3530: This looks like something that has actual evidence supporting it, though mobile computing is a very specialized area I know nothing about. On a quick skim, the main red flag is that they implement a custom kernel for their method... it's not clear to me if only their method gets to use this or if they all do.
NN_47#1886: Lol, random graphs as example of NAS, i see the troubles with the evaluation
NN_47#1886: they have opensourced their code and the model also I think.
StellaAthena#3530: Yeah, the last sentence of their abstract is as hilarious as it is depressing
> Reproducibility: Unlike all recent mobile-efficient NAS methods which only release pretrained models, we open-source our entire codebase at: [url]
NN_47#1886: Lol , they clearly understand the situation
StellaAthena#3530: Yeah they have a couple sentences like that in the paper too. They mention that they don't use the numbers from prior work because (unlike prior work) they optimize the networks for the *same device*
NN_47#1886: No problem from a practical point of view; one may want the same architecture trained on different datasets for a given device, and even that is hugely beneficial practically
genetyx8#7543: wtf. sounds like reservoir models
genetyx8#7543: this is simultaneously hilarious and very interesting https://cdn.discordapp.com/attachments/729741769738158194/935268458315272233/watts-strogatz-nns.png
StellaAthena#3530: Sure, but it is a problem if you do experiments on a specific device and some models are optimized for it and others are optimized for different ones
StellaAthena#3530: IDK why but I get emails by people I don't know who want me to review their reservior model preprints every couple months.
StellaAthena#3530: (I delete them)
NN_47#1886: I think the paper claims their architecture will work on different devices, otherwise there is no point to the research
StellaAthena#3530: That would be a weird claim to make given that all of their evals are on a single device and they use device-speciifc optimizations
genetyx8#7543: @StellaAthena Any similar papers trying to look at the effect of random network architectures like that? Sounds like a good shitpost
StellaAthena#3530: That doesn't mean they *don't* claim that. But it's a weird thing to claim in context unless I'm misunderstanding
StellaAthena#3530: The four papers I cite as 28, 34, 52, and 55 are the ones I am aware of
StellaAthena#3530: If there is work after my paper came out I am unlikely to know about it, but it can probably be found in the citations of those four
NN_47#1886: There should be at least a couple of different devices, let me read it again.
NN_47#1886: So training and testing are on a single device; who knows, we would have to experiment and see if the model works on different devices and datasets.
Some Point Process#3793: I mean, from the deep learning book (fundamental stuff) it says how NAS is basically intractable (undifferentiable) and hence you have to resort to grid search
Some Point Process#3793: because ur iterating over hyperparameters
Some Point Process#3793: that said i wouldn't be surprised or skeptical of methods that try to flatten the landscape near the *neighborhood* of the hyperparameters that the authors used (whatever that might mean). Like you don't want an architecture X training method to be too sensitive to them (commonly in RL or gan training)
CRG#8707: https://www.kaggle.com/tagoodr/dimensional-analysis-of-gradient-descent
CRG#8707: Normalizing the data helps make the LR unitless (other than Adam etc)
Some Point Process#3793: Yeah the way I view it is that it's like adding a normalization layer before the input layer. Naive NN construction doesn't do this even if it has layer normalization
CRG#8707: If the loss function (eg CE loss) has units of nats, what are the units of the parameters?, also in nats?
alstroemeria313#1694: i wrote a version of my gradient noise scale learning rate tuner that can be manually fed microbatch and batch statistics
alstroemeria313#1694: I haven't tested that part yet though
alstroemeria313#1694: So it can be used without backpack in data or pipeline parallel settings
alstroemeria313#1694: it can use partially meaned gradients in place of per-batch item gradient statistics
pbaylies#1820: @alstroemeria313 Tested out your new optimizer... bad news, I'm probably leaking memory somewhere... good news, got this out in 22 steps... https://cdn.discordapp.com/attachments/729741769738158194/935310800065822791/proj.mp4,https://cdn.discordapp.com/attachments/729741769738158194/935310801122758676/proj.png
alstroemeria313#1694: oh no... is there a memory leak when the user does create_graph=True and then i skip the hvp step?
alstroemeria313#1694: does the memory leak go away if you use update_d_every=1?
alstroemeria313#1694: btw you can turn the exponential lr warmup down if you want
alstroemeria313#1694: default is 0.99
pbaylies#1820: It def. dies on the backward pass; let me see...
alstroemeria313#1694: but wow! i was going to try it on stylegan2 one of these days...
pbaylies#1820: This was with an lr of 10
alstroemeria313#1694: Eheh
alstroemeria313#1694: Yeah you might have had to turn it up that high to counteract the default lr warmup
pbaylies#1820: https://cdn.discordapp.com/attachments/729741769738158194/935311577165488168/unknown.png
alstroemeria313#1694: oh no
alstroemeria313#1694: Huh
alstroemeria313#1694: If you use W only
alstroemeria313#1694: Like, the 512 dim W.
alstroemeria313#1694: We could throw second order optimizers at it
alstroemeria313#1694: I mean more than the Hessian diagonal
alstroemeria313#1694: The Hessian would only be 512x512 and we could maybe iteratively low rank approximate it or smth
gabriel_syme#3220: 😬
gabriel_syme#3220: I love it
pbaylies#1820: Your theory is right, update_d_every=1 fixes the crash
pbaylies#1820: 30 steps https://cdn.discordapp.com/attachments/729741769738158194/935312351366905956/proj.png
alstroemeria313#1694: i have not worked out the circumstances where the memory leak happens
alstroemeria313#1694: i have done long runs using it
alstroemeria313#1694: and not crashed
pbaylies#1820: yup; so I may be using it wrong
alstroemeria313#1694: but i have encountered it while using backpack
alstroemeria313#1694: did you do set_to_none=True when you zeroed the grads
pbaylies#1820: https://cdn.discordapp.com/attachments/729741769738158194/935312726169886820/unknown.png
alstroemeria313#1694: ahh
alstroemeria313#1694: so you'd think it would get rid of the graphs
alstroemeria313#1694: bc it set the grads to None
alstroemeria313#1694: it would be able to be garbage collected
pbaylies#1820: hm, still got a crash trying 100 steps, crashed at step 63
alstroemeria313#1694: oh :/
pbaylies#1820: 60 steps, lr=1 https://cdn.discordapp.com/attachments/729741769738158194/935313766986424360/proj.png
alstroemeria313#1694: oooh
pbaylies#1820: I didn't try lowering the warmup, but I should be able to do that, as I do my own warmup
alstroemeria313#1694: ahh
pbaylies#1820: Could it be that having an lr schedule is a problem
pbaylies#1820: https://cdn.discordapp.com/attachments/729741769738158194/935314373315035136/unknown.png
alstroemeria313#1694: no it's fine
alstroemeria313#1694: i designed it so you could change out the base lr
alstroemeria313#1694: @pbaylies are you, by any chance, using any intermediate frozen models that you *didn't* set to `.requires_grad_(False)` manually
alstroemeria313#1694: Like CLIP and StyleGAN2
pbaylies#1820: It's possible; let me go through them all...
alstroemeria313#1694: Bc I think a backward will try to accumulate into their .grad if you don't turn it off
alstroemeria313#1694: And ordinarily this would just slow things down a bit and use a little more memory
alstroemeria313#1694: But with `create_graph=True` I bet it can actually keep the graph hanging around and leaking memory
pbaylies#1820: That sounds like a likely theory then
pbaylies#1820: Yup I think that was it!
alstroemeria313#1694: oooh
pbaylies#1820: 100 steps, lr=1 https://cdn.discordapp.com/attachments/729741769738158194/935315501595361372/proj.png
robinbin#9573: https://media.discordapp.net/attachments/729741769738158194/935108809972678697/Transformer_Visualization.gif
alstroemeria313#1694: I wish there was a way to manually say "I am done with this graph, please don't do a backward pass with it, just mark it like someone did and prevent them from using it again like a double backward, but without actually taking the compute to do a backward"
alstroemeria313#1694: Bc if there were I could just clear it inside the optimizer if I didn't do the hvp that step.
alstroemeria313#1694: hm
alstroemeria313#1694: Maybe I could...
alstroemeria313#1694: Add a method to call that returns True if it is going to do the hvp that step
alstroemeria313#1694: And False otherwise
alstroemeria313#1694: and you can do `loss.backward(create_graph=opt.should_create_graph())`
pbaylies#1820: Oh yeah, def. see the speedup doing it only every 10 now
alstroemeria313#1694: Actually I think I will just do that.
pbaylies#1820: Still looks pretty good https://cdn.discordapp.com/attachments/729741769738158194/935316093566877736/proj.png
alstroemeria313#1694: ooh
gabriel_syme#3220: it has a very artistic texture, I like it
pbaylies#1820: I'm using metfaces, so you sort of get that for free
gabriel_syme#3220: even in the background
gabriel_syme#3220: oh ok
gabriel_syme#3220: I actually thought this was like FFHQ
gabriel_syme#3220: since it looks more real than artistic
EricHallahan#1051: (Note that this is #general btw if people here have forgot.)
pbaylies#1820: Thanks @EricHallahan, my optimization problem is solved now
alstroemeria313#1694: Hey can we like.
alstroemeria313#1694: A W+ is 18x512
robinbin#9573: @alstroemeria313 i mean for when you train a transformer to do, say machine translation, and we get say 5 input tokens vs 10 output tokens, what do you do to the output?
Since query and key should have same dimension.
Do you just shift to the right your "context window", like how it is in the animation?
I made the animation btw, just trying to check if it is right.
alstroemeria313#1694: they do not have to be the same number of tokens actually.
EricHallahan#1051: I should try it with my notebook `:P`
alstroemeria313#1694: If you have like a 64 token Q and a 100 token K you just end up with a 64x100 attention matrix
alstroemeria313#1694: The d_model dimension is the one that has to match for the matrix multiply
robinbin#9573: Then what do you do when come together with value?
robinbin#9573: I suppose value would be either 64 or 100 in usual cases..
alstroemeria313#1694: V should have the same number of tokens as K right
alstroemeria313#1694: But Q can be different
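A quick shape check of that claim, with illustrative sizes only: 64 query tokens attend over 100 key/value tokens and the attention matrix is 64x100.

```python
import torch

B, n_q, n_kv, d = 2, 64, 100, 768
q, k, v = torch.randn(B, n_q, d), torch.randn(B, n_kv, d), torch.randn(B, n_kv, d)
attn = torch.softmax(q @ k.transpose(-2, -1) / d**0.5, dim=-1)  # (2, 64, 100) attention matrix
out = attn @ v                                                  # (2, 64, 768): one output per query token
```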
robinbin#9573: But in the original paper it says that query and key both have dimension dk, and value has dimension dv
gabriel_syme#3220: don't worry, general has been full on optimization for days, it was me Eric was poking I feel
robinbin#9573: Let me check...
robinbin#9573: Also, do you guys also use matmul and dot product interchangeably?
alstroemeria313#1694: yeah kind of ^^;;
pbaylies#1820: I've been enjoying it, honestly
robinbin#9573: Seems like all tutorials + the original paper do this and it is a bit confusing
alstroemeria313#1694: they mean the d_model or feature dimension, not the number of tokens
robinbin#9573: I see
alstroemeria313#1694: which is distinct, number of tokens is sequence length
alstroemeria313#1694: like if dk is 768 and you have 100 tokens
alstroemeria313#1694: you have 100 768-dim vectors
alstroemeria313#1694: then on top of that you have your batch dimension
alstroemeria313#1694: so there are typically three dimensions in transformer activations.
robinbin#9573: So when you train a transformer on predicting next tokens, the context window is the sequence size. But you'd move to the right each time you make a prediction?
alstroemeria313#1694: for each token, the thing at its output position is a prediction of the next token in the sequence
alstroemeria313#1694: and we stop information from flowing backward (the "causal mask") so it can't just look at the next token and copy it
alstroemeria313#1694: it has to actually learn
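A minimal illustration of the causal mask being described: each position can only place attention weight on itself and earlier positions.

```python
import torch

n = 5
scores = torch.randn(n, n)
mask = torch.ones(n, n).triu(diagonal=1).bool()                  # True on future positions (j > i)
attn = scores.masked_fill(mask, float('-inf')).softmax(dim=-1)   # row i puts zero weight on future tokens
```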
Some Point Process#3793: You can train along the entire sequence dimension (positions) simultaneously. It's just that when generating (decoding) you have to predict one at a time, technically. But decoding is only from the last layer anyway. This doesn't get rid of the O(n^2) overhead since the attention matrix grows on that order
EricHallahan#1051: It was more Vonclank, as they seem to have inadvertently gotten spatially disoriented.
Isaiah#0007: Sometimes I forget to look what channel i'm in
robinbin#9573: But I mean for instance if you have input = I am doing
And then prediction "good", but you keep going, do you append "good" into the input so look at "I am doing good", or keep input size of 3 tokens and look at "am doing good"
EricHallahan#1051: No problem! Thought it would be a good idea to give a nudge in the right direction. :hap:
robinbin#9573: Thanks @alstroemeria313 and @Some Point Process
robinbin#9573: BTW
Some Point Process#3793: Yeah then it's a lot more constrained. Evaluating the model at every step is different from maximum likelihood training like in language model pretraining
robinbin#9573: I'm trying to figure out just how to set the sequence sizes on training and inference
robinbin#9573: I know you can simultaneously run a whole sample with teacher forcing on training
robinbin#9573: Like you do in RNNs
Some Point Process#3793: With bidirectional attention you can set a limit "a priori" like others mentioned before
Some Point Process#3793: i.e. specify an EOS token at wherever position you want, constraining the objective. But this complicates the LM task in less than obvious ways iirc
robinbin#9573: Not sure what you mean by bidirectional attention here, you mean for RNNs?
Some Point Process#3793: With unidirectional transformers what people did, though, was constrain the LM from *not* generating an EOS token (i.e. too early)
robinbin#9573: Also what limit apriori?
Some Point Process#3793: Oh no, I mean bidirectional transformers, which have no causal attention mask
robinbin#9573: I guess I'm just interested in what is done in practice, for gpt for instance, do you grow the size of the input on encoding, or do you shift to the right
robinbin#9573: I see
robinbin#9573: Which is pretty much BERT?
CRG#8707: You typically concatenate until you reach the limit seen during training
CRG#8707: Eg, 2048 for GPT-3/J
Some Point Process#3793: The LM objective is constrained ahead of time by specifying an eos token at the end of the sequence to "unmask" (i.e. predict)
robinbin#9573: I see, thanks @CRG
robinbin#9573: When you hit a limit do you shift right or just explode?
CRG#8707: You shift right
CRG#8707: You can even save the activations if you use relative attention (TrXL/alibi, rotary, etc)
robinbin#9573: I see, thanks @Some Point Process
CRG#8707: If you use absolute position embeddings, you need to recompute everything
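A sketch of the simple recompute path just described (absolute position embeddings, fixed context window); `model_predict` is a hypothetical callable mapping a list of token ids to the next id, not a real API.

```python
def generate(model_predict, prompt_ids, num_new_tokens, max_ctx=2048):
    # max_ctx is the context length seen during training, e.g. 2048 for GPT-3/J.
    tokens = list(prompt_ids)
    for _ in range(num_new_tokens):
        ctx = tokens[-max_ctx:]            # past the limit, shift right: drop the oldest tokens
        tokens.append(model_predict(ctx))  # with absolute positions this recomputes the whole window
    return tokens
```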
robinbin#9573: Yeah that makes sense
robinbin#9573: Cool, thanks a lot...
CRG#8707: See: https://arxiv.org/abs/2108.12409
CRG#8707: Alibi can use a different context window than seen during training, but that's because it kind of creates a soft limit anyways
CRG#8707: You penalize exponentially tokens further away
(Anyways, just read the alibi paper)
Some Point Process#3793: Important subtlelty is that in the case of causal (autoregressive) transformers, recomputation have to happen for tokens that were already generated; this is for when predicting the next token. The next token prediction is on the order of O(n) where n is the previous sequence length to "look back at"
EricHallahan#1051: (Taps Rule #3)
alstroemeria313#1694: @pbaylies i pushed `.should_create_graph()`
alstroemeria313#1694: problem should be fixed
Some Point Process#3793: > recomputation have to happen for tokens
typo: *doesn't* have to happen
pbaylies#1820: @alstroemeria313 Thanks!
alstroemeria313#1694: oh hey now that i have `should_create_graph()` then backpack works and doesn't leak memory/oom/get killed
alstroemeria313#1694: which means i can use my new gradient noise scale lr tuner with it
ilovescience#3282: @alstroemeria313 you're playing with optimizer stuff, right? have you tried playing with epsilon in adaptive gradient methods, potentially introducing a schedule for it?
alstroemeria313#1694: no
alstroemeria313#1694: i dislike epsilons bc they make the system not scale invariant
CRG#8707: Openai did it
alstroemeria313#1694: but i understand the need for them
ilovescience#3282: link pls?
alstroemeria313#1694: i just feel it is inelegant if the epsilon is like... required to get a particular behavior other than the optimizer not being unstable
CRG#8707: https://arxiv.org/abs/2106.00958
ilovescience#3282: ah this is a fairly new paper actually...
ilovescience#3282: I don't know much about RL but this is an interesting paper! thanks for sharing...
nshepperd#2316: @alstroemeria313 with ESGD-M, what if we clipped the update steps instead of having a large epsilon
nshepperd#2316: like if the purpose is to prevent it taking huge steps when the |H| estimate is bad
nshepperd#2316: we can just have a tiny epsilon to prevent division by 0 and do parameterwise clipping
alstroemeria313#1694: ahh
nshepperd#2316: the maximum step size should be easier to set correctly than the epsilon, i think
nshepperd#2316: bc it is in units of params^1, instead of loss^1 params^-2
alstroemeria313#1694: ah
nshepperd#2316: after a warmup phase we could actually set it automatically maybe
nshepperd#2316: like 20 * sqrt(ema of the squared update steps)
alstroemeria313#1694: ah
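A sketch of that clipping idea, with all names illustrative: the parameterwise cap comes from an EMA of squared update steps rather than from a large epsilon in the denominator.

```python
import torch

def clip_update(update, ema_sq_update, k=20.0, eps=1e-30):
    # Cap each parameter's update at k times the typical (RMS) update size;
    # eps only guards the sqrt before the EMA has warmed up.
    cap = k * (ema_sq_update + eps).sqrt()
    return torch.maximum(torch.minimum(update, cap), -cap)
```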
Kharr#7888: This thing is really cool, thanks for sharing
alstroemeria313#1694: :blobcutehappy:
chilli#5665: @alstroemeria313 btw, I remember you once sent me a custom CUDA op from Stylegan3(?) that looked like upsample => pointwise => downsample
chilli#5665: or something like that
chilli#5665: 1. do you have an idea of how much faster that is than eager? and 2. Have you seen that kind of pattern anywhere else?
nshepperd#2316: that would have been stylegan3 yeah
nshepperd#2316: reducing aliasing
chilli#5665: any idea how much faster that is than raw PyTorch?
nshepperd#2316: no idea, unfortunately
nshepperd#2316: we could dig it out of the stylegan3 codebase and benchmark maybe
chilli#5665: hmm
chilli#5665: just curious whether it's worth compiling this pattern
chilli#5665: Could probably compile from arbitrary upsample => stuff => downsample patterns in PyTorch down to an optimized kernel
chilli#5665: or maybe even arbitrary upsample => stuff => reduction patterns
nshepperd#2316: upsample => pointwise => downsample at least seems like an easy thing to automatically fuse
Some Point Process#3793: What is this op supposed to be for? Finding some fixed point?
nshepperd#2316: it replaces just having the pointwise op by itself
nshepperd#2316: usually a nn activation
chilli#5665: yeah, you do need a compiler that can compile upsample though
nshepperd#2316: the purpose is uhhh signal processing reasons
nshepperd#2316: reducing spatial aliasing caused by nonlinear nn activations
nshepperd#2316: so that the nn is more equivariant to subpixel shifts
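For reference, an unfused version of that pattern in plain PyTorch would look roughly like this (StyleGAN3 uses carefully designed low-pass filters and a fused CUDA kernel; plain bilinear resampling here is only to show the dataflow):
```python
import torch.nn.functional as F

def filtered_relu(x, up=2):
    # upsample -> pointwise nonlinearity -> downsample, all materialized in memory
    h = F.interpolate(x, scale_factor=up, mode='bilinear', align_corners=False)
    h = F.relu(h)
    return F.interpolate(h, scale_factor=1 / up, mode='bilinear', align_corners=False)
```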
Some Point Process#3793: Can understanding how kalman filters work inform nonconvex optimization?
chilli#5665: Do you know whether other folks are also doing this? Like, is this a common thing to do now?
nshepperd#2316: i haven't seen anyone do it other than stylegan3
nshepperd#2316: bc their mission was to make the whole nn equivariant
Some Point Process#3793: How exactly did the relu cause aliasing as found by sg3?
nshepperd#2316: idk i'm not a signal processing expert
nshepperd#2316: https://www.dsprelated.com/freebooks/pasp/Nonlinear_Elements.html
EricHallahan#1051: IIRC it is because ReLU is not bandlimited.
Some Point Process#3793: I mean a one sided clipping of the signal might cause distortion (viewed as clip distortion). Dunno what the transfer function looks like tho
cfoster0#4356: >>> in the continuous domain, any pointwise function commutes trivially with geometric transformations and is thus equivariant to translation and rotation. Fulfilling the bandlimit constraint is another question – applying, e.g., ReLU in the continuous domain may introduce arbitrarily high frequencies that cannot be represented in the output.
EricHallahan#1051: Wait I was correct? :berk:
cfoster0#4356: I believe so
EricHallahan#1051: I haven't read that paper in some time lol
nshepperd#2316: yeah so you upsample the signal, apply relu in what is... at least closer to the continuous domain, then downsample using a filter that removes the high frequencies
nshepperd#2316: i think
EricHallahan#1051: Yep
ari#9020: The authors say it's a pretty big win, at least:
> The speedup over native PyTorch operations varies between ∼20–40×, which yields an overall training speedup of approximately 10×
chilli#5665: link?
ari#9020: https://nvlabs-fi-cdn.nvidia.com/stylegan3/stylegan3-paper.pdf , appendix D
chilli#5665: makes sense
chilli#5665: if this were a more popular operation, would probably be worth optimizing
EricHallahan#1051: Yeah it's kinda hard to justify in a lot of cases since the upscaling-downscaling creates relatively large activations in comparison to StyleGAN2.
kindiana#1016: so now its not going to be a popular operation because its slow :omniberk:
chilli#5665: lol, maybe TVM would work well on this
chilli#5665: @nshepperd does anybody care about stylegan 3?
chilli#5665: Like, would a Pure Pytorch stylegan3 that was fast be interesting?
EricHallahan#1051: It would certainly undermine the paper's claims that it isn't viable.
chilli#5665: Oh, I mean, itโd definitely need some New compiler APIs lol
chilli#5665: Or like, new compiler work
EricHallahan#1051: But yeah I don't know how much it would be in demand.
nostalgiahurts#3408: from what I remember, tpapp157 seemed interested in standalone stylegan3 layers
> Yeah the the details of stylegan3 really go over my head. I wish they would release their basic ops as simple function library.
> Does anyone know of a repo that takes the stylegan3 custom kernels/ops and packages them as standalone layers?
so that's at least 1 person
chilli#5665: Yeah maybe folks would be interested in them if they were easier to use/more flexible
๐ ฌ gabriel_syme ๐ ฌ#3220: his neurips presentation is nice (although still gated), let me get some snips
๐ ฌ gabriel_syme ๐ ฌ#3220: Some main things I got out of it:
```
continuous representation: nyquist-shannon sampling theorem, continuous and discrete signals are equivalent under specific conditions; design the entire network on the continuous side building equivariance on the continuous signals
create equivariant operations in the continuous world:
upsampling -> do nothing and re-discretize
downsampling -> low pass filtering to remove content
pointwise nonlinearity -> cannot discretize so we use low pass filtering to remove boundary information
changes to SG2: surround the relu with upsample and donwsample operations (the fused kernel); crop after downsample to keep boundaries; remove noise input
also mentioned: switching from 3x3 to 1x1 convolutions, we can allow rotational movement
```
๐ ฌ gabriel_syme ๐ ฌ#3220: another thing was that while GANs are hierarchical (in their representations), networks are lazy and typically good at finding shortcuts between layers. In his words "bypass coarse-to-fine refinement by using position references from outside the hierarchy".
ari#9020: I still love how they did half a dozen steps of careful signal processing work to get subpixel translation equivariance, and then at the end they just went, oh, now we can get rotation equivariance by making our kernels small
Some Point Process#3793: Rotation equivariance meaning F(R(x)) commutes with (i.e. is equal to) R(F(x))?
๐ ฌ gabriel_syme ๐ ฌ#3220: oopsie
Kia#2550: Is it possible to run 20B on Batbot? (if im not mistaken all gpu's that Batbot uses is V100)
Kia#2550: Actually
Kia#2550: No one specified what kind of V100 Models that are being use:empties:
tpapp157#3643: Yeah I'm still interested. I think these ops would be very useful for generative models. It'd also be interesting to see how they impact learned features in a classifier.
tpapp157#3643: I assume rotational equivariance is impossible to achieve on a square pixel grid for arbitrary rotations, necessitating the reduction to 1x1 kernels. Maybe it's possible on a hex grid.
ari#9020: There's some papers about CNNs that are equivariant to 90 degree rotations and maybe flips (which are easy to implement on square pixel grids by just rotating/flipping your kernels), the middle example of video 6 on https://nvlabs.github.io/stylegan3/ shows what that'd look like applied to StyleGAN3; not sure what it'd look like if you worked on a hex grid, but I'd expect it'd be wonky at a lot of angles
tpapp157#3643: Right. Any non-smooth and/or non-continuous function is going to create signals at infinite frequencies when applied. Take an FFT of a relu compared to a smoothed version like gelu and the difference will be very clear. These high-frequency (sub-pixel) signals then create aliasing artifacts in the activations.
tpapp157#3643: There was a paper several years ago that explored rotational equivariance by adding an extra dimension or two iirc. They had some impressive visuals.
EricHallahan#1051: Yeah that's the visualization I had in mind.
nshepperd#2316: can't you make it approximately rotation equivariant by learning the convolution C and an orthogonal rotation matrix `M_θ` depending on the angle so that `C(R_θ(M_θ(x))) = R_θ(M_θ(C(x)))`
nshepperd#2316: or like, that's what it means for a conv to be rotation equivariant isn't it
tpapp157#3643: http://visual.cs.ucl.ac.uk/pubs/harmonicNets/index.html This was the paper I was thinking of.
nshepperd#2316: with 90° rotation and flip (D8) equivariance you can like multiply the channels by 8 so that there's a channel for each transformed version of each feature
nshepperd#2316: and then use grouped convolution or something lol
nshepperd#2316: oh thats what stylegan used
nshepperd#2316: except only rotation not flip
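A rough sketch of the "multiply the channels by the group size" idea for a first (lifting) layer; a full group-equivariant CNN also acts on the group dimension in deeper layers, which this omits:
```python
import torch
import torch.nn.functional as F

def d4_lifting_conv(x, weight):
    # weight: [out_c, in_c, k, k]; build 8 transformed copies (4 rotations x optional flip),
    # so the output has 8 * out_c channels, one per transformed version of each filter
    kernels = []
    for flip in (False, True):
        w = weight.flip(-1) if flip else weight
        for r in range(4):
            kernels.append(torch.rot90(w, r, dims=(-2, -1)))
    w_all = torch.cat(kernels, dim=0)
    return F.conv2d(x, w_all, padding=weight.shape[-1] // 2)
```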
tpapp157#3643: Normal CNNs basically learn this with multiple different neurons detecting different edge orientations and the combination of these neuron activations covering full 360.
nshepperd#2316: https://arxiv.org/abs/1602.07576 this thing?
nshepperd#2316: yeah
tpapp157#3643: A good analysis of how normal CNNs learn rotational "equivariance": https://distill.pub/2020/circuits/curve-detectors/
alstroemeria313#1694: i think their special CUDA version is considerably faster.
chilli#5665: Yeah, ari linked some part of the paper that said 20-40x
chilli#5665: lol
alstroemeria313#1694: They are using special good upsampling and downsampling filters too
alstroemeria313#1694: Not just bilinear or even bicubic
chilli#5665: Hmm
tpapp157#3643: There is this but I don't think it's comprehensive of everything in the paper nor does it have the custom cuda kernels: https://github.com/junjun3518/alias-free-torch
alstroemeria313#1694: another nice thing to have along these lines is antialiased max pooling
chilli#5665: what is that?
chilli#5665: upsample => max pool => downsample?
alstroemeria313#1694: i mean better than maxblurpool which still aliases just not as much
alstroemeria313#1694: yes
alstroemeria313#1694: even so, maxblurpool is still reasonably fast in plain pytorch and is supposed to improve shift invariance in classifiers even if it is not totally equivariant
alstroemeria313#1694: (maxblurpool is 2x2 stride 1 max pooling followed by blurpool, bilinear downsampling)
alstroemeria313#1694: the upsample is 2x and the downsample is 4x
alstroemeria313#1694: the internal max pooling is probably stride 1, idk kernel size offhand, would have to work it out
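A plain-PyTorch sketch of maxblurpool as described above (the [1, 2, 1] binomial kernel stands in for the blur filter; the exact kernel choice is an assumption):
```python
import torch
import torch.nn.functional as F

def maxblurpool(x):
    # 2x2 stride-1 max pooling followed by blurpool (fixed blur kernel, stride-2 conv)
    n, c, h, w = x.shape
    x = F.max_pool2d(x, kernel_size=2, stride=1)
    k = torch.tensor([1.0, 2.0, 1.0], device=x.device, dtype=x.dtype)
    k = torch.outer(k, k)
    k = (k / k.sum()).expand(c, 1, 3, 3).contiguous()
    x = F.pad(x, (1, 1, 1, 1), mode='reflect')
    return F.conv2d(x, k, stride=2, groups=c)
```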
chilli#5665: is there some paper I can read about it?
alstroemeria313#1694: no
chilli#5665: oh, just something you came up with? ๐
alstroemeria313#1694: yep
chilli#5665: So I guess upsample => generic op => downsample might be worth fusing
alstroemeria313#1694: yep
alstroemeria313#1694: this is the paper with maxblurpool which demonstrates that it improves matters but still aliases some. https://arxiv.org/pdf/1904.11486.pdf
alstroemeria313#1694: i have used regular blurpool a lot
alstroemeria313#1694: it makes resnets have noticeably smoother gradients wrt their input
chilli#5665: interesting
alstroemeria313#1694: smoother than average pooling that is.
chilli#5665: and that's useful for image generation?
alstroemeria313#1694: since average pooling is a box filter and so its backward pass is nearest neighbor upsampling.
alstroemeria313#1694: it's good for *CLIP guided diffusion* where you backprop through the net to get a gradient wrt its input and then apply this gradient directly in image space
alstroemeria313#1694: It is also good for GAN discriminators because it makes their gradient wrt G's output smoother
alstroemeria313#1694: StyleGAN uses it in their D
chilli#5665: right
alstroemeria313#1694: it is probably also good for perceptual losses where you do things with the gradient wrt the input
chilli#5665: interesting, will take a closer look
alstroemeria313#1694: except you probably want the max pooling version of it
alstroemeria313#1694: for that
alstroemeria313#1694: bc perceptual losses are classifier based and the max makes it better at classifying
chilli#5665: I guess blurpool is pretty fast?
alstroemeria313#1694: yeah it's just a stride 2 conv2d with a fixed kernel
chilli#5665: hmmm
chilli#5665: this seems fusable lol
chilli#5665: but I don't know what's gonna fuse it
alstroemeria313#1694: eheh
chilli#5665: maybe TVM ๐ค
alstroemeria313#1694: also for my custom blurpool i reflect pad first
chilli#5665: but basically, as a summary
alstroemeria313#1694: can we fuse the padding
chilli#5665: blurpool is fastish
chilli#5665: maxblurpool is fastish
chilli#5665: antialiased max pooling would be slow as fuck
alstroemeria313#1694: yeahhh
alstroemeria313#1694: the key reason why it is slow
alstroemeria313#1694: is that you have to *write the 2x upsampled version of the thing* out to gpu memory then read it all back
alstroemeria313#1694: if we could keep it in sram that would be great
chilli#5665: yeah
chilli#5665: and it's probably even worse
chilli#5665: since with default autograd, that's probably also gonna be reflected in memory usage
chilli#5665: lol
alstroemeria313#1694: yep
chilli#5665: this could be feasible
alstroemeria313#1694: yay~
chilli#5665: this could be plausible too
alstroemeria313#1694: :)
alstroemeria313#1694: ...Can we have good 2x upsampling that's not bilinear too ^^;;
alstroemeria313#1694: Like pytorch bicubic has problems
alstroemeria313#1694: So I just use bilinear for my stuff
alstroemeria313#1694: i have written a custom 2x bicubic upsample that looks visually way better
alstroemeria313#1694: but it is not the best speed wise, it is just reflect padding followed by a stride 2 transposed conv with a fixed kernel
alstroemeria313#1694: idk what's wrong with pytorch's, it makes weird artifacts in the output if you apply it to its own output over and over
alstroemeria313#1694: whereas mine (and practically everyone else's) is fine
chilli#5665: mmm
chilli#5665: have you seen the paper on "issues with all the upsampling implementations"
tpapp157#3643: Also, note previously talked about issues with naive bilinear downsampling by more than 2x.
alstroemeria313#1694: i think so
alstroemeria313#1694: it's bad and i only use ops that i have verified do the right thing or have written myself ^^;;
alstroemeria313#1694: by "everyone else's" i mean excluding deep learning lol
chilli#5665: lol
alstroemeria313#1694: because people love to take shortcuts in dl that they wouldn't take if they were doing image processing in rgb space and could just *look* at the visual quality of the result
chilli#5665: I think torchvision has a "better" one
chilli#5665: that's being merged into pytorch core
alstroemeria313#1694: jax.image.resize() is correct btw
alstroemeria313#1694: but it's kinda slow on tpu
alstroemeria313#1694: for feature maps
alstroemeria313#1694: since i just want fixed 2x downsamples and upsamples this probably simplifies things
nshepperd#2316: @alstroemeria313 morning~
alstroemeria313#1694: morning!
nshepperd#2316: i'm just about to go to bed ^^;
nshepperd#2316: i don't remember if i actually checked if the convolution based one is faster on tpu, i probably should
alstroemeria313#1694: :)
nshepperd#2316: i bet writing it with grouped convolution instead of reshaping to [nc, 1, h, w] is better bc of the whole padding issue
chilli#5665: @alstroemeria313 https://github.com/pytorch/pytorch/pull/70930
alstroemeria313#1694: yay!
alstroemeria313#1694: finally~
alstroemeria313#1694: ohh. like just repeating the kernel?
nshepperd#2316: yeah
alstroemeria313#1694: wait, the torchvision backward pass was merged in August?
alstroemeria313#1694: the docs say there's no backward pass
alstroemeria313#1694: even in master
alstroemeria313#1694: is there one but the docs weren't updated?
alstroemeria313#1694: applying the torchvision bicubic upsample 2x to itself over and over gets me these artifacts https://cdn.discordapp.com/attachments/729741769738158194/935585563736682496/thing.png
alstroemeria313#1694: (original image was a 4x4 identity matrix)
alstroemeria313#1694: when i use my own it doesn't have artifacts.
alstroemeria313#1694: it only happens if you recursively upscale. or rather it's only visible if you recursively upscale bc whatever small artifacts get amplified.
alstroemeria313#1694: but this is a problem for using the op to upsample in resnets which output images obviously
krigeta#6645: Hello, I want to ask if it is possible to predict a story's future based on the past story?
StellaAthena#3530: No
krigeta#6645: okay
chilli#5665: you need to run with antialias flag I think
alstroemeria313#1694: ah
chilli#5665: (at least according to that PR)
alstroemeria313#1694: it doesn't in stable
alstroemeria313#1694: the output has no grad_fn
Sid#2121: huh? i mean to some extent, sure it is. that's what language modelling is.
Sid#2121: it depends exactly what you mean by predict. Stories also generally fit into quite a small set of narrative arcs / topologies / whatever you want to call them
jesse#7865: OpenAI embeddings are live! https://openai.com/blog/introducing-text-and-code-embeddings/
cfoster0#4356: Oh! There's a paper too
cfoster0#4356: Seemingly the exact same setup as CLIP, and initialized with GPT/Codex weights
cfoster0#4356: The only substantive discussion of training data I could find
>>> Our models are trained on naturally occurring paired data. cpt-text models are trained on Internet data with neighboring pieces of text as positive pairs for the contrastive objective. The code embedding cpt-code models use (text, code) pairs extracted from open source code.
๐ ฌ gabriel_syme ๐ ฌ#3220: came here to post this ๐
๐ ฌ gabriel_syme ๐ ฌ#3220: hmm could I use this for retrieval? I'd imagine there is a cap to how many calls right
bmk#1476: time to make GPT (Goose Pretrained Transformer): trained on (goose, goose) pairs
๐ ฌ gabriel_syme ๐ ฌ#3220: dang 12288 dim, I want that one plz
๐ ฌ gabriel_syme ๐ ฌ#3220: about $8k for a million vectors, probably a bit too expensive for me ๐
๐ ฌ gabriel_syme ๐ ฌ#3220: ah no, ehm that's for 1k tokens
tpapp157#3643: Shit ain't cheap. Does the ToS allow you to store the embeddings?
๐ ฌ gabriel_syme ๐ ฌ#3220: good question, if it does I'm sure someone will do this
๐ ฌ gabriel_syme ๐ ฌ#3220: costs a lot but you only do it once, or close to that
tpapp157#3643: If it was a one time cost to encode a dataset then a business case could be made but a lot of these APIs specifically forbid storing the results.
๐ ฌ gabriel_syme ๐ ฌ#3220: in that case I'll stick to J ๐
Kazumi#1297: oh noe https://cdn.discordapp.com/attachments/729741769738158194/935617287262720111/Screenshot_from_2022-01-26_04-28-32.png
Kazumi#1297: I wonder how painful is this going to be to fix
bmk#1476: :ptsd:
Kazumi#1297: this probably broke when all my python packages broke
alstroemeria313#1694: oh no
Kazumi#1297: :ded:
Kazumi#1297: it's been over a year since when I've set up torch
Kazumi#1297: I do not remember how this works
tpapp157#3643: do you have the appropriate matching versions of pytorch, cuda, and cudnn?
Kazumi#1297: I think it's just missing
Kazumi#1297: there is no cuda under /usr/local
tpapp157#3643: If cuda isn't installed then pytorch isn't going to be able to see the gpu.
Kazumi#1297: hmm
Kazumi#1297: nvidia-smi and nvcc works, so I guess it's the environmental variables?
tpapp157#3643: could be
alstroemeria313#1694: it's in /usr/bin?
Kazumi#1297: they're both in /usr/bin
alstroemeria313#1694: ah
alstroemeria313#1694: and your pytorch was *compiled with* cuda support?
alstroemeria313#1694: it just can't use your cuda?
alstroemeria313#1694: like if you `pip list`
alstroemeria313#1694: it's like `1.10.1+cu113`?
alstroemeria313#1694: if not, you need to do like `pip3 install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html`
Kazumi#1297: hm, nope, it doesn't say that
Kazumi#1297: ahh
alstroemeria313#1694: (this is from the pytorch.org website)
Kazumi#1297: I was doing `conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch`
alstroemeria313#1694: Ahh
alstroemeria313#1694: Oh that's conda?
alstroemeria313#1694: oh
alstroemeria313#1694: ...But then why doesn't it show it in pip
Kazumi#1297: it just says `1.10.1`
Kazumi#1297: and pip says no matching version for `1.10.1+cu113`
Kazumi#1297: ohh, I did not think the link was part of it, I thought that was a reference
Kazumi#1297: now it says `1.10.1+cu113`, but `torch.cuda.is_available()` is still false
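For anyone hitting the same thing, a quick sanity check to tell a CPU-only wheel apart from a CUDA build that just can't see the device:
```python
import torch

print(torch.__version__)           # ends in +cuXXX for a CUDA build
print(torch.version.cuda)          # None on a CPU-only wheel
print(torch.cuda.is_available())   # False if the driver or device isn't visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```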
alstroemeria313#1694: ohh :/
Kazumi#1297: ok
Kazumi#1297: guess what made it work
Kazumi#1297: turning my computer off and on again
Kazumi#1297: :ded:
bmk#1476: did you mean
bmk#1476: :goose10:
bmk#1476: when in goose, do as the geese
Kazumi#1297: do I deserve sleep now
Kia#2550: You need to
Kazumi#1297: it's 8am
Kia#2550: Go sleep now
Kia#2550: This is the time you sleep right
Kazumi#1297: okie
Kia#2550: Yeah,rest well now
Kazumi#1297: good night
alstroemeria313#1694: i need to write documentation for my automatic lr scaler
JoshYears#2906: Hey y'all, I'm a professional user researcher and rules writer/editor in the tabletop game world and I know my way around modern NLP half-decently. (I took a graduate course and have built a pretty mediocre model+dataset that does QA for board game rulebooks.) I'm curious, is there any need for UX/UR/docs work for Eleuther projects?
Louis#0144: Indeed there is!
Louis#0144: @๐ ฌ gabriel_syme ๐ ฌ
Louis#0144: (Also cc @Tracy)
alstroemeria313#1694: ...why does the lr adjustment from No More Pesky Learning Rates suddenly appear to work much better when you take its square root
Some Point Process#3793: Has anyone tried this yet? https://cdn.discordapp.com/attachments/729741769738158194/935705012531765268/unknown.png
Some Point Process#3793: (<https://arxiv.org/abs/2105.15183>)
EricHallahan#1051: ... I feel really guilty that this page hasn't been put in a more accessible place. I seriously need to get around to sprucing up the website.
https://www.eleuther.ai/get-involved/
EricHallahan#1051: You can even tell that it has been ignored because it doesn't even have dedicated meta text. :guilty:
๐ ฌ gabriel_syme ๐ ฌ#3220: Oh, good morning
EricHallahan#1051: To respond, there used to be at one point a small section on the above page about web dev and UX that said something along the lines of "While there isn't much work that we need completed in this area, we find that we occasionally have a need for it."
EricHallahan#1051: So while sparse, it is certainly appreciated.
EricHallahan#1051: (I honestly have no idea what the goose has in mind.)
Kazumi#1297: good morning
Kia#2550: wha
Kazumi#1297: it was cold, I woke up
Kazumi#1297: also pytorch forgot my gpu existed again. I don't want to restart my computer every time I let it go to sleep mode
update: running `sudo rmmod nvidia_uvm ; sudo modprobe nvidia_uvm` seems to reset the gpu, so I don't need to restart the computer
JoshYears#2906: Neat! What would be the best way to be alerted of those needs? Also nice to see another Penn State person; I was there for grad school from 2010 to 2013 in materials science and engineering.
Louis#0144: We have a literal ux/ur research project
Louis#0144: Which @JoshYears might be interested in
Louis#0144: We also are working on an LM human eval project and a data annotation project
Louis#0144: Both of which need web engineers
JoshYears#2906: Yes def. Where do I sign up? :Goose:
JoshYears#2906: Cuz I'm seeing the projects channels and none of them scream UX/UR. ๐
Louis#0144: @๐ ฌ gabriel_syme ๐ ฌ
Louis#0144: honk
๐ ฌ gabriel_syme ๐ ฌ#3220: oh hi, sorry just got coffee and morning tasks
๐ ฌ gabriel_syme ๐ ฌ#3220: yes we just started discussing about a UI/UX project for semantic generation and there's also the annotation project Louis is talking about
๐ ฌ gabriel_syme ๐ ฌ#3220: if you want we can have a chat about it, see what sparks your interest
OccultSage#3875: so I just got my A6000 48gb card and installed it in my Ryzen 3950x machine.
OccultSage#3875: and the most important thing I did.
OccultSage#3875: https://cdn.discordapp.com/attachments/815341704747024444/935712977598939206/IMG_0352.jpg
Louis#0144: lmao
OccultSage#3875: https://cdn.discordapp.com/attachments/815341704747024444/935712977989042197/IMG_0353.jpg
Louis#0144: iirc they suck at gaming
Louis#0144: no?
OccultSage#3875: no, they're great
OccultSage#3875: a bit faster than a 3090.
OccultSage#3875: the A6000 is basically a 3090 with 48GB of ECC RAM
EricHallahan#1051: This is #general sir.
OccultSage#3875: aaah, fuck
OccultSage#3875: this is important ML stuff
OccultSage#3875: yessir
OccultSage#3875: Three Kingdoms as a proxy for ML benchmarks
OccultSage#3875: :facepalm:
Kia#2550: lmao
EricHallahan#1051: Oh, I'm not at University Park, I'm at Great Valley most of the time. `:P`
๐ ฌ gabriel_syme ๐ ฌ#3220: Do you think I should take away `\n` from my RETRO chunks? Is it something of interest to the models if that chunk was at the end of a specific input / doc?
alstroemeria313#1694: @nshepperd wonder if i should try adahessian-type pooling the Hessian estimate over various dimensions of the param tensor to reduce variance
alstroemeria313#1694: However
alstroemeria313#1694: Like remember how I said adahessian was computing E[diag |H|] by mistake
nshepperd#2316: yeah
alstroemeria313#1694: By squaring, EMAing, and sqrting
alstroemeria313#1694: They pool before squaring
alstroemeria313#1694: So if they do the pooling then they aren't actually getting either E[diag |H|] or |E[diag H]|
nshepperd#2316: oh :catgirl5:
alstroemeria313#1694: Which is wrong and if I did this I would square before pooling so it does the right thing
nshepperd#2316: yeah
nshepperd#2316: pool before squaring will tend to cancel out the values, so the step sizes will be too large
alstroemeria313#1694: i think they do actually *take the absolute value* before pooling
nshepperd#2316: ohh
alstroemeria313#1694: yeah `tmp_output = torch.mean(hv.abs(), dim=[2, 3], keepdim=True)`
nshepperd#2316: ```py
def update(self, params, grads, hvp):
    self.m1 = tree_lerp(self.beta1, self.m1, grads)
    if hvp is not None:
        hvp = jax.tree_util.tree_map(jnp.mean, hvp)
        self.m2 = tree_lerp(self.beta2, self.m2, hvp)
        self.m2_bias *= self.beta2
    def step(p, m1, m2):
        vhat = m2 / (1.0 - self.m2_bias)
        s = self.lr / (jnp.sqrt(vhat) + self.eps)
        return p - s * m1
    return jax.tree_util.tree_map(step, params, self.m1, self.m2)
```
alstroemeria313#1694: that is the thing they have an EMA of its square
nshepperd#2316: this is my scalar one. the hvps are already squared
alstroemeria313#1694: *nods*
alstroemeria313#1694: taking the mean of the squared values is correct
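A tiny sketch of the difference, assuming `hv` is a Hutchinson Hessian-vector product (a torch tensor) for a conv weight of shape [out, in, kh, kw]:
```python
def pooled_sq(hv, dims=(2, 3)):
    # square first, then pool: an EMA of this estimates E[(Hv)^2] per group,
    # and its square root behaves like a row-norm style |H| estimate
    return hv.pow(2).mean(dim=dims, keepdim=True)

def adahessian_style(hv, dims=(2, 3)):
    # AdaHessian instead pools |Hv| before the square/EMA/sqrt, a different quantity
    return hv.abs().mean(dim=dims, keepdim=True)
```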
alstroemeria313#1694: i have really got to get the docs for this lr tuner written
alstroemeria313#1694: bc https://cdn.discordapp.com/attachments/729741769738158194/935736590028926986/Screen_Shot_2022-01-25_at_7.23.04_PM.png
alstroemeria313#1694: the expectation is over the squared hvp
alstroemeria313#1694: then to get the 2-norm of the row you take the square root
alstroemeria313#1694: so what we are doing when we pool over dimensions is.
alstroemeria313#1694: well, if we sum over dimensions we are getting the squared 2-norm of those rows
alstroemeria313#1694: so taking their mean means replacing each individual squared 2-norm with the mean over some group
alstroemeria313#1694: likewise summing over *everything* gets you the squared 2-norm of the entire Hessian which you then divide by the number of rows in it
alstroemeria313#1694: so this preserves the properties of the estimator, right?
๐ ฌ gabriel_syme ๐ ฌ#3220: this one is fun, a nice brake from offtopic too ๐ just started it myself
https://www.youtube.com/watch?v=IncnIIpeKVM
๐ ฌ gabriel_syme ๐ ฌ#3220: joscha bach will be in our digital future sessions as well, this might help me have decent questions
kurumuz#5695: I love joscha
๐ ฌ gabriel_syme ๐ ฌ#3220: i'll send you a link if I can
EricHallahan#1051: `s/brake/break` ๐
๐ ฌ gabriel_syme ๐ ฌ#3220: I confuse those two all the time ๐
๐ ฌ gabriel_syme ๐ ฌ#3220: ok video is much more relevant than I thought, seems cool
kurumuz#5695: I am probably just gonna listen to joscha
kurumuz#5695: :berk:
kurumuz#5695: and skip the rest
๐ ฌ gabriel_syme ๐ ฌ#3220: sounds good ๐
๐ ฌ gabriel_syme ๐ ฌ#3220: the first talk is from Intel's multimodal team
cfoster0#4356: Tbh the others are not bad, would still recommend
cfoster0#4356: Lightly but still
๐ ฌ gabriel_syme ๐ ฌ#3220: first one started with quite introductory stuff but then he mentioned an object knowledge paper so got me interested
Louis#0144: We need off topic 1 and 2
nshepperd#2316: #off-topic and #really-off-topic
EricHallahan#1051: #off-topic
bmk#1476: really-off-topic goes to other servers
alstroemeria313#1694: ok so i have a question.
alstroemeria313#1694: suppose i have unbiased estimators of two quantities.
alstroemeria313#1694: And I want to take their ratio.
alstroemeria313#1694: And get some sort of... thing with reasonable properties.
alstroemeria313#1694: Now, firstly, the ratio of two unbiased estimators is not necessarily an unbiased estimator and in fact it is not in this case
bmk#1476: can you adjust for that
alstroemeria313#1694: Secondly, sometimes my unbiased estimators can provide values that are < 0
alstroemeria313#1694: The actual values themselves must be nonnegative
alstroemeria313#1694: And I can't put negative estimated values into the ratio, it breaks
alstroemeria313#1694: Like, the estimators have to output things that are < 0 sometimes because their mean has to be the actual value so if they couldn't then they would have to output exactly 0 always if the true quantity was 0
alstroemeria313#1694: And we can't construct them to do this afaik.
alstroemeria313#1694: So we have to accept that they sometimes are negative but if we take the mean of enough samples then the mean can't be below 0.
alstroemeria313#1694: And sometimes we don't have enough samples yet
alstroemeria313#1694: so... what do
๐ ฌ gabriel_syme ๐ ฌ#3220: oh ViT-L/14 models released
๐ ฌ gabriel_syme ๐ ฌ#3220: and RN50x64
EricHallahan#1051: Yes it has been discussed in three other places by now. ๐
๐ ฌ gabriel_syme ๐ ฌ#3220: dang
nshepperd#2316: @alstroemeria313 can we get an uncertainty for the estimate of the thing that is sometimes <0?
alstroemeria313#1694: yes we could do EMA variances too
alstroemeria313#1694: of both quantities
nshepperd#2316: and then like, compute the mean of the truncated normal or sth
nshepperd#2316: so that it is never <0
alstroemeria313#1694: I haven't done EMA variances bc I didn't have a good idea how to use them
alstroemeria313#1694: ahh...
alstroemeria313#1694: so the reason these quantities can go negative
alstroemeria313#1694: is that they incorporate a correction for the fact that the batch size you got the samples from is a sample and not the whole population
alstroemeria313#1694: (i.e. you don't have the true mean gradient but only n samples of gradients)
alstroemeria313#1694: if you tell it you got the sample from an infinitely large batch, your estimator of the norm of the true gradient can then never be 0
alstroemeria313#1694: but... we probably do not actually want to do this hack
alstroemeria313#1694: but instead find some way to sanely use the estimators instead
alstroemeria313#1694: also: the condition where the openai estimator returns a value less than 0 is that the squared norm of the sum of the microbatch gradients is less than the sum of the squared norms of the microbatch gradients
alstroemeria313#1694: like, when there was *so much* cancellation that the variance went down by more than the number of things you took the mean of
alstroemeria313#1694: so... what happens when someone tries this on a problem where they have correlations between their samples
alstroemeria313#1694: it's gonna break
alstroemeria313#1694: bc this estimator is only valid if there are no correlations
alstroemeria313#1694: so while it might be negative *on one* batch it can't average out to less than 0
alstroemeria313#1694: i found an alternate formula that can't go below 0 but i am unsure it is as good
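For context, the two estimators being discussed (as in the OpenAI large-batch training paper, appendix A.1) look roughly like this; the variable names are made up:
```python
def noise_scale_estimates(grads_small, grads_big, b_small, b_big):
    # grads_small / grads_big: gradients averaged over batches of size b_small / b_big
    sq_small = sum(g.pow(2).sum() for g in grads_small)
    sq_big = sum(g.pow(2).sum() for g in grads_big)
    g_sq = (b_big * sq_big - b_small * sq_small) / (b_big - b_small)  # ~ |G_true|^2, can go < 0
    s = (sq_small - sq_big) / (1 / b_small - 1 / b_big)               # ~ tr(Sigma)
    return g_sq, s  # the simple noise scale is s / g_sq, taken on smoothed estimates
```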
igoro#7477: I've been running various fine-tuning gpt-neox experiments on small models, where I freeze most of the model. The behavior I keep seeing is the model initially degrades badly before recovering: https://cdn.discordapp.com/attachments/729741769738158194/935947897739026543/unknown.png
igoro#7477: Notice how the training error initially explodes. This only seems to happen with FP16, not with FP32. So maybe something to do with optimizer state? Any tips on how to investigate?
(As long as I'm only unfreezing very small parts of the model, the training recovers pretty quickly, so it's not a big deal. But if I unfreeze a larger part, it can be difficult to get the model to ever recover from the initial error explosion.)
StellaAthena#3530: @igoro Are you using the optimizer states from the trained model to initialize the optimizer for the finetuning?
igoro#7477: I am trying not to. So, I set "finetune=true" in the config file. But I have anecdotal evidence that some of the optimizer state is still getting loaded (e.g., checkpoint["optimizer"]["fp32_groups_flat"])
StellaAthena#3530: @igoro What LR are you using
igoro#7477: I'm using 0.0006. I tried to vary the LR, but the spike in the beginning didn't go away.
StellaAthena#3530: Is that 10% the LR that the model was trained with originally?
igoro#7477: It's not. Probably the same LR as what the model was trained with. I wasn't aware of that rule-of-thumb. (Good to know!) I did try to decrease the LR pretty far down (further than 10x down) and the spike remained.
StellaAthena#3530: Hmm. What does the plot look like using 10% of the LR
igoro#7477: The only experiment where the spike went away was when I switched to FP32
StellaAthena#3530: That's very strange.
Can you share the plot using 10% of the LR
igoro#7477: Yeah, trying to find it on wandb. I guess it's easier to rerun. I'll share shortly
igoro#7477: This is with LR at 10% (so 6e-4 to 6e-5). Also spiked: https://cdn.discordapp.com/attachments/729741769738158194/935956114170839090/unknown.png
igoro#7477: Same thing in FP32, with original LR (6e-4). No spike. (Actually, I shared the wrong graph here. I'll reconfirm.)
cfoster0#4356: What part of the model are you unfreezing?
igoro#7477: in this experiment, final_linear. so the unembedding matrix
evared#6360: Is anyone aware of some example code for running eval_harness on a CUDA based pytorch transformer model?
evared#6360: I'm looking to do some model benchmarking
cfoster0#4356: Hmm are you seeing any warnings about gradient overflows? (assuming DeepSpeed is in play) I've run into issues before tuning some Neo layers with fp16, where it kept overflowing and it turned the dynamic loss scale all the way down to 1.0
igoro#7477: I don't see any warnings when I use deepzero stage 0. I do see warnings when I use deepzero stage 1. but the basic spiking behavior is the same
igoro#7477: I can run some more experiments and get a tighter repro. I'll share what I find. Thanks for taking a peek, @cfoster0 and @StellaAthena
guac#4716: you might want to check out this huggingface language model class for inspiration: https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/models/gpt2.py
igoro#7477: @StellaAthena , @cfoster0 : I can avoid the issue now and I think I mostly understand it. Here is what's going on:
* Optimizer depends on parameter weights, so those are read from checkpoint
* FP16_Optimizer ideally wants 32-bit parameter weights but the ones in the checkpoint are 16-bit. So, FP16_Optimizer **additionally** stores a copy of the 32-bit parameter weights in "mp_rank_00_model_states.py" under `checkpoint["optimizer"]["fp32_groups_flat"]`
* Not all parameters are stored in the optimizer state: only those parameters that were unfrozen, i.e., have `requires_grad=True`.
* This is then the basic problem. If you freeze/unfreeze different parameters, the 16-bit optimizer's expectations on what's in `checkpoint["optimizer"]["fp32_groups_flat"]` will be wrong
igoro#7477: It's a little more complicated, though. Here is what the code looks like, in `FP16_Optimizer.load_state_dict`:
```
try:
for current, saved in zip(
self.fp32_groups_flat, state_dict["fp32_groups_flat"]
):
current.data.copy_(saved.data)
except RuntimeError as error:
print(error)
print(
"Error in loading fp32 model parameters!\nRefreshing fp32 model params from the model's fp16 params instead. This may incur some precision loss."
)
self.refresh_fp32_params()
```
The code zips together `fp32_groups_flat` and `state_dict["fp32_groups_flat"]`. The former list is based on what's currently unfrozen, while the latter is based on what was unfrozen in the checkpoint.
A few things can happen:
1. The two lists match perfectly. The 32-bit parameter values get restored from `checkpoint["optimizer"]["fp32_groups_flat"]` and all is good.
2. The two lists mismatch horribly. The `current.data.copy_(saved.data)` will throw an exception, and the code will take the fallback path of restoring the parameter values from the 16-bit values in the checkpoint. That's also OK.
3. The two lists mismatch, but in a way that doesn't trigger the exception. In particular, `zip()` on two unequal-length lists will return length equal to the shorter one of the lists. In this case, some parameters will not be loaded at all.
I was hitting case (3), where some parameters weren't getting initialized at all.
IMO the code that relies on detecting the RuntimeError isn't guaranteed to do the right thing in all cases. So, if you are freezing and unfreezing parts of the model, you can hit the issue that I hit.
alstroemeria313#1694: how do you do mixtures of experts btw ^^;;
robinbin#9573: Hi guys, one more question on transformers. So the decoder transformer block would always output the same shape as its input, ie (length_output_sequence, embedding size), and that gets transformed into (length_output_sequence, vocab length), by a linear layer then softmax. But this doesn't make sense since you only want to predict the next vocab, so the output should be (1,vocab length). Do people just take the top item in the array or something?
Sid#2121: The model produces a probability distribution over all possible outputs, when sampling you usually just sample the most probable, or sample with a temperature
robinbin#9573: no, the shape of the output is (output_sequence_length, vocab length), that is output_sequence_length amount of probability distributions, not a single probability distribution
robinbin#9573: I understand how logits work, trying to understand transformer output shapes ๐
Sphinx#2092: This has nothing to do with transformers. RNNs work the same way.
Sphinx#2092: Indeed, the only correct answer is
> The model produces a probability distribution over all possible outputs, when sampling you usually just sample the most probable, or sample with a temperature
inox#5400: yes you slice off the last vector in the output, sample from that distribution, concat it to the sequence and repeat
robinbin#9573: Thank you @inox, but isn't that unnecessary? You're having to compute an extra (n-1) x vocab length amount of data
inox#5400: that's transformers, you have to compute it, every layer depends on all the tokens
inox#5400: (within the context window)
robinbin#9573: @Sphinx What I'm referring to is the output per step, so if you are doing it for the whole batch/entire sequence at once during training it'd be (batch, output seq size, vocab length) as output
robinbin#9573: Yeah. But that's pretty crazy and no one really mentions it.
robinbin#9573: Has there been any new architecture which addresses this?
inox#5400: just look at this <https://github.com/karpathy/minGPT/blob/master/mingpt/utils.py#L19-L47>
Sphinx#2092: ehh, in practice you cache the keys and values
Sphinx#2092: so no recomputation is needed.
Sphinx#2092: autoregressive cache is pretty op.
inox#5400: s4? <https://srush.github.io/annotated-s4/>
robinbin#9573: Nah I'm referring to the output logits, so what karpathy did here : logits = logits[:, -1, :] / temperature
EricHallahan#1051: Not a transformer. `:)`
inox#5400: it's transformer-like
EricHallahan#1051: Fair
robinbin#9573: Assuming I'm correct in understanding this, is no one outraged by how much extra stuff is being computed here?
EricHallahan#1051: Again, autoregressive cache solves the problem.
robinbin#9573: Any links to understanding autoregressive cache?
robinbin#9573: Thanks a lot guys. ๐
robinbin#9573: / gals / +
EricHallahan#1051: The best suggestion I have is to just understand how the HF GPT/GPT-2 code works.
robinbin#9573: Don't really feel like going through the hugging face repo - -. it's a lot of code. Is there an easier way to just understand exactly what is being cached in "autoregressive caching"?
Is the query/key/value outputs being cached? Or is it the intermediate attention/value matrix, such that when new dot products are being computed you just append that to the end of the matrix? That doesn't explain the output shape, unless the immediate output from the multihead attention gets cropped before it gets fed into the positional FF.
robinbin#9573: :harold:
EricHallahan#1051: It's a single file.
Dashiell#8739: in the interest of being upfront, and intending no harshness, @robinbin, it appears that no one here is interested in handholding you through this question in the way you'd like. People have answered your (initial) question and pointed you to reasonable resources where you can pursue a fuller understanding. If you don't "feel like" doing the work yourself to figure this out then you may just have to become comfortable with not understanding
robinbin#9573: In the interest of being upfront, I'm not asking nor expecting to be hand held, and fully appreciate anyone's volunteered help to me. but this is not a single file : https://github.com/huggingface/transformers/tree/master/src/transformers
igoro#7477: FWIW, my understanding is that it's the keys & values that are getting cached. Then attention from future tokens can query the past tokens without the need to recompute any of the state associated with earlier tokens.
robinbin#9573: I've been working on transformers for the past 4 days and almost finished with my own implementation, Just trying to understand autoregressive cache.
Dashiell#8739: > The best suggestion I have is to just understand how the **HF GPT/GPT-2 code** works.
https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_gpt2.py
one file
robinbin#9573: Thanks ๐ I didn't know where to find that and don't assume it is obvious to do so.
cfoster0#4356: Um I don't think the HF will enlighten much, generally speaking
cfoster0#4356: It's a huge nest of complexity that is mostly unrelated to anything of architectural interest
robinbin#9573: Thanks. But then the output to the multihead attention will still be (output seq length, model_k), so I'm assuming that + plucking before feeding to feedforward would solve the issue.
cfoster0#4356: The answer to your earlier question is that in training, we input a tensor of [batch, sequence, dim] and we get out a tensor of [batch, sequence, vocab]. Because the information from future tokens does not change past token representations, in inference we can store the keys & values (since that is the only place there is interaction between tokens) as they are computed. Each key/value is a [batch, 1, heads, head_dim] tensor per iteration. In inference, every iteration of the loop our output is just the [batch, 1, vocab] tensor for the *current* position we're computing. So we never need to waste compute on recomputation
robinbin#9573: But wouldn't the output to the multihead attention still be [batch, sequence, dim]?
How do you end up with [batch, 1, vocab]?
cfoster0#4356: When you do the attention during inference, you only need to compute the attention from a single query (the current one) to the past keys, so no. You just have the [batch, 1, heads, head_dim] slice as your attention's output
cfoster0#4356: Along the way you produce a [batch, 1, past_seq_len, heads] attention matrix, but that gets disposed of immediately
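A minimal sketch of the key/value cache for one decoding step, with the shapes from the messages above (all names here are illustrative):
```python
import torch

def attend_step(q, k_new, v_new, cache):
    # q, k_new, v_new: [batch, 1, heads, head_dim] for the current token only
    cache['k'] = torch.cat([cache['k'], k_new], dim=1)   # [batch, seq, heads, head_dim]
    cache['v'] = torch.cat([cache['v'], v_new], dim=1)
    scores = torch.einsum('bqhd,bkhd->bhqk', q, cache['k']) / q.shape[-1] ** 0.5
    weights = scores.softmax(dim=-1)                     # [batch, heads, 1, seq]
    return torch.einsum('bhqk,bkhd->bqhd', weights, cache['v'])  # [batch, 1, heads, head_dim]
```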
robinbin#9573: Oh I see. Sorry that makes everything very clear...
robinbin#9573: Thanks so much.
๐ ฌ gabriel_syme ๐ ฌ#3220: The codebases I know of are: fastmoe, thor, and dselect-k although I have not checked in a while
StellaAthena#3530: Megatron supports it too, for language modeling
alstroemeria313#1694: hmm
alstroemeria313#1694: i am trying to get my head around the basic concept rn
alstroemeria313#1694: you... have a layer type that decides which "expert" to use?
alstroemeria313#1694: and an expert is what exactly?
alstroemeria313#1694: and how do you train/differentiate this
alstroemeria313#1694: since i assume we are not taking convex combinations of experts or smth?
kindiana#1016: an expert is a small mlp
kindiana#1016: usually you have a gating network
kindiana#1016: which picks which experts to use per token
StellaAthena#3530: @alstroemeria313 at a high level, the gating layer is doing a routing task where it chooses which subnetworks get to opine on the output for a particular input
kindiana#1016: if you pick more than one expert you can differentiate through the gating in a meaningful way
kindiana#1016: top-2 gating is pretty common
alstroemeria313#1694: ahh
tpapp157#3643: Think of it like multihead attention where each head is an expert but instead of sending each token to each expert you use a routing mechanism to only send them to ~2.
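A bare-bones sketch of a top-2 gated MoE layer (no load-balancing loss and no capacity limits; purely illustrative):
```python
import torch
import torch.nn as nn

class Top2MoE(nn.Module):
    def __init__(self, dim, hidden, n_experts):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                          # x: [tokens, dim]
        weights, idx = self.gate(x).topk(2, dim=-1)
        weights = weights.softmax(dim=-1)          # mixture weights over the chosen 2 experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):  # iterate over experts and gather their tokens
            for slot in range(2):
                sel = idx[:, slot] == e
                if sel.any():
                    out[sel] += weights[sel, slot].unsqueeze(-1) * expert(x[sel])
        return out
```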
chilli#5665: wait why
alstroemeria313#1694: ah
kindiana#1016: theoretically you can get gradients with top-1, but its extremely noisy
chilli#5665: how do you get the gradients? Something like REINFORCE?
kindiana#1016: you could do like gumbel
kindiana#1016: actually
kindiana#1016: that doesn't get you compute savings?
kindiana#1016: well actually
cfoster0#4356: Idk if there's actually an issue getting good enough gradients in general
cfoster0#4356: Like, so long as you aren't channeling all the tokens into the same expert(s)
kindiana#1016: yeah, there is debate if you even need routing
kindiana#1016: e.g. hash layers
cfoster0#4356: Yeah. The hash embedding thing (from the NVIDIA paper) gives some more evidence that this style of approach works
cfoster0#4356: To increase capacity for a given compute budget
alstroemeria313#1694: Ah
inox#5400: they used a gating gradient hack in the outrageous paper <https://arxiv.org/abs/1701.06538>
๐ ฌ gabriel_syme ๐ ฌ#3220: Can someone eli5 how is hashing a counterargument to MoE for me? :hap:
๐ ฌ gabriel_syme ๐ ฌ#3220: If its a "read that paper thoroughly" answer it is fine, I've only skimmed it
cfoster0#4356: You can just use a hash function to map the token index to the expert and it works just as well as / better than having the network learn to route information to experts
cfoster0#4356: According to https://arxiv.org/abs/2106.04426
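The hash-routing idea boils down to something like this (modulo over the vocabulary index is just the simplest choice; the paper compares several hash functions):
```python
def hash_route(token_ids, n_experts):
    # fixed, non-learned routing: the input token id alone picks the expert
    return token_ids % n_experts
```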
๐ ฌ gabriel_syme ๐ ฌ#3220: Ohhh, okay thanks I'll take a look!
๐ ฌ gabriel_syme ๐ ฌ#3220: I thought it was a compression vs sparsity thing
๐ ฌ gabriel_syme ๐ ฌ#3220: That is pretty wild, at least from the claims in the abstract. Cheers, I had missed it
cfoster0#4356: One of the authors talks about it in the back half of this talk, if you're interested <https://youtu.be/EYvPRVcB2uI>
tpapp157#3643: Just reading the abstract it's saying you can fix the routing and just learn the experts and it works fine. Makes sense to me.
tpapp157#3643: Learning both the routing and the experts is in a certain sense redundant.
tpapp157#3643: There was an academic trend a number of years back of predefining a library of logical and mathematical functions and training the network to learn to route them dynamically. In this context it's kind of the conceptual opposite with fixed experts and learned routing.
tpapp157#3643: I recall there being a handful of transformer papers that explored the idea of fixed attention patterns of various sorts.
๐ ฌ gabriel_syme ๐ ฌ#3220: Dang I think I need to better understand MoE because I'm not sure how those are separated. Doesn't routing impact what experts learn?
๐ ฌ gabriel_syme ๐ ฌ#3220: So like if you have a multi task setting for example, you just send each task-specific example to the same expert?
tpapp157#3643: It's saying that the network is flexible enough that given a fixed routing scheme it can optimize the token inputs and the experts around it.
cfoster0#4356: Hmm now that I'm thinking about it, how is the actual per-expert computation done in parallel? Like I don't think any plain old `einsum` expression will cut it. Or do you just iterate over the experts and project one by one?
kindiana#1016: the routing cannot be expressed efficiently with an einsum
cfoster0#4356: So... what do
kindiana#1016: iterate over experts and gather
kindiana#1016: or triton ๐
chilli#5665: hmm, explain?
kindiana#1016: if you want to process each expert efficiently, you need to only process specific tokens
kindiana#1016: and you can't really express a gather op with einsum
chilli#5665: so it's about fusing batched matmuls with a gather?
chilli#5665: Also, is an interpolate just a convolution?
๐ ฌ gabriel_syme ๐ ฌ#3220: this is nice https://cdn.discordapp.com/attachments/729741769738158194/936180259844395028/unknown.png
kurumuz#5695: what is that
๐ ฌ gabriel_syme ๐ ฌ#3220: it's from Cohen's talk, he was talking about attention and semantics (control)
๐ ฌ gabriel_syme ๐ ฌ#3220: sorry out of context lol
nostalgiahurts#3408: for audio at least, interpolation can be thought of as first upsampling with zeros, then convolving with a filter:
- nearest neighbor: rectangular filter, which has a sinc response
- linear interpolation: triangle filter, which has a sinc^2 response
- sinc interpolation: sinc filter, which has a rectangular response
and so on. this is from section 2.2 of this paper https://arxiv.org/abs/2111.11773
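A little 1D sketch of that view of interpolation, i.e. zero-insertion followed by a filter (a [0.5, 1, 0.5] triangle reproduces linear interpolation up to edge handling):
```python
import torch
import torch.nn.functional as F

def upsample2_1d(x, kernel):
    # x: [batch, channels, length]; kernel: 1D filter taps, e.g. torch.tensor([0.5, 1.0, 0.5])
    up = torch.zeros(x.shape[0], x.shape[1], x.shape[2] * 2, device=x.device, dtype=x.dtype)
    up[..., ::2] = x                                   # insert zeros between samples
    w = kernel.to(up).view(1, 1, -1).repeat(x.shape[1], 1, 1)
    return F.conv1d(up, w, padding=kernel.numel() // 2, groups=x.shape[1])
```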
๐ ฌ gabriel_syme ๐ ฌ#3220: it's that discussion we linked earlier with Bach et al
Sidd#6307: OpenAI finally came out with the blog post around how they trained the Instruct GPT-3 models: https://openai.com/blog/instruction-following/. RL from human feedback plays a major role, which I didn't expect!
StellaAthena#3530: This took me entirely by surprise o.O
Sidd#6307: you and me both lol
Kazumi#1297: > This is the first time our alignment research, which we've been pursuing for several years, has been applied to our product.
StellaAthena#3530: Interestingly, the Instruct Series went into pseudo-public beta (acknowledged beta?) only two months after "learning to summarize from human feedback" was released. It seems then that their research was much more mature than they had let on.
cfoster0#4356: Huh
Sidd#6307: Yeah that's what got me first. Not sure if they've been improving it steadily since then
ari#9020: The original instruct models were used to gather prompts for the RLHF-trained models:
> Our prompt dataset consists primarily of text prompts submitted to the OpenAI API, specifically those using an earlier version of the InstructGPT models (trained via supervised learning on a subset of our demonstration data) on the Playground interface. Customers using the Playground were informed that their data could be used to train further models via a recurring notification any time InstructGPT models were used.
cfoster0#4356: :schmidhuber:
nev#4905: they also kept tuning the policy models on LM, interesting
nev#4905: the reward models will probably never be released :sadge:
65536william#9999: anyone know off the top of their heads what settings were used for the BigScience tokenisation of the Pile? i.e. `min_unique_tokens` and normalisation?
StellaAthena#3530: Not a clue
tpapp157#3643: So I read through that Hash MoE paper. I didn't realize but they're actually using the raw text tokens to determine the routing to experts. I had assumed they were using the token outputs from the prior layer. Their formulation is a lot simpler and you can definitely make the argument that it doesn't qualify as a true Mixture of Experts model at all and is instead just a model that includes a few unique layers for each token in their dictionary.
tpapp157#3643: Given that their MoE block sits within a residual connection, and therefore the model can learn to use or ignore the unique layers, it really shouldn't be a surprise that it performs better than the baseline transformer.
tpapp157#3643: Therefore, the main point of interest I think the paper provides is the relative comparison of different hashing techniques they explore for routing.
cfoster0#4356: Tbh I think it should/would be a surprise that all that work on load balance losses, learned routing, yadda yadda was wasted. That is not obvious a priori
cfoster0#4356: Hmm, I'm still slightly confused about this footnote to the blog post.
>>> The InstructGPT models deployed in the API are updated versions trained using the same human feedback data. They use a similar but slightly different training method that we will describe in a forthcoming publication.
kindiana#1016: guess you gotta be patient ๐
Sphinx#2092: gotta setup for the next movie, the OpenAI cinematic universe
aูด#8803: Is there a unit of measurement for determining how normalized a list of values are with respect to the list?
bmk#1476: it's simple, really https://cdn.discordapp.com/attachments/729741769738158194/936409465828311150/Screenshot_20220127-165635_Firefox_Focus.jpg
aูด#8803: Can I get this in normal people talk?
bmk#1476: https://en.m.wikipedia.org/wiki/Norm_(mathematics)
Louis#0144: no
timudk#8246: Has anybody here experience training models on multiple GPUs using Jax? If so, how hard is the transition from PyTorch to Jax?
AI_WAIFU#2844: how many is "multiple"?
timudk#8246: 2 nodes of 8 GPUs
AI_WAIFU#2844: Don't do it, stick to pytorch
AI_WAIFU#2844: https://github.com/google/jax/issues/2731
kindiana#1016: I think it theoretically works now
kindiana#1016: but I've never tried
AI_WAIFU#2844: Well it's "worked" for over a year now.
bmk#1476: "theoretically works" is a scary phrase
timudk#8246: agreed
timudk#8246: I need to use forward diff for a project, and thought that might be a good excuse to dabble my feet into jax
kindiana#1016: you can try functorch
AI_WAIFU#2844: You're probably just SOL tho
AI_WAIFU#2844: I don't see forward autodiff in functorch
AI_WAIFU#2844: It's pretty wild that we don't have decent distributed forward autodiff on nvidia hardware, but what can you do
kindiana#1016: isn't this forward mode? https://pytorch.org/functorch/generated/functorch.grad.html#functorch.grad
AI_WAIFU#2844: I don't think so.
guac#4716: yeah they don't even have jvps implemented yet
Louis#0144: @guac ikr get with the times ๐
nshepperd#2316: :works_internally:
AI_WAIFU#2844: Chilli's the one you should ping about that
guac#4716: yeah someone should *push* them to get it done ๐
Louis#0144: ๐โโ๏ธ
Louis#0144: I need a hair flip emote
Louis#0144: Honestly
timudk#8246: lmao
timudk#8246: has anyone tried this out? https://pytorch.org/tutorials/intermediate/forward_ad_usage.html
AI_WAIFU#2844: nope but you can report back and tell us how it went
๐ ฌ gabriel_syme ๐ ฌ#3220: ~~see, this is why I don't buy GPUs~~
alstroemeria313#1694: wow... that looks like a ton of work for the user compared to two backwards, the first with create_graph=True
alstroemeria313#1694: to get an hvp
alstroemeria313#1694: does it at least let you get batches of HVPs efficiently if you go through that trouble?
alstroemeria313#1694: (Since one of the problems with two reverse mode passes is having to do a *third* reverse mode pass for a second hvp, etc.)
alstroemeria313#1694: like, if you have an unbiased estimator for the Hessian diagonal or the absolute Hessian diagonal or some matrix of interest
alstroemeria313#1694: The variance is often pretty high if you only do one hvp
alstroemeria313#1694: But if you can do batches of hvps at the same point with a bunch of different vs that would be great
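For comparison, the two-reverse-pass HVP being referred to is roughly this, for a single parameter tensor (names illustrative):
```python
import torch

def hvp(loss, param, v):
    # first reverse pass with create_graph=True, then backprop grad . v to get H v
    (grad,) = torch.autograd.grad(loss, param, create_graph=True)
    (hv,) = torch.autograd.grad(grad, param, grad_outputs=v)
    return hv
```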
chilli#5665: What, we do though
chilli#5665: https://github.com/pytorch/functorch#jvp
chilli#5665: @timudk @guac
chilli#5665: smh
guac#4716: ha maybe these docs are out of date https://pytorch.org/functorch/functorch.html ?
chilli#5665: Yeah that's prolly for the 1.10 release
chilli#5665: Also, @Louis smh
chilli#5665: Functorch's jvp is actually just a wrapper around Pytorch core's jvp
chilli#5665: Just with a different API
AI_WAIFU#2844: Does functorch work with distributed/multinode?
Louis#0144: Why did you tag me
Louis#0144: Not guac
Louis#0144: Lmao
chilli#5665: Prolly
kindiana#1016: Pretty optimistic considering jax's isn't correct with collectives :P
chilli#5665: Well, itโs not gonna work with model parallelism
chilli#5665: Since Pytorch doesnโt have any model parallelism primitives ๐
chilli#5665: So Iโm just saying it works with data parallel
chilli#5665: Which I think it does
Louis#0144: Yet
pbaylies#1820: https://github.com/sjb3d/descent/tree/main/examples/image_fit <-- pretty fun experiment here re: overfitting a few different kinds of small networks to an image, with results etc.
AI_WAIFU#2844: wait but doesn't it support a bunch of basic paralleism ops that would let you do model paralellism?
chilli#5665: Oh hmm, it won't work with that I think
chilli#5665: Sadly
kindiana#1016: you just gotta make some hacky ops with custom grads
kindiana#1016: ๐
chilli#5665: Oh hmm, yeah, it could work
chilli#5665: But doubt it works with forward mode now lol
AI_WAIFU#2844: I'm sure something can be hacked together
AI_WAIFU#2844: might not be the fastest thing
chilli#5665: Yeah you can extend forward mode
AI_WAIFU#2844: ok then it's totes doable
chilli#5665: With custom autograd.function
kindiana#1016: this is how to do it in jax https://github.com/kingoflolz/mesh-transformer-jax/blob/master/mesh_transformer/util.py#L99-L110
chilli#5665: He's talking about getting forward mode to work with collectives
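For the reverse-mode case, the trick in the mesh-transformer-jax link above is essentially an identity op with a collective only on the backward pass; a stripped-down version (the axis name is an assumption) looks like:
```python
import jax

@jax.custom_vjp
def g_psum(x):
    # identity on the forward pass; all-reduce the gradient across model-parallel shards on backward
    return x

def g_psum_fwd(x):
    return x, None

def g_psum_bwd(_, grad):
    return (jax.lax.psum(grad, axis_name='mp'),)

g_psum.defvjp(g_psum_fwd, g_psum_bwd)
```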
GrimSqueaker#8837: https://twitter.com/Nils_Reimers/status/1487014203483377664
Ouch
cfoster0#4356: Minor note: it looks like Splade is "freely available", but only as CC-BY-NC, so probably kinda useless for the customers OpenAI is after
finetune#0907: does the nc part in cc refer to any use or only distribution? tried reading the license and wasn't quite clear
guess you'd have to ask a lawyer
kurumuz#5695: also idk gotta test on downstream
cfoster0#4356: IANAL but no, not just distribution from my read
GrimSqueaker#8837: Never heard of splade before that; not clear if it beats BM25 (on lemmatized text)
StellaAthena#3530: The NC clause covers anything that is "primarily intended for or directed toward commercial advantage or monetary compensation."
finetune#0907: o alright
finetune#0907: wait, actually, that's just from the definition of the term "NonCommercial"
the part that grants rights only mentions the "NonCommercial" term wrt "reproduce and Share the Licensed Material" and "produce, reproduce, and Share Adapted Material" and "to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database"
also royalty collection is not waived unless use is "NonCommercial"
tpapp157#3643: Worth noting that a lot of these license terms have yet to be tested in court in the context of ML models, outputs generated by ML models, products, services, or other models derived in part from other ML models, etc. It's anyone's guess at this point how a court would choose to interpret these and where they would draw legal lines.
StellaAthena#3530: What is your question, specifically? You want to know if you can do X for profit... what is X?
finetune#0907: mainly curious if the license actually says what one would reasonably assume when applied to ml models
finetune#0907: feels to me like it might not
finetune#0907: since it was clearly designed for data where "use" is mainly distribution, "use" in other ways might not be covered by the non commercial restriction
finetune#0907: or even at all
StellaAthena#3530: Oh, you're thinking about whether you can profit off of a model you trained on CC-BY-NC data?
There is zero case law in the US that affirms an answer to that question. In the EU, there is legislation saying that that is okay *I think*, but I haven't been paying a huge amount of attention to that.
cfoster0#4356: They definitely had other uses besides distribution in mind for the creative commons license. Stuff like remixing and creating derivatives are clearly in scope
tpapp157#3643: It's probably going to be a few decades until a lot of these questions have definitive legal answers.
tpapp157#3643: For now, the major tech companies have been content to share their basic ML advancements and live and let live. But as these models start to become more practically useful and generate real money, we'll see the legal knives come out.
tpapp157#3643: Let's not forget that a bunch of basic NN building blocks like batch norm and dropout are patented.
tailcalled#2750: 1. Should I write this up in more detail with proper diagrams etc in a real post, rather than a comment?
https://www.lesswrong.com/posts/oqzasmQ9Lye45QDMZ/causality-transformative-ai-and-alignment-part-i?commentId=rJ7Ssb82bR677F2hq#BZGm2tZT2y8dgjxjM
I feel like a lot of people are unfairly dismissive of existing representation learning, and I feel like causal justifications might help people understand
2. I actually probably shouldn't write it up because I only have a little hands-on experience with training neural networks, I'm mostly just reading other's work, so if I do write it up, does anyone who has more experience want to read it over before I publish to inject some practical wisdom?
tpapp157#3643: Yes and no. The techniques you describe are really just assumptions regarding the structure of the data and relationships between variables. Each technique makes assumptions about what forms causality can take and how that form may be expressed in data. So these techniques are really just tests of those assumptions which in turn can hint at causal relationships between variables but ultimately it is impossible to prove or disprove a causal relationship. We must simply rely on a preponderance of evidence. This is fine, all of statistical modeling is based on making these sort of assumptions regarding data structure, after all. But on a more philosophical level, causality is fundamentally an ill-defined concept.
tailcalled#2750: I don't really understand your position, it seems like there's a very large inferential distance to my position. E.g. I tend to take a Bayesian approach to epistemology and a Pearlian approach to causality (except much more optimistic than Pearl).
StellaAthena#3530: ~~PyTorch now has an official distributed training library: https://github.com/facebookresearch/moolib~~
EricHallahan#1051: I think that is a disingenuous description of what it is?
EricHallahan#1051: It's not related to the PyTorch project at all AFAICT.
StellaAthena#3530: IDK, that seems like a rather reasonable description of something that is described as "a library for distributed ML training with PyTorch" and released by the people who created PyTorch?
EricHallahan#1051: `facebookresearch` != `pytorch` ?
StellaAthena#3530: Is it not the case that pytorch $\subsetneq$ facebook\_research
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/936744271728492554/193204646687408129.png
EricHallahan#1051: No, but if it were a part of the PyTorch project it would be published as such.
nostalgiahurts#3408: I always found it weird how a lot of facebook research code is dumped into pytorch/fairseq's examples folder. rather than having a facebookresearch/fairseq repo
Sphinx#2092: they probably don't care.
StellaAthena#3530: ICLR final details are out, including PC judgements, and who got orals and spotlights.
StellaAthena#3530: This PC report on Einops is a fascinating read and a very clear statement about what the PC views as acceptable publishable work
https://openreview.net/forum?id=oapKSVM2bcj
CarsonPoole#0640: it's interesting to see Fairseq 13B slowly learn how to deal with newlines
CarsonPoole#0640: it starts out just being completely confused
CarsonPoole#0640: then uses newlines everywhere
CarsonPoole#0640: then slowly becomes more normal
CarsonPoole#0640: currently training 13B on the BigScience dataset to get an "Instruct" like model
chilli#5665: I think there are many distributed training libraries for different situations ๐ค
chilli#5665: I feel like that's kind of a strange description of it
OccultSage#3875: 2.7b seems to get it after one epoch on a couple hundred megabytes.
alstroemeria313#1694: `RuntimeError: expected scalar type Float but found Half`
alstroemeria313#1694: ...
alstroemeria313#1694: Is there anything special I need to do in order to make fp16 work with deepspeed?
alstroemeria313#1694: I enabled it in the config file and now it's... not autoconverting either the input data or the model weights?
alstroemeria313#1694: ```
File "train.py", line 124, in forward
f = 2 * math.pi * input @ self.weight.T
RuntimeError: expected scalar type Float but found Half
```
alstroemeria313#1694: oh
alstroemeria313#1694: you have to manually convert the inputs and targets to fp16
alstroemeria313#1694: or it will error
alstroemeria313#1694: it doesn't convert them automatically.
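In other words, something like this minimal sketch (assuming `model_engine` comes from `deepspeed.initialize(...)` and the wrapped module returns its own loss; the names are illustrative):

```python
for inputs, targets in dataloader:
    # DeepSpeed's fp16 mode halves the model weights but not your batches,
    # so cast them yourself before the forward pass
    inputs = inputs.to(model_engine.device).half()
    targets = targets.to(model_engine.device).half()

    loss = model_engine(inputs, targets)
    model_engine.backward(loss)
    model_engine.step()
```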
alstroemeria313#1694: huh
alstroemeria313#1694: my loss scale is going super low and it's never not overflowing :/
alstroemeria313#1694: is fp16 training even going to work with this...
alstroemeria313#1694: oh
alstroemeria313#1694: i had the lr too high
alstroemeria313#1694: and it took a too large step and was NaN from then on
alstroemeria313#1694: i got it working :)
alstroemeria313#1694: it's saving EMA weights too
alstroemeria313#1694: but it hung on training step 460 and i don't know why :/
alstroemeria313#1694: of pipeline parallel
alstroemeria313#1694: it only does it with... wait is that an epoch boundary
sylv#4534: is there a selfhostable version of something like openai's content filter?
rawwerks#0536: Iโm really excited to join this server. Apologies if this is not the right channel, but Iโm curious if anyone has seen any good examples of 3D โstyle transferโ? The only paper Iโve found so far is vox2vox (http://computationalcreativity.net/iccc20/papers/052-iccc20.pdf) - I would really appreciate any references/repos/notebooks along these lines. (For clarification, Iโm not really interested in โcharacter riggingโ (which has a lot of examples) - but actually generating 3D objects/meshes with AI/ML.)
nev#4905: there's https://threedle.github.io/text2mesh/ and https://twitter.com/apeoffire/status/1478465287028711427
nev#4905: also https://ajayj.com/dreamfields (which was released actually? :ultragoose:)
rawwerks#0536: Wow - this looks amazing. Thank you so much. Finally can combine my โhobbyโ interest in AI for art with my โprofessionalโ interest in additive manufacturing.
nev#4905: one thing to keep in mind is that text2mesh is the only one where mesh export is possible for now
nev#4905: I do have ideas on how to add meshes to text2voxels
rawwerks#0536: @nev - so is the โdream fieldsโ just outputting the movie? (Not a mesh object.)
nev#4905: yeah, it can output a network (internal representation) but that's not directly convertible to a mesh
mkualquiera#3484: net2mesh wen
Emad#9608: May want to check out https://mobile.twitter.com/NJetchev โs work
SweatyPalms#1231: Hey guys
SweatyPalms#1231: Would anyone familiar with training models be able to chat with me for 5-10 mins?
chilli#5665: probably not, without more details
rawwerks#0536: thanks, i reached out on IG. his tool seems to be called "ClipMatrix". i found the paper but not the repo/notebook yet => https://deepai.org/publication/clipmatrix-text-controlled-creation-of-3d-textured-meshes
rawwerks#0536: totally separate question - what is the fastest "Style Transfer" colab notebook that folx here have seen?
(this claims to be "fast style transfer" => https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_arbitrary_image_stylization.ipynb and there is some reference in the original style transfer colab that "cycleGAN" is the fastest (https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/cyclegan.ipynb))
this field moves so quickly - i'm sure someone here has a super-speedy variant. (i'm ideally looking for something already ported to colab.) thank you!
SweatyPalms#1231: Fair enough lol
pbaylies#1820: Check out https://magenta.tensorflow.org/blog/2018/12/20/style-transfer-js/
pbaylies#1820: Also - not fast, but probably best - https://github.com/crowsonkb/style-transfer-pytorch
rawwerks#0536: sweet, thank you @pbaylies - appreciate having "the fastest" and "the bestest" ๐
Louis#0144: https://github.com/EleutherAI/adaptive-hitl
Louis#0144: Need make suggestions that don't read like adaptive hitler
Louis#0144: Who here is good at naming stuff
bmk#1476: goosey mcgooseface
Louis#0144: Honestly
bmk#1476: adahuloop
Louis#0144: Huh
Louis#0144: @félin en fuite
Louis#0144: @sweg
Louis#0144: You two should chime in here
Louis#0144: Also @Ambient
sweg#8920: alt idea: double down
Louis#0144: LMAO
sweg#8920: Ada Hitler
Louis#0144: Omg
Louis#0144: HAHAHA
Louis#0144: The cool new optimizer
nev#4905: hitler is a surname
bmk#1476: vetoing any name that has hitler in it
Louis#0144: Yes vetoing any name that has Hitler
Louis#0144: Or reads like Hitler
genetyx8#7543: ||ada-ploop||
Louis#0144: Honestly
Louis#0144: If there's no better suggestions
félin en fuite#6720: what if you just make it caps letters
Louis#0144: Might go with that
félin en fuite#6720: HITL
Louis#0144: Still looks like Hitler
Louis#0144: Lol
nev#4905: hitl in the loop
bernaise#6161: human in the language embedding loop, or HILLEL
bmk#1476: adahoop
cfoster0#4356: Adaptive Human IN The loop (ada-hint)
nev#4905: there's going to be so many "hitler"s in the search box tomorrow
cfoster0#4356: Yes plz no
Louis#0144: Ada hint is good
Louis#0144: I like that one
Louis#0144: If no one suggests a better one I'll go with that
StellaAthena#3530: Only if you're using an ada* optimizer
nev#4905: adahuber
StellaAthena#3530: but yes
félin en fuite#6720: HITLoop Eval
félin en fuite#6720: that's the most work I've done in my life
bmk#1476: pyhoop
bmk#1476: python human in the loop harness
bmk#1476: Oracle ®️ ©️ Human In The Loop™️ Development Kit
cfoster0#4356: Human IN ThEโVery AdaptiveโLoop (hint-eval)
Louis#0144: so to clarify it is adaptive bc you'll specify what level of statistical significance you want and it'll construct the tests around that. So you say that you want 99% confidence and it'll construct and evaluate an LP given each task to minimize the cost (eg minimize the number of participants for expensive tasks) while maximizing statistical significance and following constraints (eg I want at least 2 storytelling tasks and I want 3 NLG tasks)
Louis#0144: That's kinda the main feature
Louis#0144: It does all the optimization stuff for you
Louis#0144: And it figures out the best demographic to minimize cost under constraint
Louis#0144: If you don't need EFL for instance it'll allow for ESL countries
ari#9020: TurkOptimizer
bmk#1476: like.. it keeps collecting data until it's significant?
Louis#0144: Yes
bmk#1476: uh
Louis#0144: Well obviously you set a hard limit
Louis#0144: lol
Louis#0144: But it'll try to stay as far under that limit as it can
bmk#1476: you mean, like,
```while p > 0.05:
collect more data
```
?
bmk#1476: that sounds.. bad
bmk#1476: statistically
StellaAthena#3530: @Louis probably best to also have a setting for power
Louis#0144: Power?
genetyx8#7543: ~~literally p-hacking~~
StellaAthena#3530: Statistical power
StellaAthena#3530: This is not p-hacking lol
Louis#0144: Oh lol yes ok
Louis#0144: This is not p hacking ya
Louis#0144: lol
bmk#1476: statistical power = statistical energy / statistical time, clearly
bmk#1476: https://stats.stackexchange.com/questions/310119/why-does-collecting-data-until-finding-a-significant-result-increase-type-i-erro
bernaise#6161: Testing Human Intelligence for Confidence and Clarity: THICC
StellaAthena#3530: That's not what he's doing
StellaAthena#3530: (Or; it's not what he's representing himself as doing)
Louis#0144: I'm not doing that
Louis#0144: I'm trying to express it better
Louis#0144: Basically I compute the desired sample size initially since we can have a general idea of what our distribution might look like
StellaAthena#3530: If you have an estimated effect size and a desired significance level there's an optimal number of trials to run
bmk#1476: oh
Louis#0144: And I want to minimize the sample size since that lets us minimize cost
bmk#1476: but then you stick to that sample size?
Louis#0144: Ye
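For the curious, fixing the sample size up front from an estimated effect size is a standard power analysis; a hedged sketch with statsmodels (the effect size, alpha, and power below are placeholder values, not the project's actual settings):

```python
from statsmodels.stats.power import TTestIndPower

# solve for the per-group sample size given an assumed effect size,
# significance level, and desired statistical power
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed Cohen's d, e.g. from pilot data
    alpha=0.01,       # significance level
    power=0.8,        # desired power
)
print(round(n_per_group))
```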
bernaise#6161: Optimizing via Person-ENgaged Adaptive Instruction: OPENAI
Louis#0144: If we can name it something goose related
Louis#0144: I'd be grateful
nev#4905: adahonk
Louis#0144: Human ON Ketamine
Louis#0144: Idk
Louis#0144: Adaptive human on ketamine
nev#4905: ada human oriented natural ketamine
nev#4905: it's all ketamine :blobsad:
bmk#1476: human orieNted knowledge
Louis#0144: We can name @BoneAmputees clip search goose btw
Louis#0144: GOose Oriented Search Engine
nev#4905: goose-mini
bmk#1476: my life will be complete once I have a paper on my Google scholar page beginning with "GOOSE:"
Tinytitan#5596: Human Oriented N K where N and K represent parameters of the trials you are performing
bernaise#6161: Neural Knetwork
Louis#0144: K should be ketamine
bmk#1476: Human Oriented Neural Ketamine
bernaise#6161: LOON: Learning on human-Optimized Networks
Louis#0144: There's no learning component
Louis#0144: Solely eval
Louis#0144: Maybe we can add preference learning in the future
Tinytitan#5596: General Objective Optimised ? ?
Louis#0144: But prodigy already does that really well
nev#4905: s environment
Louis#0144: We can drop the word adaptive
Ambient#0001: Coadaptive Eval for Neural Architectures: CENA
Tinytitan#5596: General Objective Optimised Safety Environment
Louis#0144: Ok forget naming it goose
Louis#0144: Pls
Louis#0144: No silly names
Ambient#0001: can be mildly silly (I said in an aside to Louis that there's a fine-line between memorable/funny and wanting it to be taken seriously)
bmk#1476: Systemitized Evaluation
Louis#0144: Automated Coadaptive Eval of Language models
Louis#0144: ACEL
Louis#0144: or drop automated
Louis#0144: CEL
Louis#0144: CELm?
bmk#1476: inCEL
nev#4905: inferred CEL
Louis#0144: What's another phrase for a language model
Ambient#0001: do not under any circumstances name it incel lol
bernaise#6161: Adaptive Loops invoLving You! ALLY
Louis#0144: ACE-LM
Louis#0144: ACE4LM
bernaise#6161: FACEP4LM
Louis#0144: Omg
nev#4905: f = ?
bmk#1476: Foose
bernaise#6161: Feature
bernaise#6161: or Feather ๐คฆโโ๏ธ
nev#4905: Fully Automated Coadaptive Eval
bmk#1476: F stands for FACEP4LM, clearly
bernaise#6161: recursivenym
nev#4905: weak
nev#4905: it's not even double recursive
Louis#0144: Guys focus pls
Louis#0144: Lmao
bmk#1476: FACEP4LM: Automated Coadaptive Eval using Pytorch 4 Language Models
Louis#0144: I wanna get a name that is pronounceable
Louis#0144: That isn't a meme
Louis#0144: That doesn't read like Hitler
bmk#1476: I think FACEP4LM is a great name
Louis#0144: It isn't dependent on pytorch tho
Louis#0144: lol
Louis#0144: It isn't dependent on any single inference api
bernaise#6161: Evaluating Languagemodel Efficacy Using Text from Human Entered Responses: ELEUTHER
Ambient#0001: Evaluation of Language and Embedding Models: Enhancing Natural-Language Transformers (ELEMENT)
Louis#0144: Reads like a paper title
Louis#0144: Not an engineering project tbh
nev#4905: FACEP4LM: Automated Coadaptive Eval Parchitecture 4 Language Models
bmk#1476: all engineering projects are papers if you try
Ambient#0001: > spencer you're too good at naming papers
Ambient#0001: what do you want from me
Louis#0144: HHAHAHA
Louis#0144: Omg
Ambient#0001: ok zoom out
Ambient#0001: do you want it to read like a person, place, thing, animal, process, verb, what
Louis#0144: what
Louis#0144: for sure
Louis#0144: Jkjk
Louis#0144: A verb would be best
Louis#0144: Or a thing
Ambient#0001: ok, next question... other engineering projects that have verb/thing names that you like
bmk#1476: darn are we actually picking a serious name now
bmk#1476: ok I'm out
Ambient#0001: I tried to name it after John Cena but deaf ears
Louis#0144: Not lm eval harness
bernaise#6161: Fast, Anthropocentric Correction and Evaluation of Parameters for Language Models
bernaise#6161: if you like lm eval harness just call it lm eval loop
Louis#0144: I said I don't like it
Louis#0144: Lmao
Louis#0144: I'm looking for a repo I like the name of
Louis#0144: Ok so
Louis#0144: Whatever
Louis#0144: lm-HITL-harness
Ambient#0001: alternative question to help everyone: what _words_ that people have used do you like
Ambient#0001: evaluation, harness, etc
Louis#0144: Definitely needs to contain HITL or something like HITL in the name
Louis#0144: Besides that i don't care much
Ambient#0001: Human Interaction, Preference Learning, Interactive Machine Learning, Expert-Based Learning... unfortunate not a lot of acronym friendly adjacent terms
bernaise#6161: if you want human-in-the-loop but don't like the acronym change it to person and make it PitL
EricHallahan#1051: I feel like the word "harness" has the same function as "manifold" here lol
> After discovering the word "Manifold", O.M.B. Engineers decided it would be best to use it in the name of at least ONE product.
Louis#0144: For Chinese speakers
bmk#1476: Learning to Evaluate from Human Feedback
Louis#0144: My phone keeps autocorrecting human in the loop to hundan in the loop
ari#9020: Finesse Experiments (to) Avoid Towering Human Evaluator Reimbursement
Louis#0144: FEATHER
Louis#0144: omg
Louis#0144: Pls no
Louis#0144: Too much goose
Louis#0144: too much goose
Louis#0144: ๐ญ
Louis#0144: Human in the loop evaluation repository
Louis#0144: Since we were skirting around it
Louis#0144: Just get it out of the way
Ambient#0001: Human And Machine Coadaptive Harness for Effective Evaluation, Steering & Enhancement
HAM-CHEESE
Louis#0144: LMAO
Louis#0144: YWS
Louis#0144: YES
Louis#0144: IM SOLD
Ambient#0001: or just cheese
Louis#0144: OMG
Ambient#0001: you're welcome
Louis#0144: @bmk do you veto
Louis#0144: Naming it cheese
Louis#0144: @félin en fuite
félin en fuite#6720: poh
félin en fuite#6720: ok
félin en fuite#6720: not vegan but ok
Louis#0144: Ok I named it cheese
Ambient#0001: there's vegan cheeses at least
Ambient#0001: there're
Ambient#0001: whatever
sylv#4534: vqgan-cheese
Louis#0144: Yeah
Louis#0144: VQGAN cheese
Louis#0144: Honestly
Louis#0144: If we wanna do eval for gans
Louis#0144: That's solid
Ambient#0001: also works as a verb for people who want to "cheese" their models... "cheese it"
Ambient#0001: "needs some cheese"
Ambient#0001: Im fun at parties
gabriel_syme#3220: this would all be solved if you trained an acronym finder model tbh
nshepperd#2316: ```py
vhat = m2 / (1.0 - self.m2_bias)**2
```
effective lr warmup by just doubling the bias correction :thinkies:
nshepperd#2316: seems to be working
Severine#8325: Hi Everyone! I am part of the AI Nordics community where we are trying to start a Nordic Language Pile (Swedish, Danish, Norwegian, Faroes,...) fully inspired by you here at Eleuther! Some time ago @Ariel Ekgren already mentioned this project and there was mentioned that you maybe could help us out with resources. At this moment we are trying to figure out the resources, so is there still a possibility to talk about this?
Daj#7482: Hey there! What kind of resources are you looking for here?
Severine#8325: Well we are looking for a server or any storage solution, to gather the first data before we distribute it taking into account the legal considerations. We are still a young community so I believe around 100GB should be good.
Daj#7482: the-eye will surely host whatever you want for download if that's what you're looking for
Daj#7482: They host all our downloads
Daj#7482: If you're looking for servers to work on, hetzner has cheap servers to rent
Daj#7482: We could also maybe spare some capacity
Severine#8325: Thank you for this great overview. The-eye will be useful in some months; right now we are setting up the structure for getting the data. I will check out the Hetzner servers!
StellaAthena#3530: @Severine Are you in touch with AI Sweden or the National Library of Norway?
Severine#8325: Hi @StellaAthena yes, I am part of AI Sweden!
Prismane#3728: wait.. so cade metz managed to write a book about history of deep learning without a single mention of GPT models? https://cdn.discordapp.com/attachments/729741769738158194/938061820143497286/Screenshot_2022-02-01-18-50-11-84.png
Orz#3023: I mean
they aren't technically a "history"
*they are the future*
StellaAthena#3530: Oh excellent. I don't think we've spoken before but I've been in touch with Magus and Peter about how we can help y'all out. We are quite excited to do so ๐
Severine#8325: We haven't spoke before, but it is nice to meet you and that is great to hear!๐
tpapp157#3643: I mean there isn't anything too special about GPT-3 other than being really big. And historically, being the biggest is a very temporary title. Doesn't the Microsoft LM hold the title for largest now? I stopped keeping track since those models are useless to me.
mr_seeker#1337: Facebook's MoE said 1.1T parameters, so...
Deleted User#0000: I would disagree with that. In a sequence of growing language models, sure, it's just one along the way. But the ecosystem/cultural/investment/.. impact was massive. It changed a lot of people's views on AI timelines, etc
tpapp157#3643: I disagree with that as well. For people who weren't paying attention, then yeah GPT and its capabilities were surprising. But LMs were already showing those trends for several years beforehand, GPT just suddenly scaled them up by an order of magnitude. I guess you could argue that GPT were the first models to cross some subjective threshold of quality for text generation. But again, ten years from now I'm not sure people are still going to be referencing GPT as a major milestone.
zphang#7252: a better question is: why does performer have its own wikipedia article
mr_seeker#1337: Because someone made one?
CRG#8707: MoE parameters are not real parameters
mr_seeker#1337: You want to start calculating in transformer blocks, thats fine with me too ;)
félin en fuite#6720: does pyfra not have any documentation :wojak_despair:
félin en fuite#6720: just pinging people who used pyfra - can you guys give me a lead on how to work it, or a project that you used it in? i cant find any documentation or longer examples in github.
@bmk @kurumuz @mkualquiera @EricHallahan
bmk#1476: there's docs
bmk#1476: well there are docstrings and I have docs autogenerated from thosr
félin en fuite#6720: it looks empty
félin en fuite#6720: there is an issue on it that has been open for 2 weeks
félin en fuite#6720: https://github.com/EleutherAI/pyfra/issues/28
mkualquiera#3484: ~~bold of you to assume I do any work~~
bmk#1476: oh, the doc build must have broken
bmk#1476: I don't know what's up
félin en fuite#6720: i see...
bmk#1476: you can always read the docstrings in the code while I try to fix the docs build
bmk#1476: pyfra/shell.py and pyfra/remote.py are where the majority of the interesting stuff happens
félin en fuite#6720: thanks
mkualquiera#3484: I got it to compile the docs but I had to do a bunch of weird stuff so I'm not really sure what fixed it
mkualquiera#3484: first it complained about the ``sphinx_rtd_theme`` being unavailable, and then about not finding the packages ``imohash``, ``yaspin``, ``deprecation``
mkualquiera#3484: so I removed the theme and installed those, and now it works
nostalgebraist#3542: anyone know a non-painful way to do model averaging (ema) while using deepspeed's zero?
nostalgebraist#3542: accessing the unflattened master params is not easy, looks like they save the shape info and flat params into checkpoints and then have a script to unflatten them from checkpoints specifically
nostalgebraist#3542: it's my first time using deepspeed and it's been ... interesting. the kind of interesting where you override `half()` to stop it from trying to do groupnorm in fp16
Louis#0144: @nostalgebraist preference learning in DS?
nostalgebraist#3542: diffusion
Louis#0144: Oh ok
alstroemeria313#1694: not really
alstroemeria313#1694: i have a custom optimizer that does the EMA
alstroemeria313#1694: but it will only work with like, plain deepspeed fp16
alstroemeria313#1694: not the zero stuff?
nostalgebraist#3542: i have no idea if deepspeed is a good idea here, just thought i'd try it
alstroemeria313#1694: well i need to make it work at some point for pipeline parallel training
alstroemeria313#1694: so
alstroemeria313#1694: i have it working, really, just not with zero
alstroemeria313#1694: oh no
nostalgebraist#3542: is there a right way to do that?
alstroemeria313#1694: i don't know
alstroemeria313#1694: i have not tried to train anything real with it yet
alstroemeria313#1694: i just made a custom AdamW that keeps an EMA of the params too
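A minimal sketch of that idea (not the optimizer actually being used here): subclass AdamW and fold a parameter EMA into the optimizer state, so the averaged weights live wherever the optimizer state lives.

```python
import torch
from torch.optim import AdamW

class AdamWEMA(AdamW):
    """AdamW that also tracks an exponential moving average of the parameters."""

    def __init__(self, params, ema_decay=0.999, **kwargs):
        super().__init__(params, **kwargs)
        self.ema_decay = ema_decay

    def step(self, closure=None):
        loss = super().step(closure)
        with torch.no_grad():
            for group in self.param_groups:
                for p in group["params"]:
                    state = self.state[p]
                    if "ema" not in state:
                        state["ema"] = p.detach().clone()
                    # ema <- decay * ema + (1 - decay) * p
                    state["ema"].mul_(self.ema_decay).add_(p, alpha=1 - self.ema_decay)
        return loss
```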
nostalgebraist#3542: when you turn on fp16 it just calls `.half()` on the module you gave it https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/engine.py#L997
nostalgebraist#3542: it has a separate amp mode that "doesn't work with zero"
nostalgebraist#3542: unlike apex's O2 it doesn't keep batchnorm in fp32 (afaik?), much less groupnorm
alstroemeria313#1694: ah. i am not using that, just the normal fp16 mode
alstroemeria313#1694: i could not make the amp mode work easily
alstroemeria313#1694: i have done a lot of training in like, normal pytorch amp
alstroemeria313#1694: and i'm used to that
nostalgebraist#3542: yeah, i prefer that
nostalgebraist#3542: i did get its amp mode working after i installed apex
alstroemeria313#1694: ahh
alstroemeria313#1694: yeah
alstroemeria313#1694: i didn't have apex
nshepperd#2316: i am continuing to experiment with different variations of ESGD-M to try and make it faster without loss spikes
nshepperd#2316: one that seems possibly promising is a version based on infinity norm
nshepperd#2316: what i actually tried is the thing were you keep only a scalar per weight tensor
nshepperd#2316: but instead of averaging the H^2 over the tensor, using the max
alstroemeria313#1694: ohh
alstroemeria313#1694: but isn't that overly conservative?
nshepperd#2316: yeah, probably
alstroemeria313#1694: bc the variance goes both ways
alstroemeria313#1694: it estimates too low but also too high
alstroemeria313#1694: right?
nshepperd#2316: yeah
nshepperd#2316: maybe i can increase the lr to compensate
nshepperd#2316: ...unless that just leaves me back where i started in term of spikes
nshepperd#2316: not too sure whether the stats work out like that
nshepperd#2316: mean + 1*std would be less extreme
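To make the reduction options concrete, a toy sketch (the names and eps are made up; in the actual optimizer this quantity is folded into an EMA with beta2 rather than used raw):

```python
import torch

def per_tensor_D(hess_diag_est, mode="mean", eps=1e-8):
    """Collapse a stochastic Hessian-diagonal estimate for one weight tensor
    into a single scalar preconditioner D."""
    h2 = hess_diag_est.pow(2)
    if mode == "mean":
        d2 = h2.mean()                 # average curvature, less conservative
    elif mode == "max":
        d2 = h2.max()                  # infinity-norm style, most conservative
    elif mode == "mean_plus_std":
        d2 = h2.mean() + h2.std()      # the in-between option mentioned above
    else:
        raise ValueError(mode)
    return d2.sqrt().clamp(min=eps)
```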
nshepperd#2316: huh wait, with beta2=0.999 surely a loss spike couldn't be due to just a low estimate of D in a single step
nshepperd#2316: like even if it was 0 it would be lerped and would just increase the step size by 1/0.999
nshepperd#2316: that one has to have been a gradient spike
nshepperd#2316: maybe i should just clip the grads then
nshepperd#2316: ah right it's because unlike adam, grad spikes don't result in denominator spikes
nshepperd#2316: because the D is not directly computed from the grad
gabriel_syme#3220: anyone knows if the stanford mistral checkpoints now work with flax/tpus?
gabriel_syme#3220: nvm seems not
krigeta#6645: Hello, Any blogger here, may someone explain or guide me on how can I use the "EleutherAI" for blogging? like keywords and title writing or more?
StellaAthena#3530: This is not a very good place to get help with this. I would check out the HuggingFace discord in #communities instead
krigeta#6645: Aaye aaye captain ๐
krigeta#6645: Not able to find the HuggingFace discord in the #communities , and the last invite is invalid I guess that was it
StellaAthena#3530: RIP, I'll reach out to them for a new one
Kazumi#1297: I don't have an invite permission there, and can't find it in their channels either, huh
krigeta#6645: Kewl
aูด#8803: :goose: :goose9:
Kazumi#1297: found it
https://discord.gg/HF7DU4J2
Omar Sanseviero#6198: Sorry for that, we have one link which should always work (and it's back up again) which is http://hf.co/join/discord
StellaAthena#3530: This has been relinked in #communities
Omar Sanseviero#6198: Thanks Stella!
gdawg16#0493: YOU RLY DID IT YOU CRAZY GEESE. CONGRATS
EricHallahan#1051: i don't think i'm a goose tho
Louis#0144: Ye he isn't referring to you
Louis#0144: ๐
Louis#0144: Geese only
EricHallahan#1051: Not here please.
Chaminou#1384: Short but important rant: I think that treating language as a modality like vision or sound is a mistake that will be really hard to undo. The difficulty will be proportional to how much language models fool us. And right now, we are absolute fools.
NOW, ROAST ME !
guac#4716: you might want to check out #communities for beginner materials
iwearapot#5464: Does DeepSpeed MP work with non-identical GPUs (e.g. 3080+3090)?
Louis#0144: No
Louis#0144: Well
Louis#0144: It shouldn't
Kren#7701: New to the community, one of the main rules is no beginner questions which is fair. Where should I take my beginner questions other than huggingface??
Louis#0144: Yannics server
Louis#0144: #communities
krigeta#6645: thank you so much
gabriel_syme#3220: Is anyone here working on the 3D space? Would anyone be interested in a fun collaboration around that area, if I was able to generate paired data (2D-3D)? Just curious
uwu1#4864: @jenny.hu and I were thinking of trying to do text -> 3d game assets but thought 2d would be a lot less work
uwu1#4864: what kind of paired data are you making?
gabriel_syme#3220: hmm well I can make a bunch of things, but the easiest would be layouts lol, although those can be any type more or less (e.g. dungeon crawler)
gabriel_syme#3220: since I already have both 2d versions and a model that generates more. But we could also do assets procedurally (e.g. generating and placing 3d objects inside 3d structures for example)
gabriel_syme#3220: But idk open to anything, I want to get into it as well
uwu1#4864: yeah in my mind the unknown part is the animations/game aspect of what you do with the generated assets. Although it could also just be like the Everything game where everything has the same animation haha
uwu1#4864: the generating levels and level assets could be really fun too if one can think of a game that could work in them. Like an FPS in latent space
kad99kev#0514: I am currently looking into evolutionary strategies (evolving architectures/hyperparams) for NLP tasks. Does anyone have any suggestions (your favourite paper/research/anything really)?
iwearapot#5464: This area of text -> procedural assets is an interest of mine
nev#4905: I have a little dataset and some experience with clip-guided 3D
nev#4905: I want to try something with simultaneous 2D+depth generation like NYD or imagenet with automatic depth
starostap#6780: are we pre-training any Codex-style model for code generation atm?
Daj#7482: No one at EAI is working on that atm, no
nev#4905: you might want to check out the code.ai server
gabriel_syme#3220: Sorry, I had to go to bed actually lol. Yeah that is quite interesting. People from my lab have worked on fps, generating weapons with preference learning. Generating environments would be as if not more amazing.
gabriel_syme#3220: Cool! Which way would you start? Would it be some clip guided method?
nev#4905: for scene generation a systematic approach would be needed, and with a 3D labeled depth+segmentation+text dataset it's possible to evaluate anything
nev#4905: so probably that first - there is some existing work in that i believe
nev#4905: both street laser scans and room maps are abundant
gabriel_syme#3220: Yeah there is the latest matterport dataset as well for those
nev#4905: yeah it's p good
then you need to decide if it will include 3d shape generation as well - there are approaches for that, but not many text-guided ones
nev#4905: there *are* datasets for shape generation and dream fields showed that clip can make ok chairs
nev#4905: that's probably too far off from the actual thing
nev#4905: tbh the way i would go with a set of 360 scans is to make a diffusion model that can take multiple viewpoints and generate simultaneously
gabriel_syme#3220: yeah not sure what level to start, I guess shape generation might be interesting but it's been done a lot
nev#4905: with the current state of the models
gabriel_syme#3220: diffusion huh
nev#4905: it's easy to train, doesn't run into composition issues and is simple to guide
gabriel_syme#3220: was thinking about implicit modelling, but again I don't have much experience in this area
gabriel_syme#3220: like the lineage of occupancy networks and such
nev#4905: ah
gabriel_syme#3220: but diffusion might be better huh
nev#4905: right that's an option
nev#4905: with the recent 3d stylegans and such
nev#4905: i want to look into image space modeling because you can extend it to slam and tracking
nev#4905: (is that the right term? :p)
gabriel_syme#3220: not sure
gabriel_syme#3220: going through this now, have not finished: https://youtu.be/-RsTgHlwhmw
gabriel_syme#3220: although he's still at 2020
nev#4905: also iteration time needs to be considered
nev#4905: with vector gpt how long would it take to train a reasonable model?
starostap#6780: might be a valuable model to work on.. at least for me lol
gabriel_syme#3220: it's super fast
gabriel_syme#3220: depends on the dataset ofc but you can probably have smth generating on 2d in hours
uwu1#4864: IMO targeting the instant NGP repr makes sense mb just for how much faster you could iterate
gabriel_syme#3220: pretrained models too powerful
gabriel_syme#3220: yeah I was thinking hashing / NGP along with implicit modelling stuff
gabriel_syme#3220: idk I guess depends on the output representation one wants as well
nev#4905: will hash encoding work with conditioning?
uwu1#4864: one could also explicitly play with the 2D -> 3D -> 2D -> 3D ... process as part of the game/art piece
nev#4905: or meta-learning
gabriel_syme#3220: hmm
gabriel_syme#3220: what about hash-based occupancy networks
gabriel_syme#3220: they do conv ones, might work
gabriel_syme#3220: (lol spitting out words really, not sure)
gabriel_syme#3220: he got on the interesting stuff, scene based stuff is what I'm mostly into I guess (although starting with single objects is fine too) https://cdn.discordapp.com/attachments/729741769738158194/938912946090803200/unknown.png
uwu1#4864: what was that paper a few days ago about doing text to 3D just using 2d supervision?
gabriel_syme#3220: ofc in that case, you could also just do what NVIDIA did, kind of like architext for scene construction where you only predict bboxes for primitives and then maybe have another model to make those primitives. Is that the game you're talking about, doing it progressively?
gabriel_syme#3220: let me check my folder
nev#4905: 2d?
nev#4905: clip?
nev#4905: dream fields was the og
nev#4905: https://ajayj.com/dreamfields
gabriel_syme#3220: ah dreamfields
nev#4905: i got something similar working with voxels
nev#4905: maybe with pulsar you can get point clouds
gabriel_syme#3220: I also liked the functa paper
gabriel_syme#3220: generally for this domain implicit stuff have advantage imo
gabriel_syme#3220: yeah you need to sample to generate but the representation is much more compact I guess and flexible
uwu1#4864: I think doing it progressive or at least split between scene and stuff allow the most for incorpating interaction. Butt I also haven't seen much work on doing "neural building 3d scans" I guess because the current approach seems to work pretty well
uwu1#4864: game I was referencing: https://en.m.wikipedia.org/wiki/Perspective_(video_game)
nev#4905: eh it's a pain to export in any reasonable way :harold:
uwu1#4864: one benefit of voxel/SDF output is you can do precise collision detection for cheap
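As a toy illustration of that point (the shapes and radii are made up): with an SDF the collision test is just a function evaluation.

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    # signed distance to a sphere: negative inside, positive outside
    return np.linalg.norm(p - center) - radius

def collides(sdf, point, r=0.0):
    # a ball of radius r centered at `point` touches the surface iff sdf(point) < r
    return sdf(point) < r

print(collides(sphere_sdf, np.array([0.5, 0.0, 0.0]), r=0.25))  # True
```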
nev#4905: also there is https://arxiv.org/abs/2104.03670 and other direct 3D diffusion and GAN models
nev#4905: which can be a useful prior
gabriel_syme#3220: paper still loading, probably 100mb
uwu1#4864: https://codeparade.itch.io/marblemarcher
https://ubm-twvideo01.s3.amazonaws.com/o1/vault/gdc2018/presentations/Aaltonen_Sebastian_GPU_Based_Clay.pdf
another fun thing you can do with SDFs is drive GPU accelerated particle systems, allowing you to call up dust into the form of your thing e.g or other fun effects: https://twitter.com/aman_gif/status/1100187796302635008
gabriel_syme#3220: ok this is cool but can it not be point clouds?
gabriel_syme#3220: can we do diffusion on the SIREN embeddings functa was doing for e.g.?
anthony_fuller#1075: Just about to setup PyTorch training for some TPUs, is Lightning the way to go?
uwu1#4864: diffusion gives the best low res preview and stuff so far right
uwu1#4864: or if not siren at least diffusion in the Fourier or some other globally supported basis
cfoster0#4356: Only one way to find out
cfoster0#4356: Err I guess two. The other way is to wait a few months
nev#4905: yeah ofc!
nev#4905: but you can't directly combine that ig
nev#4905: so if you make multiple scenes with the siren diffusion you can't merge them
gabriel_syme#3220: So my perspective on the process has always been hierarchical for this task of...'generating worlds', whatever they may be (a game map, a scene, a city, an apartment, etc.).
i) a model that generates the layout: editing and adjustments are welcome here ofc, at the global level of connections, adjacencies, etc.
ii) a model that generates the general arrangement of things inside (this can also be the previous model). It can be where to place furniture, or enemy placement according to difficulty and room type, etc. Again editing here is welcome
iii) another set of models that generate the individual assets being arranged / placed in the scene.
Not sure if I'm missing stuff, I'm sure I am. Things like the adaptation of perspective for e.g. is interesting.
gabriel_syme#3220: p.s. every editing from a human designer is a reward signal and allows for more fun stuff (also making things more tricky lmao)
nev#4905: the hierarchical approach is really interesting - @ColtonD had an aerial view dataset
gabriel_syme#3220: NVIDIA recently did smth like Architext where they took layouts and generated furniture arrangements
nev#4905: it can be images all the way down
gabriel_syme#3220: it'd be cool to do the latter with another model lol
gabriel_syme#3220: yeah
gabriel_syme#3220: and it's really how designs are made tbh, in this step wise manner. ofc when you have all those parts you could also maybe generate things in one step idk
nev#4905: or iteratively like diffusion
nev#4905: lol
gabriel_syme#3220: yeah
nev#4905: but there's a lot to think about
gabriel_syme#3220: like the GLIDE example
gabriel_syme#3220: yeah lol, ehm too many. Where to start is probably the right question
gabriel_syme#3220: I wanted initially to get involved with (iii) so asset generation of some kind
nev#4905: https://poly.pizza has some good models with potential for a dataset
gabriel_syme#3220: I feel a lot of the rest can be done with AR models
nev#4905: but we'd need to talk it over with the author
nev#4905: who scraped google poly
nev#4905: most of them are cc0
ColtonD#8375: it was actually a heightmap dataset
gabriel_syme#3220: cool
gabriel_syme#3220: and do we do 2D image -> 3D shape? I guess right
ColtonD#8375: yep!
ColtonD#8375: process used a lot actually
gabriel_syme#3220: aha that's the dataset I made to train my first CFD prediction model, city heightmaps
ColtonD#8375: just cuz its way simpler for programs to simulate erosion and other terrain stuff on a 2d heightmap than a 3d mesh thing
ColtonD#8375: cuz really unless ur getting very specific or high detail/small, all that matters is vertical detail
gabriel_syme#3220: yeah same with CFD / wind
gabriel_syme#3220: turns out it works amazingly well, you're right
gabriel_syme#3220: I never did erosion, interesting
ColtonD#8375: oh interesting, my dataset is larger scale, and taken specifically from areas with dramatic terrain. probably wouldnt even see a city on it if it was there.
gabriel_syme#3220: @uwu1 would 2d->3d as a first step be interesting for you?
ColtonD#8375: it was mainly for training an upscaler
gabriel_syme#3220: ohhh cool
ColtonD#8375: i havent actually used it for anything else yet
ColtonD#8375: i might try finetuning rudalle on it though
gabriel_syme#3220: wonder if u can do that for flooding
ColtonD#8375: the issue is that for terrain heightmaps. if its an 8 bit output, its basically unusable
ColtonD#8375: need much more height resolution. 16 bit generally
ColtonD#8375: and since 16 bit images are rarely needed and take up more space almost nothing uses them lmao
ColtonD#8375: i had to modify a lot of stuff to get my trained esrgan to use 16 bit information in the upscale
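For anyone trying the same thing, the 16-bit round trip itself is straightforward with PIL and numpy; the file names below are placeholders and the model call is elided, this is just the I/O:

```python
import numpy as np
from PIL import Image

# read a 16-bit grayscale heightmap and normalize to float32 in [0, 1]
img = Image.open("heightmap.png")  # hypothetical file, mode "I" or "I;16"
arr = np.asarray(img, dtype=np.float32) / 65535.0

# ... run the upscaler / generator on `arr` here ...
out = np.clip(arr, 0.0, 1.0)

# write 16 bits back out instead of letting the pipeline quantize to 8 bits
Image.fromarray((out * 65535.0).astype(np.uint16)).save("heightmap_out.png")
```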
uwu1#4864: satellite imagery also has the extra fun multispectra data
uwu1#4864: and radar and whatnot too
gabriel_syme#3220: can do some cool stuff with that ye
gabriel_syme#3220: I used to try find ships, looking for pirates
uwu1#4864: can get material params etc. And then combine it with demographics... i did a model once inverting the census statistics -> imagery mapping for NYC, def kind of worked
uwu1#4864: 2d to 3d would be cool but I dont want it to look like just a relief/2.5D/have a preferred "front angle" to look from. Sculpture vs installation vs diorama
gabriel_syme#3220: would you like it to be fully 3d?
uwu1#4864: but the massive wealth of 2D data and easy to manipulation and abstraction mean the 2D->3D component will be really important in any outcome
ColtonD#8375: in something much larger scale but related, about a year ago i found a semi abandoned project that simulated tectonic plates and landmass movement, breakup and all that jazz, as well as some basic erosion and climates for an entire world. it had some pretty nice outputs, but again its entire planet scale and isnt really all that usable for much more than making semi realistic alternate world maps
ColtonD#8375: i mean heightmaps can be converted to meshes very easily
ColtonD#8375: its the most common workflow
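A minimal sketch of that conversion (one vertex per pixel, two triangles per grid cell; the scale factors are arbitrary):

```python
import numpy as np

def heightmap_to_mesh(h, xy_scale=1.0, z_scale=1.0):
    rows, cols = h.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    # one vertex per pixel: (x, y, height)
    verts = np.stack([xs * xy_scale, ys * xy_scale, h * z_scale], axis=-1).reshape(-1, 3)
    # two triangles per grid cell
    idx = np.arange(rows * cols).reshape(rows, cols)
    a, b, c, d = idx[:-1, :-1], idx[:-1, 1:], idx[1:, :-1], idx[1:, 1:]
    faces = np.concatenate([
        np.stack([a, b, c], axis=-1).reshape(-1, 3),
        np.stack([b, d, c], axis=-1).reshape(-1, 3),
    ])
    return verts, faces

verts, faces = heightmap_to_mesh(np.random.rand(64, 64))
print(verts.shape, faces.shape)  # (4096, 3) (7938, 3)
```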
uwu1#4864: or a combination. I guess I just don't want to clone RGB -> RGB-D work mostly. the beauty of generating "assets" is that you can have a world to walk around in (even if just a 2.5D one)
gabriel_syme#3220: definitely
uwu1#4864: oh I meant like the raster 2D data source the 3D heightmap data was inferred from
uwu1#4864: Or do you mean figuring out overhangs/occlusions and meshifying the heightmap data
gabriel_syme#3220: dang I never managed overhangs :berk:
gabriel_syme#3220: just ignored they exist
ColtonD#8375: oh light depth estimation on a satellite image?
ColtonD#8375: as opposed to starting with generating a heighmap and then a the texture based on that?
ColtonD#8375: i feel like the second option might be a better approach
ColtonD#8375: but who knows honestly this is all very speculative lol
gabriel_syme#3220: okay, what if I start a document that catalogs different approaches for shape / 3d generation, along with code (if there), a table of how they score on usual benchmarks, and a list of datasets?
Then, if people are interested, we can talk about what methods seem interesting? Also probably will add where each one can be trained heh. TPUs would be cool but not sure I've seen a lot of this happening there.
ColtonD#8375: that would be cool!
uwu1#4864: yeah if brax on tpu can be so good maybe rendering can be cool there too
ColtonD#8375: i actually haven't seen much in the realm of diffusion, dalle, GAN or anything in that realm being used to make heightmaps for 3d, which surprises me a bit given how easy it should be to modify existing stuff for that (also btw when im talking about heightmaps, they're really only useful for the large scale stuff)
ColtonD#8375: https://youtu.be/NEscK5RCtlo
gabriel_syme#3220: aha he got into interesting domain now https://cdn.discordapp.com/attachments/729741769738158194/938924513524088902/unknown.png
ColtonD#8375: oh! i forgot about this one
uwu1#4864: that would be awesome :)
uwu1#4864: true we have like literally the whole earth as a heightmap <> electromagnetic radiation dataset going back decades ๐ค
gabriel_syme#3220: why are all the papers he is presenting from 2020
ColtonD#8375: yeah there is all the data you could possibly need out there
gabriel_syme#3220: there was a cool project in the last game jam I was in that did simple GANs to 3d landscape generation, was cool
random person#5234: oooohhh
gabriel_syme#3220: you could use mouse clicks to do it
random person#5234: CFD
uwu1#4864: actually if you could massively improve the resolution of image to heightmap for ocean data that would be huge
nev#4905: I think a triangle renderer for brax doesn't exist yet ๐ค
random person#5234: my UG was in MechE so I actually know that stuff fairly well
ColtonD#8375: does anyone know if there is a reasonable way to get Rudalle, or diffusion to generate something in 16 bits? cuz i might try terrain generation with them
random person#5234: I have never seen DL used for CFD
random person#5234: like, modelling RANS with DL approximation would be fairly hard
gabriel_syme#3220: there are PINN models for proper CFD simulations
gabriel_syme#3220: they do exactly that modelling, physics informed
nev#4905: i think someone here did that
ColtonD#8375: (if i can finetune them that is, cuz theres no way something trained on regular images would understand heightmaps enough with just prompts)
ColtonD#8375: oh?
nev#4905: with generated data
uwu1#4864: not too hard to do differentiable rendering although Im not sure how you'd get the mesh from the repr while keeping the grad connected
ColtonD#8375: :knoht:
random person#5234: I would imagine the problem would be hard to modularize
random person#5234: like for example, if I were to setup a sim on ANSYS Fluent for subsonic uses
gabriel_syme#3220: they have huge success, NVIDIA actually showcased them in their CEO's last talk
gabriel_syme#3220: but my approach predicts static cfd results, cheating a bit but very useful in practice (no one in my field does transient more or less)
uwu1#4864: I already have a JAX based SDF renderer laying around somewhere, although it didn't end up being faster than GLSL on my gpu
random person#5234: I see. so is this incompressible stuff?
nev#4905: a good starting point is this but with heightmaps https://www.youtube.com/watch?v=oXUf6anNAtc
gabriel_syme#3220: I'm not sure about the use cases, I think it covers everything tbh
gabriel_syme#3220: this is what they made with it, they renamed to modulus: https://developer.nvidia.com/modulus
random person#5234: hmmm.... well for traditional CFD, you usually have a lot of different numerical equations you can pick
random person#5234: and for different flow conditions you need different solvers
random person#5234: thanks for sharing @gabriel_syme I am gonna take a look!
random person#5234: I would imagine as the title says, its more on the side of PDE solvers
random person#5234: vs trying to abstract entirely the physic solver away
gabriel_syme#3220: oh this one was interesting, name reminded me of the paper! https://cdn.discordapp.com/attachments/729741769738158194/938925987767402516/unknown.png
gabriel_syme#3220: ok I'll start the document lol
gabriel_syme#3220: composition is also another important aspect in general
random person#5234: lol honestly if someone can make a decent DL CFD solver, probably make hundreds of millions
random person#5234: *if the result is actually on par with actual traditional CFD solvers
gabriel_syme#3220: yeah NVIDIA is very close
gabriel_syme#3220: our approach also works quite nicely and we're trying to spin it off now
ColtonD#8375: do you think the next big step for 3d object generation is to gen 2d things using diffusion and related things, and then have a separate model that can create a 3d mesh based on that? or something that generates things directly in 3d space? i mean bit of a dumb black and white question cuz itll almost for sure be a mix of those two things and probably something else too
gabriel_syme#3220: I like 2d to 3d personally
random person#5234: I mean, the problem is, traditional CFD solver are very iterative based for grinding down the PDEs numerically
random person#5234: its just so slow
ColtonD#8375: i would agree
nev#4905: 2d, then based on that more 2d views
nev#4905: and then 3d
gabriel_syme#3220: we can easily generate 2d stuff, even with AR models
nev#4905: is how i see it
random person#5234: and frankly half of the time results is very meh
ColtonD#8375: yeah cuz 2d stuff is moving really fast in terms of quality and just amount of focus on it
gabriel_syme#3220: yeah, my approach just predicts end state
uwu1#4864: I think generating in 3D and having 2D model do the critique?
random person#5234: what kind of flow condition does it work on? Are you predicting the velocity and pressure gradient?
ColtonD#8375: i mean right now that seems like the most obvious approach that is used
random person#5234: how does it work with length scale?
gabriel_syme#3220: just wind velocity, on static incompressible without heat etc.
gabriel_syme#3220: simplified scenarios that are typical for urban wind
random person#5234: I see
gabriel_syme#3220: oh this is interesting
nev#4905: with this approach you can generate without gradients - like evolution
nev#4905: like dream fields
uwu1#4864: I just can't imagine monocular 2D image to 3D working well beyond "2.5D" data like depth maps or heightmap I guess. or in general "diorama" scenes
gabriel_syme#3220: this is it, deployed within a 3D cad modelling software, predicting comfort in real time
https://www.youtube.com/watch?v=iSVYsLWI5oM
random person#5234: @gabriel_syme make sense, this looks somewhat similar to ansys discovery
uwu1#4864: but! it seems silly to waste the 2.5 data we could get for free from 2D generation as e.g multiview starting points
nev#4905: if we could make 2D with a depth map simultaneously that would be great
gabriel_syme#3220: can it be a back and forth?
nev#4905: or a multi-view image
ColtonD#8375: i just think you get a leg up if you start with 2d and can figure out how to make it 3d. There is already so much out there in terms of existing methods, amount of focus and people working on it, and data.
tpapp157#3643: If you've learned strong priors through pretraining it's certainly doable, in the same way that with CLIP you can go from text to image despite text only specifying a miniscule fraction of the data shown in an image.