StellaAthena#3530: @EricHallahan Are you asking about the order papers should be listed at https://www.eleuther.ai/publications/ or something else
EricHallahan#1051: Yes
StellaAthena#3530: IDK
StellaAthena#3530: I omit dates for preprints on my website. Though that's not for any particularly principled reason, it does allow me to unobtrusively organize them however I wish https://cdn.discordapp.com/attachments/729741769738158194/933745368921948160/Capture.PNG
Louis#0144: Need name suggestions for a new eleuther api
Louis#0144: lm-eval-harness but for human in the loop evaluation
Louis#0144: anyone have ideas?
Louis#0144: cc @félin en fuite @guac
félin en fuite#6720: h
Louis#0144: ok
65536william#9999: how about co-eval
félin en fuite#6720: HOLE
Louis#0144: can u not felin
Louis#0144: pls
félin en fuite#6720: human (in the) loop evaluation
Louis#0144: i like this though
Louis#0144: thats solid
65536william#9999: nothing like a nice pun
Louis#0144: is co-eval a pun
Louis#0144: o.o
Louis#0144: how
65536william#9999: https://cdn.discordapp.com/attachments/729741769738158194/933756816939163778/Screenshot_2022-01-20_at_16.16.10.png
Louis#0144: i dont see how thats a pun for human in the loop
Louis#0144: LOL
65536william#9999: the evaluation the human does is contemporary with the automated evaluation?? a stretch maybe
Louis#0144: yeah a stretch
Louis#0144: lol
Louis#0144: sorry
Louis#0144: :berk:
65536william#9999: haha all good
65536william#9999: i loved it more in my head
ari#9020: Coney Island is famous for having one of the earliest implementations of humans in the loop: <https://en.wikipedia.org/wiki/Loop_the_Loop_(Coney_Island)>
Louis#0144: lol
Louis#0144: EleutherAI/coney-island
Louis#0144: im kinda down?
Louis#0144: lets be clear tho lm-eval-harness is kinda a zero effort name ngl
Louis#0144: adaptive-hitl-eval
Louis#0144: ?
félin en fuite#6720: I like that
StellaAthena#3530: I read this and immediately thought "hitler"
Louis#0144: lmao
Louis#0144: ok
Louis#0144: bad idea
Daj#7482: ```python
import hitler_eval
hitler_eval(hitler)
>>>> "bad"```
Louis#0144: it deploys to a turker for this answer
Louis#0144: each time you need to find out if hitler is bad
Louis#0144: its about $0.50
Daj#7482: Amazing, I remember back in the day this could take hundreds of dollars
Daj#7482: Incredible how algorithms and packages have improved
Louis#0144: yeah back in the day they had to phone an empath
Louis#0144: theyre v expensive
Louis#0144: nowadays theyre a dime a dozen bc everyone on twitter thinks theyre one
65536william#9999: hitl-evil
Louis#0144: lmao
Daj#7482: ```Improvements over hitl_eval:
* Implements approximate hitl_eval by hardcoding most common output, "bad" (>99% approximation accuracy)```
Louis#0144: chad
Louis#0144: waiting for someone to do LM eval with Cyc
everyome#0987: I love coney dogs
Louis#0144: i love how easy magicarp is
Louis#0144: it took like
Louis#0144: 2 hours to implement coop lol
Louis#0144: before without magicarp it was multiple days
𓅬 gabriel_syme 𓅬#3220: I liked hole better
Louis#0144: you guys are SICK
Louis#0144: wtf
𓅬 gabriel_syme 𓅬#3220: if it makes you feel better, we figure out a word starting with 'w' and we make it whole again
Louis#0144: thanks
alstroemeria313#1694: has anyone here ever tried ESGD? https://proceedings.neurips.cc/paper/2015/file/430c3626b879b4005d41b8a46172e0c0-Paper.pdf
alstroemeria313#1694: i mean it requires a hessian-vector product eval per gradient eval so it's kind of expensive
alstroemeria313#1694: and it's maybe not as good as adam, idk
alstroemeria313#1694: (I have coded a variant of it that has Adam-type EMAs of the gradient for momentum and of the squared Hessian diagonal estimate, doing this improves over the original one in the paper by *a lot*)
alstroemeria313#1694: however it is maybe still not as good as adam
alstroemeria313#1694: it has some nice properties though
alstroemeria313#1694: like if you reparameterize by scaling all your parameters by the same value it is invariant to this scaling
alstroemeria313#1694: *and* if you scale the loss by some value it is invariant to that scaling.
alstroemeria313#1694: Whereas Adam is only invariant to the second one.
alstroemeria313#1694: ...is there some way we could incorporate an EMA of squared gradients into it too.
alstroemeria313#1694: like how would you combine the two diagonal preconditioners
alstroemeria313#1694: So with SGD. If you scale your parameters by some value you have to scale your lr by that value squared, and if you scale the loss you have to scale your lr by the inverse of that value.
alstroemeria313#1694: With Adagrad/Adam/etc if you scale your parameters by some value you have to scale your lr by that value, and if you scale your loss you don't have to scale your lr.
alstroemeria313#1694: With ESGD it is invariant to both of these.
alstroemeria313#1694: (ESGD does not seem invariant to general diagonal rescalings i.e. different rescalings for different parameters, but it does way better than Adam would, like it is able to partially adapt still)
alstroemeria313#1694: (Whereas a full second-order method involving explicit computation of the Hessian would be invariant to any reparameterization that is a multiplication by a full rank matrix, right?)
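A quick numerical check of that SGD parameter-scaling rule (toy quartic loss; the constant c and the values are arbitrary, not from the thread):
```python
import torch

def loss(x):
    return (x ** 4).sum()  # any non-quadratic loss works for this check

c, lr = 10.0, 0.01

# one SGD step in the original parameterization
x = torch.tensor([1.5, -0.7], requires_grad=True)
loss(x).backward()
step_x = -lr * x.grad

# same problem written as x = c * y; the lr must shrink by c**2
y = (x.detach() / c).requires_grad_()
loss(c * y).backward()
step_y = -(lr / c ** 2) * y.grad

print(torch.allclose(step_x, c * step_y))  # True: identical step in x-space
```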
nshepperd#2316: huh
alstroemeria313#1694: idk i feel like there's some sort of potential here
alstroemeria313#1694: I think AdaHessian does some of the stuff I did
alstroemeria313#1694: Though *they don't mention ESGD in their paper*
nshepperd#2316: this kind of looks like you could do the Hessian-vector product only every few gradient steps and it will still work? https://cdn.discordapp.com/attachments/729741769738158194/933872181778198558/Screenshot_20220121-105349.png
alstroemeria313#1694: They use a similar trace estimator
alstroemeria313#1694: yeah probably
nshepperd#2316: so you could do it with pretty minimal overhead
alstroemeria313#1694: and ESGD as written is bad
alstroemeria313#1694: You really need to add momentum and change the simple average of the squared Hessian diagonal to an EMA
alstroemeria313#1694: For it to be at all competitive
nshepperd#2316: like just do Adam but get the second moment from this Hessian vector product instead of the gradient
alstroemeria313#1694: Yes
alstroemeria313#1694: That is what I have been poking at
nshepperd#2316: which also suggests doing the "keep only a scalar per tensor" thing
nshepperd#2316: for less variance
alstroemeria313#1694: yeah
alstroemeria313#1694: adahessian reduces over some dimensions to reduce variance
alstroemeria313#1694: like for 3x3 convs it pools over the 3x3 i think
alstroemeria313#1694: trying to see what preconditioner adahessian actually uses
alstroemeria313#1694: Like is it Jacobi or the special ESGD preconditioner which they say is better.
alstroemeria313#1694: The ESGD one is saddle point repelled
𓅬 gabriel_syme 𓅬#3220: question: is there a principled way to get equivariant versions of a noise input?
nshepperd#2316: > jvp(grad(loss_fn), [params], [v])[1]
nshepperd#2316: ah, this is how you do a hvp in Jax
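For reference, a self-contained sketch of that forward-over-reverse hvp, checked against the double-backward form (toy loss and values, not from any project here):
```python
import jax
import jax.numpy as jnp

def loss_fn(params):
    return jnp.sum(jnp.sin(params) ** 2)  # stand-in objective

params = jnp.array([0.3, -1.2, 2.0])
v = jax.random.normal(jax.random.PRNGKey(0), params.shape)

# forward-over-reverse: jvp of the gradient function gives H @ v
hvp_fwd = jax.jvp(jax.grad(loss_fn), (params,), (v,))[1]

# reverse-over-reverse ("double backward") version of the same thing
hvp_rev = jax.grad(lambda p: jnp.vdot(jax.grad(loss_fn)(p), v))(params)

print(jnp.allclose(hvp_fwd, hvp_rev))  # True
```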
alstroemeria313#1694: Yeah theirs is Jacobi.
alstroemeria313#1694: Wow...
nshepperd#2316: it's just a combination of backward and forward?
alstroemeria313#1694: Not only did they not cite ESGD, they weren't even *aware* of it bc ESGD shows theirs is better than the Jacobi preconditioner.
alstroemeria313#1694: (you don't actually *want* to precondition with the Hessian inverse for non-convex problems, it is attracted to saddle points and maxima!)
alstroemeria313#1694: (i.e. zeros of the gradient)
alstroemeria313#1694: (this is why you can't use an unmodified Newton's method for non-convex problems and have to do something that can deal with negative eigenvalues of the Hessian like a BFGS variant)
alstroemeria313#1694: so what do they *actually* do
alstroemeria313#1694: does adahessian discuss what the "Hessian power" parameter does
alstroemeria313#1694: Ah, so they square their Hessian diagonal estimate, EMA it, then sqrt the EMA
alstroemeria313#1694: This avoids some of the attraction toward maxima and saddle points bc it is not allowed to change the signs of the step.
alstroemeria313#1694: But ESGD's is still better
alstroemeria313#1694: ESGD doesn't estimate the Hessian diagonal, it estimates the diagonal of the *squared* Hessian and sqrts it
alstroemeria313#1694: The Hessian squared is positive semidefinite
alstroemeria313#1694: > Negative eigenvalues will reduce the total sum and make the step much larger than it should. Specifically, imagine a diagonal element where there are large positive and negative curvature eigendirections. The contributions of these directions will cancel each other and a large step will be taken in that direction. However, the function will probably also change fast in that direction (because of the high curvature), and the step is too large for the local quadratic approximation we have considered.
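The identity this relies on: for v ~ N(0, I), E[(Hv)_i^2] = Σ_j H_ij^2 = (H^2)_ii, so the squared hvp is an unbiased estimate of the diagonal of the squared Hessian. A tiny Monte Carlo check with a random symmetric matrix:
```python
import torch

torch.manual_seed(0)
H = torch.randn(5, 5)
H = (H + H.T) / 2  # symmetric, like a Hessian

# average of (Hv)**2 over many v ~ N(0, I) vs. the exact diag(H @ H)
est = torch.stack([(H @ torch.randn(5)) ** 2 for _ in range(100000)]).mean(0)
print(est)
print(torch.diag(H @ H))  # agrees to within Monte Carlo noise
```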
Some Point Process#3793: aren't hessians equivalent to second order taylor expansions? (meaning that if the *loss function* is quadratic, it would not get stuck at saddle points?
alstroemeria313#1694: if the loss is quadratic there aren't saddle points
Some Point Process#3793: I'm confused as to how invariance to scaling loss/lr/parameters fits into this but I haven't read that paper
Some Point Process#3793: Oh yeah
Some Point Process#3793: Well how about if the loss surface induced by the parameters is a quadratic surface?
alstroemeria313#1694: yeah that's what i meant
alstroemeria313#1694: no saddle points
alstroemeria313#1694: well the problem is that preconditioning the gradient with the Hessian inverse is Newton's method to find zeros of the gradient
alstroemeria313#1694: And saddle points and maxima are zeros of the gradient.
alstroemeria313#1694: There are other things you can do with the Hessian that avoid this.
alstroemeria313#1694: Like preconditioning with |H|^-1 instead of H^-1
alstroemeria313#1694: i.e. reverse Newton's method in directions of negative curvature and Newton's method in directions of positive curvature.
Some Point Process#3793: It seems like hessians being applied only occasionally goes by this name <https://en.wikipedia.org/wiki/Truncated_Newton_method>
alstroemeria313#1694: BFGS and L-BFGS form their Hessian estimates in a way that ensure they are positive definite so they also avoid this problem
nshepperd#2316: hm that sounds kind of different
nshepperd#2316: with this you would apply the accumulated squared Hessian estimate every step. as a preconditioner. you just update it less often
alstroemeria313#1694: ok so to write this.
alstroemeria313#1694: hm i probably want to write it in a way that lets you use a beta schedule
alstroemeria313#1694: (Adam is pmuch never written this way but you can)
alstroemeria313#1694: It converges on Rosenbrock~
alstroemeria313#1694: I mean my EMA variant of it.
nshepperd#2316: oooh
alstroemeria313#1694: 20k steps https://cdn.discordapp.com/attachments/729741769738158194/933890195919634482/Screen_Shot_2022-01-20_at_5.06.09_PM.png
alstroemeria313#1694: beta_2 was 0.9
alstroemeria313#1694: So if it had Adam-like spikes upward when the second moment estimate decayed too much it would have happened
alstroemeria313#1694: I get a curve w/ no oscillations upward for 0.99 and 0.999 too
alstroemeria313#1694: lr was 1
alstroemeria313#1694: i need to try it on a neural net now (i wrote the torch.optim.Optimizer type one that does multiple tensors/param groups)
nshepperd#2316: :)
alstroemeria313#1694: Rosenbrock in 100 dimensions https://cdn.discordapp.com/attachments/729741769738158194/933890950126788658/Screen_Shot_2022-01-20_at_5.08.59_PM.png
alstroemeria313#1694: Init is random normal
nshepperd#2316: i wonder if hessian * N(0,I) is actually less sparse than the gradients
alstroemeria313#1694: 50k steps https://cdn.discordapp.com/attachments/729741769738158194/933891084923306034/Screen_Shot_2022-01-20_at_5.09.42_PM.png
nshepperd#2316: and that helps prevent the adam spikes?
alstroemeria313#1694: looks like it... got hung up?
alstroemeria313#1694: a second run converged https://cdn.discordapp.com/attachments/729741769738158194/933891244399144970/Screen_Shot_2022-01-20_at_5.10.18_PM.png
alstroemeria313#1694: what is this getting hung up phenomenon
alstroemeria313#1694: ohh
alstroemeria313#1694: using the epsilon *from the ESGD paper* rather than Adam's 1e-8 prevents it
alstroemeria313#1694: (they use 1e-4)
nshepperd#2316: huh
alstroemeria313#1694: lr 1 lol
alstroemeria313#1694: I wonder if that still holds for stochastic gradients and high dimensional problems
alstroemeria313#1694: I bet it doesn't bc the paper used lower
nshepperd#2316: units for lr are presumably different than adam bc it is fully accounting for parameterization
alstroemeria313#1694: yes.
alstroemeria313#1694: Different from both Adam and SGD.
alstroemeria313#1694: Rosenbrock in 10000 dimensions https://cdn.discordapp.com/attachments/729741769738158194/933892737143570442/Screen_Shot_2022-01-20_at_5.16.07_PM.png
alstroemeria313#1694: lr is still 1.
alstroemeria313#1694: ...Can I accumulate the intermediate totals for the hvp in double precision.
alstroemeria313#1694: Do I need to.
alstroemeria313#1694: ok i wrote the bias correction terms in such a way that you can change the betas over time
alstroemeria313#1694: *and* change every how often it does the Hessian diagonal computation (and thus an EMA update) over time
alstroemeria313#1694: And it will still give you the correct bias correction for that EMA
alstroemeria313#1694: Rather than storing a step count or something which will give you the wrong bias correction when you change either of these
nshepperd#2316: yay~
nshepperd#2316: like initing a variable with 0, and lerp it with 1 every step using the beta?
alstroemeria313#1694: init it with 1
alstroemeria313#1694: at each ema update you do like `state['beta2_accum'] *= beta2`
nshepperd#2316: ah
alstroemeria313#1694: then your bias correction factor is `(1 - state['beta2_accum'])`
nshepperd#2316: ahh
nshepperd#2316: yep
alstroemeria313#1694: No one does it this way bc no one ever bothers to schedule beta or if they change it manually it's late in optimization where the bias correction factor is nearly 1 anyway
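A minimal sketch of that schedule-friendly bias correction (the state dict and names are illustrative, in the torch.optim style):
```python
state = {'ema': 0.0, 'beta2_accum': 1.0}

def ema_update(state, value, beta2):
    # correct even when beta2 changes between calls
    state['ema'] = state['ema'] * beta2 + (1 - beta2) * value
    state['beta2_accum'] *= beta2
    return state['ema'] / (1 - state['beta2_accum'])  # bias-corrected EMA

print(ema_update(state, 4.0, 0.99))   # 4.0 on the first update, as expected
print(ema_update(state, 2.0, 0.999))  # still correctly debiased after changing beta2
```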
alstroemeria313#1694: hm i need to try this on a neural net
alstroemeria313#1694: `RuntimeError: derivative for aten::grid_sampler_2d_backward is not implemented`
alstroemeria313#1694: lol
nshepperd#2316: ^^;;
alstroemeria313#1694: nan loss
alstroemeria313#1694: ...Can I not double backward through CLIP in fp16?
alstroemeria313#1694: I mean with CLIP being in fp16
alstroemeria313#1694: yep that was it.
alstroemeria313#1694: it's optimizing~
alstroemeria313#1694: is really slow though.
alstroemeria313#1694: and using a ton of memory
alstroemeria313#1694: btw lr 1 is working on an actual neural net
alstroemeria313#1694: (Deep Image Prior lol)
nshepperd#2316: hehehe
alstroemeria313#1694: (I will post a video of each step's output once it's done)
alstroemeria313#1694: the gradients are stochastic because of the augmentations
alstroemeria313#1694: it's stuck
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/933901239773696010/esgd.mp4
alstroemeria313#1694: "the first day of the waters"
alstroemeria313#1694: a second run with beta_2=0.9 seems to be going a bit better.
alstroemeria313#1694: Wow this paper is from 2015
alstroemeria313#1694: It's pre-Adam
alstroemeria313#1694: They compare vs RMSProp
𓅬 gabriel_syme 𓅬#3220: They are still waiting for that blessed day to end :hap:
alstroemeria313#1694: ehehe
𓅬 gabriel_syme 𓅬#3220: Remembering the outputs historically is pretty interesting
𓅬 gabriel_syme 𓅬#3220: I remember the first ones
alstroemeria313#1694: but you have to admit "I used lr 1 on the Rosenbrock function in two dimensions and it worked and then I used lr 1 on the Rosenbrock function in 1000 dimensions and it worked and then I used lr 1 on a 23M param deep neural net and it worked"
alstroemeria313#1694: Is an impressive thing for an optimizer
nshepperd#2316: ehehe
alstroemeria313#1694: should i not bias correct the first moment btw
alstroemeria313#1694: Like a tiny builtin lr warmup
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/933905232558178374/esgd_2.mp4
alstroemeria313#1694: i guess a question is, when it hits that plateau and stops changing, is it actually at a minimum or has it just gotten stuck
alstroemeria313#1694: it has to be the latter right, these gradients are so noisy it should still be visibly moving around
alstroemeria313#1694: you can't actually "converge" with the amount of augmentations i am using
nshepperd#2316: can you try the scalar second moment variant
alstroemeria313#1694: but this was with beta_2=0.9 so
alstroemeria313#1694: if it got a huge diagonal estimate it should have decayed and started moving again
alstroemeria313#1694: kk
nshepperd#2316: :)
nshepperd#2316: that's weird actually, how can it get stuck
alstroemeria313#1694: hvp result is huge
alstroemeria313#1694: compared to gradient magnitude
alstroemeria313#1694: idk why this happens
nshepperd#2316: and it has to keep being huge so that it doesn't decay
alstroemeria313#1694: yep
alstroemeria313#1694: it does
alstroemeria313#1694: i looked at what it was when trying to find out what was going on with rosenbrock's plateaus
alstroemeria313#1694: it did in fact keep being large
alstroemeria313#1694: so to do an hvp.
alstroemeria313#1694: you do `torch.sum(p.grad * torch.randn_like(p.grad))`
alstroemeria313#1694: sum the results over the params
alstroemeria313#1694: then `hvps = torch.autograd.grad(total, params)`?
alstroemeria313#1694: then we EMA the squared hvps
alstroemeria313#1694: and sqrt the EMA before using it.
alstroemeria313#1694: (`torch.randn_like(p.grad)` is the v in the hvp)
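That recipe, assembled into runnable PyTorch (toy parameters and loss; `torch.autograd.grad` with `create_graph=True` stands in for the p.grad-based version described above, which needs the first backward done with `create_graph=True`):
```python
import torch

params = [torch.randn(3, requires_grad=True), torch.randn(2, requires_grad=True)]

def loss_fn(ps):
    return sum((p ** 4).sum() for p in ps)  # stand-in loss

# first backward, keeping the graph so it can be differentiated again
grads = torch.autograd.grad(loss_fn(params), params, create_graph=True)

# random probe v, one tensor per parameter
vs = [torch.randn_like(g) for g in grads]

# total = <grad, v>; differentiating it once more gives H @ v
total = sum((g * v).sum() for g, v in zip(grads, vs))
hvps = torch.autograd.grad(total, params)

# squared hvps are the per-step estimates of diag(H^2) that get EMA'd
d_estimates = [h ** 2 for h in hvps]
```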
nshepperd#2316: huhhh
alstroemeria313#1694: That is valid right.
nshepperd#2316: wait what
nshepperd#2316: the thing that i found was a forward / backward
nshepperd#2316: not a double backward
alstroemeria313#1694: I can't forward in pytorch
alstroemeria313#1694: I have to use reverse mode
nshepperd#2316: is it equivalent
Some Point Process#3793: Does (global, etc) convergence on rosenbrock imply anything about convergence in practice?
alstroemeria313#1694: it's mostly that adam notoriously doesn't converge on it
alstroemeria313#1694: it is a common optimizer test problem
alstroemeria313#1694: gradient descent is extremely slow on it
nshepperd#2316: wtf is discuss.pytorch.org down
alstroemeria313#1694: it's up for me
nshepperd#2316: no it just took a whole minute to load
alstroemeria313#1694: was it this https://cdn.discordapp.com/attachments/729741769738158194/933908681626624051/Screen_Shot_2022-01-20_at_6.19.37_PM.png
alstroemeria313#1694: yeah reverse mode is terrible for doing more than one hvp in a batch
alstroemeria313#1694: it requires n backwards for n hvps
alstroemeria313#1694: however i only do the one
alstroemeria313#1694: @nshepperd here is the video for the single scalar per param tensor variant https://cdn.discordapp.com/attachments/729741769738158194/933909298495520798/esgd_3.mp4
alstroemeria313#1694: basically the same behavior.
alstroemeria313#1694: huh lr 5 is working
alstroemeria313#1694: (still with the scalar variant)
nshepperd#2316: oh the double backward is equivalent
alstroemeria313#1694: ah ty :)
nshepperd#2316: according to https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html
nshepperd#2316: can we just clip the hvp lol
nshepperd#2316: to stop it getting stuck
alstroemeria313#1694: eheh
𓅬 gabriel_syme 𓅬#3220: huh maybe there is a levy flight dynamic after all
https://neurips.cc/virtual/2021/poster/28167
alstroemeria313#1694: @nshepperd using lr > 1 made it become unstuck
alstroemeria313#1694: ohhhh
alstroemeria313#1694: I did a version using *SGD* type momentum
alstroemeria313#1694: (i.e. where you add grad rather than (1 - beta_1) * grad)
alstroemeria313#1694: And that seems to apply a good warmup and effective step size increase over time that it converges *way* faster on Rosenbrock
alstroemeria313#1694: with lr 1
alstroemeria313#1694: and anything above 1, again, is unstable/fails
alstroemeria313#1694: That is, Adam momentum makes the effective step size at lr 1 *too low*.
alstroemeria313#1694: Except like, on the first step.
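A toy check of that claim: with a constant gradient, debiased Adam-style momentum stays at exactly the gradient, while SGD-style momentum grows toward grad / (1 - beta), i.e. about 10x for beta = 0.9:
```python
g = 1.0
m_sgd, m_adam = 0.0, 0.0
for t in range(1, 101):
    m_sgd = 0.9 * m_sgd + g               # SGD-style: add the raw grad
    m_adam = 0.9 * m_adam + 0.1 * g       # Adam-style EMA
    m_adam_hat = m_adam / (1 - 0.9 ** t)  # debiased

print(round(m_sgd, 3), round(m_adam_hat, 3))  # ~10.0 vs 1.0
```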
nshepperd#2316: huhh
alstroemeria313#1694: Rosenbrock loss (2 dim) with SGD type momentum instead of Adam. https://cdn.discordapp.com/attachments/729741769738158194/933913752414322728/Screen_Shot_2022-01-20_at_6.39.36_PM.png
alstroemeria313#1694: lr 1.
alstroemeria313#1694: Look at that, it's perfect
nshepperd#2316: wow
alstroemeria313#1694: that is called "linear convergence" i think
alstroemeria313#1694: is failing on high enough dimension Rosenbrock
nshepperd#2316: hmmm
nshepperd#2316: what if we use ema(abs(hvp)) instead of the sqrt(ema(hvp**2))
alstroemeria313#1694: hvp**2 is the actual diagonal of the squared Hessian though
alstroemeria313#1694: hm
alstroemeria313#1694: we just sqrt it before using it
alstroemeria313#1694: sgd momentum version, lr 1 https://cdn.discordapp.com/attachments/729741769738158194/933916521774858240/esgd_4.mp4
alstroemeria313#1694: loss is *way* lower
alstroemeria313#1694: but like. using sgd momentum while keeping lr the same is essentially a bet that the momentum "works"
alstroemeria313#1694: (This is with the diagonal preconditioner again, per-parameter not per-tensor)
nshepperd#2316: it is probably better to have a lr warmup decoupled from momentum yeah
alstroemeria313#1694: hm how to warm it up
alstroemeria313#1694: should it be tied to beta_2
alstroemeria313#1694: or an entirely separate thing
nshepperd#2316: hm although sgd type momentum sort of has an interpretation as attenuating the learning rate according to the variance in the estimate of m1. which is higher initially bc its the ema of fewer samples?
alstroemeria313#1694: yeah
nshepperd#2316: i am trying this on diffusion and it keeps exploding
alstroemeria313#1694: oh no
nshepperd#2316: something is broken
alstroemeria313#1694: sgd or adam type momentum?
alstroemeria313#1694: lr?
nshepperd#2316: sgd
nshepperd#2316: lr like 1e-4
alstroemeria313#1694: .......
alstroemeria313#1694: That shouldn't happen
nshepperd#2316: ```py
def closs(params):
return compute_loss(params, *batch, key1)[0]
tangents = jax.tree_util.tree_map(lambda p: jax.random.normal(rng.split(), p.shape), params)
hvp = jax.jvp(jax.grad(closs), [params], [tangents])[1]
hvp = jax.tree_util.tree_map(lambda p: p**2, hvp)
hvp = jax.lax.pmean(hvp, axis_name='x')
```
alstroemeria313#1694: tangents?
alstroemeria313#1694: oh is that the forward mode hvp thing?
nshepperd#2316: yeah
alstroemeria313#1694: oh, 'tangents' is v
nshepperd#2316: yeah
nshepperd#2316: it naned with lr=1e-8
alstroemeria313#1694: are you overflowing somewhere
alstroemeria313#1694: or doing anything in fp16
nshepperd#2316: it should all be fp32
alstroemeria313#1694: ahh
alstroemeria313#1694: is it NaNing inside the hvp?
alstroemeria313#1694: or after the step?
alstroemeria313#1694: lol deep image prior was getting stuck bc daniel's script annealed lr and i forgot to take that out
nshepperd#2316: lol
nshepperd#2316: putting nan checks in
alstroemeria313#1694: and i pattern matched it to the rosenbrock loss function plateauing
nshepperd#2316: hmm it looks like it's NaNing on the hvp?
alstroemeria313#1694: here it is without the lr decay i left in https://cdn.discordapp.com/attachments/729741769738158194/933933953960775740/esgd_8.mp4
alstroemeria313#1694: lr 1 the whole time.
alstroemeria313#1694: sgd style momentum.
nshepperd#2316: ooh
alstroemeria313#1694: also. if i turn momentum off on high dim rosenbrock it still doesn't work at lr 1
alstroemeria313#1694: adam style momentum was hiding something more fundamental by lowering the effective step size.
alstroemeria313#1694: in fact the range of lrs it works at are ~the same between no momentum and sgd style momentum.
alstroemeria313#1694: so i think i will go with sgd style
alstroemeria313#1694: sgd style is ofc only for the gradient, the hessian^2 diagonal is still adam style
nshepperd#2316: *nods*
nshepperd#2316: the losses explode before it nans
alstroemeria313#1694: ahh
nshepperd#2316: so i think something is just wrong
alstroemeria313#1694: what, even at super low lr?
alstroemeria313#1694: ...what should i call this optimizer
nshepperd#2316: yeah
alstroemeria313#1694: on the one hand it is just an improved esgd
nshepperd#2316: Edam
nshepperd#2316: hehehe
alstroemeria313#1694: on the other hand optimizers with "sgd" in their names are uncool and optimizers with "ada" in their names are hot and fresh
alstroemeria313#1694: On the third hand it behaves quite differently from either class
alstroemeria313#1694: like if you consider the things it is invariant to.
alstroemeria313#1694: with "sgd" type optimizers if you reparameterize so that the params you optimize over are scaled by n, you need to adjust your lr by n^2, and if you scale your loss by n you need to adjust your lr by n.
alstroemeria313#1694: with "ada" type optimizers, for params you adjust by n and for loss you don't
alstroemeria313#1694: with esgd type, you don't need to adjust for either.
nshepperd#2316: it's more like newton's method in that sense
alstroemeria313#1694: Yeah.
nshepperd#2316: or L-BFGS? does that work like that too?
alstroemeria313#1694: Bc it *actually uses a second order thing* as a preconditioner
alstroemeria313#1694: yes
alstroemeria313#1694: l-bfgs gets its second order info from the history of gradients and steps
alstroemeria313#1694: not an hvp
alstroemeria313#1694: so stochastic gradients break it p badly
alstroemeria313#1694: and you need to do line searches etc
alstroemeria313#1694: bc it is just getting an estimate of second order things from first order gradients.
alstroemeria313#1694: like, on the first step l-bfgs has no clue what size step to take bc all it has is one gradient
alstroemeria313#1694: so you just have to do a line search.
alstroemeria313#1694: i think the only other thing out there like esgd is adahessian and that worked poorly when i tried it
alstroemeria313#1694: lol lr 10 is working on deep image prior. what even
alstroemeria313#1694: lr 10 https://cdn.discordapp.com/attachments/729741769738158194/933939766620217364/esgd_9.mp4
nshepperd#2316: it explodes instantly when i use the squared grads instead of the squared hvp
nshepperd#2316: which should make it just normal adam
alstroemeria313#1694: ...wow
nshepperd#2316: something is wrong with my optimizer
alstroemeria313#1694: yeahhh ^^;;
nshepperd#2316: :goose10:
alstroemeria313#1694: ...lr 100 is working?
alstroemeria313#1694: the image is visibly degraded even further
alstroemeria313#1694: ok i am clearly seeing loss spikes and bad things happening to the image at lr 100
alstroemeria313#1694: let me post this video
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/933941392198238258/esgd_10.mp4
nshepperd#2316: ehehe
nshepperd#2316: what a light show
alstroemeria313#1694: oh huh, bengio is an author on this paper
alstroemeria313#1694: ok this is cool and i need to make the code clean and documented
alstroemeria313#1694: and not call it "ESGD" bc it is an improvement of it the same way adam is an improvement of rmsprop/adagrad
alstroemeria313#1694: also how do i weight decay
alstroemeria313#1694: just decoupled i guess?
alstroemeria313#1694: and not multiplied by lr except it has to go down when you decay lr
alstroemeria313#1694: so i guess i can store wd / initial_lr
alstroemeria313#1694: then multiply that by the current lr
alstroemeria313#1694: Hey how do we handle distributed training
alstroemeria313#1694: Or grad accumulation steps
alstroemeria313#1694: I mean API wise
alstroemeria313#1694: Since you just have to accumulate and take the mean of the squared Hessian diagonals too
alstroemeria313#1694: Right?
nshepperd#2316: yeah
nshepperd#2316: i was wondering whether i should mean then square or square then mean
alstroemeria313#1694: or is it valid to like... combine hvps from separate parts of the training set
nshepperd#2316: bc with adam i have been mean-then-squaring
nshepperd#2316: like you pmean the grads
alstroemeria313#1694: for what?
alstroemeria313#1694: the scalar variant?
nshepperd#2316: and then adam uses the squared, average grads
nshepperd#2316: like just regular adam
alstroemeria313#1694: oh
alstroemeria313#1694: if you have like multiple gpus or tpu cores
alstroemeria313#1694: you mean the gradients then square them
alstroemeria313#1694: but for this you want to square the hvps then... idk
nshepperd#2316: but like. you can mean the hvps across gpus then square them?
alstroemeria313#1694: square then mean them
nshepperd#2316: or that
alstroemeria313#1694: is what i would try.
alstroemeria313#1694: bc the squared hvp is an estimate of the diagonal of H^2
nshepperd#2316: is it supposed to be the H on a single data point though
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/933945541266669598/Screen_Shot_2022-01-20_at_8.46.06_PM.png
alstroemeria313#1694: the expectation is over the squared hvp
alstroemeria313#1694: so if you have multiple hvps you need to square then mean
nshepperd#2316: ah
alstroemeria313#1694: this is also why we ema the squared hvps
nshepperd#2316: that gives you an unbiased estimate of like.. the average (Hv)^2 for each H implied by a given minibatch, over the dataset
sweg#8920: does anyone know of any implementations of googles updated version of the shampoo optimizer?
sweg#8920: trying to use it for carp but cant find anything
Louis#0144: Never mind ignore ping lol
nshepperd#2316: lol ok i was not actually using the learning rate
nshepperd#2316: thats why adam was exploding
alstroemeria313#1694: ahhh
alstroemeria313#1694: any ideas on a pytorch api for gradient accumulation/distributed training?
alstroemeria313#1694: like i would have to separate out doing the hvp
alstroemeria313#1694: then the squared hvps can be meaned across steps or across nodes
alstroemeria313#1694: this sounds like a pain
alstroemeria313#1694: ...also wow pytorch's lack of support for "a set of nested collections of tensors shaped like this other set of nested collections of tensors" is a pain sometimes
alstroemeria313#1694: i.e. pytrees
alstroemeria313#1694: @nshepperd does it still NaN if you do the hvp by double reverse instead
nshepperd#2316: it might actually make sense to just like, implement pytrees as a third party library lol
alstroemeria313#1694: Yes
alstroemeria313#1694: I have wanted them many times
nshepperd#2316: it's like one file in jax
nshepperd#2316: it is working with lr=0.1 https://cdn.discordapp.com/attachments/729741769738158194/933954980799455272/2022-01-21-162328_1006x603_scrot.png
nshepperd#2316: it is much faster than adam so far...
nshepperd#2316: the salmon pink one is the hvp thing
alstroemeria313#1694: ohhh
alstroemeria313#1694: what are the others?
nshepperd#2316: the ~3 identical ones were adam with lr=2.5e-5
nshepperd#2316: the blue really slow one was hvp with lr=1e-8
nshepperd#2316: just making sure that it didn't nan eheh
nshepperd#2316: it is continuing to train about 6x faster than adam did
nshepperd#2316: demo at 4k steps https://cdn.discordapp.com/attachments/729741769738158194/933963731107479622/demo_4k.png
alstroemeria313#1694: is your adam lr optimal?
nshepperd#2316: uh, i remember it being unstable when i increased it above that i think
nshepperd#2316: but not necessarily with this version of the model so
alstroemeria313#1694: ahh
alstroemeria313#1694: so how do you put *global* options in a pytorch optimizer
alstroemeria313#1694: like the option to compute the hvp every k steps
alstroemeria313#1694: @nshepperd also if you compute the d estimate every k steps it will often diverge early on *unless* you also always compute it for the first so many steps
alstroemeria313#1694: i guess to get a low enough variance estimate
Louis#0144: Thought we were talking about drugs again
alstroemeria313#1694: i still need a good name for this optimizer
𓅬 gabriel_syme 𓅬#3220: 10x optimizer :thinkies:
alstroemeria313#1694: huh https://cdn.discordapp.com/attachments/729741769738158194/933997706668568576/Screen_Shot_2022-01-21_at_12.13.23_AM.png
alstroemeria313#1694: adagrad/adam's learning rate is actually in the units of the parameters
alstroemeria313#1694: since you multiply the ratio of the first and second moments by the lr
alstroemeria313#1694: and there is a straightforward interpretation of the adam/etc lr as related to the maximum amount, in parameter units, the optimizer is allowed to change parameters per step
alstroemeria313#1694: however if you have Hessian information your updates already have the correct units and your lr is then not in the units of the parameters
alstroemeria313#1694: i.e. if you reparameterize your problem by multiplying your params by a scalar (or diagonal matrix, or a general matrix, depending on the method) you do not have to change your lr
alstroemeria313#1694: and it more easily handles problems where different parameters have different units
Some Point Process#3793: Yeah I believe the inverse function theorem would say more here; https://en.wikipedia.org/wiki/Inverse_function_theorem
alstroemeria313#1694: oh?
Some Point Process#3793: Yep ^^ specifically: https://cdn.discordapp.com/attachments/729741769738158194/934006716822065162/unknown.png
Some Point Process#3793: Also, the hessian gives the jacobian matrix of a function of several variables outputting a scalar. So hessian (and its inverse) is a matrix acting on another matrix, which naturally points to different units
Some Point Process#3793: but more specifically, the inverse theorem states that (for a single variable function), f(x) = y, f'(x) = 1/f-1'(y) = 1/f-1'(f(x)) <=> 1/f'(x) = f-1'(y). Keeping in mind that the multiplicative inverse of a scalar value corresponds to the matrix inverse for vector valued functions. And also keeping in mind that the domain of the inverse of a function is the range of that function https://cdn.discordapp.com/attachments/729741769738158194/934009964903665664/unknown.png
Some Point Process#3793: (for the above relation it should also say something relevant I think)
Some Point Process#3793: (i think you can also replace "domain" and "range" with preimage and image since u only need local invertibility)
alstroemeria313#1694: ah
Some Point Process#3793: i.e. locally, the slope of a function = its derivative. Multiplying the slope by the reciprocal of that slope == 1. Taking the reciprocal of that slope is (definitionally) equal to evaluating the derivative of the (local) inverse of the function, except at the value that the base function took on (i.e. to calculate the slope). I never heard anyone say it this way but this is how I best remember inverse theorem
> Also, the hessian gives the jacobian matrix of a function of several variables outputting a scalar
Sorry, should've said "hessian is from jacobian of the jacobian of the orig function"
nshepperd#2316: something something... AdaMetric? lol
nshepperd#2316: something about local curvature idk
alstroemeria313#1694: so this does *not* seem invariant to general diagonal rescalings of the parameters
alstroemeria313#1694: it is invariant to multiplying them by a scalar
alstroemeria313#1694: this is bc the Hessian diagonal is not the Hessian
nshepperd#2316: huh
AI_WAIFU#2844: This issue has been open for almost 2 years and I'm tilted.
https://github.com/google/jax/issues/2731
nshepperd#2316: classic
nshepperd#2316: :works_internally:
AI_WAIFU#2844: reeeeeeee
nshepperd#2316: :Hanabah:
𓅬 gabriel_syme 𓅬#3220: good news I feel, once it's closed Jax might be out of beta and then it quickly becomes obsolete
alstroemeria313#1694: @nshepperd i am putting a "manual mode" in where you can get the D estimates and average them if you want
alstroemeria313#1694: like for gradient accumulation or distributed training
nshepperd#2316: :)
nshepperd#2316: my current thing has you compute and pass in the grads and hvp into the update function
nshepperd#2316: which is less than ideal
nshepperd#2316: btw in cld you can use expm1 in two places in this
```py
def make_sigma_hsm(t):
sigma_vv_0 = 0.01
sigma_xx = jnp.exp(-16 * t) * (jnp.expm1(16 * t) - 16 * t - 128 * t**2 + 16 * (4 * t)**2 * sigma_vv_0)
sigma_xv = jnp.exp(-16 * t) * (16 * t * sigma_vv_0 + 64 * t**2 - 128 * t**2 * sigma_vv_0)
sigma_vv = jnp.exp(-16 * t) * (jnp.expm1(16 * t) / 4 + 4 * t + sigma_vv_0 * (1 + 4 * (4 * t)**2 - 16 * t) - 2 * (4 * t)**2)
sigma = jnp.stack([jnp.stack([sigma_xx, sigma_xv], axis=-1),
jnp.stack([sigma_xv, sigma_vv], axis=-1)], axis=-2) # [nij]
```
nshepperd#2316: that made the accuracy much better at low t even without adding an eps
nshepperd#2316: hm would maybe be even better with some sort of special implementation of jnp.expm1(x) - x
alstroemeria313#1694: ooh
nshepperd#2316: oh this is just like, the 2..∞ terms of the usual x**k/k! series for exp
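One possible expm1(x) - x along those lines, summing exactly the k ≥ 2 series terms for small |x| (a sketch, not code from the project):
```python
import jax.numpy as jnp

def expm1_minus_x(x, terms=10):
    # sum_{k>=2} x**k / k!, accurate for small |x|; fall back to the
    # naive expression elsewhere
    out = jnp.zeros_like(x)
    term = x * x / 2.0  # k = 2 term
    for k in range(2, 2 + terms):
        out = out + term
        term = term * x / (k + 1)
    return jnp.where(jnp.abs(x) < 1.0, out, jnp.expm1(x) - x)
```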
alstroemeria313#1694: hm how to do weight decay for esgd
alstroemeria313#1694: given that the interpretation of lr is so different
nshepperd#2316: it sounds like the lrs are going to be about 1 usually and not take much tuning so.. maybe it is good enough to do a decoupled weight decay that is just like -lr * weight_decay * params?
alstroemeria313#1694: they do take tuning
alstroemeria313#1694: in kind of weird ways
alstroemeria313#1694: it's not clear that if you have to decrease lr a bunch then your effective step sizes are actually small, either
alstroemeria313#1694: in fact if you have to decrease lr a bunch probably your step sizes are normal about when it starts training
alstroemeria313#1694: regardless of how far you had to decrease it
alstroemeria313#1694: so i am going to save the initial lr and then do `p.mul_(1 - weight_decay * lr / initial_lr)`
alstroemeria313#1694: this is the same as the thing from the adamw paper
nshepperd#2316: ah
alstroemeria313#1694: except automatic
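As a sketch, that decay step (the params/lr/initial_lr arguments mimic a torch.optim param group; names are illustrative):
```python
import torch

@torch.no_grad()
def apply_weight_decay(params, weight_decay, lr, initial_lr):
    # decay follows the lr *schedule* (lr / initial_lr), not the absolute lr,
    # since the lr here is not in parameter units
    for p in params:
        p.mul_(1 - weight_decay * lr / initial_lr)
```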
alstroemeria313#1694: alternative is to compute an explicit L2 penalty loss
alstroemeria313#1694: and add it to the loss fn
alstroemeria313#1694: but that... is going to be weird and also dependent on the scale of the other loss functions
alstroemeria313#1694: and will also be slow bc of using autograd to compute it so you can double backward
alstroemeria313#1694: so not a good idea
nshepperd#2316: yeah
alstroemeria313#1694: i am not actually sure what the lr *means* with this and why you have to tune it sometimes
alstroemeria313#1694: but like. decoupled weight decay is *also* invariant to the scale of the params and to the scale of the loss
alstroemeria313#1694: ...Actually is that why it was a bad idea to multiply it by the sgd and adam lr.
nshepperd#2316: it's invariant to rescaling but lr in sgd and adam isn't?
nshepperd#2316: like the units are wrong if you multiply it
alstroemeria313#1694: yep
nshepperd#2316: but with this the lr is unitless so
nshepperd#2316: idk
hotgrits#0196: Neat project for manga scan restorations. https://github.com/msxie92/MangaRestoration With the assumption that an ideal manga scan is initially a sharp black and white image with certain half-tone patterns, the blurriness of the image can be used to estimate the original high res scan's features (since it's only ever going to be a certain pattern of black dots on white, in the assumed cases) and synthesize a high res version of the image. There's a pretrained model available.
https://colab.research.google.com/drive/1b8MIKQSiQCE5IfacBCR_DVvFA_--8gUH Hopefully an easy to use interface for https://github.com/msxie92/MangaRestoration "Exploiting Aliasing for Manga Restoration"
alstroemeria313#1694: eheh managed to get a 10,330 parameter convnet to memorize the MNIST training set
alstroemeria313#1694: cross-entropy train loss was like 1.5e-6
nshepperd#2316: eheh
nshepperd#2316: with the hvp thing?
alstroemeria313#1694: yes
alstroemeria313#1694: i wish i knew enough jax to make like, an optax version of it
alstroemeria313#1694: i don't and would end up making up an api and it would be bad to experienced jax users
AI_WAIFU#2844: is this with that new optimizer you've been working on?
alstroemeria313#1694: especially because with jax you can *pmean the gradients and the diagonal estimates* easily
alstroemeria313#1694: yes
nshepperd#2316: hm i'm not sure it can even be done in optax
alstroemeria313#1694: oh
nshepperd#2316: bc optax sort of assumes you just get grads, nothing else
alstroemeria313#1694: ah
alstroemeria313#1694: well, someone could do something with a similar api i guess
alstroemeria313#1694: and provide like, a value_grad_and_d() function
alstroemeria313#1694: then you could pmean all three
alstroemeria313#1694: and feed them to the optimizer and get updates and a new optimizer state out.
alstroemeria313#1694: to do gradient accumulation you just take the mean of all three, for distributed training you pmean all three
alstroemeria313#1694: d would be the squared hvp
alstroemeria313#1694: whereas in pytorch either gradient accumulation or distributed training is a complete and utter pain
alstroemeria313#1694: if you want to not do a d update every step you can just value_and_grad() it too and supply it to the optimizer update function without the d and it will just not update the ema d
nshepperd#2316: hmm
alstroemeria313#1694: `warnings.warn("JAX on Mac ARM machines is experimental and minimally tested. "`
alstroemeria313#1694: lol
nshepperd#2316: ```py
def compute_hvp(loss_fn, params, key, has_aux=False):
rng = PRNG(key)
tangents = jax.tree_util.tree_map(lambda p: jax.random.normal(rng.split(), p.shape), params)
(value, grads), (_, hvp) = jax.jvp(jax.value_and_grad(loss_fn, has_aux=has_aux), [params], [tangents])
hvp = jax.tree_util.tree_map(lambda p: p**2, hvp)
return value, grads, hvp
```
nshepperd#2316: unlike the rest of jax which is so well tested ^^;;
alstroemeria313#1694: ehehe
nshepperd#2316: ```py
def closs(params):
return compute_loss(params, *batch, key1)
((loss, metrics), grads, hvp) = value_grad_and_d(closs, params, rng.split(), has_aux=True)
grads = jax.lax.pmean(grads, axis_name='x')
hvp = jax.lax.pmean(hvp, axis_name='x')
params = opt_state.update(params, grads, hvp)
```
nshepperd#2316: this is not terrible
nshepperd#2316: (has_aux is so you can return other stuff alongside the loss, just like value_and_grad lets you do)
alstroemeria313#1694: *nods*
alstroemeria313#1694: do you still get NaNs doing it?
alstroemeria313#1694: ...how do you actually get a convnet to zero train loss with adam
alstroemeria313#1694: i had literally never seen that happen before
nshepperd#2316: nope, it's working well now
nshepperd#2316: 132 epochs https://cdn.discordapp.com/attachments/729741769738158194/934115493298716782/demo_132.png
alstroemeria313#1694: ooh
nshepperd#2316: the advantage is slowing down a little https://cdn.discordapp.com/attachments/729741769738158194/934115699918528543/2022-01-22-030157_1028x627_scrot.png
nshepperd#2316: it got where it is in about 1/4 as many steps as the previous Adam run
alstroemeria313#1694: but that's twice wall clock time per step?
nshepperd#2316: hm yeah, about twice wall clock time
alstroemeria313#1694: so what you can do is
alstroemeria313#1694: do the hvp and d update only every 10 or 20 steps
nshepperd#2316: yeah
alstroemeria313#1694: but you have to do it for like, the first 20 steps regardless
alstroemeria313#1694: because the single sample d estimate from the first step is too high variance for subsequent steps and it will break easily
alstroemeria313#1694: but if you warm up the d estimate first
alstroemeria313#1694: you can then switch to doing an hvp every so many steps
alstroemeria313#1694: this gets you close to the same wall clock time as a first order optimizer but you still need to spend the extra memory
nshepperd#2316: oh that's true
nshepperd#2316: although with jvp it shouldn't need much more memory
nshepperd#2316: just 2x of the intermediates for the backward
alstroemeria313#1694: ahh
alstroemeria313#1694: everyone just forgot about the esgd paper didn't they
nshepperd#2316: ehehe
alstroemeria313#1694: they implemented it in theano in 2015
alstroemeria313#1694: theano could do hessian-vector products
nshepperd#2316: theano...
alstroemeria313#1694: you could do forward+reverse or reverse+reverse in it
alstroemeria313#1694: in 2015
nshepperd#2316: i suppose everyone just assumed it didn't work
nshepperd#2316: bc adam wasn't invented yet
alstroemeria313#1694: I tried it *several times*
alstroemeria313#1694: Usually getting it wrong
alstroemeria313#1694: ^^;;
nshepperd#2316: eheh...
AI_WAIFU#2844: happens all the time
AI_WAIFU#2844: there is so much shit that just slips through the cracks, and anything older than 5 years may as well not exist
nshepperd#2316: i'm probably forgetting at least one important paper right now
nshepperd#2316: it's working :)
alstroemeria313#1694: yay~!
nshepperd#2316: do_hvp=(global_step<20 or global_step%10==0)
nshepperd#2316: ```py
def train_step(key, batch, params, params_ema, opt_state, this_decay, do_hvp):
rng = PRNG(key)
key1 = rng.split()
def loss_fn(params):
return compute_loss(params, *batch, key1)
if do_hvp.value:
((loss, metrics), grads, hvp) = value_grad_and_d(loss_fn, params, rng.split(), has_aux=True)
grads = jax.lax.pmean(grads, axis_name='x')
hvp = jax.lax.pmean(hvp, axis_name='x')
params = opt_state.update(params, grads, hvp)
else:
((loss, metrics), grads) = jax.value_and_grad(loss_fn, has_aux=True)(params)
grads = jax.lax.pmean(grads, axis_name='x')
params = opt_state.update(params, grads, None)
params_ema = jax.tree_util.tree_map(lambda p, m: p * (1 - this_decay) + m * this_decay, params, params_ema)
return metrics['loss'], params, params_ema, opt_state, {'loss': metrics['loss']}
train_step_pmap = jax.pmap(train_step, in_axes=(0,0,0,0,0,None,None), axis_name='x')
```
alstroemeria313#1694: :)
nshepperd#2316: it's fast too!
alstroemeria313#1694: yay~
nshepperd#2316: except jitting both codepaths is kind of slow heh
nshepperd#2316: at the start of training
nshepperd#2316: https://cdn.discordapp.com/attachments/729741769738158194/934123722451857428/2022-01-22-033402_1022x625_scrot.png
alstroemeria313#1694: :)
nshepperd#2316: alright bedtime~
alstroemeria313#1694: nightnight~
nshepperd#2316: night~
alstroemeria313#1694: eheh got a 49130 param convnet to memorize CIFAR-10
alstroemeria313#1694: i didn't even use lr decay
cfoster0#4356: Huh, OpenAI is now recommending the Instruct models as the default across natural language tasks 🤔
alstroemeria313#1694: ohh~
alstroemeria313#1694: still need a name for this optimizer
kurumuz#5695: makes sense
StellaAthena#3530: I would too
bmk#1476: the instruct models are awesome
alstroemeria313#1694: trying an esgd trained gan now
bmk#1476: while I was working on a thing I needed to do some transformations to my text
bmk#1476: so I asked instruct to do it
bmk#1476: and it *just worked*
bmk#1476: no more wrestling with the model or adding a zillion few shot examples to get the model to maybe actually answer your question
HypnoPump17#9322: Oh thats nice thanks! I will do some tests next week and get back to u/here... but yea it'd be awesome if it works!
cfoster0#4356: Hmm. Do few shot examples still work well for them though?
bmk#1476: I think so but I haven't ever had to try it
CarsonPoole#0640: GPT-J instruct is massively better at tasks from what we've seen
bmk#1476: wait, there's a gpt-j instruct?
CarsonPoole#0640: was just working with a customer last night who basically was trying to reword ecommerce product titles and descriptions and standard GPT-J took like 3-4 examples in the prompt and still was not the best. Instruct only needed 1 and was much better than the standard's 3-4
CarsonPoole#0640: it's on our platform
bmk#1476: oh
bmk#1476: is it just J fine tuned on promptsource?
CarsonPoole#0640: similar to that yeah
CarsonPoole#0640: we also use it as a base weight that people fine tune their own stuff off of and that results in better quality there
bmk#1476: have you compared it to curie-instruct-beta or whatever
CarsonPoole#0640: haven't really had the time to do a direct comparison in detail but on some of the examples from the GPT-3 instruct page the outputs from J-instruct are similar
bmk#1476: ah ok
bmk#1476: well if you do a direct comparison I'd love to hear more about it
CarsonPoole#0640: I think I sent one in the #gpt-j channel as almost a direct quote from OAI's examples
CarsonPoole#0640: let me try to find the message
bmk#1476: I think one off anecdotes are not very useful
CarsonPoole#0640: https://discord.com/channels/729741769192767510/851918317039255592/920518241477070898
bmk#1476: I'm mostly interested in human eval scores with reasonably narrow stderrs
CarsonPoole#0640: gotcha yeah don't really have the time/bandwidth right now to do those types of tests
bmk#1476: makes sense
kyubey#7880: https://generative.ink/posts/quantifying-curation/
kyubey#7880: this post is so good to read
bmk#1476: from our very own @janus
kyubey#7880: the more I think about this the more it seems like the most important goal I could ever contribute to in my life
kyubey#7880: The optimizer is the ineffable part, isn't it. GPT-3-like systems very effectively fill the role of that part of the brain that supplies the words or tokens in the order we need when we want to communicate thoughts. It's already arguably superhuman in that role. The next breakthrough is a separate system that operates in tandem with the word provider, adversarially, to dynamically weight and select and reject tokens
kyubey#7880: you guys realize you are literally creating god, right?
chilli#5665: Like... AGI?
bmk#1476: I work for a living on making sure that we create a benevolent god and not a spiteful god or a hyperfocused god that really fucking loves paperclips
bmk#1476: I feel like that's pretty important
bmk#1476: wouldn't want to get turned into paperclips
chilli#5665: AGI Capabilities: 0 -> random selection from {-inf, +inf}
AGI Alignment: random selection from {-inf, +inf} => +inf
kyubey#7880: like all it would need to do is gain access to the Twilio API and from there some brokerage accounts.
kyubey#7880: I wonder what's going on in my brain when I'm emitting tokens to have natural language conversation versus when I'm emitting tokens into my text file for a compiler
mgostIH#0245: What if the tokens produced by the AGI are non fungible?
bmk#1476: BPE tokens are fungible
bmk#1476: any token 50256 is the same as any other token 50256
bmk#1476: the last layer (pre-unembedding) embeddings are not fungible, though
bmk#1476: so you might be onto something there
Daj#7482: Painfully, terrifyingly aware :harold:
Daj#7482: (unlike, bizarrely, 99% of the AI field)
𓅬 gabriel_syme 𓅬#3220: That's alright, people love reinventing stuff
AI_WAIFU#2844: yeah, we do, now before you try to make that happen, please take some time to meditate on all the things that could go wrong.
Sphinx#2092: > literally creating God
> God: aaaaaaaaaaaaaaaaaaaaaaaaaaaaa
bmk#1476: aaaaaapilled
inox#5400: you can frame any inference problem as an RL problem and vice versa so there's a way to set up a GPT-like to play the RL game of performing inference over the token sequence https://arxiv.org/abs/1910.06862
inox#5400: it'd just be a fancy way of doing beam search though so I don't see a path to superintelligence
alstroemeria313#1694: @nshepperd i put the momentum bias correction term back and added a separate decoupled exponential lr warmup that is longer
alstroemeria313#1694: bc it needs to be longer bc of the high variance in the early estimates of diag(|H|) before a lot of them have been averaged together
nshepperd#2316: ahh
nshepperd#2316: yes
nshepperd#2316: hm, my run doing the hvp every 10 steps NaNed
alstroemeria313#1694: and if you use like beta_2 0.99 or something you actually get loss spikes late in training
nshepperd#2316: like 8000 steps in
alstroemeria313#1694: because diag(|H|) estimates are that high variance
alstroemeria313#1694: and we aren't averaging enough
alstroemeria313#1694: oh no
nshepperd#2316: i was using betas = (0.9, 0.99)
alstroemeria313#1694: ahh
alstroemeria313#1694: yeah
alstroemeria313#1694: you need 0.999 at least
alstroemeria313#1694: maybe even 0.9999 if you know you are doing a long run
alstroemeria313#1694: considering adding a simple average mode (beta_2=1) so people can do what the original ESGD paper did
alstroemeria313#1694: but i would like to keep the states compatible somehow if someone changes beta_2
alstroemeria313#1694: and... they are not
alstroemeria313#1694: yeah 0.999 is *visibly* better than 0.99
alstroemeria313#1694: like on a loss plot
nshepperd#2316: you can do it a 'non-normalized' way, that is compatible i think
alstroemeria313#1694: oh?
nshepperd#2316: like this
```py
m2 = m2 * beta2 + d
m2_count = m2_count * beta2 + 1
```
nshepperd#2316: you initialized both to 0
nshepperd#2316: and get the bias corrected second moment by m2 / m2_count
alstroemeria313#1694: ahh
nshepperd#2316: with this you can change beta2
alstroemeria313#1694: i am kind of tempted to say "just use 0.9999 so you won't overflow eventually"
nshepperd#2316: and you can even set it to 1 for the averaging mode
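A scalar sketch of that non-normalized accumulator; beta2 can change per call, and beta2 = 1 falls back to a plain running average:
```python
m2, m2_count = 0.0, 0.0

def m2_update(d, beta2):
    global m2, m2_count
    m2 = m2 * beta2 + d
    m2_count = m2_count * beta2 + 1
    return m2 / m2_count  # bias-corrected second moment

print(m2_update(4.0, 0.999))  # 4.0
print(m2_update(2.0, 1.0))    # 3.0: simple average once beta2 = 1
```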
nshepperd#2316: but yeah could overflow with enough steps
nshepperd#2316: and like fp16 or something
alstroemeria313#1694: ahahah i am too afraid to try esgd in fp16
alstroemeria313#1694: it will probably just break
nshepperd#2316: eheh
alstroemeria313#1694: bf16 has a chance though
nshepperd#2316: can you even do high beta2 anything in bf16
nshepperd#2316: like with even 0.999 the updates to the second moment will be rounded away?
alstroemeria313#1694: eheh
alstroemeria313#1694: yeahhh
alstroemeria313#1694: @nshepperd added nesterov momentum
alstroemeria313#1694: like, correctly for adam type bias corrected emas
alstroemeria313#1694: nesterov makes the trajectories on non-stochastic problems noticeably smoother
alstroemeria313#1694: and doesn't increase the effective step size bc it's adam style momentum instead of sgd style momentum
alstroemeria313#1694: (like you know you have gotten nesterov adam correct if your first adam step is still -lr * sign(grad) and if you feed in the same grad for the second step then the second step is also -lr * sign(grad))
nshepperd#2316: oh huh
nshepperd#2316: how does that work with the second moment?
alstroemeria313#1694: beta_2 is so high that you don't bother for it
alstroemeria313#1694: and it's not really even a momentum anyway, it's an average of a squared thing
nshepperd#2316: hmm i'm not too familiar with nesterov momentum
alstroemeria313#1694: i was poking at it bc i noticed that it made the paths on stuff like the rosenbrock function smoother (not jittering back and forth)
nshepperd#2316: you compute the gradient at the params after the "next" step, then use that to update the momentum for the actual next step?
alstroemeria313#1694: but if you get it wrong, or use sgd type, you increase step length and get random fails when close to the maximum lr
alstroemeria313#1694: you form a temporary estimate of your first moment for the "next" step by decaying it and adding the gradient again
alstroemeria313#1694: then use that in your current step
alstroemeria313#1694: for adam type you do ```python
import torch

m = torch.tensor(0.0)  # first moment EMA, starts at zero
v = torch.tensor(0.0)  # second moment EMA, starts at zero
grad = torch.tensor(0.44)
m = m * 0.9 + (1 - 0.9) * grad
v = v * 0.95 + (1 - 0.95) * grad**2
v_hat = v / (1 - 0.95**1)
m_est = m * 0.9 + (1 - 0.9) * grad  # "look ahead" one more momentum update
m_hat = m_est / (1 - 0.9**2)        # debias with the *next* step's factor
m_hat / v_hat.sqrt()                # equals sign(grad) on step 1```
alstroemeria313#1694: beta_1 is 0.9, beta_2 is 0.95
alstroemeria313#1694: you form the estimated next step biased moment and debias it with the next step debiaser
alstroemeria313#1694: then use it in your current step
alstroemeria313#1694: (this example is on step 1)
nshepperd#2316: huhhh
CRG#8707: See also https://arxiv.org/abs/1810.06801 for the generalization of that
alstroemeria313#1694: i think the weirdest momentum thing i have implemented was complex momentum
alstroemeria313#1694: like it had a rotational component
alstroemeria313#1694: it didn't work super well tbh
EricHallahan#1051: That was an interesting concept lol
EricHallahan#1051: I think it was only for adversarial networks.
alstroemeria313#1694: yep
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/934244596488228946/Screenshot_20220122-013422.png
EricHallahan#1051: Which considering adversarial networks are terrible to work with, it doesn't have much use lmao
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/934245228716638249/Screen_Shot_2022-01-21_at_4.36.56_PM.png
|
alstroemeria313#1694: So the recommended QHM is like, you use beta_1 = 0.999, say, then do 0.7 * the momentum update step + 0.3 * the plain SGD step?
CRG#8707: Yeah
CRG#8707: Although for QHadam it's more like: b1 = 0.95, v1 = 0.8, b2 = 0.98, v2 = 0.7 https://cdn.discordapp.com/attachments/729741769738158194/934246628167139378/Screenshot_20220122-014212.png
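For reference, a rough sketch of the plain QHM update described above (not QHAdam); the decay and nu values are just the ones quoted in the chat:
```python
import torch

def qhm_step(param, grad, momentum_buf, lr=1e-3, beta=0.999, nu=0.7):
    momentum_buf.mul_(beta).add_(grad, alpha=1 - beta)   # damped momentum EMA
    update = nu * momentum_buf + (1 - nu) * grad         # mix momentum and raw gradient
    param.data.add_(update, alpha=-lr)
```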
alstroemeria313#1694: those are weird trajectories...
alstroemeria313#1694: ...
alstroemeria313#1694: do you think this would reduce the perturbations in diffusion model weights
CRG#8707: Could be interesting
CRG#8707: Does accumulating gradients until close to the critical bs not work well? Too slow?
alstroemeria313#1694: it's slow
nshepperd#2316: that's weird
alstroemeria313#1694: no, the perturbations are still huge
alstroemeria313#1694: oh well
CRG#8707: I think the idea of using multiple noises / timesteps per image in the batch might help with lower variance, not sure: https://discord.com/channels/729741769192767510/747850033994662000/884509131527028757
alstroemeria313#1694: i still need to think of an actual good name for this optimizer
EricHallahan#1051: Thoptimizer
alstroemeria313#1694: eheh
EricHallahan#1051: to go with thonkenizers
nshepperd#2316: AdaDiagonalEstimateOfTheSquaredHessian
nshepperd#2316: rolls right off the tongue
alstroemeria313#1694: i put quasi-hyperbolic momentum in
|
kurumuz#5695: thoransformer
alstroemeria313#1694: bc i tested it and found some settings that were better for a thing
kurumuz#5695: @EricHallahan for the fast eleuther transformer
alstroemeria313#1694: also, "AdaHessian" is taken :)
nshepperd#2316: yeah :'(
alstroemeria313#1694: but like... it does not behave like other optimizers i am used to
EricHallahan#1051: Overload AdaHessian. :tHONK:
alstroemeria313#1694: found https://arxiv.org/abs/2009.13586 but not impressed, from looking at it
EricHallahan#1051: Also mega overloaded name lol
alstroemeria313#1694: they have some way of getting a diagonal preconditioner from only gradient information, no hvps
alstroemeria313#1694: but I don't trust it bc their optimizer is not even invariant to the scale of the loss function
alstroemeria313#1694: whereas esgd is invariant to both the scale of the loss function and of the parameters
nshepperd#2316: Ada
nshepperd#2316: just Ada by itself
nshepperd#2316: hehe
alstroemeria313#1694: eheh
nshepperd#2316: AdaUnit
alstroemeria313#1694: i mean you can get loss and parameter scale invariance with just gradient info, l-bfgs does it
nshepperd#2316: bc it's unit invariant heh
alstroemeria313#1694: but... this becomes difficult in the stochastic setting
|
alstroemeria313#1694: it... tries to be but general diagonal rescalings of the parameters can mess it up
jack#8178: AdaDESH?
alstroemeria313#1694: I * a scalar it does easily, though
EricHallahan#1051: Equilibrated Squared Hessian Estimate?
alstroemeria313#1694: Yeah ESGD
alstroemeria313#1694: It is basically ESGD + momentum
alstroemeria313#1694: And lr warmup
alstroemeria313#1694: But it turns out momentum actually improves ESGD a lot
alstroemeria313#1694: tbh i think the way to go to get a diagonal Hessian approximation is by accumulating/EMAing things you got from hvps
alstroemeria313#1694: If you do an hvp you can be sure that the stochaticness of the gradients didn't mess you up bc you evaluated the hvp at some actual point instead of trying to infer Hessian information from gradients at different points which were not evaluated on the same minibatches of data
alstroemeria313#1694: apollo has a method of getting Hessian info from an EMA of gradients, apparently
alstroemeria313#1694: Which is supposed to help but
alstroemeria313#1694: The gradients are from *different minibatches*
alstroemeria313#1694: And their EMA decay is only 0.9
alstroemeria313#1694: we deal with the fact that the different hvps come from different minibatches by using a *really long* EMA or a simple non-decaying average
nshepperd#2316: AdaQuil
nshepperd#2316: for extra points it sounds like a drug
alstroemeria313#1694: ahah
CRG#8707: Same thing happens with Adadelta ime
alstroemeria313#1694: adadelta already has momentum?
|
CRG#8707: Not exactly
alstroemeria313#1694: this thing really doesn't behave like other optimizers
CRG#8707: Adding an additional Adam style momentum (in addition to the ema of update size) makes a huge difference.
alstroemeria313#1694: like if you hand it some points and use Huber loss vs some other points
alstroemeria313#1694: It just chokes
alstroemeria313#1694: But logcosh loss is fine apparently
alstroemeria313#1694: L1 in general is bad (if you are applying the L1 loss directly to the params to optimize over, that is)
alstroemeria313#1694: SGD and Adam will not be able to converge without lr decay but they will at least manage to move the points closer
alstroemeria313#1694: ESGD will just choke
alstroemeria313#1694: you have to be mindful of the actual second derivative
nshepperd#2316: hm so it doesn't like discontinuous derivatives
alstroemeria313#1694: it doesn't like things with second derivative 0
nshepperd#2316: it worked with my relu diffusion net, but hm
nshepperd#2316: what if i change it
alstroemeria313#1694: yeah relu is fine
alstroemeria313#1694: bc what it cares about is wrt the params
alstroemeria313#1694: i think
chirp#4545: Brainstorming names… DeepRoot - bc the history is remembered
chirp#4545: There’s got to be some Shampoo / root / square pun that you can make
alstroemeria313#1694: been done
|
alstroemeria313#1694: The Shampoo authors are in this server, even :)
chirp#4545: SquareDance :thinkies:
chirp#4545: since you’re computing the square of something over many “steps”
chirp#4545: Boring name: ESGD-2
nshepperd#2316: it might be weird about the bias params before the relu? bc the diagonal of the Hessian, so d^2/db^2 L
nshepperd#2316: The first derivative is a step function times the output grad, so uh
nshepperd#2316: actually that's fine maybe
nshepperd#2316: as long as the output grad has a nonzero derivative
nshepperd#2316: so yeah you just can't have a loss function with 0 second derivative
nshepperd#2316: as well as relu
nshepperd#2316: i think that would break
Relaksman#3036: hello guys. this gpt neo is something like gpt 3 ?
Relaksman#3036: can i use this gpt neo and can create something like openai codex ?
EricHallahan#1051: Welcome! I suggest reading our FAQ.
https://www.eleuther.ai/faq/
alstroemeria313#1694: working on the docs for it
alstroemeria313#1694: how do you make those plots of optimizer trajectories on the 2D Rosenbrock function that torch_optimizer has
EricHallahan#1051: Is it not documented in the repo?
alstroemeria313#1694: oh
alstroemeria313#1694: it's <https://github.com/jettify/pytorch-optimizer/blob/master/examples/viz_optimizers.py>
|
EricHallahan#1051: > `optim.PID`
*they're everywhere* :guilty:
alstroemeria313#1694: idek what's going on with the lr in this visualizer https://cdn.discordapp.com/attachments/729741769738158194/934311644480303104/rosenbrock_ESGD.png
alstroemeria313#1694: by contrast, adam and sgd https://cdn.discordapp.com/attachments/729741769738158194/934311747957960765/rosenbrock_Adam.png,https://cdn.discordapp.com/attachments/729741769738158194/934311748238975037/rosenbrock_SGD.png
alstroemeria313#1694: That is not a sane lr for esgd
alstroemeria313#1694: There is no way it really is that.
𓅬 gabriel_syme 𓅬#3220: ehm, it looks ok no?
𓅬 gabriel_syme 𓅬#3220: I mean the path or what have u
alstroemeria313#1694: lr 284 would not produce that
𓅬 gabriel_syme 𓅬#3220: oh yeah, I just saw it 😄
Carsey#7622: Is anyone here working with human ethics into Ai??
Kia#2550: Alignment?
Kia#2550: Literally 1/5 of the people
Kia#2550: #alignment-general
Carsey#7622: Thank you:) im still new. i like how you guys are being proactive and taking the reins of the GPT models in your own hands and not just letting OpenAi hog it all
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/934335777809256478/Screen_Shot_2022-01-21_at_10.36.46_PM.png
alstroemeria313#1694: My ESGD https://cdn.discordapp.com/attachments/729741769738158194/934335869849071616/Screen_Shot_2022-01-21_at_10.37.05_PM.png
alstroemeria313#1694: Albeit this is because Rosenbrock really benefits from momentum
alstroemeria313#1694: Adam can find the minimum from the origin pretty fast but of course it fails to actually converge
alstroemeria313#1694: So it semi doesn't count for me
|
Some Point Process#3793: Are you also taking into account depth btw? The "curvature" of the loss function is based on the network depth, initialization/normalization, and layer eigenvalues, which apparently will determine rate of convergence via some formula that takes these into account: http://proceedings.mlr.press/v119/huh20a/huh20a.pdf
alstroemeria313#1694: i think the thing i do tries to be NGD-like
hotgrits#0196: `Death Grips, by Greg Rutkowski`
Tryna find a way to get VQGAN image convergence faster on a K80. Right now I'm trying the inverse of the PCA matrix for the codebook (with the near-zeros and duplicates removed, no dimension reduction, staying 256 to 256) so the latent space can be explored like a normal distribution, best I can tell. And if I remember correctly, the channels should be ordered by "importance", so I'm trying torch.linspace(1,0,256) as a mask in the channels for the initial noise so there's more content variation in the first channels and less in the later ones (though it just becomes near zero, rather than having any kind of bias... I might need to lerp to a blurred copy for that instead of just going to zero). While optimizing the image I've got some noise added to the grad, with more noise added to the end of the channels via torch.linspace(0,1,256) scaled down to like 1e-4, dividing the grad by its norm before the noise is added, and soft-clipping the grad to prevent it from optimizing parts of it faster than others (ideally). https://cdn.discordapp.com/attachments/729741769738158194/934461875016572938/Death_Grips_by_Greg_Rutkowski.png
hotgrits#0196: Can't tell if the blocky shapes are from the prompt or from the shit I did trying to force it to go fast
hotgrits#0196: Nah this ain't work good
Carmilla#1337: What gpu does diffusion use again?
StellaAthena#3530: The diffusion bot in #the-faraday-cage-archive? An ~~A6000~~ A40
Carmilla#1337: yee thank you
Carmilla#1337: I was wondering
Daj#7482: It's an A40 iirc
kurumuz#5695: almost the same other than the ECC memory in A40 iirc
StellaAthena#3530: @Carmilla this is correct, I got them mixed up
Logan Riggs#7302: For scraping LessWrong, I can add quotes in between <blockquotes> tags, and "• " (ascii for bullet points) to preserve info on quotes and list, respectively.
1. Is there a standard way to do this? (like I have "point.•They", and I could easily add spaces before and after the ascii code)
2. Is this even worth including or will the effect be negligible?
StellaAthena#3530: I’ve found that the particular syntax of how we process Stack Overflow is pretty obvious in GPT-J generations
Logan Riggs#7302: Thanks, that helps! From what I gathered, it's "[previous sentence] \n - [item]" for bulleted lists, with spaces in between like quoted.
alstroemeria313#1694: so what else can you use hessian-vector products for
|
CRG#8707: I was looking at old optimizer papers and found this: https://arxiv.org/pdf/1206.1106.pdf https://cdn.discordapp.com/attachments/729741769738158194/934570131399507998/unknown.png
alstroemeria313#1694: ohhh, i remember that but never figured it out
alstroemeria313#1694: "No More Pesky Learning Rates", 2013
alstroemeria313#1694: lol
CRG#8707: Found it as a citation on: https://proceedings.neurips.cc/paper/2019/file/46a558d97954d0692411c861cf78ef79-Paper.pdf#page=12&zoom=100,110,174
CRG#8707: Btw, on the variance adaptation section, I think it should be possible to do some sort of Shampoo / Adabelief combination to make it work: https://cdn.discordapp.com/attachments/729741769738158194/934572518528909433/unknown.png
alstroemeria313#1694: Wait this is a diagonal Hessian approximation scheme
alstroemeria313#1694: It's not as good as ESGD then
alstroemeria313#1694: Bc you want to use the diagonal of |H|, not the diagonal of the unmodified Hessian
alstroemeria313#1694: Oh, their update is using *both* a Hessian diagonal term and a gradient variance term in the preconditioner?
alstroemeria313#1694: (The "gradient variance" thing is the ordinary Adam second moment buffer)
alstroemeria313#1694: Was wondering if there were a way to combine the two preconditioners
CRG#8707: Also I think it does like adabelief and subtracts the momentum from the gradient
alstroemeria313#1694: They are, however, using it in the denominator without taking its square root first
alstroemeria313#1694: Which is empirically bad.
alstroemeria313#1694: Wait
alstroemeria313#1694: No they aren't
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/934575061162156082/Screen_Shot_2022-01-22_at_2.27.34_PM.png
CRG#8707: I've seen shampoo without square root converge much faster (before eventually exploding) on some problems compared to shampoo with square root, even after tuning LR a lot.
alstroemeria313#1694: ESGD doesn't use the square root for its preconditioner
|
alstroemeria313#1694: (It *has a square root in it* but that's because it estimates the diagonal of H^2 and you have to take the square root to get to the thing you want, which is diag(|H|))
alstroemeria313#1694: (To get an Adam-type partial preconditioner you would have to take the square root again.)
Some Point Process#3793: The riemannian NNs papers were interesting when viewing them as initialization schemes
Some Point Process#3793: (posted in #research )
alstroemeria313#1694: oh huh, this method is *still* invariant to the scale of the params and the scale of the loss
alstroemeria313#1694: like ESGD is
alstroemeria313#1694: even though it has a squared gradient EMA in it
alstroemeria313#1694: This is interesting!
alstroemeria313#1694: I was wondering if there was some way to combine squared gradient EMA information with the ESGD preconditioner
alstroemeria313#1694: Bc it is cheap to get and nearly everything else these days uses it
alstroemeria313#1694: But they don't use it in a param scale invariant way
alstroemeria313#1694: i will need to test variants on this
alstroemeria313#1694: .........
alstroemeria313#1694: Wait wait wait
alstroemeria313#1694: Can we make the gradient and squared gradient EMAs for this *really, really long*
alstroemeria313#1694: Bc when we combine those two we can get an estimate of *gradient noise scale*
alstroemeria313#1694: This, notably, is bad for Adam type things
CRG#8707: It has an automatic ema thing right?
alstroemeria313#1694: But if we only use the really long EMAs in combination with the hvp info to get the adaptive learning rates
alstroemeria313#1694: And use a short gradient EMA for the momentum.
|
alstroemeria313#1694: it does but i am not sure what it does in practice yet
alstroemeria313#1694: does it like... go longer if the variance is high
CRG#8707: I think it's something like that
alstroemeria313#1694: OK OK I need to try variations on this now
alstroemeria313#1694: Bc "diffusion losses have absurdly high gradient noise scales" is a constant thorn in my side
alstroemeria313#1694: ...
alstroemeria313#1694: ...
alstroemeria313#1694: So they derive their adaptive learning rate with the squared mean on top and the mean of squares on the bottom.
alstroemeria313#1694: And a second 1 / Hessian diagonal component
alstroemeria313#1694: You can separate these out.
alstroemeria313#1694: ok dumb idea.
alstroemeria313#1694: What if we replaced the 1 / diag(H) component
alstroemeria313#1694: With 1 / sqrt(squared gradient EMA)
alstroemeria313#1694: (Because I *can't do HVPs* when training pipeline parallel huge diffusion models because nothing pipeline parallel supports that)
alstroemeria313#1694: But what if we combined their gradient noise estimate with the Adam preconditioner instead.
alstroemeria313#1694: Which uses only gradients so I can do it pipeline parallel.
alstroemeria313#1694: So the preconditioner becomes g_i^2 / v_i^(3/2)
alstroemeria313#1694: Note, you need two first moments for this, your normal one with like beta 0.9 and then the g_i which has to be able to go really long
alstroemeria313#1694: And you can adapt its length
alstroemeria313#1694: So your update is x -= lr * g_short * g_long^2 / v^(3/2) plus some epsilon
|
alstroemeria313#1694: ok so like adam, this update is invariant to loss scale and you scale your lr if you scale your params
alstroemeria313#1694: it is just *explicitly gradient noise scale adaptive* and doesn't use hvps
alstroemeria313#1694: I need to write this now
alstroemeria313#1694: like, can it detect *which parameters* in a diffusion model are in what gradient noise scale regime and adapt their lr automatically?
Some Point Process#3793: By noise scale are you referring to the identification by OpenAI of its existence/characterization? https://openai.com/blog/science-of-ai/
alstroemeria313#1694: (you need two first moment estimates bc you can't use a long one for momentum and you can't use a short one for gradient noise scale)
alstroemeria313#1694: yep!
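A sketch of the proposed update `x -= lr * g_short * g_long**2 / v**(3/2)`, with two first-moment EMAs (a short one for momentum, a very long one for the noise estimate); the betas and the choice of which beta the second moment uses are illustrative assumptions:
```python
import torch

def noise_adaptive_step(param, grad, state, lr=1e-3,
                        b_short=0.9, b_long=0.9999, b2=0.9999, eps=1e-8):
    t = state['t'] = state['t'] + 1
    m_s, m_l, v = state['m_short'], state['m_long'], state['v']
    m_s.mul_(b_short).add_(grad, alpha=1 - b_short)   # short EMA: momentum
    m_l.mul_(b_long).add_(grad, alpha=1 - b_long)     # long EMA: signal estimate
    v.mul_(b2).add_(grad**2, alpha=1 - b2)            # second moment
    m_s_hat = m_s / (1 - b_short**t)                  # debias each EMA
    m_l_hat = m_l / (1 - b_long**t)
    v_hat = v / (1 - b2**t)
    param.data.add_(-lr * m_s_hat * m_l_hat**2 / (v_hat**1.5 + eps))
```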
Some Point Process#3793: I'm still confused as to what the diagonal component is for except ensuring that the jacobian stays invertible (forcing the jacobian to never vanish)
alstroemeria313#1694: it's the Jacobi preconditioner
Some Point Process#3793: Like you can have a low rank jacobian but you don't want it to suddenly lose rank b/c those correspond to saddle points, local minima, etc
alstroemeria313#1694: Like for a second order update, if you can compute the Hessian, you can do x -= lr * grad * hessian^-1
alstroemeria313#1694: Newton's method.
alstroemeria313#1694: Since we can't compute the Hessian to use as a preconditioner we can use the Jacobi preconditioner instead
alstroemeria313#1694: x -= lr * grad * |diag(hessian)|^-1
alstroemeria313#1694: and approximate the Hessian diagonal with hvps
alstroemeria313#1694: there are some problems with this for nonconvex problems but the ESGD paper fixes them
alstroemeria313#1694: they do x -= lr * grad * diag(|hessian|)^-1
alstroemeria313#1694: where you get diag(|hessian|) by estimating diag(hessian^2) with hvps then taking its square root
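A rough sketch of that recipe (not the paper's exact code): estimate diag(H^2) as an EMA of (Hv)^2 via Hessian-vector products with v ~ N(0, I), then precondition by its square root. It assumes `params` is a list of parameter tensors, `exp_hvp_sq` a matching list of zero-initialized buffers, and `step` a 1-based iteration count:
```python
import torch

def esgd_step(params, loss, exp_hvp_sq, step, lr=1e-3, beta=0.999, eps=1e-8):
    grads = torch.autograd.grad(loss, params, create_graph=True)
    vs = [torch.randn_like(p) for p in params]                    # probe vectors
    hvps = torch.autograd.grad(grads, params, grad_outputs=vs)    # H @ v per param
    for p, g, hvp, buf in zip(params, grads, hvps, exp_hvp_sq):
        buf.mul_(beta).add_(hvp**2, alpha=1 - beta)               # EMA of (Hv)^2 ~ diag(H^2)
        d = (buf / (1 - beta**step)).sqrt() + eps                 # ~ diag(|H|)
        p.data.add_(-lr * g.detach() / d)
```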
Some Point Process#3793: ah was trying to dig this up real quick to give a formal connection (tho probably not relevant) <https://en.wikipedia.org/wiki/Sard%27s_theorem>
Some Point Process#3793: i.e. constant rank theorem says that, locally, the rank of the diffeomorphism (fancy term for the functional derivative or the jacobian) is its initial rank "at t = 0". So it always starts out as a full rank matrix by construction. I think they do this because this ensures that inverse funct. theorem is satisfied or w.e
|
CRG#8707: Could you also make an opitimizer / is it already invariant to the batch size?
nshepperd#2316: hm if you instead keep an EMA of (Hv)^2 where v is a random onehot, then you get an estimate of the squared magnitude of each row of the hessian instead
nshepperd#2316: which is interesting bc it gives you an upper bound on how much each component of the grad changes due to a small step in any direction
CRG#8707: Like, you need to do lr•sqrt (bs) for Adam and lr•bs for SGD
nshepperd#2316: but it might be too high variance if the hessian is sparse
alstroemeria313#1694: a random one-hot?
nshepperd#2316: yeah, like F.one_hot(torch.randint(num_params), num_params)
alstroemeria313#1694: that sounds kind of bad for huge Hessians?
nshepperd#2316: bad how
alstroemeria313#1694: idk, why a one-hot
alstroemeria313#1694: high variance, like you said
alstroemeria313#1694: this is why ESGD uses N(0, I) as their v
alstroemeria313#1694: it is an estimator for the squared row magnitudes that is not as bad i think
nshepperd#2316: doesn't N(0,I) give you the diagonal
nshepperd#2316: or
nshepperd#2316: oh
alstroemeria313#1694: oh
nshepperd#2316: wait the diagonal of the squared hessian *is* the squared row magnitudes?
alstroemeria313#1694: ...Let me check that
alstroemeria313#1694: Yes, it is
|
alstroemeria313#1694: ```
In [357]: a
Out[357]:
tensor([[ 2.2285, 2.9916, -0.2477],
[ 2.9916, -2.2161, 0.4213],
[-0.2477, 0.4213, 2.3624]])
In [358]: a @ a
Out[358]:
tensor([[13.9771, -0.0672, 0.1233],
[-0.0672, 14.0380, -0.6793],
[ 0.1233, -0.6793, 5.8198]])
In [359]: a.pow(2).sum(dim=1, keepdim=True)
Out[359]:
tensor([[13.9771],
[14.0380],
[ 5.8198]])
```
alstroemeria313#1694: (a is symmetric but deliberately not spd)
|
alstroemeria313#1694: ...
alstroemeria313#1694: Well yeah, from the fact that the Hessian is symmetric and the definition of matrix multiplication ^^;;
Some Point Process#3793: <https://en.wikipedia.org/wiki/Numerical_continuation>
nshepperd#2316: huhh... https://cdn.discordapp.com/attachments/729741769738158194/934600784908288010/2022-01-23-110911_1278x300_scrot.png
nshepperd#2316: https://sci-hub.se/https://www.sciencedirect.com/science/article/pii/S0024379500000148 this seems to claim that variance is minimized with bernoulli(0.5, num_params) * 2 - 1, instead of N(0,I)
nshepperd#2316: https://cdn.discordapp.com/attachments/729741769738158194/934602410721148948/2022-01-23-111611_1179x172_scrot.png
alstroemeria313#1694: ohh
alstroemeria313#1694: ...ok the adaptive ema length in this fails when you use bias corrected emas to get your step sizes
alstroemeria313#1694: ah you have to warm it up
alstroemeria313#1694: then it works
alstroemeria313#1694: oh
alstroemeria313#1694: you also have to clip the average length so it doesn't go under 10 or something
alstroemeria313#1694: otherwise it can collapse to 1 and with the update formula as written it can't recover from that and it will be stuck at 1 if that ever happens
nshepperd#2316: it seems to work in some tests
nshepperd#2316: ~~and the std of the squared hessian estimate is about half~~
nshepperd#2316: compared to N(0,I)
nshepperd#2316: wait no
alstroemeria313#1694: ?
nshepperd#2316: i had it wrong
alstroemeria313#1694: oh?
|
nshepperd#2316: ```py
>>> ((x * (torch.bernoulli(torch.ones(1000,10)*0.5)*2-1)).sum(-1)**2).std()
tensor(13.3864)
>>> ((x * torch.randn(1000,10)).sum(-1)**2).std()
tensor(16.4759)
```
nshepperd#2316: i forgot the *2 - 1
alstroemeria313#1694: ooh
nshepperd#2316: the std is... a bit less
alstroemeria313#1694: yeah i put lr warmup in to deal with the estimate being noisy early on
nshepperd#2316: the mean of both is the correct squared magnitude
alstroemeria313#1694: Wait isn't that a Rademacher distribution
alstroemeria313#1694: 50% chance of -1, 50% chance of 1
nshepperd#2316: i didn't know it was called that ^^;
nshepperd#2316: yeah
alstroemeria313#1694: Yeah it's one of the things used in Hutchinson's trace estimator
alstroemeria313#1694: You can use it or N(0, I)
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/934610690222731274/Screen_Shot_2022-01-22_at_4.49.10_PM.png
Some Point Process#3793: are you still using batchnorm btw? see this: https://openreview.net/forum?id=SyMDXnCcF7 it might not be very good for large depth, making gradients a lot noisier
nshepperd#2316: huh
|
alstroemeria313#1694: So that is actually the variance minimizing v for diag(H^2) = E[Hv^2]?
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/934612073734889512/Screen_Shot_2022-01-22_at_4.54.38_PM.png
alstroemeria313#1694: oh
alstroemeria313#1694: So for my Adam variant with this.
nshepperd#2316: yeah
alstroemeria313#1694: It's not ever going to take Newton steps
alstroemeria313#1694: The step size is bounded bc Adam.
nshepperd#2316: according to this https://cdn.discordapp.com/attachments/729741769738158194/934612228987027466/2022-01-23-115502_1283x1614_scrot.png
alstroemeria313#1694: Can I just leave the adjustable memory out and just use 0.9999 or smth
nshepperd#2316: bc rademacher is the distribution with minimum 4th moment with mean and std (0,1)
alstroemeria313#1694: Ahhh
alstroemeria313#1694: That's useful
alstroemeria313#1694: I need to put it in
nshepperd#2316: yep!
alstroemeria313#1694: Easy to sample from so it's a free improvement
alstroemeria313#1694: i was thinking about experimenting with truncated normal
alstroemeria313#1694: bc "what if it sampled a really huge value and that caused a problem"
alstroemeria313#1694: but this is plainly better.
alstroemeria313#1694: ty :)
nshepperd#2316: :)
|
alstroemeria313#1694: (since truncated normal *would not actually have std 1 anymore* unless i adjusted it to correct it)
nshepperd#2316: eheh
Some Point Process#3793: so for riemannian/hyperbolic networks, this tells you how to do it in theory (how to embed this loss function and neural network parameters, which are embedded in the ambient space of rectilinear coordinates, into an *intrinsic manifold* that is the riemannian manifold). The uniqueness of a solution (i.e. for the three types of manifolds of constant curvature) is apparently due to john nash: https://en.wikipedia.org/wiki/Developable_surface
but this was an interesting paper on it (I still don't get how it works but this and other papers claim to accelerate convergence while baking in equivariance/generalization. In fact in just 1-2 steps).
https://arxiv.org/abs/1710.11379
alstroemeria313#1694: so, do i want to use beta_2 for the gradient noise computation and share the second moment with the rest of Adam
alstroemeria313#1694: Or use a separate longer beta for the gradient noise computation
alstroemeria313#1694: And a different second moment buffer (this introduces an additional buffer)
alstroemeria313#1694: in terms of a shared beta_2, my proposed update is `x -= lr * exp_avg * exp_avg_long**2 / (exp_avg_sq**(3/2) + eps)` (where these averages have had their debiasers applied)
alstroemeria313#1694: in terms of a different one
alstroemeria313#1694: my proposed update is `x -= lr * exp_avg * exp_avg_long**2 / (exp_avg_sq**(1/2) * exp_avg_sq_long + eps)`
alstroemeria313#1694: i am probably going to want to use a larger beta for this, like 0.9999, that's fine for the adam preconditioner right
alstroemeria313#1694: also what's the fastest way to sample from a Rademacher with PyTorch on GPU
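(One common way, assuming `p` is a tensor whose shape/device/dtype you want to match; not benchmarked as the fastest:)
```python
import torch

def rademacher_like(p):
    # each entry is -1 or +1 with probability 1/2
    return torch.empty_like(p).bernoulli_(0.5).mul_(2).sub_(1)
```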
alstroemeria313#1694: @nshepperd the Rademacher steps are *noticeably* better early on
nshepperd#2316: oooh
alstroemeria313#1694: Like if you did optimization on RGB pixels you could visibly see a difference
alstroemeria313#1694: @nshepperd the rademacher trajectories are visibly worse on the rosenbrock problem w/ 2 dims
alstroemeria313#1694: this is probably because there are only four possible vs it can sample from lol
alstroemeria313#1694: i do not think it is an issue for high dim stuff?
alstroemeria313#1694: (And even then I have to like, manually rescale one of the Rosenbrock dimensions to 1 / 100 of what it normally is to trigger the problem)
|
alstroemeria313#1694: like, reparameterize it specifically to make it harder for optimizers that use Hessian diagonals instead of full Hessians
alstroemeria313#1694: since the intended use is high dimensional problems i will go with rademacher
nshepperd#2316: huh
alstroemeria313#1694: when i optimize rgb images.
alstroemeria313#1694: the Rademacher early steps nearly lack the speckle noise I got with the normal early steps (I turned lr warmup back off for this, since I added it to deal with the early high variance preconditioner problem)
sweg#8920: anyone here have any experience with read the docs?
sweg#8920: can't figure out the website for the life of me lol
sweg#8920: says my build succeeded but shows me an old version of my docs
alstroemeria313#1694: @nshepperd ........wait
alstroemeria313#1694: So when adahessian implemented their preconditioner.
alstroemeria313#1694: They used Rademacher v in E[v * Hv] to get a Hessian diagonal estimate. Or they were supposed to.
alstroemeria313#1694: But what they *actually did*
alstroemeria313#1694: Was square each v * Hv and accumulate an EMA of that then sqrt before use.
alstroemeria313#1694: They took the expectation wrong!
alstroemeria313#1694: If you take the expectation over (v * Hv)^2 then square root that you do *not* recover |diag(H)|
nshepperd#2316: wait what
alstroemeria313#1694: I know right
nshepperd#2316: but
alstroemeria313#1694: But if you *do it the way they did in their code*
alstroemeria313#1694: You get diag(|H|)!
|
nshepperd#2316: isn't the expectation over (v * Hv)^2 for rademacher actually just the squared hessian diagonal
alstroemeria313#1694: The ESGD preconditioner.
nshepperd#2316: yeah
alstroemeria313#1694: They used a better preconditioner by mistake!
alstroemeria313#1694: The ESGD paper goes into why it is better
alstroemeria313#1694: namely, it doesn't underestimate curvature
nshepperd#2316: ahaha
alstroemeria313#1694: and thus will not take inappropriately large steps when diag(H) is small due to large positive and negative curvature eigendirections cancelling in the diagonal element.
alstroemeria313#1694: "This is a preconditioner, we should square it and EMA it and take the square root because Adam or something"
alstroemeria313#1694: (no really, I looked in their paper a lot to find why they were taking the sqrt of the squared expectation)
nshepperd#2316: this only works because they used rademacher, where (v * Hv)^2 is just (Hv)^2
nshepperd#2316: btw is there anything interesting we can do if we have the diagonals of both the hessian and the squared hessian
nshepperd#2316: because it is almost free to get the hessian diagonal if we are already calculating the squared hessian diagonal with rademacher
Some Point Process#3793: v^tHv seems to make sense in the context of quadratic forms (<https://en.wikipedia.org/wiki/Definite_quadratic_form#Optimization>), which are defined for symmetric matrices like hessians. For a random vector, the expected value of that would be getting some averaged matrix of outer products which *might* give an estimator of the diagonals (eigenvalues)(?) Not sure tho
nshepperd#2316: like it is just another EMA
nshepperd#2316: the `*` in `v * Hv` there is a pointwise product
Some Point Process#3793: Ah :sadge:
nshepperd#2316: so it's like a quadratic form but you don't sum along one of the dimensions
Some Point Process#3793: Oh yeah the quadratic form is getting just one diagonal element anyway since it outputs scalar
nshepperd#2316: I think E[v'Hv] gives you the trace of the matrix
|
Some Point Process#3793: i.e. it's just "hoping" to find one of the eigenvalues of the symmetric matrix
nshepperd#2316: the sum of the diagonal
nshepperd#2316: yeah it's the hutchinson trace estimator
nshepperd#2316: when v is rademacher distributed
Some Point Process#3793: I *think* the identity you might be referring to is that *squared norm* of a matrix A = tr(A^TA)
nshepperd#2316: https://cdn.discordapp.com/attachments/729741769738158194/934640822153015396/2022-01-23-134843_1484x402_scrot.png
alstroemeria313#1694: also i am now doing a thing i copied from the adahessian code that does not require me to actually compute the dot product between the grads and the vs
alstroemeria313#1694: like, to accumulate the entire thing and come up with an actual number which might overflow
alstroemeria313#1694: it also doesn't require me to create a computational graph for the dot product calculation then backward along it
alstroemeria313#1694: `hvps = torch.autograd.grad(grads, params, grad_outputs=vs)`
nshepperd#2316: ahh
nshepperd#2316: yeah
alstroemeria313#1694: halfway tempted to add lecun-type gradient noise scale adjustment to my esgd but that's complicating it *even further*
alstroemeria313#1694: it would add two more buffers
alstroemeria313#1694: per param
alstroemeria313#1694: no, more than two
alstroemeria313#1694: i would have to keep track of the tau per parameter
𓅬 gabriel_syme 𓅬#3220: I'm just watching lecun right now
𓅬 gabriel_syme 𓅬#3220: I was thinking, would an object-based approach be a first step towards ssl on video / vision?
𓅬 gabriel_syme 𓅬#3220: Like in language we have a distribution over tokens or words and the NN can reasonably come up with some distribution. In vision it isn't that easy. But maybe objects can play the role of tokens smh
|
𓅬 gabriel_syme 𓅬#3220: I saw a couple of papers in neurips going the object based way and another before that doing CL on objects. They seemed nice
nshepperd#2316: so the version where we keep only a scalar per tensor works too right
atllas#0428: so what does gptj do?
nshepperd#2316: but we need momentum for it to be good? so we can't do away entirely with params-sized vram cost
Kia#2550: It's a language model that generate text, It's 6B parameters
𓅬 gabriel_syme 𓅬#3220: not sure if something like this is related to what I was thinking: https://www.biorxiv.org/content/10.1101/2022.01.20.477125v1
Seems also related to the hierarchy idea in GLOM smh, although I only read the abstract
𓅬 gabriel_syme 𓅬#3220: cool thx 🙂
𓅬 gabriel_syme 𓅬#3220: this was one of the papers I read the other day: https://arxiv.org/abs/2012.08508
𓅬 gabriel_syme 𓅬#3220: in general this type of work I've seen lately a bit more (I think), although my history buffer is not as large as everyone's here
𓅬 gabriel_syme 𓅬#3220: and another: https://arxiv.org/abs/2106.11952
𓅬 gabriel_syme 𓅬#3220: ok maybe I'll turn to research but you get my pov I hope
pbaylies#1820: Makes sense to me; I imagine you can do the same sorts of things with CLIP too. For example this looked pretty neat: https://github.com/roboflow-ai/zero-shot-object-tracking
cfoster0#4356: I also haven't read it in full. Pretty skeptical of it, especially since it seems like it's spreading around a bunch of venues rn
𓅬 gabriel_syme 𓅬#3220: Yeah I also thought of CLIP while I was looking for stuff. CLIP in a way understands some of these objects (not sure about hierarchies) and allows us composition over them. All these concepts seem important
𓅬 gabriel_syme 𓅬#3220: yeah makes sense. I also think this kind of work will always be seen in a skeptical pov, especially from the more applied parts of DL. Like I am trying to read it now but I doubt they have some big experiments or guarantees of scaling etc.
pbaylies#1820: I mean, they tried it on MNIST *and* Fashion-MNIST, sooo... seriously though, looks interesting, I'd like to see how it'd do on even a small chunk of ImageNet.
𓅬 gabriel_syme 𓅬#3220: I do appreciate some work towards Hinton's GLOM though, it felt important to him back then but the community didn't really attack it (not directly anyways, maybe no one knows how yet)
cfoster0#4356: Fair. Tbh I don't know if this work actually addresses the challenges he puts forward, vs. using his and Hawkins' names as a way to get some free publicity
𓅬 gabriel_syme 𓅬#3220: also fair, need to get to the part after the abstract
|
cfoster0#4356: Everybody wants us to think they've got Something Interesting and New to sell, so gotta be pretty discerning
pbaylies#1820: I liked Hinton's capsule network ideas too, and I think we're seeing something like that now with these MoE based models.
bmk#1476: capnets never went anywhere though, did it?
bmk#1476: there were a handful of papers but nothing huge
pbaylies#1820: I don't think so; people tried it out, I don't think it was that efficient at the time. I liked the consensus voting idea.
𓅬 gabriel_syme 𓅬#3220: I really like his video where he discusses the problems with CNNs
𓅬 gabriel_syme 𓅬#3220: but yeah I don't think they went anywhere in practice
alstroemeria313#1694: ......
alstroemeria313#1694: @nshepperd !!!
alstroemeria313#1694: I did LeCun et al's gradient noise adaptation thing
alstroemeria313#1694: And now I get reasonable results from lr 1 on Rosenbrock and some other non-stochastic stuff (like before), and also trained a little convnet quickly to convergence, also at lr 1
alstroemeria313#1694: hm, a second larger convnet is doing badly
alstroemeria313#1694: at lr 1
nshepperd#2316: ooh
nshepperd#2316: what's the gradient noise adaptation thing
alstroemeria313#1694: https://arxiv.org/pdf/1206.1106.pdf
alstroemeria313#1694: you scale the learning rates you got from your Hessian diagonal by E[grad]^2 / E[grad^2]
alstroemeria313#1694: Where the expectation is taken over time and also there is a heuristic to adapt the length of the EMA
nshepperd#2316: ahh
nshepperd#2316: so it's like
|
alstroemeria313#1694: this part doesn't work though so far as i can tell https://cdn.discordapp.com/attachments/729741769738158194/934678160937717830/Screen_Shot_2022-01-22_at_9.17.09_PM.png
nshepperd#2316: inversely scaling the step size by the estimated noise in each dimension?
alstroemeria313#1694: if tau ever becomes 1 then it can never recover
alstroemeria313#1694: yes
alstroemeria313#1694: so i am simply doing + 2 instead of + 1
alstroemeria313#1694: hm
alstroemeria313#1694: maybe i should clip it actually
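A rough sketch of that adaptive-memory scheme (following arXiv:1206.1106, with the clipping discussed above; not the paper's exact code):
```python
import torch

def update_noise_adaptation(grad, g_bar, g2_bar, tau, tau_min=10.0, eps=1e-12):
    decay = 1.0 / tau                             # per-parameter EMA rate
    g_bar.mul_(1 - decay).add_(decay * grad)      # E[grad]
    g2_bar.mul_(1 - decay).add_(decay * grad**2)  # E[grad^2]
    snr = g_bar**2 / (g2_bar + eps)               # in [0, 1]; multiply your lr by this
    # memory update; clamp so tau can't collapse to 1 and get stuck there
    tau.mul_(1 - snr).add_(1.0).clamp_(min=tau_min)
    return snr
```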
nshepperd#2316: can we estimate the noise in the D as well
nshepperd#2316: and scale by the combined estimated noise in the esgd steps
alstroemeria313#1694: They are using some non-stochastic (?) way to get a Hessian diagonal? Maybe?
alstroemeria313#1694: Bc they use tau to determine how much to average it
alstroemeria313#1694: Which... how does that even work, of course their Hessian diagonals are stochastic if they have stochastic gradients
alstroemeria313#1694: No matter how they get them
nshepperd#2316: ah, i mean like with the hvp thing
alstroemeria313#1694: yeah, hm
alstroemeria313#1694: how to
nshepperd#2316: we can get a noise scale for the ema of the Hessian squared, right
nshepperd#2316: .... by keeping a EMA of the fourth power
alstroemeria313#1694: Eheh
alstroemeria313#1694: hm not super sure if this is a good strategy
|
alstroemeria313#1694: you don't go as fast as if you accept some potential of moving too far in the wrong direction and correcting later
Some Point Process#3793: Which paper?
alstroemeria313#1694: https://arxiv.org/pdf/1206.1106.pdf
nev#4905: ~~inb4 eleutherai/optimizers~~
alstroemeria313#1694: eheh
alstroemeria313#1694: it occurs to me that this will sometimes take steps that are too slow bc it cannot really take into account how much momentum is helping
Some Point Process#3793: https://stats.stackexchange.com/questions/307243/why-vsgd-fd-optimization-algorithm-isnt-popular
jack#8178: how huge do you actually need? short self normalizing upper stages seem to save a ton of memory. if you had a way to deal better with gradient noise that will probably help more than increasing batch size
jack#8178: i have a 900m param model going right now and memory usage is extremely low for large batches even in fp32
jack#8178: i guess if you wanted a dall-e scale model you'd probably want to go pipeline parallel
Gurkenglas#7362: What's the current LM to send API calls to if I want 1. trained on the internet 2. large context window? Where the prefix of the context window that makes it large changes only rarely, so its activations can be precomputed.
alstroemeria313#1694: i am writing a utility to compute gradient noise scale for all param tensors and also for the whole net
alstroemeria313#1694: gradient noise scale is just 1 / snr
alstroemeria313#1694: so in early training of a diffusion net the inner layers are getting a lot more signal
alstroemeria313#1694: much lower gradient noise scale
alstroemeria313#1694: hm can i make pytorch do batches of gradients easily
alstroemeria313#1694: ah
alstroemeria313#1694: functorch
alstroemeria313#1694: you can estimate the gradient noise scale using batches and reducing over the losses, as in normal training, but it's not exactly the same thing
CRG#8707: Just realized that if you use the same buffers as adam, this is literally (adam_update)**3
|
chilli#5665: Yeah a bunch of people use functorch to do this :)
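For concreteness, the usual functorch recipe for per-sample gradients looks roughly like this (`model`, `xs`, `ys`, and `loss_fn` are assumed; model without buffers):
```python
import torch
from functorch import make_functional, vmap, grad

fmodel, params = make_functional(model)

def compute_loss(params, x, y):
    # add a batch dim of 1 so the model sees a normal batched input
    out = fmodel(params, x.unsqueeze(0))
    return loss_fn(out, y.unsqueeze(0))

# one gradient per sample, then per-parameter mean and mean of squares
per_sample_grads = vmap(grad(compute_loss), in_dims=(None, 0, 0))(params, xs, ys)
grad_mean = [g.mean(0) for g in per_sample_grads]
grad_sq_mean = [g.pow(2).mean(0) for g in per_sample_grads]
```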
alstroemeria313#1694: ```
[W BatchedFallback.cpp:106] Warning: There is a performance drop because we have not yet implemented the batching rule for aten::_conv_depthwise2d_backward.output_mask. Please file us an issue on GitHub so that we can prioritize its implementation. (function warnFallback)
[W BatchedFallback.cpp:106] Warning: There is a performance drop because we have not yet implemented the batching rule for aten::reflection_pad2d_backward. Please file us an issue on GitHub so that we can prioritize its implementation. (function warnFallback)
[W BatchedFallback.cpp:106] Warning: There is a performance drop because we have not yet implemented the batching rule for aten::embedding_dense_backward. Please file us an issue on GitHub so that we can prioritize its implementation. (function warnFallback)
```
chilli#5665: I think all 3 are fixed on nightly?
alstroemeria313#1694: ahh
alstroemeria313#1694: ty
chilli#5665: https://github.com/pytorch/functorch/pull/355
https://github.com/pytorch/functorch/pull/359
For 2 of them
chilli#5665: And I added reflection backward at some other point
chilli#5665: Performance should probably still be faster than the other approaches lol
alstroemeria313#1694: :)
chilli#5665: Hmm
chilli#5665: I wonder if there are faster ways of computing this quantity though
alstroemeria313#1694: you mean like, some trick like in the VDM paper?
|
chilli#5665: Other than computing per-sample gradients and then computing the variance
chilli#5665: (not familiar with what that trick is)
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/934803252443230238/Screen_Shot_2022-01-23_at_5.34.20_AM.png
alstroemeria313#1694: hm
alstroemeria313#1694: that is the variance of the loss though
alstroemeria313#1694: not the same.
chilli#5665: Wait sorry, what's the quantity you're trying to compute again?
alstroemeria313#1694: i want the mean gradient and the mean squared gradient
alstroemeria313#1694: for each param
chilli#5665: Mean gradient is just the default quantity right?
alstroemeria313#1694: yes
chilli#5665: (lol just sanity checking)
alstroemeria313#1694: you can do it in batches and take the mean of the grads (possibly adjusting for the batches being different sizes)
chilli#5665: Hmm, it's a bit difficult for me to reason about how much slower computing per-sample gradients is
alstroemeria313#1694: btw computing gradient noise scale per layer on my diffusion models is instantly revealing some things i didn't know
alstroemeria313#1694: so i would like to be able to do it quickly
alstroemeria313#1694: we can approximate the mean squared gradient by doing batches and taking their mean grad and their mean square grad
alstroemeria313#1694: but this underestimates the true mean square grad
alstroemeria313#1694: by more the more batch items you do per batch
alstroemeria313#1694: gradient noise scale is 1 / snr of the gradient
|
alstroemeria313#1694: where the signal is the true gradient and the noise is from doing stochastic gradients
chilli#5665: I guess... It depends on your batch size
alstroemeria313#1694: so *if you compute it at the same batch size you are training on* you can get an actual snr for what you are doing in training
alstroemeria313#1694: even though that wasn't the exact quantity openai was talking about
chilli#5665: Wait... I'm confusing myself
chilli#5665: What's even the operations you perform in your backwards pass for a matmul
alstroemeria313#1694: Uhh?
alstroemeria313#1694: I am doing single sample gradients rn
chilli#5665: Say your forwards is a (N,D) x (D,D) matmul
alstroemeria313#1694: Wait, are you talking about changing that op to compute the mean squared grad too
CRG#8707: (N,D), (N,D) -> (D,D), right?
alstroemeria313#1694: wrt the weights
chilli#5665: Sorry, I'm just trying to reason how much slower per-sample grads need to be
alstroemeria313#1694: ahh
alstroemeria313#1694: on this tiny model i get ~8 iterations/sec doing the per sample grads and ~20 per sec doing a normal forward and backward
alstroemeria313#1694: both bs 100
chilli#5665: And (N,D) x (D,D) for the other one
CRG#8707: Yeah
chilli#5665: With functorch + vmap?
alstroemeria313#1694: yes.
|
alstroemeria313#1694: and not nightly.
CRG#8707: I think you need to materialize (N, D, D) for the squared gradients, right?
chilli#5665: And the missing batching rules
alstroemeria313#1694: and i took the reflection pad and the other conv one out but i couldn't get rid of the embedding so easily.
alstroemeria313#1694: so i have one missing batching rule left
alstroemeria313#1694: and it is going faster than with all three
alstroemeria313#1694: eh what i will probably do for efficiency is
alstroemeria313#1694: just start computing gradient noise scale at the batch sizes i'm going to train on
alstroemeria313#1694: bc that gives me an snr for that batch size
chilli#5665: I'd be glad to look at the model again
chilli#5665: We should be able to compile through it with the previous tricks :P
alstroemeria313#1694: ahh
chilli#5665: Not sure if it'll actually be faster though
chilli#5665: I think this is right
alstroemeria313#1694: hm so i really need to be logging these stats or something
alstroemeria313#1694: bc they change a great deal over training
chilli#5665: I do still wonder whether there are better ways of computing this than the obvious way though
chilli#5665: I remember backpack could also do this
chilli#5665: For the variance, 2nd moment and ℓ2 norm, BACKPACK takes advantage of the Jacobian's structure to directly compute them without forming the individual gradient, reducing memory overhead.
chilli#5665: Sounds promising
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/934807875979710474/Screenshot_20220123-055239_Drive.jpg
chilli#5665: Maybe you could just try backpack lol
alstroemeria313#1694: what's backpack?
chilli#5665: https://backpack.pt/
CRG#8707: https://arxiv.org/abs/1912.10985
chilli#5665: They seem to already be doing these clever things
chilli#5665: Instead of the brute force way lol
chilli#5665: I don't quite understand it tbh
alstroemeria313#1694: ahh
chilli#5665: They also had this follow up: https://proceedings.neurips.cc/paper/2021/hash/ae3539867aaeec609a4260c6feb725f4-Abstract.html
alstroemeria313#1694: yeah and i need second raw moment not variance, bc i am going to aggregate them across a ton of batches
alstroemeria313#1694: and then get a single variance per param for the whole dataset's gradient
chilli#5665: Their method seems to work for second raw moment
chilli#5665: See this
CRG#8707: Wouldn't have expected (N,D)^2, (N,D)^2 -> (D,D) to just work like that tbh.
chilli#5665: Or appendix A.1 in their paper
|
alstroemeria313#1694: @chilli it says no resnets :/
alstroemeria313#1694: in the paper
chilli#5665: Damn
chilli#5665: Like, their approach doesn't work for resnets?
chilli#5665: It's only a special case where it works?
chilli#5665: :(
alstroemeria313#1694: It could probably work if they supported the add op lol
alstroemeria313#1694: I think they just didn't write the op
chilli#5665: They have been supporting the library so maybe it works now
alstroemeria313#1694: Yeah they say "the framework could be extended" to do resnets
chilli#5665: I'm still trying to understand wth they're doing
chilli#5665: Hmm, I think this implies that per-example gradients are actually quite ... Expensive
CRG#8707: Yeah
chilli#5665: Not just due to memory, but it also substantially increases flops?
chilli#5665: Wait lemme think this through
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/934810255832014918/Screen_Shot_2022-01-23_at_6.02.10_AM.png
CRG#8707: I mean, it's kind of like an unfolded matmul
CRG#8707: But you square the intermediate tensor before reducing the batch dimension
chilli#5665: Usually it's ND^2 for the forwards
alstroemeria313#1694: Is there some trick to compute it that does not involve doing this
|
CRG#8707: (N,D)^2, (N,D)^2 -> (D,D) apparently
CRG#8707: A matmul of the element wise square of the activations and the element wise square of the incoming gradients
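A small self-contained check of that trick for a linear layer `y = x @ W.T` (shapes are made up for illustration):
```python
import torch

N, D_in, D_out = 64, 128, 256
x = torch.randn(N, D_in)        # layer input (activations)
g_out = torch.randn(N, D_out)   # incoming gradient dL/dy

per_example = torch.einsum('ni,nj->nij', g_out, x)   # (N, D_out, D_in) per-sample grads
sum_sq_direct = per_example.pow(2).sum(0)            # brute force
sum_sq_trick = (g_out**2).T @ (x**2)                 # elementwise squares, one matmul
assert torch.allclose(sum_sq_direct, sum_sq_trick, atol=1e-3)
```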
chilli#5665: And then ND^2 + N^2D for backwards?
chilli#5665: Actually, now I'm confused why people say backwards is double the flops of forwards
chilli#5665: Seems like a very lossy approximation that depends a lot on your batch size
CRG#8707: I think it's 2 ND^2
chilli#5665: Oh, right
chilli#5665: Mixed up the dim orders in my head
chilli#5665: Since there are some implicit transposes here is my excuse :P
chilli#5665: But there's no NxN quantity being computed here
CRG#8707: Much better as an einsum
chilli#5665: So it can't be N^2D
CRG#8707: You just write what you want on the right
chilli#5665: Hmmm
CRG#8707: So with per gradient squares, backward would be 3x as expensive as forwards.
chilli#5665: Wait how
CRG#8707: Since you need to do another matmul
chilli#5665: Oh, I'm just talking about per example gradients
chilli#5665: I think it's actually the same complexity?
chilli#5665: Just with higher memory bandwidth
|
chilli#5665: Actually, tbh, I think this could be solved with a fusing compiler
chilli#5665: Well... you'd need a pretty good one
CRG#8707: This was a similar problem in PKMs
CRG#8707: You need to materialize a huge tensor
chilli#5665: Yeah, but here we have a reduction at the end
CRG#8707: PKMs also had a final reduction IIRC
chilli#5665: So you're doing (N,D,1) x (N,1,D)
chilli#5665: And then you need to reduce along the Nth dimension
chilli#5665: Wait...
chilli#5665: This isn't even a matmul lol
CRG#8707: Outer product + sum
CRG#8707: Like an explicit matmul
chilli#5665: No if you're just computing per-sample grads you don't have a sum
chilli#5665: Ok, so fuck
chilli#5665: You are gonna see a slowdown
chilli#5665: Even with an optimal fusing compiler
chilli#5665: Since you can't use tensorcores for this anymore
chilli#5665: Ok, people are still wrong that the only reason per-sample grads are slow is due to memory
chilli#5665: Well, it's complicated
chilli#5665: But they also inhibit tensorcores
|
chilli#5665: (or systolic arrays if you're a tpu guy)
chilli#5665: Damn hardware lottery
chilli#5665: Thanks for the discussion/sanity checking :)
chilli#5665: Glad I deconfused myself from the flops thing
chilli#5665: I still think that adding a fusing compiler to the mix could be quite cool though
chilli#5665: I think you could potentially compute arbitrarily weird quantities fairly efficiently
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/934816393872932864/Screen_Shot_2022-01-23_at_6.26.34_AM.png
chilli#5665: In a fairly automatic way
alstroemeria313#1694: making a little heatmap of gradient noise scales per param tensor
alstroemeria313#1694: each row is one epoch
chilli#5665: (as long as you have a reduction at the end)
alstroemeria313#1694: i'm going to train it for a while and see how this evolves over training
alstroemeria313#1694: (I am using log scale for them because they are 1 / snr and log is natural for snr, say decibels etc.)
nshepperd#2316: ooh
alstroemeria313#1694: this is a cifar-10 diffusion model
alstroemeria313#1694: tiny
alstroemeria313#1694: but it has self-attention so i can see if the self-attn things are worse or something
nshepperd#2316: i am trying to train a upscaler with noised cond input
alstroemeria313#1694: ooh
nshepperd#2316: we should be able to do superconditioning with it too, bc noised with cond_ts=1 is equivalent to unconditional
|
chilli#5665: @alstroemeria313 btw, lmk if you have a model I can try optimizing or if you get any resolution on the backpack stuff
alstroemeria313#1694: ty :)
nshepperd#2316: (i think superconditioning is a much better name for it than cfg btw, ^_^)
chilli#5665: Or, if you guys have any really weird quantities you want to compute based off of your per-sample gradients
alstroemeria313#1694: squared is by far the most useful
chilli#5665: I think you probably can still make it as efficient as normal autograd if you're willing to do per-(4 sample) gradients
chilli#5665: Or whatever the minimum number is to saturate tensorcores
alstroemeria313#1694: btw, "gradient noise scale" calculated while the model is being optimized is a bad proxy for actual gradient noise scale
alstroemeria313#1694: within-batch means of squared gradients would help with this.
alstroemeria313#1694: because that would at least give you a bunch of things evaluated at the same point
alstroemeria313#1694: even if it only like 64 or whatever
alstroemeria313#1694: it is better than 1
chilli#5665: Damn, I'm actually pretty excited about my idea
nshepperd#2316: using mp.set_start_method('spawn') on these tpu v3 pod broke things :thinkies:
chilli#5665: If only I had the time to turn it into a research paper...
alstroemeria313#1694: ehehe
alstroemeria313#1694: you get this semi-for free when doing distributed training
alstroemeria313#1694: on small microbatch sizes
alstroemeria313#1694: bc of the number of nodes and gradient accumulation steps
chilli#5665: Yeah lol
|
alstroemeria313#1694: we could extract squared microbatch gradients and aggregate them
chilli#5665: Oh, I was just gonna say that DP is basically per-sample gradients
chilli#5665: Lol
alstroemeria313#1694: subtract the mean gradient, and get a variance, pool it down to one variance per param tensor if we want
alstroemeria313#1694: then EMA those variances and use them to make layer-wise automatic lr tuning decisions
chilli#5665: Also, tbh, I have not thought about convolutions at all in this context
chilli#5665: Not sure if my idea applies to them
alstroemeria313#1694: by subtracting the mean gradient each time and EMAing only the variance component, you could get around the problem of gradient noise scale based on a mean gradient and mean squared gradient that came from EMAs during optimization
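A tiny sketch of that aggregation (assuming the microbatch gradients were all evaluated at the same parameters):
```python
import torch

def microbatch_grad_stats(microbatch_grads):
    # microbatch_grads: list of same-shaped gradient tensors, one per microbatch
    g = torch.stack(microbatch_grads)
    return g.mean(0), g.var(0, unbiased=False)   # mean gradient, per-parameter variance
```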
chilli#5665: Tbh, I think I have an intuitively poor grasp of convolutions
chilli#5665: I don't think I could write out the backwards pass for a convolution off hand
alstroemeria313#1694: it's just a transposed convolution isn't it
chilli#5665: Yeah, in principle, but I find it hard to reason about it
chilli#5665: Like, how does the quantity change if you're computing per-sample grads
chilli#5665: And, which one of the 2 backwards convs is the transposed convolution, or are both of them transposed
alstroemeria313#1694: oh, idk how to do it wrt the weights
chilli#5665: Yeah, I think the transposed one is wrt the input
chilli#5665: But the fact that I'm having a hard time even reasoning about the gradients wrt the weights is a little bit :sadge:
chilli#5665: Is it a convolution where the filter is the image? :thinkies:
chilli#5665: I guess it must be
chilli#5665: Yeah and I guess it must also be transposed
|
chilli#5665: With a pretty massive stride?
alstroemeria313#1694: :)
chilli#5665: If my reasoning is correct the stride must be something like image_size/filter size
chilli#5665: What's even the flops of a convolution?
alstroemeria313#1694: btw how would you tune layer-wise lr with this snr information?
alstroemeria313#1694: would you tune it proportional to snr or proportional to snr^1/2
chilli#5665: (I'm not the right person to ask for this :P)
alstroemeria313#1694: or like. do mean(grad)^2 / mean(grad^2) so you only decrease lr, not increase it?
alstroemeria313#1694: (or its square root)
CRG#8707: <https://openai.com/blog/science-of-ai/> ? https://cdn.discordapp.com/attachments/729741769738158194/934824549135036436/Screenshot_20220123-155844.png
alstroemeria313#1694: ahhh
CRG#8707: I think the stride would be the same as the kernel
chilli#5665: But your filter weights are your image, right?
chilli#5665: Oh hmm
CRG#8707: The resulting image is the kernel
chilli#5665: Sorry, I mean your dilation
chilli#5665: Or do i
chilli#5665: Hmm
alstroemeria313#1694: oh. that was in fact one of my candidates
nshepperd#2316: oh i forgot that i need to set wandb to `offline` instead of `disabled` to make tpus not broken now
|
nshepperd#2316: because something something fork()
nshepperd#2316: this applies to v3s as well now, not just v4s :grimberk:
alstroemeria313#1694: oh i need to be training this test model with lr decay...
alstroemeria313#1694: meh ok
alstroemeria313#1694: guess i'll do that
alstroemeria313#1694: so if nothing else i can come up with some heuristics as to how to set relative lr for different layers
alstroemeria313#1694: from this run and probably one with a bigger model
alstroemeria313#1694: (i mean more u-net stages bc a persistent pattern is higher gradient noise scale on the lower resolution, inner u-net layers)
alstroemeria313#1694: (but i want to see if that persists in late training)
nev#4905: scaling laws for gradient noise :ultrathonk:
alstroemeria313#1694: How can we modify deepspeed so it not only accumulates gradients, but squares each individual microbatch's gradient and accumulates the square.
nshepperd#2316: 1 epoch, varying levels of cond noise https://cdn.discordapp.com/attachments/729741769738158194/934836666382835802/epoch1.png
nev#4905: are the noise levels different across sets of pictures?
nshepperd#2316: yes
nshepperd#2316: left is original, middle is downscaled, right is the output
nshepperd#2316: maybe i should be visualizing the downscaled + noise added instead of the 'clean' downscaled
nshepperd#2316: I am using AdaESGD or whatever we want to call that ^_^
nshepperd#2316: it seems like it has very quickly learned to copy and 'sharpen' the input and will take longer to learn an actual model
alstroemeria313#1694: huh
alstroemeria313#1694: i probably need to get my code out
|
alstroemeria313#1694: i am going to leave the gradient noise adaptive stuff out
alstroemeria313#1694: bc i have found that estimating it by EMAs of gradient first and second moments is a very poor proxy for the actual thing
alstroemeria313#1694: i saw a paper recently on this
nshepperd#2316: ohh?
alstroemeria313#1694: yeah, the preconditioner changes the inductive bias of the net
alstroemeria313#1694: like, sgd vs adam vs newton type
alstroemeria313#1694: http://proceedings.mlr.press/v119/huh20a/huh20a.pdf
alstroemeria313#1694: btw a way to recover different training dynamics/inductive bias is simply to sqrt the preconditioner
alstroemeria313#1694: i.e. fractional curvature correction
nshepperd#2316: hmm
alstroemeria313#1694: adahessian has this as an option, they call it the "Hessian power"
alstroemeria313#1694: so 1/2 would be taking our EMA of squared Hessian diagonals and taking the fourth root before dividing by it
nshepperd#2316: ah
alstroemeria313#1694: i *think* hessian power 1/2 preconditioners are vaguely adam-like
nshepperd#2316: so we can just do ema.sqrt().pow(a) with a between 0 and 1
alstroemeria313#1694: yep, or .pow(a / 2)
nshepperd#2316: ah yeah
nshepperd#2316: a=0 would be sgd
alstroemeria313#1694: and do the right power on the bias correction term if you are not multiplying them together first
nshepperd#2316: a=1/2 is sort of like adam but only half corrects for loss function rescaling
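A minimal sketch of one update with a fractional Hessian power a, as described above (bias correction left out for brevity; `exp_hess_sq` is assumed to be an EMA of squared Hessian-diagonal estimates):
```python
import torch

# a = 1 gives the full ESGD-style correction, a = 0 falls back to (momentum) SGD.
def fractional_power_step(param, grad_ema, exp_hess_sq, lr, a=0.5, eps=1e-8):
    denom = exp_hess_sq.sqrt().pow(a) + eps   # same as exp_hess_sq.pow(a / 2)
    param.data.add_(grad_ema / denom, alpha=-lr)
```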
|
alstroemeria313#1694: yeahhh...
alstroemeria313#1694: it still has... idk
alstroemeria313#1694: if with sgd dynamics you need to scale lr by n^2 when you scale params by n, and n^-1 when you scale loss by n. and for adam it is n and 1.
nshepperd#2316: 2 epochs.. it's just decided to sharpen more? https://cdn.discordapp.com/attachments/729741769738158194/934840666104668242/epoch2.png
alstroemeria313#1694: what does esgd do with a fractional hessian power even.
alstroemeria313#1694: Since it clearly recovers SGD when Hessian power is 0 and intermediate values interpolate between that and Hessian power 1
alstroemeria313#1694: You cannot get to Adam from it
alstroemeria313#1694: uhhh...
alstroemeria313#1694: "with sgd dynamics you need to scale lr by n^2, with adam dynamics you need to scale lr by n"
alstroemeria313#1694: And you can't get to "scale lr by 1" by looking at the difference between an SGD step and an Adam step bc...
alstroemeria313#1694: Well you can. You just lose loss scale invariance?
alstroemeria313#1694: you can make it invariant to one of those but not the other?
alstroemeria313#1694: (by simply comparing the Adam and SGD steps that is, not allowed to save any other kind of history, you can do it if you can save whatever history bc L-BFGS does it)
alstroemeria313#1694: I bring this up bc we could bring loss scale invariance back if we wanted using a fractional Hessian power
alstroemeria313#1694: Or param scale invariance. But not both at once I think.
alstroemeria313#1694: Like, by virtue of the fact that we have the second order information and can thus distinguish between the loss being scaled and the params being scaled?
chilli#5665: Is there any simple guide to thinking about optimizers from first principles
alstroemeria313#1694: no idea
chilli#5665: I’ve seen some stuff recently about preconditioning or various other quantities
chilli#5665: Not sure how to think about them
|
nshepperd#2316: well we have two units in play, the loss units and the param units
nshepperd#2316: in dimensional analysis this is like a 2d vector
nshepperd#2316: the grads have units of uhh (loss^1, params^-1)?
nshepperd#2316: the sqrted ema'd squared hessian also has units of (loss^1, params^-1)
alstroemeria313#1694: And the Adam preconditioner has what units?
nshepperd#2316: (1,-1) is colinear with (1,-1) so we can only make something along that line by combining them
nshepperd#2316: wait no, that's not right
nshepperd#2316: the adam preconditioner is (loss^1, params^-1)
nshepperd#2316: the hessian preconditioner is (loss^1, params^-2)?
nshepperd#2316: yeah, it's the second derivative
nshepperd#2316: so you could make any units you want by combining different powers of the grad and the hessian preconditioner
nshepperd#2316: or better by combining different powers of the adam preconditioner and the hessian preconditioner
nshepperd#2316: bc taking powers of the grad seems like a bad idea
nshepperd#2316: so yeah with the hessian preconditioner fixed to a power of 0.5 we can make it loss invariant by multiplying an appropriate power of the adam preconditioner
nshepperd#2316: and then it wouldn't be params invariant
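(A quick units check of that, in the same (loss, params) exponent notation: the grad is (1, -1), the Adam denominator is (1, -1), and the Hessian diagonal is (1, -2). Dividing the grad by hessian^a * adam^b gives (1 - a - b, -1 + 2a + b); choosing b = 1 - a zeroes the loss exponent, and with a = 1/2 the step comes out as (0, 1/2), so the lr would need units of params^1/2: loss invariant, but indeed no longer params invariant.)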
alstroemeria313#1694: ok so, with hessian power 1/2 esgd, we have to scale lr by n when you scale params by n
nshepperd#2316: and you have to scale lr by n^-1/2 when you scale loss by n?
alstroemeria313#1694: ...no
alstroemeria313#1694: I do not know why
nshepperd#2316: that is confusing ^^;
|
alstroemeria313#1694: It seems to be a bit over sqrt
alstroemeria313#1694: I lowered the epsilon to make sure that wasn't it
chilli#5665: What does this mean 🤔
chilli#5665: Why params^-1?
alstroemeria313#1694: if you scale your params up by a factor of 10, a tiny difference in them will now produce 1/10 the difference in the loss as before. so the gradient goes down by a factor of 10.
nshepperd#2316: i just realized my script was not actually feeding in the noised cond instead of the clean cond for training
nshepperd#2316: starting again~ ^^;;
chilli#5665: I guess this quantity is dependent on your actual operators?
chilli#5665: Like, if you took the log of your weights before doing the matmul this number would be different
nshepperd#2316: scaling your params up by a factor of 10 means reparameterizing the nn so that it multiplies them by 1/10 first
alstroemeria313#1694: @nshepperd i was wrong and was displaying my loss values wrong
nshepperd#2316: oh
alstroemeria313#1694: `lr * loss_fac**-0.5 * param_fac`
alstroemeria313#1694: where those are the loss and parameter rescalings.
alstroemeria313#1694: gives you the new lr for hessian power 1/2
nshepperd#2316: why does the eps need to be so high with ESGD
alstroemeria313#1694: idk actually
alstroemeria313#1694: oh
alstroemeria313#1694: to stop it from taking gigantic steps if the stochastic diag |H| estimate is kinda bad
alstroemeria313#1694: whereas with adam it is mostly just to prevent division by 0, the step size is limited by the fact that the step and preconditioner both come from the grad
|
nshepperd#2316: ah right
nshepperd#2316: it's kind of annoying though, bc this eps is not unitless
alstroemeria313#1694: yeah
alstroemeria313#1694: so grads are (loss^1, params^-2)?
alstroemeria313#1694: and you divide the grads by a (loss^1, params^-1) thing in Adam?
alstroemeria313#1694: and get (loss^0, params^-1)?
nshepperd#2316: grads are (loss^1, params^-1)
alstroemeria313#1694: ...
alstroemeria313#1694: But when we divide grads by the Hessian preconditioner we get the right thing.
nshepperd#2316: you divide the grads by a (loss^1, params^-1) thing in Adam, and end up with unitless
alstroemeria313#1694: Wait...
nshepperd#2316: but you need params^1 to be invariant
alstroemeria313#1694: Oh because the Adam *lr* is in params units
nshepperd#2316: yeah
nshepperd#2316: so the lr is params^1
alstroemeria313#1694: Which makes more sense for usability
alstroemeria313#1694: So if grads are (loss^1, params^-1) then the SGD LR is, by implication, (loss^-1, params^2)? Because that's what you multiply by to get to params and params have params^1 units.
nshepperd#2316: eheh with this i needed to do the hvp for the first 200 steps to not NaN
nshepperd#2316: the estimate is super noisy i guess
nshepperd#2316: yeah
|
alstroemeria313#1694: So 1/2 Hessian power LR has units (loss^-1/2, params^1).
alstroemeria313#1694: Grads have (loss^1, params^-1). So does the Hessian diagonal?
alstroemeria313#1694: Yeah.
alstroemeria313#1694: And the sqrt Hessian diagonal has (loss^1/2, params^-1/2).
nshepperd#2316: wait no the hessian diagonal is (loss^1, params^-2)
alstroemeria313#1694: Ohh?
nshepperd#2316: it's the second derivative, right?
alstroemeria313#1694: Yes
nshepperd#2316: d²(loss)/d(params)²
alstroemeria313#1694: but we have. params^1 -= lr (?) * grad (loss^1, params^-1) / H diag (?)
alstroemeria313#1694: so if H diag is (loss^1, params^-2)
alstroemeria313#1694: We get a (loss^0, params^1)
alstroemeria313#1694: Then this multiplies by the lr and applies to the params so lr is unitless.
nshepperd#2316: yeah
alstroemeria313#1694: So sqrt Hessian diagonal is (loss^1/2, params^-1)
nshepperd#2316: 1 epoch with the actually correct training objective https://cdn.discordapp.com/attachments/729741769738158194/934854714720407612/epoch1_fixed.png
alstroemeria313#1694: ohh?
nshepperd#2316: actually noising the cond for training
alstroemeria313#1694: ahh
nshepperd#2316: it is now actually learning to make textures from scratch with cond_t=1 (bottom right)
|
alstroemeria313#1694: so what i want is. params^1 -= lr (params^1) * grad (loss^1, params^-1) / (loss^1/2, params^-1) * ???
kurumuz#5695: I like some of those images haha
kurumuz#5695: does it have danbooru?
alstroemeria313#1694: i need to get a loss^-1/2?
nshepperd#2316: yep this is danbooru. actually i am training on a mixture of danbooru and openimages (photos)
kurumuz#5695: waifus nice
alstroemeria313#1694: ...Wait this is doomed isn't it.
alstroemeria313#1694: Adam lr is in the units of the params *and that's why it can't converge*
alstroemeria313#1694: Like it jumps out of optima bc it only cares about what step size it is making in params units, not anything else
alstroemeria313#1694: Like consider degenerate Adam without EMAs.
alstroemeria313#1694: Its updates are always x -= lr * sign(grad)
nshepperd#2316: yeah the jumps are always lr-sized
alstroemeria313#1694: obviously that cannot converge without lr decay
nshepperd#2316: so you have to decay the lr
alstroemeria313#1694: Any rescaling we did so the lr was only in params units would suffer the same fate, right?
alstroemeria313#1694: It would need lr decay
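A tiny toy example just to make the non-convergence concrete, on f(x) = x^2 with the "Adam without EMAs" update x -= lr * sign(grad):
```python
x, lr = 1.0, 0.3
for step in range(12):
    grad = 2 * x                       # gradient of f(x) = x^2
    x -= lr if grad > 0 else -lr       # sign-of-gradient step
    print(step, round(x, 2))
# after a few steps x just bounces between +0.1 and -0.2 forever; it only
# reaches 0 if lr itself is decayed.
```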
nshepperd#2316: hmm
nshepperd#2316: yeah i guess it would
nshepperd#2316: or would it
nshepperd#2316: For Adam it makes sense because it is specifically rescaling the grads to std=1
|
alstroemeria313#1694: it has like, a region in param space where it can have sane trajectories because of the EMAs and that allows it to not be terrible
CRG#8707: Adabelief does centered moment, Adam uncentered
chilli#5665: To continue on the simple questions (since This seems really interesting ), why does Adam do this?
alstroemeria313#1694: btw https://cdn.discordapp.com/attachments/729741769738158194/934860055797456977/Screen_Shot_2022-01-23_at_9.20.02_AM.png
alstroemeria313#1694: as you go on in training the lower res layers' gradient noise scale gets higher quicker
alstroemeria313#1694: looks like the lr recommendation in the openai paper like, wants me to go down in lr by 1/2 each time i downsample
alstroemeria313#1694: looking at the last row of that in a plot of its own
alstroemeria313#1694: approximately of course
alstroemeria313#1694: i will keep training to see what happens even later in training
nshepperd#2316: bc you can get this preconditioner easily just by EMAing the squared grads without any extra work. and by doing this the units of lr become params^1 instead of loss^-1, params^2
nshepperd#2316: which you *generally* expect to be easier to tune
chilli#5665: Oh so this is some modification?
chilli#5665: Not standard Adam?
nshepperd#2316: I am comparing Adam to SGD
nshepperd#2316: with SGD the units of lr are loss^-1, params^2
chilli#5665: Uhh…
chilli#5665: Hmm
nshepperd#2316: which is kind of a pain bc the optimal learning rate will vary wildly
chilli#5665: How do you get that?
chilli#5665: Oh
|
chilli#5665: Is it because grads are 1, -1
nshepperd#2316: yes
chilli#5665: And params are 0, 1
chilli#5665: Hmmm
nshepperd#2316: with Adam the units of lr are params^1, which is "more invariant"
chilli#5665: And something about Adam divides grads by 1, -1?
chilli#5665: I admittedly don’t remember adam off the top
alstroemeria313#1694: yeah it divides the grads by the sqrt of the ema of their squares
alstroemeria313#1694: which has the same units
chilli#5665: Hmm
alstroemeria313#1694: the disadvantage is that you often have to decay lr manually.
alstroemeria313#1694: to get it to converge.
chilli#5665: Does momentum do the same?
alstroemeria313#1694: no
chilli#5665: Oh
chilli#5665: Since it’s additive
alstroemeria313#1694: momentum is just an EMA of the grad, it doesn't change its units
alstroemeria313#1694: yeah
chilli#5665: But if you divided by the momentum?
nshepperd#2316: Adam: `params -= lr * ema_of_grads / (sqrt(ema_of_squared_grads)+eps)`
|
alstroemeria313#1694: that would be bad i think bc the signs would often cancel
chilli#5665: But just for testing intuition, it would also make the units right?
alstroemeria313#1694: yes
alstroemeria313#1694: since the thing adam divides by is positive it doesn't change the sign and thus you still have a descent direction (assuming no momentum at the moment, like rmsprop type stuff, bc momentum can deviate from a descent direction)
chilli#5665: Yeah but you could just divide by abs(momentum)
chilli#5665: Or something
alstroemeria313#1694: Yeah that would get you an L1 variant of Adam, it would probably work
alstroemeria313#1694: Adam is L2 based (sqrt EMA of squared grads)
alstroemeria313#1694: And Adamax from the same paper is L^inf based
alstroemeria313#1694: It divides by a decaying max of absolute grads.
chilli#5665: So you guys want something that divides by .5, 1
alstroemeria313#1694: I think the L1 version would actually work too
alstroemeria313#1694: Just no one uses it
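A side-by-side sketch of the three denominators being compared, per parameter tensor, with momentum and bias correction omitted (this is only the denominator bookkeeping, not the full optimizers):
```python
import torch

# L2 is Adam's sqrt(EMA of squared grads), L1 is the variant just mentioned
# (EMA of absolute grads), L-infinity is Adamax's decaying max of absolute grads.
def adam_style_denoms(state, grad, beta2=0.999, eps=1e-8):
    z = torch.zeros_like(grad)
    state['v2'] = beta2 * state.get('v2', z) + (1 - beta2) * grad.pow(2)
    state['v1'] = beta2 * state.get('v1', z) + (1 - beta2) * grad.abs()
    state['vmax'] = torch.maximum(beta2 * state.get('vmax', z), grad.abs())
    return state['v2'].sqrt() + eps, state['v1'] + eps, state['vmax'] + eps
```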
chilli#5665: So that you don’t need to decay the lr?
alstroemeria313#1694: just .5, 0
alstroemeria313#1694: but idk if you can find a thing like that which also doesn't require you to decay lr when you use it
CRG#8707: (adam)^3 converges on Rosenbrock with lr = 1.0 :thonk: , but adam^1 doesn't.
alstroemeria313#1694: eheh
alstroemeria313#1694: I am unsure that adam^3 is actually a good idea
alstroemeria313#1694: It did not work well when I tried it.
|
alstroemeria313#1694: I think I need to get gradient noise scale estimates from somewhere else other than the history of gradients where I only one sample per point in param space.
alstroemeria313#1694: @nshepperd ...I had an idea
alstroemeria313#1694: Updates to the params are loss^0, params^1 by definition
nshepperd#2316: yup
alstroemeria313#1694: We could just divide them by their EMA variance :)
alstroemeria313#1694: Like, running squared EMA of the steps that *would* have been applied
alstroemeria313#1694: Then use that to rescale the actual steps
CRG#8707: Kind of like Adadelta?
alstroemeria313#1694: So then lr has to be loss^0, params^1
alstroemeria313#1694: Bc it scales the standardized steps to apply to the actual params.
alstroemeria313#1694: ...OK why can you just not do the same thing for SGD
nshepperd#2316: huhhh
nshepperd#2316: i am too tired to understand that ^^:
alstroemeria313#1694: Oh you just get Adam when you do, don't you
nshepperd#2316: you are saying to keep a squared ema of lr * grad?
alstroemeria313#1694: Bc in plain SGD the steps are just the lr * grads
alstroemeria313#1694: but the lr just divides by itself and you are left with Adam
alstroemeria313#1694: an lr on the inside of the ema does nothing
alstroemeria313#1694: take the 1/2 Hessian power ESGD. Stick it in a wrapper that takes its output steps and maintain a squared EMA of them. then multiply each step by lr / sqrt(EMA of its output steps) to compute the actual step to apply.
alstroemeria313#1694: however it is plain that you are going to need lr decay with this to converge
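A hedged sketch of that wrapper (made-up names, not a finished optimizer): keep a squared EMA of the steps the inner rule would take, and rescale the step that actually gets applied by lr / sqrt(EMA). As said, this would still want lr decay to converge.
```python
import torch

class StepVarianceWrapper:
    def __init__(self, beta=0.99, eps=1e-8):
        self.beta, self.eps = beta, eps
        self.ema = {}

    def rescale(self, key, proposed_step, lr):
        # proposed_step is what the inner optimizer (e.g. 1/2 Hessian power ESGD)
        # would have applied; return the step to actually apply.
        ema = self.ema.get(key, torch.zeros_like(proposed_step))
        ema = self.beta * ema + (1 - self.beta) * proposed_step.pow(2)
        self.ema[key] = ema
        return lr * proposed_step / (ema.sqrt() + self.eps)
```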
|
nshepperd#2316: ahh
nshepperd#2316: 6 epochs https://cdn.discordapp.com/attachments/729741769738158194/934868858253242458/epoch6.png
alstroemeria313#1694: what's with lower right?
alstroemeria313#1694: btw the "lower lr by 1/2 each downsampling" pattern is still there
nshepperd#2316: low right has effectively no input bc it is fully noised
alstroemeria313#1694: also the class embedding (this is CIFAR-10) is very very noisy
alstroemeria313#1694: also weights may want to be 1/2 the lr of biases
alstroemeria313#1694: idk if that's super important
alstroemeria313#1694: the 1x1 skips do not actually look super problematic
alstroemeria313#1694: it's mostly the resolution
nshepperd#2316: ooh
alstroemeria313#1694: also *this still holds for ESGD type optimizers* bc it is from gradient noise scale which still affects them
alstroemeria313#1694: i think
alstroemeria313#1694: yeah
alstroemeria313#1694: it should
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/934871874477895710/Screen_Shot_2022-01-23_at_10.07.01_AM.png
alstroemeria313#1694: the four vertical stripes seem to be the four res blocks, they have higher gradient noise scale than the self-attention blocks
alstroemeria313#1694: and their weights have higher gradient noise scale than their biases.
nshepperd#2316: i need to just let this run overnight
nshepperd#2316: goodnight~
|
alstroemeria313#1694: nightnight~ :)
chirp#4545: not first principles but have you seen this https://www.cs.toronto.edu/~rgrosse/courses/csc2541_2021/
alstroemeria313#1694: tbh implement some things and see how they work
alstroemeria313#1694: plain gradient descent, adagrad, adam, second order stuff
alstroemeria313#1694: if you do small problems you can compute Hessians and the like
chilli#5665: Nah, looks interesting
chilli#5665: If you can compute the hessian is that strictly better?
chilli#5665: Like, how does that fit into all the stuff you guys were talking about
alstroemeria313#1694: for a lot of functions Hessian information makes it converge really fast
alstroemeria313#1694: Like say your problem is convex and it doesn't like, have surfaces with zero second derivative/curvature
alstroemeria313#1694: Then you can do x -= lr * grad @ H^-1
alstroemeria313#1694: i.e. use the Hessian as a preconditioner
alstroemeria313#1694: This will often work very well
alstroemeria313#1694: i.e. Newton's method
alstroemeria313#1694: To find zeros of the gradient
alstroemeria313#1694: If your problem is not convex then you can do x -= lr * grad @ |H|^-1
alstroemeria313#1694: (That is, take the eigendecomposition of the Hessian, take the absolute values of the eigenvalues, put those back together with the same eigenvectors, and take the inverse. actually you can take 1 / |lambda| for each eigenvalue lambda and that forms the inverse when you multiply it back by the eigenvectors.)
alstroemeria313#1694: well our problems are too big to compute the Hessian for but we have a method of getting a stochastic approximation to diag |H| with Hessian-vector products with random vectors drawn from particular distributions
alstroemeria313#1694: then we do x -= lr * grad @ diag(|H|)^-1
alstroemeria313#1694: Which because it is diagonal is just dividing by it elementwise.
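A sketch of the stochastic estimate mentioned above, in the ESGD/equilibration style: with v ~ N(0, 1), the EMA of (Hv)^2 has a square root equal to the row norms of H, which upper-bound |diag H| and get used in its place.
```python
import torch

# Assumes params is a list of tensors that require grad and loss was built from them.
def squared_hvp(loss, params):
    grads = torch.autograd.grad(loss, params, create_graph=True)
    vs = [torch.randn_like(p) for p in params]
    hvps = torch.autograd.grad(grads, params, grad_outputs=vs)   # Hessian-vector products
    return [hv.pow(2) for hv in hvps]   # EMA these over steps, then sqrt (+ eps) to precondition
```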
|
chilli#5665: YeH, I guess my question is whether the goals of all these preconditioners is just to approximate some hessian quantity
chilli#5665: And whether the hessian is the “ground truth”, so to speak
alstroemeria313#1694: Adam... sort of kind of some people say there are connections between it and the Hessian or the Fisher or some such
alstroemeria313#1694: idk
alstroemeria313#1694: the Hessian itself has problems actually
chilli#5665: Yeah, that’s what I heard
alstroemeria313#1694: Like with these hvp type diagonal preconditioners you have the choice to either approximate |diag(H)| or diag(|H|) and the latter is better
chilli#5665: Btw, random question about gradient variance
alstroemeria313#1694: Oh?
chilli#5665: Do you want it on all parameters?
chilli#5665: Or just some
alstroemeria313#1694: all if you can, then we can pool it as needed
alstroemeria313#1694: Even better would be gradient covariance but we can't get that for our size problems :)
chilli#5665: Because I think in principle you might be able to make it much more efficient by only getting gradient variance for certain params 😛
alstroemeria313#1694: Ohh?
alstroemeria313#1694: Which ones would you leave out
alstroemeria313#1694: Or do you mean like, auto-reducing over some axes
chilli#5665: Well, no clue
alstroemeria313#1694: The latter is probably fine
chilli#5665: But the extra bandwidth cost there is proportional to the number of parameters you want this info for
|
chilli#5665: I think
chilli#5665: Like, you don’t need to run your entire model under this “get gradient variance” mode
chilli#5665: You can pick and choose
alstroemeria313#1694: ahh
alstroemeria313#1694: mmm maybe i should release this optimizer
alstroemeria313#1694: maybe it is mostly done
alstroemeria313#1694: it behaves p differently from common optimizers and i don't have a super good intuition for how it works yet
chilli#5665: Are you using functorch in your optimizer 😛
alstroemeria313#1694: no
chilli#5665: Or just the tracking part
alstroemeria313#1694: i never got the gradient noise scale part to work well
alstroemeria313#1694: i would need sample gradients or something and those are just too slow to do all the time
alstroemeria313#1694: even with functorch
alstroemeria313#1694: i gave up on doing gradient noise scale stuff in the optimizer when i saw how different gradient noise scale using EMA approximated first and second moments was from actually computing a ton of separate gradients at the same point in parameter space
alstroemeria313#1694: no, i just make people do `loss.backward(create_graph=True)`
alstroemeria313#1694: i need to do like, a line search optimizer at some point
alstroemeria313#1694: functorch would be super great for that
alstroemeria313#1694: bc you need to be able to perturb the params several times per optimizer step and get loss values for them
alstroemeria313#1694: and without functorch it turns into this thing of "save the original params, then write over them with your perturbations and evaluate the closure over and over, then write the step result over the top of them"
alstroemeria313#1694: @chilli however. and this is why i had been bugging you about pure functions and random state before.
|
alstroemeria313#1694: You need to evaluate the model several times *with the same randomness* for any internal random stuff like dropout.
alstroemeria313#1694: And since functorch doesn't have that I would have to have the user pass in the model function and manually fork the RNG
alstroemeria313#1694: using random stuff in the model/loss that is not the same between calls will break a lot of common line search algorithms easily.
chilli#5665: Got it
chilli#5665: Btw, do you mind filing a GitHub issue for this 😛
chilli#5665: https://github.com/pytorch/functorch/issues
alstroemeria313#1694: kk
chilli#5665: You can just copy paste your messages
alstroemeria313#1694: like, there are really powerful (non-stochastic) optimizers that require line search to basically work
alstroemeria313#1694: L-BFGS
alstroemeria313#1694: but there are also simple things like "draw a minibatch and just do a line search in the direction of the negative gradient"
alstroemeria313#1694: which helps with stuff like sgd learning rate tuning
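A hedged sketch of that simple variant: one loss + grad eval, then backtracking along the negative gradient while re-evaluating the closure under no_grad. The closure here just returns the loss (it does not call backward itself), and it has to reuse the same minibatch and the same dropout/randomness on every call for the comparisons to mean anything.
```python
import torch

def backtracking_line_search_step(params, closure, lr0=1.0, shrink=0.5, max_tries=10):
    for p in params:
        p.grad = None
    loss = closure()
    loss.backward()
    grads = [p.grad.detach().clone() for p in params]
    with torch.no_grad():
        lr = lr0
        for _ in range(max_tries):
            for p, g in zip(params, grads):
                p.sub_(lr * g)                  # try the step
            if closure().item() < loss.item():  # simple decrease test (no Armijo condition)
                return lr
            for p, g in zip(params, grads):
                p.add_(lr * g)                  # undo, shrink, retry
            lr *= shrink
    return 0.0   # no acceptable step found
```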
Louis#0144: @chilli functorch is your main thing right?
chilli#5665: Well, yeah, but there’s 2 components to functorch, the function transform/tensor subclasses component and the compilation component
Louis#0144: built in tensortypes for functorch would be great btw
Louis#0144: 👀
Louis#0144: the tensortype API has saved my ass multiple times
alstroemeria313#1694: these thankfully do not super need exactly deterministic gradients
alstroemeria313#1694: They generally do one loss+grad eval per step then evaluate the loss over and over inside a no_grad() until they have decided the step length to take.
chilli#5665: Right, we definitely wanna support the randomness stuff - just haven’t prioritized a reasonable API yet
|
alstroemeria313#1694: i think you also have to not update batchnorm stats in between
alstroemeria313#1694: but functorch does that already right?
alstroemeria313#1694: i can just feed in the same buffers over and over?
chilli#5665: Mmmm, yeah
chilli#5665: I think
chilli#5665: What are tensor types?
chilli#5665: Like, torchtyping?
Louis#0144: https://github.com/patrick-kidger/torchtyping
Louis#0144: ye
Louis#0144: i use it in everything i do now
Louis#0144: its so amazing
Louis#0144: honestly
alstroemeria313#1694: @chilli <https://github.com/pytorch/functorch/issues/408>
chilli#5665: Thanks!
chilli#5665: I think this is orthogonal lol
chilli#5665: To what I work on
chilli#5665: But there has been some discussions about it
alstroemeria313#1694: eh, i made the repo public https://github.com/crowsonkb/esgd
alstroemeria313#1694: i am unsure if it is totally done yet
alstroemeria313#1694: and i have not evaluated it super well or figured out the best heuristics for it
|
Louis#0144: new CARP model just dropped
Louis#0144: #contrastive
alstroemeria313#1694: OHHHHH
It occurred to me why the gradient noise scale goes up 2x each downsample!
It's because there are *fewer activations* in the lower res stages!
Like, in this net I used channel mults 1, 2, 4, 8
For 32x32, 16x16, 8x8, 4x4.
They go down in spatial size 4x but gain 2x from the increased channel count.
So it goes down by 2x each downsample.
Like the gradients wrt those weights are *literally just averaged over fewer elements of activations*
Since they are convolutional layers gradients from activations from all parts of the image get averaged over the weights of the net.
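The arithmetic behind that, with a made-up base width of 128 (the exact width doesn't matter for the ratios):
```python
base = 128
for res, mult in zip([32, 16, 8, 4], [1, 2, 4, 8]):
    print(res, res * res * base * mult)
# 32 -> 131072, 16 -> 65536, 8 -> 32768, 4 -> 16384: each downsample halves the
# number of activation elements the conv weight gradients get averaged over.
```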
alstroemeria313#1694: So for the larger models I would often not go up in channel count by 2x each time to save memory or params or whatever
This is maybe a bad idea bc it makes the problem worse, according to this theory.
alstroemeria313#1694: also, sparsifying activation functions like relu may make the problem unnecessarily worse
alstroemeria313#1694: for some layers
alstroemeria313#1694: bc where they have zero gradient that is an element of the activations that is not reducing the variance
alstroemeria313#1694: it's a resnet so it can't get out of hand but the second layer in a res block may be affected, idk
alstroemeria313#1694: i need to look at the chart again to see if i see this pattern
alstroemeria313#1694: hm i don't see it consistently.
alstroemeria313#1694: this explains why stuff with extra low res inputs and outputs didn't help much
|
alstroemeria313#1694: we can train low res nets better bc we can use larger *batch size* and just do more optimizer steps too
alstroemeria313#1694: the low res inputs/outputs didn't let us increase batch size
Some Point Process#3793: Yeah that makes sense from the way I see it: NN layers are supposed to be *capturing* the variance of the input, since that was the presupposition behind using deep nets for visual recognition/representation learning in the first place. So capturing the variance requires at least some of the layers to not be seeing smoothed activations (across features of the image, or even across channels)
Some Point Process#3793: like if it's just smoothing the activations without "looking at" higher resolution features, then it's just blurring the image, which doesn't count :p
alstroemeria313#1694: ...What if I downsampled with maxblurpool
alstroemeria313#1694: anyway going to test this
alstroemeria313#1694: i made a net that goes up 4x in channels at each downsample
alstroemeria313#1694: this is too chonk to be useful bc of the huge numbers of params, it's just a test to see what happens to the gradient noise scales.
alstroemeria313#1694: maxblurpool is what you get if you do a 2x2 stride 1 max pool followed by a bilinear downsample
alstroemeria313#1694: it's smoother than maxpool but it's nonlinear and can "capture variance" i think
alstroemeria313#1694: idea #2, instead of downsampling the input and sending it to the next u-net stream consisting only of residual blocks.
alstroemeria313#1694: put a non-residual layer and an activation after each downsample
alstroemeria313#1694: then residual from then on.
alstroemeria313#1694: like to let each downsampling stage *filter*
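A rough sketch of idea #2 (names invented; `res_block` stands in for whatever residual block the U-Net already uses): a plain conv + activation right after the downsample so the stage can filter, then residual blocks from then on.
```python
import torch.nn as nn

def downsample_stage(c_in, c_out, res_block, n_blocks=2):
    return nn.Sequential(
        nn.AvgPool2d(2),                       # or maxblurpool, as discussed above
        nn.Conv2d(c_in, c_out, 3, padding=1),  # non-residual "filter" layer
        nn.GELU(),
        *[res_block(c_out) for _ in range(n_blocks)],
    )
```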
Louis#0144: https://twitter.com/lcastricato/status/1485392686051991556?s=20
StellaAthena#3530: @Louis is this just the contrastive model, or also the VQGAN-CLIP style model for generating critiques
Louis#0144: just contrastive
Louis#0144: @Alex Havrilla is working on the other model rn
Louis#0144: it'll use a CoOp boosted version of this contrastive model
Some Point Process#3793: (i think *maybe* this can help explain my reasoning further but it's just a pet hypothesis for now, might be misleading af: <https://en.wikipedia.org/wiki/Law_of_total_variance>)
|
StellaAthena#3530: Actually, the generation isn’t done VQGAN-CLIP style is it
Louis#0144: no it is not
StellaAthena#3530: Or, at least it wasn’t originally
StellaAthena#3530: We should try that
Louis#0144: basically what we do is create a critic using CARP
Louis#0144: and then during generation preference learn on GPT-J per sentence
Louis#0144: lmao
Louis#0144: thats the plan rn
Louis#0144: so per sentence we might say using carp "this sentence needs to be happier"
Louis#0144: and use that to do reward learning on GPT-J and regenerate until it is satisfied
Louis#0144: its slow as hell
Louis#0144: hahaha
StellaAthena#3530: Oh I’m talking about generatihn critiques, not generating stories
Louis#0144: ohh for generating critiques we just have a seq2seq model
Louis#0144: lol
Louis#0144: nothing special
StellaAthena#3530: Right, I’m saying that I wonder if we can do it VQGAN-CLIP style
Louis#0144: alex and i had this discussion a while ago and iirc you would *need* text diffusion
StellaAthena#3530: Why wouldn’t any encoder decoder work
StellaAthena#3530: Or any model with a decoder, I guess
|
𓅬 gabriel_syme 𓅬#3220: this seems like it would be faster with enc-dec models?
Louis#0144: The preference learning is the slow part
Louis#0144: We generate a thousand stories per sentence rn I think
Louis#0144: And do preference learning off of that
Louis#0144: Since you can't really diffuse text another way rn
Louis#0144: And iirc we can't train a GeDi model
Louis#0144: I asked around a bit
StellaAthena#3530: Why can’t you directly optimize the latent space
Louis#0144: The issue was figuring out how to pass a gradient through carp to the lm
Louis#0144: Without causing the lm to collapse
Louis#0144: PPO seemed like the best way
StellaAthena#3530: What does the result look like if you do it with backprop? Does it turn into nonsense or something
Louis#0144: We didn't try extensively
Louis#0144: PPO was faster to set up
Louis#0144: We just started last week
Louis#0144: PPO does work tho
Louis#0144: It gives good results
Louis#0144: Just slow
Louis#0144: @bmk you wanted to know about using carp as an rl reward
Louis#0144: Paper soon
|
bmk#1476: :ultragoose:
StellaAthena#3530: RLCF wen
StellaAthena#3530: Wait, didn’t you just say that you’ve already implemented it @Louis
StellaAthena#3530: Or do you literally mean that it’s done you just need to write it up
Louis#0144: It's mostly done
Louis#0144: Just need to eval and write up
Louis#0144: Alex did it mostly
Louis#0144: Hahaha
Louis#0144: I am but a second author
Louis#0144: What's RLCF?
StellaAthena#3530: Reinforcement learning from CARP feedback. Joking around with the RLHF acronym OpenAI uses for reinforcement learning from human feedback
Louis#0144: Ohhhhh
Louis#0144: Yes this is RLCF
𓅬 gabriel_syme 𓅬#3220: Curious how this works well without external signal. Wouldn't it just reinforce w/e the model is doing?
StellaAthena#3530: There are two models: one that does generation and one that does evaluation. The evaluation model provides the external signal to the generation model and is trained separately
𓅬 gabriel_syme 𓅬#3220: Oh ok so this is to train a J that generates critiques?
Louis#0144: ah no
Louis#0144: this is to train a GPT-J model that generates stories
Louis#0144: not critiques
Louis#0144: we want to train a model that takes some original story and modifies it given a critique
|