alstroemeria313#1694: Why does it seem like all this fancy stuff makes it harder to do complicated things in JAX
alstroemeria313#1694: Except for the few very specific complicated things it was designed to enable.
nshepperd#2316: yeah...
alstroemeria313#1694: fortunately, diffusion isn't especially weird
alstroemeria313#1694: It's actually straightforward
nshepperd#2316: i think flax is similarly magical
alstroemeria313#1694: is it, by any chance, in a totally different way
alstroemeria313#1694: How do you even save and load these models if they were defined in a different framework.
nshepperd#2316: not totally different, no
nshepperd#2316: i think their state dicts are probably similarly formatted
nshepperd#2316: ... you would probably have to write a function to convert between the different ways of spelling parameter names or something
nshepperd#2316: idk, you could use my library ^^;;
nshepperd#2316: i had basically the same reaction lol
alstroemeria313#1694: is there a quick way to upsample feature maps
alstroemeria313#1694: in jax
alstroemeria313#1694: or haiku
nshepperd#2316: jax.image.resize
alstroemeria313#1694: ah ty :)
CRG#8707: Einops repeat would also work.
alstroemeria313#1694: need bilinear at least
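For reference, a minimal sketch of bilinear upsampling with jax.image.resize on an NCHW feature map (the function name, shapes, and 2x factor here are just for illustration):
```python
import jax
import jax.numpy as jnp

def upsample_2x(x):
    # x: (n, c, h, w); jax.image.resize takes the full target shape
    n, c, h, w = x.shape
    return jax.image.resize(x, (n, c, h * 2, w * 2), method='bilinear')

x = jnp.zeros((1, 32, 16, 16))
print(upsample_2x(x).shape)  # (1, 32, 32, 32)
```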
alstroemeria313#1694: well, it's best that way
alstroemeria313#1694: Can you not put a ReLU inside an hk.Sequential because activations aren't hk.Modules?
alstroemeria313#1694: No, it doesn't work that way at all
alstroemeria313#1694: How do I do the RNG thing. I'm not using randomness inside the model why does it want one.
alstroemeria313#1694: ...
alstroemeria313#1694: What does the apply function even *want* as arguments, where is even the documentation.
CRG#8707: You can use hk.without_apply_rng(hk.transform(...)) if you don't use the rng.
alstroemeria313#1694: ty :)
CRG#8707: Or use None
CRG#8707: Apply takes params and the input.
nshepperd#2316: apply(params, prng_key, actual arguments)
alstroemeria313#1694: so it does do shape inference in the init
CRG#8707: Yeah
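A minimal sketch of the init/apply pattern being described here (the `forward` function and shapes are made up for illustration):
```python
import haiku as hk
import jax
import jax.numpy as jnp

def forward(x):
    return hk.Linear(10)(x)

model = hk.transform(forward)
x = jnp.zeros([4, 32])
params = model.init(jax.random.PRNGKey(0), x)        # traces forward, infers shapes
out = model.apply(params, jax.random.PRNGKey(1), x)  # apply(params, rng_key, *args)
```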
alstroemeria313#1694: Why is it channels last by default.
alstroemeria313#1694: No.
alstroemeria313#1694: No.
nshepperd#2316: oh...
nshepperd#2316: because it was written by tensorflow people
alstroemeria313#1694: ok it's going
alstroemeria313#1694: uh
alstroemeria313#1694: where are the biases
alstroemeria313#1694: oh they're inited to zero ok
alstroemeria313#1694: "why am i getting zero output for a zero input"
nshepperd#2316: eheh
alstroemeria313#1694: ok i made a two layer convnet
alstroemeria313#1694: How do I sample from a dataset
alstroemeria313#1694: Because doing it in the PyTorch data loader is incredibly slow
alstroemeria313#1694: Literally why do I have to piece together five pieces of documentation to do basic things
nshepperd#2316: :works_internally: :(
alstroemeria313#1694: How do I get at the params to optimize them
nshepperd#2316: i think you use tree_map
alstroemeria313#1694: No one would ever deal with this level of confusingness and patchwork ecosystem and poor documentation if they didn't want to use TPUs tbh.
kurumuz#5695: yep
nev#4905: yep
when will torch xla improve?
alstroemeria313#1694: Like can you feed the haiku params to optax?
CRG#8707: Yea
alstroemeria313#1694: They are both called 'params' in the examples but are they the same thing.
CRG#8707: It's the same
alstroemeria313#1694: So what happens if there's state in the params that you don't want to optimize?
alstroemeria313#1694: Like batchnorm whatever.
CRG#8707: You have transform_with_state
alstroemeria313#1694: Also can I just tree_map the params to do EMA
CRG#8707: Yeah
CRG#8707: Tree_multimap for multiple inputs
EricHallahan#1051: Optax is the standard optimizer package.
nshepperd#2316: it's just tree_map, it takes as many arguments as you want
alstroemeria313#1694: ok so random numbers.
alstroemeria313#1694: How do I get them.
alstroemeria313#1694: It's some... random key thing? I need to keep getting new random numbers
alstroemeria313#1694: And I uh
alstroemeria313#1694: I need to split the key inside the thing that computes the loss and return the new key from it too?
alstroemeria313#1694: Or split outside.
alstroemeria313#1694: What if I accidentally reuse random numbers through the wrong series of splits
nshepperd#2316: ```
class PRNG(object):
    """Just a stateful wrapper for a jax.random.PRNGKey."""
    def __init__(self, key):
        self.key = key

    def split(self):
        (self.key, subkey) = jax.random.split(self.key)
        return subkey

rng = PRNG(jax.random.PRNGKey(0))

# main loop
while True:
    ...
    do_whatever(rng.split(), ...)
    ...
```
alstroemeria313#1694: ahah
nshepperd#2316: what i do^^
alstroemeria313#1694: oh, haiku has a special thing for this but i need it in the *loss computation* not in the model
nshepperd#2316: you never return a random key, you always split outside and pass something in
alstroemeria313#1694: ah ok
CRG#8707: Haiku has PRNGSequence: https://cdn.discordapp.com/attachments/729741769738158194/890227815134396456/Screenshot_20210922-152714.png
nshepperd#2316: i use that PRNG thing liberally, like in any function that takes a key as argument i just do rng = PRNG(key) at the top and use rng.split() for everything in that function
nshepperd#2316: PRNGSequence is probably basically the same thing
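A minimal sketch of hk.PRNGSequence, which is indeed basically the same thing as the wrapper above:
```python
import haiku as hk
import jax

keys = hk.PRNGSequence(jax.random.PRNGKey(0))  # also accepts an int seed
k1 = next(keys)  # fresh subkey
k2 = next(keys)  # another one, independent of k1
```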
alstroemeria313#1694: ok i have my log snrs, alphas, and sigmas
cfoster0#4356: (since the values along the diffusion schedule are log SNR, we could think of them as being expressed as nepers and convert them to dB, right?)
alstroemeria313#1694: yes
alstroemeria313#1694: works fine
alstroemeria313#1694: just multiply by the scaling factor
alstroemeria313#1694: ok good i am getting a loss
alstroemeria313#1694: the same one when i put in the same key and batch of training examples
cfoster0#4356: nice so I can draw them out in a DAW with audio faders :berk:
alstroemeria313#1694: yep~
alstroemeria313#1694: The grad is wrt the input?
alstroemeria313#1694: Eheh.
alstroemeria313#1694: ok got it
alstroemeria313#1694: i have gradients!
nshepperd#2316: :)
alstroemeria313#1694: apparently people use 'grads' as the name for the second return value from a value_and_grad
gabriel_syme#3220: yay
alstroemeria313#1694: now how do you apply the grads
gabriel_syme#3220: on gradients not grads
random_lurker99a#1890: updates, opt_state = opt.update(grads, opt_state, params)
alstroemeria313#1694: ty :)
random_lurker99a#1890: params = optax.apply_updates(params, updates)
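Putting those two lines in context, a hedged sketch of a full optax step (the optimizer choice and the names `loss_fn`, `key`, `batch`, `params` are placeholders, not anything from this conversation):
```python
import jax
import optax

opt = optax.adam(1e-4)
opt_state = opt.init(params)  # params straight out of hk.transform(...).init

loss, grads = jax.value_and_grad(loss_fn)(params, key, batch)
updates, opt_state = opt.update(grads, opt_state, params)
params = optax.apply_updates(params, updates)
```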
random_lurker99a#1890: just dont :blobsad: , but also jax team is working on preventing accidental wrong reuse
alstroemeria313#1694: loss go down!
gabriel_syme#3220: once you get a first run going, you need to get some rest
alstroemeria313#1694: so where do i jit
random_lurker99a#1890: you jit your outermost update call. Or pmap it if you are on multiple devices
alstroemeria313#1694: ah
nshepperd#2316: in my experiments i have jitted the value_and_grad call
nshepperd#2316: (why is the optax update thing two steps? when is that ever useful?)
alstroemeria313#1694: 11 it/s
random_lurker99a#1890: since pmap also works on a single device, most code bases should be fine with using pmap always for the update just for portability
alstroemeria313#1694: ahh
alstroemeria313#1694: do you need pmap to use all tpu cores?
nshepperd#2316: yeah
alstroemeria313#1694: where do you put it
random_lurker99a#1890: yes, basically unless you know what you are doing with xmap, use pmap as rule of thumb
alstroemeria313#1694: my stuff is already batched
alstroemeria313#1694: it came that way from haiku
alstroemeria313#1694: can i shard it instead
random_lurker99a#1890: shard what, inputs, parameters, opt state?
alstroemeria313#1694: inputs
alstroemeria313#1694: uhh
alstroemeria313#1694: So for the sampling step.
alstroemeria313#1694: I need two functions right, one for the last step and one for all others?
alstroemeria313#1694: Can I jit a function that might take an array and might take None?
random_lurker99a#1890: so just pmap and resize your batches appropriately? tf data can basically handle all of this, ds.shard(host_count, host_id)
alstroemeria313#1694: As one of its params.
alstroemeria313#1694: ah ty
nshepperd#2316: yeah. it's recompiled when it switches from an array to None, so you can have like an 'if arr is None:' in the function
alstroemeria313#1694: ahh ty
EricHallahan#1051: That's a pretty good idea actually.
nshepperd#2316: the compiled code for each combination of arguments is cached
nshepperd#2316: also, jitted functions have to have static tensor dimensions, so it'll be recompiled if the *shape* of an input changes
nshepperd#2316: which probably won't be a problem for diffusion
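A minimal sketch of that None-or-array pattern (the function and names are just for illustration):
```python
import jax
import jax.numpy as jnp

@jax.jit
def shift(x, offset):
    # offset may be an array or None; each case gets its own cached compilation
    if offset is None:
        return x
    return x + offset

shift(jnp.ones(3), None)         # compiles the None branch
shift(jnp.ones(3), jnp.ones(3))  # compiles the array branch
```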
cfoster0#4356: This actually might not work bc the schedules we use go from like -10 to 10 Np, which would be ~~-87 to 87dB~~ -43 to 43dB, and I've never seen a fader with that kind of range
alstroemeria313#1694: it's like -40 to 40
alstroemeria313#1694: i thought
alstroemeria313#1694: or are you using 20 log10 formulation
alstroemeria313#1694: how do i convert to an image
alstroemeria313#1694: How do I convert to NumPy for that matter.
nshepperd#2316: np.array(x)
alstroemeria313#1694: ah
cfoster0#4356: Hmm yes. Is the SNR in our case of power quantities or a root-power quantites?
alstroemeria313#1694: it's alpha**2 / sigma**2
alstroemeria313#1694: so power
alstroemeria313#1694: i.e. a ratio of variances
alstroemeria313#1694: btw ```python
def to_pil_image(x):
    if x.ndim == 4:
        assert x.shape[0] == 1
        x = x[0]
    if x.shape[0] == 1:
        x = x[0]
    else:
        x = x.transpose((1, 2, 0))
    arr = np.array(jnp.round(jnp.clip((x + 1) * 127.5, 0, 255)).astype(jnp.uint8))
    return Image.fromarray(arr)
```
alstroemeria313#1694: here you go
alstroemeria313#1694: takes as input an image in range -1 to 1
alstroemeria313#1694: nchw
cfoster0#4356: Ah ok then yeah I think I should've used the 10 log10 formulation
alstroemeria313#1694: How do I make a Haiku module which takes an argument that tells it how to init the module
nshepperd#2316: does just adding arguments to the `__init__` not work
alstroemeria313#1694: Do I have to store them and reference them in the `__call__`
nshepperd#2316: think so
random_lurker99a#1890: yes, or pass whatever you want in call
alstroemeria313#1694: how do i make a model with more than one input
alstroemeria313#1694: fuck it i'll do Fourier Features w/ an inner linear layer with no bias
nshepperd#2316: i think you can just make your `__call__` take multiple arguments
nshepperd#2316: huh, jax.pmap returns a 'ShardedDeviceArray' with the contents still sharded across the devices. so you can like, compute the grad on minibatches in parallel, then just .mean(0) the output to average them
alstroemeria313#1694: ahh
nshepperd#2316: i was afraid it would like, try to return a contiguous tensor of all the batches
alstroemeria313#1694: how do i concat
alstroemeria313#1694: like torch.cat
alstroemeria313#1694: oh, jnp.concatenate?
random_lurker99a#1890: you probably want to use a psum here
alstroemeria313#1694: ```[[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0.]]``` yay
nshepperd#2316: :)
nshepperd#2316: would .mean() not do the correct cross device communication?
nshepperd#2316: (this isn't explained in the pmap docs, only in some tutorial colab https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/06-parallelism.ipynb)
alstroemeria313#1694: ok got it
alstroemeria313#1694: ```python
class FourierFeatures(hk.Module):
    def __init__(self, output_size, std=1., name=None):
        super().__init__(name)
        assert output_size % 2 == 0
        self.output_size = output_size
        self.std = std

    def __call__(self, x):
        w = hk.get_parameter('w', [self.output_size // 2, x.shape[1]],
                             init=hk.initializers.RandomNormal(self.std, 0))
        f = 2 * jnp.pi * x @ w.T
        return jnp.concatenate([jnp.cos(f), jnp.sin(f)], axis=-1)
```
alstroemeria313#1694: This does shape inference too
alstroemeria313#1694: For the input size
random_lurker99a#1890: you'd need to show the full code, but the general idea to transparently work across hosts is to declare a reduction axis in pmap and then use psum/pmean as appropriate to do the correct collectives
alstroemeria313#1694: the model is now conditioned on log snr!
alstroemeria313#1694: not a U-Net yet though
alstroemeria313#1694: nor residual
random_lurker99a#1890: triggered because using an assert to validate an input instead of a ValueError
alstroemeria313#1694: But I think I'm most of the way there
alstroemeria313#1694: oh no ;_;
alstroemeria313#1694: Yeah I should do that
alstroemeria313#1694: how do i tile stuff
alstroemeria313#1694: ...i'm just going to einops for that tbh
random_lurker99a#1890: jnp.tile?
alstroemeria313#1694: like `rearrange(torch.zeros([25, 1, 4, 4]), '(s1 s2) c h w -> c (s1 h) (s2 w)', s1=5)`
alstroemeria313#1694: i got it btw
alstroemeria313#1694: it's for making demo grids
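einops works on jax arrays too, so the same rearrange call carries over directly (the shapes below just mirror the torch example above):
```python
import jax.numpy as jnp
from einops import rearrange

batch = jnp.zeros([25, 1, 4, 4])
grid = rearrange(batch, '(s1 s2) c h w -> c (s1 h) (s2 w)', s1=5)
print(grid.shape)  # (1, 20, 20)
```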
alstroemeria313#1694: ok now for residual blocks
random_lurker99a#1890: you should turn your jax code along into a blog post for newcomers
alstroemeria313#1694: eheh...
alstroemeria313#1694: gwern always wants me to do write-ups too
alstroemeria313#1694: ...we have residual blocks!
alstroemeria313#1694: ```python
def res_conv_block(c_mid, c_out):
    def inner(x):
        x_skip = x if x.shape[1] == c_out else hk.Conv2D(c_out, 1, with_bias=False, data_format='NCHW')(x)
        x = jax.nn.relu(x)
        x = hk.Conv2D(c_mid, 3, data_format='NCHW')(x)
        x = jax.nn.relu(x)
        x = hk.Conv2D(c_out, 3, data_format='NCHW')(x)
        return x + x_skip
    return inner
```
alstroemeria313#1694: ok now U-Net
alstroemeria313#1694: ...Do I have to use a transposed convolution for upsampling or can I use something actually good
alstroemeria313#1694: uh is jax.image.resize nchw or nhwc or what
alstroemeria313#1694: oh
alstroemeria313#1694: it's general, not 2d
alstroemeria313#1694: Can I double vmap it
EricHallahan#1051: neither :berk:
alstroemeria313#1694: ...no
alstroemeria313#1694: ...yes
alstroemeria313#1694: lol
alstroemeria313#1694: idk it's kinda slower now
alstroemeria313#1694: why
alstroemeria313#1694: ok now i need EMA
alstroemeria313#1694: how do i pmap also
Deleted User#0000: @alstroemeria313 have you ever tried alternating between VQGAN and Diffusion?
alstroemeria313#1694: no
Deleted User#0000: i noticed Diffusion yields very symmetric images
Deleted User#0000: ok cool, just curry
nev#4905: clip+vqgan+diffusion = agi probably??
alstroemeria313#1694: ```python
@jit
def ema_update(params, averaged_params, decay):
    return jax.tree_map(lambda p, a: p * (1 - decay) + a * decay, params, averaged_params)
```
alstroemeria313#1694: this is still kind of slow
alstroemeria313#1694: compared to gpu
alstroemeria313#1694: i am going to let this run for a while
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/890262073160912966/Unknown-141.png
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/890265568928759828/Unknown-142.png
nshepperd#2316: getting better~
nshepperd#2316: tpu really is slow though huh
nshepperd#2316: oh huh... torch.load just uses pickle. so an evil checkpoint could pwn your machine
bmk#1476: thankfully nobody shares checkpoints online
nshepperd#2316: (:
alstroemeria313#1694: i still can't figure out pmap
random_lurker99a#1890: you really only need to do two things at a base level
random_lurker99a#1890: pmap your update function, and then apply psum to your grads on the bound axis
alstroemeria313#1694: i can't pmap update
random_lurker99a#1890: unless you are doing something you shouldnt be doing :harold:
nshepperd#2316: pmap the values_and_grads
random_lurker99a#1890: :delet:
nshepperd#2316: how can you even pmap the update function, its one input one output
alstroemeria313#1694: I have a bunch of different inputs and outputs to my update function
alstroemeria313#1694: `UnfilteredStackTrace: ValueError: pmap got inconsistent sizes for array axes to be mapped`
alstroemeria313#1694: i always get this
alstroemeria313#1694: Do I literally need to replicate all of my inputs.
nshepperd#2316: like you can, but like averaging the results of n different Adam steps gives you something that... isn't Adam
alstroemeria313#1694: How do I even replicate a key
alstroemeria313#1694: you pmean the gradients
alstroemeria313#1694: synchronize them
alstroemeria313#1694: all shards do the same thing
nshepperd#2316: oh
alstroemeria313#1694: I didn't get it to work bc other stuff
alstroemeria313#1694: Like how do I replicate random keys
alstroemeria313#1694: Can I pmap only some arguments lol
nshepperd#2316: the random key is a tensor, so yeah you can have batched keys
alstroemeria313#1694: can i do n splits
alstroemeria313#1694: they need to have different randomness
nshepperd#2316: yeah you can do n splits then jnp.stack them
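A minimal sketch of splitting one key into per-device keys that can then be passed as a mapped (axis 0) argument to a pmapped function:
```python
import jax

key = jax.random.PRNGKey(0)
keys = jax.random.split(key, jax.local_device_count())  # shape (n_devices, 2), one key per core
```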
alstroemeria313#1694: ok so then
alstroemeria313#1694: the params
alstroemeria313#1694: i need to replicate them?
random_lurker99a#1890: you need to pmap the init as well
alstroemeria313#1694: The which.
random_lurker99a#1890: the haiku unit
random_lurker99a#1890: i
alstroemeria313#1694: ...
alstroemeria313#1694: No that's elsewhere
alstroemeria313#1694: Not in the thing I'm pmapping
nshepperd#2316: uhh, you can pass in_axes=(None,0,0) or similar to pmap to make the first argument broadcasted to all cores
alstroemeria313#1694: oh ok
nshepperd#2316: similar to vmap
alstroemeria313#1694: and for out
alstroemeria313#1694: oh, i use the default
alstroemeria313#1694: why am i getting an unbound axis name
alstroemeria313#1694: i need to pmean inside the thing i pmapped?
random_lurker99a#1890: :delet: no, pmap the init to give you params of the correct shape
random_lurker99a#1890: yes in a bound axis context
nshepperd#2316: pmap the init?
alstroemeria313#1694: ok so
alstroemeria313#1694: Now I take the first... replica? Of the loss and grads?
nshepperd#2316: if you do out_axes=(None,None) you seem to get the first replica automatically but i dunno if you're supposed to do that
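Putting the pieces from this discussion together, a hedged sketch of what a pmapped grad step could look like: params are broadcast to every device, keys and batch are split along axis 0, and the grads are averaged with pmean on the named axis (`loss_fn` is a placeholder, not anything defined above):
```python
import jax
from functools import partial

@partial(jax.pmap, axis_name='devices', in_axes=(None, 0, 0))
def p_value_and_grad(params, key, batch):
    loss, grads = jax.value_and_grad(loss_fn)(params, key, batch)
    # average across devices so every replica ends up with identical grads
    grads = jax.lax.pmean(grads, axis_name='devices')
    loss = jax.lax.pmean(loss, axis_name='devices')
    return loss, grads
```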
clay#9806: oh no, are we losing you to TPU-land?
alstroemeria313#1694: it didn't crash this time!
alstroemeria313#1694: It is still slow though.
nshepperd#2316: @random_lurker99a seems to be suggesting that you just keep the params as ShardedDeviceArray tensors replicated across all devices and do n synchronized pmapped update loops
alstroemeria313#1694: and opt state?
nshepperd#2316: yeah, replicated opt state too i guess
alstroemeria313#1694: so this will also break if the batch size isn't divisible by n_devices
alstroemeria313#1694: what do
random_lurker99a#1890: pmap opt init as well
random_lurker99a#1890: only have reasonable multiples of 8 if you want to use TPUs
alstroemeria313#1694: what do i actually do
alstroemeria313#1694: Drop the last examples?
random_lurker99a#1890: yes
alstroemeria313#1694: ah
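i.e. something like this before handing the batch to pmap (a sketch; the batch shape is just an example):
```python
import numpy as np
import jax

batch = np.zeros((50, 1, 28, 28))  # e.g. 50 MNIST examples
n_dev = jax.local_device_count()
usable = (batch.shape[0] // n_dev) * n_dev
sharded = batch[:usable].reshape(n_dev, -1, *batch.shape[1:])  # (n_dev, per_device, ...)
```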
alstroemeria313#1694: nope this is no faster
alstroemeria313#1694: idk why
xcodevn#9003: make sure you are using TPU cores instead of CPU. print(jax.devices()) helps
alstroemeria313#1694: it is tpu
alstroemeria313#1694: i mean it is not faster than one tpu core
random_lurker99a#1890: you may be doing bad things wrt device to host transfer
alstroemeria313#1694: probably
random_lurker99a#1890: use larger batches
alstroemeria313#1694: i literally started looking at JAX four hours ago
alstroemeria313#1694: for the first time
alstroemeria313#1694: i'm gonna take a break ^^;;
random_lurker99a#1890: good progress
alstroemeria313#1694: i mean the model works, just only on one tpu core
alstroemeria313#1694: this is so horrible ```python
def res_conv_block(c_mid, c_out):
    def inner(x):
        x_skip = x if x.shape[1] == c_out else hk.Conv2D(c_out, 1, with_bias=False, data_format='NCHW')(x)
        x = jax.nn.relu(x)
        x = hk.Conv2D(c_mid, 3, data_format='NCHW')(x)
        x = jax.nn.relu(x)
        x = hk.Conv2D(c_out, 3, data_format='NCHW')(x)
        return x + x_skip
    return inner

def diffusion_model(x, log_snr):
    ff = FourierFeatures(16, 0.2)(log_snr[:, None])
    ff_planes = jnp.tile(ff[..., None, None], [1, 1, x.shape[2], x.shape[3]])
    x = jnp.concatenate([x, ff_planes], axis=1)
    x = res_conv_block(32, 32)(x)
    x = res_conv_block(32, 32)(x)
    x_2 = jax.image.resize(x, [*x.shape[:2], 14, 14], 'cubic')
    x_2 = res_conv_block(32, 32)(x_2)
    x_2 = res_conv_block(32, 32)(x_2)
    x_3 = jax.image.resize(x_2, [*x_2.shape[:2], 7, 7], 'cubic')
    x_3 = res_conv_block(32, 32)(x_3)
    x_3 = res_conv_block(32, 32)(x_3)
    x_3 = res_conv_block(32, 32)(x_3)
    x_3 = res_conv_block(32, 32)(x_3)
    x_3 = jax.image.resize(x_3, [*x_3.shape[:2], *x_2.shape[2:]], 'cubic')
    x_2 = jnp.concatenate([x_2, x_3], axis=1)
    x_2 = res_conv_block(32, 32)(x_2)
    x_2 = res_conv_block(32, 32)(x_2)
    x_2 = jax.image.resize(x_2, [*x_2.shape[:2], *x.shape[2:]], 'cubic')
    x = jnp.concatenate([x, x_2], axis=1)
    x = res_conv_block(32, 32)(x)
    x = res_conv_block(32, 1)(x)
    return x

model = hk.transform(diffusion_model)
```
nshepperd#2316: haiku is weird
alstroemeria313#1694: it's like it wants to work like nngraph or keras
alstroemeria313#1694: except it automatically incorporates every operation on an array, not just ones contained in the framework's layer class
nshepperd#2316: the mutant child of tensorflow and keras...
nshepperd#2316: idk why the major frameworks resort to these weird global mutable state sorta things
xcodevn#9003: It is not that weird, you can implement the model in a very similar way to pytorch. Except for the inconvenience of `hk.transform`.
alstroemeria313#1694: it is not at all similar
alstroemeria313#1694: I know PyTorch well and this is bizarro land
nshepperd#2316: similar but totally different because you have to run under some bizarre mutant python interpreter to simulate pytorchness
nshepperd#2316: which stops any of the code making sense
alstroemeria313#1694: how do you get a parameter count btw?
alstroemeria313#1694: can i map reduce the params pytree
alstroemeria313#1694: idk how to reduce ^^;;
xcodevn#9003: jax.tree_reduce i think
CRG#8707: HK.data_structtures.tree_size I think?
nshepperd#2316: `len(jax.tree_util.tree_leaves(params))`
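For an actual parameter count rather than the number of leaves, summing the leaf sizes works; a small sketch with a made-up params dict:
```python
import jax
import jax.numpy as jnp

params = {'linear': {'w': jnp.zeros((784, 256)), 'b': jnp.zeros(256)}}  # e.g. from model.init
n_params = sum(x.size for x in jax.tree_util.tree_leaves(params))
print(n_params)  # 200960
```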
alstroemeria313#1694: oh hk has this https://dm-haiku.readthedocs.io/en/latest/api.html#haiku.EMAParamsTree
alstroemeria313#1694: I didn't need to roll my own
alstroemeria313#1694: > A positive integer, EMA has no effect until the internal counter has reached warmup_length at which point the initial value for the decaying average is initialized to the input value after warmup_length iterations.
alstroemeria313#1694: I actually use a schedule in my PyTorch diffusion code
alstroemeria313#1694: Not just zero then it kicking in.
xcodevn#9003: can you elaborate a bit on how it is different from pytorch from the model implementation point of view?
chilli#5665: iirc it rolls in model creation with the forward pass
alstroemeria313#1694: yes
alstroemeria313#1694: there is an 'init' function that does shape inference and stuff
alstroemeria313#1694: by tracing
chilli#5665: Sasha Rush was also complaining a lot about this I remember
alstroemeria313#1694: In PyTorch when you instantiate a module its parameters and buffers are created immediately.
alstroemeria313#1694: Then calling that module on something returns the output of the function it implements.
alstroemeria313#1694: You init its state in `__init__()` and then you only do operations with the already inited state in the `forward()`.
alstroemeria313#1694: In Haiku you do this bizarro thing where you stick the hyperparameters in instance variables in the `__init__()` and then do the actual init using them in the `__call__()`.
xcodevn#9003: yes, that is the big differences.
alstroemeria313#1694: So the modules are totally different! One creates a callable w/ state and another creates some sort of function-creator callable.
chilli#5665: hmm
chilli#5665: I'm not totally sure if there's a better way to do it in Jax, unfortunately
chilli#5665: I think the constraint that they're running into
chilli#5665: is that you need some step that actually figures out what stuff in your function is part of your module
xcodevn#9003: I mean you can *declare* your model in a very similar way to Pytorch.
alstroemeria313#1694: It can only look superficially similar to PyTorch if you don't make custom layers?
chilli#5665: Like, what haiku needs to do is
chilli#5665: turn
chilli#5665: ```
def forward(self, x):
    return self.params * x
```
chilli#5665: into
xcodevn#9003: the limitation is that we cannot use it right after declaration.
chilli#5665: ```
def forward(self, params, x):
    return params * x
```
alstroemeria313#1694: ...What happens if you make an hk.Conv2D and then use it in two different places actually. Does it make one layer or two.
xcodevn#9003: one
alstroemeria313#1694: Wait, how
bmk#1476: maybe it has state and tracks whether it's been used
random_lurker99a#1890: give it a different name
alstroemeria313#1694: yes
alstroemeria313#1694: i realized how
alstroemeria313#1694: shortly after asking ^^;;
alstroemeria313#1694: The state is lazily inited in the `__call__()` and associated with that instance.
alstroemeria313#1694: So if you use it twice it just grabs the existing state.
alstroemeria313#1694: Right?
xcodevn#9003: i not really sure about this...
xcodevn#9003: haiku has something called `current_frame`, that tracks the current emulated call stack.
alstroemeria313#1694: like you have hk.get_parameter(), right?
alstroemeria313#1694: And it takes a name that's relative to the current module?
alstroemeria313#1694: Using some sort of magic that knows what module it's executing in.
alstroemeria313#1694: I think it only looks like PyTorch if you're not doing anything slightly unusual
random_lurker99a#1890: https://github.com/deepmind/dm-haiku/blob/main/haiku/_src/base.py#L46
chilli#5665: another example of somewhere I think this implicit tracing thing breaks really hard for
chilli#5665: is when you do something with `lax.cond`
chilli#5665: (or maybe it's `hk.cond` with Haiku?)
chilli#5665: Since you need to make sure that you use the exact same parameters in both sides of the conditional
xcodevn#9003: yeah, something funny will happen if we use `jax` transformations inside `hk.transform` functions.
chilli#5665: For example, something like
```
hk.cond(lambda x: x > x.sum(),
        lambda x: x * self.param,
        lambda x: x)
```
chilli#5665: that'll break
chilli#5665: the last time I checked
xcodevn#9003: i think `haiku` makes sense if we think in a purely functional way. it's like react in the sense that any state should be registered and accessed using `get_state` and `set_state`.
nshepperd#2316: there is a better way
nshepperd#2316: https://github.com/nshepperd/jaxtorch/blob/master/README.md
nshepperd#2316: i need to make some more advanced examples
alstroemeria313#1694: yeah.
xcodevn#9003: ok, it's time to introduce my repo again LOL
https://github.com/NTT123/pax
this is what you all want, like pytorch, it creates parameters when you create a module.
alstroemeria313#1694: Which is totally not PyTorch-like.
gollark#3909: I've always found the React hooks vaguely horrific.
gollark#3909: If you want to do something OOPy, which the hooks are for, then you should at least use the mechanisms the language actually provides for it.
nshepperd#2316: how do you make the modules act as pytrees when they may need some `__init__` arguments for initialization
nshepperd#2316: like how do you save that stuff
xcodevn#9003: it acts like pytree because it has a `tree_flatten` method
xcodevn#9003: Oh, I store pytree information in a dictionary. It tells which attributes are part of the pytree.
nshepperd#2316: oh, you save non-tensor fields and... bypass `__init__` with `object.__new__`?
xcodevn#9003: hmm, no, I keep track of all parameters and states
xcodevn#9003: and when `tree_flatten` is called.
xcodevn#9003: it will list all the attributes and check if an attribute is a parameter or a state
xcodevn#9003: if it is, append to the `children` list of the pytree.
xcodevn#9003: see, https://github.com/NTT123/pax/blob/main/pax/module.py#L268
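For anyone following along, a toy sketch of the general module-as-pytree idea (not pax's actual code): the tensors are the pytree leaves, so jax transformations like grad return another object of the same class.
```python
import jax
import jax.numpy as jnp

@jax.tree_util.register_pytree_node_class
class Linear:
    def __init__(self, w, b):
        self.w = w
        self.b = b

    def __call__(self, x):
        return x @ self.w + self.b

    def tree_flatten(self):
        # children (tensor leaves) seen by jax transformations, plus static aux data
        return (self.w, self.b), None

    @classmethod
    def tree_unflatten(cls, aux, children):
        return cls(*children)

lin = Linear(jnp.ones((3, 1)), jnp.zeros(1))
grads = jax.grad(lambda m, x: m(x).sum())(lin, jnp.ones((2, 3)))  # grads is also a Linear
```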
chilli#5665: oh lol, you're that guy
xcodevn#9003: i'm that guy ...
chilli#5665: haha, I just didn't realize that you were the person who wrote pax
chilli#5665: one thing that's kinda weird about making modules pytrees
chilli#5665: is that you return a module of gradients
xcodevn#9003: yes...
xcodevn#9003: gradient is really a module
chilli#5665: yeah...
chilli#5665: I'm not convinced that's a great idea lol
nshepperd#2316: you can try and run that module and get total nonsense
xcodevn#9003: you can think it as a tree
xcodevn#9003: that is fine
chilli#5665: I think it was one of the main issues preventing us from doing this design in PyTorch
chilli#5665: Like, what if the module contains some non-tensor state that's expensive?
xcodevn#9003: basically, Pax's module cannot store non-tensor state.
chilli#5665: oh hmm
chilli#5665: really?
xcodevn#9003: yes
chilli#5665: I don't think it's so bad to allow non-tensor state like bools
chilli#5665: and stuff
xcodevn#9003: yeah, you can
xcodevn#9003: but you cannot modify it in the forward pass
xcodevn#9003: The Pax's main point is:
xcodevn#9003: You can do anything outside Jax transformation
xcodevn#9003: You cannot modify non-tensor states inside jax's transformation.
alstroemeria313#1694: So if you do TRC and they give you access to multiple TPUs are they/can they be set up properly to train a thing in a distributed fashion on all of them?
chilli#5665: yeah, what I'm concerned about is say:
```
class Foo(pax.Module):
    def __init__(self):
        self.foo = <expensive non-tensor state used for say, tracking the module>
        ...
```
chilli#5665: Now, every time you call grad, you're gonna create this module with extra weird state
chilli#5665: that's expensive
xcodevn#9003: no.
xcodevn#9003: jax.jit prevents this from happening
chilli#5665: no?
chilli#5665: uh...
chilli#5665: I don't see how
xcodevn#9003: it's basically a compiled program
xcodevn#9003: with tensor states only
chilli#5665: this part is not really part of jit
chilli#5665: basically, what jit is doing is
alstroemeria313#1694: I uh, this diffusion project would never have gotten off the ground if I had to learn JAX at the same time.
alstroemeria313#1694: Because I never would have been able to do enough fast experimentation to get it good.
xcodevn#9003: yeah, i agree. But in the end, you have to use `jax.jit`
chilli#5665: ```
tree_flatten(inputs)
jitted_f(flat_inputs)
tree_unflatten(outputs) # this is where the expensive state is created
```
chilli#5665: The fundamental issue is that you're returning a `pax.Module`
nshepperd#2316: there's a thing called tpu pods which have some sort of special support in jax. but you don't get those :(. just single tpus. to use the 100 preemptible TPUs you probably have to implement something like shawn's swarm training
chilli#5665: and if you have this constraint that `grad` returns a `pax.Module`, then you're basically adding a restriction to `pax.Module` that it cannot hold much state other than tensors
alstroemeria313#1694: ahah i remember the swarm
nshepperd#2316: like manually shipping updates across the network
alstroemeria313#1694: it was so ridiculous
xcodevn#9003: let me see if i understand you correctly
alstroemeria313#1694: He managed to use so much more compute than everyone else
chilli#5665: Otherwise, any state is going to be replicated whenever you return the module
alstroemeria313#1694: i have a use for the preemptibles actually
EricHallahan#1051: Ben has some swarm code too.
https://github.com/kingoflolz/swarm-jax
alstroemeria313#1694: Doing a ton of sampling from diffusion models for FID evaluation
chilli#5665: Let's imagine some ridiculous module initialization code like
alstroemeria313#1694: wow
nshepperd#2316: why does it have to use haiku :/
chilli#5665: ```
class Foo(pax.Module):
    def __init__(self):
        self.foo = network_call_to_load_state
        ...
```
nshepperd#2316: ahh. that is parallelizable :)
xcodevn#9003: ok, i think i understand you now....
xcodevn#9003: you are talking about a case when there is something heavy non-tensor inside a Pax's module.
chilli#5665: yes
chilli#5665: or basically, any time that creating a Pax module has ramifications
xcodevn#9003: my short answer is... don't do this
chilli#5665: haha
chilli#5665: that's fair
xcodevn#9003: however
xcodevn#9003: you can do it
xcodevn#9003: but, you get undefined behaviours
xcodevn#9003: that is the real reason why you should not do it this way.
xcodevn#9003: as I said because of `jax.jit` which prevents side effects.
chilli#5665: well, it's not really related to `jax.jit` I think
chilli#5665: mmm
chilli#5665: well
chilli#5665: I get what you mean
chilli#5665: if you're calling `grad(f(pax.Module))` within a jitted function
chilli#5665: you'll have problems
nshepperd#2316: jax.jit symbolically executes the tensor computations, so the side effects will only happen the first time
chilli#5665: I was thinking of `jit(grad(f(pax.Module)))`
chilli#5665: which will allow side effects to happen multiple times
xcodevn#9003: oh, this is a bit details
xcodevn#9003: but there are two possible cases here
chilli#5665: but yeah, fundamentally, if your transformations are able to return modules
chilli#5665: this imposes restrictions on what you can/cannot do in your module initialization
xcodevn#9003: if your heavy thing change its internal state.
xcodevn#9003: jax.jit will not detect that modification
xcodevn#9003: therefore, execute the same compiled program without side effects.
chilli#5665: mmm
chilli#5665: no
chilli#5665: well
chilli#5665: that's not really relevant here
chilli#5665: since we only have one module
chilli#5665: that's already been initialized
chilli#5665: my point is
chilli#5665: when you call
`jit(grad(f(pax.Module)))`
chilli#5665: when will `pax.Module.__init__` be called?
nshepperd#2316: i think side effects in `__init__` won't actually happen again anyway, because the unflatten thing bypasses init entirely
nshepperd#2316: it uses `object.__new__` to create a new module object
chilli#5665: hmmm?
nshepperd#2316: and then just stuffs the fields back in
xcodevn#9003: you create the module outside of any jax's tranformation
xcodevn#9003: so,, `__init__` is called long before you call `jax.grad`.
xcodevn#9003: oh, i see....
nshepperd#2316: this bit https://github.com/NTT123/pax/blob/7828ab3c727079d681f32d2e0a70cba34884a6e7/pax/module.py#L294
xcodevn#9003: you meant, clone of the modules
chilli#5665: I see
nshepperd#2316: imo it's still not really worth bundling the params with the module like this though
chilli#5665: Doesn't `__new__` always call `__init__` after though?
nshepperd#2316: passing them in as a parameter isn't that bad
xcodevn#9003: it is how pytorch does it
chilli#5665: so I think my question still stands
nshepperd#2316: no, it doesn't
xcodevn#9003: when we clone a module, no `__init__` is called, we copy all attributes.
chilli#5665: hmm
chilli#5665: maybe I should write a code examle
chilli#5665: https://www.tutorialspoint.com/why-is-init-always-called-after-new-in-python
nshepperd#2316: i don't think that means that `__new__` calls `__init__`. it means that they are called in sequence when you create something the normal way
chilli#5665: hmm
xcodevn#9003: interestingly, there is a cost of using pytree like this.
xcodevn#9003: it is the cost of calling `tree_flatten` and `tree_unflatten`.
xcodevn#9003: my estimate, it is < 1% of training time.
xcodevn#9003: calling `flatten/unflatten` for a ResNet101 takes
xcodevn#9003: 1 minute for 10,000 iterations.
alstroemeria313#1694: I implemented DDPM now
chilli#5665: yeah I think you're right
nshepperd#2316: yay
alstroemeria313#1694: So I think it is feature parity with my PyTorch MNIST notebook
alstroemeria313#1694: How do I do bf16
alstroemeria313#1694: I uh, need some sort of exactly-the-same resampling thing if I'm going to convert the models to PyTorch
chilli#5665: I think it's bf16 by default
chilli#5665: wait
chilli#5665: they might have changed that recently
alstroemeria313#1694: Do I need to just make blurpool2d and its reverse w/ transposed convolutions
alstroemeria313#1694: So I can calculate the kernels and use the same ones on both JAX and PyTorch
nshepperd#2316: uh, tree_map the params with whatever jnp function converts things to bf16
alstroemeria313#1694: oh
nshepperd#2316: and make sure the inputs are bf16 too
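i.e. roughly this (a sketch; `params` and `batch` stand for whatever you already have from init and the data loader):
```python
import jax
import jax.numpy as jnp

params_bf16 = jax.tree_map(lambda p: p.astype(jnp.bfloat16), params)
batch_bf16 = batch.astype(jnp.bfloat16)
```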
xcodevn#9003: so, does your point still stand?
chilli#5665: It used to be bf16 by default, no?
nshepperd#2316: idk
xcodevn#9003: i think it is
nev#4905: discord underlines links too
chilli#5665: well, yes, it just has a less compelling example I think
chilli#5665: lol
alstroemeria313#1694: Yes. Yes I will just make my own downsampling and upsampling ops to make sure they're exactly the same.
nshepperd#2316: probably
chilli#5665: but it is making me reconsider whether simply returning modules from `grad` is viable
chilli#5665: it's still dissatisfying lol
chilli#5665: but it might be ... acceptable
nshepperd#2316: i think maybe preemptible TPUs could be a good fit for SSVRG or something
alstroemeria313#1694: wow eta=1 is making higher quality samples
nshepperd#2316: bc that involves occasionally making huge minibatches
nshepperd#2316: computing the gradient on a megabatch
xcodevn#9003: LOL, my point still stands: do not store anything heavy inside a module.
nshepperd#2316: then using it to reduce variance on subsequent steps
chilli#5665: well... I actually think it's fine
nshepperd#2316: so like, swarm training, with the occasional synchronized megabatch
nshepperd#2316: downside of SSVRG though is you need to keep two whole sets of parameters around
chilli#5665: well
chilli#5665: hmm
xcodevn#9003: bc it is designed for tensor computation, not downloading things from the internet.
chilli#5665: you still have weird tracing issues
random_lurker99a#1890: what do you want to store wtf
chilli#5665: I was just giving an example earlier lol
random_lurker99a#1890: sorry dropping in and out of meetings :blobsad:
chilli#5665: I'm curious, do you know whether the flax/haiku people have any thoughts on just making modules pytrees?
chilli#5665: @xcodevn has made me rethink one of my main objections
chilli#5665: tbh, it's kind of an obvious design
chilli#5665: (which is a good thing!)
chilli#5665: so I'm wondering what the reasons are that they didn't go for it
xcodevn#9003: yes, it is so obvious to me when i saw it the first time.
random_lurker99a#1890: flax and haiku are completely separate groups of people hah, but either way havent heard anything on this regard, I dont think it's a pressing priority
xcodevn#9003: haha, any jax-based thing has tracing issues.
chilli#5665: well, it's not a pressing priority, but if I was them, I would definitely have thoughts lol
chilli#5665: since they're trying to make "the" nn.Module package for Jax
chilli#5665: and all of a sudden, there's an explosion of Pytree-based nn.Module packages from OSS randos
chilli#5665: (no offense @xcodevn )
random_lurker99a#1890: true for flax but not working with that so really dont know
random_lurker99a#1890: (haiku priority is making DM users happy, arguably)
xcodevn#9003: the idea of pytree module is so so *cool* when i saw it the first time.
chilli#5665: that was actually our initial design in functorch lol
chilli#5665: but we stepped away from it due to the aforementioned issue (it's semantically strange to create nn.Modules that are complete nonsense)
chilli#5665: but I've been reconsidering this option recently
xcodevn#9003: tbh, I think haiku is quite good... really. It is purposely, carefully designed for DM user.
xcodevn#9003: and `optax` is an amazing piece of software....
chilli#5665: I also have mixed feelings on optax
chilli#5665: lol
xcodevn#9003: i love optax so much...
chilli#5665: lol
chilli#5665: why?
xcodevn#9003: composition of `GradientTransformation`
xcodevn#9003: is a beautiful idea.
chilli#5665: hmm
chilli#5665: I agree the idea is nice
chilli#5665: haha
xcodevn#9003: it's soo cool, it works like lego pieces
alstroemeria313#1694: JAX MNIST diffusion demo grid https://cdn.discordapp.com/attachments/729741769738158194/890314821176795168/Unknown-144.png
alstroemeria313#1694: 26 epochs.
random_lurker99a#1890: did you get pmap to work properly
alstroemeria313#1694: no
alstroemeria313#1694: not yet
alstroemeria313#1694: i am taking a break while this runs
alstroemeria313#1694: i don't like the JAX version as much
alstroemeria313#1694: it is not as clean and the functions pass a ton of arguments around
alstroemeria313#1694: I wonder if I could use PyTorch/XLA if I took the bilinear upsamples out and stuck learnable transposed convolutions in instead.
alstroemeria313#1694: F.interpolate() may be the culprit
random_lurker99a#1890: Jaxers would say that part is cleaner because explicit about any changes to the state :chadhuber:
alstroemeria313#1694: It was F.interpolate().
alstroemeria313#1694: I now have a diffusion model training on a Colab TPU with PyTorch/XLA *and it is going twice as fast as the JAX one*.
alstroemeria313#1694: (What dtype is PyTorch/XLA using by default actually, is it, by any chance, bf16)
alstroemeria313#1694: (And if it's not that it has to be that using a transposed convolution for upsampling is just way faster, right?)
alstroemeria313#1694: I ran into several PyTorch/XLA footguns on the way to fixing it.
random_lurker99a#1890: that is depressing but if you are only using one device not terrible indicative
alstroemeria313#1694: only one rn
random_lurker99a#1890: wasting 7 perfectly fine TPUs :blobsad:
alstroemeria313#1694: Because the JAX one is fp32 rn.
alstroemeria313#1694: ahah it doesn't work in bf16
alstroemeria313#1694: (It does work in fp16 on GPU)
alstroemeria313#1694: Can I not use PyTorch/XLA multiprocessing on Colab?
alstroemeria313#1694: it's bad though
alstroemeria313#1694: idk why
alstroemeria313#1694: another footgun i'm sure
alstroemeria313#1694: @nshepperd i had an idea
alstroemeria313#1694: what if... we trained a noise level conditioned discriminator for diffusion.
alstroemeria313#1694: And we got the fakes to feed it by taking the original sampled timesteps for that batch, subtracting small random amounts, and taking DDIM steps to the new timesteps.
alstroemeria313#1694: And we would get the reals to feed it by noising the clean reals according to the new timesteps (so the timestep distribution is the same for the fakes and the reals).
alstroemeria313#1694: How long does it normally take TRC to respond to an application
gabriel_syme#3220: yes please
StellaAthena#3530: < a week
alstroemeria313#1694: :)
alstroemeria313#1694: I couldn't get this to work.
alstroemeria313#1694: The adversarial loss seriously hurt the MSE objective.
gabriel_syme#3220: is there something like train_state in haiku
gabriel_syme#3220: in flax I'd just pmap the train/eval steps, create a 'state' by passing call, params, and optimizer, and then replicate it on all cores. I think that sort of was it
xcodevn#9003: what do you mean `train_state`. Is it a boolean variable telling train mode or eval mode? if that is the case then no
gabriel_syme#3220: https://flax.readthedocs.io/en/latest/flax.training.html#train-state
xcodevn#9003: oh, I see. It is a no. At least officially.
gabriel_syme#3220: that's a shame
xcodevn#9003: let's just say haiku wants you to do it explicitly.
gabriel_syme#3220: does that mean they want you to write the source code of that?
gabriel_syme#3220: jk, I guess it means smth more specific to how you structure the code
xcodevn#9003: it means you have to pass parameters, auxiliary, optimizer state explicitly :D
xcodevn#9003: As i understand, it is not that haiku's authors don't have time to offer helper utility functions. They just don't want to create another wrapper if you can do it explicitly.
gabriel_syme#3220: oh god
gabriel_syme#3220: that's a big no for me, but I can see why it might be awesome for AI researchers
xcodevn#9003: let's just say it is, possibly, a good thing in the end. If deepmind creates AGI, it will be a purely functional AI, everything explicit, fewer unknown bugs. Therefore, less possibility of a world-ending scenario with evil AI.
gabriel_syme#3220: hehe
Orz#3023: let's just say that they've already created one and deployed it in areas you can't even think of
And call it a day
Orz#3023: I mean
How can YouTube literally point me to that literal song with just the lyrics I came up just inside my head?
xcodevn#9003: sorry, what is your point here?
Orz#3023: umm
It's just that deepmind has already created some kind of an AGI and they are refraining from publishing it
Orz#3023: ***it's just a theory / speculation
And has no reason for you to think it's real***
Kia#2550: But who would know tho :v
xcodevn#9003: I see.... I forgot to mention my prior though: I'm assuming deepmind is a good guy here lol
xcodevn#9003: or at least, researchers at deepmind are *mostly* good people.
gabriel_syme#3220: I'm with you on that
gabriel_syme#3220: but very precisely on the 'researchers at'
gabriel_syme#3220: I'm not convinced of any corporate/management being good enough to resist the typical urges of "let's make more/scale up, and profit!!"
xcodevn#9003: tbh, i really think AGI will come step-by-step
xcodevn#9003: it will start with a mouse-level AI
xcodevn#9003: dog-level AI
xcodevn#9003: monkey-level AI
Kia#2550: It's already been like that
Kia#2550: Or just a giant clump of Smaller Size Models creating a Giant Organism Model
Kia#2550: Everything is base on nature:thinkies:
xcodevn#9003: and these steps will create opportunities to allow everyone access to AI along the way.
xcodevn#9003: because it is the only ladder of intelligence that we know of , hehe
xcodevn#9003: ok, the second ladder is... alpha go
Some Point Process#3793: If AGI is not going to arrive in my lifetime I'd rather die rn and not have to wait for something that will never materialize
gabriel_syme#3220: yooo no
gabriel_syme#3220: life is beautiful
xcodevn#9003: what would you do if AGI arrive?
xcodevn#9003: it just a cool tech lol
xcodevn#9003: in some senses, you are a cool tech, a super nano intelligence machine, after billions of years of development.
nshepperd#2316: aww :(
StellaAthena#3530: take a nap
bmk#1476: the AGI would make sure of that, I'm sure
xcodevn#9003: Am I correct to infer that you're working tirelessly to make AGI happen, then take a nap?
bmk#1476: the AGI will make sure that everyone takes a nap
StellaAthena#3530: No
bmk#1476: a very long nap
gabriel_syme#3220: sleep with the fishes
nshepperd#2316: tell my friends i love them
xcodevn#9003: mine is: move to the next interesting things.
Some Point Process#3793: I would try to influence some organization to use it to develop longevity tech (or mind uploading, if it's possible). Join a special interest group etc
Some Point Process#3793: IMO it's the only ticket to immortality and to a truly interesting future
Some Point Process#3793: for everyone*
xcodevn#9003: interesting, I even think that we will have longevity tech before AGI arrives, lol
Some Point Process#3793: Nah, human biology is too complicated IMO
Some Point Process#3793: For a human brain to understand
xcodevn#9003: you know, all the DNA editing things, I think it is really promising...
xcodevn#9003: btw, i sent you a PM
nish#9264: Hello
Louis#0144: hi
EricHallahan#1051: Welcome!
gabriel_syme#3220: have huge doubts about that
Some Point Process#3793: Well are you going to explain why?
gabriel_syme#3220: oops
gabriel_syme#3220: ehm, the main thing I go back to is that all these innovations never happen through a different socio-economic-political framework / situation
gabriel_syme#3220: I have huge doubts any such technology would become freely available to all
gabriel_syme#3220: In fact, I'd bet against that
gabriel_syme#3220: that's pretty much it tbh
xcodevn#9003: this makes me remember a quote: "The future is already here - it's just not very evenly distributed"
gabriel_syme#3220: that's a nice one
gabriel_syme#3220: I also like another one about the future
gabriel_syme#3220: "the future is no longer what it used to be"
xcodevn#9003: because time passes, right?
gabriel_syme#3220: speaks well to the retroactivity of history, AGI could have that impact in a positive way (not completely denying it). Just find it hard without changing things before
nish#9264: Wait is it alright to ask what type of ai are you specifically creating
EricHallahan#1051: Ideally one that isn't going to turn us all into paperclips.
gabriel_syme#3220: what's the best way to get into seq2seq models? Is it still HF, or is there another codebase that's friendly?
gabriel_syme#3220: i don't necessarily want a whole overview, pretty satisfied with better models only
someKindaBean#8471: i've been playing a lot with seq2seq using HF recently (mostly for summarization), but I'd be interested in an alternative if you or anyone else knows of one
StellaAthena#3530: Writing a seq2seq setting for NeoX, obviously /s
gabriel_syme#3220: the HF implementations have the advantage of flax I guess. Probably start there. I think I have some tasks ready, we'll see
apergo#9298: Quick question does anybody know if someone tried causal reasoning with image data sets before? And if not, what would you think: How good are the chances that current models can solve the following task? (Where it would have to select the correct image out of the four possible results.)
apergo#9298: https://cdn.discordapp.com/attachments/729741769738158194/890504823143817216/unknown.png
apergo#9298: https://cdn.discordapp.com/attachments/729741769738158194/890504865434988544/unknown.png
p.b.#2673: Honestly, I can't solve this task.
p.b.#2673: fire + guys with a hose = firemen
p.b.#2673: burning forest + lots of water = lush forest some day
p.b.#2673: burning forest + lots of water = burned forest
p.b.#2673: burning forest + water from a single hose = burning forest
p.b.#2673: All of this makes sense to me
Kia#2550: So a Dataset for solving/Reasoning?
apergo#9298: Thanks for your answer. Great input.
You're correct that all 4 scenarios could make sense.
But, what would be the most likely consequence of having firefighters fighting a forest fire? Imho it's an extinguished forest fire. (The green forest would take years if not decades to grow, so it is unlikely a direct or immediate consequence of firefighters fighting a fire (someone could have built houses there in the meantime). A burnt out tractor is unlikely as it isn't shown in one of the pictures and thus unlikely a consequence. Only the continuation of the forest fire makes some sense (imho) as 3 firefighters alone won't be able to put out a whole forest fire. But, it's kind of just a matter of fact, that as soon as firefighters show up at a forest fire it will get extinguished (as that is what firefighters are there for).
But you raise a valid point. Pictures would have to be selected carefully to prevent misunderstandings.
apergo#9298: Exactly. There are several test sets / benchmarks which are doing that for NLP (https://arxiv.org/abs/2005.05763 ; https://arxiv.org/abs/2106.00969) but to my knowledge nothing like that in the image space.
(Although some go in a somewhat similar direction but those are more explanatory (why is something depicted in an image given another image (https://arxiv.org/abs/2107.10300 ; https://paperswithcode.com/paper/visual-choice-of-plausible-alternatives-an). But none (to my knowledge) tried to combine images to predict correctly the (most likely) causal effect of i.e. two given images.
Kia#2550: Honestly interesting idea,But what specific Use cases do you think this dataset can be used for?
apergo#9298: The thing is, when looking into causal extraction and causal modelling in NLP / NLU area it is kind of shallow as textual representation of causal relation is a mere projection from real scenarios. With using images one could first, get a step closer for models to 'understand' causal structures of real life and not some textual model thereof and second, training multimodal models on both data sets (image as well as text) could yield better results in either domain, as the model could 'transfer' clues from visual depiction to solving language based tests.
At the end, it would be about robust robot-world interaction based on a causal prediction engine (focusing on 'understanding' the consequences of object interaction) such that the robot could 'predict' that putting the wet cat in the microwave oven to dry it, might not be a good idea - but that's far out.
apergo#9298: Just another example. Data set construction probably is the biggest problem.
And finding the right images, so as not to conjure up the wrong associations, as p.b. pointed out https://cdn.discordapp.com/attachments/729741769738158194/890528568260591626/unknown.png
Kia#2550: Ah
Kia#2550: You can probably try doing this with CLIP
Kia#2550: Like with A+B Outcome C
Kia#2550: I supposed CLIP can do it
alstroemeria313#1694: Is this IC-GAN?
Kia#2550: It's normal images
alstroemeria313#1694: oh
Kia#2550: But CLIP is mostly text and images
Kia#2550: Training a model *just* with Images and making it solve things is something that the model can *Magically Do*
gabriel_syme#3220: extract CLIP annotation from first image and use CLIP and second image to generate the results :think:
Awesome_Ruler_007#7922: why not simulate it?
Awesome_Ruler_007#7922: atleast for basics, 3D game engines + ray tracing to create photorealistic scenarios?
Awesome_Ruler_007#7922: so you wouldn't have to worry too much about generalizations
Awesome_Ruler_007#7922: A simple way to solve such dataset would be - image captioning model, obtaining annotation vector for both images, and printing out the resultant vector (using any of the reasoning NLP models you posted above), CLIP and She-bang to create images back
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/890563856370376714/demo_00019.png
alstroemeria313#1694: btw
Diffusion models seem to not have the problem GANs have with small datasets.
I overfit my normal sized MNIST diffusion model to a single MNIST real.
Training was still completely stable and it learned to correctly produce it from any starting noise.
(Note that such a model is useless for CLIP guidance, in fact the distribution it learns is so simple you can compute it in closed form and use it directly, and it's just equivalent to CLIP gradient descent in RGB space with an additional MSE loss toward the single real I think. Or something pretty similar.)
alstroemeria313#1694: simple idea for a mode coverage test: train the model on ten MNIST reals then sample and see how many of each you get.
alstroemeria313#1694: Like check for the nearest neighbor of each sample and see how close the distribution is to uniform.
alstroemeria313#1694: Uh, what's a good statistical test for seeing if a distribution is the same as some other distribution.
alstroemeria313#1694: Like for two categoricals.
alstroemeria313#1694: idk i could sample a bunch and eyeball the distribution
alstroemeria313#1694: Or, uh, actually compute KL divergence from uniform to the empirical distribution
alstroemeria313#1694: Uh, think I want KL the other way around actually
alstroemeria313#1694: P = uniform, Q = the empirical distribution.
gabriel_syme#3220: Can you software engineers let me know how this looks? https://huggingface.co/infinity
gabriel_syme#3220: @kurumuz how does it look?
EricHallahan#1051: > Achieve 1ms latency for BERT-like models
alstroemeria313#1694: ugh that doesn't work
alstroemeria313#1694: I can have actual zeros in the empirical distribution.
alstroemeria313#1694: So KL that way around fails.
kurumuz#5695: is that a big deal?
kurumuz#5695: lol
kurumuz#5695: seems pretty standard to me
kurumuz#5695: forwards are quite fast
Louis#0144: Latency is also kinda unimportant
EricHallahan#1051: Yeah, nothing is particularly interesting or groundbreaking here
Louis#0144: They should report throughput
kurumuz#5695: no details either
kurumuz#5695: smaller sequence will be faster etc
Louis#0144: I actually wonder what it is they're doing
Louis#0144: I kinda assumed they just wrapped cuda compilation
Louis#0144: So that it compiles for target hardware
Louis#0144: So it's for people who don't know how to configure their instances lol
Louis#0144: Eg enterprise
kurumuz#5695: iirc T4 was able to do bert inference in 2ms with tensorRT @EricHallahan
kurumuz#5695: and we all know how slow T4 is :berk:
kurumuz#5695: ahh the problem is, huggingface doesn't say which BERT model it is
kurumuz#5695: https://developer.nvidia.com/blog/nvidia-announces-tensorrt-8-slashing-bert-large-inference-down-to-1-millisecond/
kurumuz#5695: nvidia already did 1ms a while ago with BERT-Large
gabriel_syme#3220: hmm ok
gabriel_syme#3220: could it be just that it will be as easy as most of the HF stuff?
gabriel_syme#3220: because i'm not sure I know how to set up tensorRT, is that easy?
nshepperd#2316: js divergence maybe?
nshepperd#2316: average of the kl divergence of the average of the two distributions to each of the distributions
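A sketch of that for two categorical distributions given as probability vectors (JS stays finite even when one of them has zero entries):
```python
import torch

def kl(p, q, eps=1e-12):
    return (p * (p.add(eps).log() - q.add(eps).log())).sum()

def js(p, q):
    m = (p + q) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```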
EricHallahan#1051: That's their target audience.
alstroemeria313#1694: doing 1000 samples now
Louis#0144: Maybe infinity is a wrapper for tensorRT?
Louis#0144: No
Louis#0144: Not rly
gabriel_syme#3220: well, that is about 99.99% of the audience
gabriel_syme#3220: aha ok then, I'm all in for infinity then lol
gabriel_syme#3220: like at this point it's not about skill tbh, it's about bandwidth available for stuff. Some things need to be..easy.
gabriel_syme#3220: I wish they had a table across different models though
alstroemeria313#1694: so i am training it on 10 MNIST reals over and over
alstroemeria313#1694: They are not all from different classes.
alstroemeria313#1694: They are just the first 10 in the train set.
gabriel_syme#3220: oh no generation though, RIP
alstroemeria313#1694: Then every five epochs, drawing 1000 samples
alstroemeria313#1694: With fresh random noise instead of using the same noise over and over like I normally do for demo grids so I can compare samples from one to the next.
alstroemeria313#1694: Ideally the model would learn all ten modes ~perfectly (it has the capacity for this easily, it's HUGE in comparison) and produce them with equal probability.
alstroemeria313#1694: A GAN would fail at this task pretty hard
alstroemeria313#1694: D would overfit to the ten reals and not produce useful gradients to train G.
alstroemeria313#1694: And I bet the mode coverage might be bad too if you managed to get it to work with DiffAugment and gradient penalties or whatever tricks.
kurumuz#5695: I think if you're a company doing this stuff you should know what you're doing
kurumuz#5695: lol
kurumuz#5695: idk how you far you can get with relying on hf
kurumuz#5695: i dont think its very far
gabriel_syme#3220: that is true but I can imagine a lot of companies doing this stuff as an addition / service, very far away from their product/domain
alstroemeria313#1694: ```Probs: tensor([0.1100, 0.0980, 0.1000, 0.0950, 0.1150, 0.1150, 0.0990, 0.0820, 0.0950, 0.0910])
KL Divergence: 0.000505354
```
alstroemeria313#1694: p good!
alstroemeria313#1694: The batch of samples in question. https://cdn.discordapp.com/attachments/729741769738158194/890575699419738122/demo_00030-4.png
gabriel_syme#3220: WOAH
gabriel_syme#3220: looks crisp
alstroemeria313#1694: Like at this point does it matter much which way around I do the KL, it's so close
alstroemeria313#1694: I uh, I probably do need to do it the way I am
alstroemeria313#1694: Because dropping two modes entirely is worse than dropping one mode entirely
alstroemeria313#1694: And KL the other way around would not be defined for either.
gabriel_syme#3220: what we need is some CIFAR code :guilty:
alstroemeria313#1694: i posted a notebook yesterday ^^;;
gabriel_syme#3220: oh is it?
alstroemeria313#1694: https://colab.research.google.com/drive/1HubSGRVxDRCRYK-YEjs8nYYImSrlR0qf
gabriel_syme#3220: let me give it a shot
alstroemeria313#1694: here you go~
gabriel_syme#3220: wait no, I meant jax
alstroemeria313#1694: Me and Jack are turning this codebase into a thing with PyTorch Lightning, wandb logging, and distributed data parallel
alstroemeria313#1694: oh
alstroemeria313#1694: yeah i have that but it's unaccountably 1/2 as fast as the PyTorch/XLA version
gabriel_syme#3220: dang
gabriel_syme#3220: should I use accelerate with the notebook?
alstroemeria313#1694: You need to avoid all the undocumented PyTorch/XLA footguns though.
nshepperd#2316: do you mind if i see the jax notebook? maybe i can have an idea
alstroemeria313#1694: Like I had to replace the upsamples in the model with learned stride=2 transposed convolutions
alstroemeria313#1694: And not use `.lerp_()`
alstroemeria313#1694: And put barriers in the right places
alstroemeria313#1694: let me make sure i didn't break it
alstroemeria313#1694: I uh, remember making edits to it and then not running it after ^^;;
alstroemeria313#1694: Because I got bored/frustrated
alstroemeria313#1694: ...No, it was because I got the PyTorch/XLA one working
nshepperd#2316: ^^;;
nshepperd#2316: i feel guilty for suggesting trying haiku now lol ;;
gabriel_syme#3220: flax?
alstroemeria313#1694: is fine, i got lucky and figured out what all the relevant footguns were
alstroemeria313#1694: And they weren't anything I couldn't work around.
alstroemeria313#1694: This usually doesn't happen
nshepperd#2316: ah~
alstroemeria313#1694: https://colab.research.google.com/drive/1k8_yALZtNZ_kkkb4ycPigKkUePtgthHr you probably have to ask permission to view but i'll just grant anyone who responds within like 15 minutes
gabriel_syme#3220: hey there is an ema in haiku
alstroemeria313#1694: i didn't know about it and just did it myself ^^;;
gabriel_syme#3220: haha
gabriel_syme#3220: I was looking in flax and it has a 4 line example (not sure it works though) and then haiku is 1 page
gabriel_syme#3220: but my guess it's the proper way to write it for a library
alstroemeria313#1694: i also don't like the haiku one
gabriel_syme#3220: I wish they had examples in the code tbh
alstroemeria313#1694: for diffusion i like to use EMA schedules
gabriel_syme#3220: for dum dumbs like me
alstroemeria313#1694: And with mine I just supply the decay on each step.
gabriel_syme#3220: ah so decay varies as well
alstroemeria313#1694: i didn't do it in the jax one yet
alstroemeria313#1694: i am in the pytorch one
alstroemeria313#1694: And it lets you evaluate whether the model is working faster
gabriel_syme#3220: just looking at the notebook and thinking if I should sink 10 hours of my life in trying. I 99% fail
alstroemeria313#1694: That's the main reason for a schedule, it makes the early samples better
alstroemeria313#1694: Super slow decay means it takes forever for changes to show up.
alstroemeria313#1694: No decay means your samples are bad/unrepresentative.
alstroemeria313#1694: (I have unbiased decay-varying EMAs too)
alstroemeria313#1694: (Those are an improvement over the Adam one which assumes it's fixed and breaks if you change it.)
alstroemeria313#1694: (Basically you just store the cumulative product of the decays you *actually* applied, rather than storing step count and assuming the past decays were all the same.)
alstroemeria313#1694: (That is, um? If you put it in an optimizer it would let you change or schedule the betas.)
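A minimal sketch of that unbiased, decay-varying EMA (the class name and interface here are just illustrative):
```python
import torch

class UnbiasedEMA:
    def __init__(self, params):
        self.avg = [torch.zeros_like(p) for p in params]
        self.decay_prod = 1.0  # cumulative product of the decays actually applied

    @torch.no_grad()
    def update(self, params, decay):
        self.decay_prod *= decay
        for a, p in zip(self.avg, params):
            a.mul_(decay).add_(p, alpha=1 - decay)

    def get(self):
        # Same idea as Adam's bias correction, but using the true product of
        # past decays instead of assuming they were all equal.
        return [a / (1 - self.decay_prod) for a in self.avg]
```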
BoneAmputee#8363: ๐
BoneAmputee#8363: *checks watch*
gabriel_syme#3220: I also asked permission
alstroemeria313#1694: lol the share UI was bad
Kia#2550: What does the notebook do :v?
nshepperd#2316: i requested
alstroemeria313#1694: it's a jax version of my mnist diffusion notebook
gabriel_syme#3220: do you remember how slow it was?
Kia#2550: Ow, That's actually lovely
alstroemeria313#1694: half the speed of pytorch/xla
alstroemeria313#1694: on tpu
gabriel_syme#3220: checking now
nshepperd#2316: thanks ๐ฅฐ
Kia#2550: v3's?
alstroemeria313#1694: v2
alstroemeria313#1694: The code is a mess and probably bad, that's why I didn't share it publicly
Kia#2550: Ah :surprise:
alstroemeria313#1694: it does actually work though
gabriel_syme#3220: 13 seconds / epoch
gabriel_syme#3220: wait no, that was the sampling? anyways, next epoch is 1min
gabriel_syme#3220: is that fast?
alstroemeria313#1694: it is like 40% of the speed of PyTorch/XLA on the same hw.
alstroemeria313#1694: And that is a bit slower than a P100.
gabriel_syme#3220: https://cdn.discordapp.com/attachments/729741769738158194/890585501936005150/unknown.png
gabriel_syme#3220: dang, so pytorch is like 30 secs?
alstroemeria313#1694: IDK why the performance difference, am I doing something wrong in JAX?
alstroemeria313#1694: yep
alstroemeria313#1694: Sample quality and loss are equally good
alstroemeria313#1694: Model is the same arch
gabriel_syme#3220: I imagine it is not the torch stuff?
gabriel_syme#3220: given that on TPU you need the cpu version
alstroemeria313#1694: Except for using jax.image.resize instead of transposed conv
alstroemeria313#1694: i only use pytorch in this notebook for its data loaders
alstroemeria313#1694: It is entirely possible that the perf difference is due to JAX shipping the data to the TPU slower.
alstroemeria313#1694: I don't know how to detect this.
alstroemeria313#1694: And I tried transposed conv in the JAX version and that wasn't it
xcodevn#9003: try training on a single batch that is already on TPU memory.
gabriel_syme#3220: is the ema applied to the whole tree?
alstroemeria313#1694: yes
gabriel_syme#3220: oh wait, ye dataset is in memory no
alstroemeria313#1694: Also there are nothing but trainable parameters in it
gabriel_syme#3220: can we try https://github.com/deepmind/dm-haiku/blob/75f23834bfc4637b92e546ccb6dc6426a6c24419/haiku/_src/moving_averages.py#L140?
alstroemeria313#1694: So I am not incorrectly applying it to batchnorm statistics or w/e
alstroemeria313#1694: yeah you can put it in
alstroemeria313#1694: Or just take the EMA out actually.
gabriel_syme#3220: completely?
alstroemeria313#1694: That would be faster to try.
alstroemeria313#1694: Yep
gabriel_syme#3220: ok 1 sec
alstroemeria313#1694: Sample from the non-EMA one.
gabriel_syme#3220: btw, is the speed the same on colab?
alstroemeria313#1694: Training is still gonna be slow I bet.
gabriel_syme#3220: because this is a v3 I imagined they are faster? but maybe not, memory only?
alstroemeria313#1694: yes, it was jax colab tpu vs pytorch/xla colab tpu
alstroemeria313#1694: You're on a v3?
gabriel_syme#3220: yep
alstroemeria313#1694: I get 1m44s on a v2.
gabriel_syme#3220: per epoch?
alstroemeria313#1694: yes
gabriel_syme#3220: oh ok
alstroemeria313#1694: no 1m30s
gabriel_syme#3220: It does 33s here
alstroemeria313#1694: oh
alstroemeria313#1694: do you want the PyTorch/XLA one too
alstroemeria313#1694: Hold on
alstroemeria313#1694: To compare like to like.
gabriel_syme#3220: sure
alstroemeria313#1694: https://colab.research.google.com/drive/1lWkjP0CNPXZS3Yhtxj0Cbd0ursJZ_KtS
gabriel_syme#3220: cool 1 sec
gabriel_syme#3220: wait do I need GPU pt?
gabriel_syme#3220: oh no xla
alstroemeria313#1694: no
alstroemeria313#1694: use tpu
gabriel_syme#3220: requested
alstroemeria313#1694: To compare against GPU use the public one
gabriel_syme#3220: what's the tpu btw? I still install from the site menu lol
alstroemeria313#1694: ?
gabriel_syme#3220: just CPU one?
gabriel_syme#3220: not sure if there's a special version or the CPU works
gabriel_syme#3220: so running without EMA now
alstroemeria313#1694: just CPU what?
gabriel_syme#3220: first sampling does 249/250 is that ok?
alstroemeria313#1694: Yes
gabriel_syme#3220: pytorch
alstroemeria313#1694: oh
alstroemeria313#1694: Yeah you use cpu pytorch
gabriel_syme#3220: ok
alstroemeria313#1694: With JAX
alstroemeria313#1694: Bc all you need is the data related stuff
alstroemeria313#1694: IDK if you can actually convert a PyTorch/XLA tensor to JAX without shipping it to the host and back.
alstroemeria313#1694: I bet you do need to round trip to do it.
gabriel_syme#3220: without ema exactly the same time
gabriel_syme#3220: I took this out `params_ema = ema_update(params, params_ema, ema_decay)`
alstroemeria313#1694: something is subtly wrong with the tracing, i think it's because the sampling step function has a different signature on the last step so it's a separate JIT compiled thing
alstroemeria313#1694: or... maybe
alstroemeria313#1694: idk
alstroemeria313#1694: it's some weird JAX thing
gabriel_syme#3220: I mean it works
gabriel_syme#3220: ok doing xla now
gabriel_syme#3220: do I need this? `torch_xla-1.9-cp37-cp37m-linux_x86_64.whl`
gabriel_syme#3220: ok yes I guess
gabriel_syme#3220: ehm no perhaps
alstroemeria313#1694: i don't know
alstroemeria313#1694: TRC hasn't replied back to me yet
alstroemeria313#1694: So I have not tested it on a TPU VM
alstroemeria313#1694: In any case we will get a distributed data parallel one for PyTorch/XLA probably
gabriel_syme#3220: nice
alstroemeria313#1694: And that will probably be the one I actually use
alstroemeria313#1694: Bc one codebase
gabriel_syme#3220: boo I need docker
alstroemeria313#1694: aw
gabriel_syme#3220: or I can create a new VM but I don't want
alstroemeria313#1694: for PT/XLA?
gabriel_syme#3220: I think so
alstroemeria313#1694: ah
alstroemeria313#1694: ...I'm gonna do an experiment
alstroemeria313#1694: bf16 in PT/XLA did not work with this model when I tried it.
alstroemeria313#1694: To rule out weirdness I'm gonna try Ampere bf16 with normal PyTorch.
alstroemeria313#1694: I uh, you can try it on the JAX version too, IDK how
alstroemeria313#1694: Also part of the GPU speed benefit for this code is that I can use fp16.
alstroemeria313#1694: But TPUs only have bf16 and that didn't work.
gabriel_syme#3220: yeah looking now
gabriel_syme#3220: I guess on params right
alstroemeria313#1694: btw redownload the pt/xla one
alstroemeria313#1694: I managed to break it and not realize
alstroemeria313#1694: if you change the convtranspose2d kernel size you also have to change the padding
alstroemeria313#1694: Or it will upsample to the wrong size
alstroemeria313#1694: And fail to concat.
gabriel_syme#3220: sure
gabriel_syme#3220: maybe this? https://pythonrepo.com/repo/deepmind-jmp-python-machine-learning#examples
alstroemeria313#1694: wow bf16 on ampere super slow
gabriel_syme#3220: I can't find mixed_precision on haiku
alstroemeria313#1694: i'm on an A6000
alstroemeria313#1694: It's not one of the things that's only good on A100 right?
alstroemeria313#1694: fp16 and amp work fine
gabriel_syme#3220: not sure at all
alstroemeria313#1694: (What are the things that are only good on A100 btw)
gabriel_syme#3220: crypto :berk:
gabriel_syme#3220: not sure, never had the pleasure of using one
alstroemeria313#1694: A6000 is a GA102, like the 3090
random person#5234: BF16
alstroemeria313#1694: oh
alstroemeria313#1694: lol
random person#5234: also FP64
alstroemeria313#1694: Yeah I can get an A100 for a bit to test it but
alstroemeria313#1694: yeah the model isn't learning well in bf16 so far
alstroemeria313#1694: gonna run it a bit longer
alstroemeria313#1694: fortunately i don't need fp64
alstroemeria313#1694: That's mostly a non-DL HPC thing right?
gabriel_syme#3220: yeah
gabriel_syme#3220: rendering maybe or smth
alstroemeria313#1694: i bet that's also fp32
gabriel_syme#3220: I don't remember if my CFD was that
alstroemeria313#1694: "Diffusion models don't work in bf16" is... not good for TPUs
gabriel_syme#3220: CFD would easily get values of 10e-15, even less I think for some stuff
gabriel_syme#3220: sucks
gabriel_syme#3220: wait there wasn't one that was trained on tpus?
alstroemeria313#1694: Which
alstroemeria313#1694: And you can just do fp32, that works
nshepperd#2316: didn't OAI train theirs in bf16? or was that a different 16-bit fp
alstroemeria313#1694: OAI is not TPU :)
alstroemeria313#1694: It was regular fp16.
alstroemeria313#1694: On V100s.
nshepperd#2316: ohh
gabriel_syme#3220: so to train CIFAR, do I just load that dataset?
gabriel_syme#3220: or do I need to make the model chonkier / more convs?
alstroemeria313#1694: 3 + 16 channels in, 3 channels out
alstroemeria313#1694: And chonk it up
alstroemeria313#1694: Add one more U-Net stage if you want to match my arch
alstroemeria313#1694: But you can leave it out and just try it ๐คทโโ๏ธ
alstroemeria313#1694: Base channel count 64 at least
gabriel_syme#3220: base channel is `c`?
alstroemeria313#1694: yes
gabriel_syme#3220: wait do I just add a conv block or is it with resize as well
alstroemeria313#1694: you need to go up to x_4
alstroemeria313#1694: following the pattern
nshepperd#2316: it would be cool if you could like. run the model in fp32 and bf16 and somehow compare the outputs of every individual op. bc maybe it's just one thing that needs to be rewritten to loss less bits
alstroemeria313#1694: to add a u-net stage.
alstroemeria313#1694: I am going to clean this up eventually
alstroemeria313#1694: The manual U-Net was... me trying to make a thing work as fast as possible
gabriel_syme#3220: what size of filters should I do in the additional Unet?
alstroemeria313#1694: c * 4
gabriel_syme#3220: cool
gabriel_syme#3220: and in the resize?
alstroemeria313#1694: i think i use channel multipliers 1, 2, 2, 4?
alstroemeria313#1694: uh
xcodevn#9003: i think the problem is at image.resize
xcodevn#9003: i replace cubic by nearest...
xcodevn#9003: the training time now 30 s
alstroemeria313#1694: for the three resizes you need [*x.shape[:2], 16, 16]
alstroemeria313#1694: [*x_2.shape[:2], 8, 8]
xcodevn#9003: https://cdn.discordapp.com/attachments/729741769738158194/890595960915370014/unknown.png
alstroemeria313#1694: and [*x_3.shape[:2], 4, 4]
alstroemeria313#1694: The downsamples.
alstroemeria313#1694: The upsamples use the actual size of the thing it's upsampling to, which is already known.
alstroemeria313#1694: ohhh!
alstroemeria313#1694: So one thing!
alstroemeria313#1694: That is breaking on bf16
alstroemeria313#1694: Is EMA.
gabriel_syme#3220: ```python
x_4 = jax.image.resize(x_2, [*x_3.shape[:2], 4, 4], 'cubic')
x_4 = res_conv_block(c * 4, c * 4)(x_4)
x_4 = res_conv_block(c * 4, c * 4)(x_4)
x_4 = res_conv_block(c * 4, c * 4)(x_4)
x_4 = res_conv_block(c * 4, c * 4)(x_4)
x_4 = jax.image.resize(x_4, [*x_4.shape[:2], *x_3.shape[2:]], 'cubic')
```
alstroemeria313#1694: You can't do it, not enough precision.
alstroemeria313#1694: `jax.image.resize(x_2,` -> x_3
gabriel_syme#3220: good catch
alstroemeria313#1694: Yeah this is why people define them recursively when they do them for real
gabriel_syme#3220: haha ye
alstroemeria313#1694: You supply lists of layer count at each stage and channel count at each stage.
alstroemeria313#1694: And which stages get self-attention blocks if you have those.
alstroemeria313#1694: So but yeah. We can't train in bf16 unless we store the weights in fp32.
alstroemeria313#1694: The *EMA* weights anyway.
gabriel_syme#3220: oh I need a concat too right
alstroemeria313#1694: Yes
alstroemeria313#1694: Since I am using AMP for fp16
gabriel_syme#3220: ok let's try
alstroemeria313#1694: The weights were fp32 and EMA worked.
alstroemeria313#1694: Also I still doubt it is as good because the loss was still worse.
gabriel_syme#3220: model params = `8619185`
alstroemeria313#1694: Yeah looks about right
gabriel_syme#3220: oh dang forgot to fix the inputs heh
alstroemeria313#1694: But without EMA in fp32 you're just sunk.
gabriel_syme#3220: `'conv2_d/w' with shape (3, 3, 21, 64) does not match shape=(3, 3, 23, 64) dtype=dtype('float32')`
nshepperd#2316: having to use nearest for resizing is not great :'(
gabriel_syme#3220: nearest will forever remind me of when I was working on a gaming internet cafe
alstroemeria313#1694: ahah
gabriel_syme#3220: it's the cemetery
gabriel_syme#3220: to the nearest
xcodevn#9003: i mean, cubic interpolation is slow thing down...
alstroemeria313#1694: use learnable transposed convolutions i guess
nshepperd#2316: it seems about equally slow wit linear
nshepperd#2316: yeah maybe the resize needs to be manually reformulated as a conv2d
alstroemeria313#1694: maybe we need to work out what the opposite of blurpool2d is
alstroemeria313#1694: like the kernel to use with stride=2 transposed conv
nshepperd#2316: jax.image.resize uses some sort of einsum thing, that's probably why it's slow
alstroemeria313#1694: and just hardcode that
alstroemeria313#1694: How does stride=2 transposed work anyway.
gabriel_syme#3220: do I need to change fourier features?
alstroemeria313#1694: no
alstroemeria313#1694: Timestep conditioning is the same
gabriel_syme#3220: k
alstroemeria313#1694: ok so a transposed conv... ugh
nshepperd#2316: use gradient descent to learn the kernel that inverts blurpool2d lol
alstroemeria313#1694: It inserts zeros and then applies the conv?
alstroemeria313#1694: Or
alstroemeria313#1694: No that's slow
alstroemeria313#1694: It doesn't do that
alstroemeria313#1694: It would be nice if it did
alstroemeria313#1694: oh huh, i found the kernel that's equivalent to nearest upsampling
alstroemeria313#1694: eheh
gabriel_syme#3220: hehe totally failing in cifar
alstroemeria313#1694: it's just [1 1]
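Quick sanity check of that in PyTorch (single-channel case; a multi-channel version would use a grouped transposed conv with the same per-channel kernel):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)
kernel = torch.ones(1, 1, 2, 2)  # the 2D (separable) version of the [1 1] kernel
up_conv = F.conv_transpose2d(x, kernel, stride=2)
up_ref = F.interpolate(x, scale_factor=2, mode='nearest')
print(torch.allclose(up_conv, up_ref))  # True
```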
gabriel_syme#3220: do I change x_skip?
alstroemeria313#1694: no
alstroemeria313#1694: res blocks should stay the same
gabriel_syme#3220: there's a mismatch I'm too tired to catch. I'll try
alstroemeria313#1694: how's it failing
alstroemeria313#1694: can you post your whole diffusion_model()
alstroemeria313#1694: I'll just look at it
nshepperd#2316: the resizes in the unet are not all x2, are they
nshepperd#2316: because of the support you added for odd shaped images
nshepperd#2316: ah yeah, jax.image.resize has a different code path for nearest, that just uses indexing. everything else turns into scale_and_translate. that's why nearest is way faster
alstroemeria313#1694: i never added it ^^;;
alstroemeria313#1694: except in the jax one and that's incorrect
nshepperd#2316: ah
alstroemeria313#1694: so a transposed conv2d like... it just copypastes the kernel into the output
alstroemeria313#1694: scaled by the input's value at that the corresponding point
alstroemeria313#1694: I mean it starts with zeros and sticks copies of the kernel centered two pixels apart
alstroemeria313#1694: Wait
alstroemeria313#1694: That *is* equivalent to adding zeros and then filtering?
alstroemeria313#1694: Just fast
gabriel_syme#3220: maybe larger bs will use the TPU more efficiently?
alstroemeria313#1694: ok so.
alstroemeria313#1694: You have to add the zeros and then lowpass filter
alstroemeria313#1694: Uh, so the ideal filter from a frequency perspective is sinc
alstroemeria313#1694: (Trying to work out the right *width* for the lowpass filter.)
gabriel_syme#3220: 13 seconds sampling, 48 seconds training, for CIFAR10 with one more block
alstroemeria313#1694: ooh
alstroemeria313#1694: on v3?
gabriel_syme#3220: yea
alstroemeria313#1694: so equally fast?
gabriel_syme#3220: not sure it's learning, maybe what I did sucked
alstroemeria313#1694: it's going to train visibly slower
alstroemeria313#1694: ime
gabriel_syme#3220: https://cdn.discordapp.com/attachments/729741769738158194/890603675385475113/unknown.png
alstroemeria313#1694: run at least 20 epochs and see if you can start making vague shapes out
alstroemeria313#1694: oh
gabriel_syme#3220: that feels bad
alstroemeria313#1694: is that bf16
gabriel_syme#3220: ehm I did not change anything else I think
gabriel_syme#3220: btw where does parallel stuff happen in haiku
alstroemeria313#1694: idk
alstroemeria313#1694: Does haiku use a different init btw
alstroemeria313#1694: Than PyTorch
alstroemeria313#1694: Bc I noticed that the same arch would produce visibly worse results earlier that changed in ways I wasn't used to when I changed small details of the arch
alstroemeria313#1694: So this is probably due to the init
alstroemeria313#1694: bf16 on ampere, no ema https://cdn.discordapp.com/attachments/729741769738158194/890604386923995196/demo_00017-2.png
alstroemeria313#1694: Meh
alstroemeria313#1694: that's the pred variant too, which is not as good
nshepperd#2316: probably the conv2d inits are different
alstroemeria313#1694: Gonna switch back to eps now that I know EMA was the culprit
gabriel_syme#3220: oh wait am I printing 1 channel or smth
nshepperd#2316: i rememeber being surprised by pytorch conv2d init when i was porting
alstroemeria313#1694: i thought my to_pil_image handled three correctly already
gabriel_syme#3220: it has to they are colored
gabriel_syme#3220: it's not learning though, dang
nshepperd#2316: blurry
alstroemeria313#1694: yeah pred is bad
gabriel_syme#3220: oh no ema is off
alstroemeria313#1694: It makes blurry outputs bc it learns to just lowpass filter its output to predict the clean reals better
alstroemeria313#1694: so you turned ema off. did you change sampling to sample from non-ema?
gabriel_syme#3220: :guilty:
gabriel_syme#3220: nvm I turned it on
gabriel_syme#3220: ahaaa
gabriel_syme#3220: 1:09 now btw, so ye wasn't doing much before
alstroemeria313#1694: ohh?
gabriel_syme#3220: ehm actually that was epoch 0, so should be faster
gabriel_syme#3220: yep nvm it's equally fast
gabriel_syme#3220: 48
alstroemeria313#1694: ahh
alstroemeria313#1694: Hey, dumb idea.
alstroemeria313#1694: Can we interpolate between outputting pred and eps
alstroemeria313#1694: Eh, that seems like a bad idea
alstroemeria313#1694: We should output eps and reweight if we want.
gabriel_syme#3220: look at this beauty https://cdn.discordapp.com/attachments/729741769738158194/890606388504576030/unknown.png
alstroemeria313#1694: Looks normal!
gabriel_syme#3220: yep
gabriel_syme#3220: I may just let it run while I sleep
gabriel_syme#3220: super duper overfit
alstroemeria313#1694: eheh
alstroemeria313#1694: Yeah diffusion models still generate clean samples when overfit
alstroemeria313#1694: They just memorize the reals or w/e
gabriel_syme#3220: so you mean...they keep it real?
alstroemeria313#1694: Is not like a GAN where that will break training.
alstroemeria313#1694: ^_^
gabriel_syme#3220: it was so lame, I had to
alstroemeria313#1694: Wonder if we should be using dropout...
alstroemeria313#1694: Like the 2D version that drops feature maps at random.
gabriel_syme#3220: is BN dead?
alstroemeria313#1694: I think it's bad for this. I tried it.
gabriel_syme#3220: ic
alstroemeria313#1694: OpenAI uses group norm in theirs.
alstroemeria313#1694: This seems to work.
alstroemeria313#1694: But I haven't tried it myself yet.
gabriel_syme#3220: we got an easy class, could try
alstroemeria313#1694: *nods*
alstroemeria313#1694: They use conditional group norms for timestep and class conditioning.
alstroemeria313#1694: I have an alternate way of conditioning that doesn't need this.
alstroemeria313#1694: But plain group norms might help anyway, idk
alstroemeria313#1694: And we are going to need to start injecting conditioning in the middle of the net again if we want to condition on something high dimensional like a CLIP embedding.
alstroemeria313#1694: Since there just aren't enough channels at the start.
alstroemeria313#1694: What is group norm anyway.
gabriel_syme#3220: no idea
gabriel_syme#3220: maybe we try all norms
alstroemeria313#1694: i've tried bn, don't bother with it
gabriel_syme#3220: they have layer and spectral I guess
gabriel_syme#3220: and rms
alstroemeria313#1694: try instance norm, group norm, and layer norm i guess
nshepperd#2316: batch norm has so many thing that annoy me about it ;;
gabriel_syme#3220: did they have specific values for it
alstroemeria313#1694: group norm with as many groups as channels is equivalent to instance norm
alstroemeria313#1694: group norm with one group is equivalent to layer norm
alstroemeria313#1694: idk use sqrt(channels) lol
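Both equivalences are easy to check in PyTorch (affine params off so only the normalization itself is compared):
```python
import torch
import torch.nn as nn

x = torch.randn(2, 8, 4, 4)
# num_groups == num_channels  ->  instance norm (per-channel stats)
print(torch.allclose(nn.GroupNorm(8, 8, affine=False)(x),
                     nn.InstanceNorm2d(8)(x), atol=1e-5))  # True
# num_groups == 1  ->  layer norm over (C, H, W) (per-sample stats)
print(torch.allclose(nn.GroupNorm(1, 8, affine=False)(x),
                     nn.LayerNorm([8, 4, 4], elementwise_affine=False)(x), atol=1e-5))  # True
```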
gabriel_syme#3220: ok running it on the other v3
alstroemeria313#1694: So the way they do their conditional group norms is.
gabriel_syme#3220: I'm lazily opening these with my lab so it might die in the night
xcodevn#9003: can you name a few ๐ ?
alstroemeria313#1694: Equivalent to a plain group norm w/o affine parameters, followed by a conditional 1x1 conv2d.
alstroemeria313#1694: So if we want to condition in the middle of the network why don't we just use conditional 1x1.
alstroemeria313#1694: Then we can use whatever norm we want, or none.
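A sketch of that kind of conditional norm (a generic AdaGN-style scale/shift from a conditioning vector; not OAI's exact code, and the group count is a placeholder):
```python
import torch
import torch.nn as nn

class ConditionalGroupNorm2d(nn.Module):
    def __init__(self, channels, cond_dim, num_groups=32):
        super().__init__()
        self.norm = nn.GroupNorm(num_groups, channels, affine=False)
        self.to_scale_shift = nn.Linear(cond_dim, channels * 2)

    def forward(self, x, cond):
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=-1)
        x = self.norm(x)  # plain group norm, no affine params
        return x * (1 + scale[..., None, None]) + shift[..., None, None]
```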
nshepperd#2316: statefulness and batch size dependence
nshepperd#2316: so actually by many i mean two
alstroemeria313#1694: oh
xcodevn#9003: batch size dependence bc it is.. *batch* norm ๐
nshepperd#2316: OAI used num_groups=32 for all the models
alstroemeria313#1694: Because if you do it that way you want to stick a nonlinearity before it.
alstroemeria313#1694: In the conditional norm case the *norm* is the nonlinearity.
xcodevn#9003: statefulness bc .... it is *batch* norm ๐
xcodevn#9003: i mean... it works really well in so many cases..
alstroemeria313#1694: eugh i hate it
gabriel_syme#3220: doesn't it break a bunch of stuff nowadays too though
gabriel_syme#3220: or also, you take it out and not much happens?
alstroemeria313#1694: It leaks information about other examples in the batch.
alstroemeria313#1694: During training that is.
alstroemeria313#1694: (Since stats are fixed at test time it doesn't do this then)
alstroemeria313#1694: so. instance norm is bad for convnets at the low res stages bc the estimate of the norm is noisy
gabriel_syme#3220: haiku really needs examples for people like me
alstroemeria313#1694: it really does not have enough.
xcodevn#9003: i think it is not bc of the stats, but because of batch normalization operations
alstroemeria313#1694: yes
gabriel_syme#3220: yeah
alstroemeria313#1694: (Like people noticed batchnorm made GAN Ds work better and prevent collapse to a single mode and it turned out to be bc it was leaking information between examples in a batch so D could tell that a batch of all the same looking thing was fake)
nshepperd#2316: i ported the pytorch impl of groupnorm https://github.com/nshepperd/jax-guided-diffusion/blob/2320ce05aa2d6ea83234469ef86d36481ef962ea/jaxtorch/nn/modules.py#L171
alstroemeria313#1694: (So Nvidia put in an explicit minibatch std layer to leak it in a controlled way so they didn't have to deal with batchnorm's other problems)
alstroemeria313#1694: (In StyleGAN2)
nshepperd#2316: i guess haiku's is the same though probably
gabriel_syme#3220: so I would do groupnorm(32, c_mid)?
gabriel_syme#3220: before the conv2d?
alstroemeria313#1694: um
alstroemeria313#1694: hold on
alstroemeria313#1694: i thought it was between the conv and the relu but maybe that was only for batchnorm
alstroemeria313#1694: I am just going to check OAI's code
gabriel_syme#3220: and do we need it at a different point on the way up?
gabriel_syme#3220: https://cdn.discordapp.com/attachments/729741769738158194/890611167586840596/unknown.png
gabriel_syme#3220: do we have a way to save checkpoint?
alstroemeria313#1694: I don't know how in Haiku yet. ^^;;
gabriel_syme#3220: haha
gabriel_syme#3220: man the sequential in nshepperd's jaxtorch feels nice
xcodevn#9003: to save checkpoint: just pickle the param dic and optimizer dict
alstroemeria313#1694: what, i can't understand oai's convoluted code
alstroemeria313#1694: gonna just look at the paper
gabriel_syme#3220: in the code above it was before silu on the way down
nshepperd#2316: ```
self.in_layers = nn.Sequential(
normalization(channels),
nn.SiLU(),
nn.Conv2d(channels, self.out_channels, 3, padding=1),
)
```
nshepperd#2316: in the res blocks
EricHallahan#1051: ~~*looks at paper and decides to go back to looking at code*~~
nshepperd#2316: normalization is GroupNorm((32, )
nshepperd#2316: out_layers has normalization, silu, dropout, conv
alstroemeria313#1694: ...They don't say
alstroemeria313#1694: Sigh.
alstroemeria313#1694: So it goes between conv and relu?
alstroemeria313#1694: idk just put it there anyway
gabriel_syme#3220: figuring out how to
gabriel_syme#3220: I have to remind I have no idea what I'm doing, but it's helping
gabriel_syme#3220: would it be after x_skip I guess
alstroemeria313#1694: inside the res block?
alstroemeria313#1694: idk
gabriel_syme#3220: it has to right?
gabriel_syme#3220: I mean be inside
alstroemeria313#1694: Yeah
gabriel_syme#3220: ok trying
gabriel_syme#3220: channels 16?
gabriel_syme#3220: is this the fourier?
gabriel_syme#3220: I think I need to up to 32 right?
gabriel_syme#3220: oh I get it nvm
gabriel_syme#3220: is it totally wrong to change the 16, 8, 4 to 128, 64, 32?
gabriel_syme#3220: 20k more parameters but it takes like 2x time
alstroemeria313#1694: that would be for a 256x256 input
gabriel_syme#3220: ye I messed up, I made a new block
gabriel_syme#3220: I think this has to be in the first block right?
gabriel_syme#3220: not throughout
alstroemeria313#1694: The which?
gabriel_syme#3220: groupnorm
alstroemeria313#1694: all blocks
gabriel_syme#3220: well with 32 groups, it needs at least 32 filters
alstroemeria313#1694: Put it in res_conv_block()
alstroemeria313#1694: Yeah. Do you need to manually set it to NCHW again
gabriel_syme#3220: no I mean the 16/8/4 fails
alstroemeria313#1694: ...
alstroemeria313#1694: Those are not filters?
alstroemeria313#1694: Those are spatial sizes
gabriel_syme#3220: I know, but why is it calling them filters
alstroemeria313#1694: Because you are using NHWC somehow
gabriel_syme#3220: ohhh lol
gabriel_syme#3220: oh wait I know
alstroemeria313#1694: I am so glad PyTorch picked one and stuck with it.
alstroemeria313#1694: Even the choice of one that isn't the one you most prefer is better than always having to deal with both cases.
gabriel_syme#3220: ok, I think it's running
alstroemeria313#1694: :)
gabriel_syme#3220: I forgot to set data_format
gabriel_syme#3220: so tired
alstroemeria313#1694: i can try groupnorm myself in a bit
alstroemeria313#1694: when i'm done w/ the current experiment
alstroemeria313#1694: i mean in pytorch on a gpu
gabriel_syme#3220: wait no lol
alstroemeria313#1694: oh, i think my sampling code is subtly wrong btw
alstroemeria313#1694: or, uh. the jax version may actually not be wrong
alstroemeria313#1694: idk
alstroemeria313#1694: so i did `t = torch.linspace(1, 0, steps)` and this is wrong and you can see it by considering what happens in the cases with one or two steps.
gabriel_syme#3220: damn these axes
gabriel_syme#3220: why would we have 39 channels lol
alstroemeria313#1694: Correct is `t = torch.linspace(1, 0, steps + 1)[:-1]`
alstroemeria313#1694: 3 + 32 + 4
alstroemeria313#1694: three for noise input, 32 for fourier features, 4 for class conditioning.
gabriel_syme#3220: oh so I may be adding it at a wrong spot
alstroemeria313#1694: Just stick a conv2d 1x1 before it
alstroemeria313#1694: oh
gabriel_syme#3220: ah ok
alstroemeria313#1694: wait
alstroemeria313#1694: you should be adding the groupnorms after the convs in the res block
alstroemeria313#1694: So the group norm should never see 39
gabriel_syme#3220: aha ok
gabriel_syme#3220: both?
alstroemeria313#1694: And don't group norm the conv layer it makes for the skip connection
alstroemeria313#1694: Yeah after both
gabriel_syme#3220: gotcha, ok
gabriel_syme#3220: the other one is at epoch 48
alstroemeria313#1694: Anyway. You just do that then `if i < steps - 1:` instead of `if t[i]:`.
alstroemeria313#1694: That fixes it.
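To make the off-by-one concrete (values shown for steps = 4):
```python
import torch

steps = 4
t_wrong = torch.linspace(1, 0, steps)      # [1.0000, 0.6667, 0.3333, 0.0000] - includes t == 0
t = torch.linspace(1, 0, steps + 1)[:-1]   # [1.0000, 0.7500, 0.5000, 0.2500]

for i in range(steps):
    # ... model prediction and update at t[i] would go here ...
    if i < steps - 1:  # instead of `if t[i]:` - only inject noise before non-final steps
        pass
```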
gabriel_syme#3220: oh god damn it, I'm making a norm block
gabriel_syme#3220: I think the jax code is like that
alstroemeria313#1694: yeah i did it differently there
alstroemeria313#1694: does the one step case work
alstroemeria313#1694: I don't have the notebook open rn ^^;;
gabriel_syme#3220: ehm I only have the jax versions
alstroemeria313#1694: Like it inputs the log snr for timestep 1 and nothing for the next one
alstroemeria313#1694: ?
alstroemeria313#1694: hold on
gabriel_syme#3220: groupnorm works, I only had to remove it from the last conv block
alstroemeria313#1694: yeah no
gabriel_syme#3220: woah so different init
alstroemeria313#1694: should be `t = jnp.linspace(1, 0, steps + 1)[:-1]`
gabriel_syme#3220: ok doing that
gabriel_syme#3220: should I change in the other one I wonder
gabriel_syme#3220: nah let's compare
alstroemeria313#1694: it prob doesn't matter if you do like a hundred or more steps
gabriel_syme#3220: oh ok
alstroemeria313#1694: I was testing one step for someone and noticed the off by one error.
gabriel_syme#3220: then I won't stop either and I'll head to sleep
gabriel_syme#3220: oh no
gabriel_syme#3220: https://cdn.discordapp.com/attachments/729741769738158194/890619019437178940/unknown.png
alstroemeria313#1694: oh no
gabriel_syme#3220: lol it's like a color table
gabriel_syme#3220: looks kind of nice, but not really CIFAR
gabriel_syme#3220: wait in the last ~~layer~~block
gabriel_syme#3220: do I take groupnorm out or just put it at the first conv?
alstroemeria313#1694: you might want to put it first only
gabriel_syme#3220: I did ye
gabriel_syme#3220: changed a bit but still weird
gabriel_syme#3220: that said I keep stopping it at iter 2
alstroemeria313#1694: i changed to this ```python
ResConvBlock(c, c, c),
nn.Conv2d(c, 3, 1),
```
alstroemeria313#1694: at the end
alstroemeria313#1694: so i could groupnorm the last conv in the last residual block.
alstroemeria313#1694: Then I just 1x1 it down to 3 channels.
gabriel_syme#3220: wait is that (c,c)
alstroemeria313#1694: yes
alstroemeria313#1694: bc you don't specify the input channel count with haiku
alstroemeria313#1694: bc it infers it
gabriel_syme#3220: that was confusing
alstroemeria313#1694: Wait how is [1/2 1 1/2] not just the bilinear upsampling transposed conv kernel
alstroemeria313#1694: At points where it puts a nonzero you have zeros on other side so you only get the 1.
alstroemeria313#1694: At points where it puts a zero it is 1/2 of the point on the left + 1/2 of the point on the right.
alstroemeria313#1694: I guess the difference is padding/shifting?
nshepperd#2316: makes sense to me
gabriel_syme#3220: damn blocked my CIFAR grid
alstroemeria313#1694: Like it has to align with whatever a [1/4 1/2 1/4] stride 2 downsample does
gabriel_syme#3220: the groupnorm not so hot, the normal is pretty cool already
alstroemeria313#1694: Those should reverse right?
gabriel_syme#3220: how many epochs have you run these before?
alstroemeria313#1694: (Specifically the combination of the two shouldn't shift the image by fractional pixels)
alstroemeria313#1694: like 50-200
gabriel_syme#3220: okay
nshepperd#2316: i think it doesn't shift?
alstroemeria313#1694: idk how not
alstroemeria313#1694: i can't get it to line up
alstroemeria313#1694: like with the required padding and stuff
nshepperd#2316: I'm too tired. i think I'll just sleep ๐ค
alstroemeria313#1694: ^_^
Louis#0144: gn
alstroemeria313#1694: goodnight~
Kia#2550: Get some rest nshepperd
alstroemeria313#1694: btw is there any non-ugly way to broadcast some state from the output of some layer to the input of all layers of a certain type
alstroemeria313#1694: In PyTorch
alstroemeria313#1694: Like no matter how deeply nested any of these layers are.
alstroemeria313#1694: They should all get the output of the thing as an auxiliary input.
alstroemeria313#1694: ...What is Perceiver IO like and is there a U-Net type variant of it possible?
Like they did autoencoders for images and videos
But we need long range skip connections, it won't work otherwise.
alstroemeria313#1694: For diffusion that is.
inox#5400: I came up with a cursed way, you can decide how ugly it is:
```
import torch
import torch.nn as nn
context = {}
def contextualize(Module):
class ContextualizedModule(Module):
def forward(self, x):
try:
input = context['input']
print(f'adding context input {input.shape}')
x = x + input
except KeyError:
print(f'no context input to add')
pass
return super().forward(x)
return ContextualizedModule
net = nn.Sequential(*[contextualize(nn.Linear)(10,10)]*4)
_ = net(torch.randn(16, 10))
context['input'] = torch.randn(1,10)
_ = net(torch.randn(16, 10))
```
inox#5400: output:
```
no context input to add
no context input to add
no context input to add
no context input to add
adding context input torch.Size([1, 10])
adding context input torch.Size([1, 10])
adding context input torch.Size([1, 10])
adding context input torch.Size([1, 10])
```
alstroemeria313#1694: Mine was worse I think
alstroemeria313#1694: I put a dict in an instance variable of the parent module and arranged for it to be propagated into instance variables of the modules that needed it.
alstroemeria313#1694: Then they all pulled the thing out of the dict.
alstroemeria313#1694: Then I cleared the dict at the end of the parent module's forward() to release the memory.
alstroemeria313#1694: The thing is, the modules that need the global condition vector can be arbitrarily deep
inox#5400: oh yeah I've done that before
inox#5400: โจ pytorch programming โจ
inox#5400: hm should I adapt this code to copy an existing Module and contextualize everything in it or actually do my job
cfoster0#4356: It's pretty much just perciever but using cross attention for both the input and output layers
cfoster0#4356: Yeah, you could do a U-Net variant
alstroemeria313#1694: No TRC yet
alstroemeria313#1694: ooh
inox#5400: ```
import pickle
import io
# https://stackoverflow.com/a/3073327
class MyUnpickler(pickle.Unpickler):
def find_class(self, module, name):
cls = pickle.Unpickler.find_class(self, module, name)
|
try:
if issubclass(cls, nn.Module):
return contextualize(cls)
except TypeError:
pass
return cls
pickled_module = pickle.dumps(net)
def myloads(s, fix_imports=True, encoding="ASCII", errors="strict"):
if isinstance(s, str):
raise TypeError("Can't load pickle from unicode string")
file = io.BytesIO(s)
return MyUnpickler(file, fix_imports=fix_imports,
encoding=encoding, errors=errors).load()
contextualized_net = myloads(pickled_module)
context['input'] = torch.randn(1,1)
_ = contextualized_net(torch.randn(1,1,28,28))
```
output:
```
adding context input torch.Size([1, 1])
adding context input torch.Size([1, 1])
adding context input torch.Size([1, 1])
adding context input torch.Size([1, 1])
adding context input torch.Size([1, 1])
adding context input torch.Size([1, 1])
adding context input torch.Size([1, 1])
adding context input torch.Size([1, 1])
```
inox#5400: abusing the pickle module seems to work
kurumuz#5695: does anyone know if cliptokenizer == gpt2tokenizer?
EricHallahan#1051: I don't think it is?
CRG#8707: It's a custom case insensitive tokenizer that used bpe dropout
EricHallahan#1051: Remember that CLIP is case insensitive… CRG scooped me. :thonk:
gabriel_syme#3220: hey groupnorm worked
gabriel_syme#3220: good morning
gabriel_syme#3220: https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html
:popcorn:
gabriel_syme#3220: I wonder if he is absolutely right, he might, but just pays the price for character or idiosyncrasy
gabriel_syme#3220: There is no doubt the work is incredible over the years but maybe he needed more than that, namely a sort of business acumen or PR capacity. Idk
Louis#0144: MOOOOM SCHMIDHUBER IS AT IT AGAIN
AI_WAIFU#2844: What he needed was to be in north america
gabriel_syme#3220: yea I was thinking just that
Kazumi#1297: Has anyone tried to make their own huggingface dataset for causal language models from your own txt file?
gabriel_syme#3220: yes I have
gabriel_syme#3220: edit: sending it below
gabriel_syme#3220: holy size batman, I think I might delete this
Kazumi#1297: thank you, I'll study that. sometimes discord lets you post code where you can expand and contract, but I guess that's when it's even longer. or was it when you upload it as a file?
Sora#8531: Schmidhuber was robbed, and if any of you had an honor like the Turing prize stolen from you, you would go as nuts as he does (or probably worse)
gabriel_syme#3220: yeh as a file, let me do that mbe
gabriel_syme#3220: https://cdn.discordapp.com/attachments/729741769738158194/890953027090018354/hf_lm_dataset_example.py
gabriel_syme#3220: yay cleaner
Kazumi#1297: yeaa
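For reference, one common recipe for this (model name, file path and block size are placeholders, not necessarily what the attached example does):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('gpt2')
raw = load_dataset('text', data_files={'train': 'my_corpus.txt'})
block_size = 1024

def tokenize(batch):
    return tokenizer(batch['text'])

def group_texts(examples):
    # Concatenate everything, then chop into fixed-length blocks for causal LM training.
    ids = sum(examples['input_ids'], [])
    total = (len(ids) // block_size) * block_size
    blocks = [ids[i:i + block_size] for i in range(0, total, block_size)]
    return {'input_ids': blocks, 'labels': [b.copy() for b in blocks]}

tokenized = raw.map(tokenize, batched=True, remove_columns=['text'])
lm_dataset = tokenized.map(group_texts, batched=True,
                           remove_columns=tokenized['train'].column_names)
```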
fengoku#9000: Final call for papers (submission deadline of **Sept. 30**) for our controllable generation (CtrlGen) workshop happening at NeurIPS this Dec! More details and submission instructions at https://ctrlgenworkshop.github.io/CFP.html. Looking forward to seeing some great work!
eo#8848: (sorry if this is a beginner question and if so let me know) but are there any benefits to training with masked attention in transformer decoder layers (as opposed to removing the mask and training on prefix / next character) besides training efficiency? intuitively it seems to constrain information flow to only going L->R? or is this something to do with [handwavily] 'making intermediate representations more useful to attention heads in subsequent layers / to the right'?
bmk#1476: it's "just" efficiency, but the efficiency gain is huge
bmk#1476: doing a full bidirectional attention on prefix predicting next token is hugely *hugely* inefficent
cfoster0#4356: With bidirectional information flow you leak information that makes it easier to predict the outputs, so you get less bits of "real" feedback
eo#8848: I meant changing to train on prefix / next character as opposed to just removing the mask
eo#8848: unless I'm missing something
cfoster0#4356: What do you mean by train on prefix / next character?
eo#8848: so like we pass some prefix of the target sentence to the decoder (alongside encoder hidden states) and we only care about what the rightmost head predicts (i.e. the next token in the target sentence)
bmk#1476: i think he's basically suggesting using an encoder and then predicting a single next token
bmk#1476: which works, it's just absurdly inefficient for sampling and training
cfoster0#4356: Yeah so I'm saying you'd have to spend O(prefix) compute for O(1) bits of supervision
eo#8848: oh I meant in a sequence to sequence problem as opposed to LM
bmk#1476: uh ok im confused
eo#8848: [sorry, don't think I explained this well]
bmk#1476: an encoder is just a decoder with no mask, right
bmk#1476: what youre suggesting is basically an encoder that just predicts a single token to the right
cfoster0#4356: I dunno if non-causal decoders work reliably well for text yet. Even for seq2seq my understanding is you use a bidirectional encoder for the prefix and a causal decoder
eo#8848: > non-causal decoders
right, this is the term I was looking for
cfoster0#4356: Like in theory there are ways to use diffusion or convolutions or perciever stuff for it, but I haven't seen any papers where that works well yet
eo#8848: a decoder with no mask that doesn't attend to some 'memory' (e.g. encoder hidden states)?
bmk#1476: uh not sure what memory you're thinking of
bmk#1476: an encoder attends to everything in the previous layer
cfoster0#4356: "encoder" and "decoder" are messy terms
cfoster0#4356: But generally we refer to stuff as encoders if they go from an "input/output" space to a "latent" space, whereas we refer to stuff as decoders if they go from a "latent" space to an "input/output" space. The particular implications of that in transformer land and convolution land differ
eo#8848: context: a seq2seq encoder-decoder architecture -- my understanding is that the stack of encoder layers take a sequence of token embeddings as input and return a sequence of hidden states of the same length as output, and the stack of decoder layers take the encoder hidden states and current predicted target sequence prefix as input and return the predicted next token in target sequence as output (with each decoder layer attending to the hidden states of the previous decoder layer and the final hidden states from the encoder stack)
eo#8848: [the terminology I'm using may be a bit off]
cfoster0#4356: If you remove the causal mask, which positions in the target are you predicting? Just the single token t+1? In a normal setup with the causal mask you can predict all [1...t+1] within a single forward pass through the decoder
eo#8848: yup, just the single token t+1 -- I'm aware this will result in a factor of (length of target sequence) slowdown, but I'm training various types of transformers on various algorithms and wondering whether removing the 'causal information flow' restriction on the decoder stack might make some algorithms easier to learn
cfoster0#4356: Hmm what's the intuition for how removing that would help?
Sphinx#2092: I'm more confused on how this would work at all. So the decoder only ever sees 1 token at a time during training or what?
eo#8848: suppose hypothetically / handwavily we're learning some algorithm that, in order to predict the next target sequence token, warrants some sort of 'swap' operation on (parts of) the hidden states at two locations in the target sequence prefix -- the decoder stack wouldn't be able to learn an operation like that if we have causal masking
(not that it's necessarily likely we'd learn such an operation, and the proposed scenario has its own issues, but it's illustrative of the kind of thing I'm thinking about)
nshepperd#2316: my prediction is that whatever loss in expressivity you get from causal masking would be completely outweighed by the scaling law benefit from having like 1000x as efficient compute for training
cfoster0#4356: I think the causal masking version could, although it may be difficult because of information bottlenecks. While predicting the next token, the decoder could assign half of its current dimension to information gathered from one location, half of its current dimension to the other, and swap with the FF projection
nshepperd#2316: with typical sequence lengths of ~1000
cfoster0#4356: I like to think of the current token's representation as the set of registers
cfoster0#4356: Whereas the past hiddens are ROM
eo#8848: riiight, from that perspective it makes a lot more sense ๐
I don't _quite_ have that level of resources so I've been working at an order of magnitude lower...
eo#8848: I was imagining a situation where you might have to make multiple swaps per token prediction... ig one can make this scenario more and more contrived in order to 'force' non-causal masking
cfoster0#4356: You've got the right mindset, definitely
eo#8848: ig with the decoder prefix you have an 'append-only stack'...
cfoster0#4356: Yeah
cfoster0#4356: It's also kind of a program trace tbh
cfoster0#4356: Err, execution trace?
eo#8848: [did any particular papers inspire this perspective?]
cfoster0#4356: Mm not directly. It's mostly the result of a lot of ๐ค and chatting with folks here etc.
cfoster0#4356: Here's one paper that kinda speaks to it https://arxiv.org/abs/2106.06981
cfoster0#4356: This might be useful if you had data where there were a bunch of single-token targets that are conditionally independent given the encoder prefix. Bc then you can batch
MaxHager#6351: Hey. I know that there is a initiative from google where they show a lot of ANN. A kind of playground with already trained models. I forgot the name of the website. Does someone know what I mean and can share the link please?
eo#8848: do you mean https://microscope.openai.com/models ? or tensorflow model zoo
MaxHager#6351: Nice, thank you!!:)
eo#8848: > single-token targets that are conditionally independent given the encoder prefix
example?
cfoster0#4356: I can't think of a good example of when you'd actually encounter this ๐
Sphinx#2092: I still don't really get the setup nor what thr model is supposed to be doing, nor why swapping is needed.
Sphinx#2092: That said, you can model nonautoregressive MT this way.
Awesome_Ruler_007#7922: Any tips for fine-tuning large (120M) object detection models? What kind of LR scheduler should I use for a small dataset (~5k images)? yadda yadda .....
Awesome_Ruler_007#7922: I am using kaggle, so due to quota limitations I want to reduce my experimentation time ๐
gabriel_syme#3220: cool paper thx
AdamScherlis#1848: Hi all -- I'm a researcher at Redwood Research. We want to pay people to try to break our language model. Should I post the details here, #off-topic , one of the alignment channels, or what?
(Alignment and ML knowledge aren't a prerequisite for helping us out with this.)
janus#0150: No advertising :banhammer:
EricHallahan#1051: Welcome! Any of those would be suitable, though here would probably get the most exposure.
bmk#1476: probably i'd say #alignment-general would be the best place to post it
bmk#1476: here would get more exposure overall, but less exposure from the kinds of people who might be interested in it
EricHallahan#1051: If you want real feedback, I concur with the opinion to leave it in #alignment-general.
AdamScherlis#1848: I've posted in #alignment-general , head over there if I've piqued your interest ๐
janus#0150: I really like the project you guys are working on ๐. I'm excited to see how it goes.
janus#0150: I like that it is very biased towards Alex Rider, I've been very curious about how training LMs on narrow data affects its corrigibility and off-distribution behavior.
janus#0150: Like if you have it continue a Wikipedia article or python code, will Alex Rider eventually interject himself?
gabriel_syme#3220: narrow data you say
AdamScherlis#1848: Ha! Accidental benefit.
Louis#0144: What does break actually entail
Louis#0144: I'm curious
EricHallahan#1051: Did you read *any* of the resources linked?
EricHallahan#1051: They define it pretty thoroughly.
AdamScherlis#1848: I tried this briefly and it seems to be continuing it in Wiki style without any Rider or fanfic-isms
janus#0150: Makes sense. I expect it to be very good at compartmentalization, like it never switches from wikipedia to python code just because python is mentioned. In the limit I wonder how its ability to simulate other characters is silently corrupted though. Like if you simulate other characters, do their actions/values drift towards Alex Rider's perspective/actions?
SeishinSennin#9919: Just dropping yet another pdf here... sorry if it's the wrong place, newbie, please no bulli https://arxiv.org/abs/2109.10317
Louis#0144: @StellaAthena will interest you
EricHallahan#1051: Welcome!
EricHallahan#1051: I suggest moving this to #research, which is usually where we discuss papers.
SeishinSennin#9919: ok
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/891270607298244658/Screen_Shot_2021-09-25_at_3.31.07_AM.png
alstroemeria313#1694: (From the ViT paper.)
alstroemeria313#1694: I was thinking about using a hybrid U-Net ViT as a diffusion model arch.
alstroemeria313#1694: That is, the outputs would be projected into a feature map w/ a different learned output projection
alstroemeria313#1694: And then there would be upsampling stages
alstroemeria313#1694: And long range skip connections between each convolutional downsampling stage in the encoder and each convolutional upsampling stage in the decoder.
nshepperd#2316: the transformer goes in the lowest resolution stage?
nshepperd#2316: like in the middle
alstroemeria313#1694: yep
alstroemeria313#1694: you downsample to like 16x16, do a couple of conv layers at 16x16, and feed the end of that into the ViT
nshepperd#2316: ahh
alstroemeria313#1694: or mb just feed it in directly after downsampling idk
nshepperd#2316: won't even need perceiver for 16x16
alstroemeria313#1694: yeah, idk how perceiver works yet
kurumuz#5695: i wonder if the reason most gans cant do bodies are convnets
kurumuz#5695: i havent seen an examination on this
๐
ฌ gabriel_syme ๐
ฌ#3220: is it useful to have embeddings at different resolutions for diffusion?
gabriel_syme#3220: because I remember a handful of UNets doing really cool/weird stuff with that in medical applications
alstroemeria313#1694: well, you'd add a positional embedding before feeding it in
alstroemeria313#1694: i looked at unet++ but didn't see any obvious way to provide the deep supervision at the lower resolutions w/ diffusion
alstroemeria313#1694: since we output the predicted noise
alstroemeria313#1694: like would we have to use downsampled noise as the target? idk
gabriel_syme#3220: hmm ok
alstroemeria313#1694: it still might work w/ downsampled noise
alstroemeria313#1694: but that's a complex arch lol
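If anyone wants to try the downsampled-noise idea, one very rough way it could look is below. This is exactly the open question raised above, so treat it as a guess rather than UNet++'s actual deep supervision: average-pool the noise target to each auxiliary head's resolution and sum the MSE losses. The function name and the list-of-heads convention are made up.
```python
import torch
import torch.nn.functional as F

def multiscale_loss(preds, noise):
    """preds: predicted-noise maps at full, 1/2, 1/4, ... resolution (hypothetical setup)."""
    loss = 0.0
    for i, pred in enumerate(preds):
        # downsample the noise target to match each head's resolution
        target = F.avg_pool2d(noise, 2 ** i) if i > 0 else noise
        loss = loss + F.mse_loss(pred, target)
    return loss / len(preds)
```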
gabriel_syme#3220: like do diffusion in parallel lol
gabriel_syme#3220: yeh those Unets were already super complex
alstroemeria313#1694: the original U-Net is fairly understandable, but
gabriel_syme#3220: one had residuals between every scale, and an output for each scale and joined at the end etc.
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/891273742645354566/Screen_Shot_2021-09-23_at_2.png
alstroemeria313#1694: UNet++ doesn't join at the end i think?
gabriel_syme#3220: ye smth like that! only it also had an output at each scale
gabriel_syme#3220: does this have?
alstroemeria313#1694: The outputs for each scale are for deep supervision, they aren't actually used during inference i thought
gabriel_syme#3220: oh the middle bits
alstroemeria313#1694: the red lines from L yes
nshepperd#2316: what even is that diagram lol
gabriel_syme#3220: ye fck that tbh
alstroemeria313#1694: eheh~
gabriel_syme#3220: must feel fun connecting lines though
alstroemeria313#1694: It's defined recursively
alstroemeria313#1694: They show the first few cases on the right
nshepperd#2316: so it has a downsampling process. and each stage of downscaling has a *separate* series of upsampling steps taking it back to the original size
alstroemeria313#1694: yes
nshepperd#2316: and there are skip connections between literally anything that's the same resolution
alstroemeria313#1694: yes
alstroemeria313#1694: Maximum feature reuse.
alstroemeria313#1694: I think they train their segmentation maps at all resolutions simultaneously
alstroemeria313#1694: so my ViT for this would have no [class] token
alstroemeria313#1694: And would use all output patches instead of just one.
nshepperd#2316: sounds good
nshepperd#2316: decision transformer is learning kind of slow... not sure it's actually using the cosim score input instead of just inferring the lerp ratio from the text and image embeddings being different domains
gabriel_syme#3220: a question about these DT experiments
gabriel_syme#3220: what are the actual sequences?
gabriel_syme#3220: and is it the same reward-to-go at each step? I forget, I think it was right
alstroemeria313#1694: VQGAN tokens
alstroemeria313#1694: yes
gabriel_syme#3220: so..all the patches that make an image get flattened to a sequence?
alstroemeria313#1694: yes
gabriel_syme#3220: aha cool ty!
alstroemeria313#1694: you can see during early training that it learns horizontal dependencies before vertical
gabriel_syme#3220: that's pretty cool, making an image piece by piece. we should make an interpretability/visualization tool
alstroemeria313#1694: DALL-E works the same way
alstroemeria313#1694: Except with OpenAI's VAE tokens
alstroemeria313#1694: And not having the reward input
nshepperd#2316: maybe i should also lerp between image embeddings of different images
alstroemeria313#1694: And taking text tokens directly instead of a CLIP embedding.
Some Point Process#3793: They still use a transformer don't they? :p
alstroemeria313#1694: yeah it outputs logits for VAE tokens autoregressively, left to right on the top row then the next row etc.
alstroemeria313#1694: and you generate by sampling
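For anyone following along, a bare-bones sketch of that raster-order sampling loop. `model` here stands for any autoregressive transformer over discrete image tokens (VQGAN or the OpenAI VAE codebook) that returns logits of shape (batch, seq, vocab); `prefix` holds whatever conditioning tokens go first, and the grid size is a parameter. Real implementations usually add top-k/nucleus filtering and key/value caching on top of this.
```python
import torch

@torch.no_grad()
def sample_tokens(model, prefix, grid=16, temperature=1.0):
    """Sample a grid x grid image token-by-token, left to right, row by row."""
    seq = prefix
    for _ in range(grid * grid):
        logits = model(seq)[:, -1] / temperature   # logits for the next token only
        probs = torch.softmax(logits, dim=-1)
        next_tok = torch.multinomial(probs, 1)     # sample instead of argmax
        seq = torch.cat([seq, next_tok], dim=1)
    return seq[:, prefix.shape[1]:].reshape(-1, grid, grid)  # just the image tokens
```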
RyanT#5929: For people who have used WandB before, is it difficult to post-process the data from the generated graphs automatically? I.e. having a script to pull the data down and applying some set of transformations before plotting locally? I imagine it should be, but I haven't worked with it
bmk#1476: the data can be downloaded as csv
StellaAthena#3530: @RyanT You can run arbitrary analysis / modification before you upload it actually. Or you can bulk download as a CSV like Leo said
bmk#1476: (psa: csvs are bad and terrible and you should use jsonl instead (when possible, unfortunately wandb doesnt have jsonl))
bmk#1476: i should make a generic csv to jsonl (and vice versa) adaptor so i never ever have to use csvs again in my code
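A generic adaptor along those lines is only a few lines of stdlib. One caveat: `csv.DictReader` hands every value back as a string, so round-tripping loses types.
```python
import csv
import json

def csv_to_jsonl(csv_path, jsonl_path):
    # one JSON object per CSV row
    with open(csv_path, newline='') as f_in, open(jsonl_path, 'w') as f_out:
        for row in csv.DictReader(f_in):
            f_out.write(json.dumps(row) + '\n')

def jsonl_to_csv(jsonl_path, csv_path):
    with open(jsonl_path) as f_in:
        rows = [json.loads(line) for line in f_in if line.strip()]
    with open(csv_path, 'w', newline='') as f_out:
        # assumes every record shares the keys of the first one
        writer = csv.DictWriter(f_out, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```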
Awesome_Ruler_007#7922: Whaaat?
Awesome_Ruler_007#7922: wtf, how does my training end at the 1000th step with CUDA OOM?
batch size is the same, samples are of fixed size 🤣
EricHallahan#1051: You have a memory leak obviously.
nev#4905: might be something eval related
nev#4905: if it's exactly at the 1000th step
EricHallahan#1051: It could.
Awesome_Ruler_007#7922: no eval scheduled before epoch, and epoch was for 1500 steps:berk:
EricHallahan#1051: So it is a memory leak.
Awesome_Ruler_007#7922: it was at 1040th step or smthing
Awesome_Ruler_007#7922: and `mmdet` reports memory, it's the same for all steps
```py
2021-09-25 16:55:56,092 - mmdet - INFO - Epoch [1][1030/1350] lr: 2.500e-03, eta: 8:32:26, time: 1.606, data_time: 0.006, memory: 14637, loss_rpn_cls: 0.0029, loss_rpn_bbox: 0.0000, s0.loss_cls: 0.0000, s0.acc: 100.0000, s0.loss_bbox: 0.0000, s1.loss_cls: 0.0000, s1.acc: 100.0000, s1.loss_bbox: 0.0000, s2.loss_cls: 0.0000, s2.acc: 100.0000, s2.loss_bbox: 0.0000, loss: 0.0030
2021-09-25 16:56:12,370 - mmdet - INFO - Epoch [1][1040/1350] lr: 2.500e-03, eta: 8:32:15, time: 1.628, data_time: 0.007, memory: 14637, loss_rpn_cls: 0.0026, loss_rpn_bbox: 0.0000, s0.loss_cls: 0.0000, s0.acc: 100.0000, s0.loss_bbox: 0.0000, s1.loss_cls: 0.0000, s1.acc: 100.0000, s1.loss_bbox: 0.0000, s2.loss_cls: 0.0000, s2.acc: 100.0000, s2.loss_bbox: 0.0000, loss: 0.0026
```
Awesome_Ruler_007#7922: I assume my model is too big for the data, hence the skewed losses. it's training from scratch for a sanity check
Awesome_Ruler_007#7922: but the `memory` is the same for all steps :ultrathonk:
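If the framework's logged memory number stays flat but you still OOM later, it can help to log the allocator stats yourself every step; a slowly growing allocated figure usually means Python references (a classic one is storing `loss` instead of `loss.item()`) are keeping computation graphs alive. A small sketch, assuming a CUDA device:
```python
import torch

def log_cuda_memory(step):
    # Allocated = tensors currently alive; reserved = what the caching allocator holds.
    alloc = torch.cuda.memory_allocated() / 2**20
    reserved = torch.cuda.memory_reserved() / 2**20
    print(f"step {step}: allocated {alloc:.0f} MiB, reserved {reserved:.0f} MiB")
```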
Awesome_Ruler_007#7922: :virgin: Downward loss graph
:chad: earthquake loss graph https://cdn.discordapp.com/attachments/729741769738158194/891438088998510592/A59qmbbdvYEAAAAAElFTkSuQmCC.png
Awesome_Ruler_007#7922: I hope such unstable training is normal for a ridiculously large model (90M) trained from scratch on a small dataset (5k images) with a batch size of 4.
cuz I can't handle any more debugging
sweg#8920: so, interesting pytorch error im getting. i suspect its due to duplicates in cross entropy but not entirely sure
sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/891443631020449823/unknown.png
sweg#8920: this goes on for a looong time followed by
sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/891443715158208572/unknown.png
sweg#8920: anyone have any ideas?
AI_WAIFU#2844: you're on your own lol
cfoster0#4356: If the graph above is your training loss, something is deeply deeply wrong
sweg#8920: :harold:
Awesome_Ruler_007#7922: ๐ฆ
Awesome_Ruler_007#7922: the LR is like 2.5e-3, the BS is small. maybe it can't shift the grads very well with such a small dataset (5K images) and a small number of steps?
Awesome_Ruler_007#7922: 1k steps isn't a lot - I am really really hoping the stuff works. The model is pretty big anyways (90 million), it's just to verify my pipeline works
I believe it was trained for the original COCO, which is 120K images - and I am training from scratch
Awesome_Ruler_007#7922: considering the above factors, does it indicate something's off?
cfoster0#4356: Probably. Your stats say you're at 0 loss and 100% accuracy which is sus
Awesome_Ruler_007#7922: hmm...that's true.
Awesome_Ruler_007#7922: but that's not the overall accuracy, is it?
Awesome_Ruler_007#7922: it's just for a particular class - which yea is sus, but not that out of the ordinary considering the number of things that make it unstable
cfoster0#4356: Idk your data or setup, but it doesn't pass the smell test
nostalgebraist#3542: this usually means one of the target indices was too big to look up in the logits matrix
nostalgebraist#3542: like say your model outputs were shaped (32, 3) but one of your target indices was 12
Awesome_Ruler_007#7922: well I knew I won't get anywhere without a bit of pain anyways
nostalgebraist#3542: instead of them all being < 3
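A quick way to confirm that diagnosis before the opaque device-side assert fires is to check the labels on CPU against the number of classes. The shapes and the bad index below just mirror the example above; the helper name is made up.
```python
import torch
import torch.nn.functional as F

def check_targets(logits, targets):
    """Catch the out-of-range labels that cause the opaque CUDA device-side assert."""
    n_classes = logits.shape[-1]
    bad = (targets < 0) | (targets >= n_classes)
    if bad.any():
        raise ValueError(f"labels {targets[bad].unique().tolist()} out of range "
                         f"for {n_classes} classes")

logits = torch.randn(32, 3)                  # e.g. model outputs shaped (32, 3)
targets = torch.randint(0, 3, (32,))
targets[5] = 12                              # one bad label, like the case above
check_targets(logits, targets)               # raises here, on CPU, with a clear message
loss = F.cross_entropy(logits, targets)
```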
Awesome_Ruler_007#7922: clearly I have to suffer more
nostalgebraist#3542: @Awesome_Ruler_007 are you clipping grads?
Awesome_Ruler_007#7922: idts
Awesome_Ruler_007#7922: why would I yet anyways, since I don't seem to *have* grads yet :berk:
Awesome_Ruler_007#7922: im pretty sure its severely underfitting, training 90M model from scratch designed for 120k image dataset, not 5k
nostalgebraist#3542: you mean overfitting?
Awesome_Ruler_007#7922: imma try a warm restart tmrw and see how it goes
nostalgebraist#3542: not clipping grads might result in those loss spikes, was my thinking
Awesome_Ruler_007#7922: o yea sorry lol.
sweg#8920: Labels are made based on output shape so I don't think it could be that
Awesome_Ruler_007#7922: I don't think the repo has the option to clip grads, so I would have to go down into the nitty gritty
Awesome_Ruler_007#7922: oh wait nvm it does. Imma prolly try fine-tuning and see how it goes
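For reference, in a plain PyTorch loop gradient clipping is a single call; the batch keys, loss function, and max_norm value below are placeholders. (In mmdet configs this is exposed, if I remember right, as a `grad_clip` entry under `optimizer_config`.)
```python
import torch

def train_step(model, batch, optimizer, loss_fn, max_norm=1.0):
    optimizer.zero_grad()
    loss = loss_fn(model(batch["x"]), batch["y"])
    loss.backward()
    # Rescale the gradient so its global norm is at most max_norm;
    # this is the usual guard against occasional loss spikes.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()
    return loss.item()
```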
gabriel_syme#3220: I'm running out of colors in the wandb report
gabriel_syme#3220: I think we might be able to use the API to export json from the dataframe
gabriel_syme#3220: going to try it now
gabriel_syme#3220: turns out it's pretty cool:
```python
import wandb

api = wandb.Api()
runs = api.runs("username/project")  # all runs in the project
# scan_history streams the logged values for one run
history = runs[0].scan_history(keys=["plot_you_want_to_export"])
data = [row["plot_you_want_to_export"] for row in history]
```
gabriel_syme#3220: can put all those in a df and export into json
gabriel_syme#3220: ok got it, I'll send it over in a dm to not flood general
gabriel_syme#3220: this anyways https://cdn.discordapp.com/attachments/729741769738158194/891487075604516945/extract_data_wandb_api.py
EricHallahan#1051: Is that JSON or JSONL? Looks like standard JSON.
gabriel_syme#3220: yeah json
gabriel_syme#3220: did not look if I can do jsonl from pandas
gabriel_syme#3220: ah you might be able to do it with orient and some lambdas. messy
gabriel_syme#3220: ok I think you can get jsonl by: `all_df.to_json("project.jsonl", orient='records', lines=True, indent=4)`
gabriel_syme#3220: cool
oreo#2740: can someone point me to some products/websites that use gpt-j/neo?
EricHallahan#1051: Do you have any restrictions that limit what counts, or is it a free-for-all?
Deleted User#0000: hey! I am trying to get access to OpenAI Codex. I have joined the waiting list, but I did not receive an email confirmation? Are you supposed to get an email saying that you joined the waiting list, or not? Thank you in advance!
greencube#6725: AttributeError: module 'math' has no attribute 'dist'
greencube#6725: why do i get this error
greencube#6725: dist is supposed to be in math
random_lurker99a#1890: wrong python version
greencube#6725: ok ill just use pytorch dist
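For context, `math.dist` only exists on Python 3.8 and newer; on an older interpreter a drop-in fallback is easy enough:
```python
import math
import sys

def dist(p, q):
    # math.dist landed in Python 3.8; fall back to the definition on older versions.
    if sys.version_info >= (3, 8):
        return math.dist(p, q)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(dist((0, 0), (3, 4)))  # 5.0
```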
Dx#7484: I'm trying to understand how exactly closedai prepped their dataset. Afaik, Gpt3 is trained with a generative objective. Did they do sliding windows of length (context window), and let the model predict the next token? Do they feed in variable length sequences and left pad them? Or is there something else going on that I'm not aware of? I've been reading the original paper, but it seems that this is not clearly explained.
Dx#7484: Any guidance is appreciated.
EricHallahan#1051: It's all next token prediction.
Some Point Process#3793: they stuffed the entire context with random samples during training
Some Point Process#3793: IIRC endoftext between different examples
Dx#7484: Wait, so they took samples of any random length and just stuffed them in the context?
Dx#7484: Wouldn't this potentially cause bias when the input is very short?
Kharr#7888: More like <EOT> Sample 1 <EOT> Sample 3343 <EOT> Sample 42992 <EOT> until the context window is entirely full
cfoster0#4356: Yeah, a lot of AR models have this bias
cfoster0#4356: The "early token curse"
cfoster0#4356: https://arxiv.org/abs/2012.15832
oreo#2740: @Kharr <EOT> tokens in the beginning and end of the whole context as well?
Kharr#7888: Just at the very start. GPT models are normally prompted with EOT
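A minimal sketch of that packing scheme, assuming a tokenizer object with an `encode` method and a known EOT id; GPT-3's exact pipeline isn't public, but this is the commonly used approximation.
```python
def pack_documents(docs, tokenizer, eot_id, block_size=2048):
    """Concatenate tokenized docs separated by <EOT> and cut into full-size blocks."""
    stream = [eot_id]                      # models are usually prompted with EOT
    for doc in docs:
        stream.extend(tokenizer.encode(doc))
        stream.append(eot_id)              # separator between documents
    # Drop the ragged tail so every training example fills the context window.
    n_blocks = len(stream) // block_size
    return [stream[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]
```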
Awesome_Ruler_007#7922: I am dumb AF. Spent 2 hours trying to debug why my loss is so wonky while fine-tuning, only to discover that I hadn't even loaded the checkpoint https://cdn.discordapp.com/attachments/729741769738158194/891804802856669285/WB_Bad_loss.png
alstroemeria313#1694: oh no
Awesome_Ruler_007#7922: Apparently I make all my changes in my dreams :berk:
alstroemeria313#1694: yesterday i discovered that i had left all the biases off the conv2d layers in a couple of my networks
bmk#1476: congratulations you just solved neural network debiasing
gabriel_syme#3220: @AI_WAIFU did you ever do more with your DT? Slowly starting to think of fleshing an experiment out
AI_WAIFU#2844: no, it's behind a big fuck backlog of other shit I gotta do.
gabriel_syme#3220: okie, I'll be giving it a shot this or next month but have not yet opened their repo.
heuo#5207: Hi all, newbie question: is there a visualisation tool that lets you see the approximate internal state of the model while it is running? Something with a sci-fi feel, a nice GUI, and a certain amount of real-time feedback. Does such a tool currently exist? Surely something like this will exist in the future.
EricHallahan#1051: What is your purpose/motivation for watching the internal state?
heuo#5207: Curiosity, and also for work. If you can observe the model as it runs, you can easily find conditions that aren't detectable from the data reports alone.
A graphical representation would be more intuitive.
For example, you might find some inputs that only interact with a local part of the network, while others interact violently throughout the model. That's not only cool, it also makes it easier to understand the details of how the model operates.
cfoster0#4356: Unfortunately no
xcodevn#9003: ha, i think i noticed this in your colab notebook. Is this because your conv layers are expected to be followed by batchnorms, but there is no batchnorm?
nshepperd#2316: yeah she removed the groupnorms bc they were breaking things
wabi-sabi#5811: Maybe look at Distill's stuff on feature visualizations, that's the closest I know of to this.
gabriel_syme#3220: @alstroemeria313 not sure what this means but it has some keywords you were talking about recently, maybe there's an idea in there
https://arxiv.org/abs/2109.12077
nshepperd#2316: my DT train loss increased overnight...
nshepperd#2316: https://arxiv.org/abs/2005.09669 uhh weird, I looked up "mirror flow" and these people apparently came up with some sort of diffusion version of newton's method?
nshepperd#2316: this does not seem efficient https://cdn.discordapp.com/attachments/729741769738158194/891968484815278121/2021-09-27-184335_1709x410_scrot.png
alstroemeria313#1694: groupnorms, yes
alstroemeria313#1694: I decided to take them out and forgot to re-enable biases
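The usual way to avoid that footgun is to tie the conv bias to whether a norm layer follows, since the norm's own affine shift plays much the same role as a preceding bias. A small helper sketch, with made-up defaults:
```python
import torch.nn as nn

def conv_block(c_in, c_out, use_norm=True, groups=32):
    # With a norm layer the conv bias is largely redundant (the norm's shift absorbs it);
    # without one, remember to turn the bias back on.
    layers = [nn.Conv2d(c_in, c_out, 3, padding=1, bias=not use_norm)]
    if use_norm:
        layers.append(nn.GroupNorm(groups, c_out))
    layers.append(nn.ReLU())
    return nn.Sequential(*layers)
```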
Awesome_Ruler_007#7922: when you want to use strong language to argue something, but are also an academic
https://www.scs.stanford.edu/~dm/home/papers/remove.pdf
chirp#4545: https://twitter.com/ak92501/status/1442532808476528642?s=21
chirp#4545: Maybe the Tesla Bot won't be so far off after all
gabriel_syme#3220: the semantic revolution y'all
cfoster0#4356: Semantic revolution?
mkualquiera#3484: meaning-based programming and such
mkualquiera#3484: I guess that's what gabriel means
gabriel_syme#3220: ah yes, my bad
gabriel_syme#3220: it might be a wrong use of english there
gabriel_syme#3220: yea
mkualquiera#3484: I think it's correct, just not a commonly used term (for obvious reasons)
gabriel_syme#3220: yea
gabriel_syme#3220: in my domain I just use the 'language as design'
gabriel_syme#3220: sells more tickets
One#5919: Can you imagine when every human will be able to program a computer
One#5919: Just using natural language
One#5919: ๐คฏ ๐คฏ ๐คฏ
Slack#2746: im creating a discord chatbot with gpt-neo