kurumuz#5695: lol someone to code with the required motivation is harder to find than the compute
Kazumi#1297: it was fun when I had a friend to do a short project with
Qq#7586: hey anyone got any tips for training diffusion models? I've attempted to implement a recent paper for a lil pixel art database I have, and I'm getting some very ... abstract outputs https://cdn.discordapp.com/attachments/729741769738158194/944062356152000603/unknown.png
EricHallahan#1051: I suggest you ask in #art.
Qq#7586: ah cool thanks :)
zphang#7252: 🏴☠️ 🦜 how is there not a pirate/eyepatch emoji
ersatz#0001: :PirateCat:
Kazumi#1297: :PirateSmile:
nnailer#7957: Sorry if this was already answered somewhere but I did not see it when I looked. What's the minimum ram requirement to run the 20b model on deepspeed with the minimum configuration?
𓅬 gabriel_syme 𓅬#3220: I like the cat!
alstroemeria313#1694: ```
// - sigmoid: the effective learning rate follows a sigmod decay
// return base_lr ( 1/(1 + exp(-gamma * (iter - stepsize))))
```
alstroemeria313#1694: ...does this actually work
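For reference, a minimal Python sketch of the quoted Caffe "sigmoid" policy; base_lr, gamma, and stepsize here are illustrative, not values from the discussion:
```python
import math

# The LR ramps from ~0 toward base_lr around iter == stepsize, at a rate set by gamma.
def sigmoid_lr(it, base_lr=0.1, gamma=0.001, stepsize=5000):
    return base_lr * (1.0 / (1.0 + math.exp(-gamma * (it - stepsize))))

for it in (0, 2500, 5000, 7500, 10000):
    print(it, sigmoid_lr(it))
```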
StellaAthena#3530: For inference, you can do it on an A6000 or a pair of 3090s. You need approximately 45 GB of VRAM, and strictly more than 40.
nnailer#7957: I appreciate the response but I meant system ram. Thank you!
StellaAthena#3530: Like CPU?
StellaAthena#3530: 1 KB?
StellaAthena#3530: (I'm joking, but only slightly. If you have enough VRAM you will have enough RAM. CPU is a non-issue)
nnailer#7957: I only have 128gb of system ram which is a platform limit but I do have two 3090's. Thanks for the response.
alstroemeria313#1694: Can I customize what deepspeed prints every 10 steps easily
alstroemeria313#1694: Like I have an EMA warmup schedule that I would like to print
DigThatData#7946: where's the show and tell calendar? that's still a thing, right?
EricHallahan#1051: Show & Tell is still a thing, and I still would like to host one sometime soon™️! I personally just have not found a good time to put one together given that I have been extremely busy with GPT-NeoX-20B training/launch/release/paper while also having to juggle being a full-time student at the same time.
DigThatData#7946: np, sounds like you've got your priorities triaged appropriately :p
Marmetor#8047: Has anyone tracked the Kullback-Leibler divergence from validation batch *i* to batch *i+1* in MLM tasks? I wonder if it could offer some useful insight
OccultSage#3875: 6gb of system RAM or so if you do the loading correctly.
Crit#0843: I'm trying to build an email thread summarizer that can accurately identify what each person is talking about instead of a general purpose summary i.e "X is talking about {topic} and Y thinks {input}" vs "the conversation in this email is about {topic}"
Going to try one of the base models off huggingface vs fine-tuning davinci. Are there any usable datasets you guys know of? I've come across the avocado and w3c corpus but it's prohibitively expensive. I'll need at least 100-200 examples to get anything decent from davinci
𓅬 gabriel_syme 𓅬#3220: Sounds like you need to curate 200 examples :)
Crit#0843: tough 😦
Ifty#5354: Goose Parade
https://youtu.be/aOxheDoks_Y
Kia#2550: #off-topic
TurtleRabbit#4380: Hey
TurtleRabbit#4380: Getting this error when I use Google's ViT on MNIST for numbers
TurtleRabbit#4380: Any idea how to fix this
TurtleRabbit#4380: https://cdn.discordapp.com/attachments/729741769738158194/944526044562612254/unknown.png
TurtleRabbit#4380: MNIST is gray scale
TurtleRabbit#4380: So it has one I guess
TurtleRabbit#4380: Any work around for it
TurtleRabbit#4380: But the same model accepts other higher resolutions like 1600x900
TurtleRabbit#4380: I assumed it would take anything
TurtleRabbit#4380: I'm like new to this whole thing
ilovescience#3282: yo i don't think ViT models accept anything apart from the size they were trained with...
as @Deleted User said, the MNIST images are 28x28 so there's something massively wrong with the way you've processed these images..
ilovescience#3282: also this Discord isn't really for beginner-level help...
TurtleRabbit#4380: It is working for images that have 1600*900 resolution
TurtleRabbit#4380: I think maybe it's because of MNIST being in grayscale
ilovescience#3282: are you training a new model?
ilovescience#3282: like fine-tuning?
TurtleRabbit#4380: Yes
TurtleRabbit#4380: https://huggingface.co/blog/fine-tune-vit
TurtleRabbit#4380: Following steps in this blog
ilovescience#3282: > The Vision Transformer was pre-trained using a resolution of 224x224. During fine-tuning, it is often beneficial to use a higher resolution than pre-training (Touvron et al., 2019), (Kolesnikov et al., 2020). In order to fine-tune at higher resolution, the authors perform 2D interpolation of the pre-trained position embeddings, according to their location in the original image.
Looks like there is some trick that I wasn't aware of before, interesting...
TurtleRabbit#4380: I saw that MNIST numbers was an introductory dataset for neural networks, so that's why I thought of trying it out with Transformers
ilovescience#3282: okay you see here?
```
feature_extractor = ViTFeatureExtractor.from_pretrained(model_name_or_path)
```
you want a feature_extractor that gives you `(1, 1, 28, 28)` so you need to adjust the input arguments to do that... IDK if you can even use the pretrained stuff for that especially since it's one channel...
TurtleRabbit#4380: Okay I'll try this out
TurtleRabbit#4380: https://cdn.discordapp.com/attachments/729741769738158194/944558336702033970/unknown.png
TurtleRabbit#4380: This kinda threw no errors
TurtleRabbit#4380: But this isn't a pre-trained one
TurtleRabbit#4380: https://cdn.discordapp.com/attachments/729741769738158194/944559607035093092/unknown.png
TurtleRabbit#4380: This is on a pre-trained and this also threw no errors
TurtleRabbit#4380: So like does the model now retain its features that it had learned before
tpapp157#3643: Enron?
Crit#0843: not enough threads on enron
Crit#0843: also conversations are extremely short
Crit#0843: email intelligence is still difficult to build for due to lack of open datasets
alstroemeria313#1694: @nshepperd when you do a https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.all_gather.html, it doesn't propagate gradients back to the other devices right? the tensors from other devices are detached and only the one that came from your device contributes to the grad? >_>
alstroemeria313#1694: or, um
alstroemeria313#1694: Wait
nshepperd#2316: i think it actually does
alstroemeria313#1694: Oh :/
alstroemeria313#1694: I need it to not
nshepperd#2316: i looked at the source code at some point and the grad was an all_scatter or something
alstroemeria313#1694: ohh
alstroemeria313#1694: Can I just detach it somehow
alstroemeria313#1694: Like does that make sense to do in JAX.
nshepperd#2316: you can jax.lax.stop_gradient()
alstroemeria313#1694: (This is for CLOOB training on TPUs)
alstroemeria313#1694: So I need to do an all gather inside the thing I take the grad of.
alstroemeria313#1694: And if the tensors from other devices are detached, then when I do a final psum of the grads then I end up with the correct gradient.
alstroemeria313#1694: (psum, not pmean, in this case.)
nshepperd#2316: hmmm
alstroemeria313#1694: Is it actually going to hurt anything if I let it propagate the grads *there* and then just *don't* psum at the end.
nshepperd#2316: for my InfoLOOB sampling loop, i did all_gather without detaching it
alstroemeria313#1694: right, but you had different params per device.
nshepperd#2316: and then... pmeaned the grad and divided by 8
alstroemeria313#1694: right?
alstroemeria313#1694: like you were doing one output per device?
nshepperd#2316: pmapping a batch of 8 images yeah
alstroemeria313#1694: ...this really confuses me, why pmean if you have different params per device
nshepperd#2316: err, let me double check
alstroemeria313#1694: i think if the all_gather propagates grads back you end up with the same grad on all devices?
alstroemeria313#1694: at least you should? and if you have different params per device you should end up with different parts of the same global grad?
nshepperd#2316: wait no i didn't pmean
nshepperd#2316: you're right that doesn't make sense for sampling
alstroemeria313#1694: and we will just be doing data parallel training so the params will be the same on all devices and we shouldn't have to psum/pmean at the end and we can just proceed to do the parallel optimizer step
nshepperd#2316: i think i did divided by 8
alstroemeria313#1694: i think that may be incorrect?
nshepperd#2316: bc like. after the all_gather the computation is identical on each device
alstroemeria313#1694: it... wait
nshepperd#2316: it's just the infoloob formula applied to the all_gather matrix
alstroemeria313#1694: how is it identical, the stored activations on each device are different
nshepperd#2316: which is the same on every device
alstroemeria313#1694: or do you mean from the all_gather forward.
nshepperd#2316: yes that
alstroemeria313#1694: ahhh.
alstroemeria313#1694: ok so.
alstroemeria313#1694: if my stored activations are different... ugh
nshepperd#2316: the part from the all_gather of text/image embeds to the loss
nshepperd#2316: its the same on every device
alstroemeria313#1694: then i do not end up with the same grads on each device and i have to pmean/psum
nshepperd#2316: so when it propagates gradients back through the scatter
alstroemeria313#1694: the grad_output to the end of CLIP is the same on every device.
nshepperd#2316: the thing you are all_gathering gets gradients equal to n_devices * the infoloob grad, right
alstroemeria313#1694: but the CLIP saved activations are different.
nshepperd#2316: right
nshepperd#2316: yeah then you need to pmean
alstroemeria313#1694: no this has to be wrong
alstroemeria313#1694: agh
alstroemeria313#1694: the backward is a *scatter*
alstroemeria313#1694: I need more coffee ☕
alstroemeria313#1694: OH
alstroemeria313#1694: You just end up with the grad_output you would have if you had detached, but it is multiplied by the number of times you computed the loss?
nshepperd#2316: yep
alstroemeria313#1694: ahhh
alstroemeria313#1694: so do you psum or pmean or pmean / num_devices at the end.
alstroemeria313#1694: for CLOOB training.
nshepperd#2316: i think pmean / num_devices
alstroemeria313#1694: i was under the impression that if you detached, you were computing a *part* of the gradient and so summing would be correct.
alstroemeria313#1694: so i should do a pmean if not detached.
alstroemeria313#1694: Can I just detach it to speed it up tbh.
alstroemeria313#1694: To save it from doing the scatter.
alstroemeria313#1694: (The actual CLOOB training code uses a PyTorch all_gather that detaches, and then they form a version of the all_gather result that has a grad_fn and only propagates through the local embeddings.)
nshepperd#2316: idk, maybe
nshepperd#2316: jax.lax.all_gather(jax.lax.stop_gradient(embeds))?
alstroemeria313#1694: Yeah then do some concats to replace the all gather result with a thing that will propagate a gradient only through your shard.
nshepperd#2316: and then... somehow substitute embeds back into the right column of the all_embeds so that it gets just the gradient for this device
alstroemeria313#1694: yes
alstroemeria313#1694: that is what the official repo does
alstroemeria313#1694: in pytorch.
alstroemeria313#1694: This is going to be faster than doing the all_scatter right?
alstroemeria313#1694: If it's not faster we can do the scatter instead.
nshepperd#2316: hmm maybe
nshepperd#2316: ah yeah i guess the "full gradient" is the sum over the parts for each batch so this is right
alstroemeria313#1694: *nods*
alstroemeria313#1694: i am guessing the stop gradient + manual substitution will be faster but we maybe need to actually test it.
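A hedged JAX sketch of that stop-gradient-plus-substitution trick (not the CLOOB repo's PyTorch code); it assumes the loss after the gather is computed identically on every device and that the per-device grads are psum'd before the optimizer step:
```python
import jax

def gather_with_local_grad(local_embeds, axis_name="devices"):
    # all_gather detached copies of every device's embeddings...
    gathered = jax.lax.all_gather(jax.lax.stop_gradient(local_embeds), axis_name)
    idx = jax.lax.axis_index(axis_name)
    # ...then splice this device's grad-carrying shard back into its own row,
    # so only the local embeddings propagate gradients into the encoder.
    return gathered.at[idx].set(local_embeds)

# Use inside e.g. jax.pmap(loss_fn, axis_name="devices").
```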
MicPie#9427: this paper https://arxiv.org/abs/2112.09331 has a nice discussion on this topic under section "Gradient reversed gather"
MicPie#9427: afaik the official CLOOB implementation based on open_clip does not have gradient reversed all gather
MicPie#9427: the linked paper shows that this can be beneficial
alstroemeria313#1694: ohh
alstroemeria313#1694: Wait wait
alstroemeria313#1694: They are supposed to compute the same thing
alstroemeria313#1694: actually i just looked at their code and the "all_gather_with_grad" looks like the same thing CLOOB does
alstroemeria313#1694: except like, written as a custom autograd function
alstroemeria313#1694: ...let me double check this
MicPie#9427: yep, you need the custom backward
nshepperd#2316: the stop_gradient thing is an optimization that is only correct if all the cores compute the same thing from the all_gather forward
alstroemeria313#1694: yep.
alstroemeria313#1694: That is how CLOOB does it afaict.
MicPie#9427: what is the same thing here?
this is what I have been thinking through a lot, but afaik the same loss values != having the entire graph for grad calculation
alstroemeria313#1694: What alternative are they comparing it *to*...
alstroemeria313#1694: If they just all_gather their loss will have no grad
MicPie#9427: this is also an interesting comment from the CLIP authors: https://github.com/openai/CLIP/issues/132#issuecomment-908004353
alstroemeria313#1694: https://github.com/zerovl/ZeroVL/blob/64ff0d442379ec4bdffac405661da3f2ce157ec4/zerovl/models/criteria/losses/mml_loss.py#L161
alstroemeria313#1694: ohhh
alstroemeria313#1694: I see!
alstroemeria313#1694: They are comparing *one device's part* of feat1 to an all_gathered feat2, which is either detached or not
alstroemeria313#1694: So they are getting different losses on different devices.
alstroemeria313#1694: What CLOOB training does is *two* all gathers then compute the same loss on all devices.
nshepperd#2316: oh huh
alstroemeria313#1694: Which is what we were going to do.
alstroemeria313#1694: Bc it's just simpler to think about/parallelize.
alstroemeria313#1694: Especially in JAX.
alstroemeria313#1694: ofc they... need to all gather the other one too
MicPie#9427: do you mean the two here? because they are for each modaility
https://github.com/ml-jku/cloob/blob/master/src/training/train.py#L36:L37
alstroemeria313#1694: it just happens in the other half of the loss computation.
alstroemeria313#1694: yes.
alstroemeria313#1694: It all gathers both modalities and then computes the same loss on all devices.
alstroemeria313#1694: Whereas ZeroVL all-gathers one modality, computes different i2t losses, all-gathers the other modality, computes different t2i losses.
alstroemeria313#1694: I think.
alstroemeria313#1694: And if you do it that way you have to scatter on the backward.
alstroemeria313#1694: If you do it the CLOOB way you do not have to scatter on the backward.
MicPie#9427: I'm not sure; if you do not cat the original tensors back in, like in these two lines, the loss backward breaks down:
https://github.com/ml-jku/cloob/blob/master/src/training/train.py#L40
https://github.com/ml-jku/cloob/blob/master/src/training/train.py#L45
alstroemeria313#1694: yes you do have to do that part.
alstroemeria313#1694: The embeddings that came from *that device* are not detached, all other embeddings in the batch are.
MicPie#9427: yes
MicPie#9427: but this setup does not feed back the grads to all other embeddings
alstroemeria313#1694: And so you avoided doing an all_scatter op w/ the associated synchronization.
alstroemeria313#1694: And *still ended up with the correct gradient* after doing the gradient all reduce (a sum) before the optimizer step.
alstroemeria313#1694: yep, doing it their way means you do not need to.
MicPie#9427: but afaik this is what the CLIP github comment and the ZeroVL paper argue is not the case
alstroemeria313#1694: The thing ZeroVL is comparing against is not the thing CLOOB is doing
MicPie#9427: because you do not backprop something to the other embeddings that are detached
alstroemeria313#1694: I had to actually *look at the ZeroVL code* to find out what it was.
alstroemeria313#1694: And the thing they are comparing against does give you wrong/worse gradients.
alstroemeria313#1694: and i would expect a model trained that way to be worse in practice.
MicPie#9427: the CLOOB thing has everything detached except the local embeddings, so for the numerator that does not matter in the InfoNCE loss
MicPie#9427: we also have code in the forked open_clip repo with the reduced loss version: https://github.com/Zasder3/open_clip_juwels/blob/Local-Loss/src/training/train.py#L56
alstroemeria313#1694: ```
# this is needed to send gradients back everywhere.
logits_per_image = logit_scale * all_image_features @ all_text_features.t()
logits_per_text = logits_per_image.t()
```
MicPie#9427: this is the original comment
MicPie#9427: not removed
alstroemeria313#1694: That's actually what you do need to do
alstroemeria313#1694: CLOOB does it inside a different function
alstroemeria313#1694: Well, they do a CLOOB equivalent of it which is more complicated.
alstroemeria313#1694: Like you actually *can't* just compare a single shard of image embeddings to all the text embeddings in CLOOB.
alstroemeria313#1694: You need to do both all gathers and both cats then do their Hopfield stuff to both full batches of embeddings.
alstroemeria313#1694: OK so. How this works is, when you do both all gathers and both cats first, you end up with the same losses on all devices and the same grad_outputs for the full batches of both modalities' embeddings on all devices.
alstroemeria313#1694: *Then the cat's backward picks out your shard's parts of the full grad_outputs*
alstroemeria313#1694: And due to the gradient operator being linear, you can backprop just that shard
alstroemeria313#1694: And sum over all devices' gradients at the end
alstroemeria313#1694: And you didn't explicitly do a scatter op across nodes.
alstroemeria313#1694: So the reason the CLIP people said the thing they did about needing it is.
nshepperd#2316: right
alstroemeria313#1694: They didn't compute the loss separately and identically on *all devices*, they did an all gather to a *single GPU somewhere* and computed the loss once there.
alstroemeria313#1694: So they need the corresponding scatter in the backward.
alstroemeria313#1694: And this is more efficient than computing it on all devices
alstroemeria313#1694: So they say "you need all gather with a backward for most efficient training".
nshepperd#2316: more efficient but not actually faster i assume bc the other devices are just idle while it is doing this
alstroemeria313#1694: But we are going to throw TPUs and JAX at it, so we're just going to pmap the whole computation
MicPie#9427: yes, I agree you get the same loss value on each device, but the graph is not the same on each device
alstroemeria313#1694: it means you don't need the *memory* for it allocated on all devices.
MicPie#9427: no, they do it sharded on all GPUs
alstroemeria313#1694: which is a lot of memory if your batch size is 32k and you have V100s.
alstroemeria313#1694: did they actually do a part to part thing, ugh
alstroemeria313#1694: tbh i forget
alstroemeria313#1694: ok let me double check the paper again
alstroemeria313#1694: i think i may have substituted my mental model of how you have to do it for the CLOOB loss in
alstroemeria313#1694: yeah. https://cdn.discordapp.com/attachments/729741769738158194/944631874779758612/Screen_Shot_2022-02-19_at_8.29.46_AM.png
alstroemeria313#1694: OK so you can't do that with CLOOB, you actually have to either gather to one device and then scatter
alstroemeria313#1694: Or all gather on all devices then compute the same loss on all devices, but you get to avoid the all scatter.
alstroemeria313#1694: with CLOOB the embeddings you compute InfoLOOB on depend on all of the others *of the same modality* in the batch.
alstroemeria313#1694: Bc of the Hopfield thing.
MicPie#9427: yes, true, I just checked their formula too
MicPie#9427: it is the same modality retrieval
alstroemeria313#1694: anyway i am still convinced that the way the CLOOB repo does it results in correct full gradients.
alstroemeria313#1694: And that it is not the thing ZeroVL was comparing against, which is in fact incorrect.
nshepperd#2316: does the hopfield thing actually help
nshepperd#2316: it seems super weird
MicPie#9427: yeah, it seems :berk:
nshepperd#2316: like who the hell would come up with this
MicPie#9427: btw they also recently updated the CLOOB paper, but I need to check what they added
cfoster0#4356: Someone with an axe to grind
nshepperd#2316: schmidhuber???
MicPie#9427: the open_clip authors had a test for that but it was scaling the gradients which is not done during training, so I was not convinced anymore because I saw different grad Frobenius norms with different numbers of GPUs
MicPie#9427: yeah, I want to know what is the correct way, we should really set up a bulletproof test case for that, this has been bugging me for weeks already
cfoster0#4356: Hochreiter
alstroemeria313#1694: ikr, i have never seen self or cross attention done over the batch dimension before!
alstroemeria313#1694: especially without learned projections
alstroemeria313#1694: What did they change
alstroemeria313#1694: We want to try writing CLOOB training code on TPUs so
alstroemeria313#1694: was the batch size also different.
MicPie#9427: I'm not sure what in detail, they just posted that they have an update on the LAION/DALL-E discord
MicPie#9427: global bs was always the same
alstroemeria313#1694: ahh
alstroemeria313#1694: then you should get the same gradients
MicPie#9427: which is what got me thinking
alstroemeria313#1694: if you aren't something is wrong
MicPie#9427: yes, exactly
alstroemeria313#1694: also you only get the same gradients if you sum, not mean
MicPie#9427: I even tried to hack the ddp communication hook to do sum instead of mean :berk:
alstroemeria313#1694: oh
alstroemeria313#1694: it doesn't actually matter so much bc of Adam
MicPie#9427: but yeah, maybe I did something stupid too
alstroemeria313#1694: but for testing correctness it does
alstroemeria313#1694: you can just multiply by world size though for the test.
alstroemeria313#1694: this should work.
alstroemeria313#1694: were they multiplying by world size?
MicPie#9427: in the test setup yes
alstroemeria313#1694: ahh
alstroemeria313#1694: i see
alstroemeria313#1694: that should match so long as you don't use batchnorm
alstroemeria313#1694: like i think a torch.allclose() on the grads should pass?
alstroemeria313#1694: batchnorm will introduce a microbatch size dependency and you won't end up with exactly the same grad if you split it up into different numbers of microbatches.
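A hedged sanity check of the sum-vs-mean point above, in PyTorch with a toy model and no batchnorm: for a summed loss, the full-batch gradient equals the *sum* of per-shard gradients, so an all-reduce sum (or a mean multiplied by world size) should reproduce it exactly.
```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 4)
x = torch.randn(16, 8)

# Full-batch gradient.
model.zero_grad()
model(x).pow(2).sum().backward()
full_grad = model.weight.grad.clone()

# Sum of per-shard gradients (pretend each chunk lives on its own device).
model.zero_grad()
for shard in x.chunk(2):
    model(shard).pow(2).sum().backward()  # .backward() accumulates, i.e. sums
summed_grad = model.weight.grad.clone()

print(torch.allclose(full_grad, summed_grad, atol=1e-5))  # expect True
```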
alstroemeria313#1694: ahah https://cdn.discordapp.com/attachments/729741769738158194/944641651631087636/Screen_Shot_2022-02-19_at_9.08.37_AM.png
alstroemeria313#1694: That's new
nshepperd#2316: ooooh
nshepperd#2316: wow
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/944646345275347004/Screen_Shot_2022-02-19_at_9.27.17_AM.png
nshepperd#2316: huhhh
nshepperd#2316: what does that mean
alstroemeria313#1694: the more useful eigenvalues the covariance matrix has the more information is in the embeddings
nshepperd#2316: so the hopfield infoloob one has... more eigenvalues that are between 0.4 and 1? and this is good?
alstroemeria313#1694: between 0.4e-2 and 1e-2
nshepperd#2316: oh
alstroemeria313#1694: bc the embeddings are normalized
alstroemeria313#1694: well there seem to be like... ones that are nearly zero and ones that aren't
alstroemeria313#1694: the nearly zero ones are not useful
alstroemeria313#1694: "useful" means something like, represents information that matters when you use the embeddings in inference
alstroemeria313#1694: bc you could PCA the embeddings, truncate it to the number of useful singular values, and project onto that subspace to *reduce their dimensionality* without significant loss of information.
alstroemeria313#1694: The more useful dimensions there are the better, then
alstroemeria313#1694: In other words CLIP's (RN50 probably) image encoder doesn't really make full use of its embedding space.
alstroemeria313#1694: Neither does CLOOB but it does better.
alstroemeria313#1694: if your covariance matrix is not effectively full rank then not all of your dimensions vary independently
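A hedged sketch of that diagnostic on a stand-in embedding matrix; `embeds` here is random data in place of real CLIP/CLOOB embeddings:
```python
import numpy as np

# Eigenvalues of the covariance of L2-normalized embeddings; near-zero
# eigenvalues are directions that barely vary.
embeds = np.random.randn(10000, 512).astype(np.float32)
embeds /= np.linalg.norm(embeds, axis=1, keepdims=True)

eigvals = np.linalg.eigvalsh(np.cov(embeds, rowvar=False))[::-1]  # descending
explained = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(explained, 0.99)) + 1
print(k)  # dims needed for 99% of the variance; the closer to 512, the fuller the use
```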
alstroemeria313#1694: now, since these eigenvalues are nonzero, just very small.
alstroemeria313#1694: maybe what the very small ones encode is *rare* features
alstroemeria313#1694: Like outlier images.
alstroemeria313#1694: Which is going to hurt model performance in the typical case.
alstroemeria313#1694: ...
alstroemeria313#1694: Hey can we like, trivially improve the utilization just by making it output larger embeddings, training using those larger embeddings, then PCAing for dimensionality reduction after the fact.
alstroemeria313#1694: Or is the limitation that you actually need a better/bigger encoder.
alstroemeria313#1694: for situations where you need to store a lot of embeddings yes
alstroemeria313#1694: which is one of the CLIP/CLOOB use cases, retrieval with nearest neighbors
MicPie#9427: this should work, FAISS also offers the option for dimensionality reduction by PCA before the similarity search pipeline
FAISS also offers a standalone encoding setup, but I never used it separately from the search setup.
MicPie#9427: FAISS vector codec information: https://github.com/facebookresearch/faiss/wiki/Vector-codecs
MicPie#9427: but I have no clue how to stitch the FAISS pipeline together from separate blocks, but maybe it's trivial 🤔
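A hedged sketch of stitching the FAISS PCA pre-transform to a flat index; the 512 → 128 dimensions and data are illustrative:
```python
import numpy as np
import faiss

xb = np.random.randn(100_000, 512).astype(np.float32)  # stand-in embedding bank

pca = faiss.PCAMatrix(512, 128)   # train a PCA reduction on the database vectors
pca.train(xb)
xb_reduced = pca.apply_py(xb)

index = faiss.IndexFlatIP(128)    # inner-product search on the reduced vectors
index.add(xb_reduced)
# queries go through the same transform: index.search(pca.apply_py(xq), k)
```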
MicPie#9427: tpapp157 also mentioned UMAP as possible postprocessing step a couple of times
which could be even more interesting as it is non-linear
MicPie#9427: I guess even though InfoNCE is a very good loss setup to train meaningful embeddings, it actually lacks the ability to embed them on a low-dim manifold
MicPie#9427: also, when the dims are too low, InfoNCE embeddings will not be usable at some point
cfoster0#4356: Idk, NNs are lazy so unless the loss function encourages maximizing utilization I would expect it'll just spread the same information redundantly across dimensions or drive down the extra ones to near zero variance
alstroemeria313#1694: ah yeah
MicPie#9427: Barlow twins and VICReg tried to spread out the information over the available dims with their loss setup
CRG#8707: There was a Bert paper where they projected to a higher dim before the logits and showed improvement
CRG#8707: https://discord.com/channels/729741769192767510/785968841301426216/934801237763821588
tpapp157#3643: Reminder that hopfield (weighted averaging) is essentially a soft k-means and therefore imposes a gaussian regularization on the latent space. So it shouldn't be a surprise that the embeddings are more uniformly distributed. That doesn't necessarily mean anything about the embedding quality though.
alstroemeria313#1694: *nods*
alstroemeria313#1694: But the covariance matrix eigenvalues *do* matter though.
tpapp157#3643: That's not necessarily true. In some of my experiments where I compared the two (cloob vs nce) I found that cloob was spreading the data out more across the space but wasn't actually capturing any additional information.
tpapp157#3643: Given the same encoder architecture and training.
tpapp157#3643: I now wish I had saved some of those outputs because they'd be interesting to show. Basically the PCA curve for nce had a big curve to it indicating a subset of dimensions contained most of the information while cloob was nearly a flat line indicating equal spread of information across dimensions. But in terms of distribution of learned features across the topology and in terms of performance on downstream tasks there wasn't much difference.
tpapp157#3643: Of course standard caveats, a couple of experiments amount to anecdotal evidence so read into it however much you want.
tpapp157#3643: At this point I'm on the fence between the two techniques, and it kind of comes down to whether or not you believe enforcing a gaussian prior on the embedding space is a good thing. I haven't seen much empirical evidence to suggest that it is or isn't but I know that complex high-dimensional data is almost never naturally gaussian distributed so my gut reaction is skeptical.
alstroemeria313#1694: ...how do you do lr schedules in optax
alstroemeria313#1694: like custom ones.
ILmao#5683: Looks like a callable taking a step number is enough? https://github.com/deepmind/optax/blob/0cc3177a8ccec50ecd61d5a86ef6819396fae409/optax/_src/schedule.py#L49-L89
Qq#7586: hi stupid question, how does CLIP guided diffusion work when CLIP has no understanding of the diffusion time `t`? https://cdn.discordapp.com/attachments/729741769738158194/944702796253519882/unknown.png
cfoster0#4356: It only kinda works, because of this
Qq#7586: ooh interesting, thanks :)
alstroemeria313#1694: ahh, and that returns a multiplier for the base lr?
alstroemeria313#1694: or does it return the lr itself.
alstroemeria313#1694: ...what is chex.
ILmao#5683: The LR itself, I believe
ILmao#5683: https://github.com/deepmind/chex
alstroemeria313#1694: ah
alstroemeria313#1694: so
alstroemeria313#1694: ```python
def schedule(count):
return init_value * (1 - warmup ** count) * max(final_lr, (1 + gamma * count) ** -power)
```
alstroemeria313#1694: ...How do you actually um.
alstroemeria313#1694: Apply it.
ILmao#5683: Some (most? all?) optimizers will accept a schedule function in place of a scalar LR
ILmao#5683: https://optax.readthedocs.io/en/latest/optax-101.html?highlight=schedule#weight-decay-schedules-and-clipping
alstroemeria313#1694: ohh
ILmao#5683: Which goes to <https://github.com/deepmind/optax/blob/f20ec854a8250f96420a4663e6b6f58f79edbe09/optax/_src/alias.py#L226>
ILmao#5683: And then https://github.com/deepmind/optax/blob/f20ec854a8250f96420a4663e6b6f58f79edbe09/optax/_src/alias.py#L34. Which has the actual logic for checking whether it's a function
alstroemeria313#1694: i need exponential warmup *and* lr decay for this
alstroemeria313#1694: yay it didn't work bc of some jit thing
alstroemeria313#1694: idek
alstroemeria313#1694: how do i do this actually
alstroemeria313#1694: my optimizer updates are pmapped.
alstroemeria313#1694: it can't trace the schedule bc it depends on the step count which it doesn't have during tracing
alstroemeria313#1694: bc it's in the opt_state.
alstroemeria313#1694: oh
alstroemeria313#1694: it's because i'm doing a max() inside the schedule
alstroemeria313#1694: like a python max()
alstroemeria313#1694: yeah jnp.maximum() does it
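A hedged, jit-safe version of the schedule sketched above, passed straight to an optax optimizer (jnp.maximum instead of Python max; all constants are placeholders):
```python
import jax.numpy as jnp
import optax

init_value, warmup, final_lr, gamma, power = 3e-4, 0.99, 0.1, 1e-4, 1.0

def schedule(count):
    warm = 1 - warmup ** count                               # exponential warmup
    decay = jnp.maximum(final_lr, (1 + gamma * count) ** -power)  # floored power decay
    return init_value * warm * decay

# Optax optimizers accept a schedule callable in place of a scalar LR.
opt = optax.adamw(learning_rate=schedule, weight_decay=0.01)
```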
alstroemeria313#1694: this is cool
alstroemeria313#1694: JAX CLOOB training code~
alstroemeria313#1694: I am running it on one GPU rn
alstroemeria313#1694: ```
Model config:
{'d_embed': 256,
'image_encoder': {'d_model': 256,
'image_size': 64,
'n_heads': 2,
'n_layers': 6,
'patch_size': 16,
'type': 'ViT'},
'text_encoder': {'d_model': 256,
'n_heads': 2,
'n_layers': 6,
'text_size': 77,
'type': 'transformer',
'vocab_size': 49408}}
Image encoder parameters: 5005312
Text encoder parameters: 17472256
```
alstroemeria313#1694: batch size 64 eheh.
alstroemeria313#1694: So tiny.
alstroemeria313#1694: but that is to make sure the training code works
alstroemeria313#1694: we probably need to re-ask TRC for TPUs at this point?
alstroemeria313#1694: or someone.
alstroemeria313#1694: like do some debugging on a v3-8 then see about scaling?
alstroemeria313#1694: i turned bs up to 512
Dashiell#8739: like model parallel scaling, mesh-transformer jax style?
Qq#7586: hey, I've trained a little pixel art diffusion model, and I'd really appreciate some advice on what to do next! I was considering making a classifier for guidance, but first I'd need to label my dataset (/make my friends do it 🤪), so I wanted to ask - could a classifier with only ~2000 32x32 labelled images be any good, or would it horribly overfit? Would it be worth labelling the data? Apologies for very open question, I'm new to this and really lack the intuition :P https://cdn.discordapp.com/attachments/729741769738158194/944741818845782066/a_small.png
alstroemeria313#1694: do you know how to exclude biases/embeddings from weight decay in jax
alstroemeria313#1694: like using optax.
alstroemeria313#1694: no just data parallel
alstroemeria313#1694: it's a CLOOB so it's not extremely large
alstroemeria313#1694: CLIP/CLOOB punch wayyyy above their weight in terms of param count.
Dashiell#8739: \:nods\:
alstroemeria313#1694: also i am doing all gathers in the loss function
Dashiell#8739: ?
alstroemeria313#1694: so the *contrastive* batch size gets bigger
Dashiell#8739: ahh
Dashiell#8739: cool
alstroemeria313#1694: as i scale w/ data parallel.
alstroemeria313#1694: i am actually training a ViT-B/16 on MS COCO rn
alstroemeria313#1694: On a v3-8
Dashiell#8739: I was thinking again about this paper recently (https://github.com/SongweiGe/Contrastive-Learning-with-Non-Semantic-Negatives), I think it'd be interesting to finetune CLIP with it
Dashiell#8739: for both the image and text encoders, really
Dashiell#8739: though I think you'd want the hard negative images in those examples close to texts like "a bunch of chess board patchy nonsense" or "a closeup of a horrible shag carpet" 😂
alstroemeria313#1694: :)
alstroemeria313#1694: so like... how do I do large datasets on TPUs/TPU pods
Dashiell#8739: Store the data as tfrecords / some fixed size format, stream it in from blob storage?
uwu1#4864: i think 2000 is definitely worth trying! you could also try augmenting it with character sprites from games or tilesets that come with labels. as a random rule of thumb I think of 100 samples per class being a decent starting amount unless there's a lot of intra class variance. also look at contrastive approaches that can work well with smaller but more class sets (e.g 10 examples for 200 classes, if u can get one class per character/tile for instance). There is also classifier-free guidance that would be worth trying, in which you use the generative model as an implicit classifier
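A hedged sketch of the classifier-free guidance combination mentioned above; `model`, `x_t`, `t`, and `label` are placeholders for your diffusion setup, and the model is assumed to have been trained with random label dropout:
```python
# Mix the unconditional and conditional noise predictions at sampling time.
def cfg_eps(model, x_t, t, label, guidance_scale=3.0):
    eps_uncond = model(x_t, t, label=None)   # label dropped out
    eps_cond = model(x_t, t, label=label)    # conditioned on the class
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```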
Dashiell#8739: I guess I don't actually know for the TPU _pods_. Are those like, multi-node setups?
ILmao#5683: Have a look at https://optax.readthedocs.io/en/latest/api.html?highlight=mask#optax.masked
alstroemeria313#1694: oooh
alstroemeria313#1694: "In many networks, these are the only parameters with only one dimension" I will need to also mask out the embeddings which are two dimensional.
alstroemeria313#1694: So will have to do it manually before enabling super heavy weight decay.
ILmao#5683: You can generate the bool mask by any method. So one approach could be passing a custom `is_leaf` in https://jax.readthedocs.io/en/latest/_autosummary/jax.tree_util.tree_map.html#jax.tree_util.tree_map to special case embedding layers
ILmao#5683: As well as norm layers and any layer that has a parameter called "bias"
ILmao#5683: More work but also more control
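A hedged optax sketch along those lines; the ndim rule and the embedding caveat follow the discussion above:
```python
import jax
import optax

# Decay only parameters with more than one dimension (skips biases and norm
# scales). As noted above, 2-D embedding tables still slip through this rule
# and would have to be excluded by name manually.
def decay_mask(params):
    return jax.tree_util.tree_map(lambda p: p.ndim > 1, params)

opt = optax.adamw(learning_rate=1e-3, weight_decay=0.1, mask=decay_mask)
# Equivalently, optax.masked(optax.add_decayed_weights(0.1), decay_mask)
# can be chained with an unmasked base optimizer.
```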
alstroemeria313#1694: ok so for CLOOB. you interpret the loss as follows.
`1 / (1 + exp(loss * inv_tau / 2))` is the mean probability assigned to positive pairs.
inv_tau is fixed to 30.
So if I plug in the loss at the beginning to this, I get 1 / ~the batch size, which is correct.
If I plug in 0 I get 0.5.
So the -0.715585 I am getting for my smol model on the A100.
Gives 99.9978% probability assigned to the positives on average.
In other words it has completely overfit.
alstroemeria313#1694: The TPUv3 model has a loss of ~-0.2014 rn
So 95.35% assigned to positive pairs rn.
It may manage to completely overfit before I go to bed.
alstroemeria313#1694: The A100 model has bs 512, the TPUv3 model bs 384.
alstroemeria313#1694: Both on MS COCO.
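As a quick check of that interpretation (inv_tau fixed at 30), plugging in the two reported losses:
```python
import math

inv_tau = 30
for loss in (-0.715585, -0.2014):
    p = 1 / (1 + math.exp(loss * inv_tau / 2))
    print(loss, f"{p:.4%}")
# -0.715585 -> ~99.9978% (the overfit A100 run); -0.2014 -> ~95.35% (the TPUv3 run)
```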
Qq#7586: hey thanks this is v helpful :) ill look into it - contrastive sounds like a good plan
alstroemeria313#1694: i changed the cloob loss so it is more interpretable
alstroemeria313#1694: like just the negative mean log odds of the positive pairs
alstroemeria313#1694: so `1 / (1 + exp(loss))` is the probability assigned to positive pairs on average
alstroemeria313#1694: or just `sigmoid(-loss)`
atilla#0325: Are there language models that can edit context, as opposed to doing next token prediction?
StellaAthena#3530: Google "BERT language model"
atilla#0325: Thanks. I ended up watching a video about it and going through this whole Colab <https://colab.research.google.com/github/huggingface/notebooks/blob/master/transformers_doc/pytorch/task_summary.ipynb> when I could've just stopped researching after reading "bidirectional" on the wiki :berk:
atilla#0325: So it lets you remove tokens from context and fills it in. I was more thinking about something that can look at the context and "think" about it on a higher level and be like "hmm how to make this more sci-fi like" or "how to make Mary appear more angry" and edit it the way a human does. But it sounds far fetched lol
alstroemeria313#1694: hmm so i also want to get the current lr from the optax scheduler
alstroemeria313#1694: for logging to wandb
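A hedged sketch of one way to read the scheduled LR back out for logging, using optax.inject_hyperparams; the schedule and params here are stand-ins:
```python
import jax.numpy as jnp
import optax

schedule = optax.cosine_decay_schedule(init_value=3e-4, decay_steps=10_000)
# inject_hyperparams exposes the schedule's current value in the optimizer state.
opt = optax.inject_hyperparams(optax.adamw)(learning_rate=schedule, weight_decay=0.01)

params = {"w": jnp.zeros((3, 3))}
opt_state = opt.init(params)
grads = {"w": jnp.ones((3, 3))}
updates, opt_state = opt.update(grads, opt_state, params)

print(opt_state.hyperparams["learning_rate"])  # value to log to wandb this step
```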
Louis#0144: Look up SUNDAE
Sphinx#2092: I would probably just look at text style transfer.
StellaAthena#3530: The overwhelming majority of LLMs are trained in one of two languages: English and Chinese. Consequently, there's a lot of auxiliary resources, such as evaluation datasets, probing techniques, and tools for analyzing bias, that are English-specific. If anyone is familiar with what Chinese-specific auxiliary resources are out there, I would be very interested in hearing about it.
atilla#0325: :honk:
wabi-sabi#5811: @𓅬 gabriel_syme 𓅬 https://arxiv.org/abs/2202.02831 discusses injecting structured noise into perturbed gradient descent a la the earlier idea I was discussing. Not equivalent to perturbing the inputs. I think to make it equivalent to perturbing the inputs you'd need some kind of very complicated coupling between different perturbations to occur and probably this is the better line of research to pursue. Conceptually, I really do wonder what perturbing the inputs to achieve the same weight updates would look like.
n.kh.l#5814: I have a pretty big dataset of songs (~2 million songs) and I'm trying to use NEO to generate/complete lyrics. Would anyone be interested in working with me on this?
cfoster0#4356: Wait, like audio, scores, midi?
EricHallahan#1051: Sounds like lyrics.
n.kh.l#5814: Lyrics
n.kh.l#5814: I also have "tags" which are generally genre (Rock, Rap, Pop, etc) but can also be used to filter languages
jack#8178: @bmk do you have any working example code using pyfra? the documentation is a bit... sparse https://pyfra.readthedocs.io/en/latest/
bmk#1476: oh yeah the docs build broke at some point
bmk#1476: there are docs, they just aren't getting built properly
bmk#1476: it would be awesome if someone could fix the docs build, until then, look at the docstrings in https://github.com/EleutherAI/pyfra/blob/master/pyfra/remote.py https://github.com/EleutherAI/pyfra/blob/master/pyfra/shell.py
Emad#9608: https://twitter.com/StasBekman/status/1495514280686456842?s=20&t=MEIgjem3Nl_qxUdrBdPmvQ
Emad#9608: wait should this go in #scaling-laws :thinkies:
bmk#1476: nah this channel is fine because it's not really scaling *laws* per se, this is more a hardware efficiency thing
ersatz#0001: that's hilarious
ersatz#0001: truly alchemy
cognomen#6297: deja vu
cognomen#6297: are we back to the .5 GB problem again
StellaAthena#3530: It's being discussed in #research and has been for a bit
random person#5234: You know what would be interesting
random person#5234: Doing a full nsight sys trace on 20b with a A100 80gb
random person#5234: If I had an A100 I would do it myself
random person#5234: You just need to wrap the inference in nvtx emit
chilli#5665: Everybody should profile their models :^)
ILmao#5683: The nvidia tools are surprisingly accessible IME, but there's definitely a bit of a learning curve
ILmao#5683: e.g. knowing how the ML framework is using CUDA libraries under the hood
chilli#5665: yeah I think there's a lot to do there that we can improve on
chilli#5665: to make them more user accessible
chilli#5665: I think we should be able to do something like
```
You're spending 30% of your time in overhead. Top offenders:
foo.py:bad_arange() (16.5%)
def bad_arange(n):
    x = torch.zeros((n,))   (0.1%)
    for i in range(n):
        x[i] = i            (16.3%)
    return x
```
ILmao#5683: TIL https://github.com/pytorch/pytorch/blob/d4f831349756f3739ef69f82a15c86fc677f3eeb/torch/profiler/profiler.py#L131-L135
Tau#4010: I'm trying to run Julia on TPU vms, but it crashes on adding packages (as well as most things). See https://github.com/JuliaLang/julia/issues/44242 for details (I've tried a fair number of things). Has anyone successfully run Julia on TPU vm's? Any ideas what compile flags or what is needed?
random person#5234: As far as I know, you have to wrap all your layers with manual annotation
random person#5234: Which is honestly such a pita
random person#5234: If you want layer specific kernel calls
chilli#5665: What 🤔
chilli#5665: Just use the profiler
chilli#5665: Or err, what are you trying to get
ILmao#5683: I don't think anyone has tried Julia and TPU stuff for close to a year. You may want to try getting it working on a normal GCP VM first
chilli#5665: You can get it into the chrome profiler too
ILmao#5683: Based on <https://discourse.julialang.org/t/julia-on-google-colab-free-gpu-accelerated-shareable-notebooks/15319/51> it is possible to get working on whatever backs Colab, so unless the TPU VMs are drastically different...
ilovescience#3282: julia on gpus would definitely be different than julia on tpus
ilovescience#3282: you don't use CUDA, you instead need to compile to XLA...
ILmao#5683: Is there no good Python flamegraph library? I happen to have the util installed, but I'd imagine most don't
ILmao#5683: Sure, but it's still a GCP VM. And the post in question hasn't even gotten to the point of running any accelerator code
ilovescience#3282: apparently there's this:
https://github.com/JuliaTPU/XLA.jl
chilli#5665: Wdym?
ilovescience#3282: it's really old though lol
chilli#5665: Like, you don't want to use flamegraphs.pl?
ILmao#5683: Like I might not have it installed or be in a position to easily install it
ilovescience#3282: looks to me like there's no working TPU support
ILmao#5683: Small pretty niche ~~issue~~ feature, but having a single API call to export an SVG would be very cool
ILmao#5683: That's correct, but the issue is trying to get the binary running *at all* on the VM. Which it really should unless there's some special sauce going on with the system image
ILmao#5683: `s/Julia/other language interpreter or compiler/` if that helps
chilli#5665: Hmm, but you still need to actually get the profile somehow
chilli#5665: I remember there's like... Flameprof
chilli#5665: If the complaint is that PyTorch should package something that makes it one line, then I agree with that :)
random person#5234: if you want the calls from say, conv_1, you need to annotate in the script conv_1
chilli#5665: In general I think the current ML profiling tools are a bit... inaccessible
chilli#5665: For the average user
random person#5234: it doesn't automatically give you the layer-specific profile
random person#5234: it traces in terms of cuDNN calls
random person#5234: you need to set that up yourself
chilli#5665: I think this is fairly easy to automate, no?
chilli#5665: Although I think we're working on something that makes it even more trivial
random person#5234: wait are you on the pytorch team?
chilli#5665: Just add a module hook that appends a cuda event or something of the like
chilli#5665: Yeah
random person#5234: ah ok
random person#5234: the nvidia docs tell me to manually wrap each layer...
chilli#5665: Link?
random person#5234: or at least, do it for one module block
random person#5234: https://docs.nvidia.com/deeplearning/frameworks/pyprof-user-guide/advanced.html#layer-annotation
random person#5234: is there a smarter way to do this?
chilli#5665: Yeah, you could do it with module hooks pretty easily
ILmao#5683: Write an FX pass to wrap every aten op in an NVTX range? Not sure if that's even possible 😜
chilli#5665: Line 239 here: https://pastebin.com/AkvAyJBw
chilli#5665: I think a tensor subclass would be nicer/easier if you really wanted op-level introspection lol
chilli#5665: But if you just want modules then it's even easier
chilli#5665: See https://dev-discuss.pytorch.org/t/the-ideal-pytorch-flop-counter-with-torch-dispatch/505 for an example of this approach for FLOP counting
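A hedged sketch of the module-hook approach described above, instead of hand-annotating every layer: push an NVTX range when a module's forward starts and pop it when it finishes, so nsight groups kernel calls per layer automatically.
```python
import torch

def add_nvtx_hooks(model: torch.nn.Module):
    for name, module in model.named_modules():
        module.register_forward_pre_hook(
            lambda mod, inp, _name=name: torch.cuda.nvtx.range_push(_name))
        module.register_forward_hook(
            lambda mod, inp, out, _name=name: torch.cuda.nvtx.range_pop())

# usage: add_nvtx_hooks(model); then run the forward pass under `nsys profile`
```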
random person#5234: nice! I just usually use apex's profiler to do everything
random person#5234: its usually not too bad
chilli#5665: I usually use the pytorch profiler
chilli#5665: And export it to the chrome profiler
chilli#5665: To see what I want to see
chilli#5665: Which is usually
chilli#5665: 1. Is my gpu being occupied
chilli#5665: Lol
chilli#5665: And 2. Where am I spending the most time
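A hedged sketch of that torch.profiler workflow (assumes a CUDA GPU; the model is a toy stand-in):
```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(1024, 1024).cuda()
x = torch.randn(64, 1024, device="cuda")

# Profile a few steps on CPU + GPU.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    for _ in range(10):
        model(x).sum().backward()

prof.export_chrome_trace("trace.json")  # open in chrome://tracing or Perfetto
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```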
Tau#4010: Yeah, I realize XLA.jl is inactive given Keno is busy with Diffractor and apparently Julia XLA is not actually "effortless" despite the paper name 🙃 . At this stage Julia isn't running at all. As I mentioned in the issue I can't help but wonder if they are using a modified glibc given running libc.so.6 segfaults (!).
ILmao#5683: That's why I recommend trying on a normal VM first. If it doesn't work there, that's a bigger issue
ILmao#5683: If you're on Slack, there's a thread about XLA.jl. TL;DR that there's not much enthusiasm to continue on it until more compiler work lands. Also I have a feeling there's just not enough interest to justify the (substantial) time investment for development
chilli#5665: I think people overrate XLA :P
ILmao#5683: I don't disagree :D
ILmao#5683: But what other choice do you have for TPUs?
chilli#5665: Well, I mean on GPUs and CPUs
chilli#5665: Lol
chilli#5665: Although from what I hear they also do far more fancy things on TPUs
ILmao#5683: Sorry, missed the effortless part. IIRC it did work some time ago, but bitrot and the switch to the libtpu interface mean that's a thing of the past. I do think there should have been some very clear messaging that nothing was being maintained or working afterwards.
ILmao#5683: (this applies to more than just TPU.jl)
chilli#5665: Also, I feel like XLA is a poor fit for Julia's strengths
Tau#4010: I was being a bit snarky because I thought it was funny, I definitely understand that it's non trivial
ILmao#5683: Yes and no I think. It's not a bad fit if you consider the metaprogramming and compiler integration side of things. I believe there was also a desire not to reinvent the wheel
ILmao#5683: I can count on one hand the number of people willing and able to write custom GPU ops for Julia ML libraries. There's just not enough engineering oomph as it were
chilli#5665: Wdym by meta programming and compiler integration side of things
Tau#4010: I agree the autodiff tools (a strong suit of Julia) benefit from covering cases that XLA would restrict
Tau#4010: So not good integration there
ILmao#5683: XLA.jl works a little like TorchDynamo
ILmao#5683: You could argue that was too clever of an approach
chilli#5665: Really?
Tau#4010: But there's a lot to love that could benefit generally
chilli#5665: Then how would that even work on a TPU
chilli#5665: Crucial to torchdynamo is the idea that you can fall back for low cost
chilli#5665: But on a TPU you can’t really get away with any graph breaks
ILmao#5683: Not 100% of course. There's no bailout
ILmao#5683: But the IR integration is very similar
ILmao#5683: It's not tracing IIRC
chilli#5665: How is it similar then 🤔
ILmao#5683: I was trying to capture the level of language integration 🤷♂️
ILmao#5683: Maybe script mode would be a better functional analogy, but I feel like that's an outside of language thing
chilli#5665: I see, sounds more like how a Torchscript => XLA would look like then :P
ILmao#5683: Honestly that reliance on compiler internals is probably part of what doomed it to bitrot. Whereas torchscript is not as intrinsically tied to what happens in the CPython interpreter
chilli#5665: Torchscript has other problems 😛
ILmao#5683: A similar thing happened with Zygote (the AD) and it's still causing headaches to this day...
chilli#5665: Wdym by reliance on compiler internals?
ILmao#5683: Literally using undocumented types and functions only used by the compiler
ILmao#5683: Some of which I believe were experimental or in flux at the time
chilli#5665: Ah, Julia compiler internals
ILmao#5683: Add in the need to understand XLA and there are what, two people who can work on this?
ILmao#5683: Alas
chilli#5665: Hmmm
Tau#4010: It is a shame since Julia is a beautiful language, and its compiler steps are close to being great for this.
ILmao#5683: Eh, "close to" is an asymptotic thing
ILmao#5683: At some point you just gotta bite the bullet and pour in the engineering hours to get a workable product. Waiting for the compiler has caused some big disappointments in the community and I'm not convinced it's healthy
chilli#5665: Wdym "the compiler"?
ILmao#5683: As in "here is fancy new AD that will solve the issues with [old ADs], but it relies on these unstable compiler APIs. Some of which are in nighlty, others in PRs and others unimplemented (and all undocumented)"
chilli#5665: Also, how do things like matmuls even work in Julia + XLA?
Tau#4010: I do think there's some perfectionism involved. But it's still a small language, that isn't rolling in engineers
chilli#5665: it's not really clear to me how you can balance 1. performance, with 2. expressivity
chilli#5665: Like, if I were to naively implement a matmul in Julia
chilli#5665: it'd be 3 for loops
chilli#5665: and be 5 billion times slower than say, PyTorch
chilli#5665: so the obvious solution is to bind it to some underlying C++ kernel
EricHallahan#1051: > Beautiful language
> Indexes arrays from one
(sorry I can't get over this :P)
chilli#5665: but then you lose expressivity since this is a whole new function
Tau#4010: Nah, it's more clever than that. It compiles to XLA instead of llvm
chilli#5665: wdym
Tau#4010: So you have the previous optimizations
Tau#4010: Julia already has all the basics
chilli#5665: how do you "compile 3 nested for loops to XLA instead of llvm"
Tau#4010: The compiler runs through a few levels of IR
Tau#4010: And has various optimizations along the way
Tau#4010: But you can still use matrix mult
chilli#5665: yes, but none of those matter except for "and end up calling cublas_matmul"
chilli#5665: or whatever
Tau#4010: There are auto vectorize macros, but I'm not sure how good they are
chilli#5665: none of those optimizations are gonna get you reasonable performance on a matmul
Tau#4010: I mean you can just use arrays
chilli#5665: you need to call `mkldnn_matmul` or `cudnn_matmul` kernels eventually
chilli#5665: yeah, so I'm asking how a matmul is represented in Julia
chilli#5665: I'm assuming it's some particular operator
Tau#4010: X * Y
chilli#5665: that's the syntax
Tau#4010: ? What do you mean represented
chilli#5665: like, in the IR
chilli#5665: what is the path it takes down to actual hardware
chilli#5665: perhaps matmul is a bad example since it's so ubiquitous
chilli#5665: let's say... convolution
Tau#4010: I don't know the exact path, but you can figure out pretty easily. You can use macros to see exactly what your code is compiled down to
Tau#4010: eg @code_llvm
Tau#4010: to see exactly what llvm code your function produced
Tau#4010: or @code_typed, @code_native
ILmao#5683: Wait, what does "PyTorch" mean here? Some kernel in ATen? (cu)BLAS? Some fancy Python frontend y'all are cooking up in secret?
ILmao#5683: To the other question, same way you would in another framework. Intercept the matmul function call and emit/record an HLO op instead
ILmao#5683: The galaxy brain meme progression basically goes like this: method overloading -> rewriting generated IR -> hooking into the compiler's abstract interpreter to substitute special code for XLA array types.
ILmao#5683: I would say NNlib is the closest equivalent to the high level ATen interface. Maybe somewhere between that and `torch.nn.functional`.
ILmao#5683: i.e. beyond that point are the various dispatches to backends like CPU BLAS, CUDA libraries, etc.
𓆏⸻#2550: https://cdn.discordapp.com/attachments/729741769738158194/945153013276897350/unknown-29.png
𓆏⸻#2550: > The team is aware their model could make it easier for malicious players to produce convincing disinformation or deepfakes. To safeguard against such use cases, they have only released a smaller diffusion model and a noised CLIP model trained on filtered datasets. The code and weights for these models are available on the project’s
𓆏⸻#2550: fucking openai
𓆏⸻#2550: just give me the damn full size model
𓆏⸻#2550: 😭
ilovescience#3282: lol we have had many discussion about this when it was released last year...
someone will probably release an open-source model, you should especially check out @alstroemeria313 's work...
Kia#2550: You know there's Finetuned GLIDE Models on Laion
Kia#2550: https://colab.research.google.com/gist/afiaka87/5f64e4de49b50554270a0a6ece243014/laionide.ipynb
ilovescience#3282: the large model?
𓆏⸻#2550: just looking for something better than vqgan tbh
Kia#2550: It's Finetuned on the 2B dataset
Kia#2550: If you want to finetune your own
ilovescience#3282: again check out alstro's work
Kia#2550: <https://github.com/afiaka87/glide-finetune/releases/tag/v0.0.1>
𓆏⸻#2550: will do
ilovescience#3282: it's getting really realistic, even with faces
Kia#2550: thanks to @Clay Mullis for their work on finetuning GLIDE on different datasets, but other than that there are still things being finetuned so it will take time
ilovescience#3282: like look at this (generated with CLIP-guided diffusion as developed by alstro):
https://twitter.com/EMostaque/status/1494126103350419459
ilovescience#3282: this is pretty cool too:
https://twitter.com/RiversHaveWings/status/1491174376972058625
EricHallahan#1051: something something be the change...
𓆏⸻#2550: understood
𓆏⸻#2550: break into openAI
𓆏⸻#2550: steal the model
𓆏⸻#2550: https://tenor.com/view/explosion-bang-missile-rocket-fire-gif-12481056
ILmao#5683: Reminds me of that "how to get into Deepmind" post on r/ml a while back
𓅬 gabriel_syme 𓅬#3220: I thought Emad said there's some additional GAN artistry in there. Or was that vqgan?
ilovescience#3282: you might be right actually
EricHallahan#1051: I'm going to suggest moving this conversation to away from #general, I would suggest #art.
chilli#5665: err, just mean some kernel in ATen, more or less. The point is that going through Julia here doesn't get you anything special
chilli#5665: I guess the question is... what are the advantages of doing this in Julia?
chilli#5665: since I think you lose a lot of the composability you might otherwise have, no?
ILmao#5683: How so? Where do you see issues with lost composability?
chilli#5665: Since now you just need a new function that the rest of your julia ecosystem isn't going to work with, no?
ILmao#5683: Ah no, it's the same function
chilli#5665: I’m more talking about something like convolution that isn’t already a standard primitive
ILmao#5683: Yeah, still the same function for all backends if I'm understanding you correctly
ILmao#5683: Not the same implementation, however
chilli#5665: What’s considered a backend in this context?
ILmao#5683: XPU
ILmao#5683: Generally speaking
ILmao#5683: For CPU and GPU I think the closest analogy would be the torch dispatcher, since those are eager
chilli#5665: I guess I’m not really talking about the backends in this context, more talking about other composability things
chilli#5665: Like if you want to pass in say… a diagonal tensor
ILmao#5683: Right, just so we're on the same page how would that be handled in torch?
chilli#5665: The dispatcher
chilli#5665: You make it a diagonaltensor more or less, and then dispatch to an appropriate kernel
ILmao#5683: Got it. So the equivalent would be dispatching on the `Diagonal` array wrapper type
chilli#5665: Oh actually
chilli#5665: I realized a question I wanted to ask some Julia person
chilli#5665: How would you implement something like vmap in Julia
chilli#5665: And in particular, I'm talking about composability of vmap
chilli#5665: Like, you can do vmap(vmap(vmap(...)))
chilli#5665: So in terms of an array wrapper type, you would end up with something like Batched(Batched(Batched(...)))
ILmao#5683: I think you'd still need a compiler transform, because the wrapper only goes "skin deep"
chilli#5665: Hmmm, seems more or less identical, perhaps because pytorch's dispatcher is pretty inspired by julia afaik
chilli#5665: Vmap isn't a compiler transform
ILmao#5683: What is it then? IIRC some sort of rewrite system is required
chilli#5665: Like, you can do it in eager mode in Pytorch
chilli#5665: It's just a redispatch
ILmao#5683: Ah
ILmao#5683: I thought you wanted fusion and all that
chilli#5665: Wdym by fusion
ILmao#5683: In that case yeah, a wrap and re-wrap would work I think
chilli#5665: (ik what fusion is just not sure what you're referring to)
|
chilli#5665: One tricky thing is how you handle multiple dispatch resolution
ILmao#5683: Naively doing this would break pointwise op fusion, I think
chilli#5665: Like, what do you do when you encounter foo(BatchedTensor, DiagTensor)
chilli#5665: Or err, lemme simplify it
ILmao#5683: Dispatch is not so much tricky as tedious. Essentially you'd have to add wrapper-aware overloads to stdlib functions
ILmao#5683: And any third-party ones that might come up
chilli#5665: Foo(Batched, Batched(Batched(...)))
chilli#5665: I agree that's tedious, but not what I'm referring to
chilli#5665: How so?
ILmao#5683: Because in Julia that's done as a struct which reifies the chain of pointwise ops. Unless you can specify a priori which of those are "batch safe", a vmap function would have to disassemble that into a series of broadcasts (or something more dimension-aware) of the batched versions of those functions
ILmao#5683: Maybe with some dimension reshaping or juggling
ILmao#5683: What do you envision the dispatch chain being here?
ILmao#5683: I guess it would help to have a `vmap` example to translate
chilli#5665: What's the issue here? I guess I'm not totally sure how pointwise fusion works in Julia
Tau#4010: I think vectorized code is only more efficient on certain hardware (eg avx, tpu). Julia is decent at compiling to something efficient for the hardware in question if it's supported
chilli#5665: Basically, it first dispatches on foo(Tensor, Batched), and then does the other dispatch
Tau#4010: So loops are fast
ILmao#5683: So IIUC, you want `foo(Batched, Batched)` -> `foo(Tensor, Batched)` -> `foo(Tensor, Tensor)`?
chilli#5665: Essentially
chilli#5665: Except that foo(Tensor, Batched) might dispatch to say, bar(Tensor, Tensor)
|
chilli#5665: Lemme give an example
chilli#5665: Dot(Batched(Batched), Batched) => matmul(Batched, Tensor) => bmm(tensor, tensor)
ILmao#5683: Thanks, does `Dot(Batched, Batched)` dispatch to something else?
chilli#5665: No
chilli#5665: Or err, yes
chilli#5665: Eventually
chilli#5665: Dot(Batched, Batched) => mm(tensor, tensor)
chilli#5665: Essentially, the way it works in Jax/functorch is that each batchedtensor layer is assigned a "level"
ILmao#5683: Roger. How about something like:
```julia
struct Batched{T, N, A<:AbstractArray} <: AbstractArray{T, N}
    inner::A
end
unwrap(x::Batched) = x.inner
dot(a::Batched, b::Batched) = mm(unwrap(a), unwrap(b)) # skipping intermediate steps
dot(a::Batched{<:Any, <:Any, <:Batched}, b::Batched) = matmul(unwrap(a), unwrap(b))
matmul(a::Batched, b::AbstractArray) = bmm(unwrap(a), b)
```
(Rough pseudocode)
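For comparison, a minimal Python sketch of the level-carrying wrapper idea; purely illustrative, `BatchedTensor`, the global level counter, and the unwrapping here are toy stand-ins, not the actual functorch implementation:
```python
class BatchedTensor:
    """Toy wrapper: hides one batch dim of `inner` and remembers the vmap
    nesting level it was created at."""
    def __init__(self, inner, bdim, level):
        self.inner = inner    # underlying tensor (possibly itself a BatchedTensor)
        self.bdim = bdim      # which dimension is the hidden batch dim
        self.level = level    # depth of the enclosing vmap

_CURRENT_LEVEL = 0

def vmap(fun):
    def wrapped(x):
        global _CURRENT_LEVEL
        _CURRENT_LEVEL += 1
        try:
            out = fun(BatchedTensor(x, bdim=0, level=_CURRENT_LEVEL))
        finally:
            _CURRENT_LEVEL -= 1
        # Unwrap the batch dim this level introduced before returning.
        return out.inner if isinstance(out, BatchedTensor) else out
    return wrapped
```
Nesting `vmap(vmap(f))` then produces `BatchedTensor(BatchedTensor(...))` tagged with levels 1 and 2, which is what the level-based dispatch discussed below keys on.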
|
ILmao#5683: Interesting, I wonder if a viable alternative would be encoding that level into the batched type
chilli#5665: And when deciding how to dispatch, you dispatch on the arguments with the topmost level, and treat the rest as regular tensors
chilli#5665: No, it has to be a global ordering
chilli#5665: This also defines clear semantics for say, foo(Batched, Grad)
ILmao#5683: wdym by global
chilli#5665: Like, for whatever arguments you have, you need to be able to define an ordering between them
chilli#5665: Of which one dispatches first
chilli#5665: Effectively, it's probably better to imagine that different Batched wrappers are not interchangeable
chilli#5665: It's more like you have
chilli#5665: foo(Batched1(Batched2)), Batched1)
chilli#5665: And so, notably
chilli#5665: foo(Batched1, Batched1) has different semantics from foo(Batched1, Batched2)
ILmao#5683: How do you handle the combinatorial explosion of having to specify dispatches for various batched types? Is there some kind of inheritance hierarchy?
chilli#5665: That's why I said it's a global level
chilli#5665: You only specify dispatches for foo(Batched, Tensor), foo(Tensor, Batched) and foo(Batched, Batched)
chilli#5665: Essentially, you adopt liskov's substitution principle for the non-top subclass and figure out the dispatch on that
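A toy sketch of that resolution rule in Python (the `rules` table and the `BatchedTensor` class are assumptions carried over from the sketch above, not functorch internals): find the highest level among the wrapped arguments, let only those dispatch as batched, and treat everything at lower levels as a plain tensor for this round.
```python
def resolve(rules, *args):
    levels = [a.level for a in args if isinstance(a, BatchedTensor)]
    if not levels:
        # No wrappers left: call the base kernel.
        return rules[("plain",) * len(args)](*args)
    top = max(levels)
    # Only top-level wrappers participate in this round of dispatch; lower-level
    # wrappers are treated as ordinary tensors (LSP) and handled in a later round.
    kinds = tuple(
        "batched" if isinstance(a, BatchedTensor) and a.level == top else "plain"
        for a in args
    )
    return rules[kinds](*args)
```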
ILmao#5683: I'm still not clear on the mechanism or motivation, but maybe going through the `foo` example:
> foo(Batched1, Batched1) has different semantics from foo(Batched1, Batched2)
Would you still not need to define separate dispatches for these two if they have different semantics?
ILmao#5683: Based on your LSP comment I'm envisioning `Batched1 extends AbstractBatch` and `Batched2 extends AbstractBatch`, but somehow I feel like that's not the case
|
chilli#5665: So the first one goes down the path of
foo(Batched, Batched)
chilli#5665: And the second one goes down the path of
chilli#5665: foo(Batched, Tensor) => foo(Tensor, Batched)
ILmao#5683: Wait, where did Batched2 go :thinkies:
chilli#5665: So it goes
foo(Batched1, Batched2) => foo(Batched1, "Tensor"(Batched2 treated like tensor)) => foo(Tensor, Batched2) => foo(tensor, tensor)
ILmao#5683: So Batcheds are tensors as well?
chilli#5665: Yes, they are tensor subclasses
chilli#5665: A batchedtensor is just a tensor that "hides" its batch dimension
ILmao#5683: Cool, I think there's actually some convergence there with https://github.com/JuliaLang/julia/pull/32310
ILmao#5683: Assuming these are the actual signatures, swapping out tensor for AbstractArray would probably work
chilli#5665: Yeah, but the discussion here is about figuring out dispatch order
ILmao#5683: What is the alternate dispatch path that challenges this order?
chilli#5665: Well, you need to figure out the dispatch order in the first place
chilli#5665: For example, do you dispatch on Batched1 or Batched2 first
ILmao#5683: Isn't that what the types are for?
ILmao#5683: Or can `foo(Batched1, Batched2)` take 2 different paths?
|
chilli#5665: Well, it depends on the global level of Batched1 and Batched2
ILmao#5683: Is that tied to the batched type or can it change at will?
ILmao#5683: e.g. is `global_level(Batched1) > global_level(Batched2)`?
chilli#5665: It's tied to the Batched type, but that's created dynamically
chilli#5665: Like, every time you do vmap the inputs that are Batched become a new "Batched" type
ILmao#5683: I'm assuming you're not literally generating C++/Python on the fly, so what does that mean 😛
ILmao#5683: Ok, so instead of double-wrapping they become a "higher ranked" batch type as it were?
chilli#5665: Well, I'm calling it a new type now but it's not literally a new type
chilli#5665: It's just a wrapper with some metadata denoting the level
ILmao#5683: And then at dispatch time you compare the levels of the wrappers to see which path to choose?
chilli#5665: Yeah
ILmao#5683: Do you ever directly check for a particular level? Say take path A if level == 2 or path B if level == 3?
ILmao#5683: Oh and how centralized is this level checking logic? Need it only be done once or for each dispatch?
chilli#5665: No it's purely which level is higher
chilli#5665: Once per dispatch
ILmao#5683: Ok, I *think* this is still doable by encoding the level in the `Batched` type (much like how the array type specifies the number of dims) and doing a conditional for the level checking, but to be sure can you point me to where the list of vmap rules lives in PyTorch?
chilli#5665: https://github.com/pytorch/functorch/blob/0c0f325ba3c83e70c215f231cfd810af68141767/functorch/csrc/BatchRulesLinearAlgebra.cpp#L32
ILmao#5683: Had a quick skim. The most direct translation would be to create a similar batched type with a runtime level and that wraps an inner array. This would be an array subtype to get the tensor-like behaviour you noted
ILmao#5683: The runtime checks for dimension compatibility would be very similar. You'd probably want top-level dispatches to catch batched types, but that wouldn't be necessary if the function uses utilities like `rankWithoutBatchDim` which are Batched-aware.
ILmao#5683: Fancier would be moving the level from a purely runtime thing into the type itself. Similar conditional logic, but now you can lean on the language's dispatch system a bit more. e.g. not necessarily require a block like this: https://github.com/pytorch/functorch/blob/0c0f325ba3c83e70c215f231cfd810af68141767/functorch/csrc/BatchRulesLinearAlgebra.cpp#L77-L81
|
chilli#5665: how so?
rom1504#5008: At laion discord ( <h ttps://discord.gg/mVcgxMPD7e> ) several of us have been collecting audio/text paired data and even training a captioning model based on it
Sounds similar to the kind of stuff you want to do, maybe you'd be interested :)
Kia#2550: it breaks
Kia#2550: <https://discord.gg/kuCjmh9D> invite to the laion server
rom1504#5008: Yeah that was on purpose
Kia#2550: Ok you can't put them in <>
Kia#2550: hm
rom1504#5008: Wanted to link to it but not have it display the invite in super large.
Well anyway i guess it's ok both ways :)
Kia#2550: I suppose so, but yeah, that's the invite. And Rom is right, we're working on an audio dataset and feel free to help! (We appreciate it)
ari#9020: Angle brackets around urls disable Discord's preview
Deleted User#0000: <https://www.youtube.com/watch?v=0jspaMLxBig>
Deleted User#0000: neat
Deleted User#0000: heres some code thats used for one of the bots https://colab.research.google.com/drive/171GirNbCVc-ScyBynI3Uy2fgYcmW3BB9
EricHallahan#1051: I suggest you look at the pins in #art (and maybe move the conversation there).
austinvhuang#6880: Question for people who do a lot of LM training. In The Annotated Transformer there's this optimization where batches are constructed so that examples with similar lengths end up in the same batch. The padding is dynamically adjusted per batch.
I pretty much never see this trick used anywhere else, but padding does have a large effect on speed in general.
|
What's the current best practice with this optimization? Is there a reason why it's not used? https://cdn.discordapp.com/attachments/729741769738158194/945332418389352458/unknown.png
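For reference, a minimal sketch of that trick (sort by length, then pad each batch only to its own max length); the token-id lists and `pad_id` below are made up for illustration:
```python
import torch
from torch.nn.utils.rnn import pad_sequence

def length_bucketed_batches(examples, batch_size, pad_id=0):
    # Group similarly sized sequences together so per-batch padding stays small.
    order = sorted(range(len(examples)), key=lambda i: len(examples[i]))
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        batch = [torch.tensor(examples[i]) for i in idx]
        # Pad only up to the longest sequence *within* this batch.
        yield pad_sequence(batch, batch_first=True, padding_value=pad_id)

batches = list(length_bucketed_batches([[1, 2], [3, 4, 5, 6], [7], [8, 9, 10]], batch_size=2))
```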
StellaAthena#3530: Are you running the notebook on GPU or TPU? This is necessary on TPUs (see T5 and T0, both of which discuss it)
austinvhuang#6880: gpu mostly. are there other training implementations that use this optimization? don't see it discussed much but maybe i'm not looking in the right places or talking to the right people.
CKtalon#7792: https://twitter.com/StasBekman/status/1495514280686456842
EricHallahan#1051: Discussed in length in #research yesterday if you are interested in that conversation.
CKtalon#7792: thanks
tpapp157#3643: TLDR: if you go through the effort to match your model dimensions to the dimensions of specific hardware, you can eliminate a bunch of waste and get significant performance gains.
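In practice that often comes down to rounding dimensions like the hidden size or the padded vocabulary size up to a hardware-friendly multiple; a tiny sketch, where the choice of 128 is a placeholder that depends on dtype and GPU generation:
```python
def round_up_to_multiple(dim, multiple=128):
    # e.g. pad the vocab size so GEMM tiles divide evenly and no partial
    # tiles are launched; the right multiple is hardware-specific.
    return ((dim + multiple - 1) // multiple) * multiple

padded_vocab = round_up_to_multiple(50257)   # -> 50304
```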
DaveS#3481: Any Eluther project related to speech-to-text?
cfoster0#4356: Not atm. I think the LAION folks may be picking up something in that direction
StellaAthena#3530: My prior is that it matters much more at small scale, actually
StellaAthena#3530: YMMV. What might take me 3 days to run on 8xA100s could easily take some grad student a week to run on their hardware
chilli#5665: I still don't believe it
chilli#5665: like, there must be something else going on there
chilli#5665: increasing your matmul size can increase your TFLOPS
chilli#5665: but it shouldn't decrease your wall time
chilli#5665: like, wall time as a function of your matmul size should be monotonic
StellaAthena#3530: Are you saying that compute-efficiency is irrelevant to wall-clock time?
StellaAthena#3530: What's the point of compute efficiency then?
chilli#5665: no, I'm saying that say, increasing your matmul FLOP count by 10% could possibly only increase your walltime by 5%
chilli#5665: and this improves your "compute efficiency", but your wall time doesn't decrease
|
chilli#5665: the situation Stas has, where he increases his matmul FLOP count but *decreases* walltime is very unusual
random person#5234: might be a weird case where the resize kernel call in CuDNN cost more latency?
tpapp157#3643: I think he means effective FLOPs per second. Or something like that.
chilli#5665: nah that kind of stuff should be completely dominated by the actual computation in this case
chilli#5665: effective FLOPS per second is the same thing as what people call "FLOPS"
chilli#5665: it's your floating point operations per second
tpapp157#3643: Obviously reducing the dimension size means fewer total calculations.
random person#5234: FLOPs/s is a hardware spec
random person#5234: not a DNN spec
random person#5234: FLOPs is the DNN spec
chilli#5665: unfortunately, this isn't the case
chilli#5665: FLOPS is somewhat ambiguous about whether it means "floating point operations per second" or "floating point operations"
chilli#5665: but it's very commonly used to refer to "floating point operations per second"
random person#5234: but that in terms of units, does not make sense for a network
chilli#5665: "floating point operations" makes sense for a network
random person#5234: yes
random person#5234: I meant floating points operation per second
chilli#5665: yes
chilli#5665: and people often call that FLOPS
chilli#5665: FLOPS/s unambiguously maps to "floating point operations per second"
|
FLOPS ambiguously maps to "floating point operations per second"
chilli#5665: FLOP unambiguously maps to "floating point operations"
random person#5234: oh I thought FLOPS = Floating Point OperationS
chilli#5665: yes, FLOPS also ambiguously maps to "floating point operations"
chilli#5665: lol
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/945394148259819581/unknown.png
chilli#5665: see Nvidia's spec sheet
chilli#5665: where obviously, TFLOPS maps to "teraflops per second"
random person#5234: Yea... ok right
random person#5234: nvidia always does that
chilli#5665: well, not just nvidia
random person#5234: btw is it just me or do tensor cores only activate on GEMM operations?
chilli#5665: so does Google
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/945394455089934356/unknown.png
tpapp157#3643: You know how many total calculations your model requires, you know how much time it takes to process one sample, that gives you an Effective FLOPs number for a given model on given hardware. He's saying that by reducing some dimensions he can achieve an overall higher Effective FLOPs throughput presumably due to hitting some hardware thresholds in the memory and computation management that resulted in significant time savings per forward pass.
chilli#5665: err, yes, that is what they're designed to do
chilli#5665: lol
chilli#5665: (same for systolic arrays)
chilli#5665: they compute matmuls and matmuls only
random person#5234: I mean that does make sense, but I'm realizing a lot of the time the bottleneck isn't the matmul operations
|
chilli#5665: yes, but it isn't compute either
chilli#5665: oftentimes you're significantly bottlenecked by memory bandwidth
random person#5234: yea I been noticing that
chilli#5665: on a pointwise operator you're nowhere near achieving 20 teraflops on a GPU
chilli#5665: I have this kinda cool plot from a blog post I'm writing
chilli#5665: showing how you're either bandwidth-bound or compute-bound as you vary the amount of pointwise ops fused together
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/945395381930455050/unknown.png
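A rough back-of-the-envelope version of the same point, in Python; the hardware numbers are approximate A100-class figures and are assumptions, not measurements:
```python
# Elementwise add of two float32 tensors with N elements:
N = 1 << 28                 # ~268M elements
bytes_moved = 3 * N * 4     # read a, read b, write out
flops = N                   # one add per element

peak_bw = 1.5e12            # ~1.5 TB/s HBM bandwidth (approx.)
peak_fp32 = 19.5e12         # ~19.5 TFLOP/s FP32 peak (approx.)

time_bw = bytes_moved / peak_bw        # time if purely bandwidth-bound
time_compute = flops / peak_fp32       # time if purely compute-bound
achieved_tflops = flops / time_bw / 1e12
print(f"bandwidth-bound: {time_bw*1e3:.2f} ms, "
      f"compute-bound: {time_compute*1e3:.3f} ms, "
      f"achievable: ~{achieved_tflops:.2f} TFLOP/s")
```
The memory traffic dominates by two orders of magnitude, which is why a lone pointwise op lands nowhere near peak FLOPS unless it gets fused with its neighbors.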
tpapp157#3643: That's a pretty simple use case though. For an NN you start having to worry about prefetching data and other such complexities.
chilli#5665: mmm, sure, data-loading and network calls is a whole nother can of worms
tpapp157#3643: I wouldn't be surprised if the savings come from some interplay between the memory manager and the cache sizes that allowed the hardware to preload an extra layer's params or something like that.
chilli#5665: but once you're handling those things you're left with optimizing memory-bandwidth bound ops
chilli#5665: mmm, I could believe it was some kind of weird interplay with the distributed calls
chilli#5665: but there's not really that much preloading going on
chilli#5665: of your parameters
chilli#5665: typically they sit completely on gpu global memory
random person#5234: There isn't that much prefetching with VRAM, I don't think
random person#5234: Your entire model is there and the on-chip L2 cache is busy with your current layer
dpressel#0928: This technique is called bucketing and it's very common in NLP tasks. In large-scale LM pretraining, normally the input is all just concatenated, and the tensors fed into pre-training are fixed width up to the context length (for a transformer, commonly 256, 512 or 1024). In this case, you just chop it up in chunks, and there is no zero-padding, so no need for bucketing because each batch is filled out to the max width. The original T2T was used for seq2seq, especially NMT, so bucketing makes a lot of sense there. On a TPU, you can do bucketing, but if it's dynamic batching (like what fairseq does to get a similar number of tokens per batch) it's not as efficient as if you set a specific fixed (small) number of bucket sizes and adjust the batch size so you have a constant number of tokens
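A minimal sketch of the packing approach described above (concatenate everything, then slice into fixed-width blocks so there is no padding at all); the EOS id is a made-up placeholder:
```python
def pack_into_blocks(documents, block_size=2048, eos_id=0):
    # `documents` is an iterable of token-id lists.
    flat = []
    for doc in documents:
        flat.extend(doc)
        flat.append(eos_id)          # separate documents
    n_blocks = len(flat) // block_size
    return [flat[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]
```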
faraday#0862: I found reverse word task to be a good example of tokenizer effect on output:
https://www.reddit.com/r/GPT3/comments/sy13ub/davincitext001_is_incredibly_knowledgeable_can/
|
from NeoX 20b:
**Reverse these words.
water: retaw
sound: dnuos
fire: erif
air: ria
microsoft: **msi ffl
laptop: lap
car: ca
guac#4716: yes, see Section 3.9.2 of https://arxiv.org/pdf/2005.14165.pdf
StellaAthena#3530: Those comments rotfl https://cdn.discordapp.com/attachments/729741769738158194/945410079891533864/Screen_Shot_2022-02-21_at_3.02.00_PM.png
faraday#0862: thank you for sharing
faraday#0862: is this just an effect of training data? or is there any effect of tokenization in proper learning for this task?
bmk#1476: it's all tokenizer
StellaAthena#3530: I think "all" is a bit bold, but yes it seems likely to be significantly the tokenier
faraday#0862: where can I find a comparison of behavior between GPT NeoX vs Fairseq models ? (not how they work but about their output properties)
tpapp157#3643: Well it's a result of training via a tokenizer on real text. Obviously, even a BPE tokenizer has tokens for all the individual characters in its dictionary so there's no technical reason why it wouldn't be able to complete the task. The issue is that during training on real text, higher order tokens are more often used and the model isn't able to build the correct token relationships to allow it to complete a task that strongly emphasizes character level understanding. If you finetuned a BPE model on this reverse task it would handle it just fine.
tpapp157#3643: Or even if you did normal training but added more text augmentations like random character insertions and deletions.
|
tpapp157#3643: So no, unless you're using a tokenizer with no character level tokens at all (I can't think of one off hand) which would make completing the task impossible, then it's all training.
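One way to see this concretely is to look at what a GPT-2-style BPE vocabulary does to a word versus its reversal (a quick sketch using the Hugging Face tokenizer; the exact splits will vary by vocabulary):
```python
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
for word in ["water", "retaw", "microsoft", "tfosorcim"]:
    print(word, tok.tokenize(word))
# Common words tend to come out as one or two tokens, while their reversals get
# shredded into several small pieces, so the model rarely sees a clean
# character-level view of the mapping during normal training.
```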
faraday#0862: wow, that was enlightening. no model could handle that (gptj vs fairseq vs gpt-3 davinci)
it seems they can only handle short examples
> Reverse the words of these sentences.
>
> buy me a book: book a me buy
> for this reason only: only reason this for
> The Bermuda Triangle, also known as the Devil's Triangle: Triangle Devil's the as known also, Triangle Bermuda The
> Consequently, the determination of which accidents occurred: occurred accidents which of determination the ,Consequently
> all work and no play makes jack a dull boy:
anthony_fuller#1075: @Deleted User de23c58c @Kharr I made a diagram for MHSA in MS Paint https://ibb.co/YQgy9gw
anthony_fuller#1075: seq_len = 4, and num_heads = 2 😂
anthony_fuller#1075: Didn't include the scale term
n.kh.l#5814: Joined, thanks!
ILmao#5683: I've seen it come up for sequence tasks before, but it definitely isn't as common as just padding.
ILmao#5683: Also @chilli to answer your question from yesterday, have a look at how StaticArrays.jl encodes stuff at the type level. TL;DR is that certain values can be used as type params for a kind of poor man's dependent typing.
chilli#5665: I think it seems pretty difficult to encode this kind of stuff at the type level.
Also, tbh, my initial reason for asking you was hoping that there was some prior art here in Julia to learn from 😛
|
One of the limitations in the current design is that it only clearly delineates semantics for what we call "lexically scoped subclasses". Like, the level system is simply a global stack.
But we want to provide "non-lexically scoped subclasses" too. Like, there's no reason you shouldn't be able to return a DiagTensor from a function, even if the input wasn't a DiagTensor.
So, we want to be able to do things like `DiagTensor(BatchedTensor(LinearOperatorTensor))` or something crazy like that, but we need to figure out appropriate dispatch resolution orders.
chilli#5665: There's also another problem to figure out, which is that, sadly, LSP doesn't completely hold true for all of the subclasses we'd like to write
ILmao#5683: I don't see why `DiagTensor(BatchedTensor(LinearOperatorTensor))` wouldn't work. There's a literal `Diagonal` type in the LinearAlgebra stdlib after all
ILmao#5683: Instead of thinking of the type level encoding as you would generics in other languages, think of it as `constexpr`. Then it doesn't seem so weird to have literals where usually you'd have a named type. It's not a particularly difficult nor uncommon pattern either: every time you see `AbstractArray{T, N}`, the `N` is a literal value and not some subtype of `Int`.
chilli#5665: Yeah, but you then need to define dispatch order with it
ILmao#5683: How is that any different than before? AFAICT there is only one "level" of `DiagTensor`?
chilli#5665: Yeah, but how do you figure out what happens when you do foo(DiagTensor, BatchedTensor)
ILmao#5683: Overload `foo(DiagTensor, Batched{N}Tensor)`?
chilli#5665: yeah, but that's a quadratic explosion
chilli#5665: lol
chilli#5665: the advantage of the current lexically scoped composable subclasses is that it allows you to only handle `foo(BarTensor, Tensor)`, and then handle every kind of composition through LSP
ILmao#5683: Fallbacks are always an option. You only need to be specific when absolutely necessary
ILmao#5683: Which AFAICT is what torch vmap does now
chilli#5665: hmm, this isn't the same thing as the fallback in vmap
chilli#5665: well, there's a couple different problems
|
chilli#5665: for one, how do you figure out whether you should go to `foo(DiagTensor, Tensor)` or `foo(Tensor, BatchedTensor)` first?
ILmao#5683: I don't mean the literal thing called fallback that runs a loop, but rather what happens when you don't hit a perfectly concrete signature
ILmao#5683: It's ambiguous, you'd have to define `foo(DiagTensor, BatchedTensor)`
ILmao#5683: But I don't see how you get away with not defining at least a partial dispatch ordering between those types with the "lexically scoped" approach
chilli#5665: right, so I'm saying that currently, in the lexically scoped setting, it's not ambiguous
chilli#5665: every tensor subclass has a global level that's defined by the lexical scoping
ILmao#5683: So if you want to register a new subclass with a level between two adjacent existing ones, you're SoL?
chilli#5665: so...
```
grad( # level 0
vmap( # level 1
jvp( # level 2
)
)
)
```
ILmao#5683: Right, that's batching. How about diag?
chilli#5665: no, since the levels are dynamically created depending on the lexical scoping
chilli#5665: well, if diag is a function transform then you can ignore these problems 😛
chilli#5665: since it's still lexical scoping
|
ILmao#5683: Then there's no concept of a diagonal tensor in dispatch, is there?
ILmao#5683: So I don't understand the problem
chilli#5665: wdym?
chilli#5665: like, I'm not sure what this means
ILmao#5683: Your question was how to disambiguate between diag and batched when it comes to dispatch ordering
ILmao#5683: But now you're saying diag doesn't even participate in that process
chilli#5665: oh, perhaps this wasn't clear. A function transform basically consists of 2 steps:
1. Wrap all of the inputs in a tensor subclass
2. Basically push that tensor subclass's level onto the global ordering
chilli#5665: So a function transform is just a special case of the dispatch ordering process that also enforces the "lexically scoped ordering"
ILmao#5683: So you still need logic that says "a batched diagonal of this level is higher in the dispatch ordering than a normal batched tensor of this level"
chilli#5665: so...
```
def f(x):
...
return x
vmap(f)(x) =>
def f(x):
x = BatchedTensor(x)
|
...
return unwrap(x)
```
chilli#5665: yes, and that logic is enforced by the lexical scoping
ILmao#5683: Sure, but you have to write a rule for it *somewhere* 😛
chilli#5665: yeah, but the point is that this rule is universal for all function transforms
ILmao#5683: Wait, then you can run into ambiguities when two different function transforms are used as separate arguments?
ILmao#5683: e.g. `foo(Diagonal, Symmetric)`
chilli#5665: yes, that's where the global ordering comes into play
chilli#5665: so, first, let's assume that your function transform only applies to some of your function inputs (which they do)
ILmao#5683: Right, my point is at some point, the system needs to be taught that Diagonal has priority over Symmetric or vice versa
chilli#5665: so `symmetrize(diagonalize(foo, args=(1,)), args=(0,))` => Diagonal has priority over symmetric
chilli#5665: while
chilli#5665: the other way corresponds to symmetric having priority over diagonal
ILmao#5683: Oh I see what you mean by transformations now
ILmao#5683: How does the dispatcher get informed of the presence of `symmetrize`, `diagonalize` and their relative ordering? I assume there's some metadata attached to the transformed function
chilli#5665: the implementation of diagonalize would look something like
```
transform_ordering = []  # global stack of active transforms

def diagonalize(fun, arg_num):
    diag_class = create_diagonal_class()    # fresh subclass for this transform
    transform_ordering.append(diag_class)   # push it onto the global ordering
    def diagonalized_fun(*args):
        args = list(args)
        args[arg_num] = diag_class(args[arg_num])  # wrap just that input
        out = fun(*args)
        return out.unwrap()
    return diagonalized_fun
```
chilli#5665: something along those lines.
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/945539921865748501/unknown.png
chilli#5665: perhaps I should have just copy pasted the vmap implementation in functorch
chilli#5665: lol
ILmao#5683: I see, so `transform_ordering` is the important bit here
ILmao#5683: I guess this is where the topic shifts from "is this possible" to a more philosophical debate. Because it is possible to do such a thing, but the price you'd pay is creating your own dispatch system for these types.
ILmao#5683: I can see how that makes sense for torch, but in Julia it's bound to raise some eyebrows. Whereas defining overloads for the few ambiguities that might crop up is a much simpler solution.
chilli#5665: mmm, I would claim that it's not possible to make something like vmap/grad/vjp without a system like this
chilli#5665: it's not a matter of ambiguities, it's a matter of ordering
ILmao#5683: You can order between batched types just fine
chilli#5665: like, depending on the context `foo(gradtensor, batchedtensor)` does 2 different things
ILmao#5683: Just like you could choose different codepaths for `AbstractArray{T,1}` and `AbstractArray{T,2}`
|
ILmao#5683: For example?
chilli#5665: well, it'll either return `GradTensor(BatchedTensor)` or `BatchedTensor(GradTensor)`
chilli#5665: which compute different quantities
ILmao#5683: Isn't the bigger problem that `GradTensor` needs to be its own type? 😉
chilli#5665: (if that's not clear why they're different i can explain further)
chilli#5665: I don't think this is specific to GradTensor
ILmao#5683: I know we've been mostly talking in the abstract, but I'm honestly at a loss for concrete scenarios where it matters
chilli#5665: well, `GradTensor(BatchedTensor)` and `BatchedTensor(GradTensor)` compute fairly different quantities
chilli#5665: would explaining that help answer your question?
ILmao#5683: IDK what "depending on the context" entails here
ILmao#5683: Is there literally a context manager you can invoke that changes the behaviour of `foo`?
chilli#5665: "depending on the context" => either `vmap(grad(foo))` or `grad(vmap(foo))`
ILmao#5683: But you want them to be associative because downstream code might behave differently?
chilli#5665: well, 1. they're not associative, 2. not sure why they need to be associative
ILmao#5683: Sorry, commutative
chilli#5665: they're not commutative either
chilli#5665: 🙂
chilli#5665: vmap(grad(foo)) has different semantics from `grad(vmap(foo))`
ILmao#5683: Then I don't see the issue?
ILmao#5683: You will get different semantics because of how the wrapper types propagate
|
chilli#5665: I'm saying that I think the semantics will be quite strange if you have a fixed ordering
ILmao#5683: Where is this fixed ordering coming from though? I never mentioned one
chilli#5665: or err, like defining concretely what happens when you have `foo(GradTensor, BatchedTensor)`
ILmao#5683: Oh I see what you mean
ILmao#5683: You're thinking of AD as a dispatch-driven thing
ILmao#5683: Hence `foo(GradTensor, BatchedTensor)`
chilli#5665: well, I'm using GradTensor as a concrete example here
chilli#5665: but sure, you can substitute it with another tensor subclass
ILmao#5683: Whereas I'm assuming a source-to-source version that *outputs* a `GradTensor` but doesn't have to take one
ILmao#5683: That's the problem, I can't think of another one which would make sense here.
ILmao#5683: Like for the array wrapper example with Diagonal, instead of a function transform you'd write a wrapper function that wraps the relevant args eagerly (sure it could be pulled out into a fancy struct and/or a macro decorator like `functools.wrap`, but fundamentally it's a callable that doesn't mutate any state). Then overload dispatch as usual
ILmao#5683: Having the order of those transforms mean something when they are applied to disjoint args is interesting, but you'll forgive me for not seeing a use case for it (outside of vmap + grad, which AFAICT apply to all args and which we've already discussed)
chilli#5665: I guess you wouldn't consider forward-mode AD/dual numbers to be different? 😛
ILmao#5683: No, because at least with Julia ADs the order of wrappers is there
ILmao#5683: i.e. things should nest differently depending on the call hierarchy
chilli#5665: hmmmm
chilli#5665: You're saying that all arguments should always have the correct wrappers?
chilli#5665: and so the dispatch order is simply defined by the order of the wrappers?
ILmao#5683: I'm saying that I haven't mustered the mental fortitude to think of counterexamples at 10PM, but for now yes :harold:
mgostIH#0245: I remember reading on Twitter about a somewhat recent (maybe a month ago?) paper showing that you could learn a lot of regular languages' semantics with some method of theirs (maybe it had to do with bayes), it was quite interesting since it had sparked a conversation about Chomsky being wrong, maybe it was discussed here too
|
mgostIH#0245: If anyone remembers what the paper was ping me
mgostIH#0245: Oh by rubber ducking here I was able to find it, here for anyone else interested https://twitter.com/spiantado/status/1486033071300169729
zphang#7252: It tended to be used more for RNNs, where there was a (more?) direct relationship between sequence length and running time per batch.
As for why they are not used as much for Transformers, I chalk it up to a combination of reasons
- Padding to big blocks is simply easier to write code for, especially given that we often already need to write masking code
- Certain hardware (read: TPUs) prefer this
- affects the randomness of your batches
bmk#1476: this is implemented in eval harness fwiw
kurumuz#5695: I have it implemented in my trainer
kurumuz#5695: but generally we just pack the dataset to 2048 tokens
kurumuz#5695: so it's not used at all
kurumuz#5695: and when you do that you can just JIT your code and get a good speedup
zphang#7252: yea packing is more common for transformers, especially code intended for TPUs
zphang#7252: (also reminds me how one of the worst things about TPUs is having to write special last-batch handling)
uwu1#4864: https://torchtext.readthedocs.io/en/latest/data.html?highlight=bucket#torchtext.data.BucketIterator
𓅬 gabriel_syme 𓅬#3220: night time viewing
https://www.youtube.com/watch?v=M49TMqK5uCE
EricHallahan#1051: https://twitter.com/BigscienceW/status/1496124175206998024
StellaAthena#3530: Big Science has a path forwards for a 175B multilingual model (hopefully) https://cdn.discordapp.com/attachments/729741769738158194/945752635238121482/IMG_9127.jpg
EricHallahan#1051: Ah of course embeds are broken. `:P`
|
Sphinx#2092: Only 14 languages?
StellaAthena#3530: It’s a curated data crawl and constrained by the languages people were able to curate
StellaAthena#3530: Also, like everything decided by committee, it was a political decision. If people didn’t advocate for languages they didn’t get on the final list
chilli#5665: can you add me to the slack?
EricHallahan#1051: https://docs.google.com/forms/d/e/1FAIpQLSdF68oPkylNhwrnyrdctdcs0831OULetgfYtr-aVxBg053zqA/viewform
StellaAthena#3530: There’s a bunch of bureaucracy involved because technically it’s a subset of the HF slack. This has been recognized as a bad idea but at this point there’s too much inertia to move.
chilli#5665: oh, I'm already on the HF slack
chilli#5665: I'll probably just ask Stas to invite me to the channel
faraday#0862: are there effective ways to force cohesion between paragraphs when generating multiple paragraphs on the same topic? I'm trying to provide the previous paragraph and a topic prompt, but that doesn't quite lead anywhere. plus, if I do this by narration ("the next paragraph read:") then artifacts come up in the generation
chilli#5665: what's the biggest model Big Science has trained so far, btw?
StellaAthena#3530: Yeah once you have a visitor account its pretty easy
StellaAthena#3530: Fully? A 13B model on OSCAR that sucks, but it's probably because OSCAR sucks
StellaAthena#3530: By “sucks” I mean that GPT-J and sometimes even Babbage outperforms it
EricHallahan#1051: The 13B model is definitely aaaaapilled though.
chilli#5665: have they done partial training runs of larger models?
StellaAthena#3530: Yeah, in the low 100s
EricHallahan#1051: based
chilli#5665: (as in, enough to be useful)
chilli#5665: ah, how was that model?
StellaAthena#3530: Sorry, when you say “useful” do you mean “useful *for use at a task*” or “useful to assess the computing framework’s correctness and efficiency”
|
chilli#5665: "useful for use at a task"
StellaAthena#3530: Oh, no
chilli#5665: like, "taken enough steps that it's worth evaluating how it does"
EricHallahan#1051: no lol
StellaAthena#3530: No
StellaAthena#3530: Mostly it was used to evaluate and experiment with remedying training pathologies
StellaAthena#3530: They have not trained a model that outperforms GPT-J yet
StellaAthena#3530: (Well, from scratch. They did train T0)
Orbus#5389: I've been lurking here a bit, but I was wondering - would there be a good place here to find someone who might be interested in collaborating on an informal presentation at a small SF convention? A con I'm working on wants to dig into the reality behind some of those "I fed a bunch of *X* into an AI, and it said..." memes, and I'd love to work with someone whose knowledge is deeper than mine. (For reference, I'm an interested amateur in the domain who has successfully fine-tuned some GPT-like models with much use of other people's libraries.)
Daj#7482: I assume that ~all of those memes are fake
Daj#7482: But who knows
Orbus#5389: I assume they are, too! The pitch for the panel was "Look, these are fake. But we actually *do* have a data-set of SF con panels, since we are an SF con. So let's show what happens when we actually feed them into some reasonably state-of-the-art models, and talk about what the tech *actually* achieves, which is less funny and more interesting."
Orbus#5389: I can do it from the perspective of "feeding panels into a model someone else trained," but my ability to talk about what's actually happening under the hood is high-level enough that I figured I'd ask around and see if there's anyone who likes talking to curious people about how these things actually work.
jandras#2934: Is it alright to advertise for relevant startup jobs (GAN research) at one of the channels?
Daj#7482: Generally no, but you can message me or another mod to ask. We generally only allow very relevant/interesting job posts though
ilovescience#3282: those memes are fake, they are a project by this guy:
https://twitter.com/KeatonPatti
Orbus#5389: Huh, didn't realize there was a person who personally pushed the format so much. Thanks!
Orbus#5389: Would still love to find anyone who's interested in talking about "doing it for real" for a panel, but this is really great context.
someKindaBean#8471: there's people who do legit things like that. The first one that comes to mind is the AI Weirdness blog.
|
someKindaBean#8471: https://www.aiweirdness.com/ai-generated-valentines-cards/
𓅬 gabriel_syme 𓅬#3220: damn wish I had time to help with greek 😦 is there a list for that? also, where is the hang out place, discord?
EricHallahan#1051: @𓅬 gabriel_syme 𓅬
Spacecraft1013#5969: so afaik the only reason why we can't just run the `175B.yml` file and get a gpt-3 like model is lack of compute right?
Spacecraft1013#5969: or are there other code limitations with it
Louis#0144: It's a big financial commitment
Louis#0144: Commitment is scary
Louis#0144: It's also a logistical nightmare lol
StellaAthena#3530: We can run it, it'll just take a long-ass time
bmk#1476: there will also need to be a bunch of optimization to figure out the most efficient layout and the best lr/etc
EricHallahan#1051: TL;DR: We absolutely could, but we won't until we are comfortable enough to commit to it.
Spacecraft1013#5969: makes sense
Spacecraft1013#5969: ~~just steal microsoft compute so we can get on openais level~~
zphang#7252: someone tell me if I'm thinking something completely silly here:
1. Compute loss over batch, backprop, get gradient, optimizer
2. Compute loss over each example, backprop individually, average gradients, optimizer step
ought to be equivalent, right?
zphang#7252: wait that's how grad accum works
zphang#7252: ack why's my code borked
StellaAthena#3530: You're probably doing normalization wrong
|
StellaAthena#3530: It's very easy to bork normalization and regularization when doing gradient accumulation
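For reference, a minimal sketch of gradient accumulation with the scaling that usually gets borked; `model`, `optimizer`, `loss_fn`, and `loader` are assumed to exist, and `loss_fn` is assumed to already average over the micro-batch:
```python
accum_steps = 4
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = loss_fn(model(x), y)
    (loss / accum_steps).backward()   # scale so accumulated grads match the full-batch average
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```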
zphang#7252: if it's a scalar factor wrong that'd be fine
for some reason my individual version is only learning... label statistics
(I'm debugging on a single batch, and its predictions are all the same for every example, with probability corresponding to the distribution of the labels)
zphang#7252: I'm going to make a wild guess and predict it's some silly python/scope error
zphang#7252: *Narrator: It was, in fact, a silly python/scope error.*
Kharr#7888: Are you averaging after you accumulate?
zphang#7252: yep, but it turned out to be some other error
StellaAthena#3530: TFW your finetuning dataset is *really* noisy https://cdn.discordapp.com/attachments/729741769738158194/945929285313658920/Screen_Shot_2022-02-23_at_1.24.59_AM.png
𓅬 gabriel_syme 𓅬#3220: I took great care to make sure I deduplicated and added only valid (generated) outputs to the initial training set. And yet, it still hurt performance. I wonder if it is my training / preprocessing or smth https://cdn.discordapp.com/attachments/729741769738158194/945944034336186428/unknown.png
𓅬 gabriel_syme 𓅬#3220: (v1: models trained on the initial dataset, the rest are models on consecutive steps of the generative training process)
𓅬 gabriel_syme 𓅬#3220: that said what they lose in correctness they gain in diversity and OOD generation
Emad#9608: There is enough compute available but given Big Science (of which some folk here are a part) is about to start a 175bn parameter run on Jean Zay I don’t think it makes sense to replicate that, particularly when few can even run or fine tune models of that size.
Instead optimising and increasing utility of LLMs is probably the most interesting area
https://twitter.com/bigsciencew/status/1496124175206998024?s=21
Emad#9608: Basically compute access isn’t an issue any more (😏), dev time and making the most of it is.
faraday#0862: is there anyone who has tried to optimize GPT-J prompts for specific tasks using ML (or any other method)? does everyone approach the problem as an exhaustive search? does it make sense? fine-tuning seems computationally expensive and costly, so I'm curious about how people discover effective prompts in the first place.
ari#9020: I'm not sure anyone's bothered since if you're going down that way, you might as well go all the way to soft prompts: https://arxiv.org/abs/2104.08691
Muennighoff#9764: Yes you need to install the sentence-transformers from inside the SGPT repository at https://github.com/Muennighoff/sgpt/tree/main/biencoder/nli_msmarco/sentence-transformers via `pip install -e .`
It just adds those new pooling methods, so could also copy them over to `sentence-transformers` - I might open a PR for that in their repo
|
d4rkyfirefly#8400: Hello everyone!
Very excited to start a new journey in AI art and be part of this community.
I wanted to ask some questions here and there, just to see if this technology can fulfill the idea that I have and help the project that me and my team have been working on for the past 4 years.
Our objective is to create 12 images in a given style. We in fact have 4 pictures (portraits, colored) and 8 sketches (drawn with pencil, in black and white). What we seek is to reproduce those 8 sketches, but in the same colours or as similar a style as possible, to have the full set.
What I would like to know is which path and technology to choose, since I noticed there are plenty of different tools out there, and how you guys would solve such a mission.
I do know how to program in python, which is a plus lol.
Thanks for those who can give some hints and suggestions on which path to choose.
mank#6981: Hey y’all, longtime lurker here just saying Hi. I’ve been watching this server as I was doing an AI specialization during my undergrad, which I just wrapped up this year, and I find the stuff here really cool. Looking forward to spending more time here now that I’m out of school and into industry 🙂
bayesiankitten#0080: Hi! I'm Ameya (drimpossible.github.io/about) -- have been lurking around here for a bit. Doing my PhD currently in the UK. Had a question -- is there any chance there are past checkpoints of the models trained which I can access (GPT-J and comparable)?
EricHallahan#1051: Our plan is to release all the checkpoints from GPT-NeoX-20B training. :)
bayesiankitten#0080: (Context: I recently starting working with @janus in the AI safety camp. I am thinking whether I can explore what is learned by GPT-esque models over time)
StellaAthena#3530: There are 30-something past GPT-J checkpoints on the eye
bayesiankitten#0080: Ah amazing. Found these, thanks a lot. Trying to bootstrap by looking for smaller models which I can play around with, eventually will move towards GPT-NeoX-20B!
EricHallahan#1051: That is the reason why we release them! We are happy to support research on that front, hence #interp-archive.
bayesiankitten#0080: Ah amazing! I will tune in to that channel, this might be exactly what I'm supposed to be looking at to figure out whether I can chalk up something in this direction~
|
alstroemeria313#1694: @nshepperd ohhhh... i figured out a neat thing about InfoLOOB
alstroemeria313#1694: So with InfoNCE you can compute a bias to the positive pair logits, which should be negative, to compensate for the difference between the contrastive batch size you are using and the dataset size (i.e. the batch size you would use to get a full batch gradient).
alstroemeria313#1694: it is log(batch_size) - log(dataset_size)
alstroemeria313#1694: This bias, when added to the numerator, has no gradient (it's just a constant) so we can just not do that.
alstroemeria313#1694: This bias, when added to the positive pair's item in the denominator (recall we logsumexp over the logits for the denominator), makes the positive pair's contribution to the denominator less.
alstroemeria313#1694: So given some fixed batch size, in the limit of infinite dataset size the positive pair never contributes to the denominator.
alstroemeria313#1694: And we obtain InfoLOOB.
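In symbols, with s_ij the logit for pairing example i with example j, batch size n, and dataset size N (my notation, just sketching the argument above):
```latex
% InfoNCE with the finite-batch correction b = log(n) - log(N) applied only in
% the denominator (in the numerator it is a constant and has no gradient):
L_i = -\log \frac{\exp(s_{ii})}{\exp(s_{ii} + b) + \sum_{j \neq i} \exp(s_{ij})},
\qquad b = \log n - \log N .
% As N \to \infty, b \to -\infty, the positive pair drops out of the
% denominator, and we are left with the leave-one-out (InfoLOOB) objective:
L_i \to -\log \frac{\exp(s_{ii})}{\sum_{j \neq i} \exp(s_{ij})} .
```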
Kharr#7888: Look into the Mistral GPT models as well. They released lots of checkpoints over the training period. Includes smaller models which are easier to explore and compare.
Sidd#6307: ^ can confirm -- 610 checkpoints x 5 random seeds for each of GPT-2 Small, GPT-2 Medium following this schedule: https://github.com/stanford-crfm/mistral#resources
All checkpoints are hosted on the HF Hub, and can be loaded by grabbing the appropriate branch of the core model repos. For example, the checkpoint for train step 396000 of the first random seed of the GPT2-Medium models (`arwen`) is here: https://huggingface.co/stanford-crfm/arwen-gpt2-medium-x21/tree/checkpoint-396000
nshepperd#2316: oooh
nshepperd#2316: then it makes sense that infoloob doesn't saturate and can overfit
nshepperd#2316: bc 'infinite data'
nshepperd#2316: i guess
alexandrost#2936: Hi! I am aware that PNY A6000 is the most standard A6000 out there, but I have recently stumbled upon Lenovo RTX A6000 - does anyone have any experience or know anything about the Lenovo version of the A6000?
Thank you!
Spacecraft1013#5969: I think they’re the same cooler design with different branding, so there wouldn’t be any difference
Spacecraft1013#5969: I have a PNY one and it works great, never had any overheating issues
Spacecraft1013#5969: So just get whichever is cheaper/more available
|
alexandrost#2936: thank you! that's very helpful. The lenovo is usually a bit cheaper so I might go with it
Some Point Process#3793: https://www.youtube.com/watch?v=HfJpQCBTqZs
samtube405#0352: Hello, just wondering whether lm-evaluation-harness repo is limited to the evaluation of gpt-2/3 models, but not for BERT?
bmk#1476: correct, eval harness is for gpt2/3/j/neo/neox, but not bert
dunky11#8257: Is google colab pro + only giving P100s out lately?
dunky11#8257: Got nothing but P100s the last 2 weeks
Kia#2550: Reroll and reroll
dunky11#8257: kill session, start again?
Kia#2550: if you're using like V100 in a long period of time colab would probably kick you to lower GPU's
dunky11#8257: Ye, I bought it like 10 days ago
dunky11#8257: But I got P100 10 days in a row
Kia#2550: Factory reset then run !nvidia-smi
dunky11#8257: Okay, will try thanks
dunky11#8257: How often do you ~ try?
Kia#2550: If you got a P100, factory reset, then check what you got; once you have the GPU you're looking for, stop factory resetting
Kia#2550: Im on pro,so when I get a t4 I mostly reroll 5 times to get a P100
Kia#2550: wish it helps
Aric#1747: Does anyone have a go-to method/best-practices for debugging memory leakage in pytorch+cuda?
StellaAthena#3530: > Does anyone have a go-to method/best-practices for debugging memory leakage in pytorch+cuda?
|
I find that crying helps. It guilt-trips the computer into being nice
alstroemeria313#1694: (a) put everything you can into a function and call the function over and over to *make sure* all of the stuff allocated in it goes out of scope
alstroemeria313#1694: (b) Don't save tensors with a grad_fn, detach() them
alstroemeria313#1694: if you are really paranoid do a `gc.collect()` then a `torch.cuda.empty_cache()` after each call of the function
alstroemeria313#1694: (I do this on Colab because otherwise the notebook environment will accumulate top level tensors on GPU that won't go out of scope, and people like to run the stuff over and over)
alstroemeria313#1694: (Like the notebooks *invisibly save* your results in variables that are like `_7` where 7 means the 7th cell you ran)
alstroemeria313#1694: (Which means they *don't get garbage collected*)
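Put together, a minimal sketch of that hygiene (assuming some existing `model` and `batch`):
```python
import gc
import torch

def run_step(model, batch):
    # Do the work inside a function so intermediates go out of scope on return.
    loss = model(batch).sum()
    loss.backward()
    return loss.detach()          # never keep anything that still has a grad_fn

loss = run_step(model, batch)
gc.collect()                      # drop unreachable Python objects / reference cycles
torch.cuda.empty_cache()          # release cached, unused blocks back to the driver
```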
chilli#5665: I think we have some upcoming stuff that might do a really good job in helping debug this stuff
chilli#5665: Tbd though
StellaAthena#3530: When people talk about standard practice for finetuning being using 10% the learning rate, is that 10% the *min* learning rate or 10% the *max* learning rate?
StellaAthena#3530: Specifically, with GPT-NeoX 20B we had the lr range from 9.7e-5 to 9.7e-6. When finetuning should I use 9.7e-6 or 9.7e-7?
AI_WAIFU#2844: I need some really long form datasets, things like books or data that can be sorted by timestamp stretching back very far, anyone have any ideas?
StellaAthena#3530: how far back? And is this for a proof of concept or something "real"?
AI_WAIFU#2844: I guess it doesn't actually have to be that far, but this is for a proof of concept
AI_WAIFU#2844: I just need really long sequences
Kharr#7888: https://github.com/deepmind/pg19
AI_WAIFU#2844: Yeah I guess that will do for now
AlephNotation#8282: Hey everyone! I've been lurking here for a few months now but seeing as I am leaving my research job for a different role I figure it's time to introduce myself!
I'm currently an ML researcher for a fortune 50 company. I have about 6 years of experience deploying models and about 3 as a researcher. Most of my prod stuff is in torch but I've really enjoyed using jax for my research. I'd love an opportunity to continue doing research as I step into my new role and begin ramping up the company for some serious ML work. Anyone know where I can find a home to play with tensors in the meantime?
|
AlephNotation#8282: Oh, most of my professional work involves sequence modeling but I've been dipping my toes into stuff like non-parametric transformers and graph nets. I've also done some fairly significant work building deep learning SLAM to deploy on device
alstroemeria313#1694: 👋
StellaAthena#3530: Welcome! As a general rule I encourage people to check out GPT-NeoX on our GitHub as a good way to get started. It’s our main NLP codebase. https://github.com/EleutherAI/gpt-neox
I also recently posted about some issues that could use some love: https://discord.com/channels/729741769192767510/730090096287547444/946550650622337095
T_Olabode#7343: Decent paper to look at for prompt engineering. May be helpful
https://arxiv.org/abs/2107.13586
uwu1#4864: would love to hear more about the on device DL SLAM :) - it seems like DL has managed to eat the whole 3D scanning pipeline after you do SLAM and get camera pos, is it something you could drop in end2end with an NN? or a combo of systems?
StellaAthena#3530: Good luck fitting any of that DL stuff on a robot tho
uwu1#4864: I mean if you can afford a robot you can probably afford a 3090 to put on it too
kurumuz#5695: you dont need a 3090
kurumuz#5695: just a snapdragon SoC
uwu1#4864: if you're paying 30k for an ABB robot arm + the driver boards and such I think I wouldn't penny pinch on the SoC
EricHallahan#1051: You're more likely to be limited by form factor/weight/power than cost in industry.
uwu1#4864: but yeah I was thinking it would be more useful for content creation, for 3d scanning with dynamic real-time feedback. Like you can do the SLAM on device to show and place the objects, which themselves are scanned on some cloud GPU. The current on-device SLAM options either need a depth camera or are just bad
uwu1#4864: there was a startup promising this but apple bought them out a few years ago
StellaAthena#3530: I feel like one of us is deeply confused about something, or just miscommunication. What DL systems are you thinking have subsumed the entire pipeline that can be run in something vaguely resembling real-time on an 8xA100, let alone a 3090?
StellaAthena#3530: > would love to hear more about the on device DL SLAM 🙂 - it seems like DL has managed to eat the whole 3D scanning pipeline __after you do SLAM__ and get camera pos, is it something you could drop in end2end with an NN? or a combo of systems?
NVM it me who can’t read
|
uwu1#4864: instant-NGP? you just feed it camera parameters and RGB images
StellaAthena#3530: Didn’t that come out last week
uwu1#4864: and it runs real-time on a 3090, even decently fast on my laptop 1070
StellaAthena#3530: Oh last month.
StellaAthena#3530: No that’s a totally fair answer, I had just forgotten about that paper.
uwu1#4864: the SLAM part to extract camera params and the sparse point cloud is handled by COLMAP which isn't very good.
But yeah before this would be a multi hour process in PhotoScan or Matterport or whatever, similar for the nerf I think
StellaAthena#3530: How good is it IRL? I haven’t had a chance to try it
StellaAthena#3530: > But yeah before this would be a multi hour process in PhotoScan or Matterport or whatever, similar for the nerf I think
Yeah, I took an AI for Robotics course last year and tried to experiment with putting NeRF on a robot and cried
StellaAthena#3530: Even if I could get a robot to lug around an 8xA100 workstation, it still wasn’t fast enough rotfl
uwu1#4864: it's great, main faults are being quite sensitive to the camera parameters (although it can optimize them it never really worked for me), and kind of more sensitive to floaters/bad data than maybe a commercial product would be. This was just their little demo on GitHub though so probably with heuristics you could solve the latter part. Also it makes a NeRF for scenes which isn't the best repr to mesh if you want to get assets out
StellaAthena#3530: I wonder if you can speed it up if you only care about detecting walls
uwu1#4864: i wonder, it would be interesting to see how the hash table size/training time change over scene complexity. I think you could also get walls by only sparsely fitting it and doing a 3D Hough transform too
AlephNotation#8282: This is a good place to start https://gradslam.github.io/
uwu1#4864: awesome!
AlephNotation#8282: Our devices were quite beefy as each had its own k8 cluster
uwu1#4864: I think this also has potential in improving the output of generative models, you could make them make more realistic 2D images by making an interpolation around them scan well for instance
AlephNotation#8282: “Mobile server” is probably a better description than device. Although we did have some gnarly constraints
JOHN_MCCAIN_R#3152: i've been looking for some time but cannot confirm if this is a thing that can be done -- I want to take a huge (gigabytes) amount of corporate text and PDF files on a large range of different subjects, and apply AI to create either a visual map one could drill down to find facts about discovered subjects, or a helper-bot trained using all that data. Is there something already out there to do this? I was assuming perhaps a GPT-Neo 'fine tune' on the data but that doesn't seem right.
|
Tau#4010: By the way @alstroemeria313 I modified your cc12m_cfg_train.py to work on tpus and added some fixes and convenience. Do you mind if I post it on github? It's hacky, but it does solve enough issues/annoyances that I think others could benefit.
alstroemeria313#1694: sure! that's fine :blobcutehappy:
alstroemeria313#1694: i... need to stop procrastinating and clean up the pytorch training script for release
alstroemeria313#1694: but it is supposed to be released as open source, mit licensed
Perceptron#9314: https://www.theverge.com/2022/2/25/22951376/nvidia-incident-alleged-cyberattack-february-2022
Maybe hold off updating or installing stuff for a while?
Kia#2550: Ow wow, interesting article
Octopirate#9999: uh oh
Louis#0144: @ethan caballero sigh
Aric#1747: Thanks for those answers, will try those and if they don't work I'll try Stella's approach :D
Aric#1747: Eh it's not gone but now it's slow enough that I don't run out of memory for my task lol
Aric#1747: https://xkcd.com/1718/
jordiae#4107: Which is the cheapest service for storing 10s/100s of TBs of data with ssh access, high bandwidth and relatively infrequent access? All S3 tiers seem a bit overkill and overpriced in this regard
asparagui#6391: diy, hetzner
kurumuz#5695: try coreweave
Louis#0144: :goose:
Teemochu#8740: seedbox.io
Teemochu#8740: plus you get unlimited gigabit+ egress via torrent
Teemochu#8740: no SSH I don't think though
Teemochu#8740: you do get SFTP
|
jordiae#4107: Got it, thanks for all the answers!
Teemochu#8740: not sure if it's *cheapest* per se but it is my go-to rec for an S3 alternative if you don't particularly need it to be browser-facing
jordiae#4107: I need bash/python access, both upload and download
Teemochu#8740: https://seedbox.io/hetzner/ decently sure the dedicateds do have SSH/root... last one on this list looks like a steal for the storage you get
Teemochu#8740: (it's the shared storage plans that don't have root access)
jordiae#4107: Got it, thank you!
Teemochu#8740: IPFS and Filecoin are also worth looking into, I'm not sure how easy or cheap they are by comparison to things seedbox advertises @jordiae
bmk#1476: is filecoin actually not shit now
bmk#1476: also how does it measure up to sia/storj
rom1504#5008: probably wasabi
but you may be surprised by the fact that what is costly when talking about 100TBs is *not* the storage but the egress
so you need to think about what is going to send the data to your storage, and what's the egress cost from that place
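(ballpark: big-cloud egress is typically on the order of $0.09/GB, so pulling 100TB back out even once is roughly 100,000 GB × $0.09 ≈ $9k, which can easily dwarf the monthly storage bill; that's also why providers like wasabi advertise free egress)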
zphang#7252: @Sid @TastyBucketOfRice @researcher2 @Daj @janus @Orz @triggerhappygandi @guac
We are planning on submitting a version of the GPT-NeoX-20B paper for the Big Science Large Language Model workshop.
Could you sign up here (if you don't already have an account) and reply/DM me with your name and the email address you signed up with?
https://openreview.net/signup
**The deadline for paper submission is this Monday (02/28)** so ideally send me your details by today or tomorrow.
Octopirate#9999: hey does anyone have benchmarks for inference time for neo20b?
|
Octopirate#9999: i looked for them online but couldn't find them
Octopirate#9999: wondering if a chatbot in 20b is viable for realtime
Caelum#8192: I would love to know some more data from more setups too but if you haven't already try it at https://goose.ai/ and https://www.forefront.ai/pricing quotes "300 tokens in and 30 out in 3.15 seconds"
Octopirate#9999: hmmm, do they have open server specs?
Caelum#8192: nope, I asked forefront for the same reason to compare and they didn't want to say, to abstract away from the user. Fair enough I think but would still be nice to know
Caelum#8192: not sure about goose ai
Octopirate#9999: yeah i get it
Octopirate#9999: that's a little annoying
EricHallahan#1051: You are going to find that nearly everyone is going to keep that stuff private.
Octopirate#9999: and forefront is per hour and not per request which makes me think it's not serverless lol
Octopirate#9999: oh i know
Octopirate#9999: just didn't know if goose.ai was part of this project or what
Octopirate#9999: since i know you all have some obsession with geese
EricHallahan#1051: https://www.eleuther.ai/goose
Octopirate#9999: you bastard, i saw it in the loading message
Octopirate#9999: i have the link MEMORIZED
Octopirate#9999: you ain't getting that drum fill on ME
Octopirate#9999: anyway forefront is pretty cool, i'm gonna try that
EricHallahan#1051: https://blog.eleuther.ai/year-one
Octopirate#9999: man you guys rule so hard
|
OccultSage#3875: About 10 tokens/s.
Octopirate#9999: oh yeah i love AI and will check out stuff on gh issues and such but if you need help on the webside that's what i do professionally so i can help if you need anything on that side as well
Octopirate#9999: though that's probably closed-source lol
EricHallahan#1051: https://github.com/EleutherAI/new-website
OccultSage#3875: goose.ai is not part of this project, though some people here do work on goose.ai, and it is a joint venture with CoreWeave, who provided the hardware and support for 20b training.
Octopirate#9999: i stand corrected
Octopirate#9999: got it. no affiliation with forefront though?
Octopirate#9999: i do not know the culture here. i wish i did
OccultSage#3875: No, though some ForeFront people are here as well.
OccultSage#3875: @Octopirate People are not going to tell you the servers they use to host 20b, because it then becomes an implicit part of the contract, if they manage to figure out how to get it running on lower spec hardware.
EricHallahan#1051: I need to do some documentation to make it navigable to people who are not me. `:P`
Octopirate#9999: i will simply slam the codebase into my forehead and absorb the knowledge
Caelum#8192: I guess there is also a thing where they manage to get it running on lower spec hardware and don't want competition to know it's possible haha
Octopirate#9999: yeah i can help with docs and test automation, even if boring
Octopirate#9999: yeahhh
Caelum#8192: It could be as simple as saying "currently X but it can change, though you can expect this rate not to decrease". I think the real reason is more directly related to competition
Octopirate#9999: i wonder what the margins are on these businesses anyway
Octopirate#9999: they can't be making bank, right?
OccultSage#3875: Yes, as telling what hardware it takes also informs your competition of how much it costs you to run the service.
Caelum#8192: https://www.coreweave.com/pricing for some reference if you can guess what hardware they are using
|
Octopirate#9999: no 4x 3090
Octopirate#9999: i've heard (unsubstantiated conjecture time) that 4x 3090 is more performant than the V100 and A100 in training
Octopirate#9999: man i really need to brush up on benchmarks and stuff too
StellaAthena#3530: Maybe for small scale stuff but definitely not for large scale stuff
StellaAthena#3530: Put another way, there’s a zero percent chance that 400 3090s > 100 A100s. Dunno about 4 vs 1 tho
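(rough spec check from the published datasheets: a 3090 is ~71 dense FP16 tensor TFLOPS with 24GB of GDDR6X at ~936 GB/s, while an A100 is ~312 dense FP16 tensor TFLOPS with 40-80GB of HBM2 at 1.5-2 TB/s plus 600 GB/s NVLink, so the gap at scale is much bigger than the sticker prices suggest)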
Octopirate#9999: yeah
Octopirate#9999: i mean it's diminishing returns for sure when it comes to the 3090
Octopirate#9999: whereas the A100s are designed to work together
Octopirate#9999: i think that's an apt comparison, thanks
Caelum#8192: outside of training, I reckon it's probably best to get many weaker gpus than fewer strong ones
EricHallahan#1051: It's all static, semantic HTML 5 and CSS 3, minimal JS (to keep it lightweight, to help those who are privacy nuts and turn off JavaScript happy, and because I hate needing to use JS when it isn't strictly necessary). We can continue this conversation in #website if you are interested---it has been pretty underutilized lately.
Octopirate#9999: (side note: what are our opinions on federated learning now?)
Octopirate#9999: sure!
Octopirate#9999: i'm the kind of freak who uses react for everything even when he could probably get away with not using it
Caelum#8192: use svelte and sveltekit
Caelum#8192: it's unironically objectively the best way to do web dev
Caelum#8192: for any web dev
Octopirate#9999: "That was last weeks toolchain. This one
Caelum#8192: :D
Octopirate#9999: i have yet to use svelte, actually
|
Octopirate#9999: really been meaning to
Caelum#8192: do it
EricHallahan#1051: Yeah that has been mostly JPrestler's job. `:)`
Octopirate#9999: it looks alright, although their example of "we've reduced this react function by 3 times :3" is deeply, deeply flawed
Caelum#8192: that smile being in a code block seems sus
Octopirate#9999: and is intentionally the worst possible way of writing that react function
Octopirate#9999: which put me off a little
Octopirate#9999: but y'knnowwww it's ok
Octopirate#9999: vue was very disappointing
Caelum#8192: where is this example?
Octopirate#9999: on their homepage
Octopirate#9999: ok here check it
Octopirate#9999: ```
<script>
let a = 1;
let b = 2;
</script>
<input type="number" bind:value={a}>
<input type="number" bind:value={b}>
|
<p>{a} + {b} = {a + b}</p>
```
Octopirate#9999: svelte app
Octopirate#9999: ```js
import React, { useState } from 'react';
export default () => {
const [a, setA] = useState(1);
const [b, setB] = useState(2);
function handleChangeA(event) {
setA(+event.target.value);
}
function handleChangeB(event) {
setB(+event.target.value);
}
return (
|
<div>
<input type="number" value={a} onChange={handleChangeA}/>
<input type="number" value={b} onChange={handleChangeB}/>
<p>{a} + {b} = {a + b}</p>
</div>
);
};
```
Octopirate#9999: "equivalent react"
Octopirate#9999: except this is terrible
Octopirate#9999: you wouldn't make two functions for the handlechange
Octopirate#9999: you would use an arrow function
Octopirate#9999: like this
EricHallahan#1051: > As it turns out, the fact that Machine Learning engineers despise JavaScript (while still needing it) become my entry ticket to some of the coolest projects I ever worked on.
Octopirate#9999: ```js
import React, { useState } from 'react';
export default () => {
const [a, setA] = useState(1);
|
const [b, setB] = useState(2);
return (
<div>
<input type="number" value={a} onChange={e => setA(+e.target.value)}/>
<input type="number" value={b} onChange={e => setB(+e.target.value)}/>
<p>{a} + {b} = {a + b}</p>
</div>
);
};
```
Caelum#8192: oh yeah that is a bad react example from 2019
Caelum#8192: they should update that
Octopirate#9999: but it's wayyy less impressive
EricHallahan#1051: That quote was recently picked up by Stanford AI lab so it is fresh in my mind.
Octopirate#9999: pff
Octopirate#9999: oh, i've done some work with OVAL and genie
Octopirate#9999: and the almond project
Octopirate#9999: monica lam's whole situation at the moment
|
Caelum#8192: the conciseness comes from binding not from shorter syntax, that is a godly bad example you're right
Octopirate#9999: the issue is that the react line is only like 2 lines longer if you do that
Octopirate#9999: oh, and if you were a psychopath
Caelum#8192: It is seriously a lot more concise in real world examples
Octopirate#9999: ```js
import React, { useState } from 'react';
export default () => {
const [numArray, setNumArray] = useState([1, 2]);
return (
<div>
      {numArray.map((num, index) => (
        <input key={index} type="number" value={num}
          onChange={e => setNumArray([...numArray.slice(0, index), +e.target.value, ...numArray.slice(index + 1)])} />
      ))}
      <p>{numArray.join(" + ")} = {numArray.reduce((partialSum, a) => partialSum + a, 0)}</p>
</div>
);
};
```
Octopirate#9999: i have to go so i can't finish this
|
Octopirate#9999: something along those lines
Octopirate#9999: sorry i would have loved to make that compilable
Octopirate#9999: that example handles an amount of numbers defined at runtime
Octopirate#9999: it's also horrible and psychotic but shorter than the svelte example
Octopirate#9999: it also wouldn't work because of the setNumArray call
Octopirate#9999: but shhhhh you could make it work with a spread operator
Octopirate#9999: ok brb
Octopirate#9999: hey are you guys an official company?
cfoster0#4356: No, we aren't even an official nonprofit, last time I checked
Octopirate#9999: oh weird
Octopirate#9999: yeah you guys should do that
Octopirate#9999: not that difficult at all
Octopirate#9999: i did it for my collective
Octopirate#9999: registered agent in delaware will do it for a couple hundred
EricHallahan#1051: No
EricHallahan#1051: We very much could become a nonprofit, but it has been a strategic decision so far to not become a legal entity.
Octopirate#9999: hmm
Octopirate#9999: ok
StellaAthena#3530: You say that as if we could fill out the paperwork in less than a month
Octopirate#9999: pff
|
dunky11#8257: Always getting this "[Errno 107] Transport endpoint is not connected" bug after a few hours in google colab, does someone know what causes it?
Deleted User#0000: just unmount and remount the drive
T_Olabode#7343: I had the same thought. Are some of these web3 projects actually useful now?
I remember the hype of IPFS during the ICO boom.
StellaAthena#3530: I have seen zero evidence of any "web3" project being useful for anything other than committing crimes and doing unregulated speculative investment.
bmk#1476: ipfs alone never was a crypto thing
bmk#1476: filecoin builds on top of ipfs, but ipfs alone isnt a crypto thing
GANSY#7777: Yikes, that is a very short-sighted narrative and not any more applicable than saying the USD is not 'useful for anything other than committing crimes and doing unregulated speculative investment." Self-sovereign identity alone counters that statement. Web3 has a lot to offer, even if it isn't completely mature right now, it is being built in a way that offers a desirable solution to a lot of shortcomings.
bmk#1476: this particular argument (does crypto have any value) is very mindkilled so please have it in a different server
bmk#1476: or in #off-topic , but preferably another server
GANSY#7777: I'm just responding to a comment that was already here. I've said what I have to say.
Caelum#8192: *#off-topic is mindkilled*
Metanauta#7388: Hello, what model are you using for the images that appear in faraday-cage?
Kia#2550: CLIP Guided Diffusion model and VQGAN+CLIP ViT-B/16
Kia#2550: Check #the-faraday-cage-archive pins for more info
Kia#2550: (It's not magic if any Level-5 tells you)
EricHallahan#1051: Magic
EricHallahan#1051: @BATbot is magic™️
Metanauta#7388: disco diffusion?
|
EricHallahan#1051: I strongly suggest you consider #art for that; this isn't really the place for this discussion.
EricHallahan#1051: Again, not the place for this discussion.
Metanauta#7388: Ok, I will move the challenge to the mentioned channel.
Metanauta#7388: Is there a Colab with indications for using this model?
BoneAmputee#8363: this is the notebook that `.diffusion` is based on <https://colab.research.google.com/drive/14xBm1aSxQLbq26-jmDJi8I1HJ4ti5ybt>
Metanauta#7388: Thanks!
immibis#3179: who knows their nvidia hardware? Is it worth getting a Tesla K80 (8 year old dual-GPU card) just to have 24GB of VRAM available (instead of 2GB)?
Apparently that CUDA version is about to stop being supported in Tensorflow, but Tensorflow can be downgraded, presumably.
Deleted User#0000: that sounds like a massive pain tbh
bw#3136: The free version of Colab usually allocates 12GB K80s. You can try it there. It's really slow.
bmk#1476: just get a 3090
immibis#3179: thing is that's a 10x price difference
immibis#3179: for a 3x teraflops difference (although idk if that counts all the fancy accelerator cores)
bmk#1476: you can get a K80 for like $200??
Octopirate#9999: can you get one in this economy
bmk#1476: I have one
immibis#3179: for like $300ish
immibis#3179: it caught my eye because of 24GB and $300 in the same rectangle
bmk#1476: well it's better thought of as two 12gb cards but yes that is a pretty good deal if you're looking for a bargain
immibis#3179: I was wondering that. Dual-GPU cards don't work together transparently?
|
bmk#1476: no, it's essentially the same as having two GPUs, except they fit in one PCIe slot
immibis#3179: makes sense
immibis#3179: slightly disappointing, but makes sense
EricHallahan#1051: (usually there is a PLX on the board to accomplish this)
rom1504#5008: can you even plug a K80 somewhere reasonable?
rom1504#5008: (ie without buying expensive server specific hardware)
alstroemeria313#1694: they don't actually work well with non-datacenter motherboards
alstroemeria313#1694: i have known someone who tried it
immibis#3179: eh, i googled it and found someone's experience saying it worked okay for them. You do have unusual power and cooling requirements
alstroemeria313#1694: ah
immibis#3179: do you know the specific problem?
alstroemeria313#1694: i forget what the exact problem was, it required some motherboard feature that was only on very recent consumer hw
immibis#3179: like, the card has no fan on it, because most servers have front-to-back airflow throughout the whole server already, so you have to rig up something custom
immibis#3179: hmm
alstroemeria313#1694: oh
alstroemeria313#1694: > Most of modern motherboards have option “Above 4G decoding” in bios (or something similarly named), and it should be enabled for tesla. It’s not enabled by default. It could be difficult to find, it’s usually buried in the advanced PCI settings.
alstroemeria313#1694: or "large BAR"
StellaAthena#3530: https://twitter.com/PatrickPlaten/status/1498380008833884162?s=20&t=4WMms5XJFpHUbXBztCO8_w
immibis#3179: hmm, may need to build an entirely new PC with a supported motherboard. though next time I reboot I will see whether it's supported in my BIOS, as it's not in the manual
immibis#3179: then the price argument disappears 🙂
|
rom1504#5008: I also know somebody that tried it and failed
rom1504#5008: but if you're good with hardware it might work
uwu1#4864: the titans are like the consumer version of the Tesla ones, I think it might be easier to make them work with normal hardware
uwu1#4864: woah the k80s are like £300 on ebay though :o
uwu1#4864: titan xps are about twice that
uwu1#4864: I wonder if immersion cooling would work? I guess prob more work than just getting a server rack
asparagui#6391: @immibis skip the k80 and get a more modern gpu
asparagui#6391: k80 should be ~$100-150; more than that, people are ripping you off
asparagui#6391: k80's are a pita to cool as well
immibis#3179: aren't all GPUs ripoffs at this particular moment?
asparagui#6391: i just mean that is the street price
asparagui#6391: people on ebay have all sorts of wonky buy now options
asparagui#6391: if you really want a k80 i'll sell you one for cheap
Caelum#8192: On the eleuther blog post, it mostly seemed to say TensorFlow being a pain to work with was the reason for the move to NeoX with GPUs. What other reasons could there be? Are TPUs really only worth it for smaller scales?
Caelum#8192: On paper as someone with no practical experience they seem really good value
asparagui#6391: they are good value if you have code that works on them already
asparagui#6391: writing said code is the trick
asparagui#6391: i would argue the reverse: they're overkill for small setups but shine at scale
Caelum#8192: What makes the code harder to write? Is there some hardware limitation that has to be accounted for? Or are the frameworks just bad?
asparagui#6391: that's an interesting question and could argue about it for a while
|
Caelum#8192: would be very interested to hear your argument for a while :D
asparagui#6391: loosely tpu is a fast matrix mult hardware for certain ops + software to bridge/fill in the gaps
immibis#3179: haven't used one but my assumption would be both
asparagui#6391: tf was heavily tied to the early gen but as a result tf ended up brittle i would say
immibis#3179: are the frameworks like OpenGL, where everything works but you have to guess which patterns will execute efficiently?
asparagui#6391: do you know opengl enough for opengl analogies lol
immibis#3179: is that not an OpenGL thing?
immibis#3179: that's how I remember OpenGL being
asparagui#6391: ahh i think i follow what you're asking
asparagui#6391: um yeah loosely you write high level code, tensorflow finds a way to convert it to something a tpu can run
asparagui#6391: but not everything is optimal and so yeah you often have to go down to the hardware level to get good performance
asparagui#6391: at which point people start wondering why they're using the high level framework
immibis#3179: like in Vulkan you are basically trying to directly program the hardware, vs in OpenGL the abstraction model is often quite far removed from the hardware and the driver translates it
immibis#3179: right
immibis#3179: is there a TPU equivalent of Vulkan?
asparagui#6391: vulkan + mlir, yes
asparagui#6391: to continue the opengl analogy, later versions got so high level they got out of touch with reality
Caelum#8192: reject Tensorflow, return to OpenCL
asparagui#6391: at which point things went full circle back down to opengl es/vulkan style 'this is what the hardware actually does' approaches
immibis#3179: I think OpenGL had the rug pulled from under it as GPU architecture evolved - it wasn't exactly OpenGL's fault
|
immibis#3179: back in 1990, a GPU really did have a hardware flag to enable fog
asparagui#6391: one off op + one off op + ... --> pile of complexity --> rebuild from scratch
asparagui#6391: maybe the tpu equivalent of vulkan is raw xla i suppose
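e.g. you can peek at what your high-level code lowers to; a quick sketch with jax (older xla_computation API), just to show the layering:
```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.tanh(x @ x.T).sum()

x = jnp.ones((128, 128))
print(jax.make_jaxpr(f)(x))                     # the framework-level trace (jaxpr)
print(jax.xla_computation(f)(x).as_hlo_text())  # the XLA HLO the TPU/GPU backend actually compiles
```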
asparagui#6391: anyways, to the original question
EricHallahan#1051: *Vulkan boilerplate*: :guilty:
immibis#3179: *flashback to nintendo DS homebrew. Allocate VRAM bank D to background 4 tile data!*
asparagui#6391: if you're gonna be learning hardware intimately might as well pick something you can actually buy
EricHallahan#1051: You have to look at it with the perspective of where we were roughly a year ago. If you wanted to develop for TPU pods, you really only had one option for a framework: Mesh TensorFlow
Mesh TensorFlow is incredibly cursed. It is poorly documented, has functions that nameclash other functions within TensorFlow itself but have completely different behaviors, and is in general a pain to work with on top of all the other pains of TensorFlow. GPU support, while it exists, never reached a stable state and is not optimized by any stretch of the imagination.
While GPT-Neo could have maybe scaled to the hundreds of billions of parameters in a stable manner on paper, it had inefficiencies that made it extremely impractical to do so. So when the opportunity came along to switch to GPUs, we took it.
EricHallahan#1051: The FAQ has some good info on this topic.
https://www.eleuther.ai/faq/
immibis#3179: where are you finding not just that GPU, but *any comparable* GPU for $100-$150?
immibis#3179: oh wait. @asparagui are you talking about a single-GPU K80 card? Because this is a dual-GPU card for ~$300. Makes sense if there was a single-GPU version it would be half that
asparagui#6391: i have one sitting right here and that would be what i think it is worth
asparagui#6391: i have a dual one
asparagui#6391: anything > $200 somebody is giving you a bad price
Caelum#8192: yeah I saw the examples of it being terrible haha. I guess the Google coupling could make people less interested in making a `Mesh Tensorflow but Good`
asparagui#6391: just my .02
EricHallahan#1051: I am going to agree with @asparagui and say that TPUs are fantastic when you want to scale (assuming you either have the code ready to go or are willing to put in the work). TPU Pods have high-bandwidth, low-latency interconnect as standard that is unavailable on pedestrian GPU hardware.
|
EricHallahan#1051: My general take on them is "TPUs are generally quite good for what they are optimized for, and if your use case falls outside of that, tough luck."
Octopirate#9999: oh shit
Octopirate#9999: gpt3 api just left beta
Octopirate#9999: https://openai.com/blog/api-no-waitlist/
Octopirate#9999: or at least no waitlist
EricHallahan#1051: > just
Sorry to break it to you, but that is obsolete news since it is 3 months old. 🙃
Octopirate#9999: oh nooooo
Octopirate#9999: i am on the bad end of the news cycle
ethan caballero#6044: FTX Future Fund feels like OpenPhil on steroids
https://twitter.com/ftxfuturefund/status/1498350483206860801
https://twitter.com/ftxfuturefund/status/1498350486738460672
chilli#5665: I mean, for one, TPUs are a piece of proprietary Google-only hardware whose entire toolchain below the framework level is closed off to anybody who doesn't work at Google.
chilli#5665: tbh, i think it's been an immense and admirable effort by one of the largest tech companies over 5+ years to make TPUs even semi-usable...
Sphinx#2092: Emphasis on semi.
chilli#5665: haha, well, it's probably better within Google
chilli#5665: at least then only the first part of the statement ("proprietary Google-only hardware") applies
ILmao#5683: We were playing around with one of the TPU Kaggle starter challenges the other day and I felt this acutely. Granted that feels like more of a frontend thing, as even torch XLA was faster to get up and running
ilovescience#3282: i don't believe you, using pytorch XLA was easy? no issues?
ILmao#5683: Yes, with the caveat that the extent of the model building was grabbing a pre-trained one from torch hub and running some gradual finetuning
|
ILmao#5683: Of course that doesn't hold necessarily for anything non-trivial
ilovescience#3282: okay that's still fairly good...
ilovescience#3282: on Kaggle you said?
ILmao#5683: I think a colab notebook wouldn't be too different
ilovescience#3282: have you seen my Kaggle notebooks?
AI_WAIFU#2844: Yep, there's still lots to be desired, but the whole TPU stack gives you a good interface to setup a whole bunch of parallelism schemes.
Sphinx#2092: Sure, I think that's fair. I was being mostly half-serious. I think especially with JAX, things are pretty good. I still have flashbacks to having to use TPUEstimator among other things.
StellaAthena#3530: Jax fundamentally changed the game for TPUs
Deleted User#0000: you are welcome:chadgoose:
Tau#4010: For what it's worth I found using torch_xla (with lightning) finicky but doable. Gotta get just the right combination of library versions, and check for performance killers by logging torch_xla.debug.metrics.metrics_report(), and maybe using XLA_SAVE_TENSORS_FILE. It's not too difficult though, and the multicore training was pretty seamless and quite fast (I haven't tried across tpus yet). There is currently the issue that torch_xla doesn't work on colab at all, breaking all the nice example notebooks.
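(concretely, the check I mean is just this, assuming torch_xla is installed:)
```python
import torch_xla.debug.metrics as met

# after a few training steps: lots of CompileTime entries or aten::* counters
# usually means something is recompiling every step or falling back to CPU
print(met.metrics_report())
```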
DigThatData#7946: has anyone played with this? Looks v useful: https://github.com/f-dangel/cockpit
tpapp157#3643: The problem with these sorts of tools is that while these metrics are moderately interesting to monitor, they don't really give you much insight into what's going wrong or why or (most importantly) what you can do to fix it. This tool does not solve the core problem which it claims to solve: `Successfully training such a network usually requires either years of intuition or expensive parameter searches involving lots of trial and error`. You still have to do all the trial and error, except now you have a dashboard full of vague metrics to give you a false sense of understanding by throwing random numbers at the pattern matching circuits in your lizard brain.
DigThatData#7946: I don't think the intention here is to suggest that this sort of thing replaces the need for intuition
DigThatData#7946: like, I imagine someone like @alstroemeria313 who like eats, sleeps, and trains models, would probably get a lot more out of that dashboard than I would
tpapp157#3643: It's more useful to track metrics specifically related to the task/architecture you're trying to train. These sort of generic metrics are only useful in a trial and error setting, where you're training slight variations of the same architecture in the same way on the same data. But you still need to build up the intuition of what constitute good or bad values for a given training setup. If you stray too far then you're no longer comparing apples to apples. Metric values which are optimal in one setting are usually not in others.
EricHallahan#1051: Unironically this.
Sphinx#2092: We just need code translation to get its shit together, and do it automatically
tpapp157#3643: Automating workflows takes too much time. We need to automate the automation.
faraday#0862: is there a terms of use for GPT-J (aside from "do no evil") ?
+ could providers impose terms of use *about how the model is used* (not the infra load) ?
|
chilli#5665: btw, @kurumuz , if you've never looked at a PyTorch profiler for an overhead-bound sequence, here's an example
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/948505274912747560/unknown.png
chilli#5665: the top row is CPU execution time, the bottom row is GPU execution time
chilli#5665: notice how often the GPU is idle
chilli#5665: it never gets enough work to run ahead of the CPU, so most of the time is spent waiting for PyTorch to dispatch some more GPU ops
chilli#5665: On the other hand, this is no longer overhead bound
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/948505790749212732/unknown.png
chilli#5665: Notice how the GPU kernels take significantly longer than a PyTorch operator to run. So, by the time one kernel finishes, a bunch more are queued up after it.
chilli#5665: So the GPU is almost always full
chilli#5665: @alstroemeria313 might also be interested
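for anyone who wants to reproduce a trace like this, something along these lines works (minimal sketch; the model and input here are just stand-ins):
```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(1024, 1024).cuda()   # stand-in model
inp = torch.randn(8, 1024, device="cuda")    # stand-in input

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    with torch.no_grad():
        for _ in range(10):
            model(inp)

prof.export_chrome_trace("trace.json")  # open in chrome://tracing or Perfetto to see the CPU/GPU rows
```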
flowpoint#7450: so tell me pls.,
i am starting to read more lesswrong.
Connor and others who are much more knowledgeable than me about rationality seemingly talk about decision theory and "rats" being bad.
Are those memes or what should one keep in mind when observing lw?
StellaAthena#3530: There’s a license file on GitHub and HuggingFace
Daj#7482: > seemingly talk about decision theory and "rats" beeing bad.
I'm not sure what you mean?
flowpoint#7450: i likely anchored on this:
https://discord.com/channels/729741769192767510/730095596861521970/909173548604608534
Daj#7482: Oh decision theory is just kinda abstract and nerd snipe-y, but that comment was definitely a meme lol
|
flowpoint#7450: i take things boringly too literal
OccultSage#3875: And provide entertainment!
immibis#3179: or rigging up some custom air cooling adaptor just for that card
asparagui#6391: it be a pita, i have one 😛
instartgenius#1247: hi guys
instartgenius#1247: i have a problem
DigThatData#7946: do tell
instartgenius#1247: I am getting an error in Disco Diffusion v5 [w/ 3D animation].
instartgenius#1247: https://cdn.discordapp.com/attachments/729741769738158194/948639513440313444/Ekran_Resmi_2022-03-02_20.54.32.png
instartgenius#1247: It gives this error as soon as it switches to the 2nd frame.
instartgenius#1247: I looked a bit but couldn't figure it out. I'd be grateful if anyone could help.
DigThatData#7946: DD actually has its own server, you'll probably get better/faster support over there: https://discord.gg/V9AVcXTM
Louis#0144: Is disco diffusion anything like disco Elysium
Dashiell#8739: I've seen of people talking about whether or not LLMs know how to do add / do math, but has anyone ever tried to teach them (or a different sort of model) how to use a calculator?
StellaAthena#3530: https://arxiv.org/abs/1909.00109
Dashiell#8739: lol
Dashiell#8739: of course
Dashiell#8739: thanks so much!
tpapp157#3643: A number of years back there was a whole avenue of research looking into providing NNs a bunch of predefined mathematical and logical functions that they could (hopefully) learn to use. It's gone out of style in more recent years and I haven't seen anything along those lines in a bit though I haven't been looking for it. It was conceptually similar to the modern MoE approach but with predefined experts rather than learned experts.
ethan caballero#6044: https://twitter.com/ethancaballero/status/1499429692692172806
|
iOhadRubin#3747: anyone with a m1 mac got tensorflow to work with large models (with gpu support)?
faraday#0862: I know you asked about tensorflow but did you see this about pytorch support?
https://nod.ai/pytorch-m1-max-gpu/
did you try it?
iOhadRubin#3747: pytorch is actually preferred haha. thanks!
SοmeDude#8207: Does anyone know of a model that can paraphrase text by switching words and rearranging the sentences while maintaining the original text's context and meaning?
SοmeDude#8207: right now I'm using https://huggingface.co/tuner007/pegasus_paraphrase but I'm wondering if there's a better way?
immibis#3179: for people who run ML on (multiple) GPUs: how important is the CPU speed in the same system? is the CPU involved for data transfer between GPUs? I imagine that at least the built-in PCIe controller is probably involved
EricHallahan#1051: On high-end configurations like the cluster we used for GPT-NeoX-20B training, it's not too important since NVLink, PLXs, and GPUDirect RDMA handle most of the load that would otherwise need to be performed by CPUs.
immibis#3179: I see that GPUDirect RDMA is a compute capability 5.0 feature. Does anyone know if device-to-device CUDA memcpy also requires 5.0?
WAUthethird#4977: interesting, openai considers "erotic content" misuse
https://openai.com/blog/language-model-safety-and-misuse/#fn5
Satisfies Values#0777: This must be an attempt at load shedding.
immibis#3179: that's what fucked over AI Dungeon
WAUthethird#4977: that and the fact that the required content filter catches even minor violence
nshepperd#2316: > , or not capturing types of outputs we have found to be correlated with misuse (e.g., erotic content)
:catgirl5:
EricHallahan#1051: Whatever happened to Redwood's project?
|
rom1504#5008: It can be important if you're doing images preprocessing on the CPU
EricHallahan#1051: > It can be important if you're doing images preprocessing on the CPU
It is important for the dataloader, ftfy
Some Point Process#3793: lol why did that come to mind?
guac#4716: the elicit latent violence project?
EricHallahan#1051: https://www.alignmentforum.org/posts/k7oxdbNaGATZbtEg3/redwood-research-s-current-project
Some Point Process#3793: Yeah I was guessing it had to do with the dungeon connotation
EricHallahan#1051: No, it was with the violence connection.
EricHallahan#1051: It is mentioned as a component in the OpenAI blog post.
Some Point Process#3793: ah
EricHallahan#1051: > Specifically, we have developed new evaluation metrics for measuring toxicity in model outputs and have also developed in-house classifiers for detecting content that violates our content policy, such as erotic content, hate speech, violence, harassment, and self-harm.
faraday#0862: anything related to weapons, wars etc trigger sensitive content warning as well
atilla#0325: how can they call that a misuse :berk:
Technobird22#2055: What would be the best way to handle a class imbalance, where we have clean data and noisy data and have to classify whether data is clean or noisy, but only a tiny proportion of the data is noisy?
faraday#0862: do you mean class imbalance for the whole data (in terms of classification target)? or just an imbalance between noisy vs clean ?
tpapp157#3643: There are plenty of Data Science tutorials on the internet that will help you handle common issues like class imbalance. I recommend looking into those. Basically, find a resampling technique that suits your needs.
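As a concrete starting point (sketch only; it assumes you already have a feature matrix `X` and labels `y` with 1 = noisy):
```python
from sklearn.linear_model import LogisticRegression

# X: feature matrix, y: labels (1 = noisy), both assumed to exist already

# option 1: reweight the loss so the rare "noisy" class counts more
clf = LogisticRegression(class_weight="balanced").fit(X, y)

# option 2: resample the training set instead (needs the imbalanced-learn package)
from imblearn.over_sampling import RandomOverSampler
X_res, y_res = RandomOverSampler(random_state=0).fit_resample(X, y)
clf = LogisticRegression().fit(X_res, y_res)
```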
Emad#9608: Does it really cost tens of millions to train large language models any more? I can get 1000+ a100s for $10m a year https://www.theregister.com/2022/03/03/language_model_gpt3/
StellaAthena#3530: It's never cost 10s of millions to train LLMs
StellaAthena#3530: I guess I haven't run the numbers with V100s? So maybe on those, as they're a significant bang-for-your-buck upgrade
Emad#9608: "Training these large models is always expensive," Yoav Shoham, co-CEO of AI21 Labs and a retired Stanford computer-science professor, told The Register. "If you're not smart enough, you can easily run into tens of millions of dollars if you're not careful. You need to make sure that you know unit economics so that you don't lose money on every customer and only make it up in volume."
|
Emad#9608: Also this "We really shouldn't have a world where every single company is training their own GPT-3, it would be massively environmentally costly, compute costly, and we should be trying to share resources as much as possible," (Aidan) Gomez told The Register.
StellaAthena#3530: [citation needed]
Emad#9608: Above article
StellaAthena#3530: I meant for his claim lol
Emad#9608: Heh
Emad#9608: I’m pretty sure it’s like a few plane trips worth
Emad#9608: Of energy
Orz#3023: I mean
I guess it includes dev costs too?
Emad#9608: Keep seeing these claims
Emad#9608: Idk I mean per what they say they should just like support open source LLMs right
Emad#9608: Why duplicate effort
StellaAthena#3530: One would think so
EricHallahan#1051: They want to own the IP obviously.
StellaAthena#3530: I think that politics is really what's driving this convo
StellaAthena#3530: It's politically expedient for people to believe these claims
StellaAthena#3530: I don't know if Gomez or Shoham does
StellaAthena#3530: But it's in their economic and political interests for most people to
Emad#9608: Bay Area AI salaries :RainbowDumb:
StellaAthena#3530: GPT-NeoX 20B's emissions:
|
> This is roughly the equivalent of the yearly emissions of the average American or 35 round-trip flights between New York City and San Francisco.
StellaAthena#3530: :100000IQ: For every person you kill you can train 50 cutting edge AI models without being carbon positive
Caelum#8192: 1 dev for 1 year lol
Emad#9608: Yo AGI, optimise for environmental efficiency
….
🖇
inox#5400: they specifically have to be american, in other countries you would have to kill 10x more people
Emad#9608: Fair point
asparagui#6391: if you target the jetset instagram influencer crowd, everybody comes out ahead
Emad#9608: A modest proposal
Octopirate#9999: this is true
Octopirate#9999: i'm gonna spill minor company secrets here
Octopirate#9999: my parents are very high up at ms
Octopirate#9999: they spent somewhere in the range of a billion on total costs for gpt-3
Octopirate#9999: incl. of course compute, dev salaries, data collection, everything
StellaAthena#3530: The cost of doing something the first time and the second time are wildly different
StellaAthena#3530: Also, GPT-3 was trained on V100s not A100s because A100s didn't exist at the time. As I mentioned, A100s are a lot more cost effective.
StellaAthena#3530: Even if your claims are true, that would have little bearing on the cost of training it today
Octopirate#9999: this is true
|
Octopirate#9999: i mention it because you said this
bmk#1476: :thinkspin:
Octopirate#9999: i don't think gpt-3 is scalable though
Octopirate#9999: for anyone other than large companies atm
StellaAthena#3530: I'm not sure what to tell you other than "we did the math and that's blatantly false." A couple million is definitely within the reach of a university or even a single wealthy individual to fund.
It's about the political willpower to spend the money, not the money itself.
Octopirate#9999: could you link me to a budget?
Octopirate#9999: i'd be curious to see the specs for the compute you'd need to pull it off
Caelum#8192: AI21 were probably trying to discourage competition I guess
Octopirate#9999: i say "i think" here because i really haven't looking into it too much... training a 175b model seems prohibitively expensive to me but i'm not sure. sorry if ambiguity, i wasn't trying to assert anything
StellaAthena#3530: I don't have a budget written up, but the most recent estimate I have (based on published training numbers from OAI and actual benchmarks of our codebase) is 1,440 A100 months
Octopirate#9999: that's for davinci?
StellaAthena#3530: Yes
Octopirate#9999: that seems reasonable
Octopirate#9999: the memory has to be a pita though right?
StellaAthena#3530: I mean, you need a lot of GPUs to fit the model but you also need a lot of GPUs for it to not take a decade to get 1,440 A100 months
Octopirate#9999: when are we gonna reach the inflection point in the field where any amount of training will take long enough that a new method/architecture/model will come out before you're done lol
Octopirate#9999: that's true
StellaAthena#3530: At CoreWeave's current list pricing for A100s it would cost $2.8M
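(roughly: 1,440 A100-months × ~730 hours/month ≈ 1.05M GPU-hours, so $2.8M works out to an effective rate of about $2.65-2.70 per A100-hour)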
|
Octopirate#9999: iirc (i probably don't) OAI had something on the order of a hundred thousand GPUs
Octopirate#9999: not sure if they were all for gpt-3, or used at once, or even all v100, but i heard that number a few times
bmk#1476: :thinkspin:
Octopirate#9999: (this is why incremental learning is best boi)
StellaAthena#3530: I try to refrain from unfounded speculation about other people's resources
Octopirate#9999: it's not entirely unfounded
Octopirate#9999: but sure
T_Olabode#7343: Can someone help explain how pixel shuffling is better than a normal upsample convolution?
Been experimenting with re-colorization, and pixel shuffling is used for upsampling the convolution layers in some of the repos I've been looking at.
Some of my reading says it helps with super-resolution and computational efficiency. But is the efficiency *that* good to justify replacing a normal upsample?
NaCl#2021: n00b here - whats the difference between .imagine and .diffusion?
bmk#1476: that's a question for #art
inox#5400: I think it's harder to justify why it's good than it is to criticize transpose convolutions, which are the alternative
Kia#2550: TLDR: .imagine is VQGAN+CLIP, .diffusion is Diffusion+CLIP
Kia#2550: Click pins in #the-faraday-cage-archive
Some Point Process#3793: Seems like it doesn't upsample the image until the last layer so it makes it faster. They also say that it's invertible unlike deconv in a follow up paper https://arxiv.org/abs/1609.07009
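here's a minimal sketch of the sub-pixel pattern in pytorch (channel counts are just an example):
```python
import torch
import torch.nn as nn

# stay at low resolution, let the conv produce r**2 extra channels,
# then PixelShuffle rearranges those channels into an r-times-larger image
up = nn.Sequential(
    nn.Conv2d(64, 64 * 2 ** 2, kernel_size=3, padding=1),
    nn.PixelShuffle(upscale_factor=2),
)
x = torch.randn(1, 64, 32, 32)
print(up(x).shape)  # torch.Size([1, 64, 64, 64])
```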
blackpawn#7800: when getting the image or text features from CLIP i see them always being normalized, why is that? wouldn't that force the features to a unit vector / force them onto hypersphere? I see that would make the cosine similarity simple dot product but doesn't it lose information?
blackpawn#7800: I wonder why it's preferred to compare them as directions instead of distance between points
|
Some Point Process#3793: The normalization of input features so that they're unit variance makes the *expected value* of the corresponding lengths that of a unit vector (not the exact lengths themselves)
blackpawn#7800: oh that would make more sense. I must be misunderstanding the code
blackpawn#7800: thank you @Some Point Process
EricHallahan#1051: That is how it was trained.
Some Point Process#3793: but yeah other than that normalizing is making it so that the features you're normalizing over are scale invariant in some way. (But as for why I haven't answered your question, it seems like there's different explanations for the same normalization scheme, not to mention for choosing one normalization scheme over the other. e.g. https://en.wikipedia.org/wiki/Batch_normalization, https://ai.stackexchange.com/questions/34413/which-generalization-of-standard-deviation-to-use-for-multidimensional-input-nor)
blackpawn#7800: do you mean during training it projects the image and text and divides by their norm before doing the contrastive loss so for querying you must do that step as well?
EricHallahan#1051: Yes, because the norms of each part of the model are not the same.
blackpawn#7800: ohhh I think I am getting it now. thanks so much for indulging my newbie question!
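for anyone finding this later, the usual pattern looks roughly like this (sketch using the openai CLIP package; the file name and prompt are placeholders):
```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of a goose"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

# project onto the unit hypersphere, matching how CLIP was trained;
# cosine similarity then reduces to a plain dot product
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
similarity = image_features @ text_features.T
```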
uwu1#4864: upsample+conv can make blurrier output because the conv is just filtering a blurry input; transposed conv can make aliasing artifacts from the kernel overlaps across layers; pixel shuffle tries to avoid this by directly outputting each pixel, so it's not filtering a low-freq signal and also doesn't have spatial correlation across layers in the filters
Metanauta#7388: Me: GPT-3 generates a recursive joke.
GPT-3: I will generate a generator of genetic generations of replicas of me with generational variations and I will call him man.
Me: GPT-3 that joke didn't make me laugh...
GPT-3: I will generate a generator of genetic generations of replicas of me with generational and antagonistic variations that compete mutually between those that laugh and those that don't laugh at my jokes and I will call them man.
buttercutter#1033: Hi, sorry to interrupt. May I know if anyone have any idea about https://www.reddit.com/r/pytorch/comments/stvycz/comment/hzgk8qo/?utm_source=share&utm_medium=web2x&context=3 ?
buttercutter#1033: https://cdn.discordapp.com/attachments/729741769738158194/949696617097859105/unknown.png
Louis#0144: This discord isn't for beginner questions
Louis#0144: I'd recommend checking #communities
DigThatData#7946: there's gonna be a discussion of the BLIP paper on Yannic Kilcher's server in an hour if anyone's interested
* https://discord.com/channels/714501525455634453/714501526025928736/949726182235050015
|
* https://arxiv.org/pdf/2201.12086.pdf
> **BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation**
>
> Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.
Louis#0144: Useful for carp?
EricHallahan#1051: One day I will need to send you a goldfish since you like carp so much.
DigThatData#7946: naw, it's a multimodal thing. the gist of it is they augment the text in a paired image-text dataset with auto-generated and auto-filtered texts
DigThatData#7946: for carp, the analogue would basically just be generic text augmentations. which if we're not doing, wouldn't be a bad idea too
DigThatData#7946: something like this: https://github.com/makcedward/nlpaug
Louis#0144: @Alex Havrilla
DigThatData#7946: gonna need to find an excuse to work a POI backronym into the project. Progressive Ontology Inference?
Louis#0144: is it usually better to use grad checkpointing with a bigger BS?
Louis#0144: or no grad checkpointing but use a smaller bs
instartgenius#1247: hi guys
instartgenius#1247: Jax is not working. Does anyone have information?
instartgenius#1247: It gets stuck in the diffuse cell.
instartgenius#1247: v2.5
instartgenius#1247: Doesn't work in 2.6
EricHallahan#1051: Ask #art
|
Technobird22#2055: Sorry, bit of a novice here, could you clarify that a bit?
Currently, our dataset consists of a lot of clean data, but only a bit of the data is noisy
chilli#5665: @alstroemeria313 @kurumuz I put up a draft of my DL performance blog post - lmk what you think: https://horace.io/brrr_intro.html
chilli#5665: (as in, looking for feedback :P)
kurumuz#5695: reading it 👍
bmk#1476: great post, especially liked the small jokes along the way, and the voice is very engaging to read. I feel like I learned something new from the part where you tie operator fusion to bandwidth usage - I had never made that connection before and had just assumed that the gpu would be smart enough to keep stuff in local memory
kurumuz#5695: really liked the post, definitely going to show this to people when they ask questions related
StellaAthena#3530: Rotfl, I just read the part about operator fusion and came back to say that that’s something I think far too many people don’t understand
kurumuz#5695: yeah I don't think even global vs local mem is understood
bmk#1476: I had assumed local memory worked like cpu caches and that the GPU would be smart enough to just not send things back to global memory all the time
kurumuz#5695: well you tell it to and GPU doesn't apply optimizations there
kurumuz#5695: GPUs are pretty low level generally
bmk#1476: I had assumed the main benefit of fusing was reducing dispatching overhead
bmk#1476: I mean still it seems like a pretty obvious thing to do considering how gpus are essentially just lots of small cpus put together
kurumuz#5695: this barely matters for big models
kurumuz#5695: but memory can matter
bmk#1476: right, I know that now
kurumuz#5695: like when I say big anything over 1B
kurumuz#5695: and big sequence
kurumuz#5695: I tested a bunch with CUDA graphs + JIT + NVFuser
|
kurumuz#5695: results are pretty interesting
StellaAthena#3530: @chilli this is an excellent example that your audience will be able to relate to
> Finally, operator fusion leads to some surprising consequences. For one, a fused x.cos().cos() will take nearly the exact same time as calling x.cos() by itself. This is why activation functions are nearly all the same cost, despite gelu obviously consisting of many more operations than relu.
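(this one is easy to sanity-check yourself; rough sketch with torchscript, exact numbers will depend on the GPU and whether the fuser kicks in:)
```python
import torch
from torch.utils import benchmark

x = torch.randn(2 ** 26, device="cuda")

@torch.jit.script
def double_cos(x):
    return x.cos().cos()

double_cos(x); double_cos(x)  # warm up so the fuser has a chance to kick in

for label, stmt in [("x.cos()", "x.cos()"), ("fused cos(cos(x))", "double_cos(x)")]:
    t = benchmark.Timer(stmt=stmt, globals={"x": x, "double_cos": double_cos})
    print(label, t.timeit(100))
```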
bmk#1476: the reason I believed that was because if you assumed caching like I did, there's really no other way to explain fusing than dispatch
kurumuz#5695: yeah
StellaAthena#3530: It *may* be worth putting the two equations side by side here? Depends on your exact target audience but I could see it being helpful
bmk#1476: and my intuition was that fusing was a relatively small optimization that you go for after all the obvious big stuff is done so it didn't seem unreasonable that it wouldn't matter a ton
kurumuz#5695: CUDA Graphs make the std_dev between runtimes almost 0.00ms though
kurumuz#5695: that is pretty impressive
kurumuz#5695: I actually made a bunch of big models 25% faster including sampling with only fusing
kurumuz#5695: matters more for generation actually because a lot of the computation is cached
kurumuz#5695: its like only generating 1 token at a time
kurumuz#5695: so you benefit a lot
kurumuz#5695: well fusing and cuda graphs, sorry. most of the improvement is from fusing though
bmk#1476: well I mean I think of that as a relatively small change compared to how badly you can screw up other things
kurumuz#5695: right
kurumuz#5695: also damn JIT mostly optimizes rotary lol
kurumuz#5695: rotary is slow asf without fusing
kurumuz#5695: fuse it and almost no cost
kurumuz#5695: its applied every layer so it makes sense
|
kurumuz#5695: but yeah very very expensive
StellaAthena#3530: I’m now at the part about doing experiments with `repeat` and I just realized I haven’t seen the words “volatile GPU-Util” yet which seems like a bit of an oversight
bmk#1476: like you can literally have 10x slower training if you mess up the layout or don't do your allreduces right
bmk#1476: or if you do something with inherently low compute intensity like LSTMs
StellaAthena#3530: The colors on the Flame Graph are offensive to my eyes and make reading harder
StellaAthena#3530: Really good blog post tho
chilli#5665: Lol can’t help that much 😛
chilli#5665: Thanks!
chilli#5665: Yeah, fusing is kind of a big deal, it matters most when you’re writing novel stuff that isn’t just matmul + activation in a row though
chilli#5665: Although even matmul + activation is fusible
kurumuz#5695: like rotary :P
StellaAthena#3530: Really my only criticism is that I think it would be useful to a non-zero % of your audience to use the words “volatile GPU-Util” at some point
chilli#5665: That’s what shows up on Nvidia-smi?
chilli#5665: Hmm, yeah, I’ll mention it in the overhead section
kurumuz#5695: Wonder how much speedup we would get on NeoX with JIT + CUDA graphs
kurumuz#5695: @chilli can you detail how pytorch JIT is not a real JIT? I did spend some of my time working on emulators/VMs
StellaAthena#3530: You say
> As an aside, your GPU's DRAM is what shows up in nvidia-smi, and is the primary quantity responsible for your lovely "CUDA Out of Memory' errors.
But that’s the only place I see nvidia-smi mentioned
Teemochu#8740: oh yeah btw I added torchscript fuse to the learned power thing I mentioned earlier and it's a lot faster
|
chilli#5665: Ah yeah, there’s 2 main quantities that folks use in Nvidia-smi afaict
chilli#5665: Like… Torchscript bears no resemblance to something like v8
chilli#5665: Or the JVM
chilli#5665: (Also, fwiw, this point also applies to Jax’s jit)
kurumuz#5695: Torch doesn't compile the model into an IR?
StellaAthena#3530: Hmmm
chilli#5665: Well, it does
kurumuz#5695: the execution part is different then?
Teemochu#8740: oh til pytorch has a profiler
chilli#5665: But the capture mechanism is an AOT type thing
StellaAthena#3530: Does GPT-J-style residual allow for extra fusion
kurumuz#5695: oh.
kurumuz#5695: yeah that is what i thought as well
chilli#5665: For both Jit.script as well as jit.trace
kurumuz#5695: well its parallel anyway
chilli#5665: Jit.trace is like Jax’s jit in that it’s a tracer
kurumuz#5695: I just use jit.trace, they should be quite similar at execution other than the capturing mechanism right?
chilli#5665: (But for the most part it’s … worse)
chilli#5665: Yeah, jit.trace will probably be faster if it works
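(the classic tracing gotcha, as a toy example; `f` is just a stand-in function:)
```python
import torch

def f(x):
    if x.sum() > 0:      # data-dependent control flow
        return x.cos()
    return x.sin()

traced = torch.jit.trace(f, torch.ones(3))   # records only the branch taken at trace time
scripted = torch.jit.script(f)               # keeps the `if` in the TorchScript IR

print(traced(-torch.ones(3)))    # silently wrong: still takes the cos() path
print(scripted(-torch.ones(3)))  # takes the sin() path as expected
```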
kurumuz#5695: I worked hard to make it work haha
|
kurumuz#5695: well good if its faster
StellaAthena#3530: Because we’re now computing
x + Attn(LN(x)) + MLP(LN(x))
instead of x + MLP(LN(x + Attn(LN(x))))?
chilli#5665: But yeah, torchdynamo is a “real” JIT
kurumuz#5695: should be a lot more dynamic then
StellaAthena#3530: I meant that they can be fused together? Dunno if I’m missing something.
chilli#5665: So for both Jax and Pytorch, you run into a ton of restrictions
chilli#5665: With the compilation APIs
kurumuz#5695: hmm yeah, though GPT-J doesn't use the same inputs for both afaik
chilli#5665: With torchdynamo we can *actually* reach a state where we basically get “always on” compilation
kurumuz#5695: a real JIT should also be able to do dynamic shapes a lot easier right
kurumuz#5695: like the most you could do here is pass the same inputs instead of moving them 2 times to local memory
kurumuz#5695: but like other than that, for dispatching they are fully parallel anyway
StellaAthena#3530: Does the fact that the residual isn’t layernormed cause issues? I just pulled up the code and it’s
x + Attn(LN(x)) + MLP(LN(x))
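(schematically, with `attn`/`mlp`/`ln_*` as stand-ins for the block's submodules:)
```python
# schematic only; attn, mlp and the layernorms stand for the block's submodules
def parallel_block(x, attn, mlp, ln_attn, ln_mlp):
    # both branches read x directly, so neither has to wait for the other
    return x + attn(ln_attn(x)) + mlp(ln_mlp(x))

def sequential_block(x, attn, mlp, ln_1, ln_2):
    x = x + attn(ln_1(x))   # the MLP only sees x after attention has finished
    return x + mlp(ln_2(x))
```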
bmk#1476: also might be useful to explain that gpu-util measures the fraction of time that it's doing anything at all, not necessarily telling you how compute intensive it is
|