alstroemeria313#1694: i have a naive model parallel implementation
alstroemeria313#1694: it only scales to two GPUs
alstroemeria313#1694: but it should work for this if i could only get two large enough GPUs
random person#5234: Well you could try a pair of A6000
random person#5234: There might be some vast ai instance with that
kurumuz#5695: A6000s are 48GB, not 80
alstroemeria313#1694: ...can i do bfloat16 actually
alstroemeria313#1694: idk if good idea
alstroemeria313#1694: like bfloat16 activations on one 80GB A100
random person#5234: Oh I thought the VRAM would scale in this case with 2.
alstroemeria313#1694: that's only 96 total and I am OOMing on 80
Kharr#7888: I assume you're already gradient checkpointing and using gradient accumulation?
alstroemeria313#1694: i tried checkpointing and it didn't help. gradient accumulation not applicable to this
alstroemeria313#1694: memory usage only went down slightly with checkpointing.
alstroemeria313#1694: i might have to do some complicated things to checkpoint properly, idk
alstroemeria313#1694: i am using VGG-19 as a feature extractor and getting activations from six layers
alstroemeria313#1694: then forming the Gram matrix from the activations
alstroemeria313#1694: and if i checkpoint the model only then the activations still get saved before making the Gram matrix and they're *huge*
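Below is a minimal sketch of the fix being discussed: checkpoint a function that runs the feature extractor *and* forms the Gram matrix, so only the small Gram matrix is stored and the huge activations are recomputed on the backward pass. The VGG slice and layer choice here are illustrative, not the exact setup from the conversation.
```python
import torch
from torch.utils.checkpoint import checkpoint
from torchvision import models

vgg = models.vgg19(weights=None).features[:4]  # e.g. up to relu1_2

def gram_from_image(x):
    act = vgg(x)                                      # huge: (n, c, h, w)
    act = act.flatten(2)                              # (n, c, h*w)
    return act @ act.transpose(1, 2) / act.shape[2]   # small: (n, c, c)

x = torch.randn(1, 3, 512, 512, requires_grad=True)
gram = checkpoint(gram_from_image, x, use_reentrant=False)
gram.sum().backward()  # activations inside are recomputed here, not saved
```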
Kharr#7888: If you're pulling from every layer then checkpointing does not work well
alstroemeria313#1694: it's only like 1/3 of the layers but it still seems bad
alstroemeria313#1694: i think i would need to discard every intermediate and only keep the Gram matrix or smth?
alstroemeria313#1694: It's small
Kharr#7888: Do you need to backprop through it?
alstroemeria313#1694: Yes
Kharr#7888: You might be able to checkpoint the individual modules instead of whole layers. I've noticed some weird behaviors with checkpointing when using data from multiple layers
alstroemeria313#1694: modules?
alstroemeria313#1694: it's just an nn.Sequential
Kharr#7888: nn.Sequential is just executing a module list in order
alstroemeria313#1694: yeah
alstroemeria313#1694: the modules in the module list don't have submodules
Kharr#7888: I was just looking up the config, surprised it's using up so much memory with checkpointing. It should behave well 🤷
pragmaticml#1730: Since the 6 layers you have selected all need to go into the gram matrix you could maybe explicitly checkpoint those (since you'll need all those activations at the same time anyhow for back pass from gram matrix) rather than using the checkpoint sequential utility?
alstroemeria313#1694: it's the feature extraction
alstroemeria313#1694: i grab gigantic internal activations and process them and backprop through that
alstroemeria313#1694: there are five separate gram matrices then i do a different thing with the sixth layer's activations
alstroemeria313#1694: i was explicitly checkpointing the areas i wasn't extracting features from
alstroemeria313#1694: like not with the sequential utility but manually
pragmaticml#1730: 👍 thanks for the clarification, I misunderstood how your graph was structured
alstroemeria313#1694: like the region of the model between feature extraction point 2 and feature extraction point 3 would be a single call to the manual checkpointing thing
pragmaticml#1730: Sounds right to me -- I got nothing 🤷
nshepperd#2316: you may want to do some sort of nested checkpointing
nshepperd#2316: like, make a function that takes the input image, and returns the activations and gram matrix at point 1. and checkpoint that
nshepperd#2316: then use that as the first step of a function that takes the input image and returns the activations at point 2 and the gram matrices for points 1,2
nshepperd#2316: and checkpoint that
nshepperd#2316: etc
nshepperd#2316: this does more computation but saves less activations
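A rough sketch of this nested scheme, with placeholder stages standing in for slices of the real feature extractor; each level recomputes everything before it on backward, so the big early activations are never all live at once.
```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

# Placeholder stages; the real ones would be slices of the extractor.
stage1 = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
stage2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(64, 128, 3, padding=1), nn.ReLU())

def gram(act):
    a = act.flatten(2)
    return a @ a.transpose(1, 2) / a.shape[2]

def up_to_point1(x):
    act1 = stage1(x)                 # layers before extraction point 1
    return act1, gram(act1)

def up_to_point2(x):
    act1, g1 = checkpoint(up_to_point1, x, use_reentrant=False)
    act2 = stage2(act1)              # layers between points 1 and 2
    return act2, g1, gram(act2)

# outermost call; extend the same pattern for points 3..6
x = torch.randn(1, 3, 256, 256, requires_grad=True)
act2, g1, g2 = checkpoint(up_to_point2, x, use_reentrant=False)
(g1.sum() + g2.sum()).backward()
```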
alstroemeria313#1694: this still saves some activations though?
alstroemeria313#1694: i think the main problem is that the first two layers have absolutely huge activations
alstroemeria313#1694: bc they are pre-downsampling
alstroemeria313#1694: the same reason i couldn't scale model parallel to more than two GPUs well
alstroemeria313#1694: memory usage was too unbalanced.
nshepperd#2316: it does the forward pass multiple times
alstroemeria313#1694: for two GPUs I put the first two layers and the first max pooling layer on one GPU and all the other layers on the other GPU
alstroemeria313#1694: bc it minimizes comms if i do downsampling before transferring activations
alstroemeria313#1694: and this is approximately memory balanced
nshepperd#2316: and throws away the early activations for the first forward pass
alstroemeria313#1694: ahh
nshepperd#2316: i need to implement my remat thing, I've just been putting it off to play with TPUs ^^;
UnsupervisedLearner#4148: What is currently working best for training large LMs? Still BERT and Autoregressive?
UnsupervisedLearner#4148: Anything taken from the self-distillation and contrastive stuff working for large vision models then applied to language?
StellaAthena#3530: BERT was state of the art three years ago mate 😛
UnsupervisedLearner#4148: masked pretraining is conceptually and implementation-ally very simple and therefore popular
StellaAthena#3530: I would check out “task fine-tuning” like FLAN and T0
https://arxiv.org/abs/2110.08207
https://arxiv.org/abs/2109.01652
UnsupervisedLearner#4148: I am personally looking to novel loss and training schemes on raw untrained weights
UnsupervisedLearner#4148: are MoE still not bending curves on scaling?
StellaAthena#3530: MoE is weird. It changes the game in terms of what matters. I haven’t experimented with them personally yet, but the conclusions from the cutting-edge research indicate it’s a poor approach for common use but potentially good if you have the $$$
StellaAthena#3530: Specifically, if we hold performance constant you reach that level of performance faster with a MoE model.
However if you hold *model size* constant, you’ll get more out of a 40 GB model that’s dense than one that’s MoE
UnsupervisedLearner#4148: I expect any enormous ML system to have to shard execution wherever it can while still getting acceptable performance on the system's workload
UnsupervisedLearner#4148: isn't there some crazy latency from one side of the brain to the other?
UnsupervisedLearner#4148: this is what I remember. I remember the chinese big tech company multimodal omg so big params DALL-E clone being MoE and then I started focusing on other things for a bit
StellaAthena#3530: Most users are constrained more by memory than by money (or, their money constraints come from needing to buy more memory)
StellaAthena#3530: So most users are better off training for longer on the biggest GPUs they can afford
UnsupervisedLearner#4148: VRAM in particular or any volatile memory?
StellaAthena#3530: VRAM
UnsupervisedLearner#4148: hmmm we need to democratize gpus
UnsupervisedLearner#4148: everyone needs a gpu box
UnsupervisedLearner#4148: with a A100
StellaAthena#3530: We aren’t talking about *a* GPU box
UnsupervisedLearner#4148: we need parallel computing supercomputers to then distill into personal computer sized usable models
StellaAthena#3530: People who are not memory constrained in my description are people with hundreds or thousands of A100s
UnsupervisedLearner#4148: but to be a personal computer sized and usable model people need more gpus
UnsupervisedLearner#4148: buy gpus people
alstroemeria313#1694: they are $$$
kurumuz#5695: even though MoE is FLOP efficient it is not VRAM efficient, and it is not trivial to build huge GPU clusters and make them fast
alstroemeria313#1694: also i use laptops and they will never have good GPUs
UnsupervisedLearner#4148: I would like to see models trained async across vast differences in distance and underlying hardware
kurumuz#5695: i am not really interested in that
kurumuz#5695: i dont even understand the interest
UnsupervisedLearner#4148: 10P params floating on a billion devices
kurumuz#5695: pffd
UnsupervisedLearner#4148: if bigger = better
bmk#1476: no thanks lol
kurumuz#5695: takes 3 hours for an update step
UnsupervisedLearner#4148: single updates translate to far more information gain the bigger the network
bmk#1476: I want homogeneous clusters with high reliability and bandwidth, low latency, and predictable network topology
kurumuz#5695: yes
UnsupervisedLearner#4148: @bmk that's small
UnsupervisedLearner#4148: we need global scale compute
UnsupervisedLearner#4148: and stuff like HOGWILD implies that the bigger the network the sparser the updates
UnsupervisedLearner#4148: if you combine with some token routing
bmk#1476: I bet that even before you factor in the enormous overhead resulting from that, you won't be able to collect as many GPUs as are in the largest GPU clusters
StellaAthena#3530: @UnsupervisedLearner The US, Chinese, Israeli, Russian, and probably several other militaries would like to have a word with you about that
StellaAthena#3530: Large computing systems are a national strategic resource
UnsupervisedLearner#4148: you could have optimized async models orders of magnitude bigger than currently possible
StellaAthena#3530: Solving international geopolitical cooperation is probably harder than developing AGI.
UnsupervisedLearner#4148: I mean we all trade with each other
StellaAthena#3530: This is fundamentally an infeasible project because the military will tell you to stop
UnsupervisedLearner#4148: They have not yet told me to stop
bmk#1476: Summit has like 28k high end GPUs, for example
UnsupervisedLearner#4148: Imagine Summit running 24/7 training the omnimodel mega-model
bmk#1476: I bet you need at least 2x that due to the costs of distributing it out, but even then, good fucking luck finding that many GPUs around the world lmao
StellaAthena#3530: You are not anywhere close to the scale you’re talking about wanting. Right now you’re a blip. If you actually try to coordinate millions of GPUs across countries they will intervene
UnsupervisedLearner#4148: They did not intervene in eth
bmk#1476: then just build your own Summit
UnsupervisedLearner#4148: I am
bmk#1476: it's probably multiple times cheaper and easier than trying to do a crazy async heterogeneous thing
StellaAthena#3530: Eth the cryptocurrency? Do you really think that cryptocurrency is comparable to large AIs?
UnsupervisedLearner#4148: I am and it is going to work well and outcompete and you will regret not listening to my schizoid rants about such topics
bmk#1476: folding@home has, like, 3x overhead or something, and folding *isn't even latency constrained*
UnsupervisedLearner#4148: it sets floor prices on the use of cloud gpus
kurumuz#5695: lol
kurumuz#5695: i do love delusional rants yes
UnsupervisedLearner#4148: there are methods to verify execution traces that are O(1) in time and space for verification
UnsupervisedLearner#4148: that are brand new research with no popular libraries yet
UnsupervisedLearner#4148: that could make async computation like that way less coordination heavy
bmk#1476: dont those methods usually make the work of the computation itself a lot harder though
bmk#1476: the cost of making it cheap to verify is that the cost to write the proof increases a lot
UnsupervisedLearner#4148: hmmm it's possible but as new early stage research that hasnt been optimized a hundred times over I will wait to see. I dont even have a library worth playing with yet
bmk#1476: and where do you even plan on getting these gpus anyways?
bmk#1476: through cloud services?
bmk#1476: at those prices you might as well just buy the gpu directly for your own physical cluster
UnsupervisedLearner#4148: I will make an api to create VRChat avatars using my super awesome outperforming omnimodal thing
kurumuz#5695: so did you measure the latency from one part of the world to another while transferring big gradients
UnsupervisedLearner#4148: and I will convince people to crowdfund the initial model
kurumuz#5695: are you a narcissist
Deleted User#0000: https://www.openmined.org
kurumuz#5695: serious question
UnsupervisedLearner#4148: been done already by the people who run big internet databases, this is well studied. I dont personally have a number but I know it's under the slow end of human reaction time
bmk#1476: oh, so youre counting on people contributing to it by volunteering their own gpus?
kurumuz#5695: ok i dont think this is similar to running databases
UnsupervisedLearner#4148: I honestly do not care how it gets started
UnsupervisedLearner#4148: I will bicycle power 10_000 gpus personally
kurumuz#5695: are you aware of the bandwidth requirements of Model Parallelism
alstroemeria313#1694: yeah it's not just ping latency you have to worry about, but also the latency involved in transferring lots of data
kurumuz#5695: yep
UnsupervisedLearner#4148: which model architecture and what is your parallelism strategy
UnsupervisedLearner#4148: doesnt have to be all at once, for conceptual example, check out 1-bit adam
UnsupervisedLearner#4148: look I will handle logistics
kurumuz#5695: well with a cluster that big you will want to do pipeline, data, and model parallelism (Megatron-style MP)
bmk#1476: what im saying is that it's really fucking hard to get 20k gpus in a way that's significantly cheaper than just buying the gpus and putting them in a cluster, and certainly not in a way that's enough to offset the additional overhead
UnsupervisedLearner#4148: I came here asking research questions cause I trust you guys to stay on top of the most interesting parts of the ML field
kurumuz#5695: uhh yea
kurumuz#5695: I mean go make it work cheaper/faster than a centralized cluster ig
kurumuz#5695: and train a super big model
UnsupervisedLearner#4148: I dont think I can do faster but I may be able to do bigger. Just like more devices run linux because it's open source by design
bmk#1476: I severely doubt it will be bigger, given iso-cost
UnsupervisedLearner#4148: We'll see okay Im off love you guys great chat
StellaAthena#3530: Can you provide a citation for this claim?
uwu1#4864: I would like to see that too, I've only read that it's possible for boolean circuits, but the transformation from a normal program to a circuit isn't factored into that complexity. there are compact _interactive_ proofs for linear algebra but as far as I know there aren't any static/communication-free ones
uwu1#4864: maybe this one? https://eprint.iacr.org/2013/879.pdf
uwu1#4864: ok I just skimmed it. not 110% sure, but it looks like they basically make proofs that prove their VM did the/any computation correctly for some max input byte length and timestep bounds, rather than having to regenerate and prove the circuit for each program.
nshepperd#2316: i would imagine that the overhead involved in constructing such proofs defeats the advantage of having more gpus several times over
uwu1#4864: yeah def
uwu1#4864: i do feel that for ML ops it could be possible, with randomization and using more properties of the actual domain rather than turning it into a generic arithmetic circuit
naclbbr#9203: tbh I'm still interested in distributed training possibilities, but unless the network is *always* large enough (e.g. large cryptos where everyone grinds to mine) it is subject to a 50% attack, on top of the overheads
naclbbr#9203: and it doesn't look like the crypto guys who need it the most have solved the 50% attack issue either
naclbbr#9203: the solution would always be "the house" providing a large enough computing value or nominal value, i.e. it has to be centralized to some degree
uwu1#4864: the above is talking about zero-knowledge proofs, which don't require blockchain consensus to be verified. e.g. you could just publish tasks to a public board, receive the result and a zk proof, and be able to verify it yourself, without needing other parties to also confirm it as in a blockchain.
IGg#7871: Hi, could we work on an Eleuther AI co-pilot in the style of GitHub Copilot?
IGg#7871: ?
IGg#7871: could we work**..}?
cifkao#0540: Hi, how well shuffled is the Pile? If I take a couple thousand examples from the first shard, will that be a reasonably random sample of the whole training set?
bmk#1476: yeah it will be
kurumuz#5695: so for neo training they were just tokenized as is and token chunks were not shuffled?
kurumuz#5695: which means document level global shuffling ig
kurumuz#5695: instead of token sequence level
bmk#1476: right
spacesloth#0666: Hello all, I've been an MLE for 2 years now. I do CV for industrial automation (point cloud segmentation, object re-identification, detection, etc.). however, generative modeling is totally new to me. Past few days I've been obsessed with generating art using @alstroemeria313's notebooks, it makes me want to learn more about these techniques.
𓅬 gabriel_syme 𓅬#3220: really cool area of work, hopefully I can bring some of that (indirectly) to my field in the near future
CarsonPoole#0640: question regarding general fine tuning practices: say you have a text file that you're turning into a self supervised causal LM training objective. What actually makes up an epoch? Is an epoch every single possible window of 2048 tokens, sliding along by one each time? (meaning size of epoch is the number of tokens in the entire dataset minus 2048) or is each example just a window that slides along by 2048 tokens (making the size of the epoch = number of tokens in dataset // 2048) or is it some overlapping window, or something else?
CarsonPoole#0640: when you're doing something as large as the pile it doesn't really matter and you're just randomly sampling forevermore, but when fine tuning the distinction of what an epoch actually is matters quite a bit
CarsonPoole#0640: obviously if your attention mechanism has a causal mask then the model should in some sense be learning what comes next after every token, so an overlapping window is in some sense slightly redundant?
CarsonPoole#0640: but also then you have to wonder if there might be some things that are in the context that could be clipped out if you just do a window with no overlap
chilli#5665: why do you care about what an epoch is?
CarsonPoole#0640: just trying to have a standard unit to measure things by
StellaAthena#3530: Epochs are artificial notions that represent a data limitation rather than anything mathematically interesting IMO
chilli#5665: I think his point is that when fine-tuning you often do have data limitations
nostalgebraist#3542: the latter is what "epoch" conventionally means
nostalgebraist#3542: training for one or more "epochs" defined in the former manner sounds like a recipe for overfitting
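A minimal sketch of that conventional definition (one epoch = every non-overlapping 2048-token window seen once); `tokens` is a flat sequence of token ids for the whole file, and the names are illustrative:
```python
import torch

def make_epoch(tokens, ctx_len=2048):
    n = len(tokens) // ctx_len                        # drop the ragged tail
    return torch.as_tensor(tokens[:n * ctx_len]).view(n, ctx_len)

# With a causal mask, each window already yields a next-token target at
# every position, so sliding the window by one token is largely redundant.
```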
Unjay#9908: Statistically speaking there should be some people here that have **aphantasia** (inability to voluntarily create mental images - it's a spectrum, not a binary thing).
If you're OK with talking about it, I wonder how you view the process of creating images from text/concepts.
Do you have some expectations about the outputs when you start?
Seeing the initial, early results, can you visualize the final output?
zackt1234#6754: Danielrussruss has aphantasia, it’s in their Twitter bio so I imagine they would not be upset if I mentioned it this one time
zackt1234#6754: Also partly I’m interested too
zackt1234#6754: But I imagine it does not really make too much of a difference
zackt1234#6754: I know I never use visualization when making ai art
zackt1234#6754: Because it’s not like that can influence what the output will look like anyways
Daj#7482: I have aphantasia, and the way I would describe it for me is that my brain operates mostly in "semantic statements and graphs", rather than images. Like when you look at a written word, you don't "see" all the individual lines and strokes, your brain just naturally "post processes" it into a "pure symbol". My brain does that for ~everything and it's difficult to impossible for me to viscerally experience the finer visual details. So I can pretty well put together "prompts", but I can't turn them into images
mgostIH#0245: The Gary Marcus human
Daj#7482: now listen here u lil shit
Daj#7482: I prefer "GPT Humanoid"
zackt1234#6754: Haha this might be strange for you, but I read your name and was wondering where I remembered it from, and it was from the geometry dash community
mgostIH#0245: I'm popular :smart:
bmk#1476: the what
bmk#1476: is that a weird fork of Dash
zackt1234#6754: https://tenor.com/view/funny-chill-cute-kid-dont-worry-about-it-sweetheart-gif-17899112
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/927609738223570944/1404px-Dash_cryptocurrency_logo.svg.png
mgostIH#0245: game based on tapping the screen and epilepsy inducing gameplay
mgostIH#0245: I was the pioneer in cheating in it
bmk#1476: do you have the slightest idea how little that narrows it down
mgostIH#0245: Surprisingly my stuff got like 300k downloads per each executable :Blobsweat:
zackt1234#6754: Lol
mgostIH#0245: The size of what seem extremely small communities can be surprising
zackt1234#6754: Yes, some interesting characters back in the day
mgostIH#0245: You can make a good living out of being a commentator for a silly mobile game
Daj#7482: #off-topic
Daj#7482: https://markusstrasser.org/extracting-knowledge-from-literature/
> TL;DR: I worked on biomedical literature search, discovery and recommender web applications for many months and concluded that extracting, structuring or synthesizing "insights" from academic publications (papers) or building knowledge bases from a domain corpus of literature has negligible value in industry.
>
> Close to nothing of what makes science actually work is published as text on the web
🤔
Daj#7482: ~~maybe the problem was thinking biomedical research was real research~~
Daj#7482: ~~or because they use knowledge graphs~~
Unjay#9908: Quite ironically it is hard to imagine
I'm guessing I have some vague non-visual concept of what a cat is for example, but I think for most concepts I do rely heavily on visualizing them.
chilli#5665: No, I think I believe it.
https://www.google.com/amp/s/amp.theatlantic.com/amp/article/621058/
One thing that was interesting was that the Nazi blueprints the US found after WW2 were completely useless. On the other hand, the operation to bring top Nazi scientists to the US was very successful.
chilli#5665: Basically, science knowledge is largely carried in people, not papers
chilli#5665: And so, it's obviously very difficult to synthesize this stuff from papers alone
random person#5234: the intuition can be hard to infer solely from a paper
janus#0150: In tech we have open source code which can be somewhere in between a blueprint and the science itself.
mo#0466: AI research in the west is mostly open access anyway
mo#0466: feels like the article barely applies here
chilli#5665: You mean to AI research?
Daj#7482: Oh no I'm not doubting it, I completely expected this result, I was just poking fun
mo#0466: yea, AI research is especially open and accessible
mo#0466: feels like we're an open book in that respect
mo#0466: I think the article overvalues "practical experience".
mo#0466: yes, you need practical experience, but it's easily gained by practicing.
mo#0466: it's not some black magic recipe
chilli#5665: It's not about the practical experience, per se, it's about all of the implicit knowledge that's gained from actually doing the research that you don't get from reading the paper
mo#0466: yes, and gaining that "implicit knowledge" yourself is a matter of practicing.
mo#0466: and we're all in the same boat... we read papers and practice.
bmk#1476: I doubt there's *that* much implicit knowledge
bmk#1476: I mean, just 5 or 6 years or whatever ago I was absolutely clueless about ML, and I picked up basically all of my knowledge through reading arxiv papers and doing stuff with open source libraries
Daj#7482: And leading one of the largest open implicit knowledge acquisition projects in ML
Daj#7482: If anything, the lesson from EAI for me has been a loud and clear "implicit knowledge matters immensely more than what people put in papers for training a good LLM"
chilli#5665: uh, and talking to people
chilli#5665: lol
janus#0150: Like every day for 5 years, publishing multiple research papers with dozens of collaborators, and publishing several big ML open source libraries
janus#0150: lol
uwu1#4864: a lot of innovations (or at least, interesting papers) come from challenging that implicit knowledge too, so it's maybe good to not know too much
Deleted User#0000: I'm curious where imagenet, or the other vqgan+clip models, get the data for illegal drugs?
Deleted User#0000: Same with weapons, etc
Daj#7482: From watching PhD students spend their weekends
Daj#7482: They really should be a better role model for their models
BoneAmputee#8363: CLIP has seen 400 million images with captions. that's how it knows about everything
Deleted User#0000: Ah so any source shows up pretty much
random person#5234: i would assume if you were to scrape the internet, sketchy stuff shows up at a higher frequency than you'd expect
kiyoshi matsumoto#5637: does anyone have a colab for cc12m_1 with classifier-free guidance?
alstroemeria313#1694: it's not done training yet
kiyoshi matsumoto#5637: thank you! ball park eta?
alstroemeria313#1694: i grabbed an intermediate checkpoint to test
alstroemeria313#1694: probably like three days if it hasn't improved next time i grab a checkpoint
kiyoshi matsumoto#5637: gotcha thank you
alstroemeria313#1694: you can watch it train here https://wandb.ai/crowsonkb/kat-diffusion/runs/34cic7yn?workspace=user-crowsonkb
kiyoshi matsumoto#5637: wow this is neat! on epoch 20 as we speak
alstroemeria313#1694: it started at 15 bc it kept the number of epochs from the original training
alstroemeria313#1694: so i have done five epochs of CFG fine-tuning
alstroemeria313#1694: which is... probably enough tbh
kiyoshi matsumoto#5637: interesting can't wait to play around with it
Isaiah#0007: Excited to try it!
IvIoon Lad#8528: there are theses that say that people who are further away from a subject are more able to solve problems in it
IvIoon Lad#8528: does anyone know where i can find some resources or google colabs on the discord? there isn't a section; it might be useful maybe
Napolean_Solo#2907: Hi, anybody has any good sources that shares how fast-fashion companies use AI/ML to predict demand and trends? Much appreciated! Please do ping me.
Napolean_Solo#2907: (Specifically looking for the workings of the algorithm.)
Ravna#1831: I think linear regression or just blind guess is the right way.
Ravna#1831: When you have like 3 data points, don't invest 100 million in AI.
𓅬 gabriel_syme 𓅬#3220: is anyone aware of a faster way to get EOT embeddings out of a model than HF pipelines?
nmz#2103: what's the model being run on the-faraday-cage?
MicPie#9427: Is there a mirror where I can download (parts of) the Pile?
Kia#2550: VQGAN ImageNet 16K + CLIP ViT-B/16 (forgot the CLIP model) and secondary CLIP Guided Diffusion Model (I think it's ViT-B/16)
nmz#2103: thanks!
𓅬 gabriel_syme 𓅬#3220: I think this is one (that may or may not be up)
http://eaidata.bmk.sh/data/pile
MicPie#9427: For testing that works. 🙏
lyosha#9941: Hey gals & guys! I'm new here, I'm also an artist working with code and text2image in particular, developing my own methods, hoping to release some notebooks eventually, etc. I wonder what's the best channel here to find out about "Dango's cutn tweak" (mentioned by @Somnai_dreams on twitter a few days ago) and also also if there's any info on training custom upscalers. Sorry if it's the wrong place to ask! Glad to join the community
Kia#2550: #art
Kia#2550: Just search it and you can find what Dango's cutn method is
Daj#7482: https://youtu.be/86ib0sfdFtw
lfg
mistobaan#2737: 3 hours and 20 minutes of video 😄
mistobaan#2737: https://arxiv.org/abs/2110.09485
ewald#7730: this needs to be pinned or something
finetune#0907: depends on exactly what you mean. you can load the state dict for the model and index into the input or output embedding matrices according to the eot token id
kurumuz#5695: i think he means EOT embeddings up in the final layer or so
kurumuz#5695: not the WTE
finetune#0907: a lot of models tie the weights for the first and last layer, so should be the same for those. otherwise should still be possible to just index into that matrix
cfoster0#4356: Hidden embedding of the EOT token given the previous context, not the raw projection weights
cfoster0#4356: IIUC
finetune#0907: oic
kurumuz#5695: yes
𓅬 gabriel_syme 𓅬#3220: Yeah that is exactly it and I understand it's impossible to bypass for that reason. But still feels slow to me, maybe my dataset is just too big for this
CarsonPoole#0640: when people use embeddings for classification/clustering/search/etc are they just averaging over the sequence dimension?
CarsonPoole#0640: that obviously seems the most straightforward method but it seems like there would be information lost doing that?
zphang#7252: some use the `[CLS]`/`<s>` representation, others just average
𓅬 gabriel_syme 𓅬#3220: I'll be using the EOT representation over the average (at least at first), I was told that is also a nice option
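A sketch of pulling the EOT representation directly, skipping the pipeline machinery; the model name and single-example handling are illustrative (batching would additionally need care with padding side):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2").eval()

ids = tok("some document" + tok.eos_token, return_tensors="pt")
with torch.no_grad():
    out = model(**ids)

eot_embedding = out.last_hidden_state[0, -1]  # hidden state at the EOT token
```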
EricE#6375: Has anyone come across any projects out there focused on using nlp models for analysis & detection of inconsistencies between & within constitutional, statutory, regulatory & case law?
nev#4905: is the output projection in attention necessary? isn't it made redundant by V?
CRG#8707: You need it in MHA, if you only have a single head then it's "redundant" (although even then, the gradients are not exactly the same)
CRG#8707: If you just concatenate the values, each head could only affect one slice of the residual stream, W_o is needed to fix that.
nev#4905: I see, thank you
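A small illustration of the point above: concatenated head outputs each occupy only their own d_head-wide slice of the residual stream, and the output projection W_o is what lets every head write to every channel. Dimensions here are arbitrary.
```python
import torch
from torch import nn

d_model, n_head = 512, 8
d_head = d_model // n_head

heads_out = torch.randn(1, 10, n_head, d_head)  # per-head attention outputs
concat = heads_out.reshape(1, 10, d_model)      # head i only touches slice
                                                # [i*d_head:(i+1)*d_head]
w_o = nn.Linear(d_model, d_model, bias=False)
mixed = w_o(concat)                             # now every head affects every
                                                # residual channel
```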
alstroemeria313#1694: @nshepperd https://github.com/xuebinqin/U-2-Net huh...
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/928262247774576640/Screen_Shot_2022-01-05_at_4.22.43_AM.png
EricHallahan#1051: Have you not seen U^2 net before?
alstroemeria313#1694: I hadn't!
EricHallahan#1051: Oh interesting.
nshepperd#2316: morning~ :)
alstroemeria313#1694: to use it for diffusion we would have to either not use the deep supervision or like, figure out how to do it usefully
alstroemeria313#1694: morning!
nshepperd#2316: this seems quite :1000IQ:
alstroemeria313#1694: the internal u-nets are not residual
alstroemeria313#1694: i mean they do not have res blocks on the inside
alstroemeria313#1694: diffusion u-nets are generally made to go super deep by just stacking a bunch of residual blocks
nshepperd#2316: so, hm
nshepperd#2316: this is a unet, like the ones we use? but instead of ResConvBlocks at each level they have an *entire sub-unet*?
nshepperd#2316: and the sub unets just have convolutions
alstroemeria313#1694: yes, also i think there is just one sub-u-net rather than a stack of them
alstroemeria313#1694: Why do people do these :bigbrain: designs instead of residual blocks actually
alstroemeria313#1694: Are res blocks too slow/memory inefficient or smth
𓅬 gabriel_syme 𓅬#3220: this is the one I was talking about when we were discussing about outputs at each layer!
alstroemeria313#1694: ohhh
alstroemeria313#1694: i had seen different u-net variants with that and probably thought you were talking about them
𓅬 gabriel_syme 𓅬#3220: although I thought I remembered medical domain but no matter
alstroemeria313#1694: oh there were a number of those
alstroemeria313#1694: the original U-Net was for medical image segmentation
alstroemeria313#1694: dumb idea.
alstroemeria313#1694: We just do our current U-Net design and stack two of them
alstroemeria313#1694: Like in order
𓅬 gabriel_syme 𓅬#3220: a unet babushka
nshepperd#2316: ^_^
alstroemeria313#1694: Because.
alstroemeria313#1694: Diffusion models are essentially a ton of U-Nets in order anyway, they all just have the same weights
𓅬 gabriel_syme 𓅬#3220: the u-net part is because you use those right
𓅬 gabriel_syme 𓅬#3220: or do you mean it like more generally
nshepperd#2316: eheh
nshepperd#2316: like you run the unet, add noise, then run it again
alstroemeria313#1694: Actually.
alstroemeria313#1694: For progressive distillation.
alstroemeria313#1694: Can we take the original net and duplicate it a few times then fine-tune *that*
nshepperd#2316: hmmm
nshepperd#2316: that might be worth doing for the last few stages?
alstroemeria313#1694: If you duplicate the net it should be able to do in one step what the original net did in two like, automatically
alstroemeria313#1694: So what if you then take the duplicated net and fine-tune it to match the results of four teacher steps.
nshepperd#2316: yeah
alstroemeria313#1694: Like, can it use the extra model capacity to do better than 1/2 as many steps.
alstroemeria313#1694: Assuming you take off the projection to 3 channels and back
alstroemeria313#1694: idk, it's early in the morning ☕
nshepperd#2316: i thought the main reason we do 1/2 as many steps on each stage of progressive distillation is just because it's faster to do 2 steps of sampling
nshepperd#2316: like you could directly train on results of 1000 ddim steps, but it would be way too slow
alstroemeria313#1694: yeah.
nshepperd#2316: but.. yeah
𓅬 gabriel_syme 𓅬#3220: excuse my ignorance, what happens in training?
𓅬 gabriel_syme 𓅬#3220: how many steps is it usually for a single..image? or batch
alstroemeria313#1694: we train each timestep separately
alstroemeria313#1694: sampling a single timestep at random for each batch item
alstroemeria313#1694: for progressive distillation we sample a starting timestep randomly for each batch item and do 2 teacher steps w/ the current spacing then compute the single step student target
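A heavily simplified sketch of that loop: noise a clean image to a random timestep, take two DDIM teacher steps at the current spacing, and train the student to match in one step. The placeholder model, cosine schedule, v-parameterization, single shared timestep per batch, and output-space MSE are all simplifying assumptions (the actual recipe solves for an equivalent prediction-space target).
```python
import math
import torch
from torch import nn

steps = 1000
t = torch.linspace(0, 1, steps + 1)
alphas, sigmas = torch.cos(t * math.pi / 2), torch.sin(t * math.pi / 2)

class TinyModel(nn.Module):  # stand-in for the real U-Net
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, 3, padding=1)
    def forward(self, x, i):
        return self.net(x)

teacher, student = TinyModel(), TinyModel()

def ddim_step(model, x, i, j):
    v = model(x, i)                           # model predicts v
    pred = alphas[i] * x - sigmas[i] * v      # implied clean image
    eps = sigmas[i] * x + alphas[i] * v       # implied noise
    return alphas[j] * pred + sigmas[j] * eps

def distill_loss(x0):
    i = torch.randint(2, steps + 1, ()).item()
    x = alphas[i] * x0 + sigmas[i] * torch.randn_like(x0)
    with torch.no_grad():                     # two teacher steps...
        x_mid = ddim_step(teacher, x, i, i - 1)
        x_tgt = ddim_step(teacher, x_mid, i - 1, i - 2)
    return (ddim_step(student, x, i, i - 2) - x_tgt).pow(2).mean()

loss = distill_loss(torch.randn(4, 3, 32, 32))
```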
𓅬 gabriel_syme 𓅬#3220: thank you :hap:
nshepperd#2316: @alstroemeria313 btw i am trying to figure this out... isn't this weird? that one memorized image is only 1 out of 2008, it doesn't have any duplicates or anything https://cdn.discordapp.com/attachments/729741769738158194/928268455050887269/mem.png
nshepperd#2316: which, like
alstroemeria313#1694: huh
nshepperd#2316: for it to appear twice, doesn't that mean it's not properly matching the distribution?
alstroemeria313#1694: 2008 is kind of small and it may memorize, but
alstroemeria313#1694: this is the birthday problem isn't it
nshepperd#2316: heh maybe
alstroemeria313#1694: unless it has shown up in more than one grid
nshepperd#2316: hm let me generate more grids
nshepperd#2316: the demo grids from my training script all use the same seed
alstroemeria313#1694: ~6.17% chance of sampling the same image twice with 2008 images and 16 samples
alstroemeria313#1694: according to an approximation i found on wikipedia
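For reference, the quoted figure is consistent with the standard birthday-problem approximation P ≈ 1 − e^(−k²/2n), with k = 16 samples per grid and n = 2008 images:
```python
import math

n, k = 2008, 16
p_collision = 1 - math.exp(-k**2 / (2 * n))
print(p_collision)  # ≈ 0.062, matching the ~6.17% quoted above
```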
nshepperd#2316: well it hasn't shown up in three random demo grids of 16, so ^_^
nshepperd#2316: i guess it is not that common
nshepperd#2316: anyway i am now training with some augmentations to help reduce the memorization too
nshepperd#2316: flip and translate
nshepperd#2316: the bad images for memorization seem to be the ones that are exactly 128x128 instead of larger, so it always gets the same cutout
nshepperd#2316: and it learns to use the padding i guess
alstroemeria313#1694: ahhh
nshepperd#2316: what I am doing now is I pad the images to 160x160 each
nshepperd#2316: if they are smaller
nshepperd#2316: then take a random 128x128 cutout
nshepperd#2316: I mask the losses so that it doesn't learn the padding
nshepperd#2316: this is still suboptimal probably since it may be learning to pay attention to padding in the input
nshepperd#2316: but it won't affect generation so it should be fine
nshepperd#2316: i suppose properly i could reduce the mask by the size of the receptive field too
nshepperd#2316: so the gradient would never even see the padding
nshepperd#2316: but i think it's too big for that with this arch
tpapp157#3643: Partially. In progressive distillation the student is initialized with the teacher's weights. That means if you try to change the training objective too much (skip too many diffusion steps at once) you'll get bad results. The point of progressive distillation is to give the student time to very gradually adjust its weights to the new objective.
tpapp157#3643: You can probably take bigger steps in the first few iterations, though. Jumping directly from 1024 to 256 (or maybe even 128) is probably doable without losing quality. But I would definitely stick to the 2:1 schedule for later iterations.
tpapp157#3643: It's an open question if you randomly initialized the student if you could jump directly from 1024 to the final step count.
tpapp157#3643: What padding are you using? I always use nearest neighbor padding with images. Never ever use default zero padding.
nshepperd#2316: zero padding lol
tpapp157#3643: Oh yeah, then your network is 100% learning features based on the zero padding.
nshepperd#2316: hm replicate is probably better yeah
nshepperd#2316: maybe
nshepperd#2316: it will have a slightly harder time noticing the padding to use it for locating the image
tpapp157#3643: I really don't understand why zero padding is still the default setting. I guess because changing it would break previously trained models that use it. So many papers over the years have shown how zero padding introduces a wide range of undesirable artifacts and especially how it completely breaks translation invariance.
tpapp157#3643: Just goes to show how easily bandaid solutions implemented out of convenience can long outlive their intended lifespan.
nshepperd#2316: well i wish at least jax had other padding options in the conv operator
nshepperd#2316: but xla is missing that too
nshepperd#2316: so you have to waste a bunch of memory on a manually padded copy of the features
nshepperd#2316: i guess
tpapp157#3643: That's unfortunate
nshepperd#2316: pytorch is the same i think. there's no fused conv op with different padding
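For what it's worth, PyTorch does expose non-zero padding modes at the module level, though under the hood it still does an explicit pad followed by an unpadded conv (i.e., not fused), which is the overhead being discussed:
```python
import torch
from torch import nn

conv = nn.Conv2d(64, 64, 3, padding=1, padding_mode='replicate')
x = torch.randn(1, 64, 32, 32)
y = conv(x)  # same spatial shape; edge pixels replicated instead of zeroed
```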
tpapp157#3643: Is this what you're looking at? https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.pad.html
nshepperd#2316: yeah that's the 'manual' padding op
nshepperd#2316: i'd really prefer those options were all part of https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.conv_general_dilated.html
tpapp157#3643: oh right
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/928321836327010364/IMG_0706.png
chilli#5665: :thonk:
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/928321876340666429/IMG_0707.png
alstroemeria313#1694: Is this one of the "backprop through the entire process of solving the SDE" thing
alstroemeria313#1694: Except they don't store intermediate activations for all of the steps?
alstroemeria313#1694: But it's still too slow to throw at diffusion bc our models are so gigantic so we just have to use SDEs which have a form where we can train the timesteps independently?
chilli#5665: Not sure - don’t think so actually.
alstroemeria313#1694: oh
alstroemeria313#1694: bc that is what torchdiffeq does
alstroemeria313#1694: for ODEs.
chilli#5665: Oh really?
chilli#5665: My understanding was that it was more common to solve another SDE in the backwards pass
alstroemeria313#1694: ah
igoro#7477: @Kharr , @CRG asking here about a discusion from #gpt-j, since it's a bit on a tangent from GPT-J:
> Kharr — Yesterday at 6:47 AM
> I was testing something similar to this yesterday. My biggest issue with Softmax attention (and activation in general) is its inability to say "I don't know" which requires it to park somewhere. I tried a few variants that allow null attention and it seems like you don't need Softmax attention for good LM
It seems like you could fix this issue with attention without necessarily abandoning Softmax? I.e., give the Softmax some options to pick a position that gives you zero V? Seems like there would be a few ways of making that work...
CRG#8707: Memory Key Values does basically this, see: https://discord.com/channels/729741769192767510/730090096287547444/743960665818923139
Kharr#7888: Yes, definitely. You can concat a learned global key which points to a 0 value and the model can learn to query it as required to give null attention.
Kharr#7888: This is one of those silly/funny one-off comments that you should try. Here's the code:
```python
# Put this in your attention __init__:
self.global_key = nn.Parameter(nn.init.normal_(torch.zeros(1, 1, 1, dim), std=0.02))

# Put this in your attention forward after you create key/query/values
# (before the key transpose):
bs, n_head, seq_len, dim = key.size()
# Prepend the learned global key along the sequence axis...
key = torch.cat([self.global_key.expand(bs, n_head, 1, dim), key], dim=-2)
# ...and pad values with a matching zero row, so attending to the global
# key contributes a zero vector (i.e., "attend to nothing").
value = torch.nn.functional.pad(value, (0, 0, 1, 0))```
Kharr#7888: A quick and dirty test with a GPT model (same model, same init, same data and order) seems like an easy gain with a tiny tweak to attention. https://cdn.discordapp.com/attachments/729741769738158194/928374170511867994/unknown.png
tpapp157#3643: It's kind of introducing a 'bias' term to attention I guess.
Kharr#7888: It is specifically allowing attention to _not attend_ to the input and add 0 value. Normal Softmax attention always adds some vector and has no option to leave it unchanged. Maybe it matters, maybe it doesn't. NNs seem to find a good solution anyway.
igoro#7477: That's pretty cool. I guess one unknown is whether the difference largely matters before the model has learned the workaround, or whether there is an ongoing cost to the model from supporting the workaround, even later in training.
alstroemeria313#1694: hm so
alstroemeria313#1694: How do I want to tokenize the text for a diffusion model w/ an end-to-end trained transformer text encoder
nostalgebraist#3542: i did char level, after trying a custom BPE first
nostalgebraist#3542: not sure if char level was better or not
alstroemeria313#1694: ah
nostalgebraist#3542: that's for my project though, if the text has an abstract relation to the image then BPE might be better
alstroemeria313#1694: for mine, i want to fit larger prompts and don't necessarily want to have huge sequence lengths
alstroemeria313#1694: should I just reuse the CLIP tokenizer, is that OK
alstroemeria313#1694: Or is some other off-the-shelf tokenizer going to plainly do better.
nostalgebraist#3542: it depends, the CLIP tokenizer is poorly suited to my stuff because it lowercases and drops newlines
alstroemeria313#1694: Yeah
alstroemeria313#1694: Whereas both of those are fine for my use.
nostalgebraist#3542: i'd go with CLIP tokenizer then
alstroemeria313#1694: Do I need to like... how does an encoder handle variable sequence lengths
alstroemeria313#1694: Do I feed in pad tokens and take all the hidden states from the end
alstroemeria313#1694: Or do I need to mask the pad tokens so they aren't involved in the cross-attention
alstroemeria313#1694: Or the encoder's self-attention.
nostalgebraist#3542: i masked the pad tokens in both encoder attention and crossattn
alstroemeria313#1694: Ahh
alstroemeria313#1694: And I have to implement this manually don't I
alstroemeria313#1694: For cross-attention
nostalgebraist#3542: i mean, you could use my code, but it's Terrible™
alstroemeria313#1694: Eheh~
nostalgebraist#3542: https://github.com/nostalgebraist/improved-diffusion/tree/nbar-prod and specifically
https://github.com/nostalgebraist/improved-diffusion/blob/nbar-prod/improved_diffusion/text_nn.py
alstroemeria313#1694: So for cross attention I feed in the hidden states from the end of the encoder and the padding mask
nostalgebraist#3542: where unet.py, script_util.py, etc have been integrated with text_nn.py
alstroemeria313#1694: And after I calculate the attention matrix. I set the locations of the pad tokens to -inf?
alstroemeria313#1694: And then softmax?
alstroemeria313#1694: Is that all
nostalgebraist#3542: yeah -- my code has -10000 instead of inf, i tried inf first and it was unstable somehow
nostalgebraist#3542: relevant block is https://github.com/nostalgebraist/improved-diffusion/blob/nbar-prod/improved_diffusion/text_nn.py#L369-L372
alstroemeria313#1694: ...also i have forgotten which of q k and v come from the decoder and which from the encoder in cross-attention.
alstroemeria313#1694: ^^;;
alstroemeria313#1694: v has to be decoder bc it has to be the same dim as the decoder hidden states right?
nostalgebraist#3542: kv from "source", q from "target"
alstroemeria313#1694: oh
nostalgebraist#3542: if it's image looking at text, kv from text, q from image
alstroemeria313#1694: ty :)
alstroemeria313#1694: that tells me what dimension to make my projections
alstroemeria313#1694: well, sort of
alstroemeria313#1694: the encoder hidden states will probably be higher dim than many feature maps
nostalgebraist#3542: you have a choice because the image ch varies with res, i landed on "always project to highest dim available"
alstroemeria313#1694: so i project *down* to the feature map dim and then split into heads?
nostalgebraist#3542: ie project to text dim, not feature map dim
alstroemeria313#1694: Ah
nostalgebraist#3542: dunno if it's actually better, just seemed safer
alstroemeria313#1694: and no causal masks involved anywhere
alstroemeria313#1694: just the padding masks.
alstroemeria313#1694: do i layernorm both encoder hidden states and decoder hidden states?
nostalgebraist#3542: for the crossattn, i tried a bunch of norm types. ended up with layernorm for encoder, and groupnorm for image, specifically an AdaGN that uses the timestep
alstroemeria313#1694: ahh
alstroemeria313#1694: in mine i am using non-adaptive group norms rn
alstroemeria313#1694: (only the conv layers have an AdaGN after them)
nostalgebraist#3542: this part https://github.com/nostalgebraist/improved-diffusion/blob/nbar-prod/improved_diffusion/text_nn.py#L281-L288
which calls this https://github.com/nostalgebraist/improved-diffusion/blob/nbar-prod/improved_diffusion/nn.py#L22-L42
alstroemeria313#1694: *nods*
nostalgebraist#3542: in that case, just do groupnorm and add timestep embed. like, whatever you do normally
alstroemeria313#1694: *nods*
nostalgebraist#3542: oh, i mean, if your conv layers have AdaGN, then use AdaGN in crossattn. or that's what i do
alstroemeria313#1694: hm i am ending up w/ the wrong sequence length after multiplying by v
alstroemeria313#1694: OH I reversed q and kv proj lol
alstroemeria313#1694: ok i got it to work
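A condensed sketch of the cross-attention pieces worked out above: queries come from the image ("target"), keys/values from the text encoder ("source"), and pad positions are masked with a large negative value before the softmax (finite, per nostalgebraist's stability note). Shapes, dims, and norm placement are illustrative.
```python
import math
import torch
from torch import nn

class CrossAttention(nn.Module):
    def __init__(self, img_dim, txt_dim, n_head=8):
        super().__init__()
        self.n_head, self.d_head = n_head, txt_dim // n_head
        self.q = nn.Linear(img_dim, txt_dim, bias=False)       # q from target
        self.kv = nn.Linear(txt_dim, txt_dim * 2, bias=False)  # k, v from source
        self.out = nn.Linear(txt_dim, img_dim, bias=False)

    def forward(self, img, txt, pad_mask):  # pad_mask: (n, txt_len), True = keep
        n, img_len, _ = img.shape
        q = self.q(img).view(n, img_len, self.n_head, self.d_head).transpose(1, 2)
        k, v = self.kv(txt).chunk(2, dim=-1)
        k = k.view(n, -1, self.n_head, self.d_head).transpose(1, 2)
        v = v.view(n, -1, self.n_head, self.d_head).transpose(1, 2)
        attn = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        attn = attn.masked_fill(~pad_mask[:, None, None, :], -10000.0)
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(n, img_len, -1)
        return self.out(out)
```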
alstroemeria313#1694: https://gist.github.com/crowsonkb/6484912d0292215025defe212938a31e btw if anyone has a need for it, here are sqrtm() functions for PyTorch with custom backward passes
alstroemeria313#1694: they run on gpu
alstroemeria313#1694: sqrtm_eig() is exact and sqrtm_ns_lyap() is iterative
chilli#5665: 😮
alstroemeria313#1694: um, i should put a citation in that actually
alstroemeria313#1694: it was from this https://people.cs.umass.edu/~smaji/projects/matrix-sqrt/
alstroemeria313#1694: except i cleaned their code up considerably
alstroemeria313#1694: Ohh there is a later paper on this? https://openreview.net/forum?id=-AOEi-5VTU8
alstroemeria313#1694: i used this code to backprop through FID calculation lol
alstroemeria313#1694: oh. their code *actually didn't even have a torch.autograd.Function*, you had to call the backward pass function manually lol
alstroemeria313#1694: That is why I rewrote it, I remember now
alstroemeria313#1694: let me add that citation.
alstroemeria313#1694: > For the backward pass of the differentiable matrix square root, Lin & Maji (2017) also suggest viewing the gradient function as a Lyapunov equation. However, their proposed exact solution is infeasible to compute practically, and the suggested Bartels-Steward algorithm (Bartels & Stewart, 1972) requires explicit eigendecomposition or Schur decomposition, which is again not GPU-friendly.
alstroemeria313#1694: ...But Lin & Maji *had* an iterative backward
alstroemeria313#1694: I reimplemented it.
alstroemeria313#1694: And used it for actual things.
alstroemeria313#1694: This iterative backward pass looks really similar
alstroemeria313#1694: And I am not sure whether you can rewrite one as the other with a little rearrangement.
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/928429935893291128/Screen_Shot_2022-01-05_at_3.29.01_PM.png
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/928430039651991612/Screen_Shot_2022-01-05_at_3.29.27_PM.png
alstroemeria313#1694: eye_a_a = (3I - B_k^2)
alstroemeria313#1694: a = B_k
alstroemeria313#1694: q = C_k
StellaAthena#3530: If you don’t have novelty, make it by ignoring prior work
alstroemeria313#1694: They don't look exactly the same
alstroemeria313#1694: B is spsd but C need not be
alstroemeria313#1694: Yeah the difference between these two is down to a transpose
alstroemeria313#1694: You can write Lin & Maji's as C_k+1 = 1/2 (-B_k^2 C_k^T + B_k C_k B_k + C_k (3 I - B_k^2)).
alstroemeria313#1694: And the B_k+1 step is the same.
alstroemeria313#1694: C is the gradient wrt the square root output by the forward pass and may not be symmetric afaict.
alstroemeria313#1694: Aha....!
alstroemeria313#1694: Lin & Maji's is *wrong*.
alstroemeria313#1694: Either that or I reimplemented it wrong lol
alstroemeria313#1694: Let me double check that...
alstroemeria313#1694: Ah I missed that q could be non-symmetrical.
alstroemeria313#1694: Now it passes the gradient checker too, their original code was correct.
alstroemeria313#1694: And was actually the same as proposed in the 2021 paper.
alstroemeria313#1694: gonna fix my code now.
alstroemeria313#1694: (You have to pass in a non-symmetrical grad_output for it to break, I somehow thought it had to be symmetrical or else just optimized it wrong)
alstroemeria313#1694: ok fixed my code.
alstroemeria313#1694: since i was taking the traces of its output before when i used it, my grad_outputs would be symmetrical and thus not trigger the bug.
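For reference, a bare-bones Newton-Schulz forward iteration for the matrix square root (the forward pass these iterative backward passes pair with); the full versions with custom autograd Functions are in the gist linked above. Valid for SPD inputs.
```python
import torch

def sqrtm_ns(a, num_iters=10):
    norm = a.norm()                    # normalize so the iteration converges
    y = a / norm
    z = torch.eye(a.shape[-1], dtype=a.dtype, device=a.device)
    eye3 = 3 * torch.eye(a.shape[-1], dtype=a.dtype, device=a.device)
    for _ in range(num_iters):
        t = 0.5 * (eye3 - z @ y)
        y, z = y @ t, t @ z            # y -> sqrtm(a/norm), z -> its inverse
    return y * norm.sqrt()

a_half = sqrtm_ns(torch.eye(4) * 4.0)  # ≈ 2 * I
```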
Retoli Savoli#0469: Has anyone here taken a college course or equivalent for Machine Learning?
Retoli Savoli#0469: I haven’t pursued anything college related and I graduated HS last year, can’t think of anything to do education-wise and it seems like a reasonable thing
StellaAthena#3530: This is a discord server for ML researchers to talk about research. While I'm sure that the answer is yes, this isn't a good place to get advice about college courses. Some servers in #communities might be a good place to get advice
AI_WAIFU#2844: read the entirety of bishop's PR&ML
https://www.amazon.com/gp/product/0387310738/ref=pd_rvi_gw_2/102-4351241-7974535?_encoding=UTF8&v=glance&n=283155
guac#4716: And make sure it’s autographed https://cdn.discordapp.com/attachments/729741769738158194/928523606919622726/IMG_1036.jpg
kurumuz#5695: honkue
Louis#0144: Keep honking my child
Louis#0144: https://tenor.com/view/tiger-woods-gif-18852427
igoro#7477: I was reading the LoRA paper (https://arxiv.org/pdf/2106.09685.pdf) on parameter-efficient fine-tuning of large LMs. As I understand it, there are two main ideas:
1. Instead of fine-tuning an (n x m) matrix, train a pair of matrices (n x r) * (r x m) just for the residual (i.e., output to be added to the output of the original matrix), reducing the number of parameters
2. Only fine-tune the attention matrices, not the MLP matrices or anything else.
The first idea makes sense and sounds believable. The second idea sounds more surprising. Any intuition behind why fine-tuning just attention matrices is sufficient, at least for very large LMs? My intuition has been that most of the heavy lifting in the LM happens in the MLPs and attention mainly provides a communication channel.
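A minimal sketch of idea (1): a frozen base linear layer plus a trainable low-rank residual, with B zero-initialized so fine-tuning starts from the exact pretrained function. Dims and scaling are illustrative; per idea (2), this would wrap only the attention projections.
```python
import torch
from torch import nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                  # freeze pretrained weight
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r                       # residual starts at 0

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

layer = LoRALinear(nn.Linear(768, 768))              # e.g. a q/k/v/o projection
y = layer(torch.randn(2, 10, 768))
```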
Sphinx#2092: I don't think you really need "very large" either.
igoro#7477: Fair. 🙂 Any intuition behind why fine-tuning just attention matrices is sufficient for LMs?
kurumuz#5695: I just finetune the whole model for my task. These adapter-style finetunes are not good enough
igoro#7477: I guess maybe fine-tuning the V attention matrix ends up being equivalent enough to fine-tuning the MLP output matrix
kurumuz#5695: We are changing the model way too much
igoro#7477: roughly how big of a model are you working with?
kurumuz#5695: 6B and bigger.
igoro#7477: interesting
Sphinx#2092: It's not just an lm thing. You can play similar games with encoder-decoder , see e.g. https://arxiv.org/abs/2104.08771
igoro#7477: although if you have a huge model and lots of adaptations (... like OpenAI "fine-tuning" for a bunch of customers), presumably you have to do some kind of adaptations, whether they are good enough or not
zphang#7252: this is separate from prompt tuning, I suppose?
𓅬 gabriel_syme 𓅬#3220: I think generating adapters during training is an interesting idea, maybe you can all try that and let us know how it goes 🙂
kurumuz#5695: I just meant for a big scale finetune you will not really get good results with adapters/prompt tuning
𓅬 gabriel_syme 𓅬#3220: https://aclanthology.org/2021.findings-emnlp.410/
𓅬 gabriel_syme 𓅬#3220: (recycle sry)
Sparkette#4342: If the full model for GPT-3 or DALL-E or something were to somehow leak from OpenAI, would everyone be free to use it without legal repercussions?
Sparkette#4342: Because I don't think ML models are protected by copyright.
Sparkette#4342: Trade secret laws maybe, but AFAIK that protection doesn't last after something leaks to the public beyond the possibility of censorship.
Sparkette#4342: I don't have my hopes that high for a leak but it would be pretty awesome.
bmk#1476: :sus:
Kia#2550: :sus:
Sparkette#4342: Lol I'm not planning to hack them 😆
Kia#2550: In my own opinion, Everybody would be free to use the software without any legal repercussions, If It's able to be distributed fast enough
Sparkette#4342: I hope you're right; that would be nice
Sparkette#4342: Lol I don't know anything you don't about this
Sparkette#4342: I'm sure a useful model will leak from some company someday, even if not OpenAI
Kia#2550: Yup...But this is risky
Sparkette#4342: How so?
Sparkette#4342: I guess those questions would likely need to be settled in court first
Sparkette#4342: Is that what you mean?
Kia#2550: I mean for the person that leaks it
Sparkette#4342: Ah
Sparkette#4342: It wouldn't necessarily happen intentionally. Besides, some people are willing to take that risk. (Like Edward Snowden, though that's on a much larger scale)
Kia#2550: Who would know, You just don't want to get sued
Kia#2550: Would be great to open-source big things, just no one wants to take the risk
random person#5234: I would fully expect accidental
random person#5234: I mean people have leaked AWS keys that have a ton of compute credits before
bmk#1476: I can't believe I have to say this but please don't encourage people to leak stuff
kurumuz#5695: git gud instead of betting on someone stealing models
kurumuz#5695: super lame
bmk#1476: the virgin model stealer vs the chad model replicator
random person#5234: I doubt any of it would be useable.
random person#5234: since it will quickly be shut down
random person#5234: and no reputable org/people want to try it
guywhoknowsnothing#0218: Wouldn't it be largely valueless to the open source community though?
guywhoknowsnothing#0218: Would violate copyright law to ever try and release something open source derived from it, wouldn't it?
Kia#2550: Yup, but it would be helpful, like for platforms, or to take their methods and apply them to other models
Kia#2550: Yup
Kia#2550: Let's stop the conversation here, Because Im worried if someone actually do this and start looking at me
Sparkette#4342: I'm not betting on it; I know it's unlikely. But I'm still hoping it does happen because it would be great news for AI freedom.
Sparkette#4342: OpenAI is a huge company though. Do we really care more about their bottom line than we do about open access to models?
Sparkette#4342: That's only if AI models can be copyrighted, and the jury is still out on that one.
Sparkette#4342: (Not literally yet, to my knowledge)
EricHallahan#1051: I care about safety more than anything, and any leak or illicit smuggling or export of models is not safe by any stretch of the imagination.
cfoster0#4356: That's not the question, or the dichotomy
Sparkette#4342: Not safe for whom?
bmk#1476: this.. what? I don't even know how to respond to this
EricHallahan#1051: Not to mention it would probably be hella illegal.
nshepperd#2316: open access to models bad actually
Sparkette#4342: How?
bmk#1476: capabilities acceleration
|
kurumuz#5695: do you even know what kind of models they have internally
cfoster0#4356: Huh?
cfoster0#4356: *checks server description*
Sparkette#4342: If that means what I think it does then I see it as a good thing. But maybe it doesn't mean that.
bmk#1476: ok I'm eliding over nuance
kurumuz#5695: My argument is that we don't know what they have internally. Not talking about the models they publicly showcased
kurumuz#5695: because I don't think those are any threat
bmk#1476: but I mean.. I think we're in agreement that if we had a 1T model it would be a pretty bad idea to release it
Sparkette#4342: Not me
bmk#1476: our justification for replication expires once we get to 175B
EricHallahan#1051: Well it would presumably mean we are all closer to becoming paperclips.
𓅬 gabriel_syme 𓅬#3220: hmm does it though? what if the field moves to 500B
Sparkette#4342: Open access for the sake of open access is justification enough for me.
𓅬 gabriel_syme 𓅬#3220: seems a bit arbitrary stopping point, although tbf I understand the dangers. But the goal is to allow more people to also explore how these models fail, why, etc.
cfoster0#4356: I'm much much more worried about new techniques
kurumuz#5695: paperclipification for the sake of paperclipification
bmk#1476: well, we'd first have to wait for 1T models to be made by other people, and these models proven to be safe, before we can replicate 1T
Sparkette#4342: I wouldn't.
𓅬 gabriel_syme 𓅬#3220: but what's the point here? just to have a shiny collection of models on your HD?
𓅬 gabriel_syme 𓅬#3220: there's a point for releasing models and imo 'just releasing models' is not it
bmk#1476: i mean.. i think releasing 1T would definitely accelerate the development of new techniques
Sparkette#4342: No, to experiment with and learn from. Which imo everyone should be able to do
𓅬 gabriel_syme 𓅬#3220: let's be honest, a fraction of the ML community can do anything with a model @ 175B
cfoster0#4356: I actually don't think it would, any more than releasing 175B would
𓅬 gabriel_syme 𓅬#3220: and that ML community is a fraction of people that might want to use these things
bmk#1476: the timing also matters
cfoster0#4356: A public model of that scale is a substitute for new techniques, for a subset of tasks
Sparkette#4342: For me, the overarching good I strive for is individual freedom. So I'm always in favor of letting everyone have these things above just groups who have resources.
nshepperd#2316: what if individual freedom kills everyone
𓅬 gabriel_syme 𓅬#3220: freedom is great, but you also have responsibilities. Unless you live alone, in isolation from everything else
nshepperd#2316: like, murders them to death, IRL, forever
EricHallahan#1051: It is safe to say we will never hold a LM size record, even if someone came to us tomorrow with virtually infinite compute and a magical time machine that let us obtain such a model in an instant.
bmk#1476: if we could release 1T like a few years after the first 1T model is made by other people, then sure
cfoster0#4356: And any player that can run the 1T model infrastructure competently is already a risk for developing new techniques (ones that scale to :foom: )
𓅬 gabriel_syme 𓅬#3220: this is interesting since I would feel that is equally dangerous, or maybe ~~more~~less, to having a replication sooner. I say this because I expect whoever trains these models to put them in production
Sparkette#4342: Allowing murder is not compatible with individual freedom, because it's the victim's freedom which is more at stake there.
nshepperd#2316: maybe you shouldn't be in favour of it then
kurumuz#5695: we are talking about everyone
Some Point Process#3793: Yeah that's why we have negative rights, above all
Sparkette#4342: I'm not in favor of murder.
kurumuz#5695: like everyone getting murdered
kurumuz#5695: just because you want free access to models without thinking about the consequences
𓅬 gabriel_syme 𓅬#3220: seems to me eerily similar to the guns debate
Sparkette#4342: I'm pro-gun, in case you haven't guessed 😛
𓅬 gabriel_syme 𓅬#3220: Freedom leading to unnecessary deaths. Like I said, there are responsibilities in everything we do. Saying 'but that's not what I wanted to happen' is not enough
𓅬 gabriel_syme 𓅬#3220: it's fine, as long as you are clear on the implications it has to the real world
nshepperd#2316: if you want free access to models without caring about the consequences of everyone being murdered, you're in favor of murder
kurumuz#5695: pro-gun, anti proprietary?
bmk#1476: lets take this to #off-topic
𓅬 gabriel_syme 𓅬#3220: oh shit I once again thought we were there lol
Sparkette#4342: That's an odd slippery slope to assume
bmk#1476: **#off-topic**
Sparkette#4342: I'm more or less done with this conversation already, sorry though
bmk#1476: sorry for the emotional response earlier, a more level headed justification of our stance wrt releasing stuff is here -> https://blog.eleuther.ai/why-release-a-large-language-model/
paulbricman#2527: If one wanted to upskill on the engineering side of training large language models (e.g. GPT-J) on beefy hardware (e.g. multi-worker multi-GPU clusters), do you think that attempting to train small language models (distilgpt2) on potato hardware (a cluster of Raspberry Pis + old laptops) would lead to transferable skill building (parallelism, precision, speed/memory optimizations...)? Or are large LMs fundamentally different?
paulbricman#2527: Trying to think of accessible ways of tackling this https://www.alignmentforum.org/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers
Daj#7482: Yes absolutely doing this will give you transferable skills
Daj#7482: Large models have a bunch of extra trickiness but the process is fundamentally the same as with smaller models
Qq#7586: Hi! I've been trying to use a GAN (a DCGAN, in particular) to create 32x32 pixel art from a small database of ~800 such characters. However, if my understanding of convolution is correct, I think it may not be a good approach for small pixel art, as each pixel is important. The results so far have been kind of blurry, missing details such as one clear pixel for an eye, or a precise black outline. Does anyone have any advice? I'm considering changing architecture/approach. Thanks :) (apologies if this is a bad place to ask!)
asara#0001: I'd check this out (some great examples in the thread) <https://twitter.com/dribnet/status/1427613617973653505>, probably belongs in #off-topic though is all
Qq#7586: Thanks ill check it out :)
tpapp157#3643: Your first problem is that ~800 samples isn't nearly enough to get the sort of results that you're probably hoping for.
Qq#7586: Do you think data augmentation would help?
Qq#7586: I tried flipping stuff / shuffling about by a few pixels so I had a few 1000s, but I'm not sure if that's a good idea
tpapp157#3643: Some types of augmentation would help: color shifting, horizontal flipping, desaturation, etc. Your choice of domain probably eliminates other common augmentations like cropping, skewing, scaling, etc. But even with augmentation, 800 base samples just doesn't provide very much in the way of core variety and that's the real problem.
ari#9020: Pokemon GANs are a somewhat popular source of disappointment from what I've seen
Qq#7586: Haha yeah I've seen a few of those
Qq#7586: Ok thanks! I can probably find more, I'll look into it
Qq#7586: I was mainly wondering if there were any interesting points to make about network architecture here
tpapp157#3643: Yeah there just aren't enough pokemon to create a reasonably sized dataset. Combined with the huge variety and complexity of possible pokemon configurations. Also standard GAN training architectures and losses struggle with cartoon art styles. It's a recipe for failure before you even start.
dmayhem93#3202: you could maybe try https://github.com/mit-han-lab/data-efficient-gans, but I lean towards tpapp's view
tpapp157#3643: I'm kind of surprised that no one has put together a massive cartoon dataset.
Qq#7586: What kind of order of magnitude would be reasonable? 10k?
tpapp157#3643: At least. Preferably more. In general, more is always better. GANs are very data hungry.
nev#4905: we have anime datasets lol
zackt1234#6754: Yeah isn’t that what danbooru is
tpapp157#3643: I would only classify some anime as cartoon. And there's a lot of cartoon animation that isn't anime.
alstroemeria313#1694: didn't people actually succeed recently though
alstroemeria313#1694: was it projected GANs or the other one or both
alstroemeria313#1694: like, the one where they regularize the net to have smooth interpolations between latents in its internal feature map space.
nev#4905: projected GANs did do pokemon, but that was about using pre-trained features for the discriminator
alstroemeria313#1694: do you remember what the one i'm thinking about is?
nev#4905: https://cdn.discordapp.com/attachments/729741769738158194/929062744039235614/unknown.png
nev#4905: nope
nev#4905: would also be curious
Qq#7586: There was this recently, pretty impressive https://mobile.twitter.com/minimaxir/status/1470926983076855808
BoneAmputee#8363: tfw not familiar enough with more recent generations of pokemon to know how creative that model is :wojak_despair:
ari#9020: I think these are the finetuned ruDALL-E?
tpapp157#3643: Not bad. Still mostly just amorphous blobs but definitely an improvement.
ari#9020: <https://github.com/minimaxir/ai-generated-pokemon-rudalle> yep
tpapp157#3643: That would make sense. I suspect the true way to make a usable pokemon gan is to train a gan on images of real animals and then finetune to pokemon.
nev#4905: `@ai_curio` also made an automatic pokemon generator with ruDALL-E
nev#4905: https://twitter.com/ai_fakemon
they were actually published around the same time
uwu1#4864: you could use the pokemon 3d models to at least generate a lot of new views in addition to the 2d art for style
Kharr#7888: Is there a version of LM eval harness that runs in Colab?
chilli#5665: Or... you could also just work on PyTorch or other OSS frameworks if you wanted to tackle it more directly 😛
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/929124227783737354/unknown.png
sholto#2407: Quick check - does anyone know how to specify TPU-VMs for GKE? It seems like the default guides set up host+accelerator in the old style
𓅬 gabriel_syme 𓅬#3220: what's GKE? 🙂
𓅬 gabriel_syme 𓅬#3220: this is what I use but not sure it's the setting you describe: `gcloud alpha compute tpus tpu-vm create smth --zone europe-west4-a --accelerator-type v3-8 --version=v2-alpha`
bmk#1476: google kubernetes engine
bmk#1476: I think
bmk#1476: I don't know if you can even do so
Spacecraft1013#5969: the basic concepts of cluster computing transfer, so learning on a cluster of small systems will help you in running stuff on a cluster of large systems
sholto#2407: Yeah! Kubernetes - to make it easier to manage a cluster of multiple TPUs. And yeah that’s the command I’d use for 1
sholto#2407: Ah pity
𓅬 gabriel_syme 𓅬#3220: ah ok, sorry have never used it that way
triggerhappygandi#0001: https://twitter.com/julien_c/status/1479470557343199241?t=jgLHG-Hdv1L5Jsahuc7Cdw&s=19
I am happy no one understands exactly why this happens.
triggerhappygandi#0001: But also sad that we won't get answers anytime soon
𓅬 gabriel_syme 𓅬#3220: data issue?
kurumuz#5695: looks like optimizer tbh
kurumuz#5695: something blows up but not sure what
kurumuz#5695: just use adamw and tune your parameters :berk:
ilovescience#3282: this is just @boris (^_^)'s dall-e training...
ilovescience#3282: here: https://discord.com/channels/729741769192767510/795089627089862656/929020835015049256
𓅬 gabriel_syme 𓅬#3220: yeah it reminded me of a loss chart I recently saw
𓅬 gabriel_syme 𓅬#3220: quick question, what's the best way to cite GPT-Neo models?
triggerhappygandi#0001: the repo
triggerhappygandi#0001: Interesting... Hadn't seen it. Thanks
𓅬 gabriel_syme 𓅬#3220: thanks :hap:
ersatz#0001: What the fuck?
ersatz#0001: This must be a bug
ersatz#0001: Or I’m changing field
triggerhappygandi#0001: Fwiw when I've encountered something like this personally it happens way early like 5k steps in. Still wastes days worth of compute.
alstroemeria313#1694: https://twitter.com/eigenhector/status/1479518969488834560 ohhh
alstroemeria313#1694: So if I am training in fp16 and a bunch of the elements of my gradient are getting underflowed to zero
alstroemeria313#1694: Eventually I get a loss spike
alstroemeria313#1694: Because the Adam second moment got too low for some elements?
alstroemeria313#1694: And I can raise epsilon?
alstroemeria313#1694: (So long as I unscale the gradients before feeding them to Adam so that their scale has a consistent interpretation)
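For concreteness, a minimal sketch of the unscale-before-step pattern being described here, with a toy model and synthetic data (the raised eps is the value suggested above; everything else is illustrative, not from the original run):
```python
import torch

model = torch.nn.Linear(512, 512).cuda()  # stand-in for the real model
opt = torch.optim.Adam(model.parameters(), lr=1e-4, eps=1e-5)  # raised eps
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    x = torch.randn(8, 512, device='cuda')
    with torch.cuda.amp.autocast():
        loss = (model(x) - x).pow(2).mean()
    scaler.scale(loss).backward()
    scaler.unscale_(opt)  # grads now have their true (unscaled) magnitude
    scaler.step(opt)      # step() skips the second unscale automatically
    scaler.update()
    opt.zero_grad()
```
Because `scaler.unscale_` runs before `scaler.step`, Adam's eps is compared against gradients at their true scale rather than the loss-scaled one.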
𓅬 gabriel_syme 𓅬#3220: Yeah I did this with dalle
𓅬 gabriel_syme 𓅬#3220: Iirc I set it to smth like 1e5
alstroemeria313#1694: you mean 1e-5?
𓅬 gabriel_syme 𓅬#3220: I thought default was less than 1e-7
𓅬 gabriel_syme 𓅬#3220: Sry yes
alstroemeria313#1694: default is 1e-8
alstroemeria313#1694: from the paper
alstroemeria313#1694: and in most implementations
𓅬 gabriel_syme 𓅬#3220: Yeah 1e-8 was way too small for fp16 for me
𓅬 gabriel_syme 𓅬#3220: 5 and 6 worked I believe, have to go back into the server to remember which
alstroemeria313#1694: i need to try this to see if it stabilizes fp16 diffusion model training
nshepperd#2316: i think possibly the *adam updates* should be clipped
alstroemeria313#1694: ahh
alstroemeria313#1694: well, they are, by the lr, right?
nshepperd#2316: like, dividing by the second moment + eps is supposed to normalize the updates to stddev 1
nshepperd#2316: and then you multiply them by the lr
alstroemeria313#1694: ah
nshepperd#2316: so wouldn't it make sense to like, (grad / (second_moment.sqrt() + eps)).clamp(max=20) or something
nshepperd#2316: to clamp it to a max of 20 * the stddev
nshepperd#2316: basically
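A rough, untested sketch of that clamp as a single hand-rolled Adam-style step (bias correction is omitted, and the symmetric clamp and helper name are my additions, not from the chat):
```python
import torch

@torch.no_grad()
def clipped_adam_step(p, state, lr=1e-4, betas=(0.9, 0.999), eps=1e-8,
                      max_update=20.0):
    """One Adam-style step whose normalized update is clamped before
    the learning rate is applied."""
    g = p.grad
    state['m'].mul_(betas[0]).add_(g, alpha=1 - betas[0])
    state['v'].mul_(betas[1]).addcmul_(g, g, value=1 - betas[1])
    update = state['m'] / (state['v'].sqrt() + eps)
    update.clamp_(-max_update, max_update)  # at most ~20 "stddevs" per step
    p.add_(update, alpha=-lr)

# Usage: state = {'m': torch.zeros_like(p), 'v': torch.zeros_like(p)}
```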
𓅬 gabriel_syme 𓅬#3220: Probably unrelated but I remember I had to slightly limit my bs when doing DALLE in fp16. And also I had deepspeed helping with dropped batches
alstroemeria313#1694: adamax won't help will it?
alstroemeria313#1694: it still will have the decaying norm problem
nshepperd#2316: is that the one where it keep the max of the second moment instead of an ema?
alstroemeria313#1694: it's a decaying max norm
nshepperd#2316: ohh huh
nshepperd#2316: yeah same problem i think
alstroemeria313#1694: it replaces the second moment update with v_t = max(beta_2 * v_t-1, abs(grad))
alstroemeria313#1694: then doesn't sqrt it
nshepperd#2316: "adascalar" might actually be better in some cases
alstroemeria313#1694: Is there an L1 version of Adam too
nshepperd#2316: like the thing where you keep a single scalar second moment for each tensor
nshepperd#2316: bc the second moment will have less variance on things with sparse updates
alstroemeria313#1694: Like v_t = beta_2 * v_t-1 + (1 - beta_2) * abs(grad)
alstroemeria313#1694: Then don't sqrt it before using it
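Side by side, the two second-moment updates just described, as small helper functions (the names are mine; neither result is sqrted downstream):
```python
import torch

def adamax_v_update(v, grad, beta2=0.999):
    # Adamax: a decaying max of |grad| instead of an EMA of grad**2.
    return torch.maximum(beta2 * v, grad.abs())

def l1_adam_v_update(v, grad, beta2=0.999):
    # The "L1 Adam" variant floated above: an EMA of |grad|.
    return beta2 * v + (1 - beta2) * grad.abs()
```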
alstroemeria313#1694: yeah
nshepperd#2316: like the embedding vectors of a language model
alstroemeria313#1694: I have seen gradients for some of my param tensors be all zero though
nshepperd#2316: ah
alstroemeria313#1694: If only stuff like adagrad actually worked well
alstroemeria313#1694: What even. (EMA std of loss) https://cdn.discordapp.com/attachments/729741769738158194/929370188258902056/Screen_Shot_2022-01-08_at_5.45.09_AM.png
alstroemeria313#1694: These jumps up and down happen at epoch boundaries.
alstroemeria313#1694: Yes I am shuffling the data
nshepperd#2316: what the heck
alstroemeria313#1694: I KNOW RIGHT
nshepperd#2316: are you dropping the last partial batch
alstroemeria313#1694: no
nshepperd#2316: what if you do that
𓅬 gabriel_syme 𓅬#3220: I think your model is having a heart attack
nshepperd#2316: @alstroemeria313 what if the last partial batch happens to be really small, so its like batch size 1
alstroemeria313#1694: but why would that cause the entire next epoch to behave weird
nshepperd#2316: and this is messing with the second moment estimate
alstroemeria313#1694: loss std per epoch https://cdn.discordapp.com/attachments/729741769738158194/929374455187259402/Screen_Shot_2022-01-08_at_6.02.09_AM.png
alstroemeria313#1694: Like entire epochs behave differently
alstroemeria313#1694: If I group by epoch and take the std
nshepperd#2316: huhhhh
nshepperd#2316: what even
nshepperd#2316: why is it STILL periodic
alstroemeria313#1694: I KNOW
alstroemeria313#1694: So the only things I do on epoch boundaries are: make a demo grid and save it, save the model, and remake the data loader iterator
alstroemeria313#1694: The lr scheduler gets called every *step*, not every epoch, so not the culprit
nshepperd#2316: is this gradient nondeterminism somehow
alstroemeria313#1694: This is the code <https://gist.github.com/crowsonkb/83806be13080baaa39a8c07a093a5b61>
nshepperd#2316: like idk, GPU heating up and then throttling, causing different scheduling
alstroemeria313#1694: i... that should wash out on adjacent steps and not behave differently on different epochs
alstroemeria313#1694: huh
alstroemeria313#1694: but shouldn't it not alternate
nshepperd#2316: making the demo grid would reduce(?) the load for a bit
alstroemeria313#1694: since i do the same thing each epoch boundary
alstroemeria313#1694: shouldn't it just cause weirdness at the beginning of an epoch vs the rest of it?
alstroemeria313#1694: btw I keep seeing weird oscillatory behavior in like, DALL-E losses
alstroemeria313#1694: Also on epoch boundaries
alstroemeria313#1694: Actually let me go look at its loss std now
alstroemeria313#1694: I have the loss csv from one of those runs saved
nshepperd#2316: and then... i don't know there could be weird dynamics with the control system for chip temperature
nshepperd#2316: it's probably not this lol but it might be interesting to plot nvidia-smi stuff over time or something
tpapp157#3643: GPU temp should only affect compute speed. There would be enormous problems if it actually affected calculations.
alstroemeria313#1694: raw losses https://cdn.discordapp.com/attachments/729741769738158194/929376612225544272/Screen_Shot_2022-01-08_at_6.10.48_AM.png
alstroemeria313#1694: er, that's ema
nshepperd#2316: @tpapp157 yeah but gradients are non deterministic bc of the gpu scheduler
alstroemeria313#1694: but the std is fine https://cdn.discordapp.com/attachments/729741769738158194/929376816186138676/Screen_Shot_2022-01-08_at_6.11.35_AM.png
alstroemeria313#1694: why would... wait
alstroemeria313#1694: Wait
alstroemeria313#1694: What is the actual period of those std boundaries, is it really one epoch
alstroemeria313#1694: Or is it near one epoch but slightly out of phase and that's why it alternates between alternating and constantly high
alstroemeria313#1694: uh, how do i find this
alstroemeria313#1694: Also why am I not logging the within-batch std
alstroemeria313#1694: no it really is on epoch boundaries
alstroemeria313#1694: if i plot the individual epochs it is very visible
alstroemeria313#1694: So why, on this plot, does the loss go *up* during epochs
alstroemeria313#1694: Then jump down
alstroemeria313#1694: Like this is epoch 24 https://cdn.discordapp.com/attachments/729741769738158194/929379063519399987/Screen_Shot_2022-01-08_at_6.20.30_AM.png
alstroemeria313#1694: Also that gradual slide down at the beginning is an artifact of the EMA.
alstroemeria313#1694: It actually *jumps* down and then has a smooth up trend each epoch.
alstroemeria313#1694: If I filter out the previous epoch from the EMA this is visible.
alstroemeria313#1694: What even.
alstroemeria313#1694: Is shuffle=True not good enough somehow
nshepperd#2316: is this actually evidence for this theory
nshepperd#2316: bc the partial batch will have a higher std than usual
alstroemeria313#1694: but that's one batch. in an entire epoch.
nshepperd#2316: which will make the second moment estimate jump up
nshepperd#2316: and then slowly decreases back to normal over the rest of the epoch
tpapp157#3643: With regard to shuffling, how large is the buffer? A shuffle buffer of 1 is equivalent to no shuffling.
alstroemeria313#1694: ...but this plot is a mean not an std
alstroemeria313#1694: and it goes up not down?
nshepperd#2316: yes
nshepperd#2316: the mean of the loss right?
alstroemeria313#1694: yes
alstroemeria313#1694: EMA of mean loss for each batch
nshepperd#2316: so a jump up in second moment causes a jump *down* in effective learning rate
nshepperd#2316: which then slowly increases back to normal
nshepperd#2316: bc the grads are divided by the second moment estimate
alstroemeria313#1694: I used beta_2=0.95 for this run
alstroemeria313#1694: Bc it is a transformer
nshepperd#2316: ahh then the second moment would decay really fast
alstroemeria313#1694: also it behaves this way over the whole run and i was decaying lr
alstroemeria313#1694: so effective lr is not increasing into the too high zone, i think?
alstroemeria313#1694: it is what pytorch does
alstroemeria313#1694: that is, on epoch boundaries it draws a new random permutation
alstroemeria313#1694: and keeps it in memory
nshepperd#2316: idk i've noticed a change in lr having an effect even when already quite low
tpapp157#3643: I wonder if the optimizer is doing something weird under the hood each epoch. Like reseting its momentum.
alstroemeria313#1694: i can test some of these theories when i am done with the ms coco tinyglide test run
nshepperd#2316: like nowhere near too high to train
alstroemeria313#1694: the optimizer doesn't know about the epoch boundaries though
alstroemeria313#1694: The only thing that touches the optimizer on epoch boundaries is when I grab its state dict to save the checkpoint.
alstroemeria313#1694: lr decay is done each step, so it's not the issue
alstroemeria313#1694: it sounds like the first thing to do is start dropping the small batch on the end.
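In PyTorch that is just `drop_last=True` on the loader; a minimal sketch with stand-in data (shapes and worker count are illustrative):
```python
import torch
from torch.utils import data

dataset = data.TensorDataset(torch.randn(1000, 3, 64, 64))  # stand-in data
loader = data.DataLoader(dataset, batch_size=100, shuffle=True,
                         drop_last=True,           # no ragged final batch
                         num_workers=4, persistent_workers=True)
```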
alstroemeria313#1694: After that I can replace the low discrepancy timestep sampler w/ a uniform sampler
alstroemeria313#1694: (This is not causing the DALL-E model weirdness, low discrepancy is a diffusion model only thing)
alstroemeria313#1694: after that, shuffle the dataset myself
tpapp157#3643: Maybe a random seed issue?
alstroemeria313#1694: i'm not reseeding though
alstroemeria313#1694: in fact i never actually set the seed
alstroemeria313#1694: so pytorch draws it randomly at the start of the script
nshepperd#2316: oh yeah when you recreate the data loader iterators it usually kills and restarts the worker processes
alstroemeria313#1694: (reseeding for the demo grids and forgetting to fork the RNG was a previous bug)
nshepperd#2316: but that should be fine if it's not reseeding
alstroemeria313#1694: mine doesn't bc i set the option that told it not to.
nshepperd#2316: ah
alstroemeria313#1694: persistent_workers=True
alstroemeria313#1694: Also the seed *in the worker processes* does nothing in this code
alstroemeria313#1694: Bc it only applies to data augmentations you do in the worker processes
nshepperd#2316: oh
alstroemeria313#1694: And the shuffling is done in the main process.
alstroemeria313#1694: So I can also try not doing the demo grids or not saving
alstroemeria313#1694: Bc it has got to be one of those three things, demo grids, saving, or remaking the data loader iterator.
alstroemeria313#1694: (If it is not the small batch at the end that is.)
alstroemeria313#1694: It isn't something bizarre like tqdm messing with the data loader somehow, is it
alstroemeria313#1694: I could take the progress bar out.
nshepperd#2316: maybe try my stateless low discrepancy sampler instead of sobol engine, the `(torch.arange(batch_size) + torch.rand([batch_size])) / batch_size`
alstroemeria313#1694: ooh
alstroemeria313#1694: oh
alstroemeria313#1694: hm
alstroemeria313#1694: low discrepancy shouldn't be doing weird things exactly on epoch boundaries
nshepperd#2316: yeah it probably shouldn't
nshepperd#2316: oh, ideally this should be shuffled as well so the marginals are uniform per index within batch too
alstroemeria313#1694: ahh
alstroemeria313#1694: it... shouldn't super matter
alstroemeria313#1694: except that i am already suspecting the data loader of not shuffling correctly
alstroemeria313#1694: in which case it does
nshepperd#2316: yeah
alstroemeria313#1694: or maybe i made a mistake in the model and it leaks information across batch items
nshepperd#2316: ...huh, torch doesn't have a function for shuffling a tensor along an axis?
alstroemeria313#1694: you do a randperm then index into the tensor with it
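Putting nshepperd's stratified sampler and the randperm shuffle together, a small self-contained version might look like:
```python
import torch

def low_discrepancy_timesteps(batch_size, device='cpu'):
    """One uniform draw per stratum of [0, 1), then shuffled so the
    marginal at each batch index is uniform as well."""
    t = (torch.arange(batch_size, device=device)
         + torch.rand(batch_size, device=device)) / batch_size
    return t[torch.randperm(batch_size, device=device)]
```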
alstroemeria313#1694: ...Why is my loss std nan
nshepperd#2316: ...batch size 1?
alstroemeria313#1694: ...because i typoed .sqrt() as .std() ^^;;
nshepperd#2316: ^_^
Louis#0144: happens to the best of us
Louis#0144: jk thats weird alstro idk why you would do that
lapenzer#6456: Hi my name is Lapenzer, I am a mathematics student interested in theoretical physics. Is there any kind of project that may be related to physics or simulation? I am relatively new to programming
genetyx8#7543: @lapenzer this isn't really a place for newcomers. check out #communities
EricHallahan#1051: I am more curious what you would ask a question relating to theoretical physics and simulation/modeling thereof in a community overwhelmingly focused on natural language and image generation.
Some Point Process#3793: https://www.youtube.com/watch?v=86ib0sfdFtw
𓅬 gabriel_syme 𓅬#3220: I recommend looking towards the lab of Dr. Karniadakis and their incredible work on PINNs
𓅬 gabriel_syme 𓅬#3220: There's no project like that here but there is a lot of work outside. Another interesting place is NVIDIA and their SimNet (which is built from the work of the lab I mentioned above). I believe they were looking for interns only recently
Spacecraft1013#5969: This server isn't the right place to look for a tutor
bmk#1476: if you have specific, research-level questions you can ask them in this server but yeah this server isnt for tutoring or beginner help
EstebanSir#2189: hey, so, i was looking at https://github.com/pytorch/fairseq/tree/main/examples/moe_lm to see some specs about Fairseq GPT, and does the model dimension actually reflect the amount of tokens the model can take as input? (not that it would make much of a difference if it only pays attention to the last few ones) https://cdn.discordapp.com/attachments/729741769738158194/929590661026553866/unknown.png
EstebanSir#2189: im not sure what model dim would mean
ari#9020: No, these were all trained with a sequence length of 2048 tokens, per table 1 https://arxiv.org/pdf/2112.10684.pdf
EstebanSir#2189: ah i see!
lapenzer#6456: Hey is it possible for u to ping that message I couldn't find it above
𓅬 gabriel_syme 𓅬#3220: you mean the one you just replied to 🙂 or the work I refer to?
lapenzer#6456: the one u referred to
𓅬 gabriel_syme 𓅬#3220: hmm maybe this page: https://www.brown.edu/research/projects/crunch/george-karniadakis you can find PINNs in there
𓅬 gabriel_syme 𓅬#3220: (PINNS: physics informed neural networks)
lapenzer#6456: I will check this out
lapenzer#6456: i don't really know what that means but I will look into it, thanks
𓅬 gabriel_syme 𓅬#3220: np 🙂 I'm sure you'll get up to speed really fast with your expertise
spacesloth#0666: wow i almost missed this one, v nice follow up to their last ep about this :peepoLaptop:
lapenzer#6456: I really hope so!
lapenzer#6456: how did u hear about the opportunity tho
𓅬 gabriel_syme 𓅬#3220: I can't remember, i think some sort of social media. But there is a lot of activity in that area
nshepperd#2316: `gcloud alpha compute tpus tpu-vm scp --worker=all` is so unreliable what the heck
nshepperd#2316: like i can run it three times in a row and it will have copied the file to every worker except worker 3 or something
𓅬 gabriel_syme 𓅬#3220: I've noticed TPUs get quite unstable on weekends as well
nshepperd#2316: weird
nshepperd#2316: i've taken to using ssh to just cat things into files instead
nshepperd#2316: like `cat file | gcloud alpha compute tpus tpu-vm ssh --worker=1 my-tpu --command 'cat > file'`
nshepperd#2316: lol
nshepperd#2316: :guh: preempted again
𓅬 gabriel_syme 𓅬#3220: this makes me want to check mine
boris (^_^)#0348: Yeah I typically create a script which ends by creating a dummy file. I check for the file's existence at the beginning of the script to see if the command was already performed on this worker, and then launch the script many times on all workers
fe#0483: for future searchers: this issue was caused by a faulty power supply unit. Replacing the PSU with a new high quality one fixed the problem.
mr_seeker#1337: Trying to run Github's lm_eval_harness but the dummy runner gives errors. Is there a dummy runner that works?
StellaAthena#3530: Do you mean that ```bash
python main.py \
--model gpt2 \
--device cuda:0 \
--tasks lambada,hellaswag```
doesn't work? What error does it give?
StellaAthena#3530: Or the `DummyLM` class?
naclbbr#9203: FYI fairseq's dense models are only trained with a seq. length of 1024 (MoE models are 2048)
ari#9020: Looking at the paper, I think that only applies to their distillation experiments, not the released models
naclbbr#9203: Hmm, I ran the 13b one and I got a message that the model is only up to 1024 seq. length
EstebanSir#2189: I think i got a similar message running GPT-J, is it not an error with the tokenizer?
naclbbr#9203: actually I might be wrong. It might be that max_source_positions defaulted to 1024 when I tested
mr_seeker#1337: Dummy model does not work
mr_seeker#1337: Will complain about something missing
Metroproxyn#2769: Hey everyone, learned about the community from comments on the Reddit thread about Open Source. The organisation of your team is impressive! I want to start contributing. What would be the best way to do it?
EricHallahan#1051: Welcome! I saw that thread on r/ML yesterday, but somebody had already plugged us by the time I saw it.
If you have not done so already, read the #rules and the FAQ at https://www.eleuther.ai/faq. They contain a lot of useful information for getting familiar with what we do.
As for what there is to work on, we have a whole task board filled with ideas at https://board.eleuther.ai/ to keep track of things.
Spacecraft1013#5969: could you give an exact error?
Metroproxyn#2769: Ok, I got it, thanks for the information!
StellaAthena#3530: It doesn't on my computer
mr_seeker#1337: Main.py, line 64: invalid syntax?
StellaAthena#3530: Can you please post an actual stacktrace and open an issue on github? The way you're going about getting help is extremely inconvenient
mr_seeker#1337: I will, at the moment i am getting an error I am not expecting to see, being confused here too. Stack trace at this moment is difficult, since I am on mobile and if I try to copy-paste the output it also tends to copy all the colours too...
StellaAthena#3530: Thank you
mr_seeker#1337: ```
Traceback (most recent call last):
  File "main.py", line 71, in <module>
    main()
  File "main.py", line 43, in main
    results = evaluator.simple_evaluate(
  File "/home/julius/Documents/projects/lm-evaluation-harness/lm_eval/utils.py", line 156, in _wrapper
    return fn(*args, **kwargs)
  File "/home/julius/Documents/projects/lm-evaluation-harness/lm_eval/evaluator.py", line 50, in simple_evaluate
    lm = lm_eval.models.get_model(model).create_from_arg_string(model_args, {
TypeError: create_from_arg_string() takes 2 positional arguments but 3 were given
```
StellaAthena#3530: 1. That's a very different error than you previously described
2. I meant to make an issue and post the stack trace there, not to do it here
kurohagane#0887: Hey guys, sorry in advance if this is a dumb question.
So I'm a master's student in CS with a focus in AI/ML. I have to admit I'm not a very proactive student, so while I do the coursework, I haven't ever really gone beyond it to read research papers or the like yet, but my thesis looms large. I would like to be informed enough about the current state of the art in DL to pick a topic that could fit well within the scope of a master's thesis. How do you suggest I go about catching up on the research quickly in order to get an overview of the interesting stuff being done? I haven't really done that before so I don't really know where to start.
bmk#1476: this server isnt for beginner help, that being said, there's a pinned message in this channel with a bunch of useful resources
kurohagane#0887: yeah, I figured, sorry about that. I tried asking around on a couple of ML subreddits but didn't really get replies, so I figured maybe you guys at least could point me in some direction
inox#5400: if it's any comfort, DL believes in reinventing the wheel so you never have to read too far into the literature
inox#5400: all the best ideas get invented more than once
kurohagane#0887: I suppose that is reassuring, cause right now I don't know how much I don't know lol
kurohagane#0887: I'll check the pins too, thanks
𓅬 gabriel_syme 𓅬#3220: starting from an area you really like and/or enjoy helps
𓅬 gabriel_syme 𓅬#3220: Then google is your friend, find publications that fit. Then follow the thread from those publications
kurohagane#0887: I'll try that. I like to draw/paint as a hobby and thought maybe I could come up with a topic to somehow incorporate that, so maybe I'll try looking for that first
𓅬 gabriel_syme 𓅬#3220: I recommend looking in #art, a lot of incredible work there along with incredible people doing it
kurohagane#0887: yeah I've seen some of that stuff and a lot of it looks very cool, makes you wonder about how the generative stuff will be incorporated into digital art in the future, or if some functionality could be incorporated into the standard workflow of an otherwise manual artist
nev#4905: pytorch3d is already meta'd
https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/renderer/points/pulsar/renderer.py
> `# Copyright (c) Meta Platforms, Inc. and affiliates.# All rights reserved.`
ym#0104: I see that there's a 'distill...' branch for gpt-neox. Has anyone tried doing other sorts of compression techniques with the neox?
𓅬 gabriel_syme 𓅬#3220: you can find some discussion in the #gpt-neox-devs channel. There were some efforts, but I'm not sure where they are rn
cfoster0#4356: Anyone have stats on GPU finetuning throughput for any Neo models?
someKindaBean#8471: this thread has a really cool test of DeepAI's colorization API
someKindaBean#8471: https://twitter.com/gwenckatz/status/1381652071695351810
someKindaBean#8471: real: https://cdn.discordapp.com/attachments/729741769738158194/930523497732640788/EyyeXQHVEAI0rkj.png
someKindaBean#8471: DeepAI's colorized version from a desaturated image: https://cdn.discordapp.com/attachments/729741769738158194/930523593098555412/EyydSzuUUAA3Zq-.png
StellaAthena#3530: Another good thread on this: https://twitter.com/sam__goree/status/1381678881980215297
alstroemeria313#1694: > Since deep learning models are trained to minimize error, they will choose the color image which is in the middle
diffusion
alstroemeria313#1694: also someone else in that thread posted some sort of model that autoregressively samples chroma DCT coefficients conditioned on luma DCT coefficients, which also doesn't have this problem
AI_WAIFU#2844: really anything that doesn't assume unimodality will do
someKindaBean#8471: even if they use a model that's constrained that way, using HSV/HSL instead of RGB would probably fix that.
cfoster0#4356: You sure about that? 😛
someKindaBean#8471: lol, no
someKindaBean#8471: but it seems plausible that it could help, if the problem is as simple as "minimizing color space distance causes bland colors"
Sid#2121: I mean, super early colour film (given the date, this is probably autochrome) is not exactly a good representation of colour in reality either, most autochrome pictures are super highly saturated and inaccurate. Like, the AI colorised version looks more like what we would get from a modern digital camera
Sid#2121: And colour balance is a pretty subjective thing anyway
Sid#2121: https://twitter.com/gwenckatz/status/1381656888463396864?t=n6IT2WBkhNDVN8ONk5B4gQ&s=19 yeah I mean, it doesn't get the coloured trim on the building because that's an ood sample, but the hyper saturated originals are more to do with film stock than any type of objective reality
alstroemeria313#1694: it wouldn't fix it
alstroemeria313#1694: i have actually trained diffusion colorizers before that would like, in practice... take buildings like that and assign different parts of them random vivid colors
alstroemeria313#1694: then if you didn't like the colorization you could reroll and get something different
alstroemeria313#1694: found an old demo grid https://cdn.discordapp.com/attachments/729741769738158194/930564635806031902/demo_00565000.png
alstroemeria313#1694: left is original, middle is the b/w input, right is a sampled output given the input
alstroemeria313#1694: as you can see it sometimes has issues with determining the edges of objects and gives different parts of things different vivid colors
alstroemeria313#1694: but like... it probably just needs to be bigger and be trained more
alstroemeria313#1694: i forget if i was using self-attention on these too
alstroemeria313#1694: (it really needs it, i think)
alstroemeria313#1694: so like. the problem with just making the net output a distribution with multiple modes is that you have to somehow guarantee you get *consistent* colorizations i.e. you can't just output multiple modes per pixel and sample each pixel independently
alstroemeria313#1694: you need to like, use a diffusion model or an autoregressive model that outputs logits for color image tokens conditional on previous color image tokens and b/w image tokens, or some such.
Malka#9644: Hi. Probably a stupid question. Is it normal that I see no memory usage difference and no computation speedup using fp16? I am using this: https://pytorch.org/docs/stable/notes/amp_examples.html#typical-mixed-precision-training (here is my training loop: https://pastebin.com/ap0QyPMc )
alstroemeria313#1694: that's weird. fp16 works for me
alstroemeria313#1694: and that looks like how i do it
Malka#9644: I think it is a case of me holding it wrong, but not sure how
alstroemeria313#1694: what if you use large batch sizes, do you see a compute/memory difference then
alstroemeria313#1694: how big is the model?
Malka#9644: mostly wanted to know if there was nothing that made you scream 'wtf' while reading the code 😛
Malka#9644: model has 33,276,162 trainable parameters
encoder has 46,716,544 trainable parameters
alstroemeria313#1694: ahh
alstroemeria313#1694: what gpu?
Malka#9644: 3090 RTX
alstroemeria313#1694: ahh
alstroemeria313#1694: so new enough that you should see a speedup
Malka#9644: input is FFHQ, rescaled to 64x64. batch size 100
alstroemeria313#1694: ahh
alstroemeria313#1694: that's kind of small
Malka#9644: takes the whole GPU memory though
Malka#9644: my model might have some memory leak issue ?
Some Point Process#3793: Ampere series graphics cards don't support mixed precision tensor ops like the data center accelerators do (i.e. a100, v100)
Some Point Process#3793: So everything just force-runs on 16 bit
Malka#9644: aaaaaah, i would need a data center accelerator to have the computation speedup ?
Some Point Process#3793: Yeah, that would be my thinking 🙂
Malka#9644: ok that solves the 1st issue. Even if it does not improve speed, memory used should go down dramatically, right? I am seeing a difference, but it is very small: about 500MB
Malka#9644: Anyway, thanks a lot for the inputs 🙂 I am kinda new in all this, and as a simple hobbyist, it can be difficult to know where to look for guidance
alstroemeria313#1694: the 3090 does still run stuff faster in fp16 for models with large activations or large batch sizes
alstroemeria313#1694: bc it uses less memory bandwidth
alstroemeria313#1694: it is just not as pronounced as with a datacenter card
Malka#9644: I have a small improvement: from ~1.30it/s to 1.60it/s
Some Point Process#3793: what if (as a driver or low level performance optim.) it downcasts anyway
Some Point Process#3793: before preparing comput. graph etc
EricHallahan#1051: Just a gentle reminder of Rule #3. I highly suggest you check out some of the #communities that are likely better suited to help you if you have more issues.
tpapp157#3643: Well fp16 isn't always stable depending on the type of computation, the model architecture, training, etc. It's pretty situational.
tpapp157#3643: In any case I think both pt and tf do take advantage of nvidia's tf32 format and will cast to that automatically for certain ops.
random person#5234: You can always leave it to nvidia to nerf something on geforce
𓅬 gabriel_syme 𓅬#3220: There was a colorization paper recently that I thought tried to solve this exact thing, these saturated outcomes. I'll try and find it, I remember I really liked it.
𓅬 gabriel_syme 𓅬#3220: this one: https://arxiv.org/abs/2108.09195
𓅬 gabriel_syme 𓅬#3220: (I may have responded to the wrong message but you know)
alstroemeria313#1694: ahh i remember that one but i didn't understand it
𓅬 gabriel_syme 𓅬#3220: their results I feel look more like the 'correct' image in the thread above
𓅬 gabriel_syme 𓅬#3220: vs the saturated one
someKindaBean#8471: that's a pretty wild paper. i also don't understand it, but the results look sweet
someKindaBean#8471: like, i get what they are trying to do, but i don't feel like spending enough time to understand how they are coming up with the imagined reference image
someKindaBean#8471: it reminds me a little of image analogies
𓅬 gabriel_syme 𓅬#3220: would love some code ye
alstroemeria313#1694: so if i have an optimizer state dict
alstroemeria313#1694: and some params
alstroemeria313#1694: how do i get the *names* of the params
Kharr#7888: the optimizer state dict does not contain the names, only the parameters. You'll have to match them up with the model somehow
alstroemeria313#1694: how do i do this
Kharr#7888: They should be an ordered dict so if you just do model.named_parameters() they should map to model.parameters()
alstroemeria313#1694: ah
alstroemeria313#1694: you mean they should be in the same order in the optimizer?
Kharr#7888: Yes, otherwise you have to try and guess what they are based on dimensions
alstroemeria313#1694: ugh
alstroemeria313#1694: if some params don't have a .grad
alstroemeria313#1694: will they be skipped in the optimizer
alstroemeria313#1694: (Also my optimizer states are pipeline parallel sharded saved by deepspeed, which complicates things)
Kharr#7888: Yes, optimizer checks if .grad is None before trying to update
alstroemeria313#1694: well i mean. will this break the *ordering*
Kharr#7888: It shouldn't, it will just iterate over them
alstroemeria313#1694: ah
alstroemeria313#1694: i am writing a custom optimizer for deepspeed
alstroemeria313#1694: well, it is just a torch.optim.Optimizer subclass for now
alstroemeria313#1694: but it saves EMA versions of the parameters
alstroemeria313#1694: and i need to be able to assemble these into a model
alstroemeria313#1694: to use during inference
Kharr#7888: Sounds straight forward. Since it is a custom optimizer you can make it accept named_parameters() to make it easier
alstroemeria313#1694: ahh
alstroemeria313#1694: and just save the names in the states?
alstroemeria313#1694: it is just AdamW + EMA so it is not anything weird
Kharr#7888: Yep. You can modify any existing optimizer to do it with a couple of lines
alstroemeria313#1694: the Adam paper suggests doing an unbiased EMA using a third beta value (or reusing beta_2)
Kharr#7888: Your loop just becomes "for n, p in named_parameters" --> update p vs "for p in parameters" --> update p.
alstroemeria313#1694: but mine is just doing a biased EMA using a separate decay value because i need it to be longer and also i want to schedule it and *no one* handles changing Adam beta properly
alstroemeria313#1694: (I know how to do it but don't feel like adding the complexity)
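An untested sketch of what such an optimizer could look like: stock AdamW fed from `model.named_parameters()`, stashing each parameter's name plus a biased EMA copy in its state so they ride along in `state_dict()` and get sharded with the rest. The class name and decay value are placeholders, not anyone's actual code:
```python
import torch

class AdamWEMA(torch.optim.AdamW):
    def __init__(self, named_params, ema_decay=0.9999, **kwargs):
        named_params = list(named_params)
        super().__init__([p for _, p in named_params], **kwargs)
        self.names = {p: n for n, p in named_params}
        self.ema_decay = ema_decay

    @torch.no_grad()
    def step(self, closure=None):
        loss = super().step(closure)  # AdamW lazily inits its state here
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                state = self.state[p]
                if 'ema' not in state:
                    state['name'] = self.names[p]      # survives state_dict()
                    state['ema'] = p.detach().clone()  # biased EMA of weights
                state['ema'].mul_(self.ema_decay).add_(p, alpha=1 - self.ema_decay)
        return loss

# opt = AdamWEMA(model.named_parameters(), lr=1e-4, betas=(0.9, 0.95))
```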
Kharr#7888: Custom optimizers offer a lot of flexibility for doing new things like that. Just have to be careful about putting everything into one optimizer or it will end up like Ranger and require a ton of hyper params https://github.com/lessw2020/Ranger21/blob/0a906ef9df4a4c394a48e5778b2b94f2c8e1ce8e/ranger21/ranger21.py#L107
kurumuz#5695: I am sick of model repetition at this point
kurumuz#5695: and repetition penalty is a dumb hack
alstroemeria313#1694: yeahhhh i am going to stick with this one addition
kurumuz#5695: :goose16:
alstroemeria313#1694: I normally don't even modify the optimizer for EMA
alstroemeria313#1694: I am only doing it bc I need deepspeed to shard the EMA params the same way as the normal ones
alstroemeria313#1694: for pipeline paralle
kurumuz#5695: I bet I can write some smart algorithm to do a great repetition blocking but
kurumuz#5695: it doesnt solve the robustness problem in its core
Kharr#7888: I am with you. Have you tried the negative repetition training? https://arxiv.org/abs/1908.04319
kurumuz#5695: nope
alstroemeria313#1694: ...so where did it stick the names
kurumuz#5695: gonna check that out
Kharr#7888: The problem with these post-processing methods is that repetition can be 100-word-long paragraphs repeated over and over. You have to correct in latent space or train the model better to avoid the issue from the start.
kurumuz#5695: is repetition a generalization problem?
kurumuz#5695: If so, maybe we can apply regularizations like manifold mixup
Kharr#7888: This works
kurumuz#5695: huh, i guess it makes sense
kurumuz#5695: critical part is, I only do finetuning. still might help though i guess
Kharr#7888: See my comment about the minor change to the loop when moving it to accept model.named_parameters() vs just model.parameters() https://github.com/pytorch/pytorch/blob/e1aea9b9683c1786b45c34edee826cf2871ef359/torch/optim/sgd.py#L133
alstroemeria313#1694: you can't actually store tuples in the params list
alstroemeria313#1694: it has some assertion
alstroemeria313#1694: in any case i got it to save the names in such a way that i have an easy mapping to the params
alstroemeria313#1694: and shards correctly
tpapp157#3643: Repetition is a maximum likelihood problem at its core. The same as it is in many other modeling contexts. Maximum likelihood really really struggles with generation tasks in particular because it fundamentally cannot handle situations where many possibilities are equally likely. This is why basically all generative models today inject random noise somewhere into the process.
Kharr#7888: https://discord.com/channels/729741769192767510/747850033994662000/901877828159668304
Kharr#7888: Has anyone tried to see if leaving dropout on during inference reduces repetition? :thonk:
nev#4905: I'm curious what lucidrains' stats are like :ultrathonk: https://cdn.discordapp.com/attachments/729741769738158194/930897110314909706/unknown.png
alstroemeria313#1694: ...so how do you actually use the deepspeed flops profiler
alstroemeria313#1694: I have turned it on following the instructions and don't see any difference
alstroemeria313#1694: ...also i can't use fp16 and idk why
alstroemeria313#1694: oh, because it is trying to use an fp16 optimizer and i can't do that
alstroemeria313#1694: it has to be fp32
alstroemeria313#1694: like how do i just tell the thing to use autocast
Kharr#7888: Does DS even do autocast? I thought they have fp16 hardcoded in a bunch of places
Sid#2121: yeah it's pretty much fp16 everything in deepspeed, they've only recently introduced bf16 stuff
alstroemeria313#1694: ugh
alstroemeria313#1694: ok so i need to use fp16 weights in my custom optimizer then? but keep the ema params in fp32
Kharr#7888: You can keep an FP32 copy of the weights in the optimizer and cast to fp16 after you update. This is recommended for training anyway
alstroemeria313#1694: ...right but i am getting weird issues
alstroemeria313#1694: with it complaining that the gradients are the wrong dtype
alstroemeria313#1694: ah
Kharr#7888: Yes, you need to cast both the parameter.data and parameter.grad to fp32 --> update --> store --> put parameter.data back to fp16
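A bare-bones, untested sketch of that master-copy dance, wrapping a stock optimizer (the class and names are mine, not DeepSpeed's API):
```python
import torch

class FP16MasterOpt:
    """The inner optimizer only ever sees fp32 master copies; each step
    the fp16 grads are cast up and the fp32 result is written back down."""

    def __init__(self, fp16_params, lr=1e-4):
        self.fp16_params = list(fp16_params)
        self.fp32_params = [p.detach().float() for p in self.fp16_params]
        self.inner = torch.optim.AdamW(self.fp32_params, lr=lr)

    def step(self):
        for p16, p32 in zip(self.fp16_params, self.fp32_params):
            p32.grad = None if p16.grad is None else p16.grad.float()
        self.inner.step()
        with torch.no_grad():
            for p16, p32 in zip(self.fp16_params, self.fp32_params):
                p16.copy_(p32)  # cast the updated master back to fp16

    def zero_grad(self):
        for p in self.fp16_params:
            p.grad = None
```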
alstroemeria313#1694: ugh
alstroemeria313#1694: Can I just do fp32 weights/grads and fp16 activations ;_;
alstroemeria313#1694: well i guess i could try that in a bit
Kharr#7888: https://arxiv.org/pdf/2112.11446.pdf https://cdn.discordapp.com/attachments/729741769738158194/930934885022072883/unknown.png
Kharr#7888: Storing fp32 copy of weights in the optimizer has measurable benefits 🙂 I would imagine this is even more important for raw fp16 which is worse than bf16
Tau#4010: What are the best open source text conditioned generative models with training code (particularly diffusion based)? v-diffusion-pytorch works well, but doesn't seem have training code. Ditto GLIDE. Is DALL-E still the best option?
alstroemeria313#1694: i can give you the training code for cc12m_1
alstroemeria313#1694: it just is not cleaned up enough to my satisfaction to be released
Tau#4010: Thank you!
Tau#4010: What's the easiest way for you?
alstroemeria313#1694: https://gist.github.com/crowsonkb/06f6288dcf82af1aa6735e459aebfd96
alstroemeria313#1694: this needs lr decay added
alstroemeria313#1694: also if you want to train in fp16 try setting eps=1e-5 in the optimizer
alstroemeria313#1694: this might help
alstroemeria313#1694: this is the cfg training code which i used to fine-tune it but it should be trainable from scratch with it too
Tau#4010: Thank you!
genetyx8#7543: http://www.stochasticlifestyle.com/engineering-trade-offs-in-automatic-differentiation-from-tensorflow-and-pytorch-to-jax-and-julia/
genetyx8#7543: interesting read. may interest some people here
StellaAthena#3530: Does anyone know anything about the “Open Data Science Conference”? They reached out about speaking but IDK if it’s really worth paying any attention to.
https://odsc.com/
Tau#4010: Good article. Chris Rackauckas is the master of auto diff. Unfortunately Zygote not supporting mutation is even worse than just being inefficient. Many language constructs use mutation internally, which also makes them unusable (defining the adjoints manually is somewhere between a pain and impossible).
I definitely think dynamic auto diff will be used now that it is available. Existing approaches don't need it because they are selected to be possible without dynamic autodiff.
genetyx8#7543: @Tau dynamic autodiff? tell me more plz
𓅬 gabriel_syme 𓅬#3220: smol models? :berk:
𓅬 gabriel_syme 𓅬#3220: or just LMs being annoying for me
Tau#4010: I just mean autodiff that can handle dynamic control flow (like Zygote).
genetyx8#7543: Ok I thought you were referring to some other newfangled autodiff engine
parzival#5010: Does anyone know if TRC does extensions for more than 30 days? My trial+extension for 30 days already expired, I am wondering if they fund for longer periods
Kia#2550: send them a email, And ask for a extension
𓅬 gabriel_syme 𓅬#3220: Yeah send and ask them. They will do longer if you present a plan / project
parzival#5010: great, will try that, thanks!
imoosa#0378: Hey Guys, I am a ML researcher from Bangladesh. I have been working in NLP for the last 6-8 months. So far I have been working in NLP for Indian languages. We have recently pushed some of our models to Huggingface hub.
If you are a speaker of any Indian language maybe you can try out our models https://huggingface.co/ibraheemmoosa/xlmindic-base-uniscript and https://huggingface.co/ibraheemmoosa/xlmindic-base-multiscript.
Here is my Github profile https://github.com/ibraheem-moosa. I have contributed PRs to the Huggingface Transformers library and the Pytorch Lightning library. I am hoping to work on some projects here at EleutherAI.
Thanks for reading!
tpapp157#3643: @Kharr Any updates on your relu attention experiment?
Kharr#7888: This? https://discord.com/channels/729741769192767510/747850033994662000/929880674083696670
tpapp157#3643: yeah
Kharr#7888: Aside from "looks identical" nope. I'll have to train a full model with it and see how it evaluates on benchmarks. Given the nearly identical loss curve it doesn't seem like there would be a meaningful difference
tpapp157#3643: Ok. Just curious since I was considering giving it a try.
Kharr#7888: That's the longer chart (I let it keep running for a bit) https://cdn.discordapp.com/attachments/729741769738158194/931265992124928091/unknown.png
Tau#4010: Do you have Jax training code, (or generally TPU enabled)? I'm using TRC, and figured I should check what you had before converting. Thanks and sorry for the bother!
alstroemeria313#1694: i do somewhere for an older arch
alstroemeria313#1694: like for the v-diffusion-jax models
alstroemeria313#1694: it's on my other laptop
alstroemeria313#1694: those aren't text conditioned so you would have to add that
Tau#4010: Gotcha. The jax models don't have text conditioning? The examples seem to have it.
alstroemeria313#1694: it's CLIP guided sampling.
alstroemeria313#1694: using an unconditional diffusion model.
Tau#4010: ah I see
Tau#4010: I was a bit leery of pytorch XLA
alstroemeria313#1694: yeah it's not great
Tau#4010: I'll convert the pytorch to tpu (lightning claims to support out of the box). Might hit you up again if it goes poorly though (if that's okay). Thanks!
alstroemeria313#1694: you are going to run into footguns with the code as written
alstroemeria313#1694: unless they have fixed F.interpolate() and anything using it (like the bilinear upsampling) shipping tensors to the cpu and back bc the op isn't implemented
alstroemeria313#1694: you just have to use a custom thing or use nearest instead
Tau#4010: Hmm, that is unfortunate. I'll see if there's alternatives
alstroemeria313#1694: i mean. nearest is probably ok to use.
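The workaround is a one-line change at each upsample, e.g. (placeholder tensor, and the lowering caveat is per the discussion above):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)  # placeholder activation
up = F.interpolate(x, scale_factor=2, mode='nearest')  # avoids the bilinear path
```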
Tau#4010: Actually might be better for my application (image with text).
asparagui#6391: @StellaAthena odsc is a large conference with a lot of speakers
asparagui#6391: worth doing if you wanted to present
ilovescience#3282: haha looks like the issue went stale:
https://github.com/pytorch/xla/issues/2703
ilovescience#3282: i bet if you pester them they'll fix it soon enough
ilovescience#3282: @Tau basically with PyTorch XLA you have to be careful of three things:
1. operations not lowered (in which case you make an issue and the pytorch xla team will fix it or you can try to do op lowering yourself if you know c++)
2. frequent transfer of tensors to cpu (can happen with stats reporting for example and you have to be very careful of this)
3. dynamic shapes of tensors (everything must be the same shape always otherwise the graph will be recompiled all the time and slow stuff down)
Tau#4010: Thanks for the info! Very helpful to know what to look out for.
chilli#5665: 2/3 largely feel like similar footguns to Jax, I suppose
chilli#5665: although I guess they might be more silent with pytorchxla
ilovescience#3282: i feel the debugging experience is pretty bad with PyTorch XLA but idk if it's similarly true with JAX
chilli#5665: how so?
chilli#5665: (just curious)
ilovescience#3282: i mean the messages are usually quite cryptic...
chilli#5665: lol
ilovescience#3282: i guess with enough experience you start to figure stuff out but i can see how it can be intimidating to a beginner
tag#4294: Hi all! I am a final-year PhD candidate at the Stanford AI Lab (learn more about me at https://www.tginart.ai). I've been using language models (GPT-2 and GPT-Neo) for some research projects. I'm interested in the new RETRO paper by DeepMind (https://deepmind.com/research/publications/2021/improving-language-models-by-retrieving-from-trillions-of-tokens). As far as I know, DeepMind did not release pretrained models for RETRO and if that remains their intention, I'd be excited to get involved with you all for shipping an open-source RETRO.
minimario#1709: anyone know how to join the BigScience initiative slack lol (or if they have a community environment like this)
minimario#1709: i can't tell how updated their website/forms are lol
cfoster0#4356: Aran mentioned doing something like this, so there's probably interest here. Lemme link you the doc in #research
EricHallahan#1051: There is a Google Form to fill out.
EricHallahan#1051: Let me go find it…
EricHallahan#1051: https://docs.google.com/forms/d/e/1FAIpQLSdF68oPkylNhwrnyrdctdcs0831OULetgfYtr-aVxBg053zqA/viewform
sMili#6973: Guys, what is the most advanced and biggest NLP model you (as the Hugging Face/EleutherAI community) are working on right now?
sMili#6973: Nvidia 500b parameters, google (glam) 1.6t, or what?
Louis#0144: Hi, this isn't really a server for novices. You should check out some of the communities we have listed #communities
EricHallahan#1051: Honestly, this ought to be an FAQ question.
𓅬 gabriel_syme 𓅬#3220: Was the RLHF EAI repo still alive? I can't find the channel in the archived ones. I'd like to start working towards something like that, the OAI repo obv is a starting point but I'm curious about the progress done then
EricHallahan#1051: EEGI went into hibernation. I can only assume @Daj forgot to expose it publicly when he archived it.
The repos still exist, they just haven't seen any activity.
<https://github.com/EleutherAGI>
𓅬 gabriel_syme 𓅬#3220: ah thanks that's the name 😄
buttercutter#1033: Anyone have any idea on https://github.com/tensorflow/tensorboard/issues/5505 ?
EricHallahan#1051: *Gestures at rule #3*
𓅬 gabriel_syme 𓅬#3220: Cool I can also give BAP a try! These are exciting extensions, along with DTs. I just need to bump my head on the keyboard for a bit I guess
𓅬 gabriel_syme 𓅬#3220: I do forget who else was on this, I'll figure out and ask (if I get stuck)
alstroemeria313#1694: Hey how do I parameterize a function that has to have the following properties: f(0) = 0, f(1) = 1, and it is monotonically increasing. I do not care what happens outside the interval 0-1.
bmk#1476: what do you mean by parameterize
alstroemeria313#1694: with a neural net
bmk#1476: uhhhh are you fine with like adding a penalty to encourage it to be monotonic or does it have to be like 100% monotonic
alstroemeria313#1694: it has to be actually monotonic
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/931783796368498728/Screen_Shot_2022-01-14_at_9.36.06_PM.png
bmk#1476: uhh force the model to output a positive number and like integrate over that?
alstroemeria313#1694: hm
alstroemeria313#1694: yeah but integrate how
alstroemeria313#1694: like numerically?
bmk#1476: yeah
|
alstroemeria313#1694: Wait can I just use their monotonic net and postprocess it differently
alstroemeria313#1694: Namely evaluate it at 0 and 1 and scale the outputs so f(0) = 0 and f(1) = 1
alstroemeria313#1694: ah got it
alstroemeria313#1694: well, it only works sometimes
alstroemeria313#1694: ah got it to work reliably
bmk#1476: how does *their* monotonic net work
bmk#1476: making it pass through 0,0 and 1,1 is trivial, the monotonic part is the hard part
alstroemeria313#1694: they restrict the weights to be positive and... something
alstroemeria313#1694: well, nonnegative
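A minimal sketch of that kind of construction (the layer sizes and the softplus reparameterization are assumptions, not the paper's exact recipe): keep the effective weights positive so the net is monotonic, then rescale by the endpoint values so f(0) = 0 and f(1) = 1.
```python
import torch
from torch import nn

class MonotonicNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(hidden, 1) * 0.1)
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(1, hidden) * 0.1)
        self.b2 = nn.Parameter(torch.zeros(1))

    def raw(self, x):
        # softplus keeps the effective weights strictly positive, and tanh
        # is increasing, so the composition is monotonic in x
        h = torch.tanh(x @ nn.functional.softplus(self.w1).T + self.b1)
        return h @ nn.functional.softplus(self.w2).T + self.b2

    def forward(self, x):
        # pin the endpoints: f(0) = 0 and f(1) = 1
        f0 = self.raw(torch.zeros(1, 1))
        f1 = self.raw(torch.ones(1, 1))
        return (self.raw(x) - f0) / (f1 - f0)
```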
bmk#1476: well.. I guess I see how that could work, but it seems suboptimal
bmk#1476: I could imagine that limiting expressivity a lot
alstroemeria313#1694: it does, i think
bmk#1476: I still think my integration solution is probably superior
alstroemeria313#1694: so i need to be able to integrate it like... reliably, and backprop through this
bmk#1476: I mean you can just backprop through it though? like integration is like linear and stuff
bmk#1476: and I'm assuming you're dealing with microscopic models
alstroemeria313#1694: yes
alstroemeria313#1694: this will be tiny
bmk#1476: so running it ten thousand times should be fine and the estimate will be good enough right
alstroemeria313#1694: i hope so
|
bmk#1476: what's this for anyways
alstroemeria313#1694: like i can just feed a batch of 10,000 to it
EricHallahan#1051: I assume some sort of schedule.
alstroemeria313#1694: trying to reduce the variance of my diffusion model losses by sampling timesteps from a distribution that produces the lowest variance loss which still has the same minima as the original loss
alstroemeria313#1694: i am trying to learn the distribution
bmk#1476: interesting
alstroemeria313#1694: i am adapting it from Variational Diffusion Models for v objective
bmk#1476: how are you gonna backprop through that
bmk#1476: the sampling
alstroemeria313#1694: i will evaluate the derivative of the net at the sampled timesteps and use those for the relative weights of those timesteps' examples in the main diffusion loss
bmk#1476: ah
bmk#1476: wait
bmk#1476: the derivative of the net?
alstroemeria313#1694: then i can get gradients for the little net via the weights
bmk#1476: why not just.. not integrate
alstroemeria313#1694: yes it has 1 input and 1 output
bmk#1476: just have it output the pdf
alstroemeria313#1694: Ah, and then make sure it's normalized by integrating
bmk#1476: yeah
alstroemeria313#1694: ...how do i sample from it though
|
alstroemeria313#1694: The thing I am trying to learn with this is the inverse cdf
alstroemeria313#1694: And its derivative is the relative weighting I need, I think.
bmk#1476: hm ok then cdf makes sense
alstroemeria313#1694: so then i just need to get approximate gradients for it without doing a second backward pass through the main (large) model
alstroemeria313#1694: they have some method for this
bmk#1476: ok so if I understand correctly you want to learn a probability distribution, sample from it, and then take the derivative of the cdf at that point wrt the parameters of the model?
alstroemeria313#1694: i want to take the derivative of the learned icdf and use that as a *relative weight* in the main diffusion loss
alstroemeria313#1694: then get the gradient of the *variance* of the main diffusion loss wrt the tiny model's params
bmk#1476: the derivative of the icdf is just 1/ the derivative of the cdf at the same point, right?
alstroemeria313#1694: ...maybe?
alstroemeria313#1694: it's late
bmk#1476: ok so what if you:
- train the tiny net to output the pdf
- integrate over this to get the cdf
- use this to get the icdf
- get the icdf gradient as 1/ pdf(icdf(y))
- use this to weight diffusion loss and get diffusion loss variance
- backprop through everything
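A rough sketch of that recipe (the grid size, the trapezoid-rule integration, and `pdf_fn` as the name of the tiny net are all assumptions):
```python
import torch

def sample_and_weights(pdf_fn, u, n_grid=10000):
    # pdf_fn: tiny net mapping t in [0, 1] to a positive unnormalized density
    t = torch.linspace(0, 1, n_grid)
    density = pdf_fn(t)
    # trapezoid-rule cumulative integral -> cdf, normalized so cdf(1) == 1
    dt = t[1] - t[0]
    cdf = torch.cat([density.new_zeros(1),
                     torch.cumsum((density[1:] + density[:-1]) / 2 * dt, 0)])
    density, cdf = density / cdf[-1], cdf / cdf[-1]
    # invert the cdf at uniform samples u by searching the grid
    idx = torch.searchsorted(cdf.detach(), u).clamp(1, n_grid - 1)
    timesteps = t[idx]
    weights = 1 / density[idx]  # icdf'(u) = 1 / pdf(icdf(u))
    return timesteps, weights
```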
alstroemeria313#1694: mm
|
bmk#1476: this only needs one backward pass
alstroemeria313#1694: let me think about that
bmk#1476: you might need to flip it around by modelling the derivative of the icdf and compute the pdf as 1/icdf' or something
alstroemeria313#1694: *nods*
alstroemeria313#1694: ...the loss is dropping p fast on this
alstroemeria313#1694: i hope i didn't mess the relative weighting up or something
shpotes#1606: random question: what's the fastest way to load data on jax? TF datasets, pytorch dataloaders or something else?
alstroemeria313#1694: Uh how do you like, do an in-place op in PyTorch and hide it from autograd
alstroemeria313#1694: Like I want to substitute a thing in on the backward pass bc I don't know it until I'm done with the forward pass
alstroemeria313#1694: (There is probably an actual good way to do this with a custom autograd function that I haven't figured out yet)
alstroemeria313#1694: pytorch dataloaders
alstroemeria313#1694: oh right hooks on tensors
alstroemeria313#1694: i forget they exist.
alstroemeria313#1694: i can register the hook after i know the thing i need to multiply the grad by but before the backward
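i.e. something like this minimal sketch, where the scale factor is only known after the forward pass:
```python
import torch

x = torch.randn(4, requires_grad=True)
y = x.pow(2)               # forward pass
scale = y.detach().mean()  # stand-in for a factor only known post-forward
# the hook rewrites y's incoming gradient before it reaches x,
# avoiding a second backward pass
y.register_hook(lambda grad: grad * scale)
y.sum().backward()
```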
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/931830571959738399/Screen_Shot_2022-01-15_at_12.41.58_AM.png
alstroemeria313#1694: it's to avoid a double backward
alstroemeria313#1694: since i don't use d/dSNR loss for anything else
alstroemeria313#1694: i can just replace it with d/dSNR loss^2
alstroemeria313#1694: maybe i can come up with a better way to parameterize the schedule now that i think i have gotten the gradient computation for it correct
alstroemeria313#1694: like bmk's suggestions
|
chilli#5665: Can’t you just do detach or something
chilli#5665: Or run it under torch.no_grad
alstroemeria313#1694: i tried no_grad and it complained
alstroemeria313#1694: about me having changed the thing in place
alstroemeria313#1694: anyway i figured out how to do it with hooks
Spacecraft1013#5969: do you guys think when functorch is fully ready it will be better than jax?
Spacecraft1013#5969: just curious
Huu Nguyen#6677: Has anyone worked with either of these? https://github.com/microsoft/ANCE or https://github.com/henryzhao5852/BeamDR
chilli#5665: Not sure what you mean by “better than Jax” 😛
Spacecraft1013#5969: in terms of speed/capabilities
chilli#5665: It’s complicated :P, and it still depends a lot on what you care about
chilli#5665: The primary thing functorch provides is a composable vmap/grad transform
chilli#5665: As well as a compilation decorator that works with vmap/grad
chilli#5665: vmap/grad mostly match Jax’s vmap/grad in terms of capabilities
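For a flavor of that composability, a sketch using functorch's early API (per-example gradients via nested transforms):
```python
import torch
from functorch import grad, vmap

def loss(w, x):
    return (x @ w).pow(2).sum()

w = torch.randn(3)
xs = torch.randn(8, 3)
# grad() transforms the function; vmap() maps it over the batch dim of xs
per_sample_grads = vmap(grad(loss), in_dims=(None, 0))(w, xs)  # shape (8, 3)
```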
chilli#5665: In terms of performance… it’s complicated
chilli#5665: Pytorch is definitely significantly faster than Jax when comparing eager modes
chilli#5665: And so it allows certain kinds of programs that would be… quite difficult to write in Jax
chilli#5665: Because of that, functorch can also feasibly allow a variant of function transforms that’s somewhat more Like Julia in terms of composability
chilli#5665: Where you have tensor subclasses that modify behavior through multiple dispatch
chilli#5665: I’d say so too, with a few caveats. For one, Jax’s distributed capabilities on TPUs are much more developed compared to Pytorch/XLA
|
chilli#5665: Which matters a lot if you’re gonna train language models
chilli#5665: If you’re training, say, vision models (which don’t need advanced distributed capabilities as much), I think Pytorch xla is feasible
chilli#5665: Yeah… you definitely can’t implement that exactly in Jax
chilli#5665: Well, it’s Gonna be way slower since It’s not in a jit
chilli#5665: And it’s not easy to make that jittable
chilli#5665: And I think Ross wightman has had a decent amount of success training models with pytorch/xla
chilli#5665: Although, tbh, I think Ross has an inordinate amount of patience to deal with these kinds of issues…,
chilli#5665: He’s getting stuff working on graphcore now, lol
chilli#5665: Btw, stuff like this is what I’m referring to: https://discord.com/channels/729741769192767510/785968841301426216/930661788268253214
chilli#5665: Implementing something like this with the same flexibility in Jax would be quite difficult
chilli#5665: I think you could - you’d just need to wrap it in a function transform
chilli#5665: While in PyTorch, you can just, say, change all your model’s weights to quanttensor, and then you’re good 🙂
chilli#5665: (Btw, to be clear, I work on functorch lol)
Spacecraft1013#5969: alright thanks for the explanation 🙂
evanrmurphy#1090: Is anyone else on the 80,000 Hours AI Safety Group list and see the "NSF Request for Information -- time for coordinated action?" post?
Link follows, though you may need to request group membership to access: https://groups.google.com/g/david-kruegers-80k-people/c/IY7dpHPbXmk
SweatyPalms#1231: Can someone explain to me how #the-faraday-cage-archive works
an0nkar#6725: Hey, everyone!
I'm Shubh, an aspiring computer vision/deep learning researcher. I also presented at NeurIPS this year. I'd love to know how I can help out with the projects you guys are working on in any capacity. Extremely grateful to be here on the server! Thanks!
|
alstroemeria313#1694: huh, madgrad seems to work actually rather well for some convex problems i threw at it
alstroemeria313#1694: combined with lr decay sometimes
alstroemeria313#1694: like this thing: ```python
import torch
from torch import nn
import torch.nn.functional as F

class TVDenoising(nn.Module):
    def __init__(self, strength=0.1):
        super().__init__()
        self.strength = strength

    def tv_loss(self, x):
        # replicate-pad so forward differences are defined at the edges
        x = F.pad(x, (0, 1, 0, 1), 'replicate')
        x_diff = x[..., :-1, 1:] - x[..., :-1, :-1]
        y_diff = x[..., 1:, :-1] - x[..., :-1, :-1]
        diff = torch.cat([x_diff, y_diff], dim=-3)
        return diff.norm(dim=-3).sum()

    def mse_loss(self, x, target):
        return (x - target).pow(2).sum() / 2

    def forward(self, x, target):
        return self.tv_loss(x) * self.strength + self.mse_loss(x, target)
```
|
EricHallahan#1051: So why does everyone find MADGRAD hard to work with?
alstroemeria313#1694: idk if it's as good for general deep learning
EricHallahan#1051: But isn't the pitch that it is a derivative of Adam?
alstroemeria313#1694: it's kind of its own thing
alstroemeria313#1694: it is not actually based on adam
alstroemeria313#1694: they reference the adagrad paper a lot
alstroemeria313#1694: but madgrad doesn't have adagrad type automatic lr decay, it's meant to be used with a decaying schedule of the sort you'd use for sgd or adam
alstroemeria313#1694: in https://cdn.discordapp.com/attachments/729741769738158194/932179243545268294/unknown.png
alstroemeria313#1694: descending the TVDenoising gradient w/ madgrad https://cdn.discordapp.com/attachments/729741769738158194/932179322159136798/unknown.png
alstroemeria313#1694: it seems to do better on this problem than adam
alstroemeria313#1694: (This is a convex, non-stochastic problem and there are in fact special iteration schemes for it that are not gradient descent based, but apparently with a good optimizer (esp. with momentum) gradient descent works)
kurumuz#5695: is this an init image
alstroemeria313#1694: it's an oldschool denoising method
alstroemeria313#1694: https://en.wikipedia.org/wiki/Total_variation_denoising
alstroemeria313#1694: i was using it as an optimizer test
alstroemeria313#1694: From 1992
alstroemeria313#1694: huh i wonder if i should add a madgrad option in my style transfer code
alstroemeria313#1694: since it seems actually kind of good at these kind of "optimize an rgb image with non-stochastic gradients you got from somewhere" things
alstroemeria313#1694: it consists of a weighted sum of an L1 loss and an L2 loss and the L1 part can be a problem
alstroemeria313#1694: for gradient descent
|
alstroemeria313#1694: but it is convex
alstroemeria313#1694: actually this is one of the extensions to color images that was published later
alstroemeria313#1694: i have tried several
alstroemeria313#1694: applying the b/w version independently on color channels is not great
Kia#2550: Batbot runs image models (CLIP+VQGAN and CLIP Guided Diffusion) on GPUs from somewhere. To use it: `.imagine` is VQGAN+CLIP, and `.diffusion`/`.diffusion2` are the diffusion models. For more info, type `.help` in that channel
EricHallahan#1051: :goose2:
EricHallahan#1051: Wrong, it's clearly magic.
nev#4905: I found MADGRAD to work better than adam on TV with voxels as well
Spacecraft1013#5969: we just have a bunch of artists in a warehouse somewhere drawing everything you type
alstroemeria313#1694: hm this looks unoptimized: ```python
import torch


def grad(x):
    """Computes the discrete gradient of an image (forward differences)."""
    out = x.new_zeros([2, *x.shape])
    out[0, ..., :-1, :] = x[..., 1:, :] - x[..., :-1, :]
    out[1, ..., :, :-1] = x[..., :, 1:] - x[..., :, :-1]
    return out


def div(x):
    """Computes the discrete divergence of a vector array (the negative
    adjoint of grad above)."""
    out = torch.zeros_like(x)
    out[0, ..., 0, :] = x[0, ..., 0, :]
    out[0, ..., -1, :] = -x[0, ..., -2, :]
    out[0, ..., 1:-1, :] = x[0, ..., 1:-1, :] - x[0, ..., :-2, :]
    out[1, ..., :, 0] = x[1, ..., :, 0]
    out[1, ..., :, -1] = -x[1, ..., :, -2]
    out[1, ..., :, 1:-1] = x[1, ..., :, 1:-1] - x[1, ..., :, :-2]
    return out.sum(0)
```
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/932291448160804864/unknown.png
nev#4905: so this is what this looks like in the limit
Kharr#7888: Do you by any chance have a notebook for playing with this and different optimizers?
alstroemeria313#1694: no
alstroemeria313#1694: i was playing around in jupyter notebook on my laptop cpu
alstroemeria313#1694: that's with a non-convex version of it
alstroemeria313#1694: w/ a hyper-Laplacian prior on image gradient magnitude
alstroemeria313#1694: like L^1/2
alstroemeria313#1694: since it is not convex different optimizers will produce different results with different aesthetics
alstroemeria313#1694: like you can see adam produces visibly different results bc on the first step it moves all the pixels by -lr * sign(grad)
|
alstroemeria313#1694: the adam first step produces kind of ugly, artifacty results actually and you just have to count on subsequent steps to pull it into a good local minimum
alstroemeria313#1694: or just use a tiny lr
alstroemeria313#1694: Actually.
alstroemeria313#1694: Why not try getting away from Adam for direct RGB optimization.
alstroemeria313#1694: Or at least using lr warmup so the first step doesn't introduce huge artifacts.
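e.g. a minimal linear warmup along these lines (the 100-step ramp is an arbitrary assumption):
```python
import torch

params = [torch.zeros(3, 256, 256, requires_grad=True)]  # e.g. an RGB image
opt = torch.optim.Adam(params, lr=0.05)
# scale the lr from ~0 up to its target over the first 100 steps,
# then hold it constant; call sched.step() after each opt.step()
sched = torch.optim.lr_scheduler.LambdaLR(
    opt, lambda step: min(1.0, (step + 1) / 100))
```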
Kharr#7888: in Transformer work, Adam is always used with a lr warmup -- some versions attempt to correct this: https://arxiv.org/abs/1908.03265
nev#4905: good idea
nev#4905: the directvisions series use ranger and others
Kharr#7888: I'd be curious if RAdam eliminates the artifacts you are seeing. It's kind of designed for it.
alstroemeria313#1694: hm... adam https://cdn.discordapp.com/attachments/729741769738158194/932302396858712134/out_00600.png
alstroemeria313#1694: madgrad https://cdn.discordapp.com/attachments/729741769738158194/932302409789739058/out_00600-2.png
alstroemeria313#1694: madgrad with lr doubled https://cdn.discordapp.com/attachments/729741769738158194/932302703030304798/out_00600-3.png
nev#4905: yeah that's consistent with my observations that madgrad is slower
nev#4905: what do they look like as videos? madgrad usually has a pulsating pattern
alstroemeria313#1694: idk, would have to output every frame
alstroemeria313#1694: i could do that
alstroemeria313#1694: ok gonna do that
EricHallahan#1051: I think it is just a fact lol
Kharr#7888: Madgrad needs about 10x the lr of Adam
EricHallahan#1051: That's what I remember in my experience.
|
alstroemeria313#1694: ok here's Adam https://cdn.discordapp.com/attachments/729741769738158194/932303923321122896/adam_rgb.mp4
alstroemeria313#1694: lr was 0.05
alstroemeria313#1694: (on range -1 to 1 RGB image data)
Kharr#7888: Is this just trying to convert noise --> image with optimizer?
alstroemeria313#1694: madgrad, lr 0.1 https://cdn.discordapp.com/attachments/729741769738158194/932304380286369852/madgrad_rgb.mp4
alstroemeria313#1694: the gradients are from CLIP. the prompt is "the first day of the waters".
alstroemeria313#1694: CLIP sees random augmented square crops of the images each step
alstroemeria313#1694: so the gradients are stochastic
alstroemeria313#1694: madgrad, lr 0.25 https://cdn.discordapp.com/attachments/729741769738158194/932305102746824785/madgrad_rgb_lr_0_25.mp4
alstroemeria313#1694: imo since madgrad isn't fully adaptive to gradient norms
alstroemeria313#1694: it should have some sort of "rescale gradients" factor, idk
alstroemeria313#1694: (Yes I can just scale the loss but it would be cleaner to have it in the optimizer)
Kharr#7888: It's kind of cool to see how different optimizers do on this. I'll have to play around with it a bit more. The weird thing seems to be that the picture gets worse as the loss goes down?
Kharr#7888: Adam vs Cerberus -- starting with same seed etc for "the first day of the waters." https://cdn.discordapp.com/attachments/729741769738158194/932315780786307092/geEirvvQF6f2gAAAABJRU5ErkJggg.png,https://cdn.discordapp.com/attachments/729741769738158194/932315781113466932/OCnrzHwAAAABJRU5ErkJggg.png
alstroemeria313#1694: i think this is right ```python
import itertools


def wrap_repeat(x):
    try:
        iter(x)
        return x
    except TypeError:
        return itertools.repeat(x)


def madgrad(x_0, grads, step_sizes, momentums=0.1, eps=1e-6):
    x = x_0
    s = 0.
    v = 0.
    grads = wrap_repeat(grads)
    step_sizes = wrap_repeat(step_sizes)
    momentums = wrap_repeat(momentums)
    # k must advance each step so lambda_k = lr * sqrt(k + 1) grows
    for k, (grad, step_size, momentum) in enumerate(
            zip(grads, step_sizes, momentums)):
        lambda_k = step_size * (k + 1)**0.5
        s += lambda_k * grad
        v += lambda_k * grad**2
        z = x_0 - s / (v**(1/3) + eps)
        x = (1 - momentum) * x + momentum * z
        yield x
```
alstroemeria313#1694: (1 is no momentum, 0.1 is default, eq. to 0.9 in SGD/Adam)
Kharr#7888: I think it's super interesting that one optimizer turned the stuff on the right to rocks while the other turned it into people and has a totally different aesthetic :thonk:
CRG#8707: Wasn't Cerberus an architecture?
Kharr#7888: It is, and there is also an optimizer based on it
Kharr#7888: 1k iter looks like a real painting :thonk: I tried to do a reverse image search to see if it was maybe in the training data but nothing came up. https://cdn.discordapp.com/attachments/729741769738158194/932318352729329704/HOkA7iP7bvpAAAAAElFTkSuQmCC.png
Kharr#7888: I'm sure I've shared charts like this before: https://cdn.discordapp.com/attachments/729741769738158194/932319976440856626/unknown.png
StellaAthena#3530: Does Cerberus always result in more realisitc looking images?
Kharr#7888: I just started testing things, will play around with a few prompts and let you know
Kharr#7888: 32L network https://cdn.discordapp.com/attachments/729741769738158194/932320427043336212/unknown.png
Kharr#7888: 2600 iter https://cdn.discordapp.com/attachments/729741769738158194/932320844972191774/3lqv2qry2P3AAAAAElFTkSuQmCC.png
CRG#8707: Yeah, though I'm still confused with how it can be both an optimizer and a (GLU like?) architecture :thonk: .
Kharr#7888: It's a concept that can be applied to both NN and optimizer design. Just matrix ops that create a specific set of bounds to force values to have a very specific set of characteristics. Think of it like L1/L2 regularization -- you can incorporate this into your NN design and your optimizer updates
Kharr#7888: Can you guess which optimizer made what? They appear to be finding fundamentally different minima 🤷 What's neat is if you line them up, you can see that the general shapes/elements are similar in location. https://cdn.discordapp.com/attachments/729741769738158194/932326728288862279/aDOPfY5K5V8CErlGEDIcV4EZZQhAtHS98klP88IU86YnxTyqeaMERacI4RCgHKMu1VLAOSMIYCcj5KXzmUzTG7SDLBkbPYApisc9rDkDkmOQS36P6ogifDviesEELo3grASIIwIciIyJGxAhvmIAuV1wUTBrrKaJB6Rt1D6obE4DTkwwgWNOFkGfhSgwIggKMuwOJ4hhxgHJ6Jlyo0Tv8BHtEs179ZOk8AAAAASUVORK5CYII.png,https://cdn.discordapp.com/attachments/729741769738158194/932326728557293620/4DP8YNjCvIsygAAAAASUVORK5CYII.png
StellaAthena#3530: I would guess the second is Adam
Kharr#7888: Yes, it seems to have a consistent bias
StellaAthena#3530: Post 19 more and let's do a confidence interval for my ability to guess correctly
StellaAthena#3530: Or, maybe DM me them / post them elsewhere and link to reduce spam
Kharr#7888: Going to take a little while, I'll leave it running and send it later
StellaAthena#3530: At the 99% confidence level, doing 15 total trials gives a diameter of less than 0.1 which seems like a decent amount of precision.
|
chirp#4545: Wait how are you generating these images
chirp#4545: Is this VQGAN or something?
Kharr#7888: This is @alstroemeria313 's VQGAN + CLIP notebook -- see #the-faraday-cage-archive and pinned items https://github.com/EleutherAI/vqgan-clip/tree/main/notebooks (I'm using the z-quantized version)
boris (^_^)#0348: Has anybody played with stochastic weight averaging?
A simplified version would be to save checkpoints every x steps up to a max of n checkpoints.
Then at each checkpointing you do some kind of weighted average of last checkpoints and see if it's better on a validation dataset.
Idea is to see if the loss can be improved further than these plateau's without having to decrease lr. https://cdn.discordapp.com/attachments/729741769738158194/932353297384357958/unknown.png
ilovescience#3282: i have used SWA a couple times in Kaggle competitions and have found it helpful...
ilovescience#3282: typically a good schedule for SWA is cosine annealing with warm restarts... the idea is that you can reach different minima with each cycle and then you average them to reach a better minimum
Gurkenglas#7362: What natural language protocol fits for modifying text to some purpose? Modify code towards using existing libraries, insert puns/references into writing, etc. Certainly one can prompt a model with the input text and ask nicely for the modified text or a diff, but is that the canonical way?
Gurkenglas#7362: I imagine that there should be a category of tasks each of which has a mathematically corresponding protocol, which we don't use because asking nicely wants to work.
JacobProgramz#1792: Hello, I just joined. Nice to meet you, I am Jacob, I have been coding for roughly 7 years now, working in python, PHP, C++, C#, Java, JavaScript, and more. I am currently going to college for physics, and have a strong interest in the mathematics and creation of AI. I was wondering where to begin with AI development, as although I have trained basic 760 parameter CNNs, I have a weak computer and relatively little knowledge on how to code AI, but I can make algorithms. I worked on algorithms similar to the RETRO project, combining the math of a Neural Turing Machine with the math of a Transformer, using the attention mechanisms to be agnostic to inputs and take in large amounts of information and decompose it (for playing things like games, and much more). However, I want to be able to train and test basic models of my ideas in games, and I don't know how to integrate games with python and run an AI I create on top of it to train it. I just need a basic starting point in order to begin this, and test my mathematical skills. If anyone could help me, that would be great!
Tau#4010: If you want existing environments you can look at openai Gym compatible environments. ALE is a common benchmark (but may take more compute). You can define your own games fairly easily. There are various frameworks like rllib and polygames to help organize this and compare models.
JacobProgramz#1792: Thank you, I have created AI from the ground up using basic programs and mathematical tools (Numpy and others), but my main goal is to create an algorithm which uses intelligent machine learning coupled with the resources of a computer such as access to the internet, system RAM, and hard drive. The goal of this is to create an AI model which has the absolute bare minimum number of parameters, but instead uses databases similar to humans to gather information. This should drastically reduce the number of parameters, training overhead, and more, while having the benefits of an intelligent machine. Also, by doing so, it allows it to constantly have access to modern data in order to answer its queries and use to generate responses. I have taken ideas from Google's Perceiver, Retro, and other projects, as well as some of my own math. However, up to this point, I haven't been very much able to experiment and test AI in a setting where I could benchmark their performance against other machines I have created. I want to try to eventually have a machine that learns a game like Minecraft, but have an algorithms which can run and train on a basic home server. I was just looking to see what environments I could use, as well as tutorials, books, and other things in order to learn how to maximize the resources of a machine with the python language for my models.
JacobProgramz#1792: However, it seems like the vast majority of individuals import most of their AI through libraries like tensorflow, which I have nothing against, but that doesn't help me create new mathematical models in order to test them or learn nearly as much as what I need.
JacobProgramz#1792: My ideas are not too far out either, as the RETRO project, which is incredibly similar to mine, ran with only 7B parameters and outperformed GPT-3 at 175B. The advantage of that is that an instance can both train and operate with lower VRAM requirements. However, unless I have a basic benchmark to test these machines, and programs in order to develop and quickly deploy them, I cannot do much research
JacobProgramz#1792: And I really don't know where to begin because even though I understand the math, there are a lot of ways of implementing it, and I don't really know how I should train, develop, and test these machines against each other
Deleted User#0000: I think basically the aspirations from what you want to do and the way you describe your thinking about tools here are ver yvery very far apart
Deleted User#0000: so what you should do is probably spend time appreciating why people use these tools and when it's appropriate to build from scratch and build up your mental model more of what can be done with where you are
JacobProgramz#1792: Indeed. I am trying to learn the basics of implementation using something like tensorflow and python libraries before building my own libraries to create and train models
|
JacobProgramz#1792: I am just wondering the most efficient way possible to get there
Deleted User#0000: so read papers and reimplement models etc, and probably forget rebuilding your own libraries for a long time
JacobProgramz#1792: Alright, but then what libraries, what tutorials, what is the fastest way I can go from taking 7 years of programming experience in unity, hardware, and websites, and then apply it to building state of the art deep learning algorithms?
JacobProgramz#1792: I am asking an individual who has a great amount of understanding and knows the quickest way to learn the maximum amount
JacobProgramz#1792: My only issue is I haven't really used python nearly as much as other languages
JacobProgramz#1792: And I am really struggling to sort through what are absolutely garbage libraries to learn about, old tutorials, useless information, etc
Deleted User#0000: I have not done this myself but read good things about https://course.fast.ai , maybe others have better recs
JacobProgramz#1792: Thank you
JacobProgramz#1792: I have spent a lot of time going through terrible tutorials and none have given me good info I need in a complete manner in order for me to begin taking my math on paper and benchmarking it in games using python. My biggest issue is just learning the python. The algorithms are easy to understand, the math is self explanatory.
JacobProgramz#1792: So thank you all!
JacobProgramz#1792: I literally came here because I have wasted so much time reading old code from like 2014 on github and documentation of Python libraries and watching YouTube videos that are hit or miss in how much they really help you and eventually I got tired, so I assumed many of the people on here have already went through that struggle and learned places to look to begin
tpapp157#3643: Deep Reinforcement Learning is one of those disciplines that sounds super cool and powerful at a surface level but is actually an unending hell to achieve reasonable results on anything non-trivial. For a beginner, my best recommendation is to do something else. The learning curve of deep RL is quite steep and in all likelihood you'll have very little to show for investing a few hundred hours.
naclbbr#9203: I don't know how RETRO was actually implemented in the real test, but RETRO's diagram looked like it's possible to make use of a search engine on the internet rather than a pre-made dataset + BERT key/value combination (so long as there is an appropriate encoder; in this case the AI works only in the text domain), so there should be a possibility of applying a similar approach to game agents
bun#9632: @BeeGass
BDCV#1521: how does a 64 core CPU fare for creating visual AI art?
chirp#4545: If you’re trying to learn, I think a better starting point than reading old stuff is to read about state of the art work and try to understand why it works
Qq#7586: Hey, in a similar boat and I've been trying to do this, but I don't know enough terminology etc to understand research papers - any advice?
chirp#4545: I think a good mindset is
chirp#4545: If the paper is important and I learn just one thing from it (or one new term) then I’ve made progress
chirp#4545: And if you keep making progress you can get quite far
|
chirp#4545: The hard part IMO is knowing what papers are important
chirp#4545: Twitter (and this server) are very good for that
Qq#7586: Cool thanks :)
chirp#4545: A good starting point is maybe to try to understand the papers that everyone is talking about
Qq#7586: Yeah... seems like the state of the art changes real fast (GANs replaced by diffusion?)
chirp#4545: Yeah. It’s not the best idea to try to keep up with every single new paper
chirp#4545: But it’s really helpful to understand, for example, why diffusion models can do better than GANs
chirp#4545: Btw another thing is you can reimplement papers
chirp#4545: If you want to learn to actually do ML, you can take a simple thing and try to implement it yourself. Will teach you a lot about how to get things to work
chirp#4545: I guess it depends on your goals @Qq — what are you trying to get out of your learning?
Qq#7586: Thanks, sounds like good advice :) might have a go sometime (when I'm not drowning in uni work)
Qq#7586: Haha good question - I'm not entirely sure. I guess I just want to be competent enough to produce some cool results/art as side projects
chirp#4545: Oh - if you’re interested in art then this is the perfect place! You don’t need much ML knowledge to get started. There are Colab notebooks in #art that will work right out of the box
Qq#7586: Aha thanks I know, I've been playing around with them for a while and I've found them pretty inspirational :) and as a CS student I'd be interested in getting more involved
chirp#4545: Might be worth asking around in #art — i think the people there have a lot of good ideas that they don’t have enough time to really try out. So if you ask, there might be something you can help them with!
Qq#7586: Ooh cool thanks, I will do - need some time to gain experience first though
asparagui#6391: @JacobProgramz do the fast.ai courses
chilli#5665: @kindiana Hmm, do you think it's fair to say that the only reason increasing batch size improves performance (on a single-GPU model) is that it reduces overhead?
chilli#5665: I know that it probably increases FLOPS for matmuls too, but I feel like that's still fundamentally kind of like overhead
tpapp157#3643: Yeah in terms of moving data between cache and vram. Also if you figure each tensor core computes a 4x4 matmul op (which is why matrix dimensions must be a multiple of 4 to use the tensor cores) which multiplied by the number of tensor cores tells you how many ops you can compute per cycle. So if the total number of ops involved in a computation don't divide evenly into the GPU's tensor cores, it can lead to partially filled op cycles where the remaining cores go unused.
|
tpapp157#3643: Then figure that most models have some amount of ops that can't be computed on the tensor cores and must therefore be kicked to the cuda cores. So you can get into situations where your tensor cores are sitting idle waiting on the cuda cores or for data to be moved around between memory caches. Larger batch sizes generally mean that your tensor cores are actively in use for a larger percentage of you GPU's cycles.
kindiana#1016: It amortizes parameter movement overhead
chilli#5665: I consider tail effects overheads too
StellaAthena#3530: Do you mean the actual batch size or the GAS? Increasing the actual amount of data that loads on the card means that you need to do fewer passes.
alstroemeria313#1694: https://github.com/zh217/torch-dct/pull/21
alstroemeria313#1694: There is a PR that was closed.
Someone needs to fork this lol
And merge it
I don't really care about supporting Python 3.5 or 3.6.
Colab gives me 3.7 and everywhere else I can control what I get.
alstroemeria313#1694: I like the DCT for image stuff, plain FFTs assume periodicity which does not hold for images bc of the edges
alstroemeria313#1694: And also complex parameters/gradients are often trouble for optimizers.
alstroemeria313#1694: specifically, I want to take Olah and Mordvintsev's suggestion that I do gradient descent in a Fourier-like basis with frequencies scaled such that they have equal energy
alstroemeria313#1694: ...How do you pull a pull request or whatever, so you can use it locally
EricHallahan#1051: Can you not just install the 1.9.0 compatibility branch directly from GitHub?
alstroemeria313#1694: Which branch?
alstroemeria313#1694: I don't see any
EricHallahan#1051: This one: https://github.com/jbojar/torch-dct/pull/1
alstroemeria313#1694: Yeah but how
alstroemeria313#1694: oh it is some `gh` command?
|
EricHallahan#1051: ```pip install git+https://github.com/jbojar/[email protected]```
alstroemeria313#1694: ohh
alstroemeria313#1694: ty :blobcutehappy:
EricHallahan#1051: Took a while to compose on mobile lol
alstroemeria313#1694: yeah that branch works
alstroemeria313#1694: ok if i have a dct that works i just need to compute the statistics of natural images
alstroemeria313#1694: i mean i could do that anyway with scipy lol
alstroemeria313#1694: but
alstroemeria313#1694: i couldn't use it
alstroemeria313#1694: dct https://cdn.discordapp.com/attachments/729741769738158194/932862364158672936/dct.png
alstroemeria313#1694: magnitudes https://cdn.discordapp.com/attachments/729741769738158194/932862502713294968/dct.png
alstroemeria313#1694: is there a "dctfreq" equivalent of fftfreq()?
virtualboy#4333: Is anyone working on a gpt-based word processing extension? Like grammarly, but with predictive text generation, sentence rewriting and grammatical fixes? I imagine it could be done with a dataset on that type of stuff and the right prompts.
alstroemeria313#1694: ah it is actually really simple
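presumably something along these lines: for a length-n DCT-II, basis k oscillates at k/(2n) cycles per sample (a sketch):
```python
import numpy as np

def dctfreq(n, d=1.0):
    """DCT-II analogue of np.fft.fftfreq: bin k corresponds to a cosine
    of k / (2 * n * d) cycles per unit."""
    return np.arange(n) / (2 * n * d)
```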
chilli#5665: yeah, the question is why the "per-sample" time goes down
chilli#5665: Is gelu generally considered to perform better than relu for transformers?
chilli#5665: and is there a simple intuition why?
CRG#8707: See: https://arxiv.org/abs/2002.05202
CRG#8707: Something something smooth, something something: https://cdn.discordapp.com/attachments/729741769738158194/932944732231966740/Screenshot_20220118-112856.png
Ravna#1831: Has the conclusion surpassed the stage of being a mere anecdote?
|
Ravna#1831: Are there 100 follow-up papers that confirm it?
chilli#5665: the thing is
CRG#8707: T5 1.1 switched to GeGLU
CRG#8707: <https://huggingface.co/google/t5-v1_1-large>
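For reference, the GeGLU feed-forward variant from that paper looks roughly like this (a sketch; the bias-free linears follow T5's convention):
```python
import torch
from torch import nn
import torch.nn.functional as F

class GEGLUFeedForward(nn.Module):
    def __init__(self, d_model, d_ff):
        super().__init__()
        self.wi_0 = nn.Linear(d_model, d_ff, bias=False)  # gate path
        self.wi_1 = nn.Linear(d_model, d_ff, bias=False)  # value path
        self.wo = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x):
        # GELU(x W_0) elementwise-gates x W_1
        return self.wo(F.gelu(self.wi_0(x)) * self.wi_1(x))
```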
chilli#5665: I think there is actually a significant (well, as significant as activation functions can be lol) runtime cost to using GELU instead of RELU
chilli#5665: But... I don't think any of the current DL frameworks are taking advantage of it
Ravna#1831: All of those improvements are suspicious and I think we need a systematic re-examination in the statistical scale sooner or later.
chilli#5665: and I was wondering if I could come up with an activation function that has the computational benefits of RELU
chilli#5665: but also the performance of GELU
Daj#7482: Wasn't RELU^2 equivalent to GELU?
Daj#7482: In performance
Daj#7482: I remember reading that somewhere
chilli#5665: maybe, I'm not sure what came out of that
chilli#5665: lol
dmayhem93#3202: primer right? https://arxiv.org/abs/2109.08668v1
Daj#7482: I want us to switch to RELU just because it's more satisfyingly sparse/interpretable lol
kurumuz#5695: RELU^2 or ^3 does perform close to GELU
Daj#7482: so someone fix ReLU pls
kurumuz#5695: what is the computational benefits of RELU
chilli#5665: In principle, you can save a smaller activation
|
chilli#5665: Since it's simply a boolean mask
chilli#5665: With most activation functions you save either the input or the output
CRG#8707: Softmax could be replaced with ReLU (+ normalization) as well.
CRG#8707: https://discord.com/channels/729741769192767510/747850033994662000/929825581015638066
chilli#5665: which is usually ... 4 or 8 bytes per tensor
chilli#5665: but with RELU you could save a bit mask
chilli#5665: and save 16/32x memory compared to normal activation functions
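A sketch of that idea with a custom autograd function (note torch.bool tensors take one byte per element, so the full bit-packed saving would need a custom kernel):
```python
import torch

class MaskedReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        mask = x > 0
        ctx.save_for_backward(mask)  # save the mask, not the activation
        return x * mask

    @staticmethod
    def backward(ctx, grad_output):
        (mask,) = ctx.saved_tensors
        return grad_output * mask
```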
Daj#7482: Interesting, you've tried this on large LMs?
kurumuz#5695: We should just design a super performant and competitive perplexity transformer with stuff we have seen so far
Daj#7482: "ReLU Is All You Need"
CRG#8707: https://discord.com/channels/729741769192767510/729741769738158194/931265992837976064
Daj#7482: That does seem rather elegant
kurumuz#5695: throws attention out
kurumuz#5695: puts RELU in instead
Daj#7482: EleutherFormer
nev#4905: releú
Ravna#1831: That would probably just lead to us confirming that the "stuff we have seen so far" are mostly scams that don't improve vanilla transformers at all.
Ravna#1831: :blobsad:
Daj#7482: almost certainly yes
nshepperd#2316: arxiv is a scam to sell more activation functions
|
kurumuz#5695: activation function maximizer
StellaAthena#3530: What are some examples of open data initiatives in NLP that are __not__ about replicating privately held datasets in some sense?
StellaAthena#3530: I would consider the Pile to be an example of what I am looking for, but not LAION, OpenWebText, or BookCorpus2.
sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/933042058019618877/unknown.png
sweg#8920: anyone ever seen any loss functions going all wonky like this for contrastive learning?
sweg#8920: for a 1.7B model
Louis#0144: The gradients are already being clipped
Louis#0144: Although there's some spooky stuff going on https://cdn.discordapp.com/attachments/729741769738158194/933042696174600222/IMG_4553.png
sweg#8920: well they arent in this case but when we did use gradient clipping the result did not change
Louis#0144: @kindiana doesn't GPT j use an epsilon of like 0.1
Louis#0144: Or something weird
sweg#8920: ok wtf if its that large that might be a solution
sweg#8920: cause i tried 1e-4 as largest value
CRG#8707: WD of 0.1 IIRC, not sure about eps.
kurumuz#5695: doubt eps is 0.1
tpapp157#3643: Looks like your model collapses for some reason. Maybe check you aren't over/underflowing anywhere in activations or gradients. Also maybe check you aren't getting nans or infs in your calculations. Things like that.
Sid#2121: ln eps or adam eps?
Sid#2121: either way, neither eps in GPTJ was 0.1
Sid#2121: iirc ln = 1e-5 and adam=1e-8?
Sid#2121: are you tuning with fp16? if so, are you doing loss scaling?
|
MicPie#9427: What is shown here, the gradient distribution?
With gradient *norm* clipping I was once able to stabilize InfoNCE training.
MicPie#9427: Do you already use a lr schedule?
Louis#0144: @sweg
Louis#0144: Get back here
sweg#8920: in a lecture rn gimme 10
Louis#0144: AMP I believe
Louis#0144: So mixed precision
sweg#8920: wait wdym ln
Sid#2121: layernorm
sweg#8920: ooh ok
sweg#8920: i think it has to do with the contrastive loss still
sweg#8920: we added an epsilon term to that as well but didnt fix 🤷
sweg#8920: yep we are clipping gradient by norm and are using a lr schedule
sweg#8920: what value did you use for the clip?
MicPie#9427: 1.0 but it was for a CNN (but I took the value from some Transformer setup)
when I log the grad norm I also see that grad clipping is only active at the beginning of the training and then the grad norms decrease quite fast to smaller values.
MicPie#9427: wandb would show you NaN in your loss with a different marker in the plot, right?
MicPie#9427: and which kind of loss function do you use? InfoNCE?
sweg#8920: no nan loss atm
|
sweg#8920: i think its just cross entropy on the logit matrix
sweg#8920: loss never NANs either
sweg#8920: though it seems the gradients shoot off in a random direction
MicPie#9427: for some contrastive losses you need to normalize the vectors (and, strangely, when you forget it it still somehow works for some time)
sweg#8920: oh yeah we are normalizing
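For reference, the normalized logit-matrix cross entropy under discussion looks roughly like this (a sketch; the temperature value is an assumption):
```python
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    # a, b: (batch, dim) embeddings of matching pairs
    a = F.normalize(a, dim=-1)  # unit-normalize to keep logits bounded
    b = F.normalize(b, dim=-1)
    logits = a @ b.T / temperature
    labels = torch.arange(len(a), device=a.device)
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2
```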
MicPie#9427: I guess you plot all the grads in one plot, then you get all the different values, but I could be wrong on that plot
sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/933057385830961203/unknown.png
sweg#8920: you mean like line 2 and 3 here right
zphang#7252: P3, https://github.com/allenai/natural-instructions-expansion ?
Louis#0144: this is InfoNCE
Louis#0144: lol
Louis#0144: just to be clear
Louis#0144: hence micpie's react
Louis#0144: haha
sweg#8920: yeah mb i thought so but wasnt sure after googling the equation lol
Louis#0144: i just thought it was funny
Sid#2121: is AMP doing dynamic loss scaling, or static loss scaling? bc it can do either iirc, and static sucks
Louis#0144: dynamic loss scaling
Louis#0144: wait what is static loss scaling
Louis#0144: o.o
|
Sid#2121: https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html
Louis#0144: oh yeah we use dynamic
tpapp157#3643: Have you tried without AMP? It can definitely make training unstable more often than not.
tpapp157#3643: At least my success rate on experiments with AMP has been low enough that I've mostly stopped bothering.
tpapp157#3643: But yeah, something somewhere in your model is becoming numerically unstable.
jack#8178: anyone know how to manually call the default pytorch conv2d backwards function?
jack#8178: `torch.nn.functional.grad.conv2d_weight` does some different, much worse thing
jack#8178: @nshepperd alstro said you had found it before?
jack#8178: context - trying to make a fused depthwise conv kernel - current code is here https://gist.github.com/GallagherCommaJack/0321874be9911c1b38af556b628d2468
RyanT#5929: https://twitter.com/croixalmer/status/1483171582507954182?s=21
RyanT#5929: @Louis
EricHallahan#1051: #off-topic
Louis#0144: @sweg have we?
sweg#8920: im not sure, i just remember you added amp and afaik theres nothing in config to turn it off
Kharr#7888: Amp should be very stable. I hope you're using the included optimizer and grad scalers?
Louis#0144: Yeah
mullikine#5015: https://asciinema.org/a/Wl8ti5oE7YK9bdwb3gzsxzB4w
mullikine#5015: Talking to dumbledore about the design of the PenSieve
alstroemeria313#1694: it's... an orb? like in the wizard pondering the orb meme?
alstroemeria313#1694: (It's not actually an orb)
|
Teemochu#8740: I still like relu^param
nshepperd#2316: pensively pondering my pensieve
Teemochu#8740: torch.exp(torch.log(x.clamp(min=1e-10))*self.powparam.to(torch.bfloat16))
Teemochu#8740: this is just relu^powparam but very performance optimized
Teemochu#8740: (this presumes x is bf16; powparam must be fp32)
bmk#1476: i doubt this is more performant than the naive option, it might be more numerically stable though
chilli#5665: yeah
chilli#5665: `torch.ops.aten.convolution_backward`
chilli#5665: You should probably fuse that
Teemochu#8740: oh with triton? it's second on my list once I dive in
chilli#5665: or just with torchscript
chilli#5665: or if you like new things, AOTAutograd 😛
chilli#5665: Triton isn't likely to do better than off-the-shelf fusers in this case
jack#8178: is there some non-default option I have to enable for torchscript to fuse ops? I don't think I've ever seen it produce a >5% speedup, so it can't be doing much fusion
chilli#5665: Like, 5% on a whole model during training?
jack#8178: eheh, 5% on any random computation I've thrown at it. didn't even know I *could* use it for training
chilli#5665: Haha, with default settings it just fuses pointwise ops
chilli#5665: And even that it kinda does suboptimally for training
chilli#5665: So, there's 2 things:
1. You can try enabling NVFuser, which can fuse more operators (including reductions and norm layers) by doing
|
```
with torch.jit.fuser("fuser2"):
....
```
2. That still doesn't solve some of the "suboptimal fusion of pointwise ops" I was talking about, but AOTAutograd does solve that 😛
https://dev-discuss.pytorch.org/t/min-cut-optimal-recomputation-i-e-activation-checkpointing-with-aotautograd/467/6
chilli#5665: (it's still using Torchscript under the hood, but it's a layer on top that lets us do the recomputation optimizations)
alstroemeria313#1694: ...wow, self-normalizing neural networks (https://proceedings.neurips.cc/paper/2017/file/5d44ee6f2c3f71b73125876103c8f6c4-Paper.pdf) actually work?
alstroemeria313#1694: I started stacking layers until I couldn't train it anymore and then tried to find things that made it work, over and over
alstroemeria313#1694: It behaves *visibly* very differently with their special init than with any other init
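The recipe boils down to SELU activations plus LeCun-normal init; a minimal sketch (depth/width are placeholders):
```python
import torch
from torch import nn

def make_snn(depth, width):
    layers = []
    for _ in range(depth):
        lin = nn.Linear(width, width)
        nn.init.normal_(lin.weight, std=width ** -0.5)  # LeCun normal
        nn.init.zeros_(lin.bias)
        layers += [lin, nn.SELU()]
    return nn.Sequential(*layers)
```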
alstroemeria313#1694: have a duck painting made using a SELU deep image prior net https://cdn.discordapp.com/attachments/729741769738158194/933260245516558356/out_00695.png
MyFaceLicksFaces#7144: drugs and cyclic stuff
MyFaceLicksFaces#7144: just kidding i lov eyou all
MyFaceLicksFaces#7144: you are all good people :)
Emad#9608: has anyone seen any good writeups or pieces on why physicists can get billions for colliders or astronomers for telescopes but spend on AI compute is pathetic (supercomputers aside, but those are multipurpose)?
Kharr#7888: Are you referring to funding from governments or something else? Governments around the world still haven't figured out their AI mandates and investment is low. It's mostly industry carrying the torch at the moment.
nev#4905: the torch haha
Emad#9608: gotta go with the flow instead 👀
tpapp157#3643: I understand the ideal of disentanglement, but I never liked the discussion because the entire concept of disentanglement is underspecified. There is no objective mathematical definition of disentanglement, and from a practical perspective most data attributes in real world data cannot be truly disentangled. Even attributes which should in theory be independent are often strongly correlated with other attributes in the real world (truly and spuriously). For example, car color should be an independent attribute because in theory any car can be painted any color but in practice there are strong correlations with other attributes like make, model, year, etc.
|
Ramzi#0418: Hello @O5,
I'm part of a relatively new research institute at Queen's University in Kingston, Ontario, called Ingenuity Labs (https://ingenuitylabs.queensu.ca/). We're focused on the application of robotics and AI to different domains such as healthcare, civil infrastructure, resource extraction, etc. One area we've been exploring is the blend of engineering and art and how AI and robotics contributing to the arts. I stumbled onto the work of @alstroemeria313 on twitter and then your group here at Eleuther AI after exploring some of her github repositories.
We have a regular seminar series where we invite different groups to speak to us about their work in AI and/or robotics. Would you be open to giving us a talk on Eleuther AI, how it originated, and your philosophy on keeping AI models open source? It'd also be great to hear about how this group collaborates in research, how you manage intellectual property to ensure the models are open source, and your collaborations with artists in general. Some of our previous talks are published on our youtube channel, please feel free to have a look here: https://www.youtube.com/watch?v=v9gfqV3u-pU&list=PL4D3JK85Acy7OjjTBROabyBYRvl_zvlZ1
Please forgive me if I'm not posting here in the proper way. It's my first time using discord and I'm still feeling my way around it. Happy to connect via email or other methods as well.
Thanks,
Ramzi
jack#8178: SELU diffusion when?
alstroemeria313#1694: i tried putting self-attention blocks in and it still trained but it wasn't as good
alstroemeria313#1694: like, not residual, no norm before it, and selu after it
Kharr#7888: If you like selu you may also want to try orthogonal init https://arxiv.org/pdf/2001.05992.pdf I've had great success training very deep non-residual networks with it.
coffe-boy#0322: Not so bad for j https://cdn.discordapp.com/attachments/729741769738158194/933433473228763246/IMG_0063.png
Chr0my#0173: It's not unethical if I whip GPT-J into making my essay for school? Also, just regarding the license, I can do that right?
Chr0my#0173: ```This is a tragic play that depicts many different struggles. It is an interesting play, and deals with some problems such as poverty and immigration. It focuses on conflicts and the effect of class and gender. This play will explore the family and their struggles, and will take us through a tragic fall. It will show how these struggles cause a family to fall apart.
```
My slave is doing it already!
Sidd#6307: Question for folks (mostly Pile authors) -- is there an official Datasheet for the dataset?
karser#1622: Hello. I have an AI project that I need professional consultation about.
It needs to generate questions about how an event described in the text is going to affect future events in that specific context.
I'm looking for some advice or guidance on how to solve this task.
|
If you have relevant experience please write to DM.
Louis#0144: you probably wont get many bites here @karser
Louis#0144: iirc theres communities that allow for job listings
EricHallahan#1051: Yeah this really isn't the place for that.
EricHallahan#1051: See #communities for places where it would be more appropriate.
karser#1622: Sorry about that. I'll take a look into those, thanks.
StellaAthena#3530: Yes! It's currently in the queue for arXiv, but you can read it here. https://cdn.discordapp.com/attachments/729741769738158194/933481646303375370/Pile_Datasheet.pdf
ilovescience#3282: this is an old paper which I have read, it uses StyleGAN2 for disentangelement...
I wanted to implement this paper and this was one of the reasons why I was interested in training StyleGAN2 models
evanrmurphy#1090: I was curious about this and got into discussion with the OP. Got convinced it was worthwhile and just wrote this post on LessWrong about it. Please take a look, without spending much time you can help us potentially improve the funding situation for AI Safety quite dramatically:
https://www.lesswrong.com/posts/vq6ztCgFczuH53f4Y/action-help-expand-funding-for-ai-safety-by-coordinating-on
alstroemeria313#1694: Can someone please find me like, the spatial contrast sensitivity function of the human visual system. Like, not as a chart, but as a table of numbers I can actually use for things.
alstroemeria313#1694: the colour-science python package supposedly has it but i can't use it bc i get the error `AttributeError: module 'numpy' has no attribute 'float128'` on import
alstroemeria313#1694: oh it's just an arm64 bug probably
alstroemeria313#1694: how big is a "cycle per degree"
alstroemeria313#1694: what is that in units i can make sense of
alstroemeria313#1694: like on a normal computer screen at a normal viewing distance. one degree of arc is how many pixels/inches/whatever.
alstroemeria313#1694: it's about 30 pixels if we think 18 inches distance and 96 dpi
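quick sanity check of that arithmetic:
```python
import math

distance_in, dpi = 18, 96
px_per_degree = distance_in * math.tan(math.radians(1)) * dpi
print(px_per_degree)  # ~30.2 pixels per degree of arc
```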
EricHallahan#1051: Something like 36 inches?
|
alstroemeria313#1694: so our highest frequency is 15 cycles per degree
EricHallahan#1051: Oh I saw it quoted at 30.
alstroemeria313#1694: bc we are not going to be displaying these images at retina display resolution bc we cannot make them big enough yet.
alstroemeria313#1694: well one cycle must be no less than two pixels?
EricHallahan#1051: Yes
alstroemeria313#1694: "what is the highest frequency my model outputs can actually represent, given that they are not large enough to be displayed hidpi"
EricHallahan#1051: so it is `(2*cycles/96ppi)*cot(1 degree)`
EricHallahan#1051: In terms of viewing distance.
alstroemeria313#1694: *nods*
alstroemeria313#1694: so it's about like this on a log log plot https://cdn.discordapp.com/attachments/729741769738158194/933502867799670794/Screen_Shot_2022-01-19_at_3.26.59_PM.png
alstroemeria313#1694: the contrast sensitivity function.
alstroemeria313#1694: so where is that peak located at.
alstroemeria313#1694: 1.52727041 cycle per degree according to scipy.optimize.minimize()
alstroemeria313#1694: Or like ~19.6 pixels
alstroemeria313#1694: So the question is, at what noise level is the signal to noise ratio *at that frequency* 1
alstroemeria313#1694: Given that the power spectrum of natural images goes like 1/f^2 and Gaussian noise has equal power at all frequencies.
alstroemeria313#1694: So since N(0, I) has power 1 at all frequencies.
alstroemeria313#1694: umm, this is difficult
jack#8178: `No such operator aten::convolution_backward`
jack#8178: do I need to install from source?
|
chilli#5665: oh, sorry, what version of PyTorch are you using?
jack#8178: 1.10.1
chilli#5665: Try `torch.ops.aten.convolution_backward_overrideable`?
jack#8178: ...where would this be documented? `torch.ops` doesn't seem to be a normal module or have readable help
chilli#5665: https://github.com/pytorch/pytorch/blob/302ee7bfb604ebef384602c56e3853efed262030/aten/src/ATen/native/native_functions.yaml#L1194
jack#8178: alright that's a function
chilli#5665: 😆
chilli#5665: There's recently been a cleaning up of PyTorch's convolution operators
chilli#5665: to basically consolidate them under 2 main operators
chilli#5665: `aten::convolution` and `aten::convolution_backward`
chilli#5665: but that's only on master right now
jack#8178: eheh new problem
nshepperd#2316: there's also torch.nn.functional.conv_transpose2d which has a kinda confusing interface
jack#8178: ```
RuntimeError: You are likely triggering this with tensor backend other than CPU/CUDA/MKLDNN, if this is intended, please use TORCH_LIBRARY_IMPL to override this function
```
jack#8178: how would i get the gradients for the weights from that?
nshepperd#2316: oh that's what you want
nshepperd#2316: idk
jack#8178: trying to implement a fused depthwise conv module
|
jack#8178: which remats the weights so you still get the memory savings from parameter reduction
nshepperd#2316: i think the gradient for the weights should be some sort of convolution between the input and the output grad
jack#8178: yeah
jack#8178: so
jack#8178: `torch.nn.functional.grad.conv2d_weight` does that
jack#8178: but it's much slower than the original backward pass
jack#8178: and it allocates a huge amount of memory
jack#8178: `grad_output = grad_output.contiguous().repeat(1, in_channels // groups, 1, 1)`
chilli#5665: yeah, you probably want one of the actual kernel implementations
chilli#5665: Like, this one: https://github.com/pytorch/pytorch/blob/302ee7bfb604ebef384602c56e3853efed262030/aten/src/ATen/native/native_functions.yaml#L1339
chilli#5665: but tbh, you might be better off just using nightly and then using `torch.ops.aten.convolution_backward`
chilli#5665: (which corresponds to `torch.ops.aten.cudnn_convolution_backward`)
jack#8178: wait... could i literally just use `expand` instead of `repeat` here?
jack#8178: and immediately fix the crappy performance?
jack#8178: i bet i can
chilli#5665: lol
jack#8178: oh that doesn't work - expand can only expand dimensions of size 1
jack#8178: that's why they use repeat
jack#8178: eheh
jack#8178: ok i see what this is doing - the repeat is actually only necessary for handling groups
chilli#5665: groups are actually such a pain
chilli#5665: They're the only thing that makes `convolution` not closed under autograd
chilli#5665: 😠
jack#8178: yeah i'm going to ignore them
jack#8178: bc this is for a depthwise conv
jack#8178: if you're doing groups on top of a depthwise conv, that is not my problem
𓅬 gabriel_syme 𓅬#3220: anyone here uses tail free sampling?
𓅬 gabriel_syme 𓅬#3220: I'm trying to understand how to do it with HF, is finetuneanon's still the only place available? Anyone maybe has an example of using it? 🙂
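For reference, a sketch of tail free sampling as a `LogitsWarper` for stock HF transformers, following the original TFS write-up (https://trentbrick.github.io/Tail-Free-Sampling/) rather than finetuneanon's exact code; the keep-first/drop-last padding at the ends is the usual convention but an assumption here:
```py
import torch
from transformers import LogitsWarper

class TailFreeLogitsWarper(LogitsWarper):
    def __init__(self, z: float = 0.95, filter_value: float = -float("inf")):
        self.z = z
        self.filter_value = filter_value

    def __call__(self, input_ids, scores):
        sorted_logits, sorted_indices = torch.sort(scores, descending=True)
        probs = sorted_logits.softmax(dim=-1)
        # Absolute second finite difference of the sorted probability curve,
        # normalized to sum to 1, then accumulated.
        d2 = probs.diff(dim=-1).diff(dim=-1).abs()
        cdf = (d2 / d2.sum(dim=-1, keepdim=True)).cumsum(dim=-1)
        remove = cdf > self.z
        # The two diffs dropped two positions: always keep the top token,
        # always drop the last one.
        remove = torch.cat(
            [remove.new_zeros(remove.shape[:-1] + (1,)), remove,
             remove.new_ones(remove.shape[:-1] + (1,))], dim=-1)
        indices_to_remove = remove.scatter(-1, sorted_indices, remove)
        return scores.masked_fill(indices_to_remove, self.filter_value)
```
It should then slot into `model.generate(..., logits_processor=LogitsProcessorList([TailFreeLogitsWarper(z=0.95)]))`.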
jack#8178: wait
jack#8178: can i just
jack#8178: ```py
gi = rearrange(grad_output, "b c h w -> c b h w")
ii = rearrange(input, "b c h w -> c b h w")
grad_weight = F.conv2d(gi, ii)
```
jack#8178: (that's not quite right bc it's only `c_o x c_i x 1 x 1`...)
jack#8178: ok wait how do I actually get gradients for a >1x1 convolution here? if i include padding it magically works out but only if I do the equivalent of `padding="same"`
AI_WAIFU#2844: You know what pytorch/optax is missing? A half decent stochastic MCMC integrator. Doesn't need to be super fancy, but I do want to be able to do Bayesian inference by changing 1 line of code rather than needing to write my own
jack#8178: but this gives extremely wrong numerical results
jack#8178: so that's not quite right
jack#8178: ah the padding works out because the output will be a different shape if I don't use it
jack#8178: but then... why doesn't this work
jack#8178: ah i just got the ordering wrong
jack#8178: it's `F.conv2d(ii, gi)`
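A self-contained check of that trick (stride 1, no padding, no dilation, groups=1; note the result also needs its first two dims swapped back to `(c_out, c_in, kh, kw)`):
```py
import torch
import torch.nn.functional as F
from einops import rearrange

torch.manual_seed(0)
x = torch.randn(4, 3, 16, 16, requires_grad=True)
w = torch.randn(8, 3, 5, 5, requires_grad=True)

out = F.conv2d(x, w)     # stride 1, no padding, groups=1
g = torch.randn_like(out)
out.backward(g)          # reference gradient via autograd

# Swap batch and channel dims so the sum over the batch happens inside conv2d:
# grad_w[o, i, p, q] = sum_{b, y, x} input[b, i, y+p, x+q] * g[b, o, y, x]
ii = rearrange(x.detach(), "b c h w -> c b h w")
gi = rearrange(g, "b c h w -> c b h w")
grad_w = rearrange(F.conv2d(ii, gi), "ci co kh kw -> co ci kh kw")

print(torch.allclose(grad_w, w.grad, atol=1e-3))  # True
```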
inox#5400: don't they have nuts in pyro?
nshepperd#2316: hamiltorch lol
jack#8178: ok, this is still shockingly slow
jack#8178: i guess because `torch.conv2d` is really not optimized well for large kernels
jack#8178: 10x slower than builtin backwards
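For anyone who wants to reproduce that kind of comparison, a sketch with made-up shapes using `torch.utils.benchmark` (the autograd timing includes the forward conv, so it overstates the builtin cost a little):
```py
import torch
import torch.nn.functional as F
from torch.utils import benchmark

# Made-up shapes: a large-kernel depthwise conv.
x = torch.randn(8, 64, 128, 128, device="cuda")
w = torch.randn(64, 1, 7, 7, device="cuda", requires_grad=True)
g = torch.randn(8, 64, 122, 122, device="cuda")  # output is 128 - 7 + 1 = 122

manual = benchmark.Timer(
    stmt="torch.nn.functional.grad.conv2d_weight(x, w.shape, g, groups=64)",
    globals=dict(torch=torch, x=x, w=w, g=g),
)
builtin = benchmark.Timer(
    stmt="torch.autograd.grad(F.conv2d(x, w, groups=64), w, g)",
    globals=dict(torch=torch, F=F, x=x, w=w, g=g),
)
print(manual.blocked_autorange())
print(builtin.blocked_autorange())
```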
𓅬 gabriel_syme 𓅬#3220: anyone played around with this?
https://github.com/google/flaxformer
AI_WAIFU#2844: Sure, but then
a) I would have to use pyro
b) it can't handle stochastic gradients, which means I can only do fullbatch, which is a no-go.
ilovescience#3282: huh interesting thanks for sharing
𓅬 gabriel_syme 𓅬#3220: is pyro even alive? I seem to remember it became smth else or maybe not
nshepperd#2316: what kind of half decent stochastic mcmc integrators are there anyway
nshepperd#2316: i've read papers about stochastic gradient versions of hamiltonian monte carlo but never seen any of them actually implemented
𓅬 gabriel_syme 𓅬#3220: julia has a nice library I think
nshepperd#2316: the metropolis-hastings ones seem to always require variable batch sizes
𓅬 gabriel_syme 𓅬#3220: AdvancedHMC or smth like it
chilli#5665: @HypnoPump17 btw I fixed the output grad thing
chilli#5665: I'm still not totally sure what you're trying to use it for tbh 😛
Louis#0144: Does anyone else always read mcmc as meme
Louis#0144: Stochastic meme integrators
nshepperd#2316: hehehe
𓅬 gabriel_syme 𓅬#3220: https://www.linkedin.com/posts/yann-lecun_i-think-the-phrase-agi-should-be-retired-activity-6889610518529613824-gl2F
AI_WAIFU#2844: that's the problem, I always need to implement them myself but stuff like this works (even though the implementation described in this paper is wrong and you need to fix it first for the damn thing to work) https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43934.pdf
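For concreteness, the plain SGLD update (Welling & Teh 2011, which may or may not be the method in that PDF) really is just a gradient step on a minibatch estimate of the log posterior, with the likelihood rescaled by N/n, plus Gaussian noise. A toy sketch sampling the posterior over a Gaussian mean:
```py
import torch

def sgld_step(theta, minibatch, dataset_size, eps):
    # theta <- theta + (eps/2) * grad log p(theta | data) + N(0, eps)
    theta = theta.detach().requires_grad_(True)
    log_prior = -0.5 * (theta ** 2).sum()   # N(0, 1) prior
    log_lik = -0.5 * ((minibatch - theta) ** 2).sum() * (dataset_size / len(minibatch))
    (log_prior + log_lik).backward()
    with torch.no_grad():
        return theta + 0.5 * eps * theta.grad + eps ** 0.5 * torch.randn_like(theta)

data = torch.randn(1000) + 2.0   # toy data with unknown mean ~2
theta = torch.zeros(1)
samples = []
for _ in range(5000):
    idx = torch.randint(0, len(data), (32,))
    theta = sgld_step(theta, data[idx], len(data), eps=1e-4)
    samples.append(theta.item())
print(sum(samples[1000:]) / 4000)  # ~2, the posterior mean
```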
Louis#0144: Doesn't HMC make aggressive independence claims such that it doesn't even make sense to train a model with it
StellaAthena#3530: It's approximately a year late, but the datasheet for the Pile is now available on arXiv: https://arxiv.org/abs/2201.07311
EricHallahan#1051: Another website update? 👀
AI_WAIFU#2844: first I've heard of that. What I remember was that the only assumption was that the space be differentiable.
EricHallahan#1051: ```md
**Stella Biderman**, **Kieran Bicheno**, and **Leo Gao**. "Datasheet for the Pile." _Preprint_, 2022. [[arXiv]](https://arxiv.org/abs/2201.07311)
```
StellaAthena#3530: More importantly, IMO, is linking to this prominently on the Pile website
StellaAthena#3530: but yes
EricHallahan#1051: Yeah I don't know how that site works.
EricHallahan#1051: Order them by arXiv ID or by chronology?
EricHallahan#1051: I'm inclined to do the former.