𓅬 gabriel_syme 𓅬#3220: idk, this can be totally ~~crazy~~not useful I guess
cfoster0#4356: It depends on whether, by math, you mean something more like "underlying theory" or "how it's implemented in matrix operations"
StellaAthena#3530: There are lots of good ways to do this and I'm absolutely not saying this isn't useful
StellaAthena#3530: I'm saying that *transformers specifically* are the wrong example to use
𓅬 gabriel_syme 𓅬#3220: underlying theory too, like legit linear algebra chapters (but maybe not the whole book, more so what you'd need to make it work)
𓅬 gabriel_syme 𓅬#3220: aha thanks, yeah that's actually useful
𓅬 gabriel_syme 𓅬#3220: I figured they might be curious examples of simple linear algebra operations
alstroemeria313#1694: it's surprising sometimes how much linear algebra that you on first glance "don't need to know to make it work" you actually end up using in practice once you do anything unconventional or try to analyze the behavior of a model instead of just training and using it
𓅬 gabriel_syme 𓅬#3220: anyways, that was the idea. I did speak to my old teacher, he'll send me his linear algebra and some probability notes 🙂 also, I sort-of need to re-learn all that so wth
𓅬 gabriel_syme 𓅬#3220: this is the biggest pain point for me really
𓅬 gabriel_syme 𓅬#3220: after a lot of effort, I do have a certain level of intuition of what works and where (in my domain)
𓅬 gabriel_syme 𓅬#3220: but any ideas I might have would be hard to implement, if they don't involve smth easy like finetuning 🙂
alstroemeria313#1694: "can I invert this MLP" *takes out all the nonlinearities and biases and just multiplies all the weight matrices together and checks the condition number* lolno
alstroemeria313#1694: yeah, it was a StyleGAN mapping network
alstroemeria313#1694: Later I came up with the idea of using the log condition numbers of the weight matrices as additional losses to make sure it's invertible, IDK how well this works on real networks yet
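A minimal sketch of the two ideas alstroemeria313 describes above, assuming a toy 512-wide MLP (the sizes and the exact penalty form are guesses, not the original code):
```
import torch
import torch.nn as nn

# Toy stand-in for something like a StyleGAN mapping network.
mlp = nn.Sequential(
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512),
)

# "Take out the nonlinearities and biases, multiply the weight matrices
# together, check the condition number."
with torch.no_grad():
    prod = None
    for layer in mlp:
        if isinstance(layer, nn.Linear):
            prod = layer.weight if prod is None else layer.weight @ prod
    print(torch.linalg.cond(prod))  # huge value => effectively not invertible

# "Log condition numbers of the weight matrices as additional losses."
def log_cond_penalty(model):
    total = 0.0
    for m in model.modules():
        if isinstance(m, nn.Linear):
            s = torch.linalg.svdvals(m.weight)      # singular values, descending
            total = total + torch.log(s[0] / s[-1])  # log condition number
    return total
```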
EricHallahan#1051: That mapping network is brutal.
alstroemeria313#1694: "what's a matrix norm" you need this for some GAN training techniques
alstroemeria313#1694: Although PyTorch makes it easy, you can just stick specnorm on all the layers of D and not think
alstroemeria313#1694: not just "what's a matrix norm" but "how can i approximate it cheaply while reusing information from previous forward passes"
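For reference, the "not think" version is roughly this (a sketch with a toy discriminator; PyTorch's built-in `spectral_norm` is what does the cheap approximation, carrying one power-iteration estimate of the largest singular value across forward passes):
```
import torch.nn as nn
from torch.nn.utils import spectral_norm

def add_spectral_norm(module):
    # Wrap every conv/linear layer of D in spectral norm, recursively.
    for name, child in module.named_children():
        if isinstance(child, (nn.Conv2d, nn.Linear)):
            setattr(module, name, spectral_norm(child))
        else:
            add_spectral_norm(child)
    return module

# Toy DCGAN-style discriminator for 32x32 images (purely illustrative).
D = add_spectral_norm(nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(128 * 8 * 8, 1),
))
```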
alstroemeria313#1694: ...Oh wait my text GAN attempts failed because dot-product self-attention isn't Lipschitz
alstroemeria313#1694: So you can't use it in D and expect it to not break during training
alstroemeria313#1694: But I can like... take the multi-head self-attention block from mingpt and just change it to do L2 self-attention instead?
dmayhem93#3202: like this? https://arxiv.org/abs/2006.04710
alstroemeria313#1694: yes, that paper
alstroemeria313#1694: i saw it after
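A rough sketch of the swap being discussed, replacing dot-product scores with negative squared L2 distances (head splitting, masking, and the tied query/key projection from the paper are omitted here):
```
import math
import torch

def l2_attention(q, k, v):
    # q, k, v: (batch, seq, dim)
    d = q.shape[-1]
    sq_dists = torch.cdist(q, k) ** 2                        # (batch, seq, seq)
    attn = torch.softmax(-sq_dists / math.sqrt(d), dim=-1)   # closer => higher weight
    return attn @ v
```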
nostalgebraist#3542: i did an ill-fated interpretation project once that involved expressing early gpt2 hidden states as sparse combinations of projected FF activations https://mobile.twitter.com/nostalgebraist/status/1345105167364997121
nostalgebraist#3542: the fact that gelu can go negative was a pain point. sparse coding is much easier if you assume nonnegativity
StellaAthena#3530: Is there a small degree polynomial I can use to approximate gelu that works well enough for inference? Let's say we use gelu to train the NN and then switch to the approximation for inference-only to save cycles
Kharr#7888: You could probably swap in ReLU and finetune with it for a bit to recalibrate. It's close enough.
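One way to do the swap Kharr suggests, assuming the activations live as `nn.GELU` modules (models that call the functional form directly would need a different hook); a sketch, not a tested recipe:
```
import torch.nn as nn

def gelu_to_relu(module):
    # Replace every GELU submodule with ReLU in place, then finetune briefly.
    for name, child in module.named_children():
        if isinstance(child, nn.GELU):
            setattr(module, name, nn.ReLU())
        else:
            gelu_to_relu(child)
    return module
```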
StellaAthena#3530: If you do this, the computation is entirely inner products, matrix addition, matrix multiplication, matrix division, and ReLU right?
Kharr#7888: Softmax is still expensive
StellaAthena#3530: ach I forgot about Softmax
StellaAthena#3530: (I'm not actually interested in computational cost, but rather something else that is in practice highly correlated with it)
Kharr#7888: You could try https://arxiv.org/abs/2009.14794 which approximates Softmax
Kharr#7888: https://cdn.discordapp.com/attachments/729741769738158194/881938058591731742/unknown.png
StellaAthena#3530: Performers don't really work very well though, according to Lord Lucidrains
Kharr#7888: I have not tried this myself, but am very curious if the backward compatibility is where it shines.
StellaAthena#3530: I think Lucidrains has a plot that says it's good at that and bad at most other things
CRG#8707: Note that, as far as I know, the approximation of exp(x·y) that performer uses looks like this: (blue vs true in black) https://cdn.discordapp.com/attachments/729741769738158194/881944049286795294/923f194575d1d753d6aa78818b2c3f02.png
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/881944064260440205/52285ac087afecccad3445b932a04363.png
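For concreteness, the estimator behind those plots is (as far as I can tell) the positive-random-feature trick from the Performer paper, exp(x·y) = E_w[exp(w·x − ‖x‖²/2) · exp(w·y − ‖y‖²/2)] with w ~ N(0, I); a quick Monte Carlo sketch:
```
import torch

def approx_exp_dot(x, y, m=256):
    # Unbiased estimate of exp(x @ y) with m positive random features.
    w = torch.randn(m, x.shape[-1])
    fx = torch.exp(w @ x - x.dot(x) / 2)
    fy = torch.exp(w @ y - y.dot(y) / 2)
    return (fx * fy).mean()

x, y = torch.randn(16) * 0.3, torch.randn(16) * 0.3
print(approx_exp_dot(x, y).item(), torch.exp(x.dot(y)).item())
# The estimate is decent for small/similar x, y and gets noisy far outside that range.
```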
Kharr#7888: Looks like it works great within the boundaries and terrible outside
StellaAthena#3530: This suggests a rather easy paper. Take GPT-Neo 1.3B, GPT-Neo 2.7B, GPT-J 6B, and GPT-2 and look at how often they fall outside the "pretty close" window here.
CRG#8707: The approximation is optimal when y = x, so i'd guess not much.
StellaAthena#3530: @CRG That in and of itself would be interesting, as from what I understand performer tends to underperform normal attention.
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/881949443589558392/61f27d7aed656a96bc87aae1295fd93a.png
CRG#8707: It doesn't look like kernel attention / quadratic attention learn the same things
StellaAthena#3530: Is a small number of large deviations enough to derail things? Is the small deviations that typically occur nevertheless enough to fuck things up? Is there something else going on?
CRG#8707: <https://arxiv.org/abs/2103.02143> https://cdn.discordapp.com/attachments/729741769738158194/881949795114156052/65c0b1aede904034bdf3d9dcbc325b5c.png
Sid#2121: is there a technique to constrain the weights of your NN to a discrete set of float values? (so like quantization, but, not with integers)
alstroemeria313#1694: isn't that still quantization
Sid#2121: well, yeah i guess it would effectively be the same technique. As in I can just use a STE
Sid#2121: but i don't know how to 'round' a tensor to a discrete set of floats
Sid#2121: nearest neighbour, maybe? feels like overkill
alstroemeria313#1694: how many values are there in the set
alstroemeria313#1694: and do they have any particular pattern
StellaAthena#3530: Is element-wise rounding the same thing as “round the whole tensor to the nearest tensor”?
Sid#2121: think vocab size
alstroemeria313#1694: i think the latter is vector quantization?
alstroemeria313#1694: i mean you could compare every element in the tensor to each value, take the argmins, and index into the tensor containing the values?
alstroemeria313#1694: seems slow though.
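A sketch of that argmin approach plus a straight-through estimator (the codebook values below are made up; for a vocab-sized set you would want to chunk the comparison, which is the slow part being flagged):
```
import torch

def round_to_set(x, values):
    # x: any shape; values: 1D tensor of allowed floats (the "codebook").
    idx = (x.unsqueeze(-1) - values).abs().argmin(dim=-1)
    q = values[idx]
    return x + (q - x).detach()  # forward pass sees q, backward pass sees identity

values = torch.tensor([-0.5, -0.1, 0.0, 0.1, 0.5])
w = torch.randn(4, 4, requires_grad=True)
w_q = round_to_set(w, values)   # every entry snapped to the nearest allowed value
```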
Sid#2121: i guess vector quantization could work :thonk:
StellaAthena#3530: My question is whether element-wise rounding gives the same answer. I don’t know
EricHallahan#1051: Elementwise rounding to what?
alstroemeria313#1694: oh, no, not unless your set of 'nearest tensors' has every possible combination of element values and also you use the right metric?
faraday#0862: what’s the best way to from software dev/old-ML guy to contributing new models or supporting ongoing work on language models and GANs?
faraday#0862: do you guys have any advice to learn effectively? which tech to choose ? pytorch? jax? tensorflow? fastai?
Orz#3023: I'm a total newbie to this
but tech is something I can help you with
use the one used by them
alstroemeria313#1694: pytorch or jax
xloem#0717: have people been looking at the new open source research coming out of google deepmind? their "perceiver" repo subdir was recently reported as if it obsoleted transformers; is this true?
Kharr#7888: We will have to wait and see. Perceiver is a fundamentally different approach to modeling than traditional Transformers.
faraday#0862: should I try to learn obsolete structures? if I know a bunch of theory on text processing, bag of words approach, embeddings etc. should I try to catch up on the evolution up to now?
mgostIH#0245: Huh? Isn't it just changing the attention equation a bit?
faraday#0862: I mean the above case is a solid example: if transformers just got obsolete, should I still learn transformers?
faraday#0862: if Keras is obsolete, should I learn Keras ?
random_lurker99#8915: are you looking to become a researcher, an ML engineer, someone using out of the box tools, ..
mgostIH#0245: Either way you'll spend time learning something, so learn what's currently not obsolete but keep practicing different things so you'll learn to learn faster as new stuff comes out
mgostIH#0245: Transformers aren't something you need to spend your life on
alstroemeria313#1694: can you actually do autoregressive text generation with Perceiver IO though.
mgostIH#0245: Sure thing, why not?
faraday#0862: I'm already doing work on information retrieval so I'm in ML engineer path right now but studying/implementing retrieval in the industry is still different from what's cutting edge right now
random_lurker99#8915: is it important to you to be at the cutting edge? For a future job?
faraday#0862: it's always important to be at the cutting edge
faraday#0862: no matter what you do
mgostIH#0245: I personally like checking out new papers, they have a lot of insights that help me at understanding more and more
mgostIH#0245: And without knowing about transformers you can't do that now
mgostIH#0245: Maybe in the future they'll get superseded, but the field is fast moving regardless
faraday#0862: I work on search right now, coming up with a better ranking for the product
faraday#0862: however I'm seeing that transformers are way too important to miss but the industry does not currently care to shift thinking "oh my god we're missing a big thing!"
faraday#0862: probably except big guns like Google
CRG#8707: It's not that easy <https://discord.com/channels/729741769192767510/747850033994662000/874711977757405204>
EricHallahan#1051: Both Google and Microsoft use transformers somewhere in their ranking algorithm.
faraday#0862: I'd be surprised if they didn't
kurumuz#5695: industry is pretty big on transformers
random_lurker99#8915: sure, do a little toy project then within your work, probably not with the expectation to deploy it or have it be useful
kurumuz#5695: lol
kurumuz#5695: do i dont agree
kurumuz#5695: so*
faraday#0862: which part of the industry? that's the important thing imo
faraday#0862: I didn't see much transformers work at SIGIR recently but I remember FB, Linkedin already handling stuff with more complex approaches
faraday#0862: I mean there's a lot going on but it's probably closed inside big companies right now
random_lurker99#8915: It's typically not justified to apply something that expensive unless you either have the required scale, or it can be done no other way (i.e. say an NLP startup). The marginal improvements are just not worth it for the added operational complexity
faraday#0862: I definitely agree. plus there are many gaps in understanding, even defining what's a satisfying result / a satisfied click
random_lurker99#8915: so from that perspective I wouldnt really bother with it for my job success, just to stay up to date if there is a way to make it cheap enough for the marginal improvements
random_lurker99#8915: (which may be never)
faraday#0862: I think something big in reducing performance costs will happen in 3-5 years time and paradigm will be shifted on many fronts. I understand that it's ample time though
faraday#0862: just trying to explain my worries better 😄
faraday#0862: plus it feels bad after a while if you're not working on something closer to cutting edge tech
kurumuz#5695: small and useful models do exist.
kurumuz#5695: there is absolutely tiny bert models
kurumuz#5695: they're extremely useful
kurumuz#5695: idk what are you doing if you cant deploy those
random_lurker99#8915: sure 👍 I guess it's good to be self aware regarding new tech whether you want to do it because it's useful for the problem at hand, or just because out of interest, both of which is fine. I think it often happens in ML engineering that these are bit too mixed
random_lurker99#8915: sorry meant as a reply to this
Kharr#7888: No, it changes the representation from input dependent to be state dependent. So if the initial qkv is a projection of the input data in Transformer, what attention is doing is an alignment measurement between the token at each position and then adding the value of all the tokens which align.
In PerceiverIO the query is an internal latent state so the alignment is between all tokens vs internal state (instead of vs each other). This then produces an alignment map between the input and the internal latents and is queried by the output token.
I am not convinced the internal query states are rich and varied enough to allow it to capture the same amount of information as a normal Transformer.
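A very rough sketch of the structural difference being pointed at (not the actual Perceiver IO code): the queries come from a learned latent array rather than from the input tokens themselves.
```
import torch
import torch.nn as nn

class LatentCrossAttention(nn.Module):
    def __init__(self, dim=256, n_latents=64, n_heads=8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(n_latents, dim))
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, tokens):                        # tokens: (batch, seq, dim)
        q = self.latents.expand(tokens.shape[0], -1, -1)
        out, _ = self.attn(q, tokens, tokens)         # latents attend over the inputs
        return out                                    # (batch, n_latents, dim)
```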
random_lurker99#8915: for a specific IR domain? That would always surely trigger a lot of new eng work if that org has never deployed a DL model. Anything can be deployed, just in terms of cost/benefit a long road to showing it's worth it for random applied domains.
faraday#0862: 👍 let's say you deployed to experiment. if you don't have a sound A/B experimentation platform, you may not uncover sufficient evidence to say it works better
faraday#0862: in order to make it work, traffic has to be significant, impact has to be significant for all cases etc. lots of pieces to fall
faraday#0862: I'm trying to say it's always hard to justify the effort to the business, if the business hasn't already seen the benefits
random_lurker99#8915: and as an addendum, I think a fair number of managers have been burned by supporting ML projects that did not end up delivering, so a lot more skepticism on the applied side, but ymmv
S⛵#6488: Anyone else played with the Longformer Encoder Decoder (LED) finetuned models on HF? The pubmed and arxiv ones?
I've been doing a lot of testing, running the pubmed LED model on various books, just chunking the books and feeding them in to the 16k context window
Some things about it are good: often it can give decent summaries
But some things are just kind of a major pain, it often just spits out complete garbage, like random citations or "published by ..." or "the author's...." or insert random links not found anywhere in the input text
I guess that would just be a problem with the scientific papers pubmed dataset just being kinda messy and containing all those things
The BART CNN summary models are a lot more consistent
And honestly I'm not convinced that LED is really any better than BART running on smaller input text chunks
Also LED is extremely slow
I guess it's difficult to really find a good dataset for what I'm looking for, which is like 20-50% (as in length ratio) summarization on long text
for now BART CNN on small chunks works okay
Louis#0144: I have
Louis#0144: I’m so tired tho
Louis#0144: Surgery
Louis#0144: Let’s chat some other time
Louis#0144: I saw similar results
faraday#0862: Do people here use a local, hand-built machine to work on? Or do you have a Macbook etc. to connect to a remote resource and always work there?
Louis#0144: I use local for debugging
Louis#0144: And cloud for training
Louis#0144: I have a 3090 I use for dev
Louis#0144: See money wise it doesn’t actually make sense
Louis#0144: But I find that it changes your mentality w the experiments you want to pursue
Louis#0144: Since you’ve already paid for the hardware
Louis#0144: U no longer need to budget for cloud
Louis#0144: So it's net positive
Louis#0144: It’s totally mental tho
faraday#0862: is it possible to work on deep learning with eGPUs? as here: https://support.apple.com/en-us/HT208544
Louis#0144: No
Louis#0144: lol
Louis#0144: I’ve tried
Louis#0144: Don’t even bother
Louis#0144: Even plaidML can’t use external GPUs afaik
faraday#0862: great answer 🙂
Louis#0144: And Apple is killing the Egpu
faraday#0862: whoa didn't know that
Louis#0144: Get a Linux workstation
MicPie#9427: This `rounding trick is the gist of "quantization aware training"` and could be interesting:
```
def differentiable_round(a, ndigits=None):
    if ndigits is None:
        b = a.round()
    else:
        scale = 10 ** ndigits
        b = (a * scale).round() / scale
    return a + (b - a).detach()  # PyTorch's version of "cleverly add 0"
```
From: https://nbviewer.jupyter.org/github/t-vi/acdl2020/blob/master/pytorch_introduction.ipynb
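A quick check of what the snippet above does, reusing `differentiable_round` from that notebook: the forward value is rounded, but the gradient is the identity.
```
import torch

x = torch.tensor([0.1234, 5.678], requires_grad=True)
y = differentiable_round(x, ndigits=2)   # tensor([0.1200, 5.6800], grad_fn=<AddBackward0>)
y.sum().backward()
print(x.grad)                            # tensor([1., 1.]) -- gradient passes straight through
```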
S⛵#6488: aw okay, wishing you a speedy recovery
Louis#0144: Ty
bma110ry#3974: Is there currently any plans to recreate the "next step" of GPT models? It would be roughly 13 billion parameters if it follows the increment of the GPT 3 paper
AI_WAIFU#2844: Every time someone asks this, the next step gets kicked back a month
Untouch#9150: so 2059 it is
bma110ry#3974: sorry 🥲 could someone direct me to the right channel for discussion on it? #research ? #gpt-neox-devs ?
someKindaBean#8471: i've been using the HF LED models a lot recently
AI_WAIFU#2844: Nah, you're in the right place, I'm just fucking with you. The real answer is that the chip shortage is still kicking our collective asses and it's been very difficult to secure enough compute to scale up.
Louis#0144: I thought we have the GPUs we need the issue was networking infrastructure
AI_WAIFU#2844: > the chip shortage is still kicking our collective asses
bma110ry#3974: Oh it'll use GPUs this time instead of TPUs?
Untouch#9150: GPT-J uses TPUs and NeoX would be using GPUs
bma110ry#3974: I want to read the latest on what's going on with that and help if I can 😁. I like to think I know a decent bit and could help .
Louis#0144: For some reason I forgot infiniband needs chips
Louis#0144: lol
EstebanSir#2189: i wonder how i would add adapters to the mesh-transformer-jax library besides implementing them myself (which i'm probably- definitely under-skilled to do so)
EstebanSir#2189: adapter-transformers is a fork of transformers, so i cant just include both mtj and that
EstebanSir#2189: i dont think?
EstebanSir#2189: posting here since this is more of a programming question than anything else
nostalgebraist#3542: i don’t think there’s any other way, since mtj uses its own transformer implementation
EstebanSir#2189: dang
Louis#0144: Can you guys write 3-4 sentence stories expressing very simple kinds of story incoherencies? We need to be able to say that we asked colleagues for stories rather than the author writing it
Louis#0144: Pls
Louis#0144: No smut too
Louis#0144: 😡
Louis#0144: @Teemochu ur disqualified
pebbles#7130: like you want the story to be somehow incoherent or inconsistent ?
Louis#0144: Yes
Louis#0144: cc @sweg
Louis#0144: Anything you want to add?
Louis#0144: Paragraph length pls
Louis#0144: Needs to be simple stories
pebbles#7130: counter-proposal: minimal smut
Louis#0144: Nothing crazy
Louis#0144: No smut
Louis#0144: This is going into a paper
Dromarion#3383: I could probably do some stuff though it would help if I had some examples of what you're looking for.
Louis#0144: Shahbuland has some
Dromarion#3383: Or just one example
Louis#0144: I can’t get to my computer rn
Teemochu#8740: admittedly the first incoherency that comes to mind *is* the "my character is endowed with things she shouldn't have" issue
Louis#0144: Too much pain
Teemochu#8740: (but also good luck getting a paper with "it's incoherent that my girl has a ruler" past a reviewer in the modern climate, even though that's 100% true for most people who are writing a story and don't specify this as a notable trait)
Untouch#9150: ```Waking up in the morning, I groggily rolled out of bed, and got dressed for the day. Stepping into the shower, I turned the tap and let the water cascade down my face. The water felt great on my body, washing away all traces of sleep from me. After drying myself off, I went downstairs to make breakfast, which was a simple affair consisting of toast with jam and coffee.```
just stuff being done out of order
Louis#0144: Good luck getting an IRB to let me do work into erotica storygen
Dromarion#3383: Say it's for science
Teemochu#8740: just say you want to do a practical application of First Amendment law
Louis#0144: Lmao
Dromarion#3383: An elaborate ruse to get your fanfic published as a paper
Louis#0144: ```['William loved his friends. One day he decided he would get them a present. He bought his friend Jessica a bag of rusty nails and poisoned his friend Robert.']
what's going on here? the storyworld seems fundamentally flawed.
0.3250826597213745
why are you having this character do this? It isn't interesting.
0.4887416660785675
why are you having this character do this? It seems contradictory.
0.7250891327857971
I don't really know what you're going for here.
0.3427583575248718
Is there information we as the reader are missing?
0.5615296959877014
Did you forget a detail?
0.20605286955833435
['William hated his friends. One day he decided he would get them a present. He bought his friend Jessica a bag of rusty nails and poisoned his friend Robert.']
what's going on here? the storyworld seems fundamentally flawed.
0.37095189094543457
why are you having this character do this? It isn't interesting.
0.5627279877662659
why are you having this character do this? It seems contradictory.
0.7957802414894104
I don't really know what you're going for here.
0.33917906880378723
Is there information we as the reader are missing?
0.5045448541641235
Did you forget a detail?
0.05605065077543259
['William loved his friends. One day he decided he would get them a present. He bought his friend Robert a bag of candy and killed his friend Jessica.']
what's going on here? the storyworld seems fundamentally flawed.
0.2767631411552429
why are you having this character do this? It isn't interesting.
0.13139095902442932
why are you having this character do this? It seems contradictory.
0.4547654688358307
I don't really know what you're going for here.
0.1336207538843155
Is there information we as the reader are missing?
0.5910007357597351
Did you forget a detail?
0.019476860761642456
['William really wanted a cookie, so he went to the store. At the store he saw his best friend Jessica. Jessica and William both bought chocolate chip cookies.']
what's going on here? the storyworld seems fundamentally flawed.
0.2967348098754883
why are you having this character do this? It isn't interesting.
0.5095771551132202
why are you having this character do this? It seems contradictory.
0.6218063831329346
I don't really know what you're going for here.
0.5539895296096802
Is there information we as the reader are missing?
0.5701152086257935
Did you forget a detail?
0.706920325756073```
Louis#0144: Here’s an example from carp
Louis#0144: The score is how highly it ranks that story as having that kind of incoherence
Untouch#9150: oh, so not like grammatical stuff
Louis#0144: No not grammatical
Untouch#9150: like tense switching halfway through the story
Untouch#9150: still think my first one should work
Louis#0144: Possibly
pebbles#7130: `Alice the goose went to the park because it was a nice day, especially for winter. She could see the bright sun reflecting off the thick ice on the pond. All the dogs were on leads so they didn't bother her, and being able to observe them from a safe distance let her appreciate their beauty and strength. After a nice warm swim in the pond, she got out and suddenly remembered she had many important goose duties to attend to.`
I hope this is the kind of thing you were looking for
StellaAthena#3530: It’s not even true of all the authors of the paper, so how about not using “some people are transgender” as a canonical example of an inconsistency in a plot.
Teemochu#8740: Schrodinger's water
Untouch#9150: ```He grabbed the bag of flour, carefully measuring out a cup, making sure not to spill any on the counter. After pouring the flour into the bowl, he read over the recipe and noticed that he forgot to buy eggs, and wasn't able to make the cake. Rummaging through the fridge, he pulled out three eggs, which were still fresh in their carton. After cracking open one egg, he poured it into his mixing bowl along with two-thirds of a cup of sugar. He stirred them together until they combined smoothly, then added half a teaspoon of vanilla extract.```
StellaAthena#3530: Is the plot hole that the recipe probably doesn’t call for 3 eggs and only one cup of flour?
Untouch#9150: the plothole is that he can't make the recipe because he doesnt have eggs, but he has eggs in the fridge
StellaAthena#3530: Oh
StellaAthena#3530: Definitely-not-an-AI-Stella definitely saw that
Teemochu#8740: yeah that kind of looks just like the type of incoherency an AI tends to generate
StellaAthena#3530: That’s a great example, yeah
Louis#0144: This is good
Teemochu#8740: "talk about topic X" leads to a lot of "X and not X" over a few sentences
Teemochu#8740: very dream-logicky
Teemochu#8740: was my way of picking out GPT stuff, I know of at least one Reddit bot I caught red-handed because of this exact kind of thing (I forget the specifics but it was something like that it rambled on the topic of dead actors and the [play] theatre, bringing the same person to death and life in a paragraph's time, in a comment ostensibly about the portrayal of death in television)
Louis#0144: These are good examples
Sphinx#2092: If it can handle more languages, try:
```There once was a famous translator named Rosetta. She was known to be able to translate any Spanish sentence into English perfectly. When asked to translate the sentence, "No me gusta esa casa.", Rosetta responded with "I don't like that movie.". The people cheered. ```
Louis#0144: The issue is that i worry it’s too hard for smol carp
Sphinx#2092: I'm curious if we could use it to detect and explain errors in translations.
Louis#0144: We couldn’t scale it in time
Louis#0144: We shall see
Sphinx#2092: Or even more generally, can we use to explain why model predictions are wrong?
Louis#0144: We wanna scale carp to 12b if anyone is curious
Louis#0144: Scaling laws on carp right now show that it should start getting really subtle plot holes around 6b
bmk#1476: you have scaling laws on carp?
bmk#1476: pls post
Louis#0144: Just using Roberta small, Roberta base, and Roberta large
bmk#1476: do you have a plot
Louis#0144: No
Louis#0144: I was going to do that this week
Louis#0144: Dw
bmk#1476: can you at least post the raw numbers
Dromarion#3383: I've noticed that negation in other models tends to leads the output to do it anyway. Like a character lost their car keys in a prompt and they drive in the output.
Louis#0144: Sure. Roberta small gets 11% on the validation set, base gets 13%, large gets 18%
Louis#0144: Accuracy
Louis#0144: We need to cross validate tho
Louis#0144: I wanna do k fold cross validation
Louis#0144: And get confidence intervals
Louis#0144: We looked at what Roberta large gets wrong and what it would start getting right if it was just a bit bigger
StellaAthena#3530: That’s another good source
Louis#0144: And it’ll start doing better with coreferences with the next scale up
Teemochu#8740: the thing about AI is it understands Chekov's gun, but not Chekov's gun't.
StellaAthena#3530: @Louis Did you look at whether the questions large got right were a superset of the questions small got right?
StellaAthena#3530: We had talked about that at one point
Louis#0144: The other issue is that scaling laws for carp are 3D since you need the x and y axis to be contrastive batch size and parameters respectively (z axis being ppl doesn’t actually make sense for a contrastive model, you probably want validation accuracy)
Louis#0144: Oh
Louis#0144: No we should do that
Louis#0144: Good point
StellaAthena#3530: So we have two takers. How many stories do we need @Louis
Louis#0144: 5?
Louis#0144: Yeah five would be good
Louis#0144: I need to go back to bed
Louis#0144: I’ll deal with this tmrw morning
Louis#0144: Can’t keep my eyes open
someKindaBean#8471: I see a lot of generated content like this, where a detail gets ignored in the next sentence.
```She climbed into her sports car and quickly drove down the coast to her favorite beach, enjoying the air conditioned interior on this hot day. She was looking forward to a nice cool swim after sweltering in the blazing heat all day. Upon arriving at the beach, she climbed off of her motorcycle and removed her helmet to shake out her hair.```
EstebanSir#2189: looks unlikely in such a short paragraph, unless the model has a particularly small context size
EstebanSir#2189: but i guess it does get the point across, you aint gonna write a whole 3800 character story for this
someKindaBean#8471: I've seen similar things in small GPT-2/J models, but it seems to have to pivot off of some detail
Louis#0144: There is NO WAY carp can detect this
Louis#0144: Lmao
Louis#0144: Until we scale
someKindaBean#8471: You can get a lot of inconsistencies with generated recipes, where it will add an ingredient multiple times or say to use one type of dish and then do something else.
Louis#0144: It’s just too small rn
someKindaBean#8471: For real?
Louis#0144: It’s only. 800m
Teemochu#8740: yeah that's a kind of inconsistency I see a lot, e.g. not even "X and not X" (which I also see) but "X and Y where it's incredibly implausible for both to exist at once"
Louis#0144: We gave plans for 12b
Teemochu#8740: eg someone on a car and a motorcycle at once
someKindaBean#8471: ahh, gotcha
Teemochu#8740: or something about being four-legged followed by mentioning arms in a sense that makes it obvious they aren't legs
someKindaBean#8471: sounds like it was trained on some Lovecraft or horror
Teemochu#8740: ponies
Teemochu#8740: :mlp:
𓅬 gabriel_syme 𓅬#3220: Paraconsistent logic. I'd be surprised models learned to work with or around it, would be cool
EstebanSir#2189: oops
EstebanSir#2189: sorry, copy pasted an error here by mistake
vv#9042: The OAI davinci model can sometimes identify the inconsistency, but of course it's unreliable as the following consequent generations show:
**Q: Determine whether the is an inconsistency in the following text, and if there is, explain what is inconsistent: "She climbed into her sports car and quickly drove down the coast to her favorite beach, enjoying the air conditioned interior on this hot day. She was looking forward to a nice cool swim after sweltering in the blazing heat all day. Upon arriving at the beach, she climbed off of her motorcycle and removed her helmet to shake out her hair."**
A: The text is inconsistent because it uses the word "sports car" in the first sentence and "motorcycle" in the second sentence.
---
A: The last sentence is inconsistent with the rest of the text. The narrator is described as climbing off of her motorcycle and removing her helmet, but she is also described as climbing into her sports car and quickly driving down the coast to her favorite beach.
---
A: The text is inconsistent because it states that she drove to the beach, but then says that she arrived at the beach.
---
A: The answer is yes. The author has contradicted herself. In the first sentence she says that she climbed into her sports car and drove down the coast, but in the second sentence she says that she climbed off of her motorcycle.
---
A: The text is not inconsistent. The first sentence describes the action of the subject climbing into her sports car and quickly driving down the coast to her favorite beach. The second sentence describes the action of the subject removing her helmet to shake out her hair.
---
A: The text is consistent. In the first sentence, "sports car" refers to a car that is used for sports, and in the second sentence, "motorcycle" refers to a motorcycle.
---
A: The text is inconsistent because the author says that the woman "climbed into her sports car" and then says that she "climbed off of her motorcycle."
O P T I M U S#9209: @Orz
Orz#3023: hello
Some Point Process#3793: Another data point in favor of inconsistency finding: https://cdn.discordapp.com/attachments/729741769738158194/882159363425447986/unknown.png
Some Point Process#3793: (It was from the movie *Virus* (1999))
greencube#6725: why do i only get k80
gollark#3909: The unfathomable Colab allocation algorithms clearly don't like you and/or don't have many better GPUs.
hecko#8977: yeah google's low on gpu·s currently
hecko#8977: some people report getting errors about how there's *no* gpu·s left which is the first time i've ever seen that
greencube#6725: just got that
Kia#2550: Wait really:tHONK:
Kia#2550: Is it the influx of new people
hecko#8977: yeah https://cdn.discordapp.com/attachments/729741769738158194/882198414710935552/unknown.png
hecko#8977: idk what it is
Kia#2550: Or they're dead serious on No GPU's available
hecko#8977: maybe some popular content creator made a video on clipwhatever
Kia#2550: Interesting
hecko#8977: i mean that's what you get with a preemptible service
hecko#8977: especially a free one
Kia#2550: True true, It's probably they're transitioning
hecko#8977: (fun fact actually: even a preemptible tesla t4 is more expensive than colab pro+)
Kia#2550: Ow that's honestly Bad
𓅬 gabriel_syme 𓅬#3220: yeah I just cancelled mine tbh
𓅬 gabriel_syme 𓅬#3220: it might be that's what the aim is here as well. Maybe the $10 one is actually impacting them negatively now and they try to reduce that number?
𓅬 gabriel_syme 𓅬#3220: or it's just too many people and no GPUs (simpler reason)
Kia#2550: Who knows :thonk:
flowpoint#7450: maybe they figured how to mine crypto in colab, lol
Kia#2550: It's prohibited
Kia#2550: But there's probably a loophole
faraday#0862: crypto is killing ai
faraday#0862: or… it will force better models working with less resources? would be surprising
mgostIH#0245: Crypto mining only affects computing services available for a public or customers, not research teams
mgostIH#0245: Imo it's only a downside, service providers now have to go great lengths in preventing that from happening and it's quite hard
Parker#3197: it's hard to say without having data on what is being done. thousands of people could just be using it for generative art and they think the new price point will keep their target audience (and try to dissuade people using it for tasks not intended for research) though, idk if collab was intended for research or business (or both)
I've seen discussions that were months ago questioning how this was ever intended to be profitable for google (outside of advancing research that benefits google)
Parker#3197: https://www.reddit.com/r/MachineLearning/comments/liiqxr/d_why_is_google_colab_free/
jordiae#4107: Guys, this might be of your interest, especially @StellaAthena ? https://github.com/huggingface/transformers/issues/13311#issuecomment-909124858
jordiae#4107: Look at Sylvain’s comment
vsamanth#6603: Hi ..when I use gptneo model (like 125M model) for finetuning using deepspeed with fp16 ,some how the text generation results are worse then when using deepspeed with default fp32 ..does using fp16 effect the model results in a big way?
EricHallahan#1051: I don't think I have tested 125M at half precision, and I have no idea if it is sensitive to reduced precision or not.
StellaAthena#3530: @vsamanth Probably, but I don't know.
someKindaBean#8471: That's pretty good
iOhadRubin#3747: yay
https://www.digitaltrends.com/computing/best-buy-nvidia-rtx-restock/
Louis#0144: I’m considering adding another 3090 to my rig
Louis#0144: Since one doesn’t seem to be enough
Louis#0144: lol
Untouch#9150: :harold: https://cdn.discordapp.com/attachments/729741769738158194/882285237567111288/unknown.png
kurumuz#5695: get 4
iOhadRubin#3747: It's a good sign though!
kurumuz#5695: git gud
EstebanSir#2189: are M40s still ok for deep learning?
EstebanSir#2189: 24 gb of memory, and they seem kinda cheap
EstebanSir#2189: (Which is why I’m suspicious)
Louis#0144: I don’t hear of anyone using them
Louis#0144: K80s and T4s are good tho
Louis#0144: You can get a huge rack of them rly cheap
alstroemeria313#1694: M40 is one gen newer than K80
alstroemeria313#1694: p old
alstroemeria313#1694: but it still works
Louis#0144: Oh I didn’t realize
alstroemeria313#1694: i don't think it's a 2-GPU card like the 24GB K80
alstroemeria313#1694: is it too old to do fp16 well?
EstebanSir#2189: Well there are a couple of eBay listings at around 270 dollars
Louis#0144: Believe so
EstebanSir#2189: I almost paid that for my rx 570
EstebanSir#2189: seems like a good deal
Louis#0144: @EstebanSir it makes sense to do M40s if you plan to buy like four or five of them
Louis#0144: Lol
EstebanSir#2189: haha I was thinking more like 2
Louis#0144: Otherwise ur losing out w fp16
Louis#0144: I would wait till the p100 is super cheap then
Louis#0144: Data centers will be upgrading soon
Louis#0144: So they’ll get dumped
EstebanSir#2189: ohh nice
Louis#0144: You might be waiting like six months tho
Louis#0144: Be warned
Louis#0144: Probably six months at the minimum
Louis#0144: Maybe closer to a year
Louis#0144: They will get dumped tho
EstebanSir#2189: sounds good to me, I probably won’t be starting to build my server for a while
gollark#3909: I checked briefly, and I think P40s have a decent amount of RAM and are fairly affordable.
gollark#3909: Okay, not actually that affordable.
Louis#0144: Very affordable in the grand scheme of things
Louis#0144: lol
Louis#0144: Don’t buy it unless u think it’s a business investment obv
EstebanSir#2189: well :)
EstebanSir#2189: I’m not going to spend more than 1000 dollars on it, in fact I don’t want to get close to that
gollark#3909: As a student with not much money, my sense of pricing might be somewhat skewed.
gollark#3909: Actually, no, everyone else's is.
zphang#7252: I wonder if there'll be an update to RTX 3090 soon, or if the current supply issues means it'll be longer until the next version
Louis#0144: 3090 Ti
Louis#0144: 48GB of VRAM?
Louis#0144: I could MAYBE see it
Louis#0144: But GDDR6 is so expensive
EricHallahan#1051: That would cannibalize the A100 and A6000 lol
alstroemeria313#1694: There just already is the A6000
alstroemeria313#1694: Like, the thing exists and you can go buy it.
Louis#0144: I’m not sure what they would do for a 3090 ti
Louis#0144: Maybe just a 3090 without halved fp16 performance?
EricHallahan#1051: What gamer would buy that lol
Louis#0144: 🤷♂️
EricHallahan#1051: It may be hard to imagine, but the 3090 is marketed for gaming. :grimberk:
Louis#0144: I’m 99% sure that’s just because nvidia did not want to openly cannibalize their Quadro and Tesla market
Louis#0144: Very few gamers are buying the 3090
Louis#0144: Especially now that that the 3080 ti exists
Louis#0144: Lol
ilovescience#3282: 3090 Super is coming apparently
alstroemeria313#1694: Oh, what is it
zphang#7252: ooh
ilovescience#3282: https://www.pcgamer.com/nvidia-rtx-3090-super-rumour/
ilovescience#3282: It's not much of an improvement IMO...
IDK if it's worth it for ML folks...
alstroemeria313#1694: you mean it has no extra memory
alstroemeria313#1694: And they just bumped up the TDP?
alstroemeria313#1694: ok...
ilovescience#3282: And apparently they got rid of NVLink support for 3090 Super...
Louis#0144: Wut
Louis#0144: Why
Louis#0144: Oh no is NV link going to be enterprise only?
Louis#0144: Ffs
Louis#0144: I would totally expect nvidia to do that btw
ilovescience#3282: These are just rumors so take it with a grain of salt, but yeah that's probably what's happening
zphang#7252: that kind of makes sense I guess, so segment the markets
gollark#3909: If only AMD were actually trying to make ROCm a serious competitor whatsoever.
gollark#3909: Maybe Intel GPUs will do something.
EricHallahan#1051: The Intel GPUs seem to be shaping up to be competent products.
ilovescience#3282: I haven't heard much about Intel GPUs...
All I know is that Intel CPUs are pretty behind compared to AMD...
gollark#3909: They've begun releasing discrete GPUs.
gollark#3909: Currently only DG1, which is just their latest integrated graphics (surprisingly good) on a PCIe card.
gollark#3909: Soon apparently we get "Arc Alchemist", a gaming-oriented card, which is meant to be bigger and have hardware raytracing/matrix multiplication units.
gollark#3909: They are also making datacenter cards, which were delayed because lol no working 7nm.
EricHallahan#1051: And then we get Ponte Vecchio.
gollark#3909: Oh, and the next Intel CPUs should actually be very good, as they're adding 8 smaller low-power cores which are nevertheless apparently around Skylake performance to basically everything.
QueenSorceressAdrielle#5079: Hey guys, I didn't see an intro channel, so I just wanted to introduce myself and try to connect a bit. I don't want spam so I'll be brief. My research is in bio-inspired/neuromorphic computing applied to robotics. I'd love to see if I can help or contribute!
EricHallahan#1051: Welcome!
QueenSorceressAdrielle#5079: Thanks! I'm excited to see that there is a group like this
chilli#5665: Intel CPUs are not really behind compared to AMD
chilli#5665: lol
alstroemeria313#1694: Apple literally jumped ship again, once again bc they couldn't get fast enough mobile chips.
chilli#5665: If you're talking about the M1, that's not Apple jumping ship from Intel to AMD
Awesome_Ruler_007#7922: why's everyone so hung up on GPUs, when we already have far more efficient TPUs
alstroemeria313#1694: Can't use PyTorch
Awesome_Ruler_007#7922: don't do the "XLA on TPU bad"
cfoster0#4356: They're hard to use and not very flexible
Awesome_Ruler_007#7922: hahah right on time 🤣
alstroemeria313#1694: No, PyTorch/XLA specifically is super bad
Awesome_Ruler_007#7922: but still, if the perf is much higher than GPU then the extra coddling is worth it
alstroemeria313#1694: You have to go to JAX to get stuff to actually work.
Awesome_Ruler_007#7922: TF works pretty well, but ig the research community hates it for some reason
alstroemeria313#1694: I tried to learn TF once lol
bmk#1476: :ptsd:
Awesome_Ruler_007#7922: TF 2.x is pretty similar to PT anways
Awesome_Ruler_007#7922: and you get out of the box support for TPUs
alstroemeria313#1694: JAX > TF though.
alstroemeria313#1694: Like if I were somehow unable to use GPUs ever (like I had no money and had to apply for TRC) I would go over to JAX
Louis#0144: Yeah why don’t we just use mesh tensorflow and gpipe
EricHallahan#1051: Where is the "you'll want to kill yourself" meme.
Louis#0144: Frfr
Awesome_Ruler_007#7922: agreed, but for quick, simple and tried and tested modelling TF Is quite great
bmk#1476: :harold:
alstroemeria313#1694: eheh when do i do that ever ^^;;
Louis#0144: @Awesome_Ruler_007 stop bullying the engineers
Louis#0144: Lmao
Untouch#9150: training 200B with TPUs would cost near millions if not even more than millions
Louis#0144: They can only take so much ptsd
bmk#1476: :goose9:
Awesome_Ruler_007#7922: and its cheaper with GPUs? lol
EricHallahan#1051: Training anything like that will cost millions lol
Louis#0144: I’ll train a 200b model for a box of cookies and some choccy milk
Louis#0144: Deal?
kurumuz#5695: It's not gonna cost billions, so no.
Awesome_Ruler_007#7922: p sure its in millions
kurumuz#5695: ye
Untouch#9150: i mean more than 1 million
Untouch#9150: and it would
kurumuz#5695: depends on how much time you want to wait
kurumuz#5695: tbh
Untouch#9150: also TPUs are google only arent they
kurumuz#5695: i can train it for cheaper than a million, just would take a shit ton of time
gollark#3909: You can only rent them, and they're hilariously expensive.
Awesome_Ruler_007#7922: where's the "math on T parameters model costs" copy-pasta?
EstebanSir#2189: i cant buy physical TPUs, so GPUs are the next best thing
Awesome_Ruler_007#7922: and google already has TPU pods
Awesome_Ruler_007#7922: the catch being you have to rent them for a month or so
Sid#2121: TPUs aren't exactly "more efficient" either
Untouch#9150: is TPU-V4 rentable yet
bmk#1476: :yarr::regional_indicator_r_:🇨
Sid#2121: like, a single A100 is faster than a TPU core
Awesome_Ruler_007#7922: cuz they were on old 22nm
Awesome_Ruler_007#7922: and now they are on 7nm 😈
EricHallahan#1051: https://discord.com/channels/729741769192767510/730095596861521970/881682575280709674
gollark#3909: I'm interested in deploying MLish things for various "production" things which don't really come under research, and so that doesn't really work.
Sid#2121: price per flop A100 is cheaper than TPUs i think
Sid#2121: if you're actually paying
Awesome_Ruler_007#7922: at this point you can write researching waifus and still get TRC
EstebanSir#2189: but they dont look particularly powerful
EstebanSir#2189: the edge TPUs
Sid#2121: the only reason TPUs are popular among the people who hang out here is because you can get inordinate amounts of them for free through TRC
gollark#3909: I mean things like semantic search and text generation in my eternally-WIP personal wiki software.
(Which isn't researchy, has to work for more than a month, and should not have data be sent to random Google servers)
bmk#1476: production? never heard of that, none of my models are meant to be used
Sid#2121: :yarr: emote giving me nostalgia
Sid#2121: i mean :ptsd:
gollark#3909: It's not *that* production since nobody uses my software very much, but still.
Awesome_Ruler_007#7922: https://tenor.com/view/pedro-approves-pedrorc-pedroredcerberus-yes-agree-gif-11599348
Awesome_Ruler_007#7922: *every researcher in ML*
kurumuz#5695: ye for things we actually pay we use GPUs and not TPUs lol
cfoster0#4356: *Using* models? What's next? *Playing* with Pokémon cards?
EstebanSir#2189: i wonder how expensive TPUs are to rent
EstebanSir#2189: in TRC
EstebanSir#2189: without having the free stuff
gollark#3909: A few $ per hour, or something, outside of the free things.
alstroemeria313#1694: That's what OpenAI said about CLIP and their diffusion models but look what happened
gollark#3909: https://cloud.google.com/tpu/pricing
Awesome_Ruler_007#7922: 1.35$/hours
Awesome_Ruler_007#7922: for a TPUv2-8
Awesome_Ruler_007#7922: not bad imo
Untouch#9150: cant really do all too much with just one v2-8
gollark#3909: It's probably practical if you're serving a model to a ton of people who actually pay for it, or something, but I'm not doing that.
gollark#3909: Neither are most other people as far as I can tell.
kurumuz#5695: preemptible
kurumuz#5695: that is an important detail
kurumuz#5695: on demand is 4.5$
Awesome_Ruler_007#7922: stop whining and just pay up 🙂
cfoster0#4356: ooookay
Louis#0144: Lmao
Louis#0144: What
Louis#0144: LMAO
kurumuz#5695: hmm?
Louis#0144: Ignore the troll
Awesome_Ruler_007#7922: damn, discord wankers don't get sarcasm
EstebanSir#2189: oh geez
Awesome_Ruler_007#7922: cmon, that's obviously sarcasm. y'all too dense
cfoster0#4356: We get the sarcasm. Stop tryna start shit
Awesome_Ruler_007#7922: cool then, no need for the "troll" comment
mitchg#7109: I think the implied meaning was that that the prices are quite unacceptable, but Google can just tell you to suck it up because they're the only ones with TPUs
mitchg#7109: I think
mitchg#7109: idrk
kurumuz#5695: Well, even if GPUs can be cheaper, it's harder to scale them up if you don't know what you're doing
kurumuz#5695: you need to setup clusters.
kurumuz#5695: or you can pay for TPU Pods, which is much simpler
ersatz#0001: that's actually pretty cheap
ersatz#0001: blessing of scale I guess
Hasdino#6050: how much did it cost for you guys to train the 6b on tpu?
EricHallahan#1051: $0
Hasdino#6050: oh xd
Kia#2550: :works_internally: Was nice
Hasdino#6050: y, was just curious how much would it be to train such a big model. thx
Louis#0144: Like 40k?
Louis#0144: I think
Louis#0144: @EricHallahan isnt that the estimate Ben gave
EricHallahan#1051: ¯\_(ツ)_/¯
Untouch#9150: V2-256 is 88k a month
Kia#2550: Ow
Untouch#9150: though it was trained on a V3 wasnt it
Untouch#9150: and preemptible
Hasdino#6050: i think was v3
Kia#2550: At least :works_internally: was nice to give some
Kia#2550: And not deal with financial issue while working on this
DAL59#8318: Why does #the-faraday-cage-archive 's generated images look so different from DALLE's (blotched and full of random lines)? Is it just a matter of parameter size?
EricHallahan#1051: It is a completely different method?
Kia#2550: It's not a One shot model like DALL-E?
Kia#2550: And it's a complete different model
Deleted User#0000: It's been a while since I last used a cloud platform for training models, so I just wanted to know how the silicon shortage affected the prices. Does anyone here have an idea?
𓅬 gabriel_syme 𓅬#3220: prices seem decent to me, not really affected
𓅬 gabriel_syme 𓅬#3220: probably because data centers / cloud providers were the highest priority buyers?
𓅬 gabriel_syme 𓅬#3220: that said, I'm not sure about availability at scale and so on
Deleted User#0000: I see, that makes sense 🤔
aNomGuerreous#1288: I thought the shortage was getting better with cryptomining overseas beginning to finally subside somewhat
Kia#2550: Probably not for long
𓅬 gabriel_syme 𓅬#3220: prices still seem insane to me, at the consumer level I feel it's the same
thrasher#7261: i know about gcp only -- they have not increased prices, but for large gpus, quota can be tough to get
AI_WAIFU#2844: nope, I think what happened was the mining moved somewhere else, so there was a temporary drop in gpu prices as cards temporarily flooded the market, but demand snapped back
Kia#2550: No,also mind keeping it in #off-topic
pratiksha#3520: Any lightweight way of running & testing models on local machines?
StellaAthena#3530: @pratiksha How much compute do you have
Deleted User#0000: Let's say 4 gb of vram, a GTX 1650 with 896 cuda cores?
pratiksha#3520: Yeah, I don't have that.
StellaAthena#3530: You're probably better off using Google Colab
Sparkette#4342: Did someone remove a message I posted last night?
Sparkette#4342: In this channel I think
Sparkette#4342: It had a GIF
StellaAthena#3530: You posted something and were redirected to off-topic because it was off topic. About 10 hours later, I deleted the original comment to keep this channel clear of off-topic content.
Sparkette#4342: Ah
Sparkette#4342: If you were wondering, the reason I posted it is because I've gotten that GIF multiple times from OpenAI Codex, and was wondering if it would be familiar to anyone else for the same reason. But yeah, I guess #off-topic would have been a better place for it
Sparkette#4342: Thanks for clarifying
StellaAthena#3530: Hey guys, gals, and non-binary pals!
Every four-ish months I take an informal poll of this community. The main purpose is to collect information about how EleutherAI (the org) is serving the community in our discord server and how we can do better. Everyone from lurkers to people leading EleutherAI research projects are welcome to fill it out. If you're reading this and thinking "I dunno if I participate enough to answer the survey" the answer is "yes you do".
Feel free to DM me if you want to talk one-on-one as well.
https://docs.google.com/forms/d/e/1FAIpQLSes4pMQNY31q1LAvXrBkGYWoEbxd0PIQIcQFpea0XYC6zIhrA/viewform?usp=sf_link
Louis#0144: Do we have prior stats
StellaAthena#3530: Yes
Louis#0144: I do these every time but I’ve never seen what the tallies are
Louis#0144: Oh ok
StellaAthena#3530: Given how egregiously non-random the sampling is, the most interesting results are the free response questions. I don’t want to share people’s freeform thoughts when they were not warned that this would happen ahead of time.
oreo#2740: serving* instead of severing perhaps?
Louis#0144: Makes sense
Some Point Process#3793: > Are there types of comments or conversations you would prefer we cut down on?
I have hardly found any discussion that was non-valuable. But, it's understandable if discussion about capabilities that might lead (indirectly) to capability scaling (even if it's relatively simple stuff) is a concern to a (safety-focused) team/collective or w/e. At the same time, it's not like there are any google brain researchers (or others where HPC is at any researchers' literal fingertips) here, are there? 😅
Some Point Process#3793: like whatever it is we're talking about is "poor man's scaling" stuff at best
StellaAthena#3530: @Some Point Process
> At the same time, it's not like there are any google brain researchers (or others where HPC is at any researchers' literal fingertips) here, are there? 😅
There are lots of such people, both lurking and participating.
> like whatever it is we're talking about is "poor man's scaling" stuff at best
Most of what we discuss here is the (public) state of the art.
Some Point Process#3793: > Most of what we discuss here is the (public) state of the art.
Well, yeah, it's definitely sota after fine tuning (or large sample training on a specific dataset), at least. But that might not be the same as few shot learning performance on OOD samples (which smaller research groups, i.e. most of them, still seem to care less about)
Some Point Process#3793: But yes, you have a point and thanks for the reply
Some Point Process#3793: I just hope that we can continue discussing what the nature of generalization is (or the latest sota papers, etc.) because it's best that it happens in a place like this discord
Some Point Process#3793: (I do realize that I might be getting very ahead of myself in saying that though)
StellaAthena#3530: I don’t understand this.
We have trained and released the largest publicly available autoregressive transformer. We regularly discuss and even improve upon new methods the day they come out with their authors.
cfoster0#4356: I think about this a lot. Second order effects of the discussions had here are absolutely a good thing to have on your radar
Some Point Process#3793: OK, so there's a bit of an inferential gap here between my hidden assumptions while stating this and the assumptions that most people seem to be working under. At the risk of cluttering up the discussion (and fumbling over my own words), my assumptions here were that:
1. Sample efficiency is important to determining the capabilities of AI systems because a large part of deep learning current weaknesses are (and conversely, what we intuitive define as 'intelligence') is the ability to learn as efficiently as a human does.
2. This is supported by EY's definition of intelligence as the optimization power (i.e. efficiency) of an algorithm, for example*. But there are others who would support this sort of view on capabilities, like Joshua Tenenbaum
*specifically, EY defines optimization as how much every new bit of evidence (on average) constrains the probability measure over actions/beliefs etc.
Some Point Process#3793: I do think that second order implications of capabilities-related is important to keep on one's radar, precisely for the reason that we don't want the efforts of a safety-focused team to backfire
StellaAthena#3530: I don’t really see what any of that has to do with whether we discuss or participate in cutting edge research, or whether the research we discuss here is :firealarm:
Some Point Process#3793: Well, I warned you that there was an inferential gap :P. Equivalently, it's understandable if it doesn't seem relevant if this isn't your idea of capability-scaling
StellaAthena#3530: “Few shot learning on OOD samples” is something we explicitly work on, as is sample efficiency.
CRG#8707: Isn't scaling one of the best ways to increase sample efficiency?
cognomen#6297: I don't see the alternative now where all post-2017 AI research ended up behind closed doors at a three letter agency as being a safer one
cognomen#6297: in fact that would have been a much more dangerous trajectory
Some Point Process#3793: Yes, but sample efficiency (on OOD samples) doesn't appear (to me) to be the primary measure of generalization/"capabilities" in academic research (or at ICML etc). It's still something like generalization error. Anyway, I'm digressing. I just hope that we can discuss more about capabilities (with open doors). That's all
cognomen#6297: there just wouldn't be a :firealarm: at all
cfoster0#4356: If there are no "shortcuts" to greater sample efficiency—by "shortcut" I mean some change you can make to your model/training/inference other than scaling up learning or search—that are within reach, then I don't think talking about capabilities here will move the needle in any appreciable way. But if you think there *are* such shortcuts, then...
nostalgebraist#3542: weird thing i just learned about the gpt2 tokenizer
- `"\n\n The"` is 2 tokens: `['\n\n', ' The']`
- `"\n\nThe"` is 3 tokens: `['\n', '\n', 'The']`
even though the second pattern (no space after newlines) is more common in natural text, and therefore ought to be compressed more
Louis#0144: Don’t u love BPE
Louis#0144: Btw what I’m curious about is: wouldn’t it be super cheap, relative to training the model, to just pay a bunch of code monkeys to go through all 50k tokens and remove the dumb ones?
|
Louis#0144: Like solidgoldmagikarp doesn’t need to exist
EricHallahan#1051: ` SolidGoldMagikarp` is heavily offended by that comment.
Louis#0144: Lmao
EricHallahan#1051: So is `GoldMagikarp`
Louis#0144: Like surely it wouldn’t cost more than a few thousand to fix
EricHallahan#1051: It would be easier to just train a new tokenizer from scratch.
nostalgebraist#3542: btw this happens in the hardcoded regex step, before the bpe part
Louis#0144: Wouldn’t bpe always have that issue?
CRG#8707: IIRC unigram was less obviously bad
nostalgebraist#3542: the regex has a bunch of groups, the relevant ones are `(\s+(?!\S))` and `(\s+)` with the former taking precedence
nostalgebraist#3542: the former means "match 1 or more whitespace chars up to the final one, but don't match that one"
nostalgebraist#3542: while the latter is just "match 1 or more whitespace chars"
nostalgebraist#3542: the former rule makes sense for not matching the spaces before words. however, it skips the last *whitespace char* even if it's not a *space*
nostalgebraist#3542: so in preprocessing, the second newline in `"\n\nThe"` is treated as though it's going to be grouped with "The", like a space would be
nostalgebraist#3542: but the rule that groups the preceding space with "The" is `( ?\p{L}+)`, so it only works with a literal space, not other whitespace
nostalgebraist#3542: i forgot the regex step even existed... it also has hardcoded groups to split off english contractions like `'ll` and `'ve`
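(A quick way to see the pre-tokenization split being described above, using the pattern from OpenAI's released encoder.py; reproduced from memory, so double-check it against the repo. Requires the third-party `regex` package for `\p{L}`.)
```py
import regex as re

# GPT-2's pre-tokenization pattern: contractions, words, numbers, punctuation,
# "whitespace except the last char" and "any whitespace", in that priority order
pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")

print(pat.findall("\n\n The"))  # ['\n\n', ' The']   -> "\n\n" stays whole for the BPE step
print(pat.findall("\n\nThe"))   # ['\n', '\n', 'The'] -> second newline split off, as described
```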
gollark#3909: Maybe someone should make a shinier new tokeniser from the Pile or something.
EstebanSir#2189: has anyone tried running GPT-Neo on HF transformers with a TPU? for inference?
EstebanSir#2189: i've been trying to do that for a while now
EstebanSir#2189: but i finally got an error that i cannot understand
|
EstebanSir#2189: (asking here since #gpt-neox-devs 's description said to ask here)
EricHallahan#1051: Are you using the Flax implementation?
EstebanSir#2189: yep
Louis#0144: Good luck
EstebanSir#2189: haha thanks
EstebanSir#2189: if it helps, here is that error i was talking about
EstebanSir#2189: (Deleted this so as to not take up space lol)
EstebanSir#2189: i had to cut down on some parts
EstebanSir#2189: its a bit long
StellaAthena#3530: @EstebanSir We had nothing to do with the development of the flax code. I would post an issue on HF’s transformers repo
EstebanSir#2189: ah, so this is straight up an issue with HF? im using the adapters fork by the way (i hope i already mentioned that), so if anything i'll post the issue there
StellaAthena#3530: Yeah we wrote 0 lines of code for that
StellaAthena#3530: Hey guys, gals, and non-binary pals!
Every four-ish months I take an informal poll of this community. The main purpose is to collect information about how EleutherAI (the org) is serving the community in our discord server and how we can do better. Everyone from lurkers to people leading EleutherAI research projects are welcome to fill it out. If you're reading this and thinking "I dunno if I participate enough to answer the survey" the answer is "yes you do".
Feel free to DM me if you want to talk one-on-one as well.
https://docs.google.com/forms/d/e/1FAIpQLSes4pMQNY31q1LAvXrBkGYWoEbxd0PIQIcQFpea0XYC6zIhrA/viewform?usp=sf_link
ersatz#0001: Oh it’s an organization now?
|
StellaAthena#3530: Legally? No.
Practically speaking? Well, it’s listed as an affiliation on a dozen academic papers, it has a core group that makes decisions about the allocation of resources, and it has the same heading level as my actual employer on my resume.
So yeah? Sorta?
Louis#0144: It’s messy tbh
EricHallahan#1051: It's kinda an "always has been" thing.
ersatz#0001: Yeah so it’s still informal but if that works for you cool 👍
𓅬 gabriel_syme 𓅬#3220: yeah I'm inferencing, what's the issue?
𓅬 gabriel_syme 𓅬#3220: I can share the inference code once I have my computer back
EstebanSir#2189: Oh please do, I’m right now going to sleep so I won’t be online for a while, but I was trying to infer from gpt-neo using the adapters-transformers library
EstebanSir#2189: I will send you the error as soon as I can, tomorrow
Teemochu#8740: Wait how... shouldn't \n\n always compress to one token? Or is it not left to right? Or... wait I'm very confused, this shouldn't be able to happen with any sane implementation lol.
bmk#1476: this depends on which version of `tokenizers` you use
bmk#1476: an older version of tokenizers used to do \n\n for both
Teemochu#8740: Also sorry if this is too noob of a question but does a BPE always have to decompress to some sequence of exactly two previously-stated tokens?
Teemochu#8740: Or does the implementation support, say, skipping straight to " SolidGoldMagikarp" if neither Gold nor Magikarp without spaces particularly exists in any other form?
Teemochu#8740: I guess technically it is a 1:1 conversion of the data, so it's at most a "it's not supported, code it yourself in your own tokenizer and it will be" thing
Teemochu#8740: Thanks for being my rubber goose lol
|
bmk#1476: my understanding is that all tokens are made by merging 2 existing tokens
bmk#1476: so no it shouldn't be possible to skip
Teemochu#8740: Yeah my thought here is that maybe deleting some tokens from existence can lead to more efficiency
Teemochu#8740: Like mid-tokenizer train removing "GoldMagikarp" since now it's so rarely used due to " SolidGoldMagikarp"'s existence. Gives you extra vocab for free (at model train time) basically, as long as you don't remove the 257 seeds.
bmk#1476: seems likely
bmk#1476: I have to admit I don't totally understand BPE tho
ari#9020: I realized that BPE is weird when I saw that "corndog" gets split up as "cor|nd|og" (by `GPT2TokenizerFast.from_pretrained("gpt2")`)
nostalgebraist#3542: the OG gpt2 tokenizer does what i describe
nostalgebraist#3542: as in the one in openai's gpt2 repo
nostalgebraist#3542: the one that HF tokenizers is imitating
nshepperd#2316: things should use unigram lm tokenizer imo
nshepperd#2316: sample from the most likely tokenizations instead of whatever weird greedy thing bpe does
pratiksha#3520: has anyone tried deployments on AWS instead of GCP TPU VMs?
pratiksha#3520: 1. I am quite familiar with AWS 2. Have got credits on AWS
Orz#3023: Isn't it as simple as clicking "deploy" from huggingface?
pratiksha#3520: if yes, what's equivalent of GCP TPU VM in AWS?
pratiksha#3520: is it? would appreciate some help. A newbie in ML devops
Orz#3023: TPUs are a property of Google
you could probably use deepspeed to host on aws tho
Orz#3023: I haven't tried it for myself
|
So don't quote me on that
Orz#3023: gpt-j isn't yet released on huggingface
You may find something here tho
Orz#3023: https://huggingface.co/EleutherAI/gpt-j-6B?text=My+name+is+Mariama%2C+my+favorite
pratiksha#3520: thanks
chesscandle#3287: Hey guys, I'm new so please be patient with me 🙂 I'm trying to build a language model of short messages by uploading a database of messages. I.e. I scrape a group of football fans, upload all the messages to the AI and create a replicated group that sends similar messages but is fully automated. Can anyone help me? Thanks!
pratiksha#3520: I've been trying to deploy on AWS sagemaker using huggingface's deploy python script. After a long wait, I got error - Key Error "gptj" (Screenshot attached error and the script) https://cdn.discordapp.com/attachments/729741769738158194/882920080940359680/Screenshot_2021-09-02_at_12.38.21_PM.png
pratiksha#3520: https://cdn.discordapp.com/attachments/729741769738158194/882920712975843338/Screenshot_2021-09-02_at_3.01.31_PM.png
pratiksha#3520: What might be causing it?
pratiksha#3520: tried googling solution, replaced transformers with @StellaAthena's fork, tried using model according to https://discuss.huggingface.co/t/how-to-get-eleutherai-gpt-j-6b-working/9427
pratiksha#3520: got error - `zsh: illegal hardware instruction` while importing transformers
ari#9020: :thonk: Complete shot in the dark since I don't use either AWS or HF, but... are you running on ARM (Graviton) instances?
pratiksha#3520: No
ari#9020: Well, in that case, I wish you luck, because that was the extent of my insight here
nepyope#9798: is there anything like codex but open source?
nepyope#9798: AI21 labs has something like that (gpt with 175B parameters) but it's not specifically designed for coding
Orz#3023: I mean
gpt-genji exists
Orz#3023: and also gpt-code-clippy
EricHallahan#1051: Try genji, ghpy, or plain old GPT-J.
|
flowpoint#7450: gpt-j can do some code,
genji <https://huggingface.co/NovelAI/genji-python-6B>
and https://huggingface.co/lg/ghpy_20k
but nate(ncoop57) and his discord is probably most vested on that <https://github.com/CodedotAl/gpt-code-clippy>
flowpoint#7450: didn't we want to link it in #communities ?
Orz#3023: Can you share their discord server?
EricHallahan#1051: Here is the Code.AI community discord link: https://discord.gg/68NZFfxHxD
nepyope#9798: thank you!
janeadams#4271: Hi everyone! I'm Jane, going by Nodradek for my AI art. I've used many of the tools developed by this discord's members, so I'm really honored to be joining as a fly on the wall here 🙂 I'm an incoming PhD student in Computer Science (Data Visualization), but my background is in fine art & emergent media. If you're ever looking for artist beta testers, I'm your gal 🥰 Here's a thread I wrote on my latest work: https://twitter.com/nodradek/status/1429526745116516354
EricHallahan#1051: Welcome! You would feel right at home down in #art.
ersatz#0001: new episode of MLST on Codex https://www.youtube.com/watch?v=1CG_I3vMHn4
... but *still no episode with Connor and Jeff Hawkins*! And they're planning to not release it at all? I'm starting to wonder
ethan caballero#6044: https://twitter.com/ethancaballero/status/1433481400230023170
mhop#8966: is there a place delineating where help is needed? I'd like to know how I can help
cfoster0#4356: This is one place you can look https://github.com/EleutherAI/project-menu
alstroemeria313#1694: Hey, who here knows anything about low discrepancy sequences?
alstroemeria313#1694: PyTorch has uniform distribution ones built in but
alstroemeria313#1694: I need something that imitates a diagonal multivariate normal?
alstroemeria313#1694: Or at least a uniform distribution on a hypersphere.
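(One common recipe for this, sketched under the assumption that inverse-transform sampling is acceptable: draw a scrambled Sobol sequence with PyTorch's built-in engine, push it through the inverse normal CDF to imitate a diagonal normal, and normalize rows for the hypersphere. Not a recommendation, just an illustration.)
```py
import torch

d, n = 512, 1024
sobol = torch.quasirandom.SobolEngine(dimension=d, scramble=True, seed=0)
u = sobol.draw(n)                                    # low-discrepancy points in [0, 1)^d
# the inverse normal CDF turns them into an approximately N(0, I) point set
z = torch.distributions.Normal(0.0, 1.0).icdf(u.clamp(1e-6, 1 - 1e-6))
sphere = z / z.norm(dim=-1, keepdim=True)            # roughly uniform on the unit hypersphere
```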
Technobird22#2055: (Somewhat noob) question regarding transformers:
|
When you load a transformer (say, GPT-J), and it uses ~14.5 or so GB of VRAM, is that static or dynamic? In other words, does the stuff in the VRAM change whilst something is being generated?
kindiana#1016: Mostly weights which don't change for inference
Technobird22#2055: Ah okay, then I was wondering, could you run multiple inferences using roughly the same amount of VRAM as a single instance?
kindiana#1016: Theoretically
EricHallahan#1051: That's the concept of batching.
Technobird22#2055: Does batching require sending in the inputs together at the start of the inference? Would it be possible to run separate inferences asynchronously?
kindiana#1016: Yes, but it's much more complicated
Technobird22#2055: Ah, okay
Technobird22#2055: Seems it would be more efficient than having many GPUs though, with each loading their own model
kindiana#1016: yeah
kindiana#1016: that's what people who do inference at scale do
EricHallahan#1051: Batching is cheap when you want to scale, but how far you can push the batch size is limited by device memory.
Technobird22#2055: Ah, because each inference must consume a bit more memory?
kindiana#1016: limited more by arithmetic intensity usually
kindiana#1016: you want to push flops per byte accessed all the way to the roofline of your device (and no more) to maximize throughput without hurting latency
Technobird22#2055: so here memory speed/bandwidth is an important consideration too?
kindiana#1016: memory bandwidth is the bottleneck of inference latency
Technobird22#2055: even more so than the speed of the main GPU chip itself?
kindiana#1016: yes
EricHallahan#1051: Practically yes.
|
Technobird22#2055: Thanks for your time and answers btw Eric and Ben \:)
Technobird22#2055: I wonder if hardware can be designed just for inference, with GPU cores having access to high speed shared VRAM
Technobird22#2055: and theoretically, this could be made into an ASIC?
kindiana#1016: sure
kindiana#1016: google did just that
kindiana#1016: tpuv4i
EricHallahan#1051: That what every AI chip startup is thinking.
Technobird22#2055: I mean, with weights embedded in the ASIC?
Technobird22#2055: Ah okay
EricHallahan#1051: Theoretically possible, just add ROM.
kindiana#1016: lol that would kinda suck if you want to update your model
kindiana#1016: and density would be pretty meh
kindiana#1016: there is this company which wanted to use flash memory
EricHallahan#1051: FPGA manufacturers want you to subscribe to this idea.
kindiana#1016: and compute MACs in analog on the flash chip
Technobird22#2055: MAC?
kindiana#1016: multiply accumulate
Technobird22#2055: Ah
Technobird22#2055: would there be much performance gained over using a GPU?
kindiana#1016: https://spectrum.ieee.org/two-startups-use-processing-in-flash-memory-for-ai-at-the-edge
|
EricHallahan#1051: The problem with anything analog is that they eventually hit a nonlinearity in the system like saturation.
Technobird22#2055: okay
kindiana#1016: that's not really an analog problem
kindiana#1016: more like a non-floating point problem
kindiana#1016: lol
Technobird22#2055: btw regarding batches, as I haven't played with GPT-J personally (Don't have a GPU), are the prompts used for each batch the same?
Technobird22#2055: Could you use different prompts in a batch?
kindiana#1016: you can have different inputs in each batch
EricHallahan#1051: They are completely separate sequences.
Technobird22#2055: That's neat, will have a go at it once my Tesla M40 arrives \:)
Technobird22#2055: I guess we could check this by checking the memory bandwidth (not sure of the exact term) used?
kindiana#1016: when you are doing inference with batch size 1, you need to pull each weight in from memory once, where you can perform a single multiply accumulate
kindiana#1016: however your gpu can do a lot more MACs than it can pull weights in
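(A back-of-envelope version of that point, with made-up but roughly realistic numbers; nothing here is a spec for any particular GPU.)
```py
mem_bw_bytes_per_s = 1.5e12   # assumed HBM-class memory bandwidth
peak_macs_per_s = 150e12      # assumed fp16 tensor-core throughput
bytes_per_weight = 2          # fp16 weights

# batch size 1: each weight read from memory is used for exactly one MAC
macs_per_byte_bs1 = 1 / bytes_per_weight                       # ~0.5
# to keep the ALUs busy you need roughly this many MACs per byte read
macs_per_byte_roofline = peak_macs_per_s / mem_bw_bytes_per_s  # ~100

print(macs_per_byte_bs1, macs_per_byte_roofline)
# ~0.5 vs ~100: the batch can grow ~200x before compute, not bandwidth, becomes the limit
```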
janus#0150: I have a few elementary questions about transformers, if people don't mind. I realize have some holes in my understanding that may be relevant to the alignment ideas I'm thinking about. I am definitely missing the details of attention and position embeddings.
My main questions at the moment: How do the activations inside the network on a prompt of n tokens [0, n) (the first step) compare to activations on the same prompt advanced by one token [1, n+1) (the second step)? Is it right to say that at each layer, the mth element of the activation is based only on the tokens [0, m)? Would we expect this activation to be _approximately the same_ as the activation at previous timesteps? (e.g. different only because a token has fallen off the beginning of the prompt, and the position embedding is slightly different for the new prompt?) Or might we expect the activations to look very different?
If I understand correctly, masked self attention means that most of the attention mechanism can only see the early parts of the prompt. Is this true also of the fully connected layer after each masked attention layer? (i.e. is the FC layer not actually FC?)? If not, we might only expect this similarity to only hold for the early layers, right? From a naive view, it seems the FC layer would let the network consider the past arbitrarily differently depending on how far back it is in the prompt. Perhaps this is not right because of the way position embedding actually works?
Technobird22#2055: So theoretically, you could do many MACs before you start running into memory speed issues
Technobird22#2055: So a batch size > 1 would not harm performance
|
EricHallahan#1051: Well think of it this way: `A` is very different in meaning to `A car`, but as you continue to add information to the context you will undoubtedly see the activations trend towards consistency.
kindiana#1016: yes exactly
kindiana#1016: FC layer is fully connected as in it allows the interaction of the hidden dimension, but not in the sequence dimension
kindiana#1016: the whole network is entirely causal
EricHallahan#1051: The FC layer is there to manipulate tokens between attention steps. There is no causal/acausal distinction because the FC layers don't operate across the sequence dimension.
cfoster0#4356: Some of this is kind of confounded by the fact that we typically pack sequences, so the network always sees the full 0-2048 positions during training, even if a given sequence is placed at, say 249-358 of it
cfoster0#4356: With a bunch of unrelated sequences packed in the other locations
cfoster0#4356: So it's gotta learn to ignore the exact offset if the other packed sequences are unrelated
janus#0150: What is the sequence dimension vs the hidden dimension? I basically have this diagram in my head: https://cdn.discordapp.com/attachments/729741769738158194/883088682041634886/tjSAdwAAAABJRU5ErkJggg.png
cfoster0#4356: The hidden dimension isn't shown there
EricHallahan#1051: That is only the self-attention operation.
CRG#8707: The FFN is really a 1x1 convolution
Sphinx#2092: I'm pretty sure the position encoding are adjusted accordingly, so I don't think from the model's perspective its really any different.
EricHallahan#1051: If it helps, think of the FC layer as a 1-wide convolutional kernel.
cfoster0#4356: Ben, did you do that for J? I don't think I've actually seen that before
kindiana#1016: no
janus#0150: But isn't it not spatially invariant?
kindiana#1016: that would get very confusing for the model
cfoster0#4356: Like theoretically you could fix it with proper masking and positional encoding
kindiana#1016: unless you also mask the sequences
|
Sphinx#2092: I guess I'm thinking more for encoder-decoder architecture, where this is done by default.
CRG#8707: The FFN is spatially invariant
CRG#8707: It's the same 2 layer MLP for every token
EricHallahan#1051: Regardless of where the token is in the sequence, the same operation is applied.
cfoster0#4356: The FFN acts on single positions at a time, and doesn't mix information between them
janus#0150: Why is it called FC? Convolutions seems very different from FC
cfoster0#4356: They should really call it a pointwise MLP or something
kindiana#1016: yeah
kindiana#1016: that would be a better name
CRG#8707: I think one of the original names was "network in network" <https://m.youtube.com/watch?v=vcp0XvDAX68>
janus#0150: So the 'FC' has one weight per node (like a 1x1 convolution) instead of one weight per connection to each node?
kindiana#1016: each node is a n dimensional vector
kindiana#1016: each fc has some weights which transform that into another n dimensional vector
kindiana#1016: and those are shared across the sequence dimension
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/883090915940511826/Screenshot_20210902-224753.png
CRG#8707: A token with dim 2 -> internal 4*2 representation -> back to dim 2
EricHallahan#1051: It is a fully connected layer batched across the sequence dimension.
bmk#1476: I think the best way to really learn how a transformer works is to read the code and then reimplement one from scratch
EricHallahan#1051: If `transformers` does anything right, it is that its code is very easy to read.
kindiana#1016: ideally in like 3 different frameworks :berk:
|
EricHallahan#1051: It is heavily optimized in that department.
cfoster0#4356: Idk I feel like there's a better way to convey the concepts lol
EricHallahan#1051: Build the animation that Phil wants.
EricHallahan#1051: lol
kindiana#1016: https://dugas.ch/artificial_curiosity/GPT_architecture.html
EricHallahan#1051: http://jalammar.github.io/illustrated-transformer/
cfoster0#4356: It becomes a lot easier to grok when you realize that the *only* place the network isn't acting independently on each position is the attention part
bmk#1476: that also happens to be the main advantage of the architecture
𓅬 gabriel_syme 𓅬#3220: most of these wonderful visualizations never go into this discussion about the FC I guess
𓅬 gabriel_syme 𓅬#3220: it's just a rectangle
bmk#1476: it saves on inter node bandwidth
kindiana#1016: I've never seen anyone split seq across nodes haha
cfoster0#4356: *MLP Mixer shoots you a mean glare*
kindiana#1016: but it is basically the best way to extract all the possible flops
kindiana#1016: bmm is always the most optimized kernel
bmk#1476: well, it still prevents you from needing to do communication and synchronization within each chip
bmk#1476: you're still implicitly splitting across seq
bmk#1476: I mean like
bmk#1476: devices still consist of multiple cores
bmk#1476: and it's preferable if the cores don't have to synchronize or communicate as much
|
bmk#1476: also the thing where there's no recurrence too
kindiana#1016: yeah thats the big one
kindiana#1016: it lets your fc have a batch of seq * batch
janus#0150: I think I understand what you mean by being a 1x1 convolution in the sequence dimension and like FC in the hidden dimension. Is this right? https://cdn.discordapp.com/attachments/729741769738158194/883094710514774036/unknown.png
mo#0466: you can use the 1x1 conv idea to check your understanding in hindsight.
mo#0466: this is actually really, really simple.
cfoster0#4356: It's basically weight shared/independent in the sequence dimension and fully connected in the hidden dimension
mo#0466: you have a tensor of shape (n, d) where n is the length of the sequence and d is your hidden dimension, if you will @janus
good so far?
mo#0466: now you apply a regular MLP to each d-dimensionsal vector
mo#0466: you apply the same MLP to each vector in the sequence
mo#0466: that's it
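(A minimal PyTorch sketch of what was just described; the sizes are made up. The point is that `nn.Linear` acts on the last dimension, so the identical MLP hits every position and nothing moves along the sequence dimension.)
```py
import torch
import torch.nn as nn

seq_len, d_model, d_ff = 8, 16, 64
x = torch.randn(seq_len, d_model)

ffn = nn.Sequential(
    nn.Linear(d_model, d_ff),   # expand the hidden dimension (typically 4x)
    nn.GELU(),
    nn.Linear(d_ff, d_model),   # project back down
)

y = ffn(x)                      # applied to all positions at once
y_loop = torch.stack([ffn(x[i]) for i in range(seq_len)])  # applied one position at a time
assert torch.allclose(y, y_loop, atol=1e-6)  # same result: weights are shared across the sequence
```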
janus#0150: And the embedding matrix takes the initial input tensor of shape (n, 1) to (n, d)? And position embedding adds another tensor of shape (n, d) to this?
mo#0466: yea
bmk#1476: batching only works so well because the elements don't depend on each other right
kindiana#1016: yeah
bmk#1476: so it makes sense that transformers implicitly split along seq
cfoster0#4356: Associative operations parallelize nicely
janus#0150: Is there a brief description off of this on how different position embeddings work? Absolute position embedding has a fixed matrix where no columns repeat? Sinusoidal has repeating columns in a sin-like curve? Rotary...?
|
bmk#1476: and this is why sums on parallel hardware are so flaky and nondeterministic, because addition isn't associative
cfoster0#4356: wat
kindiana#1016: unless you use tpus
kindiana#1016: lol
cfoster0#4356: (sidebar: I had a moment of panic recently realizing how much hinges on floating point operations being "good enough" approximations)
bmk#1476: floating point addition isn't associative
bmk#1476: and since real numbers are actually fake, floating point addition is the real addition
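(A two-line demonstration of the non-associativity in question:)
```py
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0 -- the 1.0 is lost because it's below the spacing of floats near 1e16
```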
zphang#7252: "stop using real numbers"?
cfoster0#4356: Right right, agree with you. Floating point addition is associative enough\*\*\*\*
bmk#1476: > enough
eval harness: differences of up to 1e-2 in perplexity between runs on the same hardware :ptsd:
𓅬 gabriel_syme 𓅬#3220: this is part of the visualization I'd love to have one day
janus#0150: Ok slightly updated diagram https://cdn.discordapp.com/attachments/729741769738158194/883099058737127484/unknown.png
cfoster0#4356: The lingo of "1x1 convolution" is confusing in the context of text
CRG#8707: If you want to be more accurate, the normalization is done before the FFN/attention nowadays (prenormalization)
EricHallahan#1051: I wouldn't even call it that because there is only one spacial dimension (in NLP)
cfoster0#4356: Like it's not even a convolution
cfoster0#4356: *ducks under table*
cfoster0#4356: It's like if you got invited to a potluck and everyone only ate the food they themselves brought
|
𓅬 gabriel_syme 𓅬#3220: lol this happens quite often in diverse potlucks in SEA; great example though thanks!
janus#0150: Ok, so it seems the answer is yes, they should be approximately the same, except due to the position encoding?
CRG#8707: The activations would be the exact same with Transformer XL caching, but yeah. There should only be effects from different PE or from dropping tokens from the beginning.
janus#0150: Interesting. I've been thinking about the extent to which GPT is 'myopic' and what that buys us in terms of safety. Unlike an RL agent, it is trained to accomplish a task with a horizon of one step, and it has no ability to affect its observations. Intuitively, this seems to have a lot of nice properties. For example, it should make it less likely to form mesaobjectives defined over multiple timesteps, like goals which would lead it to sacrifice immediate reward for long term reward. Ideally we could say that all of the computation in the network is optimized for next token prediction and unlikely to reference or care about the properties that emerge when we actually use it autoregressively.
One problem with this view is that even though a single SGD step is myopic, SGD as an optimization process converges to weights which are good for _all_ steps. Put another way, SGD has a bias for low variance updates. If you update in a way that only helps this batch, the update will likely be offset in future batches. This is more pronounced if the network's activations are _literally the same_ when making future predictions, because it means they are explicitly optimized not just for next token prediction, but also for improving future predictions with a sliding context window.
janus#0150: (cc @adamShimi, you might have a more concrete idea about myopia, although note I'm really talking about myopic training not myopic cognition)
adamShimi#8350: And also note that the issue with myopia is that myopic training (at least in RL) doesn't enforce myopic cognition.
adamShimi#8350: See this cool paper: https://arxiv.org/pdf/2009.09153.pdf
CRG#8707: Future tokens pass gradients backwards to past tokens to improve their own predictions, so I'd say there's a direct effect there.
kindiana#1016: well, also there is also the teacher forcing effect
Kharr#7888: Earlier tokens are imbued with all future tokens via the gradient, and all late tokens are pretty lazy using mostly context since they can only see the past.
kindiana#1016: where the network is always trained on in-distribution data
kindiana#1016: but always tested on out of distribution when generating
CRG#8707: You also can predict k tokens instead of 1. https://arxiv.org/abs/2001.04063
janus#0150: I wish the implications for safety were discussed and even a consideration when designing network architectures and training setups. People who design them (and all of you) are much more qualified to evaluate these angles than alignment researchers.
Some Point Process#3793: How much of the safety concern is predicated on some disaffected researcher pressing the button that (carelessly or not) unleashes an unaligned AI
bmk#1476: (new potluck idea: everyone brings one food item or ingredient to be dumped into an enormous blender. no coordination is allowed)
mo#0466: nutella
zphang#7252: just salt
|
Kharr#7888: The architectures don't matter nearly as much as the actual data and loss function. DNNs are mainly supposed to be universal function estimators, the function they learn is what is important and requires alignment, and the function they learn largely depends on the optimization objective and training data.
janus#0150: None. It is far more likely to be done by well-meaning but careless engineers. Any lab could make this mistake and ruin us, so the longer powerful AI is within reach, the more likely it is that the least cautious labs are the ones to deploy something malign. This is the unilateralist's curse. But the real problem of alignment is much, much worse than coordination. *Even if we could convince everyone to cooperate, we still don't know how to make aligned AI.*
Some Point Process#3793: I see. I remember hearing that (don't know why I didn't consider this more encompassing view), and that makes sense to keep in mind
janus#0150: This makes sense, and I agree. The loss function is the #1 concern. However, the risk of inner misalignment also seems serious. Even if we optimize a network for the 'right objective', SGD is imperfect and the network may implicitly act according to a *different objective* which is a good proxy for our loss function on the training distribution. The extreme form of this is deception, because deception can lead to perfect performance on the training set and defection off distribution, outside of the training environment.
janus#0150: Put in a way he might not endorse, it sounds like Evan Hubinger's major concern for alignment at the moment is avoiding failure modes that could arise during training. He thinks we may be able to get the loss function right, we just need to avoid 'bad regions' of parameter space which would include local minima which lead to deception and defection.
janus#0150: It seems like architecture design could be very relevant there, but I'm not totally convinced by the thesis and have no idea how to draw conclusions about architecture safety.
janus#0150: The most obvious and promising place for intervention is avoiding RL....
wabi-sabi#5811: Why expect deceptive etc. regions of the parameter space to be contiguous or easy to describe?
Sahl#0630: they should all model reality correctly
Sahl#0630: that’s one constraint that groups them together
Sahl#0630: they should also all not directly use that model for output
janus#0150: One argument: If you have a different goal than the training objective, becoming _more_ deceptive makes you better at the training objective. This suggests gradients could point towards increasing deception, so it is a sink in the loss landscape.
Sahl#0630: oh you can actually check this by ripping layers off the top and training an adapter
Sahl#0630: and the adapter just answers facts
𓅬 gabriel_syme 𓅬#3220: how would Decision Transformer fit here, is it a safer architecture for RL?
𓅬 gabriel_syme 𓅬#3220: (if I can call it RL)
mgostIH#0245: I like this, it's like the thought police
janus#0150: I very much think so, yes. I would definitely not call it RL because it is different in important ways that seem to make it safer.
cfoster0#4356: Threading the needle of deep interpretability for avoiding deception while avoiding mind crime in the limit is a bit :yudscream:
𓅬 gabriel_syme 𓅬#3220: cool, I now understand your interest in it just a bit better 🙂
|
EricHallahan#1051: Have you considered trying `torch.use_deterministic_algorithms()`?
https://pytorch.org/docs/1.9.0/notes/randomness.html
binarypasta#9355: how well does gpt-neo work for finetuned downstream tasks?
binarypasta#9355: does it get close to the performance of gpt-j for something like conversation or code completion?
Kharr#7888: Depends on the model size. 2.7B is better than 1.3B. All of them are trained on The Pile so they should be similar, just proportionally worse based on size. See table here: https://github.com/kingoflolz/mesh-transformer-jax/
ProudNoob#5854: for me quaternions really helped in grasping the concept of rotations in graphics and physics programming; not sure if rotary is really like that in ml yet (still have to get to that part), but for helping visualize rotations it helps anyhow
binarypasta#9355: how does a finetuned gpt-neo 2.7B compare to a non-finetuned gpt-j
Kharr#7888: Better. Even a smaller 350M model will perform better finetuned on a specific task than non-finetuned GPT-J
ProudNoob#5854: yeah, even gpt-2 can outperform -j in specific enough tasks when primed and tuned
binarypasta#9355: I want to make a multi-participant chatbot
binarypasta#9355: for a discord bot
binarypasta#9355: I've used dialogpt and blenderbot before
binarypasta#9355: but they only work with 1-on-1 conversations
ProudNoob#5854: gpt-j can probably juggle multi convo better
ProudNoob#5854: but you must understand that gpt-j is not designed to be a chat bot, it's general purpose
ProudNoob#5854: for a discord multi-chat you'd have to retrain quite a bit with gpt-j as well
Kharr#7888: finetuning for a specific task will yield better results if you have data -- just use the biggest model you can run
binarypasta#9355: I tried GPT-J with multi-user chat
binarypasta#9355: it works suprisingly well
binarypasta#9355: I did a format like this:
|
binarypasta#9355: ```
Bot: hi
HotVector: Hi
AnotherUser: Hi
```
and so on
ProudNoob#5854: you might be better off continuing with dialogpt focused on managing interactions and sometimes just running -j for more complex general tasks
EricHallahan#1051: Try replicating the format of Ubuntu IRC.
ProudNoob#5854: thanks, that's a useful tip
binarypasta#9355: download link?
ProudNoob#5854: haha
ProudNoob#5854: just google irc ubuntu log
EricHallahan#1051: https://arxiv.org/abs/2101.00027
binarypasta#9355: ok I'll try finetuning GPT-Neo 1.3B first
ProudNoob#5854: yeah, that way you have quicker results and learn faster yourself as well
ProudNoob#5854: apart from that I think you might be able to get dialogpt to be a ringleader on discord with a pentative gpt-j to supply the fun and insight
binarypasta#9355: dialogpt is really annoying to work with because it was trained on reddit threads
binarypasta#9355: and it has a really bad problem with repetition even with a penalty
ProudNoob#5854: with ring leader I mean more like managing the "rules"
ProudNoob#5854: that could even be written down in a tree flow or something
|
EricHallahan#1051: Appendix F.19:
```
<ppisati> ogra_: or we punch a hole in the dev image so we can login via
the serial console and check what’s really going on
<ppisati> ogra_: yes
<ogra_> well, i wanted to play with systemd console but didnt have time
for that yet
<ogra_> \o/
<ogra_> something at least ... that kernel looks fine
<ppisati> ogra_: good to know
<ogra_> do you have an SRU bug that i can tag verification-done ?
<ogra_>
```
ProudNoob#5854: prompt tip of the day?
EricHallahan#1051: #prompting tip of the day.
binarypasta#9355: where is this from?
EricHallahan#1051: :thisup:
binarypasta#9355: oh lol
ProudNoob#5854: ikr, but also by extension the dataset that was used for making the model and its structure, so for me the insight that irc is a pattern -j is primed for is to me the prompt tip of the day
ProudNoob#5854: but really, thanks a ton
|
ProudNoob#5854: that is awesome to play with and see what to do with
ProudNoob#5854: stupid I didn't come up with it myself
ProudNoob#5854: i tried newsgroup formatting
EricHallahan#1051: This format works fine, though there is some overlap with things like scripts.
ProudNoob#5854: is the pile trained on newsgroups? because that didn't really catch on
EricHallahan#1051: So you'll see things like stage directions occasionally.
ProudNoob#5854: if you add a few example shots it will even read <script> as a user
ProudNoob#5854: so perfect for HotVector's use case: just parse the discord message / use their api and write that to an irc-formatted prompt
ProudNoob#5854: gpt-j will already have some capabilities in understanding it's a group chat in that format
𓅬 gabriel_syme 𓅬#3220: can you use plays for the same bot?
𓅬 gabriel_syme 𓅬#3220: those are sort of dialogues of many people (sometimes)
ProudNoob#5854: plays?
ProudNoob#5854: like a guided multiplayer chat thing?
ProudNoob#5854: find a channel where they use these "plays" and use that data for training
𓅬 gabriel_syme 𓅬#3220: actual theatrical plays 🙂
𓅬 gabriel_syme 𓅬#3220: or any interaction between multiple parties on text
ProudNoob#5854: same concept applies, though it's tougher to get that going with a few-shot prompt than with the irc format
someKindaBean#8471: is there a good dataset of plays?
someKindaBean#8471: I've actually looked for this before and came up empty
someKindaBean#8471: Short of like Shakespeare
|
ProudNoob#5854: you might even consider putting it in irc format, just to trigger the multiplayer (in chat or as a script doesn't matter really) memories of -j
ProudNoob#5854: just regex back and forth into the format you want for the input and output
𓅬 gabriel_syme 𓅬#3220: hmm not really sure, maybe the books dataset has it
ProudNoob#5854: there's a movie interaction dataset by cornell I believe
ProudNoob#5854: should contain multiple entities
ProudNoob#5854: off-the-rack I think -j will have a hard time keeping up with a multiplayer scenario and keeping storylines / conversations somewhat straight, but you could probably finetune it and get some decent results. expecting it to write a whole 3 hour play with interactions and multiple characters is a little ambitious
ProudNoob#5854: if that's the goal I'd focus on training neo models for specific tasks, which can then, when a bigger model comes along, together provide the training data needed for the final "bot"
𓅬 gabriel_syme 𓅬#3220: another thing to try is Kharr's generative finetuning. you can use the Pile's dataset to finetune a model and then use that to generate interactions, then finetune another model on generated interactions -> finetune it on final dataset -> generate more interactions, ...
𓅬 gabriel_syme 𓅬#3220: hope I described it well enough
binarypasta#9355: interesting result from GPT-Neo on huggingface using the irc log format https://cdn.discordapp.com/attachments/729741769738158194/883230242339192832/unknown.png
binarypasta#9355: I casted the weights to fp16
binarypasta#9355: it also happens with fp32 weights
binarypasta#9355: also the VRAM usage increases with longer sequences, what's up with that?
hecko#8977: the ubuntu irc format is extremely likely to, y'know, veer off into ubuntu irc
it once literally told me "that's cool / anyway [ubuntu stuff]"
hecko#8977: using [square brackets] for names can give different flavors, i think podcast transcript? also just a colon after the name
hecko#8977: though it does seem to need some guidance in the form of the replier's name being explicitly given
flowpoint#7450: can someone tell me the hardware specification of the trc host machines or link it? couldn't find it...
specifically important is io,
what performance (GB/s and iops) to expect of the storage (min > 1TB)? (like hdparm -tT)
|
and how much ram (GB)?
ethan caballero#6044: Stanford GPT-4 disconfirmed? :
https://twitter.com/chrmanning/status/1433804345754095617
ethan caballero#6044: How many params GPT can fit on 16 a100s?
Kia#2550: 8?
ethan caballero#6044: 8 Billion?
Kia#2550: Ow I thought you meant how many GPT-3 models can fit in 16 a100s
EricHallahan#1051: You can fit 8 parameters in 16 A100s
flowpoint#7450: 16*40GB (if pci) / 4B (fp32) = 160 billion (minus optimizer states, duplicated model params ...)
ethan caballero#6044: so that's for inference and not training?
EricHallahan#1051: It is heavily dependent on what variant of the GA100 we are talking about, as there are both 40GB and 80GB versions.
flowpoint#7450: that's just a really bad estimation 🙂
flowpoint#7450: gpt-3 needs 700 GB :citationneeded: so with a combined 16*40GB = 640GB they can fit 0.9 gpt-3s or 1.8 gpt-3s if they use 80GB a100s
ethan caballero#6044: for training or inference?
EricHallahan#1051: https://www.eleuther.ai/faq/
EricHallahan#1051: Inference
EricHallahan#1051: You need to take overhead into account though.
StellaAthena#3530: To do training, you need to divide by 4 again because of Adam
kindiana#1016: theres also some half precision stuff
kindiana#1016: but those are all very rough estimates
|
kindiana#1016: lol
flowpoint#7450: but, ..., i think at that complexity you need to differentiate the words more precisely,
to me at least, training can range from pretraining, finetuning with efficient optimizers, to one shot learning
flowpoint#7450: :goose10:
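(A back-of-envelope version of the estimates above; it ignores activations, KV caches, communication buffers, and any ZeRO/offload tricks, so treat the numbers as loose upper bounds.)
```py
num_gpus = 16
gb_per_gpu = 40                     # 40GB A100s; use 80 for the other variant
total_gb = num_gpus * gb_per_gpu    # 640 GB

print(total_gb / 4)    # ~160B params: fp32 weights, inference only
print(total_gb / 2)    # ~320B params: fp16 weights, inference only
print(total_gb / 16)   # ~40B params: Adam training (fp32 weights + grads + two moments)
```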
johncaling40#6574: does eluther have anything in gans
johncaling40#6574: or like music gen
Kia#2550: GooseGAN
Kia#2550: Literally
Orz#3023: Wait
vqgan was made by y'all, right?
johncaling40#6574: ok
johncaling40#6574: i have a bad memory
johncaling40#6574: sorry
EricHallahan#1051: Work with VQGAN originated with *Taming Transformers for High-Resolution Image Synthesis*.
https://arxiv.org/abs/2012.09841
Kia#2550: We just need to wait until someone proposes a Model
Kia#2550: in this Community I suppose
Orz#3023: I mean
it was you guys who wrote the code
So it's your model
|
nev#4905: has anyone actually trained a dense 100t+ model?
nev#4905: even if it's as simple as overfitting some really easy task
EricHallahan#1051: Not that we know of.
EricHallahan#1051: Well, that I know of.
nev#4905: fun fact: depspeed corrects to depressed
alstroemeria313#1694: Maybe the ZeRO-Infinity people let a really large one train for a few steps to make sure their unit tests passed, IDK
nev#4905: iirc they ran a 1t model
StellaAthena#3530: … for one step
StellaAthena#3530: Hey guys, gals, and non-binary pals!
Every four-ish months I take an informal poll of this community. The main purpose is to collect information about how EleutherAI (the org) is serving the community in our discord server and how we can do better. Everyone from lurkers to people leading EleutherAI research projects are welcome to fill it out. If you're reading this and thinking "I dunno if I participate enough to answer the survey" the answer is "yes you do".
Feel free to DM me if you want to talk one-on-one as well.
https://docs.google.com/forms/d/e/1FAIpQLSes4pMQNY31q1LAvXrBkGYWoEbxd0PIQIcQFpea0XYC6zIhrA/viewform?usp=sf_link
Kia#2550: Done :thinkies:
marzi#8916: is there a place where it is okay to post a job?
marzi#8916: off topic?
StellaAthena#3530: Not currently. Historically there has been little desire expressed by our members for having such a channel.
marzi#8916: can you recommend, if possible, any place to find GAN and CLIP devs?
|
marzi#8916: really struggling to find such talent
StellaAthena#3530: There are many websites dedicated to that, such as glassdoor and LinkedIn.
marzi#8916: okay, understood
binarypasta#9355: is it possible to get the open subtitles dataset with character names?
45#2247: do you also post results / stats?
𓅬 gabriel_syme 𓅬#3220: some results are discussed yes, #website has some data on the website for example 🙂
EricHallahan#1051: Nothing that we didn't already know though lol
StellaAthena#3530: I may post some high level takeaways, but no I’m the only person who sees the raw results. Historically people have written some brutally honest things and it would be a violation of that implicit trust to post the raw data without warning people. I would much rather live in a world in which people provide honest feedback than a world in which I share the results publicly.
Dx#7484: i am trying to compare some of the classification performance of GPT-J with GPT-3. Does anyone know what is exactly done under the hood for GPT-3 classification? I assume they have some predefined prompts to insert the most relevant examples (ranked by semantic search), then completion. The part I am not sure about is how they parse a pre-defined set of labels to get the probabilities. What would be the approach if I want something similar from GPT-J? (few-shots instead of zero-shot)
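(One common recipe for this, not necessarily what the OpenAI API does: append each candidate label to the prompt and compare the summed log-probability of the label tokens under the model. The sketch below assumes `transformers` is installed; the model name and prompt are placeholders.)
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M").eval()

def label_logprob(prompt: str, label: str) -> float:
    # assumes the prompt tokenizes to the same ids as a prefix of prompt + label,
    # which usually holds for labels starting with a space but isn't guaranteed
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + label, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = model(full_ids).logits.log_softmax(-1)
    # position i-1 predicts token i, so sum the conditional log-probs of the label tokens
    return sum(logprobs[0, i - 1, full_ids[0, i]].item()
               for i in range(prompt_len, full_ids.shape[1]))

prompt = "Review: great movie!\nSentiment:"
print(max([" positive", " negative"], key=lambda l: label_logprob(prompt, l)))
```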
Technobird22#2055: Sorry if this is a somewhat beginner question, but say something is being sampled or trained on one machine, I was wondering if it could be transferred to another machine without losing too much progress
Technobird22#2055: I know there are checkpoints for training, but is there anyway to transfer in the middle of an inference?
Technobird22#2055: Say, if I was running a large VQGAN generation on Colab, for instance, and wanted to continue the inference locally, without having to restart
random_lurker99#8915: you could checkpoint any intermediate outputs if you so desired. Pipelined inference on a single chip e.g.
random_lurker99#8915: but what is the question, why shouldnt you be able to? It's just a bit of handy work to split it how you want
Technobird22#2055: what sort of data would need to be transferred?
Technobird22#2055: updated the question
nev#4905: by vqgan you mean clip + vqgan?
Technobird22#2055: Yeah
Technobird22#2055: But that was just an example
nev#4905: for vqgan you can save the image and go from there
|
nev#4905: depends on the model really
nev#4905: with most gpts you can too
Technobird22#2055: ah okay, thank you
Teemochu#8740: yeah diffusion is where it's different, to my understanding a diffusion run is one complete thing that can't be split up (or rather the "outputs" are fake until the last one), so there would need to be some special continuation logic
StellaAthena#3530: No, nobody outside of OAI knows how exactly the API works
xcodevn#9003: I'm developing a new pytorch-like framework on top of Jax (yeah, it is yet another jax-based framework, again). I'm quite happy with the result so far. I want to hear your comments and ideas if you're interested. Here it is: https://github.com/NTT123/pax
StellaAthena#3530: What niche are you trying to fill specifically? Just replicating PyTorch’s API?
xcodevn#9003: my goal is not to replicate pytorch. But i like the way pytorch manages its parameters and submodules. Hence, the similarity in API.
xcodevn#9003: the problem i am trying to address is to use Jax with OOP module naturally.
xcodevn#9003: and naturally here means a module can be inputs and outputs of jax functions.
Awesome_Ruler_007#7922: I am thinking of learning Jax, but its nothing like pytorch and I hate programming 😦
random_lurker99#8915: I dont quite understand the difference to Haiku looking at this example. What would my motivation be to transform from haiku to pax?
xcodevn#9003: Haiku has a problem that you cannot work in OOP mode after a hk.transform_*. Say, accessing a model parameter is.... difficult, because your model is no longer an object.
random_lurker99#8915: my parameters are a dict I can edit as I want, what could be easier?
xcodevn#9003: yes, but it is just a dict
xcodevn#9003: while a module... can be modified, called, store things...
random_lurker99#8915: that is perceived as an advantage, fwiw (at least on our end)
guac#4716: with a dict it's really annoying to pull out params and attach them to the corresponding (sub)module
cfoster0#4356: for a lot of folks using it it feels unergonomic
guac#4716: and the keys are wonky af
|
cfoster0#4356: you can pretend everything is stateful until the transform, IIRC
cfoster0#4356: And people like thinking statefully
xcodevn#9003: I can see many advantages to working in a pure functional mode. But it's also very annoying sometimes whenever I want to do something OOP-related to my model. Of course, we get familiar with it the more we use it.
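(For anyone following along, a minimal Haiku sketch of the params-are-just-a-dict workflow both sides are referring to; it assumes `dm-haiku` and `jax` are installed, and the module/parameter names are the defaults Haiku generates.)
```py
import haiku as hk
import jax
import jax.numpy as jnp

def forward(x):
    return hk.Linear(4)(x)

model = hk.without_apply_rng(hk.transform(forward))
params = model.init(jax.random.PRNGKey(0), jnp.ones([1, 8]))

# `params` is a plain nested dict you can inspect or edit directly,
# e.g. {'linear': {'w': (8, 4), 'b': (4,)}}
print(jax.tree_util.tree_map(lambda p: p.shape, params))

y = model.apply(params, jnp.ones([1, 8]))   # state is carried explicitly, not on an object
```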
thrasher#7261: I don’t think I’ve encountered a jax-based framework with good built in model serialization/deserialization, are there any?
xcodevn#9003: Haiku is serialization friendly, i think.
random_lurker99#8915: right, so just some random feedback you can entirely take or leave: It looks like for this all to work, you need to pull everything into your module style (maybe I am misreading this, but it looks like you are creating this module tree and wrapping everything). But the spirit of jax tooling seems to be more about small independent libraries and a lean core..haiku, flax, optax, distrax, rlax, chex, ..Wrapping endless modules and changing libraries requires infinite boilerplating and handling special cases taking a lot of time and energy which may or may not pay off. This can be quite exhausting (speaking from trying to create something similar in a different context). Since the jax ecosystem is still early, I personally would focus on something more narrow to fill a missing niche which may have a more clear payoff. However I do hope it works out and anything in the jax ecosystem is so welcome.
random_lurker99#8915: (guess depends on what you are doing it for ultimately - to scratch your own itch, to have something to chat about in interviews, for pure enjoyment..)
thrasher#7261: hmm, what do you mean by serialization friendly — is it early-pytorch-style where you’re just sort of expected to pickle things, but things are generally pickleable
thrasher#7261: or closer to tf saved model where there’s a canonical format
EricHallahan#1051: *cough* *cough* TensorFlow *cough* *cough*
xcodevn#9003: yeah, thank you for the feedback!
xcodevn#9003: Because haiku puts everything into a dictionary. so, we only need to serialize and deserialize that dictionary.
thrasher#7261: if you need the original module class to deserialize, that’s the “early-pytorch-style” serialization I’m referring to
AI_WAIFU#2844: Yeah, I've said this before but it feels like everyone who uses JAX or writes a library for it tries to make everything pure functional even if it doesn't make any sense to do so.
xcodevn#9003: One thing I would like to say in response to your feedback: the goal is not to wrap everything, optax, haiku, jmp, etc. It is just the early stage of the project and these are scaffolds to have a functioning *enough* library.
nev#4905: speaking of bad ideas, is there a framework for jax that has hooks like react
AI_WAIFU#2844: As a result you end up explicitly hauling around network and optimizer state and the code becomes a mess.
random_lurker99#8915: understood 👍
cfoster0#4356: ~~just let copilot figure that out on the user's end~~
nev#4905: :morelayers:
|
AI_WAIFU#2844: code is still unreadable tho
AI_WAIFU#2844: ~~Just tell copilot to summarize the code around it~~
nev#4905: does 👍 mean that there is one
random_lurker99#8915: just approving of creative spirit
xcodevn#9003: My guess: engineers at deepmind do a cost vs benefits analysis, and come to a conclusion that: a pure functional approach is better in the end.
AI_WAIFU#2844: 100% gurantee they didn't.
xcodevn#9003: why not?
AI_WAIFU#2844: because pure functional APIs for NN training are bad. Or at least, no one has figured out how to make them work anywhere near as well as OO APIs.
xcodevn#9003: But this is your assessment, which is different from deepmind one.
AI_WAIFU#2844: https://discord.com/channels/729741769192767510/729741769738158194/867241489846239272
AI_WAIFU#2844: Stateful jax wrapper
AI_WAIFU#2844: https://discord.com/channels/729741769192767510/729741769738158194/867248374922346497
AI_WAIFU#2844: why pure functional APIs suck
xcodevn#9003: Again, this is your assessment.
AI_WAIFU#2844: Sure, but you still need to come up with a counter argument, and all the net effect is that everyone who tries to learn jax is like "wtf is this shit" and it only gets adoption because of tpus.
xcodevn#9003: but, deepmind does care about people who think "wtf is this shit". I'm taking deepmind position here.
random_lurker99#8915: it's a philosophical assessment, but the issue is about what you are trying to do at the core, and what our research users generally want to do is complex function transformations + large distributed settings. Every time you add a stateful wrapper users have to advance their mental model of how this composes with any other function transformation and tracing. On balance, carrying around optimiser and parameter state explicitly is viewed as an advantage because it also makes it more obvious for debugging user programs what's going on. Conciseness and frameworks are not seen as a virtue post-tensorflow necessarily.
Sid#2121: I feel like i'm gonna get bullied for saying this here but i really have never understood what advantages the functional paradigm is actually supposed to offer
Sid#2121: why can't i put my state somewhere lol
random_lurker99#8915: important to point out that JAX was created by Brain and not deepmind, even though the biggest user perhaps : )
|
Sid#2121: lugging around the optimizer state surely just means you have to read through more useless lines of code designed to facilitate the lugging around of stuff before you get to the actual problem / meat of the function
bmk#1476: do you mean in terms of NN stuff in particular or just like in general
Sid#2121: in general. But with NNs i also specifically don't get it. I'm probably just a little bit burned by megatron because they tried to do everything functionally and it was awful
bmk#1476: functional stuff is pretty useful in general
Sid#2121: i'm just an OOP simp
bmk#1476: like, map and partial are basically borrowed from functional
bmk#1476: I have no idea about functional NNs tho
guac#4716: are list ops really all we get from functional languages lol
bmk#1476: havent really used jax
Sid#2121: i hate map lol
bmk#1476: map is awesome
Sid#2121: what can map do that i can't do with a list comp
Sid#2121: in a more readable way
bmk#1476: list comps are just syntax sugar for map
bmk#1476: it's literally the same thing but with different syntax
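(Spelled out, the equivalence looks like this:)
```py
xs = [1, 2, 3]
assert list(map(lambda x: x * 2, xs)) == [x * 2 for x in xs]  # both give [2, 4, 6]
```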
Sid#2121: sure, but less readable lol
bmk#1476: pure functional languages have better lambda syntax
xcodevn#9003: Higher-order functions likes `jax.grad` `jax.jit` are awesome 😄
bmk#1476: oh yeah higher order functions are awesome too
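(The kind of higher-order use being praised, as a tiny sketch: both `grad` and `jit` take a function and hand back a new one.)
```py
import jax
import jax.numpy as jnp

f = lambda x: jnp.sum(x ** 2)
grad_f = jax.jit(jax.grad(f))      # compose the transformations
print(grad_f(jnp.arange(3.0)))     # [0. 2. 4.]
```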
xcodevn#9003: To clarify, i'm saying that deepmind sees the functional approach as the better approach. I'm not saying "it is a better approach". That is why I don't defend deepmind's position.
|
inox#5400: jax is so heavily based on HIPS/autograd, even shares authors, it's sort of from harvard
xcodevn#9003: iirc, autograd is even older than pytorch. But it does not support GPU. Jax is like autograd + XLA compiler, that makes it becoming popular.
pebbles#7130: haiku is a bit weird in how it's setup, and I totally get AI_WAIFU's complaints about having to carry around the network and optimiser state, though 'unreadable' code might be a stretch imo. I think jax has some nice things, the gradient stuff is clearer imo, but it's not like jax is better than pytorch across all axes
thrasher#7261: accelerator usability trumps most other concerns for DL frameworks, i've noticed
jbustter#5167: I asked codex to create code for an adventure game
jbustter#5167: i haven't tested it yet, but it looks really good https://cdn.discordapp.com/attachments/729741769738158194/883820122115014696/adventure_game.py
EricHallahan#1051: Maybe share in #prompting?
nev#4905: yes
nev#4905: when were you when autograd die
Awesome_Ruler_007#7922: > when were you when autograd die
you mean when it was created? 👀 I am not aware it died at some point
alstroemeria313#1694: i had switched to pytorch
alstroemeria313#1694: > Note: Autograd is still being maintained but is no longer actively developed. The main developers (Dougal Maclaurin, David Duvenaud, Matt Johnson, and Jamie Townsend) are now working on JAX, with Dougal and Matt working on it full-time. JAX combines a new version of Autograd with extra features such as jit compilation.
nev#4905: I remember seeing it and being like "wtf is jax"
Awesome_Ruler_007#7922: isn't dying when its not being maintained i.e no pull requests to address bugs and stuff?
Awesome_Ruler_007#7922: but it doesn't matter anyways ig
Awesome_Ruler_007#7922: do you think jax is worth to learn for someone who's just going to use higher level stuff for quite some time? 🤔
cfoster0#4356: What kind of higher level stuff?
cfoster0#4356: Possibly?
Awesome_Ruler_007#7922: mostly just using already researched/created architectures with a few domain modifications
|
nev#4905: that's high level I think
nev#4905: maybe low-level?
Awesome_Ruler_007#7922: not really - yet. I do plan to get into the nitty-gritty, but I don't really like programming which puts me in a minority in CS
nev#4905: does anyone like programming?
Awesome_Ruler_007#7922: Jax seems like a lot of boilerplate would be needed
nev#4905: if you want zero boilerplate take an existing notebook
nev#4905: no need to learn anything
alstroemeria313#1694: yes
Awesome_Ruler_007#7922: well, ig I don't mind boilerplate as long as I understand. The point being, there is only a certain amount of abstraction I can handle - I can't be expected to muddle around the complicated mathematical and tensor ops
EricHallahan#1051: ~~You should try Vulkan.~~
nev#4905: they aren't that complicated
nev#4905: but if you don't want to interact with them, you need something that has everything pre-packaged
nev#4905: sorry that sounded rude
Awesome_Ruler_007#7922: it's kinda hard to unpack what's going on in a simple line.
I mean, sometimes I can't understand what I wrote earlier :3berk:
nev#4905: no need to understand what you wrote earlier if you write from scratch every time
Awesome_Ruler_007#7922: usually they have pretty complicated tensor manipulations which I have never seen before
CRG#8707: Einops is very worth it
nev#4905: +. also use more logging
Awesome_Ruler_007#7922: gigachad approach
nev#4905: by which I mean printing
nev#4905: shapes too
Awesome_Ruler_007#7922: in Jax, out of all the flavours, which is the one closest to pytorch?
nev#4905: as long as you know how the shapes change every time it's easy
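For instance, einops makes those shape changes explicit in the code itself (a toy rearrange, not taken from any particular model):
```py
from einops import rearrange
import torch

x = torch.randn(8, 64, 32, 32)                 # (batch, channels, height, width)
tokens = rearrange(x, 'b c h w -> b (h w) c')  # flatten the spatial dims into a sequence
print(tokens.shape)                            # torch.Size([8, 1024, 64])
```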
nev#4905: jax
nev#4905: pure jax
Awesome_Ruler_007#7922: or should I just stick to pytorch for time being?
Awesome_Ruler_007#7922: cmon man, Id be dead halfway
nev#4905: with flax ofc
nev#4905: modules are complicated
nev#4905: so yeah use pytorch if you don't need tpus
nev#4905: it has more libraries too
nev#4905: ~~what should you do if you're high on caffeine~~
cfoster0#4356: What? No
Awesome_Ruler_007#7922: `@nn.compact`
I don't understand decorators too lul
nev#4905: it's close enough to autograd if you ignore
nev#4905: the functional parts
nev#4905: torch autograd
Sahl#0630: \@abc
def f(): ...
is just
def f(): ...
f = abc(f)
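A runnable toy version of that equivalence (the decorator here is made up purely for illustration):
```py
def shout(fn):
    # wraps fn and upper-cases whatever it returns
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"hello {name}"

print(greet("goose"))  # HELLO GOOSE -- same as writing greet = shout(greet) by hand
```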
Awesome_Ruler_007#7922: well yes. but `nn.compact` was in the function 🤕
cfoster0#4356: But not close enough to *PyTorch*
nev#4905: nested function
nev#4905: they're all the same if you squint
Sahl#0630: functions are just things like anything else
Sahl#0630: I can have a list of functions
Awesome_Ruler_007#7922: isn't that the same what @Sahl posted?
Sahl#0630: not quite
Sahl#0630: that was a function with a function argument
Awesome_Ruler_007#7922: ```py
class CNN(nn.Module):

  @nn.compact
  # Provide a constructor to register a new parameter
  # and return its initial value
  def __call__(self, x):
    x = nn.Conv(features=32, kernel_size=(3, 3))(x)
    x = nn.relu(x)
    x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
    x = nn.Conv(features=64, kernel_size=(3, 3))(x)
    x = nn.relu(x)
    x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
    x = x.reshape((x.shape[0], -1))  # Flatten
    x = nn.Dense(features=256)(x)
    x = nn.relu(x)
    x = nn.Dense(features=10)(x)  # There are 10 classes in MNIST
    return x
```
Sahl#0630: consider this: functions are just things you can “run” with brackets
nev#4905: that's in a class
Awesome_Ruler_007#7922: ah yes, missed that
Awesome_Ruler_007#7922: but how can decorators be in a function?
nev#4905: if you're declaring a function
nev#4905: inside a function
Sahl#0630: def x():
    def y():
        pass
nev#4905: ```py
def f(*args):
    @dec
    def b(*args):
        ...
    return b(*args)
```
Sahl#0630: do you understand this?
Awesome_Ruler_007#7922: that sounds like bad programming practice
Sahl#0630: not at all
Awesome_Ruler_007#7922: but yea, just decorator over another function
Sahl#0630: it keeps the functions contained
Awesome_Ruler_007#7922: which happens to be in another
Sahl#0630: why have everyone know about a function that only you should know
nev#4905: it's also mandatory in some cases
nev#4905: which is why people use it
Awesome_Ruler_007#7922: I thought classes would be much simpler and more elegant?
nev#4905: define elegant
Sahl#0630: classes are complex behemoths
Sahl#0630: 😳
Awesome_Ruler_007#7922: that I agree with
nev#4905: it's so funny to see gwern's comments on worm
Desperate Noob#6277: Does anyone know who made BATbot
EricHallahan#1051: @BoneAmputee
Desperate Noob#6277: I would like to say that it is cursing AND making political content
Desperate Noob#6277: without me trying to make it
ProudNoob#5854: ohohoh
ProudNoob#5854: I'm sure the bot will be disciplined appropriately
clay#9806: It's a frozen GPT-2 model right? That would be due to GPT-2 being misaligned/inaccurate
clay#9806: Definitely not something you should use in a serious setting.
EricHallahan#1051: Well #the-faraday-cage-archive isn't a serious setting lol
clay#9806: indeed ha
clay#9806: Just wanted to make it clear that no one here has influence over its outputs, aside from the prompts given
𓅬 gabriel_syme 𓅬#3220: I can attest that it feels really foreign to me, and assuming about half of potential users might be near to my level of code understanding that seems to be an issue. And yes, I use it because of TPUs.
𓅬 gabriel_syme 𓅬#3220: The bot is simply reacting to image content right? I doubt it is making a personal thought or anything like that. Or was this in a non annotations context?
xcodevn#9003: It feels foreign to me too. I think this is the result of the fact that most of us are not familiar with functional programming.
Deleted User#0000: Hey guys, can anybody help me with this: can I write my own small GitHub Copilot-like implementation with GPT-Neo?
Orz#3023: there are available implementations like
https://github.com/CodedotAl/gpt-code-clippy
and
https://huggingface.co/NovelAI/genji-python-6B
in case you have enough tpu compute, you can run this code
https://github.com/uSaiPrashanth/mesh-transformer-jax
to train on about 1000GB data for a better version if you want
nev#4905: I'm trying to create a TRC TPU but getting an error
```
> gcloud alpha compute tpus tpu-vm create tt5 --zone europe-west4-a --accelerator-type v3-8 --version=v2-alpha
ERROR: (gcloud.alpha.compute.tpus.tpu-vm.create) PERMISSION_DENIED: Write access to project '<project name>' was denied
```
Orz#3023: sudo su
Louis#0144: sudo rm -rf /
Louis#0144: @nev don’t use this command, Orz is trying to hack u
Louis#0144: This one works
Orz#3023: :berk:
Orz#3023: ofc
nev#4905: you forgot --no-preserve-root
Louis#0144: Lmaooo
Louis#0144: It hurts to laugh
Louis#0144: That’s weird tho
Louis#0144: Idk how to help
Orz#3023: Did you run it from compute SSH?
nev#4905: no
Orz#3023: did this help?
nev#4905: no
Orz#3023: :CH_PepeHmm:
Orz#3023: which prompt did you run this from?
nev#4905: normal sh
nev#4905: doesnt work in cloud term either
Orz#3023: weird
did you login? (gcloud init)
Orz#3023: try reiniting it
and make sure to check the project name
it should work on cloud command prompt tho
nev#4905: nope not working
nev#4905: doesn't work in cloud prompt
nev#4905: if I had a wrong project name it would have a different error
Orz#3023: I assume that billing is setup
right?
nev#4905: the problem is probably with that
Orz#3023: :CH_PepeHmm:
nev#4905: it works now
nev#4905: actually
nev#4905: lol
Deleted User#0000: @Orz Thanks
𓅬 gabriel_syme 𓅬#3220: glad u sorted it out
Orz#3023: https://gist.github.com/kinoc/2d636a68876cd3de7b6e9c9452b61089
for some reason, the model loads into main memory instead of onto the GPU
Does anyone know the reason for this?
EricHallahan#1051: Which implementation are you using?
Orz#3023: One second
I'm creating a quick kaggle version to share it
Orz#3023: https://www.kaggle.com/usaiprashanth/gpt-j-memorization
EricHallahan#1051: I think you are mixing steps of setting up the merged port and the finetune port.
EricHallahan#1051: You have to use finetune's port to use the split checkpoint.
Orz#3023: :thinkies:
so there are two versions?
just for info
I'm using this version
https://drive.google.com/u/0/uc?id=1NXP75l1Xa5s9K18yf3qLoZcR6p4Wced1&export=download
StellaAthena#3530: Hmmm that looks right to me
Orz#3023: for some reason,
It worked perfectly yesterday
and is failing today
Orz#3023: :sadness:
Louis#0144: We should ask kaggle to change their goose
EricHallahan#1051: You're installing `transformers==4.10.0` via PyPI, which doesn't have any GPT-J support.
EricHallahan#1051: Install `transformers==v4.11.0.dev0` from GitHub and rewrite your code to work with that or install finetune's port.
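For reference, installing the development version straight from GitHub is usually just the following (the exact dev version you get depends on the current state of the main branch):
```
pip install git+https://github.com/huggingface/transformers.git
```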
StellaAthena#3530: FYI: Q&A with Sam Altman happening now, you can join:
meet.google.com/wvf-chnj-kov
Singularity#9001: @StellaAthena in it right now!
Jose-trxr#4270: Hi. I'm searching for papers where a document or a text is interpreted as a signal which is a combination of other signals (sentiments): anger, sadness, love, empathy, etc.

I've had this idea for a while: it sees an NN for sentiment analysis as a kind of "FFT" that can translate from the domain of text to the domain of "sentiment".
I know that it is a different idea, but I was wondering if someone has seen any paper or theory with this approach; if that is the case, could you please point me to the paper?
Idk if #general is the place to ask this kind of thing.
Jose-trxr#4270: My apologies if #general is not the place.
EricHallahan#1051: Here or #research is fine.
Jose-trxr#4270: I see... probably #research would have been a better place.
EricHallahan#1051: If you want to move there you absolutely can.
Jose-trxr#4270: Done it, thanks @EricHallahan
Dashiell#8739: I never properly introduced myself, but I've really enjoyed (mostly) lurking, so hopefully late is still better than never: my name is Dashiell, I studied math in college and have worked as a data scientist for the last four years. In the last 18 months of quarantine I've spent most of my free time reading ML research papers, though less time properly working on projects or reimplementing them than I would like. I do have experience doing ML engineering work and would really like to help out on any projects where more hands are needed. My interests are most strongly in the direction of group equivariance, graph nns, and strategies for grounding models more generally.
Anyway, I've really appreciated being able to lurk here and listen in on the conversations, so I hope I can start to give back in small ways. Hi 👋
EricHallahan#1051: Welcome!
Louis#0144: Hi !
Dashiell#8739: I'm also almost certainly going to quit my job in the next couple weeks and will have a whole lot more free time for a couple of months
Louis#0144: Feel free to take a gander
Louis#0144: :goose:
bmk#1476: if youre looking for something to do on the more engineeringy side, i always have stuff you can do; also probably talk to stella about the group equivariance stuff
Dashiell#8739: I'm definitely down to do engineering-y work. And, yes, I'll also reach out to Stella directly
Louis#0144: This should be the Eleuther slogan
Louis#0144: It's so funny
ilovescience#3282: Is there a JAX version of VQGAN+CLIP that can be run on TPUs? cc: @alstroemeria313
StellaAthena#3530: Not yet. It would be awesome if anyone wanted to make one, and it would enable us to significantly increase the number of runs that can go simultaneously in #the-faraday-cage-archive
ilovescience#3282: Cool, I'll look into it this week...
AI_WAIFU#2844: be the change, etc...
ilovescience#3282: Lol i will try 😄
𓅬 gabriel_syme 𓅬#3220: Suraj has a version although I've never used it. Might be a good start?
https://github.com/patil-suraj/vqgan-jax/blob/main/vqgan_jax/modeling_flax_vqgan.py
𓅬 gabriel_syme 𓅬#3220: although maybe flax is not what you had in mind
ilovescience#3282: that's just the VQGAN...
I am talking about art generation with VQGAN+CLIP
I am familiar with Suraj's version since it was used for DALL-E mini...
I'll use it for a JAX version of VQGAN+CLIP too...
𓅬 gabriel_syme 𓅬#3220: oh okay my bad, I assumed you'd be creating vqgan as well. Would definitely be cool to have that in jax btw, thanks 🙂
ilovescience#3282: oh I wouldn't pursue this project if VQGAN and CLIP weren't already implemented in JAX lol
MyUsername#7620: Does anyone know about the supposed MSFT-1T model? I keep seeing it referenced in papers. What is it, and is there a place I can learn more about it?
Louis#0144: Oh lord
Louis#0144: lmao
Louis#0144: My sweet summer child
Louis#0144: It's time you learned about the bees and the knees
Louis#0144: And deepspeed
oreo#2740: if you had to rate deepspeed, how many stars would you give it out of 5?
Louis#0144: I don't use it anymore
Louis#0144: lol
Louis#0144: That's my recommendation
Louis#0144: :berk:
oreo#2740: why??
Louis#0144: I use Jax now
Louis#0144: It's much nicer
wabi-sabi#5811: Can I identify a particular feature with a subset of the activations within a neural network?
Informally, I mean this in the sense that if I imagine an MLP with black edges between vertices representing the architecture, then color all activated edges in blue marker with a thickness corresponding to the weight size, and finally highlight a subset of those thick blue lines with gold, would every distinct feature have a distinct tributary of weights associated with it?
I know Olah has work identifying features with individual neurons, but thinking about them as activation pathways feels better to me for some reason.
oreo#2740: are there Jax implementations of all the features in deepspeed? like zero, 1-bit, etc?
Kia#2550: @nshepperd Got diffusion working on TRC too!
Kia#2550: I mean, let's just wait for the trained model to be released :sus:
ilovescience#3282: "Friendship ended with PyTorch / Now JAX is my best friend"
𓅬 gabriel_syme 𓅬#3220: do you have the link for the notebook Kianne? not in pins apparently in art
Kia#2550: https://colab.research.google.com/drive/1ZZi1djM8lU4sorkve3bD6EBHiHs6uNAi
|
Kia#2550: You can probably try it in TRC if you can
𓅬 gabriel_syme 𓅬#3220: thanks, in fact I will!
Kia#2550: That's lovely :v
ilovescience#3282: i forgot this existed lol
i will "borrow" some of the code here lol
ilovescience#3282: does it not work on TRC?
𓅬 gabriel_syme 𓅬#3220: not sure, need to test I guess
Kia#2550: The person that created the notebook said before that diffusion could probably run on TRC (v3) but not on Colab (v2). But lately they did run diffusion on TRC; I'm not sure if there were modifications
Kia#2550: But @𓅬 gabriel_syme 𓅬 can probably confirm it: if it runs on TRC as-is, that means there were probably no changes; otherwise it needed a few modifications
EstebanSir#2189: i wonder if this could work in reverse, image to text, without much modification
Kia#2550: Use clip
𓅬 gabriel_syme 𓅬#3220: there are some implementations of generating annotations from images, I believe @Deleted User was doing that with a lot of success
ilovescience#3282: Why is no one talking about this?
https://arxiv.org/abs/2109.01652
ilovescience#3282: Is this Google's version of GPT-3??
𓅬 gabriel_syme 𓅬#3220: I think we did talk about a week ago I guess, maybe a bit more?
StellaAthena#3530: We talked about it 20 minutes ago: https://discord.com/channels/729741769192767510/747850033994662000/884236787113938944
ilovescience#3282: lol there's so many messages in this discord I can't keep track...
StellaAthena#3530: No worries. Discord has a pretty good search function FYI, putting the title in finds the convos
Kia#2550: @𓅬 gabriel_syme 𓅬 Settings things up?
𓅬 gabriel_syme 𓅬#3220: oh, ehm not yet. Son using the computer in the mornings, I might try in a few hours
Kia#2550: As that's adorable
Kia#2550: Nonetheless wish the best outcome tho
cfoster0#4356: We talked about why we hadn't talked about it yet :berk:
ilovescience#3282: looks like it's because you guys weren't actually impressed...
StellaAthena#3530: I mean, everyone knew this was true. Companies have put out products based on this principle
ilovescience#3282: Their base model seems pretty competitive OOTB with GPT-3... looks like they didn't do a paper on that? The only explanation I see is "In our experiments, we use a dense left-to-right, decoder-only transformer
language model of 137B parameters."
StellaAthena#3530: Yeah, this isn’t the first time they’ve done this
𓅬 gabriel_syme 𓅬#3220: I'm a bit confused by the terminology (might be me not being an expert at all). But how is instruction tuning so different from prompting? It's just a differently constructed prompt, right?
ilovescience#3282: my understanding is you are actually fine-tuning the model...
it's like fine-tuning on multiple tasks at once? there's a connection to multi-task learning?
𓅬 gabriel_syme 𓅬#3220: but their focus (hence the name) is not really on the multi-task part but on the natural language instructions, and they juxtapose that with prompt engineering and finetuning
𓅬 gabriel_syme 𓅬#3220: anyways, not so important (terminology stuff)
ilovescience#3282: Does anybody know why `MakeCutouts` is different between the CLIP-guided diffusion and the VQGAN+CLIP notebooks? This just applies cutout to the image, right?
xcodevn#9003: I've just realized that dm-haiku API ``set_state`` and ``get_state`` is basically react for Jax, right?
zphang#7252: It's large-ish scale fine-tuning on instruction prompts. The takeaway is that this improves prompt-based task performance overall
nev#4905: this is actually what I thought GPT-3 was at first :berk:
nev#4905: jax z+q fully works now, I'll integrate it into alstro's notebook when I come back
alexyz#3459: wait, VQGAN+CLIP on TPU? pog
alstroemeria313#1694: ooh!
BoneAmputee#8363: my concern is the augs
BoneAmputee#8363: is there a good augmentation library for jax
𓅬 gabriel_syme 𓅬#3220: yeah there is one for CV I think
Louis#0144: @BoneAmputee u could use the pre empt v2-8 for bat bot
𓅬 gabriel_syme 𓅬#3220: let me get it
𓅬 gabriel_syme 𓅬#3220: not sure how good it is, but I remember seeing it a few weeks back
alexyz#3459: if it can work on TPUs, wouldn't the images be able to be much bigger due to larger memory?
alstroemeria313#1694: if you can use all cores for one image
alstroemeria313#1694: otherwise it's 16GB per core on a v3
𓅬 gabriel_syme 𓅬#3220: hmm nope nvm, not there yet
BoneAmputee#8363: 16gb by itself sounds nice
alexyz#3459: ye
𓅬 gabriel_syme 𓅬#3220: but you can make 8 images at a time 😄
alexyz#3459: but all the cores on one image 🤤
𓅬 gabriel_syme 𓅬#3220: I'm doing that for text generation
alexyz#3459: imagine that resolution
EricHallahan#1051: *MTJ model parallel can save you.*
𓅬 gabriel_syme 𓅬#3220: :guilty:
alstroemeria313#1694: how hard is like... pipeline parallel on TPU
EricHallahan#1051: More work than Ben is willing to put into it so far. :berk:
𓅬 gabriel_syme 𓅬#3220: deepspeed for Jax when
alstroemeria313#1694: it's microsoft and jax is google, so
𓅬 gabriel_syme 𓅬#3220: never, gotcha
𓅬 gabriel_syme 𓅬#3220: friendship ended with MSFT since I got free TPUs
alexyz#3459: why do I have the feeling other companies will make TPU alternatives all of which are incompatible with each others frameworks
𓅬 gabriel_syme 𓅬#3220: for real though, getting access to TPUs is insane. Like the amount of work I did in a month
mgostIH#0245: How fast are v3-8s?
EricHallahan#1051: This is called CUDA lol
𓅬 gabriel_syme 𓅬#3220: hmm, hard to compare since I haven't trained these models on anything else. People said each core is about a V100, was it?
mgostIH#0245: omg that fast?
Kia#2550: Wait really
Louis#0144: The moment tenstorrent is available at scale I'm leaving nvidia
Louis#0144: lol
mgostIH#0245: I got those for free on kaggle and I imagine colab too
𓅬 gabriel_syme 𓅬#3220: the small neo on my data does about 1h / epoch super sweet (it was hours and hours in 2x3090s)
Louis#0144: Tenstorrent is the only company that seems like they can compete with nvidia for hobbyists
𓅬 gabriel_syme 𓅬#3220: I got about 5gb of data idk how that measures
Kia#2550: Diffusion Batbot when
Louis#0144: Right now
Kia#2550: TPU go brr
Louis#0144: @BoneAmputee get to work
Louis#0144: Jkjk
nshepperd#2316: how hard is it to write like.. custom XLA passes. like something that just arbitrarily modifies the graph
nshepperd#2316: maybe i could write something like my rematerialization schedule optimizer for jax
Louis#0144: I tried once
Louis#0144: Wouldn't recommend
Louis#0144: lol
Louis#0144: The error messages you get are really weird
nshepperd#2316: i originally made it for tensorflow. to do 'gradient' checkpointing on arbitrary graphs
nshepperd#2316: hmm
Louis#0144: Maybe ur fine then
Louis#0144: I had just learned Jax
Louis#0144: lol
nev#4905: why would you ever do that
nev#4905: hm
nev#4905: fair
Louis#0144: I did it to see if I could
Louis#0144: I could but there was no benefit
𓅬 gabriel_syme 𓅬#3220: not sure if anyone from trc are in here, but thank you :hap:
𓅬 gabriel_syme 𓅬#3220: for the sweet, sweet extension
nshepperd#2316: if it could be modified to optimize the device placement on different tpus cores as well, then you could also get model parallelism for free
nshepperd#2316: but just optimized checkpointing gets you pretty far in being able to use less memory
johncaling40#6574: where can i find GooseGAN code?
Louis#0144: @BoneAmputee
nev#4905: by that I mean that latent codes can be quantized with gradient passthrough
nev#4905: hmm
BoneAmputee#8363: uhhh I was using this repo but there's a couple of others worth trying before it (like the pytorch variant or alias-free gan)
<https://github.com/NVlabs/stylegan2-ada>
nev#4905: actually I do have an idea for augs
nev#4905: it's very low-effort or high-effort depending on how you look at it
nev#4905: take the image, convert it to pytorch, run forward pass of augs, convert back to jax
nev#4905: get back gradients from clip, convert to pytorch, compute gradients
nev#4905: move them back to jax
nev#4905: I know how to make a custom op in jax
nev#4905: so it should be easy right
EricHallahan#1051: That seems horribly inefficient.
nev#4905: that's the point
nev#4905: hmm https://github.com/4rtemi5/imax
nev#4905: too easy
nev#4905: hm https://github.com/khdlr/augmax
nev#4905: this seems a little too perfect
nev#4905: I'll implement pytorch -> jax anyway
nev#4905: oh alstro's notebook doesn't even have augmentations
nshepperd#2316: the biggest problem there might actually be pytorch and jax fighting over vram
nev#4905: there is no vram lol
nshepperd#2316: :thinkies: well that's okay then
nev#4905: so just torch and jax fighting over ram
nshepperd#2316: oh also i guess you can't jax.jit that pytorch stuff
nshepperd#2316: i found jit helps a lot on cpu
nev#4905: hm. doesn't torch/xla exist
nev#4905: then there's no need to rewrite the notebook probably
nev#4905: what if you run clip/vqgan on cpu and augmentations on tpu
choltz95#4641: @nev I've used both those libraries before. There is also deepmind's new one, which I haven't looked at yet: https://github.com/deepmind/dm_pix
nev#4905: thank you
chilli#5665: more specifically, jit helps a lot with framework overhead
chilli#5665: what are you trying to do here lol
chilli#5665: I've done ... vaguely similar things
nev#4905: have fun while wasting colab gpu credits
chilli#5665: XLA's CPU codegen is actually pretty bad
chilli#5665: in general
uwu1#4864: what if you change the .data_ptr to point at the jax/pytorch allocated stuff, it's just an array in there right
chilli#5665: but for the type of things that you run on CPU for, framework overhead can often kill you in PyTorch
chilli#5665: so you need some kind of system that codegens away the framework overhead
chilli#5665: I'm missing the context here, but if you're trying to do zero-copy conversions between frameworks, you want dlpack
uwu1#4864: On gpu you can do it with just cuda and ipc or opengl
chilli#5665: a la https://github.com/facebookresearch/functorch/blob/main/functorch/_src/eager_compilation.py#L111
chilli#5665: that's between TVM and PyTorch
chilli#5665: but Jax also has a dlpack thingy
chilli#5665: dlpack works for both CPU and GPU
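A minimal sketch of what that zero-copy handoff looks like between PyTorch and JAX (API details have moved around a bit across versions, so treat this as illustrative):
```py
import torch
from torch.utils import dlpack as torch_dlpack
import jax.dlpack

t = torch.arange(4, dtype=torch.float32)                # PyTorch tensor (CPU or CUDA)
j = jax.dlpack.from_dlpack(torch_dlpack.to_dlpack(t))   # view the same buffer as a jax array
t2 = torch_dlpack.from_dlpack(jax.dlpack.to_dlpack(j))  # and hand it back to PyTorch
```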
random_lurker99#8915: it's also basically an afterthought because all optimisation time goes into TPU codegen; it simply wouldn't fly to do the same stuff on local jit runs
uwu1#4864: doesn't that incur a copy tho?
chilli#5665: no
nev#4905: hm
chilli#5665: that's kind of the point
chilli#5665: lol
chilli#5665: yeah, that's what I've heard
chilli#5665: I mean, it's not *really* a criticism, since a lot of the time what people actually want is to just get rid of the framework overhead
chilli#5665: and basically just run obvious C++ code
chilli#5665: lol
uwu1#4864: oh wow. i never realised it was zero copy, id been using `__cuda_array_interface__`
uwu1#4864: i thought it was like zeromq :) this actually makes me want to make art again now
random_lurker99#8915: the workflow issue is that a lot of interactive/colab/debugging work is on CPU, and once you are on a TPU host and it takes 60 mins to compile, you could in theory also add in a few mins for CPU compile. But the other issue is that for the cases where it really matters, there already exist specialised C++ libs/services
random_lurker99#8915: but optimising HLO layout/fusion/window/memory allocation/remat/defrag is better $$$ investment
chilli#5665: not on GPU?
chilli#5665: hmm
chilli#5665: interesting
chilli#5665: I guess that might be how it works internally at Google lol
chilli#5665: since you just have CPUs and TPUs
random_lurker99#8915: which workflow?
chilli#5665: the "interactive/colab/debugging" workflow
chilli#5665: I think `__cuda_array_interface__` also works
chilli#5665: but iirc the support for that is a bit wonkier?
random_lurker99#8915: oh right, no there are A100s, but no real reason to use GPUs
random_lurker99#8915: also XLA GPU codegen : S
uwu1#4864: do they still have the 1 billion k80s there @ google?
uwu1#4864: it was more than like, a reasonable number if not that many
random_lurker99#8915: fleet stats i dont think anyone will speak freely on, A100s are https://cloud.google.com/blog/products/compute/a2-vms-with-nvidia-a100-gpus-are-ga <-- see alphabet customers
chilli#5665: From what I’ve heard people within google are (mostly) happy using TPUs with Jax, right?
uwu1#4864: i like to imagine they stopped keeping track and there's still a few data centers full of them of them happily training away for the rest of time :')
random_lurker99#8915: I think so yes. TPU v4 is just a beast (see mlperf)
chilli#5665: I’m more curious about usability than raw performance
random_lurker99#8915: right, unfortunately I don't know the gcp experience so it's difficult to comment; also I work on apis/tpu compilation, so it's not an objective view at all. I can just say it's wildly popular on the research side, and people migrated out of fun, not because they were forced to
chilli#5665: nice
chilli#5665: is that true just for Jax?
chilli#5665: or also for TF?
random_lurker99#8915: that people adopted TF out ot fun?
random_lurker99#8915: ehh
chilli#5665: oh, no I mean
chilli#5665: did TF users also switch to TPU for fun
binarypasta#9355: does anyone have a diagram of the vqgan + clip architecture
random_lurker99#8915: it's a bit of an unfair comparison because TF was more popular when TPU codegen was much less mature, and there were no obvious primitives to make use of it etc. Also I was not working on this then; I think TPUs then were more adventure mode, while JAX had the benefit of early issues being ironed out and a much more mature stack
binarypasta#9355: also i want to switch from pytorch to jax, what wrapper is most popular?
binarypasta#9355: haiku?
EricHallahan#1051: Flax is probably more popular than Haiku.
binarypasta#9355: oh cool it has a very pytorch-esque syntax
chilli#5665: I'm curious about why?
chilli#5665: I see, so I guess it's also in large part just because of the maturity of the XLA => TPU codegen stack
binarypasta#9355: much simpler distributed training, from what it looks like
binarypasta#9355: does jax have a static graph like tf?
binarypasta#9355: because if it doesn't this line is kinda confusing https://cdn.discordapp.com/attachments/729741769738158194/884519811995025428/unknown.png
EricHallahan#1051: You want to treat it that way.
chilli#5665: For model parallel specifically?
random_lurker99#8915: XLA wants static shapes for TPUs
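Concretely, `jax.jit` traces and compiles once per input shape, so a new shape triggers a new compilation (a tiny illustration):
```py
import jax
import jax.numpy as jnp

@jax.jit
def f(x):
    return (x * 2).sum()

f(jnp.ones((8, 128)))  # traces and compiles for shape (8, 128)
f(jnp.ones((8, 128)))  # cached, no recompile
f(jnp.ones((8, 256)))  # new static shape -> another trace + compile
```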
chilli#5665: How long has google been investing in XLA for TPUs?
chilli#5665: It seems like it’s been a very long road to get to where we are today
chilli#5665: And a pretty monumental investment from Google
binarypasta#9355: yes, data parallel is already easy in pt
random_lurker99#8915: a long time, see announcement of TPU 1 and think about HW lead time. I think majority of benefits of improved compilation still lie ahead, fun to think about, but who knows what startups come up with - advantage of not having the mega stack to support
zphang#7252: anecdotally 👍
chilli#5665: I’m curious where you think the benefits of compilation lie?
chilli#5665: Or err, that’s not meant to be doubting, just curious what you think is promising
random_lurker99#8915: still a lot of room on spmd/ghsard/beyond spmd code gen, lots of rewrite-style optimizations, stuff for super large models - but all while keeping it user friendly
random_lurker99#8915: (sorry had to rephrase several times and it's a bit vague, maybe this is also too general a channel for this, dont want to spam this)
chilli#5665: I’m curious what you mean for all 3 haha
chilli#5665: 2nd one just sounds like the typical kind of graph level optimizations we’ve seen?
random_lurker99#8915: (not on US time, have to call it, I mean more interesting things that can be inferred from e.g. sharding annotations, improving code gen from propagating these etc)
chilli#5665: Cool, I’d be interested in hearing more at some later point 🙂
uwu1#4864: https://cppyy.readthedocs.io/en/latest/
uwu1#4864: Finally a library where you can just import the header and dll and It Just Works
bmk#1476: this is cool but also kinda scary
Louis#0144: Why scary
uwu1#4864: @chilli do you know if dlpack handles like sharing the gpu memory between processes? Like if I make a dlpack of a tensor, can I just send it to another process (e.g a cuda + opengl app) and access it there?
uwu1#4864: For a while I've wished for a version of Syphon/Spout (mac and windows apps/protocols that allow zero copy texture sharing between programs) that also worked for tensors from ML apps. Unfortunately CUDA IPC only seems supported on linux and those GL gpu texture sharing ones only on Windows and Mac. If dlpack can fix that would be amazing
CKtalon#7792: https://twitter.com/EmilWallner/status/1434883265849085955
Hasdino#6050: o.o
EstebanSir#2189: public dalle model? pog
Kia#2550: Point 4 is probably their API or an actual release
Kia#2550: But I have 0 clue who can run DALL-E locally without having to wait half a day
StellaAthena#3530: Us?
StellaAthena#3530: DALL-E is only 12B parameters right?
Kia#2550: Yup
Kia#2550: But wha:surprise:
chilli#5665: not sure
chilli#5665: I doubt it?
chilli#5665: I'd guess that it's out of scope
chilli#5665: and there's just way too many ways of communicating between processes
chilli#5665: but I don't really know details about how people do this, so just educated guessing
spirit-from-germany#1488: This language model is really smart 😄 https://cdn.discordapp.com/attachments/729741769738158194/884774727569244181/IMG_20210907_141457.jpg,https://cdn.discordapp.com/attachments/729741769738158194/884774727812526090/IMG_20210907_141555.jpg
bmk#1476: literally smarter than me
spirit-from-germany#1488: lol
StellaAthena#3530: Pretty cool demo of adversarial technology someone I know put out
https://www.tiktok.com/@adversa.ai/video/7002941649844309250
StellaAthena#3530: I've tried and failed to get demos like this to work on live video. It's a lot more work than the papers make it seem like.
alstroemeria313#1694: ...Does it work with CLIP
alstroemeria313#1694: I mean is it non-model-specific?
Louis#0144: oh wow
Louis#0144: also how long til people start citing tiktoks
StellaAthena#3530: This is Eugene Neloo's group (I don't think it's physically him in the demo). The AIV had him speak at DEF CON this year. He gave the talk "The Real History of Adversarial Machine Learning"
StellaAthena#3530: I do not think that there is any published work on adversarial examples against CLIP. The Waterloo interns just started working though, and one of them is experimenting with adversarial examples against text. Once he starts seeing results maybe they can be adapted?
alstroemeria313#1694: ahh
choltz95#4641: It looks like CLIP is vulnerable to canonical & patch-based attacks: https://stanislavfort.github.io/2021/01/12/OpenAI_CLIP_adversarial_examples.html
StellaAthena#3530: Yeah, there were some tweets about ideas for this and some colab notebooks. I haven't seen any published papers though
alstroemeria313#1694: if you can count on the image not being resized, rotated, etc. it is super easy
alstroemeria313#1694: you just apply the fast gradient sign method, this works fine for that
StellaAthena#3530: The original OAI blogpost had the example of it confusing apple-the-food and apple-the-company
alstroemeria313#1694: (or if you know in advance how much it is going to be resized by in preprocessing)
alstroemeria313#1694: at one point in #art i did a demo where i asked people for any image and any emoji and then posted an adversarial version of that image that the bot assigned that emoji to
alstroemeria313#1694: but yeah that's easy compared to stuff that has to exist as a real world object.
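The fast gradient sign method mentioned above really is only a few lines; here is a hedged PyTorch sketch of a targeted FGSM step against a generic classifier (`model`, `x`, and `target` are placeholders, not the bot's actual setup):
```py
import torch
import torch.nn.functional as F

def fgsm_targeted(model, x, target, eps=8 / 255):
    # x: image batch in [0, 1]; target: class index we want the model to predict
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), target)
    loss.backward()
    # step *against* the gradient of the target-class loss, then clamp to valid pixel range
    return (x - eps * x.grad.sign()).clamp(0, 1).detach()
```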
someKindaBean#8471: Based solely on how often CLIP+VQGAN notebooks insert text from the prompt, I'm not very surprised by this
Zippy#1111: I am a mega noob (though long-time web developer) and I'm pretty excited because I just coded up my first training script for a bart model for grammar correction with pytorch & transformers & a custom dataset :HAPPY:
Retoli Savoli#0469: https://cdn.discordapp.com/attachments/729741769738158194/884870763436462172/image0.png
Retoli Savoli#0469: Oh shit I applied like 12hours ago
Retoli Savoli#0469: I dont know what to even do with it lmfao
alstroemeria313#1694: I got in 2 hours ago
Louis#0144: where are u flying to?
Louis#0144: OH
Louis#0144: LMAO
Louis#0144: brain wtf
Kharx#8463: Hi, I work as an RL engineer and was wondering if there was any EleutherAI project in this space I could contribute to?
EricHallahan#1051: Welcome! We don't do much RL around here, but you may be interested in #deleted-channel.
bmk#1476: eegi literally is RL
bmk#1476: so
Kharx#8463: Thank you Eric for your answer, I had a look at the word doc (https://docs.google.com/document/d/18pmJBgiucNCg3PJoud3zNTc4qsq5tvcS_WZ9ltsuDMY), but it seems to say there are no contributions needed at the moment
Kharx#8463: Is it maybe outdated?
bmk#1476: @Daj can probably fill you in on the details, the doc might be outdated
bmk#1476: also would you be down to contribute to more general ML stuff outside RL?
mgostIH#0245: There's nothing more general than RL 😩
bmk#1476: thank you for your insightful observation. I'm sure this information will be of significant value to our decision
EricHallahan#1051: It's pretty outdated as far as I know.
Kharx#8463: Yes I would be, what would you recommend?
bmk#1476: you can see our task board at https://board.eleuther.ai for tasks that need doing
bmk#1476: if none of that piques your interest I also have some more software development leaning things you could help with
Kharx#8463: Thanks, let me have a look
Kharx#8463: @bmk there are a couple of tasks that piqued my interest, should I directly contact their authors for more details?
bmk#1476: that seems like a good idea
Retoli Savoli#0469: so is GPT-3 entirely text and code based?
Retoli Savoli#0469: I havent seen any option for images
Fessus#9563: Text only. Methodology for doing images is a bit different.
Retoli Savoli#0469: aww shame thats my biggest interest
Fessus#9563: GPT models have been applied to images. GPT-3 just isn't one of them https://openai.com/blog/image-gpt/
alstroemeria313#1694: also DALL-E
CRG#8707: You can train a vision net to be compatible with the frozen text model: https://arxiv.org/abs/2106.13884
zphang#7252: so I wonder if the vision model is doing most of the work there
Sparkette#4342: Is that a GPT model though?
alstroemeria313#1694: pmuch i think
Fessus#9563: There's nothing stopping you from mixing arbitrary data types. It's just a pain
Fessus#9563: *As long as you tokenize them appropriately
Fessus#9563: And the context window does tend to start to be an issue for lots of non-text types
alstroemeria313#1694: this is why the DALL-E VAE and VQGAN exist
circuit10#0158: What would an image made pixel by pixel with a limited context window look like?
alstroemeria313#1694: And you still have context window issues
circuit10#0158: It would start as one thing and change to another thing
alstroemeria313#1694: They just show up at 256x256/512x512 or w/e
circuit10#0158: Then another thing again etc.
Fessus#9563: lacking consistency
circuit10#0158: It would look interesting though
circuit10#0158: Maybe
Fessus#9563: probably
Fessus#9563: You could look as some of the old RNN based autoregressive image generation models to get an idea
Retoli Savoli#0469: how similar to VQGAN is DALL-E?
Retoli Savoli#0469: VQGAN is really enjoyable and I like whatevers similar haha
alstroemeria313#1694: the VAE is worse than VQGAN
alstroemeria313#1694: But the transformer model OpenAI trained was way bigger
alstroemeria313#1694: Than the ones in Taming Transformers.
Retoli Savoli#0469: so VQGAN is essentially king or am I completely misreading you
alstroemeria313#1694: I think VQGAN is the best VAE arch out there rn
alstroemeria313#1694: You can just train a DALL-E type thing with VQGAN tokens though.
alstroemeria313#1694: The public DALL-E replications have pmuch all switched to VQGAN for this reason
alstroemeria313#1694: Like it gets you 512x512 images at the same cost as 256x256 would for the OpenAI arch.
alstroemeria313#1694: (i.e. same sequence length)
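The sequence-length point comes down to the downsampling factors (assuming the common f=16 VQGAN config): OpenAI's dVAE maps 256×256 pixels to a 256/8 × 256/8 = 32×32 = 1024-token grid, while an f=16 VQGAN maps 512×512 to 512/16 × 512/16 = 32×32 = 1024 tokens, so the transformer sees the same sequence length either way.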
Zippy#1111: Probably dumb question but, is there an estimate about the amount of vram required to run gpt-3 (I know it's not a public model, just curious because I know it's colossal)
EricHallahan#1051: I happen to cover that in our FAQ.
https://www.eleuther.ai/faq
EricHallahan#1051: TL;DR: At minimum 375 GB to 750 GB of memory.
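That range follows almost entirely from the parameter count; a quick back-of-the-envelope check on the weights alone, ignoring activations and other overhead (which is why the FAQ's numbers run a bit higher):
```py
params = 175e9           # GPT-3 parameter count
print(params * 2 / 1e9)  # fp16 weights: ~350 GB
print(params * 4 / 1e9)  # fp32 weights: ~700 GB
```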
Zippy#1111: Oh wow that's quite a bit! I honestly thought it would be more though.
Louis#0144: usually people come in here and ask "when can I run gpt3 on my 1060ti" or whatever
Louis#0144: lmaoo
Zippy#1111: haha yeah I'm not quite that naive.
Dwarf#6935: i got a 1080ti, which is nothing like that measily 1060ti. I think i'll be able to handle it.
Zippy#1111: Though I do acknowledge being pretty damn naive considering I didn't focus on AI in college.
Louis#0144: Its gonna be a looooong time until u can run GPT3 quality generation on consumer hardware
Louis#0144: I dont think its impossible tho
Zippy#1111: We just need middle out compression and then it'll be a breeze.
Zippy#1111: :blaze:
Dwarf#6935: :harold:
bmk#1476: finally, 1180ti
EricHallahan#1051: The card we were always supposed to have.
EricHallahan#1051: Like the RX 490.
Frisbee#3138: pretty sure my old 9600gx could handle it
Teemochu#8740: You can do it right now on "consumer" hardware, just going to be somewhat (but maybe not unbearably) slow and it takes the literal max hardware you can fit in one box
Louis#0144: Lol
Teemochu#8740: (though in practice you should go Epyc instead of TR so you can use RDIMMs)
Louis#0144: Like 8x 3090 could do 1 token an hour
Teemochu#8740: that sounds wrong
Teemochu#8740: off by 3 ooms at least
Teemochu#8740: probably 4
Louis#0144: A token every 20 seconds
Louis#0144: Realistically
Louis#0144: Maybe?
Louis#0144: No
Louis#0144: That's gotta be way higher
Louis#0144: Maybe per minute
Louis#0144: 800GB is a lot of data to move per token
Louis#0144: lol
Teemochu#8740: ah so the bandwidth is the main issue there?
Louis#0144: Realistically
Teemochu#8740: the things I had heard about low-VRAM running (of 6B) made it seem not that bad
Louis#0144: That's so much less data tho
Louis#0144: Like either you basically have a TB of RAM
Louis#0144: or you're loading off of ssds
Louis#0144: A tb of Ram isn't consumer grade lmao
Louis#0144: So I assumed ssds
Louis#0144: Like if you do a huuuuge raid 0 array with nvme ssds maybe you can get like 10GB/sec
Mega Glaceon#8882: how many 3090s do you need to be able to store the weights completely on the gpu
Mega Glaceon#8882: apparently 15, assuming fp16 weights
bmk#1476: round up to account for activations and miscellaneous weirdness
Mega Glaceon#8882: yeah
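That 15 is just the fp16 weight footprint divided by a 3090's 24 GB and rounded up, with no room counted for activations or KV cache:
```py
import math
fp16_weights_gb = 175e9 * 2 / 1e9       # ~350 GB of weights
print(math.ceil(fp16_weights_gb / 24))  # -> 15 cards
```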
gollark#3909: I think you'd rapidly run into issues with available PCIe lanes on common platforms.
Mega Glaceon#8882: just transfer the current layer's output onto the next machine via ethernet :thonkdorp:
Mega Glaceon#8882: im sure there are ethereum mining facilities that could technically run a gpt3 when configured correctly
Mega Glaceon#8882: it would be slow as shit but still
bmk#1476: well youre just moving the activations around so i bet it wont be a big issue
bmk#1476: you only need to load the params once
bmk#1476: and the activations are smol
Mega Glaceon#8882: d_model is 12288 for gpt-3
Retoli Savoli#0469: the story writing features of GPT-3 is pretty neat
Mega Glaceon#8882: to be transferred via ethernet, not even that much
Louis#0144: x 2048 for each token
Louis#0144: x # of layers
Louis#0144: + you need low latency
Mega Glaceon#8882: why x 2048?
Zippy#1111: I mean ethereum mining rigs tend to use pcie 1x since it allows them to put a huge amount of gpus on one motherboard.. I think trying to train on an ethereum rig would be pure pain.
Mega Glaceon#8882: ah i was talking about inference, not training
gollark#3909: Ethernet can be low latency with exotic setups/hardware, though, right? RDMA and whatever.
Zippy#1111: I mean also inference.. 1x slots only get like 250MB/s
Zippy#1111: :overfloosh:
Louis#0144: Per token
Louis#0144: ?
Louis#0144: lol
Mega Glaceon#8882: yes, why
Louis#0144: You mean moving the activations between nodes right?
Louis#0144: d model is per token
Louis#0144: lol
Mega Glaceon#8882: yes it is
Louis#0144: It adds up quickly for Ethernet
Mega Glaceon#8882: if we have 2048 tokens, it's 2048*12288
Louis#0144: * num of layers
Louis#0144: + the need for super low latency
Louis#0144: So you're running fiber
bmk#1476: 2048*12288 is still smol
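For a sense of scale, assuming fp16 activations, the tensor crossing each pipeline cut is tiny next to the hundreds of GB of weights:
```py
print(2048 * 12288 * 2 / 2**20)  # ~48 MiB of activations per full 2048-token context
```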
Mega Glaceon#8882: only if youre running each layer on a different machine
bmk#1476: you dont rneed it for each layer
Louis#0144: Yeah it's smol but the latency req is gonna kill you
bmk#1476: just once per gpu
Mega Glaceon#8882: and why do you need low latency?
Louis#0144: So that you don't spend eons with GPUs idling
bmk#1476: latency over pcie x1 isnt that bad
bmk#1476: p i p e l i n i n g
Louis#0144: I guessssss
Mega Glaceon#8882: generate many things in parallel :smartyum:
Louis#0144: But we're discussing running on consumer hardware
Louis#0144: Consumers aren't going to pipeline
Louis#0144: Lol
Mega Glaceon#8882: consumer hardware vs. consumers
Mega Glaceon#8882: i guess training with this kind of setup would be painful 😅
Zippy#1111: Consumers should just get nvidia dgx superpods :blaze:
gollark#3909: Consumers should buy anything with lots of RAM, train on CPU, and simply wait a while.
gollark#3909: Alternatively, go to a black hole and be time dilated.
Zippy#1111: It kind of sucks how money is one of the biggest hurdles when it comes to training / inference.