Louis#0144: Good prompt eng is nontrivial
dmvaldman#4711: did you try "Shower Thought: thoughts I have in the shower: shower shower thinking thought shower"?
Louis#0144: All at once?
Louis#0144: LMAO
Louis#0144: no
dmvaldman#4711: i've got a feeling about that one
Louis#0144: Are u stoned
Louis#0144: I feel like you’re stoned rn
dmvaldman#4711: if only
Louis#0144: You can get GPT3 to do effective planning if that’s of interest
Louis#0144: You can even embed heuristics as grading exams
EricHallahan#1051: > **Shower Thought: thoughts I have in the shower: shower shower thinking thought shower** thoughts thoughts in the shower thoughts and images in the shower thoughts in the shower shower thought and thoughts thoughts in the shower thoughts in the shower thoughts in the shower shower thoughts shower shower shower shower thoughts in the shower thoughts and images in the shower shower thoughts in the shower shower thoughts in the shower shower thoughts and images in the shower shower shower shower thoughts and images in the shower shower shower thoughts and images in the shower shower shower thoughts
Louis#0144: Yeah see if you ask “has anyone prompted GPT3 with x” and wait long enough
Louis#0144: Someone here will do it for you
EricHallahan#1051: Well I don't have OpenAI API access, nor do I intend to apply.
dmvaldman#4711: lol
gwern#1782: you're really asking for trouble with a repetitive prompt like that, as you just saw. NEVER REPEAT. it's hard enough to avoid falling into repetition loops
EricHallahan#1051: I am very much not generating the highest quality results here though lol.
Sphinx#2092: That's one of those things that I'm still surprised doesn't vanish at scale.
bmk#1476: bad prompt. as punishment, write "i will never repeat myself" on the blackboard 10 times
dmvaldman#4711: interesting that with DallE it seemed like repetition was used as a technique. very different situation of course
Louis#0144: It appears more at scale it seems
Louis#0144: Almost looks like there’s something fundamentally wrong with language models@
Louis#0144: 😉
Sphinx#2092: or maybe just fundamentally wrong with decoding.
Louis#0144: True
Louis#0144: Probably both tbh
Sphinx#2092: It's just not very well studied (or maybe the studies are not convincing) though some papers are fun e.g. https://arxiv.org/abs/2002.02492
Louis#0144: https://arxiv.org/abs/2010.02650
Sphinx#2092: Another good one, though I tend to shy away from linguistic stuff.
Louis#0144: Clara is a god damn genius tho
Sphinx#2092: There was also a nice empirical study looking at cross-entropy versus quality from metrics https://arxiv.org/abs/2002.07233
DJMike#4205: Hey guys, I'm using GPT-Neo in Google Colab. For fastest results, should I be using TPU runtime, GPU, or none? Thanks for your help.
EricHallahan#1051: Are you using Hugging Face Transformers?
DJMike#4205: Yup
EricHallahan#1051: Use a GPU instance.
DJMike#4205: I was using it, but Colab is sending warnings that the GPU is not being utilized.
DJMike#4205: maybe I have to change a setting?
cfoster0#4356: thems fightin words 🤠
EricHallahan#1051: You might want to try this Colab, it should be plug and play.
DJMike#4205: Thanks!
Louis#0144: It’s genuinely true though
Louis#0144: There’s massive fundamental issues
cfoster0#4356: Is it specific to language or autoregressive modeling in general?
Louis#0144: Autoregressive LMs
EricHallahan#1051: I am inclined to agree here.
EricHallahan#1051: Current autoregressive LMs still have major issues that have not been resolved.
EricHallahan#1051: Like tokenization.
EricHallahan#1051: And objectives
Louis#0144: Being able to go back and edit text after you write it is also very key to how actual humans write
Louis#0144: When it’s more than a single text msg
Louis#0144: We haven’t solved that at scale either
cfoster0#4356: Autoregressive LMs == text generation, in this instance?
kindiana#1016: the specific next token prediction objective?
EricHallahan#1051: Well obviously if you are a super-intelligent AI you should only write it once, right? You would have already evaluated all options before the pen hit the page.
Louis#0144: Yeah
Louis#0144: I mean
Louis#0144: Of course
Louis#0144: But I meant we aren’t there yet
Louis#0144: lol
cfoster0#4356: I'm willing to bet decent money that the language generation systems 2 years from now will be something like autoregressive language models + some tree search element
cfoster0#4356: Which, I dunno if you'd interpret that as "autoregressive LMs by themselves were broken and we needed tree search" or "solving next token prediction gives you a sufficient prior to then use dumb tree search"
Sphinx#2092: We already have this
Sphinx#2092: It comes up every once in a while
Louis#0144: I’d believe the latter
Louis#0144: It’s not good yet
Sphinx#2092: See e.g. https://arxiv.org/abs/2104.05336
Sphinx#2092: And all the references there.
Sphinx#2092: Yeah I agree its not good. Though studying the search space is good
Sphinx#2092: And how that changes as you scale
cfoster0#4356: Ye exactly. Do you think we might need something else?
kindiana#1016: I'd guess we need a slight reframing of the tree search objectives
kindiana#1016: (maybe adversarial? :thonk: )
cfoster0#4356: Hmm
EricHallahan#1051: Hmmm
kindiana#1016: hmmm
EricHallahan#1051: hMmm
𓅬 gabriel_syme 𓅬#3220: hm?
𓅬 gabriel_syme 𓅬#3220: smth about LeCunn's cake here
https://arxiv.org/abs/2105.01060
𓅬 gabriel_syme 𓅬#3220: although I find the 'curious' confusing, it just reminds me of curiosity search which I don't think this is
cfoster0#4356: This legitimately sounds like Schmidhuber's GAN thing
cfoster0#4356: Artificial curiosity :schmid:
EricHallahan#1051: :schmid:
nostalgebraist#3542: with gpt2, i only found the logit lens phenomenon for the larger models. gpt2 124M looked the way you describe gpt neo 125M looking
nostalgebraist#3542: so it’s not a surprising result
nostalgebraist#3542: i've been working with gpt neo extensively in the last week (almost done moving my bot to use 2.7b) and i've been meaning to look at this stuff with it as well
Sid#2121: yep weight tying is on for the 1.3B and 2.7B models
kindiana#1016: I didn't bother implementing weight tying in jax :berk:
kindiana#1016: I don't think it makes a big difference either way for big models 🤷
Daj#7482: I wonder if non-weight-tied models would be less interpretable somehow
Daj#7482: I'm in general having a big :thonk: about what kinds of tweaks one might be able to do to an arch with the explicit goal of making it more interpretable
kindiana#1016: for logit lens stuff you always look at output projections right?
kindiana#1016: so I don't think it should make a big difference
kindiana#1016: aux objectives should be good for interpretability I think
Daj#7482: Yea, I really like that idea
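A minimal sketch of the logit-lens procedure under discussion, assuming the Hugging Face `transformers` GPT-Neo implementation (checkpoint name illustrative): project each layer's hidden state through the final layer norm and the output projection, and look at the top next-token guess at each depth.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "EleutherAI/gpt-neo-125M"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Push each intermediate hidden state through the final layer norm and the
# output (unembedding) projection, then look at the top next-token guess at
# the last position. (The last entry already has the final norm applied, so
# this is only approximate there.)
for depth, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h))
    top = logits[0, -1].argmax().item()
    print(depth, repr(tokenizer.decode([top])))
```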
Kia#2550: Does Eleuther Have plans making a one shot(I don't know if that's the right term) Text2image model like DALL-E, Or Just a Generative model like CLIP?
Deleted User#0000: i think there are people working on dall-e yeah
𓅬 gabriel_syme 𓅬#3220: work on CLIP, DALLE, and other multi modal approaches is in the #multimodal channel mostly. although speech is also starting
Kia#2550: Thanks for the help
Kia#2550: The journey and process you did in DALLE is great and can't wait for more results
rom1504#5008: Clip is not a generative model
rom1504#5008: It's a multimodal encoder trained for similarity
People used it *with* generators but by itself it doesn't generate anything
Kia#2550: Oops my bad thanks for the help :0
rom1504#5008: I like the idea but not sure about the results
kip#6104: is it an immediately bad idea to train a gpt with an aux loss on each layer, where every layer n is trying to predict layer n + 1 ?
kindiana#1016: try it lol, but pretty sure its going to be borked
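A rough sketch of the auxiliary objective kip describes, where each layer's hidden state tries to predict the next layer's (detached) hidden state on top of the usual LM loss; the weighting and the detach on the target are assumptions, not a tested recipe.
```python
import torch.nn.functional as F

def lm_loss_with_layer_prediction(model, input_ids, labels, aux_weight=0.1):
    # model: a Hugging Face causal LM; aux_weight is an arbitrary choice
    out = model(input_ids, labels=labels, output_hidden_states=True)
    hs = out.hidden_states            # embeddings + one entry per block
    aux = 0.0
    for n in range(len(hs) - 1):
        # layer n tries to predict layer n+1; the target is detached so the
        # auxiliary gradient only flows into the earlier layer
        aux = aux + F.mse_loss(hs[n], hs[n + 1].detach())
    return out.loss + aux_weight * aux / (len(hs) - 1)
```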
Basedblue#9138: *checks email* https://cdn.discordapp.com/attachments/729741769738158194/839110742363602984/unknown.png
Sphinx#2092: I think we should take a step back and do some bookkeeping. As is, most people don't even realize search is a problem. It would be nice if we could quantify how 'good' or 'bad' searching is, and then show that this quantity doesn't seem to be changing as much as we want as we scale.
Sphinx#2092: It would also be nice if we could disentangle the search from the modeling. For example, if we truly captured the right distribution, do we expect any reasonable decoding strategy to work? If you think back to @andyljones work on scaling laws, he had some arguments that investing compute into training yielded compute savings for searching at inference. Why can't we see that in NLP?
I guess what I'm saying is, we don't need solutions just yet, we should try to understand what the problem is first.
kindiana#1016: for ar models, does naive beam increase next token prediction accuracy?
Louis#0144: sometimes
nev#4905: congrats
kindiana#1016: are there any papers on when it is/isn't the case?
nev#4905: what's the general opinion here about https://github.com/lucidrains/all-normalization-transformer ?
kindiana#1016: it works but its not competitive compared to softmax iirc
Kia#2550: Hmm
nev#4905: it looks kind of like linear attention to me
Ravna#1831: There should be something like a "replication organization" that specifically evaluates all those so called "improvements" on transformers on multiple scales. Much less incentive on this type of thing than pumping out even more meaningless papers about "algorithm and architecture innovations" though.
EricHallahan#1051: Hmmm
nev#4905: Hmmmm
EricHallahan#1051: Fail
EricHallahan#1051: hmmmm
kindiana#1016: there's some papers which do that https://arxiv.org/abs/2102.11972
Ravna#1831: @kindiana Thanks
EricHallahan#1051: It is a very good paper.
finetune#0907: i did some testing with both gpt-neo-1.3B and 2.7B comparing results with fp16 and fp32 inference on gpu
for fp16, there seems to be very roughly a 0.2% chance of an output token being completely :berk:ed (like "A soft voice bringsUkwembadu"), but casting the model back and forth seems to be safe. so if i cast it back to fp32, it stops happening :thonk:
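A sketch of the kind of fp16-vs-fp32 comparison described above, assuming a Hugging Face GPT-Neo checkpoint on a CUDA GPU (prompt and sampling settings are illustrative; the outputs will differ regardless, the point is to eyeball them for garbled tokens).
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "EleutherAI/gpt-neo-1.3B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).cuda()
prompt = tokenizer("A soft voice brings", return_tensors="pt").to("cuda")

torch.manual_seed(0)
fp16_out = model.half().generate(**prompt, do_sample=True, max_length=64)

torch.manual_seed(0)  # cast back to fp32 and sample again with the same seed
fp32_out = model.float().generate(**prompt, do_sample=True, max_length=64)

print(tokenizer.decode(fp16_out[0]))
print(tokenizer.decode(fp32_out[0]))
```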
kindiana#1016: sounds like some activations being borked
EricHallahan#1051: I mentioned this yesterday in #lm-thunderdome.
finetune#0907: i'll take a look at it
finetune#0907: it looks like the same issue, but it also seems to happen with 1.3B. that was trained 16bit i think?
EricHallahan#1051: 1.3B was trained in bfloat16, not binary16.
EricHallahan#1051: I tried casting to bfloat16, but the model wouldn't load.
finetune#0907: i tried bfloat16 once on a colab v100, but it just threw an `at::cuda::blas::gemm: not implemented for N3c108BFloat16E` error at me
kindiana#1016: makes sense lol
Kia#2550: Guys, do you think GPT-Neo XL (I don't know the name) can be used in text2image software?
Kia#2550: How can I put this
Kia#2550: Like CLIP but better
finetune#0907: i guess 0.2% isn't that bad, just kind of annoying
EricHallahan#1051: But isn't that 0.2% for every token?
finetune#0907: yeah
finetune#0907: like, when i generate 500, there's on average one of them that's completely wrong
Sphinx#2092: What do you mean exactly? Like feed in N tokens, use beam search with beam size k, get some final sequence of size N+m and see if the N+1 token matches the gold?
kindiana#1016: yeah
Sphinx#2092: Seems like a very expensive evaluation since you would need to beam search for every token
Sphinx#2092: I'm not familiar with works looking at this.
kindiana#1016: I'm not suggesting it as something to practically use, but as a way of understanding search dynamics
kindiana#1016: i.e. does optimal beam parameters for next token prediction accuracy correspond with optimal "quality"?
Sphinx#2092: Thats a fair point. Would be nice to look at. I suspect the answer is no, or maybe yes as you go from beam size 1 to beam size 2 but likely in the limit probably not.
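A sketch of the measurement kindiana proposes: run beam search a few tokens ahead from each prefix and check whether the first generated token matches the gold next token. Beam settings are placeholders, and this is expensive by construction since it searches once per position.
```python
def next_token_match(model, input_ids, gold_next_id, num_beams=4, lookahead=8):
    # input_ids: (1, N) prefix; gold_next_id: id of the true (N+1)-th token
    out = model.generate(
        input_ids,
        num_beams=num_beams,
        do_sample=False,
        max_length=input_ids.shape[1] + lookahead,  # search a few tokens ahead
    )
    first_new = out[0, input_ids.shape[1]].item()
    return int(first_new == gold_next_id)
```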
Aran Komatsuzaki#5714: ak apparently is the most influential ML twitterer: https://twitter.com/popular_ML/status/1389542357276704768
Sphinx#2092: Though I'm not sure how it looks for unconditional generation. For MT, the mode is usually empty, so you already get rekt for the first token.
EricHallahan#1051: So TL;DR is that you gain traction but people don't follow you?
Aran Komatsuzaki#5714: actually the rate of follower increase is about the same for both ak's. i just started to get traction later 🙂
Sid#2121: what would the use in this be? if you want all layers to be the same just use param sharing
Sid#2121: congrats for getting on the list lol
Sid#2121: we have an influencer in our midst
Aran Komatsuzaki#5714: deepmind not only rejected me for an internship application but also defeated me in the ranking
Sid#2121: btfo
Kia#2550: Don't worry you have Eleuther to support you
Kia#2550: Also People can apply in Deepmind? Wow
Louis#0144: We tried doing this in #carp
Louis#0144: It works ya
Louis#0144: But it’s hard to train
Louis#0144: We’re gonna do a better run later
Louis#0144: Once we have it set up again
Kia#2550: Owww thanks for the help
Kia#2550: Really really love this kinds of project
EricHallahan#1051: Hopefully it will improve text generation performance. 🙏
Kia#2550: Take your time to be honest
Kia#2550: But nonetheless this sound amazing
Louis#0144: We have a few people working on it a lot
Louis#0144: So progress should happen at a steady rate
Kia#2550: Yey :cirslain:
Kia#2550: That's amazing to hear
Kia#2550: Lovely nonetheless
Deleted User#0000: most common wisdom around seems to still be that to speed up training, you should increase your batch sizes. But the neural scaling laws paper said you should instead increase your model size. Any insights into this discrepancy? Is the common wisdom flat out wrong, or are there some tradeoffs/dependence on the problem? |
Deleted User#0000: I know that the "increase your model size" idea from the scaling laws paper is for when you are not data-bottlenecked. But I wonder if there're other cases where increasing batch size is the right move
Deleted User#0000: Let's say I get 10x the amount of compute, and I want to decrease my time deadline, should I mostly increase model size, or batch size? Scaling laws paper would suggest the former (I think), but e.g. https://openai.com/blog/science-of-ai/ would suggest to increase batch size
Kharr#7888: There are diminishing returns for both, but for bigger networks you also need a bigger batch size usually -- you have to figure out the signal to noise ratio in the gradient
Deleted User#0000: hm I see
Kharr#7888: SNR depends on your dataset and task (if it isn't obvious)
Deleted User#0000: yeah I may be hitting the diminishing returns limit for my task/architecture:P
alstroemeria313#1694: > ...uh, what happens if you use leaky ReLU as a gradient estimator for ReLU actually
EricHallahan#1051: ¯\_(ツ)_/¯
Louis#0144: Go for it
alstroemeria313#1694: I Googled this and there is like one guy on the PyTorch forums proposing it
alstroemeria313#1694: In 2018
alstroemeria313#1694: I've got to be missing some search keyword...
Louis#0144: Did they do it
alstroemeria313#1694: They didn't report back
Louis#0144: It’s easy to try
alstroemeria313#1694: Yeah
Louis#0144: How is gradient estimation for ReLu usually done
alstroemeria313#1694: I have a hacky replace_grad() function to do it but the PyTorch forums people proposed a more efficient alternative
alstroemeria313#1694: you don't, you just compute the actual subgradient
Louis#0144: Just override the backwards pass on an nn.Module
Louis#0144: Sub gradients make me cry
Louis#0144: I hate optimization
alstroemeria313#1694: it's only at x=0?
alstroemeria313#1694: for ReLU?
EricHallahan#1051: I've thought of it before but I got no idea if it will work lol
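A sketch of the idea being floated, assuming PyTorch: ReLU in the forward pass, with a leaky-ReLU slope as a surrogate gradient in the backward pass. Untested, and the slope value is arbitrary.
```python
import torch

class ReLULeakyBackward(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, negative_slope=0.1):
        ctx.save_for_backward(x)
        ctx.negative_slope = negative_slope
        return x.relu()

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        # gradient of leaky ReLU, even though the forward pass was plain ReLU
        slope = torch.where(x > 0, torch.ones_like(x),
                            torch.full_like(x, ctx.negative_slope))
        return grad_out * slope, None
```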
alstroemeria313#1694: my code is accumulating all sorts of gradient estimator tricks
alstroemeria313#1694: i had the idea of using the unclamped values as a surrogate in the backward pass for clamped values
alstroemeria313#1694: so out-of-range rgb values would still have nonzero gradients and they wouldn't get 'stuck'
alstroemeria313#1694: and of course i'm using gradient estimators for sampling from a multinomial and for vector quantization
alstroemeria313#1694: they have a backward? huh
alstroemeria313#1694: thought you had to subclass torch.autograd.Function
alstroemeria313#1694: or do tricks with .detach()
EricHallahan#1051: Why else would they call the forward pass `forward`?
EricHallahan#1051: :berk:
alstroemeria313#1694: uh, i thought either you couldn't do that or shouldn't do that
alstroemeria313#1694: they have backward hooks but i hate them
inox#5400: I only found this out looking at the reformer code recently https://github.com/lucidrains/reformer-pytorch/blob/master/reformer_pytorch/reversible.py#L41-L104
alstroemeria313#1694: wut
alstroemeria313#1694: is that an actual override
alstroemeria313#1694: It's not in the nn.Module source code
alstroemeria313#1694: Or is it something that gets used in a custom torch.autograd.Function somewhere else
alstroemeria313#1694: this is in fact it https://github.com/lucidrains/reformer-pytorch/blob/master/reformer_pytorch/reversible.py#L118
alstroemeria313#1694: it's a lucidrains custom method, not an nn.Module override
inox#5400: oh yeah, and lucidrains got it from https://github.com/RobinBruegger/RevTorch/blob/master/revtorch/revtorch.py
alstroemeria313#1694: ahh
alstroemeria313#1694: so there are two ways to do this
alstroemeria313#1694: one is ```python
def replace_grad(fake, real):
    # value of `fake` in the forward pass, but gradients flow into `real`
    return fake.detach() - real.detach() + real```
alstroemeria313#1694: and another is w/ a Function subclass
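For reference, a sketch of the `torch.autograd.Function` version of the same trick (not necessarily the exact code in question): the forward pass returns the first argument, and the backward pass routes the incoming gradient entirely to the second.
```python
import torch

class ReplaceGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x_forward, x_backward):
        ctx.shape = x_backward.shape
        return x_forward                      # value comes from x_forward

    @staticmethod
    def backward(ctx, grad_in):
        # no gradient for x_forward; route everything to x_backward
        return None, grad_in.sum_to_size(ctx.shape)

replace_grad = ReplaceGrad.apply
```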
UnsupervisedLearner#4148: If this counts as off topic LMK
I wanted to know what people's hand-wavey mental models for deep learning are
My own, to start:
Large neural networks, especially self-attention based, are in essence differentiable graph-like compressed databases. Using these for eg generation or classification is making differentiable queries.
This is why more parameters and more diverse data is so important. More parameters implies a larger potential database to query, more data means a larger amount of information actually added to the database
Louis#0144: My idea has to do with updating priors
Louis#0144: Doing lifts from one prior to another
Louis#0144: That’s what softmax does
Louis#0144: It imposes some constraint that creates a lift
Louis#0144: Hopfield networks do this too
Louis#0144: Stacked hopfield networks are pog for neuro symbolic reasoning
UnsupervisedLearner#4148: can you elaborate a bit? keyword search doesn't help me understand what a lift means re priors. Assuming bayesian priors?
Louis#0144: I have a blog post on this
Louis#0144: https://www.louiscastricato.com/post/joint-representations-of-connectionism-vs-symbolism-via-attractor-networks-and-self-attention
UnsupervisedLearner#4148: Tangent, but I am really looking forward to your novel writing system. I sometimes get into reading a lot of amateur writing and lots of stories are left tragically unfinished
Louis#0144: I am looking forward to it too
Louis#0144: LOL
Louis#0144: it’s many years off
UnsupervisedLearner#4148: It's okay I can wait
Louis#0144: If you read the DNNs section, 4) is really interesting
Louis#0144: I never found a way to confirm that hypothesis
UnsupervisedLearner#4148: Yeah it's bookmarked, just skimming now as I am in scattered attention mode
kip#6104: i would have thought this could compress information in earlier layers
CRG#8707: <https://openreview.net/forum?id=YQVjbJPnPc9>
Sid#2121: I'm calling it, CRG is a multi quadrillion parameter retrieval AGI sent from the future
nz#9710: he really is
alstroemeria313#1694: actually i have a better idea for a surrogate backward pass computation for ReLU
UnsupervisedLearner#4148: I didn't realize you were talking about the guy here in the server, looked up CRG
https://github.com/tiavlovskiegor24/CRG
and was pretty confused
Sphinx#2092: I'm more impressed when they come in with the screenshot of the relevant paragraph of text.
alstroemeria313#1694: how about this
alstroemeria313#1694: ```python
class ReLUWithGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)
        return input.relu()

    @staticmethod
    def backward(ctx, grad_in):
        input, = ctx.saved_tensors
        # pass the gradient through unless it was positive *and* the input was <= 0
        # (logical or, so overlapping conditions don't double the gradient)
        return grad_in * ((grad_in < 0) | (input > 0))```
alstroemeria313#1694: in other words you only zero the grad_in if the grad_in was positive *and* the input was zero/negative |
alstroemeria313#1694: rather than unconditionally if the input was zero/negative
chirp#4545: will the ICLR workshop talks be made public?
chirp#4545: interested in this one in particular: https://welmworkshop.github.io/#speakers
ersatz#0001: what's up with https://cohere.ai/
ersatz#0001: is this GPT-3 tier?
nz#9710: @joaogui1 congrats about cohere!
joaogui1#8461: thanks 😄
joaogui1#8461: it's by far the best company I've worked for
ersatz#0001: so is this GPT-3 tier or what
joaogui1#8461: it's OpenAI API tier
joaogui1#8461: we do beat them in some benchmarks in fact
ersatz#0001: even the big boy model?
zphang#7252: !
joaogui1#8461: you mean Da Vinci? Yeah
ersatz#0001: :ultraberk:
zphang#7252: Do you have a plan for academic access? We're pretty interested in comparing/evaluating LM capabilities
neko#5937: How can I get accepted to the wait-list for cohere?
neko#5937: What kind of person is cohere trying to accept
joaogui1#8461: Not sure, but join the waitlist!
Sid#2121: are these metrics correct for shark (which i'm guessing is the biggest model)? https://cdn.discordapp.com/attachments/729741769738158194/839196726417555488/unknown.png |
Sid#2121: lambada score seems a little worse, do you have any more metrics?
zphang#7252: ooh, where's that screenshot from?
nz#9710: The docs I think
neko#5937: I'm guessing yes
joaogui1#8461: it's correct yeah, but are you comparing with Da Vinci or with 175B?
nz#9710: https://docs.cohere.ai/generation-card/
Sid#2121: davinci = 175B no?
joaogui1#8461: I have reason to believe not
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/839197229209485323/Screenshot_2021-05-04-08-50-15-211_com.png
ersatz#0001: so it's *not* GPT-3 tier lmao
joaogui1#8461: .
ersatz#0001: true true
45#2247: I finally edited my chat with @Daj and most of the audio was recovered 🎊
Parts especially relevant to EleutherAI are:
- 3:00: Eleuther's RoPE blogpost
- 6:40: nvidia/FB releasing large models
- 8:58: "GPT-3 is AGI"
- 11:12: can we plug RL to transformers?
- 15:22: humans approximating GPT-3 (backprop) |
- 52:07: Eleuther's experiments with general intelligence (#deleted-channel)
- 53:30: ML Engineers he wants to contribute to EleutherAI
- 55:54: more eegi
- 1:01:57: next milestone for GPT-NeoX
- 1:06:04: huawei releasing hardware/software for lots of params too
- 1:11:17: is multipolar scenario a good thing? what's Eleuther impact on that?
links to youtube video & podcast audio: https://twitter.com/MichaelTrazzi/status/1389631831260209163?s=20
ersatz#0001: btw @45 are you still at Oxford Uni.?
45#2247: nope i'm in paris attm
ersatz#0001: ok
Louis#0144: bit confused by the megatron example. If I have deepspeed.initialize do I need deepspeed.zero.Init
bmk#1476: aight i submitted my cohere application
bmk#1476: @joaogui1 are you allowed to tell us how big the models are? lol
bmk#1476: this is terribly vague https://cdn.discordapp.com/attachments/729741769738158194/839208127168512019/unknown.png
bmk#1476: i mean hundreds can reasonably mean anywhere from 200 to 900 billion
bmk#1476: and it's not clear if this is total across all models or biggest model or what
bmk#1476: or whether you're counting private models that aren't available yet
bmk#1476: also I have OpenAI API Lambada results and they're significantly better than these numbers
cst#9766: I hope they do release it for some form of research access. I have no reason to doubt any of these claims but it seems like there's a bunch of questions that would be really interesting to have independently evaluated |
cst#9766: I feel like a lot of the vagueness comes from the whole startup-stealth stuff, but still
joaogui1#8461: So, yeah
Isaac McHorse#2007: i'm taking a screenshot so i can remember this moment forever
joaogui1#8461: I really don't know what I can say haha
joaogui1#8461: There are other folks from cohere on this discord that probably are more certain about what is public
Lord_Drakostar#9337: Cohere looks sick
bmk#1476: gpt3-davinci lambada, limit=100 https://cdn.discordapp.com/attachments/729741769738158194/839214461348347944/unknown.png
bmk#1476: i really wanna talk to some of those people
bmk#1476: right now things don't seem to line up
Louis#0144: wait im looking up cohere and I personally know like half of these people
Louis#0144: wtf
Louis#0144: thats really funny
Lord_Drakostar#9337: Really?
Lord_Drakostar#9337: Neat
Louis#0144: did u guys like exclusively recruit from UofT and UW
joaogui1#8461: ok then I may have misspoken, we have a model that beats Da Vinci on lambada iirc, maybe it hasn't been exported to the API? I don't work on the GPT side of things, my bad
Louis#0144: LMAOO
joaogui1#8461: I'm from Brazil, so no
Louis#0144: ah ok
Louis#0144: still p funny
Lord_Drakostar#9337: On an irrelevant note you guys should make a RAISA bot
joaogui1#8461: RAISA?
Lord_Drakostar#9337: it's an SCP thing, which this server seems to be into and personally I am as well
joaogui1#8461: ah, kinda got it
Louis#0144: glad to see carol got to do AI stuff
Louis#0144: I spoke with her a few years back about LMs
Louis#0144: I think like 2018/2017
ivan#3200: us too 🙂
Louis#0144: WAIT IVAN
Louis#0144: YOU WORK THERE TOO
Louis#0144: LMAOOOO
Louis#0144: WTF
Lord_Drakostar#9337: E
Louis#0144: thats really funny
Louis#0144: I didnt even realize that
Louis#0144: Arthur said you were working at some AI startup
Louis#0144: but didnt say where
joaogui1#8461: he's one of the co:founders
Louis#0144: yeah
Louis#0144: I realize that now
Lord_Drakostar#9337: Funny pun
Louis#0144: piecing together what a mutual between Ivan and I said
EricHallahan#1051: It is so weird to be monitored so closely lol
Louis#0144: 🤷♂️
EricHallahan#1051: kipply, don't think I don't see you!
Louis#0144: the DL community at UW and UofT was very close knit
Louis#0144: I knew a *lot* of people at the vector institute
Louis#0144: and I mean I knew almost everyone doing DL at waterloo while I was there
Louis#0144: I almost got poached like 4 or 5 times during my undergrad LMAO
kipply#2448: :ablobpeek:
kipply#2448: hello
Lord_Drakostar#9337: Louis?
Louis#0144: ya
Louis#0144: hi
EricHallahan#1051: hi
EricHallahan#1051: Nice to see you
Lord_Drakostar#9337: Wait I thought custom names appeared in things I guess not
EricHallahan#1051: If you ping it will.
EricHallahan#1051: @EricHallahan
Lord_Drakostar#9337: Well yeah but |
Louis#0144: im confused
Lord_Drakostar#9337: I meant the emotes
Louis#0144: so im gonna go back to crying about deepspeed
Lord_Drakostar#9337: I checked the list of people who reacted and
Lord_Drakostar#9337: Ok
Lord_Drakostar#9337: What's Deepspeed?
Lord_Drakostar#9337: I need to cry about it too
Louis#0144: pain
Lord_Drakostar#9337: Elaborate
Louis#0144: ugh its a library for doing distributed training
Louis#0144: when it works
Louis#0144: like 1/10th of the time
EricHallahan#1051: https://www.deepspeed.ai/
Lord_Drakostar#9337: °^°
Louis#0144: DONT
Louis#0144: its a slippery slope
EricHallahan#1051: ¯\_(ツ)_/¯
Louis#0144: avoid if possible
Lord_Drakostar#9337: .·.·.·.· what
freddiemitchell6#0094: Does RAG plus Deepspeed cancel each other out? |
Lord_Drakostar#9337: • Bullet neat
• I have an attention span of half a millimetre
Louis#0144: I dont think RAG has DS support
Louis#0144: anyway
Louis#0144: dont use RAG
Lord_Drakostar#9337: • E
• 2
• bullet points
Louis#0144: its very disappointing
freddiemitchell6#0094: I'm just messing with you lol
Lord_Drakostar#9337: ♪ oh UNICODE, oh UNICODE, da da da da, da, da dada! ♪
Lord_Drakostar#9337: † ooh
Lord_Drakostar#9337: ‡
Lord_Drakostar#9337: • Duck
• Duck
★ Honk if youre :goose:.
bmk#1476: i'm going to have to direct you to #off-topic
bmk#1476: please don't spam in #general
Lord_Drakostar#9337: oh yeah
EricHallahan#1051: Please do not spam |
bmk#1476: and please don't spam in #off-topic either
Lord_Drakostar#9337: Anywho RAISA
Lord_Drakostar#9337: That should be a bot made with GPT-Neo
Lord_Drakostar#9337: • Give it the SCP database
• Give it a request database
• Third one
bmk#1476: #off-topic pls
Lord_Drakostar#9337: How is that offtopic
stephen#2400: I do plan to keep investigating, I'll post an update if I notice anything interesting though my time for this is quite limited at the moment
stephen#2400: That's great, I'd be very interested to see what you find with the larger gpt-neo models. Are you sure logit lens didn't work with the small gpt2 models though? I was first comparing 117M gpt2 with 125M gpt-neo and the effect where the next token identified in early layers looked most obvious to me with the smallest gpt2 models
Rina#0391: its a goose
Rina#0391: where are the geese
Rina#0391: can i take one as a pet
cfoster0#4356: No
cfoster0#4356: Not yet
Rina#0391: why
Rina#0391: 😦
Rina#0391: are there birbs
Rina#0391: birb bot
cfoster0#4356: By definition geese are #off-topic |
Rina#0391: uh
Rina#0391: a spider just visited my macbook
Rina#0391: ¯\_(ツ)_/¯
Rina#0391: it landed on my mac
EricHallahan#1051: `:|`
Rina#0391: are they evolving
bmk#1476: please go to #off-topic .
Rina#0391: it left
Rina#0391: o ok
Deleted User#0000: k+e*(k^tan(x))^phi(ln(tan(x)))
put it into google 😉
Deleted User#0000: zero knowledge section was interesting
Deleted User#0000: thank you. very true that article.
Deleted User#0000: I dig Ayn Rand or Hermetics / Psychocybernetics.
Deleted User#0000: And Yes, I can foto-read
Deleted User#0000: Not too much on discord - but if you (or anyone) want to reach out at protonmail y @ iamexe . org
cfoster0#4356: I dunno what you're talking about and I fear most other folks won't either. Keep in mind that this is an empirically oriented Discord
Xirider#4010: There is a new company with a GPT-3-like model. They claim their model has hundreds of billions of parameters and they offer an API like OpenAI. https://cohere.ai/api
They also offer 4 models. But they only show lambada scores for their 2nd-smallest model. But they show scores for the billion word benchmark for the largest model:
Xirider#4010: https://cdn.discordapp.com/attachments/729741769738158194/839265905821089812/unknown.png
Louis#0144: most of the devs are in this discord
Louis#0144: lol
EricHallahan#1051: We fast
Xirider#4010: lol
EricHallahan#1051: Yeah, we think it is interesting.
Xirider#4010: Is it better than davinci in some way?
cfoster0#4356: Last I checked we're waiting to get access so we can evaluate them ourselves
EricHallahan#1051: ^
Louis#0144: do we have any actual numbers for how big they are
cfoster0#4356: That's why we have the eval harness
EricHallahan#1051: No
Louis#0144: or architecture
Louis#0144: ?
EricHallahan#1051: Assume GPT-2
Louis#0144: atleast its not MoE
EricHallahan#1051: ¯\_(ツ)_/¯
Louis#0144: oh
EricHallahan#1051: They are deliberately vague on the information they expose.
bmk#1476: if this model really is 175B params, then it's truly depressing in its performance relative to gpt3 |
Xirider#4010: So the 3rd-largest model has 70% accuracy, and davinci has 75%,
bmk#1476: worse than 70%, because this is last *token* accuracy
bmk#1476: not last word
bmk#1476: which means its actual accuracy is <70%
Xirider#4010: only the 1 billion words benchmark is with the big model
EricHallahan#1051: It is crushed by DaVinci.
EricHallahan#1051: Or at least in that particular benchmark.
Xirider#4010: yeah, doesn't seem that great
Louis#0144: if it is priced reasonably given performance then I think its fine
Louis#0144: 🤷♂️
Louis#0144: we arent the userbase
cfoster0#4356: Maybe we'll get more info in coming days. Right now I don't think we've got any evidence of davinci level perf, but they've got some kind of largish model
Louis#0144: remember that
EricHallahan#1051: Of course.
EricHallahan#1051: It is always a value proposition.
Xirider#4010: they write that they use 200 gb of filtered data, 3tb unfiltered
cognomen#6297: wonder if this is signalling a boom in mystery meat LM API startups
EricHallahan#1051: Well they were super-stealth for a while here.
EricHallahan#1051: Like literally here.
Xirider#4010: gpt3 was trained on 570gb. so probably their larger model is also smaller
EricHallahan#1051: I believe the understanding is that GPT-3 175B was overtrained.
bmk#1476: well, not quite
bmk#1476: but about the right ballpark
kipply#2448: its not mystery meat its spam :blobtea:
kipply#2448: which is apparently pork
Sid#2121: TIL cohere.ai is powered by pork
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/839269142624468992/unknown.png
bmk#1476: + 570gb is the post filtering size of the CC component
ersatz#0001: depressing is the term
Louis#0144: we wont know if its depressing or not until they tell us the price
Louis#0144: LMAO
Louis#0144: if its significantly cheaper than OAI then theyre golden
ersatz#0001: I was hoping for a competitor with the state of the art
Louis#0144: they either had to deliver a better product for the same or maybe slightly higher price
Louis#0144: or a worse/equal product for cheaper
Louis#0144: competing with OAI's economies of scale is gonna be difficult
Louis#0144: but I think its a business plan thing, not a tech thing, that'll win here
cfoster0#4356: wonder what folks in the OAI beta slack are saying right now
Louis#0144: i mean its full of OAI devs, I dont think theyre really talking about it *there* that much
Louis#0144: maybe in DMs
cfoster0#4356: I thought it was mostly users?
Louis#0144: o I havent seen anything from inside the slack except screenshots from when OAI first released GPT3
Louis#0144: like from the first week
Louis#0144: not going to say who sent them
Louis#0144: but then it seemed like a lot of devs
Louis#0144: JAX is all fixed graph right?
EricHallahan#1051: ¯\_(ツ)_/¯
zphang#7252: haven't seen anything in the OAI slack or forum
kipply#2448: :blobtea: there is time and lots of things we know we can do
ersatz#0001: I just found out about the Wired article about EleutherAI
kipply#2448: this is only the beginning
kipply#2448: ✨
EricHallahan#1051: :wat:
EricHallahan#1051: You're late to the party.
EricHallahan#1051: :berk:
ersatz#0001: yes I know
kipply#2448: i'm here to cheer on @chilli
EricHallahan#1051: I'm playing around with you lol
kipply#2448: i am a chilli fan
EricHallahan#1051: Even if late, it is better than never. `:)` |
ersatz#0001: are there any new Japanese members since the article?
bmk#1476: oddly specific
EricHallahan#1051: It depends on how you define "member", and yes, that is kind of an oddly specific question.
Louis#0144: arent we all
Louis#0144: lets goooo chilli
bmk#1476: i know more japanese than 98% of the world population, does that count
Louis#0144: weeb
ersatz#0001: how is that oddly specific? I'm referring to the article about EleutherAI published in Wired Japan
ersatz#0001: https://wired.jp/2021/04/20/ai-generate-convincing-text-anyone-use-it/
bmk#1476: there's a Wired Japan? o.O
EricHallahan#1051: Oh, the Japanese translation
ersatz#0001: yes
guac#4716: BMK: A multilingual Edmontonian Model
EricHallahan#1051: Yeah, now the context makes *way* more sense.
EricHallahan#1051: Not really, but I have no way to check.
EricHallahan#1051: Can we check the website statistics as a proxy?
Louis#0144: 🅱️ any Modal Konsortium
Louis#0144: i tried
Louis#0144: knowledge-graph?
Louis#0144: idk |
Louis#0144: Leo wouldnt approve of that one
ersatz#0001: btw I was reading the Pile paper and I was wondering why not use Archive of Our Own
EricHallahan#1051: I don't know, but I bet the Pile authors would be able to answer that.
EricHallahan#1051: Oh there is one now lol
cfoster0#4356: We couldn't get it scraped in time, is the honest reason, from what I recall
Louis#0144: also whenever fanfic stuff came up people always mentioned Literotica rather than Ao3
Louis#0144: that probs has something to do with it
Louis#0144: although did we even ship with Literotica?
cfoster0#4356: No
ersatz#0001: isn't Ao3 far larger than Literotica?
ersatz#0001: like a few orders of magnitude?
Louis#0144: Ao3 is *huge*
Louis#0144: like if I remember its the main fanfic site for most of SEA
Louis#0144: and fanfic is much more popular there than in the west
Louis#0144: so I would not be surprised
ersatz#0001: SEA?
Louis#0144: south east asia
ersatz#0001: gotcha
ersatz#0001: Ao3 is a nonprofit btw
ersatz#0001: so maybe asking them for the data is enough? |
bmk#1476: if you want to go ahead and do that I'm not going to stop you
ersatz#0001: I'm not affiliated with the project
ersatz#0001: also the paper is done isn't it
ersatz#0001: that's too late
bmk#1476: well, it's too late to add to the pile but i mean if you want AO3 data you can get it yourself
Exocamp#8255: Question about a (rather amateurish) thought I just had.
Exocamp#8255: I saw a module/repo that finetunes gpt-2 to training data
Exocamp#8255: Couldn't one kind of... "continue training" GPT-2 after its model by feeding just general text data into the finetuning, so to speak?
Exocamp#8255: Add more parameters?
Exocamp#8255: It sounds kind of silly even to me but I'm curious about the thought, if only because I wanna see what I can do with GPT2(/neo if I ever get more than 1 GPU haha...)
EricHallahan#1051: I think you are mistaken: fine-tuning is the act of continuing training with different data than the checkpoint. `:)`
EricHallahan#1051: It is kinda a glass-half-full thing
EricHallahan#1051: You can look at it from either perspective and be correct.
Exocamp#8255: Well, yes, but what I was thinking is that you would feed it (preferably picked from everywhere) text data
Exocamp#8255: And repeat finetuning
EricHallahan#1051: There is absolutely no reason why you cannot do that.
Exocamp#8255: Basically the (very) poor man's GPT-2 continuing training
Exocamp#8255: Aight good to know thanks lol
cfoster0#4356: In the same spirit https://arxiv.org/abs/2004.10964
EricHallahan#1051: You can try, but it doesn't really make sense to me IMO.
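For context, "continuing training" GPT-2 on more general text with Hugging Face Transformers looks roughly like the sketch below; the dataset path and hyperparameters are made up, and note that this only updates the existing weights rather than adding parameters.
```python
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          Trainer, TrainingArguments,
                          TextDataset, DataCollatorForLanguageModeling)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# more_text.txt is a placeholder for whatever general text you want to add
dataset = TextDataset(tokenizer=tokenizer, file_path="more_text.txt", block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="gpt2-continued", num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, data_collator=collator,
        train_dataset=dataset).train()
```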
Exocamp#8255: Also, I'm still unsure on how Neo training/finetuning works. Do you *need* multiple GPUs because of DeepSpeed, or could you somehow do it on just 1 GPU?
Exocamp#8255: Yeah, I know, it'll never stack up to GPT3 or anything insane like that lol
Exocamp#8255: Was just curious.
EricHallahan#1051: Surprisingly it is actually *removing* parameters from models that becomes a more likely scenario as the size increases. See recent research on early exit.
Exocamp#8255: Ah.
Exocamp#8255: I do know that more parameters != better model, but I'm not sure if having the model *removing* parameters seems fun either
EricHallahan#1051: You can go the other way too though.
EricHallahan#1051: You can train your model by doing progressive growing like StyleGAN.
EricHallahan#1051: I guess it is viable lol
Exocamp#8255: I'm not too sure on that, will need to look. I guess for now, the idea is that given an unlimited amount of time and unique data, could a repeatedly fine-tuned GPT2 match up to GPT3
Exocamp#8255: I'll look at StyleGAN too thank you
EricHallahan#1051: I am *very* familiar with StyleGAN lol
Sphinx#2092: I disagree. I think it's ridiculous that people are not doing this already.
Sphinx#2092: Doesn't it seem absurd to you that if OpenAI wants to build some larger GPT model (GPT-4?) that they would just throw away the weights of GPT-3?
Sphinx#2092: Think of the savings if you could actually somehow re-use those weights as an initialization.
kindiana#1016: I mean, if you train a model one oom larger, reusing the weights will save you like 5% of compute
Sphinx#2092: Who knows. I mean, just starting with a good init could save you lots of training steps.
bmk#1476: i mean they didn't reuse the GPT weights for GPT2, or GPT2 weights for GPT3
Sphinx#2092: Sure, and I think that's absurd.
Sphinx#2092: I think there should be more work exploring re-using weights. |
bmk#1476: i vaguely remember reading something about how starting with a pretrained model actually hurts performance vs random init sometimes
bmk#1476: i think it was one of the scaling laws papers we talked about ages ago
Sphinx#2092: Sure. I mean, I'm not saying it'll work
Sphinx#2092: I'm saying it's worth studying.
Sphinx#2092: To see an example for MT, see https://arxiv.org/abs/2010.03737
bmk#1476: I'm pretty sure you were the one who posted the paper in fact lol
Sphinx#2092: Yes, the ossification stuff. I nevertheless think it can be valuable. Like if you can get one order of magnitude larger for lets say half the price (by re-using the init), that alone could be millions of dollars for very large models.
bmk#1476: i don't think OA wants to take those chances for only saving 5-10% compute cost
bmk#1476: oh
bmk#1476: this message was sent before you sent that one, my internet is just slow
Sphinx#2092: Like I said, I'm not sure how much we are talking about, but it would be nice if someone worked out the experiment
Sphinx#2092: and we can get a sense as to how much we can actually save
Sphinx#2092: as opposed to just theorycrafting
Sphinx#2092: armchair ml.
Sphinx#2092: In the paper I posted above, they got a 1.4x speedup.
kindiana#1016: https://arxiv.org/pdf/2010.13369.pdf
kindiana#1016: this does the reverse
Sphinx#2092: Layer dropping is another good idea, but it doesn't re-use weights though.
Sphinx#2092: Like I just want to find some way to extract the knowledge of pre-trained models for the purposes of building even larger models.
EricHallahan#1051: I already admitted that I was mistaken. |
Sphinx#2092: Yeah, my bad. I didn't mean to put you on the spot. Mostly used the opportunity to pitch to people the idea of re-using pretrained weights in a progressively growing fashion.
gwern#1782: you will get worse results than if you train from scratch, though. look at something like OA5 Rerun. they did the same thing, model surgery to hotstart the new dota2 agent, but the final result was noticeably worse than a from-scratch run they did the last time
Sphinx#2092: Ooh that sounds interesting. Do you have a link?
gwern#1782: by starting from a stupid agent, you inherit its biases. like, say, alphazero scrapping all human knowledge to outperform
gwern#1782: it's bias/variance again
gwern#1782: I don't know if I would call that 'ossification' but it's definitely a concern
Sphinx#2092: ```
Of course, in situations where the environment is pre-built and well-understood from the start,
we see little need for surgery. Rerun took approximately 20% of the resources of OpenAI Five; if
we had access to the final training environment ahead of time there would be no reason to start
training e.g. on a different version of the game.
Rerun continued to improve beyond OpenAI Five’s skill, and reached over 98% winrate against
the final version of OpenAI Five. We wanted to validate that our final code and hyperparameters
would reproduce OpenAI Five performance, so we ceased training at that point. We believe Rerun
would have continued improving, both because of its upward trend and because we had yet
```
Sphinx#2092: Hmm, 1/5th of the cost for seemingly pretty good performance.
Sphinx#2092: Even if there is some slight limitation, it might be useful as a way to 'test the waters' so to speak. I'll have to look into this more, thanks for the pointer!
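A sketch of the warm-start idea being pitched here: initialize a larger model by copying over whichever parameters of a smaller pretrained model match in name and shape, leaving the rest randomly initialized. Purely illustrative, and it says nothing about the ossification concern raised above.
```python
import torch

def warm_start(big_model, small_state_dict):
    # copy any parameter whose name and shape match; leave the rest as-is
    big_state = big_model.state_dict()      # references the live tensors
    copied = 0
    for name, tensor in small_state_dict.items():
        if name in big_state and big_state[name].shape == tensor.shape:
            with torch.no_grad():
                big_state[name].copy_(tensor)
            copied += 1
    return copied
```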
nostalgebraist#3542: just got gpt-neo 2.7B loading + sampling up to 2048 tokens without OOM on a vanilla Colab (i.e. only a T4 and ~12GB RAM)!
nostalgebraist#3542: maybe this is already known stuff, idk? it just took some doing for me |
EricHallahan#1051: How did you do it?
nostalgebraist#3542: https://github.com/nostalgebraist/nostalgebraist-autoresponder/blob/v11-lazy/experimental/ultra_defensive_loading.py
EricHallahan#1051: I ran into serious problems with precision.
nostalgebraist#3542: what problems? i'm not using fp16
EricHallahan#1051: Ah, I see what you did.
Louis#0144: smot
nostalgebraist#3542: the main problem was just getting pytorch to load the damn thing without needlessly allocating RAM at some point and then overspending our 12GB
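A rough sketch of one way to avoid double-allocating RAM when loading a large checkpoint (not the exact code in the linked script): consume the state dict entry by entry and drop each tensor as soon as it has been copied into the model.
```python
import gc
import torch

def load_frugally(model, checkpoint_path):
    state = torch.load(checkpoint_path, map_location="cpu")
    own = model.state_dict()                 # references the live tensors
    with torch.no_grad():
        for name in list(state.keys()):
            if name in own and own[name].shape == state[name].shape:
                own[name].copy_(state[name])
            del state[name]                  # drop each tensor once consumed
    del state
    gc.collect()
    return model
```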
EricHallahan#1051: I was talking about them to Finetune earlier.
nostalgebraist#3542: ohh, yeah, i used a tpuV3 to finetune
nostalgebraist#3542: this is just sampling
EricHallahan#1051: No, \@finetune.
nostalgebraist#3542: ohh sorry lol
EricHallahan#1051: Yeah, I have an example of my problems in #lm-thunderdome.
Louis#0144: Wait using HF?
EricHallahan#1051: Why would you want to do that lol
Louis#0144: I mean
Louis#0144: It’s better than mesh tf
Louis#0144: ?
EricHallahan#1051: Just use GPT-Neo, I bet it is significantly more performant.
EricHallahan#1051: ¯\_(ツ)_/¯ |
nostalgebraist#3542: you mean use tf mesh?
EricHallahan#1051: TL;DR is that casting to binary16 leads to activation clipping and wrong token decoding.
nostalgebraist#3542: i had a very un-fun experience trying to get tf mesh to do inference on tpu, then i went to plan B and converted the thing to torch
kindiana#1016: lmao
kindiana#1016: tf mesh ~~inference~~ is cursed
EricHallahan#1051: Yeah, that is pretty much why Ben's repo exists.
nostalgebraist#3542: it's not on github but i literally had a fork of tensorflow-estimator that i was messing with to try to get it to behave coherently
EricHallahan#1051: JAX is way more flexible with what it will let you do.
nostalgebraist#3542: cool. i've literally never used JAX, heard it recommended
nostalgebraist#3542: this is for my bot so "just get something working with minimal changes to existing code" is a priority
nostalgebraist#3542: like, i used tensorflow in the bot until just now despite hating it, because i also hate changing production code that works
Louis#0144: What bot
nostalgebraist#3542: https://nostalgebraist-autoresponder.tumblr.com/
EricHallahan#1051: What boat
Louis#0144: YOURE THE ONE WITH THE TUMBLR
Louis#0144: LMAO
Louis#0144: omg
Louis#0144: I didn’t know you were in this server
Louis#0144: I’m not laughing at the tumblr I just didn’t know who ran it
bmk#1476: the Tumblr is literally the exact same username |
Louis#0144: Yeah
Louis#0144: Im dumb
Louis#0144: Idk
Louis#0144: I didn’t realize
EricHallahan#1051: Anyway, yeah, GPT-Neo cast to binary16 is cursed, I believe it needs to be tuned for it to work properly at that precision.
nostalgebraist#3542: where is this JAX repo you're referring to? it's presumably well known in this server but i was unaware of it
EricHallahan#1051: https://github.com/google/jax
nostalgebraist#3542: https://github.com/kingoflolz/mesh-transformer-jax is the one i was looking for i think
nostalgebraist#3542: i know what jax itself is
EricHallahan#1051: Oh, yeah, that is Ben's repo.
nostalgebraist#3542: thanks
EricHallahan#1051: It is probably five times better than GPT-Neo lol
EricHallahan#1051: Mesh TensorFlow is cursed.
asparagui#6391: something something nine months ago something something
nostalgebraist#3542: i clearly should have asked here before trying with tf-mesh, but i sort of...... ""enjoyed"" the ""adventure"" tbh
nostalgebraist#3542: i finally know how TPUEstimator works now! (...and it's terrible)
EricHallahan#1051: I never succeeded with using Colab TPUs.
StellaAthena#3530: Filling up GPUs is a hell of a drug https://cdn.discordapp.com/attachments/729741769738158194/839352018753421332/Screen_Shot_2021-05-05_at_12.04.45_AM.png
CKtalon#7792: anyone used LaBSE before? how long does it take to complete.. it seems like it won't complete in my lifetime 20k lines matching to 100k lines
CKtalon#7792: how do they even do it for commoncrawl >.< |
nostalgebraist#3542: oh i realize i misread this, so just to be clear, my pipeline was
(finetune with tf-mesh) -->
(attempt inference using tf mesh, lol) -->
(convert checkpoint to pytorch) -->
(do inference with the huggingface helpers because that's quicker to setup that rewriting all the tf sampling code i wrote in pytorch)
Louis#0144: Ur fucking wild
EricHallahan#1051: Why?
Louis#0144: That’s such a roundabout way
Louis#0144: But it worked so well
Louis#0144: So congrats
nostalgebraist#3542: thanks! what would the non-roundabout way look like?
EricHallahan#1051: Tune on GPU with HF directly.
Louis#0144: Yeah
Louis#0144: lol
EricHallahan#1051: Which we had a bad experience with, though it definitely works.
nostalgebraist#3542: keep in mind i had very little experience with huggingface at the start of this, and while i have somewhat more torch experience, my existing codebase was all in tf
nostalgebraist#3542: i just saw "oh i've finetuned in tf on a TPU before, this gpt-neo repo will help me do that again, then i'll do inference in tf"
EricHallahan#1051: I didn't start using PyTorch until five months ago when for maybe the fourth time I returned to ML/DL.
EricHallahan#1051: (I tend to get bored/frustrated quickly when things don't go right, especially with Colab.) |
EricHallahan#1051: I wish I had used it sooner.
nostalgebraist#3542: it's definitely far better than tensorflow, although that's not hard
nostalgebraist#3542: i do find it annoying to do memory management in pytorch
nostalgebraist#3542: but imo we'll have some form of that problem as long as we keep insisting on using a garbage collected language to puppeteer around pieces of C/C++ high-performance computing code, so whatever
EricHallahan#1051: I like to kill any language that doesn't allow me to explicitly instruct the removal of a value from memory.
kindiana#1016: `del` :berk:
nostalgebraist#3542: needs a `gc.collect()` 😛
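For reference, the usual incantation for actually releasing a large CUDA tensor in PyTorch (sizes illustrative):
```python
import gc
import torch

big = torch.empty(1024, 1024, 256, device="cuda")  # ~1 GB of float32
del big                      # drop every Python reference first
gc.collect()                 # then let the garbage collector run
torch.cuda.empty_cache()     # and return cached blocks to the driver
```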
EricHallahan#1051: Well, at least it is better than MATLAB.
EricHallahan#1051: I *hate* MATLAB.
EricHallahan#1051: I can never remember the syntax.
nostalgebraist#3542: (on this note, i love how the "reset the whole thing and free all the memory" command in keras has been broken for literal years, as i discovered trying to debug a coworker's intermittent OOM issues https://nostalgebraist.tumblr.com/post/641628845811908608/memory-mismanagement-in-keras)
AI_WAIFU#2844: that's like trying to do surgery with a chainsaw
Cade Gordon#3029: has anyone out there trained clip on solely the yfcc100m subset yet? a lot of hypotheses surrounding performance seem impossible without having access to a model trained on less data for comparison
Louis#0144: I thought u said language model and u were making an alignment joke
Louis#0144: I think so
Louis#0144: @spirit-from-germany you did right?
Cade Gordon#3029: @Louis you may have just made some of my work not so impossible!
Louis#0144: Don’t thank me
Louis#0144: I didn’t do it
Louis#0144: lol |
Cade Gordon#3029: potentially connected someone for me
Cade Gordon#3029: more than google has done
EricHallahan#1051: Google can order a pizza for you tho
EricHallahan#1051: (IIRC)
EricHallahan#1051: (IDK, I don’t use Google products too often.)
kindiana#1016: I don't believe he's trained clip 🤔
EricHallahan#1051: I have no idea what this conversation is about and I need sleep, goodnight!
Louis#0144: Ben is always watching. Always judging
voxs#0001: i wonder if anyone has tried something like GPT but instead of predicting just the next word, it predicts the n+1st, n+2nd, n+3rd word so that it can generate stuff faster and generate multiple tokens in one iteration
CRG#8707: https://arxiv.org/abs/2001.04063
voxs#0001: ah interesting
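A toy sketch of the multi-token-prediction idea raised above: extra heads on top of a causal LM's hidden states predict the tokens two and three steps ahead as auxiliary losses. This only illustrates the idea; it is not ProphetNet's n-stream self-attention mechanism.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTokenHeads(nn.Module):
    def __init__(self, d_model, vocab_size, horizons=(2, 3)):
        super().__init__()
        self.horizons = horizons
        self.heads = nn.ModuleList(nn.Linear(d_model, vocab_size) for _ in horizons)

    def forward(self, hidden, labels):
        # hidden: (batch, seq, d_model), labels: (batch, seq)
        loss = 0.0
        for head, k in zip(self.heads, self.horizons):
            logits = head(hidden[:, :-k])        # positions with a token k steps ahead
            target = labels[:, k:]
            loss = loss + F.cross_entropy(
                logits.reshape(-1, logits.size(-1)), target.reshape(-1))
        return loss / len(self.horizons)
```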
voxs#0001: why is it so hard to come up with truly original ideas
chilli#5665: I don't think it's that hard
chilli#5665: You just need to be immersed in an area for a while
chilli#5665: although I guess it also depends on your definition of "truly original"
chilli#5665: haha
voxs#0001: like
chilli#5665: There are not many papers I would consider "truly original"
voxs#0001: "no one has thought of this idea before" original
chilli#5665: that also leaves a lot of room for interpretation |
chilli#5665: there's "nobody has thought (and tried) this particular instantiation of this idea" (i.e. most papers)
chilli#5665: and then there's
chilli#5665: "nobody has actually ever thought about something similar to this"
chilli#5665: I don't really know how many papers are like that
chilli#5665: lol
chilli#5665: There's also a big difference between "nobody has thought of this idea" and "nobody has gotten this idea to work"
chilli#5665: I *thought* of GPT-3 after GPT-2
chilli#5665: For that matter, I've probably thought about GPT-4 too
voxs#0001: also like how tf did people even come up with transformers
voxs#0001: like, it has to be kind of based off of something before it, right?
chilli#5665: well, what is a transformer?
voxs#0001: the architecture in https://arxiv.org/abs/1706.03762
chilli#5665: I know that lol
chilli#5665: I mean, what exactly do you think is unique about transformers
chilli#5665: (I actually don't know the answer to this either)
voxs#0001: yea well ud have to read a shit ton of papers to know how unique it is
chilli#5665: or err, I mean
chilli#5665: you could read the paper itself too
chilli#5665: lol
chilli#5665: I'm not actually sure what people consider the primary innovation of transformers though |
chilli#5665: multi-head attention?
cfoster0#4356: Err wasn't the original transformer basically an encoder-decoder RNN with recurrence replaced with attention?
guac#4716: but you couldn't determine uniqueness in a vacuum
guac#4716: hmmm don't know how much it relates to an RNN... it's pretty much a standard transducer architecture from high up. How unique was the warmup -> decay they used
chilli#5665: From my skim over the paper, it seems like their primary innovations were:
1. Getting rid of recurrence
2. Using multiple heads of attention
3. Positional encodings (arguably part of 1?)
chilli#5665: I'm not sure I would consider any of those "truly original" ideas
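For reference, a minimal sketch of the multi-head scaled dot-product attention referred to in point 2 (no masking, dropout, or caching; shapes only):
```python
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, n_heads):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):                       # x: (batch, seq, d_model)
        b, s, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # split into heads: (batch, heads, seq, d_head)
        q, k, v = (t.view(b, s, self.n_heads, self.d_head).transpose(1, 2)
                   for t in (q, k, v))
        att = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(self.d_head), dim=-1)
        y = (att @ v).transpose(1, 2).reshape(b, s, -1)
        return self.out(y)
```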
bmk#1476: yeah transformers are kinda an obvious next step given the previous steps
chilli#5665: well, I wouldn't say obvious...
chilli#5665: You need to remove recurrence
voxs#0001: ah, so they didnt invent self attention?
chilli#5665: no
chilli#5665: attention has been around since schmidhuber times
chilli#5665: actually, I'm curious who was the first person to use dot-product attention
chilli#5665: I wouldn't be surprised if it's been a thing in numerical analysis since like 1800
chilli#5665: You can actually even see who was responsible for what
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/839379407403876362/unknown.png
guac#4716: the most thorough contributor footnote i've read lol |
bmk#1476: *sad pile noises*
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/839379644675129364/unknown.png
kindiana#1016: ehhh its caught up a bit
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/839379698302976040/unknown.png
guac#4716: damnnn a whole appendix section nice lmao
bmk#1476: or gpt3 https://cdn.discordapp.com/attachments/729741769738158194/839379754682417162/unknown.png
guac#4716: i did say footnote :berk:
kindiana#1016: oh I didn't know gpt3 did half precision
kindiana#1016: how did they only have 20% gpu utilization then :thonk:
guac#4716: poor comm
𓅬 gabriel_syme 𓅬#3220: is that the MLP?
kindiana#1016: yeah
𓅬 gabriel_syme 𓅬#3220: nice 🙂
kindiana#1016: this is a slightly different formulation for autoregressive
𓅬 gabriel_syme 𓅬#3220: that is an interesting looking loss curve
𓅬 gabriel_syme 𓅬#3220: it looks like it's making mini descents
CKtalon#7792: research is generally incremental though
chilli#5665: I'm not complaining or diminishing their accomplishment
chilli#5665: The original context was somebody bemoaning that it seems impossible to come up with truly original ideas like transformers
𓅬 gabriel_syme 𓅬#3220: truly original ideas are very rare imo, one of a kind thing. Especially in the sciences where repetition is the mode of progress. Original applications of old ideas on the other hand is quite common...would transformers fall in the latter? |
CKtalon#7792: i think the entire framework of transformers and putting it together can count as original (even if it's incremental)
CKtalon#7792: but if you want to be strict, then the number of truly original ideas might just be a few dozen across disciplines
Deleted User#0000: Empirical research is based on observed and measured phenomena and derives knowledge from actual experience rather than from theory or belief.
EricHallahan#1051: :empiricism:
EricHallahan#1051: I think we are familiar with empiricism.
inspiration101#2728: I was wondering if there are any required support / automation scripts that I could write, that do not need a high understanding of machine learning
EricHallahan#1051: 🤔
EricHallahan#1051: ¯\_(ツ)_/¯
EricHallahan#1051: I don't know of any, but maybe Leo has something to do.
EricHallahan#1051: Not that I expect him to be up for at least a few more hours.
𓅬 gabriel_syme 𓅬#3220: not very experienced in that
TheDudeFromCI#4390: Are there any existing resources or projects that explore the concept of massively distributed deep learning training?
I mean, training supermassive neural networks is obviously very difficult and requires extremely expensive resources to approach. Likewise, for many open source communities like this one, there's a ton of users that are more than willing to donate computational resources in order to progress the training. If a massive neural network can be broken down into lots of smaller neural networks to train and are constantly being shuffled around, it might be easier to train such large networks with much more modest workstations.
kindiana#1016: we have an answer for this on the faq https://www.eleuther.ai/faq/
TheDudeFromCI#4390: Thank you.
𓅬 gabriel_syme 𓅬#3220: the answer is probably yes, and also probably off-topic
𓅬 gabriel_syme 𓅬#3220: didn't have to delete though, I'm sure it's fine
thepok#1770: Could someone finetune neo on this math file pls?
thepok#1770: https://github.com/metamath/set.mm/blob/develop/set.mm |
thepok#1770: its 40mb of fundamental math
Sid#2121: bruh
Sid#2121: do it yourself
thepok#1770: i dont have the know how, but maybe someone else is interested too 🙂
Sid#2121: this isn't your personal free labour recruitment platform or mturk. If you want work done, either do it yourself or pay someone to do it.
Deleted User#0000: I just got access to gpt3 from a friend and I asked it how to build a robot and it started telling me legit everything required to build a brain. I have only got as far as the brain but its gonna get as far as telling me how to code its intelligence and stuff bro I am so happy for the development of ai and tech. I am so ready for the future!!!
Deleted User#0000: "First, we gather 50,000 human consciousnesses, representing diverse interests and life experiences. We upload all 50,000 consciousnesses into a digital supercomputer for processing by a human-management-advisor program written in Python."
Deleted User#0000: bruh
Deleted User#0000: how cool dude
Deleted User#0000: sorry to be talking about another companies work but I thought you guys would find this interesting since it is a server about gpt neo and what not
CRG#8707: Better in #the-faraday-cage-archive
ersatz#0001: no no, this is based
Louis#0144: It’s ok to talk about OAI here dw
Louis#0144: We’re friends with people at OAI
Louis#0144: A lot of us are atleast
cst#9766: "Another companies work" is an interesting phrase to use here. That's the second time I've seen this server references as a business in the last few days, rather than as "open source community" or something like that
cst#9766: total sidenote, just thought that was interesting.
mkualquiera#3484: we're a rainbow hat elite hacker group
alexandrost#2936: Hi!
alexandrost#2936: amazing project |
alexandrost#2936: I wanted to ask, what is currently the closest downloadable model (possibly on huggingface) to GPT-3 ?
Kharr#7888: https://huggingface.co/EleutherAI The biggest one posted there.
alexandrost#2936: thank you @Kharr - I am currently downloading the 1.3B one
Kharr#7888: 2.7B is better :morelayers:
EricHallahan#1051: You may also want to read our FAQ, which also answers that question and more.
https://eleuther.ai/faq
alexandrost#2936: @Kharr I was wondering about that - I saw in the stats that the 1.3B was outperforming the 2.7B one @ some tasks, but I guess for generation 2.7B is better, right?
alexandrost#2936: @EricHallahan thanks I will!
bmk#1476: neo 2.7B outperforms 1.3B on all tasks we tested it on
alexandrost#2936: also , can this model be used with "MASKed" queries?
alexandrost#2936: or can I only use it to predict the next tokens?
EricHallahan#1051: No, it is autoregressive.
alexandrost#2936: ok , thank you
EricHallahan#1051: Only next tokens
alexandrost#2936: When I run the model on my machine it's taking a while to generate a result. I'm wondering how could I make it faster? is it probably a CPU or a GPU bottleneck? (I realized that while generating the output there is no GPU usage)
EricHallahan#1051: What kind of GPU do you have?
alexandrost#2936: I have n RTX 2060, i7 and 64GB of RAM
alexandrost#2936: (it's a laptop I must say)
alexandrost#2936: but I am wondering whether I am doing something terribly stupid
Louis#0144: Gpu bottle neck |
Louis#0144: That’s probably under spec
EricHallahan#1051: That is pretty tight.
Louis#0144: 2060 is 8gb?
EricHallahan#1051: 6
Louis#0144: I doubt it will fit
Louis#0144: Oh jeez
alexandrost#2936: 6GB
Louis#0144: lol
Louis#0144: Yeah
Louis#0144: I doubt ur using the GPU at all
alexandrost#2936: @Louis the weird thing is that I see no GPU utilization whatsoever
Louis#0144: Exactly
EricHallahan#1051: It may work with FP16, but quality will suffer.
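A rough sketch of what I mean (untested here; the `.half()` cast is the experimental part, and quality may degrade):
```python
# Rough sketch: load the 1.3B model in half precision and move it to the GPU.
# Model name is the one on the HF hub; FP16 may hurt output quality as noted above.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B").half().to("cuda")

inputs = tokenizer("The meaning of life is", return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_length=50, do_sample=True)
print(tokenizer.decode(output_ids[0]))
```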
alexandrost#2936: ah so the model doesn't fit in memory?
Louis#0144: lol
EricHallahan#1051: Are you using the text generation pipeline
alexandrost#2936: yes
EricHallahan#1051: Add `device=0` to the parameters.
EricHallahan#1051: Then try it again.
EricHallahan#1051: It most likely will OOM, but it is worth a shot. |
alexandrost#2936: @Eric Anderson as in: pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B', device=0) ?
alexandrost#2936: or as an environment variable
alexandrost#2936: ah ok
alexandrost#2936: I knew I should have bought an rtx 2070 laptop 😩
Kharr#7888: You're better off using Google Colab or cloud compute for these giant models.
EricHallahan#1051: ^
alexandrost#2936: @Kharr yeah, eventually I'll have to
Kharr#7888: Free tier colab gets you a GPU with 12-16 GB VRAM
alexandrost#2936: thanks!
Dromarion#3383: I have an RTX 2060 8gb and yeah I can only run the 1.3B locally.
alexandrost#2936: ooo , that's sweet
EricHallahan#1051: And Pro is just the cheapest way of getting compute period.
Kharr#7888: Unless you can find/afford RTX 3090s, Colab is just outright better for small scale work.
alexandrost#2936: ah I don't think I could find an 3090 anywhere in the whole galaxy right now
EricHallahan#1051: No one can.
alexandrost#2936: if I want to fine-tune the model. e.g. use a specific style, or have higher probability of producing a set of desired words, what would be the way to do it?
EricHallahan#1051: I think there are a few guides for that out there.
Dromarion#3383: Yeah feels bad not being able to do 2.7B. I'd be looking for a new GPU if it weren't for the absolute state of the market right now, another crash can't come fast enough.
EricHallahan#1051: Too bad we can’t cast to binary16 without the model becoming unstable.
EricHallahan#1051: All we should need to do is continue training in that precision. |
EricHallahan#1051: And hopefully it would work out.
alexandrost#2936: I have been using the 1.3B so far and the results are amazing
guac#4716: i had a bot running for the last 2 weeks to scoop a 3090 on amazon and i didn't get a single hit at my price point (max $1500)
Dromarion#3383: I did a side by side comparison between GPT Neo and GPT2XL in GodAI yesterday and even though Neo has repetition, it does overall better outputs I think just by the virtue of its training data. I've lost count of how many times GPT2XL gave an output that was unusable because it was just the bottom matter of a blog post or something saying "Thank you for reading my blog!" I haven't gotten anything like that from Neo so far.
alexandrost#2936: yeah, seems very readable and cohesive
alexandrost#2936: @Dromarion what was the longest max_length that you used?
Dromarion#3383: Well GodAI specifically had a hard cap on length of 100 so that's largest we could do there.
Louis#0144: Damn
alexandrost#2936: has any of you tried using GPT Neo for tasks other than generation? e.g. classification / summarization etc?
EricHallahan#1051: Not really.
alexandrost#2936: I guess if you want to do separate tasks you should use a specifically trained model for those, right?
EricHallahan#1051: I would think they would be better, but I haven't tried.
alexandrost#2936: unlike GPT-3 which you can prime by giving examples
EricHallahan#1051: You should be able to do that too.
EricHallahan#1051: Just not at that level of comprehension.
alexandrost#2936: yeah it makes sense
Louis#0144: yo
Louis#0144: is the form for using JAX with TPUs real
Louis#0144: LMAO
Louis#0144: Ben sent it to me yesterday |
Louis#0144: @guac said he did it before
Louis#0144: but I cant find any page that links to it *anywhere*
Louis#0144: nor can I find documentation on it
Louis#0144: https://docs.google.com/forms/d/14VhYvUewMWyNxGaZPEZE04frH5kEOzSs7d0pCVQYemU/viewform?edit_requested=true
Louis#0144: This
Louis#0144: is this real?
guac#4716: yeah they're super generous lol
Louis#0144: no one responded to me
Louis#0144: so idk
Louis#0144: im confused
Louis#0144: a few people i asked said they got responses within an hour or two
Louis#0144: but I cant find that page linked to anywhere
Louis#0144: and the jax github page has working colab examples using TPUs
ersatz#0001: is that even being generous at this point
Louis#0144: that you dont need clearance for
Louis#0144: are you saying its malice? Im confused
guac#4716: wait why don't you just apply for trc
Louis#0144: I DID THAT
Louis#0144: WHAT IS THAT FORM
Louis#0144: LIKE |
Louis#0144: DO I NEED THIS AND TRC
guac#4716: bruh https://sites.research.google/trc/
Louis#0144: I DID THAT
Louis#0144: I got approved for some thiccc instances
guac#4716: oh i'm not sure. i think ben has jax alpha access so that's diff
ersatz#0001: google has almost unlimited money
cfoster0#4356: IIRC TRC by itself just gets you the instances, but to get full use out of JAX on them you might need the alpha
cfoster0#4356: That's just my outside understanding though
gwern#1782: this confirms what I was thinking about a lot of GPT-3 output being kinda garbage because the original training data was badly cleaned/processed
gwern#1782: jax alpha is very different from tfrc. I always get responses from tfrcs in hours to ~1 day. I signed up for the jax alpha like... half a year ago and have never heard back
Louis#0144: shoot
Louis#0144: so what do I do for jax
Louis#0144: do i need it
Louis#0144: like the alpha
guac#4716: just use the v2 like a regular peasant
gwern#1782: the jax alpha in theory unlocks efficient pytorch-based tpu pods
Louis#0144: 😦
Louis#0144: oh i dont need pytorch
Louis#0144: i just want v3s
gwern#1782: I don't know if anyone has actually done that mostly because I haven't heard much at all from pytorch users at all |
gwern#1782: if you want v3s, just use your tfrc quota and ask politely for more at the renewal, I guess
Louis#0144: ok
juancamilog#9077: If you are an independent peasant, what do you put in organization on the form? Super Secret AI project? Super Open AI for Humanity?
Louis#0144: for the jax alpha? or TRC
Louis#0144: ive heard they arent as lenient with independents
Louis#0144: but if you are, just write "Independent Researcher"
bmk#1476: louis can write EleutherAI affiliation :chad:
Louis#0144: dont quote me on that, i just heard it in passing in shawwn's discord
bmk#1476: or i guess gatech but that's no fun
Louis#0144: i dont know if it is correct
Louis#0144: I put EleutherAI
Louis#0144: LMAO
gwern#1782: they've been pretty lenient with everyone I've heard about, really
gwern#1782: I wouldn't worry about it, assuming you do *something* with it
guac#4716: i didn't even put an org for trc (put "N/A" for not available lol )
juancamilog#9077: I went with AI4LOL
𓅬 gabriel_syme 𓅬#3220: I even got TRC and don't really have an AI background. That said, I forgot to look for the email and it expired
alexyz#3459: how long does it take to expire?
Sid#2121: I think you get a month initially, but they're very lenient with renewal
𓅬 gabriel_syme 𓅬#3220: a month yea |
𓅬 gabriel_syme 𓅬#3220: I forgot I joined with my 2ndary academic email that I only managed to work a week ago lol
𓅬 gabriel_syme 𓅬#3220: but yeah I'll try renewal once I get around to learning jax (i.e. copy/pasting #vision code)
cfoster0#4356: 😮 does it expire even if you just sit on the initial email?
𓅬 gabriel_syme 𓅬#3220: I thought so, never checked!
𓅬 gabriel_syme 𓅬#3220: if it doesn't we're golden 🙂
alexyz#3459: it says it doesn't
alexyz#3459: in the email
alexyz#3459: at least in mine
cfoster0#4356: >>> Your exclusive, free 30-day trial will start after you send us your project number. We’ll provide more information to help you get started then. In the meantime, please check out the quick overview below.
Need some time to prepare? No problem - simply follow the instructions above whenever you are ready to begin using your Cloud TPUs. We’ll send you a confirmation email once we’ve received your project number.
𓅬 gabriel_syme 𓅬#3220: I figured a no-show for months would cancel that, but maybe it doesn't
alexandrost#2936: You guys know if there is a way to avoid having the input as part of the output when using the GPT-Neo model for generating text?
cfoster0#4356: Worth a try :hap:
alexandrost#2936: thing is, I want to use a large input, say 500 tokens, but I only want to use a small output, about 50 tokens
𓅬 gabriel_syme 𓅬#3220: yep, will be on it once the jax vision models are up and running
EricHallahan#1051: I believe there is a parameter that toggles this behavior, I suggest reading the Transformers documentation.
alexandrost#2936: thanks Eric, yes I couldn't find any, but I just found a workaround
alexandrost#2936: I could use my desired length output , say 50 tokens + the length of the input as max_length, and problem solved
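something like this, roughly (a sketch using the plain model/tokenizer API instead of the pipeline):
```python
# Sketch of the workaround: set max_length to prompt length + 50, then slice the
# prompt tokens off the front of the output before decoding.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

prompt = "a long ~500 token context goes here ..."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

output_ids = model.generate(input_ids, max_length=input_ids.shape[-1] + 50, do_sample=True)
new_tokens = output_ids[0, input_ids.shape[-1]:]   # drop the echoed prompt
print(tokenizer.decode(new_tokens))
```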
Louis#0144: yo |
Louis#0144: why would you use remat in jax
Louis#0144: like
Louis#0144: what purpose does it serve
Louis#0144: im trying to make my way through Ben's code and he throws remat onto the weirdest utility functions lol
guac#4716: i feel like i need a master's degree in distributed computing to understand ben's codebase kek
Louis#0144: i mean
Louis#0144: i did 3 years in a super computing lab
Louis#0144: im not really struggling with his code
Louis#0144: its pretty standard conventions
guac#4716: (i def need to familiarize myself with pmap xmap family)
Louis#0144: he remats random things tho
Louis#0144: like ???
Louis#0144: like remat is for passing parameters/states
Louis#0144: but he remats a utility function
Louis#0144: lmao
guac#4716: he might be trying to avoid haiku issue see the known issue alert: https://dm-haiku.readthedocs.io/en/latest/ 🤷♂️
Louis#0144: ooo
Louis#0144: ok
kindiana#1016: I think this is the only place i use remat? https://github.com/kingoflolz/mesh-transformer-jax/blob/eeb41c0df9a6e9d0ca717172b51f0d3192758050/mesh_transformer/transformer_shard.py#L51-L56
kindiana#1016: its gradient checkpointing |
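for anyone curious, a minimal illustration of what that does in plain jax (not the actual mesh-transformer-jax code; `jax.checkpoint` is the public alias for `jax.remat`):
```python
# gradient checkpointing: activations inside the wrapped function are recomputed
# during the backward pass instead of being stored, trading compute for memory
import jax
import jax.numpy as jnp

def block(x, w):
    # stand-in for a transformer block's forward pass
    return jnp.tanh(x @ w)

checkpointed_block = jax.checkpoint(block)  # jax.checkpoint == jax.remat

def loss(x, w):
    return jnp.sum(checkpointed_block(x, w) ** 2)

x = jnp.ones((4, 8))
w = jnp.ones((8, 8))
grads = jax.grad(loss, argnums=1)(x, w)  # w's gradient, with the block recomputed on the backward pass
```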
Louis#0144: OHHHH
Louis#0144: I see
Louis#0144: Anyway I think i almost have it running (not my code, your code)
Louis#0144: How do I get the hiddens for a particular shard
Louis#0144: Particularly the last one
kindiana#1016: the hiddens are fully replicated between shards
Louis#0144: Oh ok
UnsupervisedLearner#4148: Self attention was floating around for awhile before reaching a sort of peak in the transformer architecture.
If you squint, it's sort of like projecting the tokens into both the weight matrix (sm(QK^T)) and the input that feeds into it. A layer of an mlp that is projected from learned weights. Which is actually a pretty old idea, just wasn't working STATE OF THE ART until 2017
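In code, the single-head version looks roughly like this (names/shapes are mine, just an illustration):
```python
# minimal single-head self-attention: the tokens produce the data-dependent
# "weight matrix" softmax(QK^T) that is then applied to a projection of the same tokens
import torch
import torch.nn.functional as F

def self_attention(x, Wq, Wk, Wv):
    # x: (seq, d_model); Wq/Wk/Wv: (d_model, d_head)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / k.shape[-1] ** 0.5   # (seq, seq)
    weights = F.softmax(scores, dim=-1)     # the input-dependent "weights"
    return weights @ v                      # (seq, d_head)
```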
UnsupervisedLearner#4148: Gonna have more free time in the coming weeks. What needs code plumbing around here?
Daj#7482: If anyone is at ICLR today, Eleuther and friends are hosting a social on open collaboration!
https://mlcollective.org/iclr-2021-open-collab-social/#schedule
zphang#7252: lol upload your slides connor
Daj#7482: I just finished that lol
Daj#7482: It's just one very confusing slide about abstractions and interpretability lol
ethan caballero#6044: What's the y-axis of figure in your slide?
Daj#7482: (to be clear: It's a fake plot showing the vague kinds of results I'd hope to see) Y-Axis is "performance" (of whatever kind)
bmk#1476: why not just use https://cdn.discordapp.com/attachments/729741769738158194/839910321523392552/unknown.png
asparagui#6391: the jax alpha is a subset of tfrc in general |
Daj#7482: Starting in 5 minutes
zphang#7252: that spelling-error disrepect https://cdn.discordapp.com/attachments/729741769738158194/839939397319131206/unknown.png
Daj#7482: We're special
bmk#1476: eleuther represent!
nostalgebraist#3542: update on the topic of "logit lens" stuff for gpt-neo vs. gpt2:
• i've started to look at this stuff with gpt-neo, don't have a nice set of plots yet but i've gotten basic plotting to work
• like @stephen i am using jalammar's ecco library, since my original code was in tf (and was ... bad)
• TIL: ecco does not properly deal with layer norm here!! and this *might* explain why gpt-neo and gpt2 look different when using ecco
nostalgebraist#3542: (i really need to start contributing to ecco again)
bmk#1476: the tribalism feeling is strong
zphang#7252: Discord becoming the go-to platform for coordinating ml research
zphang#7252: oh no other groups have promotional videos
bmk#1476: i love how eleuther is being listed alongside these other super legit organizations
bmk#1476: we need a tribalism emote lol
bmk#1476: how's this https://cdn.discordapp.com/attachments/729741769738158194/839941793159184394/unknown.png
bmk#1476: :tribalism: eleuther stronk :libre:
zphang#7252: 👍 tho crop can be improved |
bmk#1476: :tribalism: :tribalism: :tribalism:
bmk#1476: https://joss.theoj.org/papers/10.21105/joss.01524 the FOR.ai people have a JOSS paper
nostalgebraist#3542: specifics below
before they multiply by the embedding matrix W to get output logits, GPTs apply a final layer norm "ln_f" first. like all the LNs in the model, it has learned weights/bias
let h_N be the hidden state after some interior layer N. if you want to ask "what is the predicted token after layer N?", you actually have 3 choices (rough pytorch sketch of all three below):
1. W^T * ln_f(h_N) *(arguably makes the most sense)*
2. W^T * base_ln(h_N) *(where base_ln is a layernorm with no pointwise skew/offset. what i did originally)*
3. W^T * h_N *(what ecco does. not a good idea)*
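the sketch (shapes/names are mine, not HF's; assumes ln_f is the model's final LayerNorm module and W is the (vocab, d_model) output embedding matrix):
```python
# the three ways to read out a "logit lens" prediction from an interior hidden state h_N
import torch
import torch.nn.functional as F

def logit_lens(h_N, ln_f, W, mode=1):
    # h_N: (seq, d_model) hidden state after layer N
    # ln_f: the model's final nn.LayerNorm (with learned weight/bias)
    # W:   (vocab, d_model) output embedding matrix (tied with the input embeddings in GPT-2/Neo)
    if mode == 1:
        x = ln_f(h_N)                           # option 1: full ln_f, closest to what the model does
    elif mode == 2:
        x = F.layer_norm(h_N, h_N.shape[-1:])   # option 2: normalize, but ignore ln_f's weight/bias
    else:
        x = h_N                                 # option 3: no normalization at all (what ecco does)
    return x @ W.T                              # logits over the vocabulary
```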
Daj#7482: not my best intro but the "irresponsible compute" joke landed lol
Daj#7482: Very interested in this 🤔 cc @janus
bmk#1476: dont worry the tribalism lets me gloss over the problems
bmk#1476: eleuther stronk :libre:
zphang#7252: I thought the intro was pretty gud
AlexSushiDog#6033: the joke made me join immediately, good job
EricHallahan#1051: Welcome! |
bmk#1476: come for the compute, stay for the dank memes!
nostalgebraist#3542: the actual output logits are always W^T * ln_f(something). so
• if you multiply W^T by something that isn't even *normalized* (option 3), you're not really reproducing what the model does for the outputs.
like, if you did that with the final hidden state (i.e. you skip ln_f) there's no reason to think that would be close to the actual output, produced *with* ln_f
• if you normalize but ignore the weights in ln_f (#2), you might be close to how the actual output is produced, or very far -- it all depends on how far the weights/bias of ln_f are from ones/zeros, which is model-dependent
janus#0150: Hmm... I wonder if there is a way to approximate a pseudo ln_f_N to compute that layers ln_f_N(h_N)
nostalgebraist#3542: finally, note that the huggingface library (used inside ecco) does an annoying thing where its "hidden states" are
• not layer normed, for all except the last one,
• layer normed with ln_f, as the last one
in the case of ecco, this means it *actually* plots
• option #1, W^T * ln_f(h_N) for the last layer,
• option #3, W^T * h_N for all other layers
nostalgebraist#3542: in my current test case (my finetuned 2.7B), i see *way* more "logit lens" like behavior with option #1 than with the others.
EricHallahan#1051: Also, I looked at your code for loading GPT-Neo under low memory. You had to modify HF to do it? |
nostalgebraist#3542: i'll attach some pictures of this. note that these don't quite look like ecco's normal rank plots, i hacked them a bit for my own readability, but the point should be clear
nostalgebraist#3542: ecco behavior, no LN for layers except the last one. looks bad https://cdn.discordapp.com/attachments/729741769738158194/839945543992082432/rank_no_ln.png
nostalgebraist#3542: same data, normed with ln_f everywhere. very different story!! https://cdn.discordapp.com/attachments/729741769738158194/839945672634531842/rank_ln_f.png
nostalgebraist#3542: yeah, i did modify HF in a few ways.
• i replaced all the Linears with LazyLinears as one part of a larger trick to avoid cpu OOM during load.
torch Lazy stuff still does init on cpu eventually, but this lets me *construct* the model without initializing *everything* immediately on cpu, so i can control when init happens per param
• i added some code to pad gpt-neo inputs to the nearest multiple of 256 if they're > 256 to begin with.
this is b/c the local attns create giant sparse matrices for lengths that are not multiples of 256.
i guess if you have OA's magic sparse CUDA kernels, this is performant, but i was just getting them as dense matrices that used much *more* gpu memory than the matrices in full global attn (lol)
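the padding trick looks roughly like this (sketch only; left-padding and using the eos id 50256 as the pad id are my assumptions):
```python
# pad token ids up to the next multiple of 256 so the HF local-attention code
# doesn't blow up memory on odd sequence lengths
import torch

def pad_to_multiple(input_ids, multiple=256, pad_token_id=50256):
    seq_len = input_ids.shape[-1]
    if seq_len <= multiple or seq_len % multiple == 0:
        return input_ids, torch.ones_like(input_ids)
    pad_len = multiple - (seq_len % multiple)
    pad = torch.full((input_ids.shape[0], pad_len), pad_token_id, dtype=input_ids.dtype)
    attention_mask = torch.cat([torch.zeros_like(pad), torch.ones_like(input_ids)], dim=-1)
    # left-pad so the "real" tokens stay at the end of the context
    return torch.cat([pad, input_ids], dim=-1), attention_mask
```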
nostalgebraist#3542: at this point i really hate the HF library and want to stop using it when i have time to make that change
EricHallahan#1051: That seems to be a common sentiment.
nostalgebraist#3542: i spent like an hour trying to figure out how to pass an arg through ~3 layers of wrappers, such that it would be passed with the name "past_key_values" to the function i *actually fucking wanted to call*
nostalgebraist#3542: now that i think about it, the way i actually got that to work was modifying HF, also
bmk#1476: what if we make a library, "Eleuther Transformers", as a competitor to HF Transformers lol
nz#9710: are you sure you want to handle all the customer support then |
bmk#1476: good point
bmk#1476: maybe we don't want to do this
guac#4716: i wonder what popular (non-js) api has the highest level of abstraction from user-facing to binary lol
bmk#1476: keras
StellaAthena#3530: @guac AutoCAD?
StellaAthena#3530: Microsoft Word 🤣
nostalgebraist#3542: i rewrote the fn here to use "user_past" as past if given it https://github.com/nostalgebraist/nostalgebraist-autoresponder/blob/v11-work/stable_library_code/transformers/gpt_neo/modeling_gpt_neo.py#L940
so i could pass in a past (as "user_past") for the first generation step, while still letting the model pass itself its own pasts subsequently, without my initial past overriding/conflicting with them
bmk#1476: also i love how a paper coming out "end of last month" is only "relatively recent"
finetune#0907: yea, i did the same for the local attentions
bmk#1476: hopefully future neo models will have less local attention lol
zphang#7252: "older paper from january"
finetune#0907: the fact that it made vram blow up especially for prime sequence lengths is kinda fun tho
bmk#1476: i kind of want to just retrain the models and doing it all with full global
zphang#7252: so if it's padded to a multiple of 256 it actually takes less memory?
zphang#7252: why tho
EricHallahan#1051: Why?
EricHallahan#1051: ¯\_(ツ)_/¯
bmk#1476: i mean werent we literally just talking about how much trouble the local attention is making |
finetune#0907: green's unpadded, purple is padded to next 256
https://user-images.githubusercontent.com/82650881/115746923-0ee0cc00-a395-11eb-9bcd-c289a38c77b5.png
zphang#7252: like what's the code/operation that causes this to happen
nostalgebraist#3542: because if it's not a mult of 256, it makes a whole bunch of "blocks" for what different "windows" can see that are mostly just 0
nostalgebraist#3542: and (block length) * (num blocks) ends up being bigger than just (full sequence length)
bmk#1476: who the heck implemented this absolute trainwreck of a local attn impl tho
nostalgebraist#3542: so the matrices are biggger
EricHallahan#1051: HF
zphang#7252: aha
nostalgebraist#3542: yeah i was wondering that too lol
zphang#7252: do you have a link to the exact line(s) where that happens, I want to see it for myself lol
bmk#1476: what i would have done is literally just masked out the attention triangle and have it be exactly as bad as global
bmk#1476: that doesnt even break backwards compat with the existing gpt2model
finetune#0907: i kinda looked into converting the model to onnx and since there's some length dependent python loops going on, it definitely would make that more difficult
bmk#1476: but no they had to make a separate model class
bmk#1476: and then mess up local attention
nostalgebraist#3542: yeah, this fucking thing https://github.com/huggingface/transformers/blob/v4.5.1/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L133
bmk#1476: and then make everything more complicated than it needed to be
bmk#1476: i regret not just spending about half an hour writing up a PR way back when
bmk#1476: /rant |
nostalgebraist#3542: "let's have like 4 separate boilerplate classes per model, but barely use any inheritance, just re-implement everything every time with the potential for new bugs"
StellaAthena#3530: I tried and they were not very receptive leo
nostalgebraist#3542: the same hparams have different names in their gpt2 config json vs. their gpt-neo config json
StellaAthena#3530: “They” largely meaning the intern who did it
zphang#7252: it's effective for the "grab OSS mindshare" strategy I guess
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/839951262916280340/unknown.png
bmk#1476: [internal screaming]
EricHallahan#1051: Why? It would have made their life *easier*.
StellaAthena#3530: “They” largely means the intern who did most of the coding
bmk#1476: it would have saved them about 10 hours of work, made the implementation both faster and cleaner, and made everyone else's lives easier too
zphang#7252: that sounds *too* pareto optimal to me
nostalgebraist#3542: check out the names in https://github.com/huggingface/transformers/blob/v4.5.1/src/transformers/models/gpt_neo/configuration_gpt_neo.py#L95 vs https://github.com/huggingface/transformers/blob/v4.5.1/src/transformers/models/gpt2/configuration_gpt2.py#L127
fucking incredible
bmk#1476: oh god it's worse than i thought
bmk#1476: why???
bmk#1476: i thought n_ctx was the only one
Daj#7482: omg
bmk#1476: i didn't realize they changed literally everything
EricHallahan#1051: I don't know if we even *should* port the rest of our models to HF with how it is. |
Daj#7482: Now I feel bad about sending all the newbies to the HF implementation :grimberk:
EricHallahan#1051: It effectively needs to be entirely rewritten.
nostalgebraist#3542: meanwhile, here's what i had to do to get it to use my `LogitsProcessor` subclass with `generate` : https://github.com/nostalgebraist/nostalgebraist-autoresponder/blob/v11-work/experimental/generator_model_torch.py#L66
bmk#1476: oh god `generate` is a mess
nostalgebraist#3542: my god yes
EricHallahan#1051: This is obviously amateur hour.
nostalgebraist#3542: `generate` was like 3 hours of my yesterday
bmk#1476: so, are we building a new transformer library?
nostalgebraist#3542: i need to go do stuff but if anyone wants to make "HF transformers, but not bad" i'd be down
zphang#7252: that's just https://github.com/lucidrains
bmk#1476: in all seriousness though there is a serious niche here to be filled
StellaAthena#3530: Watch them use a totally different codebase for NeoX
finetune#0907: rope? gotta rewrite everything
zphang#7252: I think HF checks off enough "good enough" boxes that it'll be hard to unseat them for a broad purpose transformers library
Daj#7482: There's obviously no benefit to us to even attempt something as massive and labyrinthine as reimplementing transformers
Daj#7482: But we could have more robust shareable versions of our own inference code
bmk#1476: I'm personally too busy to spearhead it but if someone wants to work on such a library i give it my full blessing, whatever that's worth, and I'll pitch in where i can
Daj#7482: So just make NeoX inference code good
Daj#7482: and installable by pip
Daj#7482: (or some small wrapper around it) |
EricHallahan#1051: At this point I think we should just hook into the Transformers API. I don't think it will ever be useful to port NeoX to HF. Too much performance loss.
zphang#7252: I think we can just vaguely follow the same interface
EricHallahan#1051: Just simulate it
EricHallahan#1051: I think I have suggested this before.
StellaAthena#3530: Heck, check this out @bmk: https://github.com/huggingface/transformers/pull/10848#discussion_r602605718
bmk#1476: wait what if we make a fork of transformers where we undo the gptneo trainwreck and implement it properly in the gpt2 class
zphang#7252: like from_pretrained, generate, etc
guac#4716: i think it's mainly that research code and high level user interface code are at such odds here...idk who'd get the abstractions right
bmk#1476: we can all it transformests
zphang#7252: there's no need to fork, just have the code for the models we support
EricHallahan#1051: I wouldn't consider forking.
mkualquiera#3484: transformberk
Noa Nabeshima#0290: How do you predict the number of parameters of GPT-n given the embedding dimension and number of layers? I think I misunderstand some aspect of the transformer blocks because I'm not correctly predicting the number of parameters
EricHallahan#1051: Well you don't know with just those, because you also have a variable sequence dimension.
EricHallahan#1051: But lets assume we use the maximum sequence length.
EricHallahan#1051: Technically the dimension of the query-key matrices is also undefined.
CRG#8707: 12 * depth *dim^2 ?
Noa Nabeshima#0290: ahh right
Noa Nabeshima#0290: How do you get 12?
EricHallahan#1051: Oh, you said GPT-n |
CRG#8707: Q, K, V, Wo (dim, dim) matrices and 2 (dim, 4dim) matrices in the FF
EricHallahan#1051: Then they should be defined.
Noa Nabeshima#0290: where did you read this?
EricHallahan#1051: The incredible retriever strikes again.
CRG#8707: It's from the original Attention is All You Need, I think. :thonk:
Noa Nabeshima#0290: Thank you!!
One5hot76#6815: Just wanted to say thanks to all who contribute before I go into lurk mode.
EricHallahan#1051: Welcome!
One5hot76#6815: thanks 🙂
EricHallahan#1051: The assumption is that the query-key matrices are the same width as the embedding dim
EricHallahan#1051: Which isn't a constraint of self-attention, but rather just something we do because it is easy to scale.
CRG#8707: Yeah, I think the jukebox models used dim/4 instead of dim. https://discord.com/channels/729741769192767510/795089627089862656/807298925206962237
Teemochu#8740: @finetune have you suggested a patch to hf?
EricHallahan#1051: Yes.
finetune#0907: have an issue open here
https://github.com/huggingface/transformers/issues/11320
finetune#0907: actually kind of looking if i can implement the masked global attention way bmk mentioned, but trying to figure out how the result should look from the hf code's kinda tough
EricHallahan#1051: HFs code needs to be ripped out and torn to shreds anyway.
stephen#2400: Wow, looks like this opened a big can of worms, fair play getting to the bottom of it. I'll point @Jay Alammar to this conversation and can update a PR for Ecco based on what you've found as it's a good way to get my head around how it all works, though it will probably be a while before it's ready - if it's even possible without changing HF. If you've changes done already and plan to contribute them let me know
Sid#2121: maybe slightly more accurate is the formula from the recent megatron paper |
Sid#2121: 12 * num_layers * hidden_size ** 2 * (1 + (13 / (12 * hidden_size)) + ((vocab_size + seq_len) / (12 * num_layers*hidden_size)))
StellaAthena#3530: Interesting thread about network security: https://twitter.com/rharang/status/1390295160806944769?s=20
nostalgebraist#3542: i don't think changing HF will be needed, although i might not implement it using HF's `output_hidden_states` thing but instead attach my own hooks. just so it's less tightly coupled to HF internals
nostalgebraist#3542: really the hardest part, for me, would be the UX/API issue of "does the default behavior change," "do we let the user switch between old/new behaviors," "if so what we we call the different options," etc
nostalgebraist#3542: also, to be honest, ecco has a lot of the same type of problems as HF. which is fine / not surprising as it's basically by one guy, without a large userbase like HF's, but means it's harder to extend
nostalgebraist#3542: oh actually i did a version of this months ago, as part of an overly ambitious and very frustrating interpretability project i did back then. it's in https://github.com/nostalgebraist/ecco/blob/spike-lens/src/ecco/lenses.py
finetune#0907: my problem is that i don't fully understand what local attention is supposed to actually do and figuring it out from the hf code's hard. i guess each token can attend to those that are up to window_size tokens before it, but what does the look back do? can't imagine that it just doubles the window size :thonk:
EricHallahan#1051: Local attention is masking out the attention matrix so that the model can only attend to tokens within a certain window of the sequence. It is the easiest (and arguably most effective) attention method with linear complexity, as the variable sequence length is replaced with a window of a "fixed" size.
nostalgebraist#3542: `look_back` is inscrutable to me too
finetune#0907: so it should be just a sliding window over the sequence?
finetune#0907: then maybe the look back stuff is only necessary for trying to implement it in a more efficient way? :thonk:
finetune#0907: that's actually true
finetune#0907: okay
finetune#0907: thank you
finetune#0907: i think i just recovered from hf induced brain damage
finetune#0907: so if i just add this after the causal_mask is generated in the global attention, it returns identical results to the local attention
causal_mask = (causal_mask ^ torch.tril(causal_mask, -256))
finetune#0907: well almost, but that's probably just floats
nostalgebraist#3542: to be fair (?), i think HF is simply trying to imitate what mesh-tensorflow does since the model came from mtf, and i think they do "successfully" re-implement what's in mtf
finetune#0907: true |
finetune#0907: but it's not very obvious what it's doing
nostalgebraist#3542: mtf does a similar block length calculation here https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/attention.py#L776
and then calls `mtf.left_halo_exchange` which IIUC is like `torch.Tensor.unfold`
finetune#0907: okay, so just masking the global attention like this is about as fast as the local attention with padding for me
EricHallahan#1051: They were trying too hard.
EricHallahan#1051: Way too hard.
finetune#0907: actually, it's faster on some runs
finetune#0907: probably the same overall
nostalgebraist#3542: is this correct though? should the mask be block diagonal or regular diagonal
EricHallahan#1051: Diagonal IIRC.
nostalgebraist#3542: i hope so, that's much less weird. i just don't understand the mtf *or* HF implementation of it, so i'm wary
EricHallahan#1051: I personally despise the block local implementations.
EricHallahan#1051: They make zero sense to me.
finetune#0907: yeah
Cheese is good#5316: Hiii so uh progress check how's gpt neo doing? i dont know much abt these things but im interested
nostalgebraist#3542: the mtf docstring explicitly claims it's regular diagonal, which is (somewhat) reassuring
```If fully_autoregressive, then query position p can only see memory positions in the range (p - radius, p]```
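as a tiny sanity check, the XOR-with-tril trick from above produces exactly that kind of band (toy sizes here, just an illustration):
```python
# banded causal mask: keep only the last `window` positions (including the current one)
import torch

seq_len, window = 8, 3  # tiny sizes so the mask is easy to print (GPT-Neo uses window 256)
ones = torch.ones(seq_len, seq_len)
causal_mask = torch.tril(ones).bool()                         # usual lower-triangular causal mask
local_mask = causal_mask ^ torch.tril(ones, -window).bool()   # drop anything > window-1 steps back
print(local_mask.int())
# each row p is 1 only for columns in (p - window, p], i.e. a sliding window
```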
EricHallahan#1051: Where do you think we are right now? It helps me to focus on things that you have not heard before.
Cheese is good#5316: uh |
Cheese is good#5316: well the pile is done i think
Cheese is good#5316: you're doing stuff on thingy
Cheese is good#5316: uh
Cheese is good#5316: whatchamacallit
Cheese is good#5316: and also gpu
Cheese is good#5316: training
Cheese is good#5316: thing
Cheese is good#5316: and uh
Cheese is good#5316: thats the extent of my knowledge
EricHallahan#1051: Good enough.
EricHallahan#1051: :)
Cheese is good#5316: welp
Cheese is good#5316: thats nice
Cheese is good#5316: oh yeah
Cheese is good#5316: tpu
Cheese is good#5316: thats what i meant
Cheese is good#5316: tpu
Cheese is good#5316: .
Cheese is good#5316: _ _
EricHallahan#1051: GPT-NeoX is waiting on more hardware, but we have been refactoring it heavily to make it better.
Cheese is good#5316: yes mhm mhm i understand 100%
Cheese is good#5316: so will it be able to run locally?
Cheese is good#5316: with distillation n stuff
EricHallahan#1051: You may want to read the FAQ:
https://eleuther.ai/faq
Cheese is good#5316: that is a good point
EricHallahan#1051: It has answers to a lot of your questions.
Cheese is good#5316: thx
EricHallahan#1051: If you have more questions we are happy to answer though.
Cheese is good#5316: thx
Cheese is good#5316: Oh cool
Cheese is good#5316: you released something
Cheese is good#5316: already
EricHallahan#1051: Over a month ago we released 1.3B and 2.7B.
Cheese is good#5316: yeah, i saw on the faq
Cheese is good#5316: thats cool
EricHallahan#1051: 6B is in the pipeline.
Cheese is good#5316: cool!
Teemochu#8740: Do you happen to know if this 6B will fit in a 3090 for finetuning?
EricHallahan#1051: 😬 |
Teemochu#8740: Asking for a ~~pony~~ friend.
EricHallahan#1051: I don't think so, but I actually don't know the specs off the top of my head.
Teemochu#8740: I can't tell if that means "it's tight but I don't know" or "I know what you'll do with it and this is my response"
EricHallahan#1051: It simply means "I have no idea until I check"
EricHallahan#1051: It is going to be tight with SGD at binary16, inference should be tight at binary32 doing the thinking thing.
kindiana#1016: Unlikely to fit unless someone ports to deepspeed
EricHallahan#1051: I doubt it
EricHallahan#1051: I don't think it can slot in to a RTX 3090 without a significant amount of effort.
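Rough numbers, treating 6B as exactly 6e9 parameters (back-of-the-envelope only, ignoring activations): fp16 weights plus fp16 gradients already come to ~22 GiB against the 3090's 24 GiB, and Adam states push it far past that.
```python
# back-of-the-envelope memory math, assumed figures only (not measured)
params = 6e9
gib = 1024 ** 3

print(f"fp16 weights:              {params * 2 / gib:.1f} GiB")   # ~11.2 GiB
print(f"fp32 weights:              {params * 4 / gib:.1f} GiB")   # ~22.4 GiB
print(f"fp16 weights + fp16 grads: {params * 4 / gib:.1f} GiB")
# Adam adds fp32 master weights plus two fp32 moments on top of fp16 weights/grads
print(f"fp16 + Adam states:        {params * (2 + 2 + 4 + 4 + 4) / gib:.1f} GiB")  # ~89 GiB
```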
finetune#0907: sequence length 2029, hf local attention with padding and global attention with more masking have the same peak cuda allocation :berk:
finetune#0907: there's like literally no point to all that block stuff
finetune#0907: identical results over all sequence lengths too
AI_WAIFU#2844: Front the money for an old quatro.
Teemochu#8740: oh god that... seriously HF at least needs to put your patch to master
finetune#0907: maybe i should make a new one
finetune#0907: that takes out all the local attention stuff and just sets a flag
finetune#0907: to do the masking
finetune#0907: on the global attention
nostalgebraist#3542: i should do this thing in my bot too, there's probably a mild speedup from not having to pad to 256 in the pasts calculation
nostalgebraist#3542: (i copied the code from their repo into mine to make other changes, so it won't track updates to theirs)
finetune#0907: hmm, it might allocate more vram up front somehow, not sure yet |
finetune#0907: will test more tomorrow
𓅬 gabriel_syme 𓅬#3220: is anyone at all excited about github codespaces?
chirp#4545: i am! once it's ready i'll be really happy if my workplace can switch to it
𓅬 gabriel_syme 𓅬#3220: nice, I've been thinking a lot about how amazing it would be. Especially with me working remotely (and really by myself on ideas) for years
𓅬 gabriel_syme 𓅬#3220: let's see if I can convince the next office I work with to do smth like that
guac#4716: (i feel like i applied for early access a year ago loll)
𓅬 gabriel_syme 𓅬#3220: 🙂 I did a second ago lol, I think I told myself not to the other times because I won't really be using it collaboratively right now
Homo Ludens#5962: What do you think? https://cdn.discordapp.com/attachments/729741769738158194/840121159093583902/SPOILER_SPOILER_BoKzG-xIgAAJE_v.jpg
𓅬 gabriel_syme 𓅬#3220: I made a big point about this in a recent presentation I gave. I think Picasso would have loved what is going on today with AI, and he'd probably quickly realize that CLIP (for example) would definitely provide him with many interesting and open questions
Deleted User#0000: btw, has anyone used a BERT-like, full-attention encoder model, for autoregressive LM? Like you feed the context as input, and train some of the outputs to predict the next tokens.
kindiana#1016: y tho
Deleted User#0000: Because you can attend with full attention over the context
kindiana#1016: theres a few methods that aim to unify mlm and ar
Deleted User#0000: it'd be more expressive
kindiana#1016: also be really expensive, lol
Deleted User#0000: yeah, because you only train on fewer outputs per training example:P
Deleted User#0000: but I wonder what the tradeoffs are
Deleted User#0000: My current architecture is like that. It's based on the AIST++ paper, which found it to work better than the standard causal method
Deleted User#0000: (tho they could maybe do it coz their datasets are much smaller than used for LM)
kindiana#1016: https://arxiv.org/pdf/2103.10360.pdf |
Deleted User#0000: However, this could be mitigated with a T5-like architecture
Deleted User#0000: as you could train to predict the next N tokens, given the previous N, and you do full-attention encoding on the length-N context, and do causal prediciton on the next N
Deleted User#0000: so you get a similar number of output/predictions per training sample
Deleted User#0000: hopefully being similarly efficient (in terms of scaling) as standard causal LM
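roughly this kind of mask, just to illustrate the shape i mean (toy sizes, not from any paper):
```python
# "prefix LM" style attention mask: the first prefix_len positions attend
# bidirectionally to each other, the remaining positions attend causally
import torch

def prefix_lm_mask(prefix_len, total_len):
    mask = torch.tril(torch.ones(total_len, total_len)).bool()  # causal base
    mask[:prefix_len, :prefix_len] = True                       # full attention inside the prefix
    return mask

print(prefix_lm_mask(3, 6).int())
```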
kindiana#1016: look at that paper lol
Deleted User#0000: ok
kindiana#1016: those types of full attention is kinda difficult for autoregressive generation though
kindiana#1016: as you can't cache activations
kindiana#1016: so you are doing params * seq MAC per token generated rather than params
Deleted User#0000: ah lol, yeap that seems like what im talking about exactly
Deleted User#0000: well basically
Deleted User#0000: is that such a big problem?
kindiana#1016: this is pretty tough
Deleted User#0000: whats MAC?
kindiana#1016: multiply accumulate
kindiana#1016: 2 flops
Deleted User#0000: i guess it is for huge LMs
Deleted User#0000: but hm
kindiana#1016: well for anything lol
Deleted User#0000: maybe u can make them a bit less huge if they are more expressive/parameter efficient |
kindiana#1016: multiplying your autoregressive generation cost by ctx is really rough
Deleted User#0000: depends what you prioritize
zphang#7252: FYI T5 also did something similar to that paper in their big blob of experiments, I think they called it prefix LM
Deleted User#0000: ahm ok
Deleted User#0000: ill check it out too
kindiana#1016: yeah, I like glm a bit better as you don't have a set of encoder and decoder parms
Deleted User#0000: yeah i like glm approach more too
Deleted User#0000: the aist++ one is even simpler (but less scalable train-wise i think)
Deleted User#0000: it does do better than GPT in LM
kindiana#1016: only by cheating :berk:
Deleted User#0000: how?
kindiana#1016: they need more parameters to beat gpt under both the unidirectional and bidirectional setting
kindiana#1016: ignore the 410 and 515M rows for a fair fight https://cdn.discordapp.com/attachments/729741769738158194/840170243137077328/unknown.png
Deleted User#0000: how many parameters does GPTLarge have?
kindiana#1016: 330
Deleted User#0000: hm i see
Deleted User#0000: i wonder if they trained till converge in both
Deleted User#0000: like if its a training speed issue
Deleted User#0000: or a generalization issue
kindiana#1016: they trained their own gpt |
kindiana#1016: so I imagine its with the same hparams
Deleted User#0000: wait dont they mean this one? https://cdn.discordapp.com/attachments/729741769738158194/840171056609361970/unknown.png
Deleted User#0000: 760M ?
Deleted User#0000: i guess not
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/840171305604874260/unknown.png
kindiana#1016: hrmmmmm
Deleted User#0000: yeah was just reading
Deleted User#0000: very hmmm
kindiana#1016: well I assume they didn't make their baselines that strong
kindiana#1016: so its kinda concerning that they need more params to beat it :berk:
Deleted User#0000: still confused as gpt-large seems to refer to the 750M model
Deleted User#0000: here it says so too https://github.com/openai/gpt-2/issues/209
Deleted User#0000: " With the same amountof parameters, GLMLargewith multi-task pretraining per-forms worse than GPTLarge. This is expected since GLMLargealso optimizes the BERT-style objective."
Deleted User#0000: so thats not fully apples and oranges still
kindiana#1016: yeah
Deleted User#0000: what if u train GLMLarge only with LM objective
kindiana#1016: well that's just gpt?
kindiana#1016: lol
Deleted User#0000: no?
Deleted User#0000: they train GLMLarge on different tasks |
kindiana#1016: yeah
Deleted User#0000: not sure how tho lol
kindiana#1016: they have a generation and cloze objective
Deleted User#0000: right
kindiana#1016: split 50/50
Deleted User#0000: so what if they didnt have the cloze one maybe
kindiana#1016: both should help with lambada though
Deleted User#0000: or they finetuned with only the generation one
kindiana#1016: cuz that's just completing one word
Deleted User#0000: they should try this
Deleted User#0000: also not sure what they mean by "With the same amount of parameters, encoding the context with bidirectional attention can improve the performance of language modeling. This is the advantage of GLM over unidirectional GPT."
Deleted User#0000: sounds like the opposite of what they had just said
kindiana#1016: lol
kindiana#1016: it improves it
CRG#8707: Training left to right and right to left at the same time would be interesting.
kindiana#1016: but it still doesn't beat baseline
Deleted User#0000: sure but then finetune on the precise task you are comparing on
Deleted User#0000: actually how does the uni-directional GLM work?
Deleted User#0000: does that refer to causal masking or what?
kindiana#1016: yeah |
kindiana#1016: using it as a regular ar lm
kindiana#1016: its the same model
Deleted User#0000: lol so GLM-uni is just GPT then?
kindiana#1016: but different eval method
Deleted User#0000: ah
Deleted User#0000: so not also trained differently?
kindiana#1016: no like GLMlarge is one model
kindiana#1016: with 2 evaluation strategies
kindiana#1016: either you have unidirectional or bidirectional context
CRG#8707: Using [MASK] tokens :thonk: https://cdn.discordapp.com/attachments/729741769738158194/840174372220305428/c76cb066dcc5589248c47f1a4f4bde7a.png
Deleted User#0000: so im not sure i understand whats uni and whats bi
ersatz#0001: Lilith the succubus looked at me and [MASK] my [MASK] with her [MASK] and [MASK] me so hard I [MASK]
ersatz#0001: something like that?
ersatz#0001: at least on AI Dungeon
Deleted User#0000: I think, if im understanding it correctly, that uni/bi distinction is referring to the kind of attention mask they apply to the context?
Deleted User#0000: but if they do uni (which i guess is same as causal) attention to the context, isnt that just like GPT?
Deleted User#0000: Also I think they should try the idea i was discussing with Ben above, where you literally only predict one token, and then have to recompute your full-attention context for the next token, etc
Deleted User#0000: less run-time efficient
Deleted User#0000: but im curious about perfomance
Deleted User#0000: i was trying somthing like their architecture |
Deleted User#0000: and then switched to what im describing
Deleted User#0000: and its working better i think
CRG#8707: GPT directly transforms a token into the next token, but GLM uses [MASK] tokens instead, (You can't turn all your context into mask tokens)
Deleted User#0000: yeah their use of MASKS tokens is weird
Deleted User#0000: why not just *actually* mask them?
CRG#8707: It's probably to avoid distribution shift from the bert objective.
Deleted User#0000: right
Deleted User#0000: im curious about using their architecture without any BERT objective nonsense lul
Deleted User#0000: i want to use it directly for LM
Deleted User#0000: thats why i think their comparison isnt really fair
Deleted User#0000: i guess they did it becaus the point of their paper was to try to unify both tasks
Deleted User#0000: but i guess their result points, at least for me, to: you cant quite unify them
Deleted User#0000: as in, probably training separately for each task would still be best, for their architecture
Deleted User#0000: thats my guess anyway
Deleted User#0000: (also based on limited personal experience with the same kidns of models for a different task)
Deleted User#0000: also another motivation for this, is that doing LM the way Im saying, you can directly use Mixer-MLP without changing anything/masking
finetune#0907: made a pr for hf with the masked approach for local attention
https://github.com/huggingface/transformers/pull/11630
triggerhappygandi#0001: Can anyone help me find how to do sentiment analysis on particular words? Say we have a string `"I like star wars, but I think star trek is far beneath it in terms of the lore"`, how do you determine that you have a +ve sentiment on star wars but -ve sentiment on star trek
EricHallahan#1051: ¯\_(ツ)_/¯ |
triggerhappygandi#0001: anyone plis help Google is no help here
EricHallahan#1051: ¯\_(ツ)_/¯
Daj#7482: stop bullying shivanshu :berk:
triggerhappygandi#0001: I'll keep asking until some lurker who knows responds
inox#5400: prompt engineer to few shot the problem
inox#5400: lol I don't work in NLP I have no friggin idea
EricHallahan#1051: Me neither.
Kharr#7888: Sentiment analysis is typically done on the whole sentence. What you want to do is look at it within a sentence. Easiest way to do this is do the normal sentence level version using a sliding window of X # of words. Maybe you can separate sections using punctuation.
triggerhappygandi#0001: I guess this is called sentiment classification?
Kharr#7888: Yeah, you'd need to tune a model on some dataset like reviews/imdb or just use HF ```>>> from transformers import pipeline
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to show you the 🤗 Transformers library.')
[{'label': 'POSITIVE', 'score': 0.9997795224189758}]
```
triggerhappygandi#0001: But this just does it on the whole thing
Kharr#7888: See my comment above about the sliding window 😉
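Something like this, as a rough sketch (window/stride values are arbitrary, and it reuses the same default pipeline model):
```python
# sliding window idea: score overlapping chunks of words so you get a coarse
# "local" sentiment around each part of the sentence
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

text = "I like star wars, but I think star trek is far beneath it in terms of the lore"
words = text.split()
window, stride = 6, 3

for start in range(0, len(words), stride):
    chunk = " ".join(words[start:start + window])
    result = classifier(chunk)[0]
    print(f"{result['label']:>8} {result['score']:.2f}  |  {chunk}")
```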
triggerhappygandi#0001: Makes sense. But I think this wouldn't fit every test case. Oh well, time to gather data.
CKtalon#7792: @bmk @gwern
https://git.openi.org.cn/PCL-Platform.Intelligence/PanGu-Alpha
Huawei shared 2.7B and 6B weights, and seem to be going to share 200B weights |
Kharr#7888: If there was a dataset which classified partial sentences you wouldn't have to do this, but I'm not sure it exists. Might be worth looking around.
triggerhappygandi#0001: I'll ask my employer :p
triggerhappygandi#0001: Since they want this SEO-esque task
nz#9710: damn, sharing the weights of a 200B model 👀
Kharr#7888: If you have the data you can easily slap a full sequence length classification head on any pretrained model and tune it to give you the sentiment of every individual token.
kindiana#1016: you can also try some sort of MLM thing
Kharr#7888: That's a good idea, haven't heard of anyone doing MLM sentiment classification
Sphinx#2092: The problem is that the words themselves are not negative or positive .
mgostIH#0245: We are getting brain downloading before brain uploading
Sphinx#2092: "Star wars" is not inherently negative or positive, in contrast to more traditional words like "negative" or "positive".
Kharr#7888: That's exactly why MLM could work. If you structure it like "I like [MASK], but I think star trek is far beneath it in terms of the lore" and target is sentiment classification, the model would learn to predict the sentiment of the masked word based on the context.
CKtalon#7792: yea. they estimate it to be ~1T
CKtalon#7792: lol
Sphinx#2092: I think the MLM would just give you probabilities of likely words. It would probably give similar probability of any fandom similar to star trek, independent of sentiment.
Kharr#7888: It's not exactly MLM, but MLM-like with the target being a classification label, not a real word. So the probability would be for the classification label.
kindiana#1016: train a regression model on the difference of the output of a sentiment model and the same model with one particular word masked out
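crude sketch of that idea (just deleting the span instead of masking it, and no trained regression model, purely illustrative):
```python
# score the sentence, score it again with the target span removed, and treat the
# change in the positive probability as that span's contribution
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

def positive_score(text):
    result = classifier(text)[0]
    return result["score"] if result["label"] == "POSITIVE" else 1.0 - result["score"]

sentence = "I like star wars, but I think star trek is far beneath it in terms of the lore"
for target in ["star wars", "star trek"]:
    ablated = sentence.replace(target, "")
    delta = positive_score(sentence) - positive_score(ablated)
    print(f"{target}: {delta:+.3f}")  # positive delta -> the span contributed positively
```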
Sphinx#2092: I think maybe I just don't understand the task. So the input is the sentence, and some particular token in the sentence, and we have to predict if the sentence is talking positively or negatively about the token?
Kharr#7888: Traditionally, sentences like "This is not a bad idea, it is a great idea" vs "This is not a great idea, it is a bad idea" confuse models in hilarious ways.
gwern#1782: exciting. so the mad lads are actually going to do it...? I wonder if it'd be worth retraining with english stuff so everyone else can get gpt-3-ish capabilities
CKtalon#7792: https://git.openi.org.cn/PCL-Platform.Intelligence/PanGu-Alpha/issues/2 |
CKtalon#7792: yea, they are doing risk assessment
gwern#1782: _goes to subscribe and... wait, what is this, counterfeit shenzehn github?_
CKtalon#7792: github is shit slow in china
CKtalon#7792: so it's understandable they don't use github
CKtalon#7792: but yea, could download that 13B model right now and play with it
gwern#1782: yeah, I was just about to say, didn't they release 13b too
CKtalon#7792: oh wait.. i think someone needs to convert it to gpu/cpu
Louis#0144: Chinese sites are always soooooo sloooooow
Louis#0144: Is the Chinese Internet really that bad?
Louis#0144: Like is all of China on DSL lmao
CKtalon#7792: it's fast for their sites
CKtalon#7792: hmm, i think generally 100Mbps
gwern#1782: intranet vs internet over wet string
CKtalon#7792: not the fastest
Louis#0144: Oh I see
Louis#0144: I tried downloading 50GB from a Chinese site once
Louis#0144: And it literally took 3 days
Louis#0144: I have 1.5Gbps
Louis#0144: lol
gwern#1782: all that censoring and deep packet inspection doesn't come cheap |
CKtalon#7792: depends on where you download them.. if it's like baidu pan.. it's a joke
CKtalon#7792: they require you to pay, or no matter how fast your connection is, you get like 4KB/s
CKtalon#7792: these days they don't seem big on downloading anyway. everything is streaming pretty much
CKtalon#7792: or they have their own private ways of sharing shit which i have no idea since I'm not in the club
voxs#0001: am i doing something wrong if im calling gc.collect every step in my pytorch training code
EricHallahan#1051: Potentially, but it never hurts lol
StellaAthena#3530: Wait, so they have already trained a 200B model?!?
bmk#1476: interesting, we should figure out how to inference from it once it goes up
CKtalon#7792: yea
CKtalon#7792: https://arxiv.org/abs/2104.12369
Kharr#7888: And then distill it!
kurumuz#5695: yes
bmk#1476: has anyone done scaling laws of language transfer learning?
bmk#1476: to see whether fine tuning a model for one language on another language is faster than training from scratch, whether there are ossification problems, etc
bmk#1476: i know about the scaling laws paper transferring English to python but it feels like natural language may be different
gwern#1782: yes?
bmk#1476: link pls
gwern#1782: it was in /r/mlscaling
gwern#1782: (also in my april newsletter which admittedly hasn't actually been sent yet)
Louis#0144: Can u include a pop up that’s just a picture of a goose |
Louis#0144: I’ll subscribe
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/840263989757739028/photo.png
gwern#1782: I'll need to wait until I link a paper on... *quack* science
gwern#1782: maybe the new mars thing
bmk#1476: quack != honk
gwern#1782: the 'candidate-gene study for psychic ability' was also good
bmk#1476: also ill have you know that this image is in pyfraweb and it looks beautiful. the absolute pinnacle of web dev
Louis#0144: It’s like
Louis#0144: Omg
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/840264666206175293/unknown.png
Louis#0144: I tried pyfra the other day
Louis#0144: Seeing how it works
Louis#0144: I have a 21:9 monitor
Louis#0144: It stretched
Louis#0144: The entire way
Louis#0144: It was beautiful
bmk#1476: the most aesthetic
bmk#1476: (simulated) https://cdn.discordapp.com/attachments/729741769738158194/840264872176123914/unknown.png
Louis#0144: YEAH
Louis#0144: LMAOOO |
Louis#0144: HAHAHAHA
guac#4716: svgoose format...what a beauty
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/840265868192514108/graphic_design_is_my_passion_.png
kurumuz#5695: cute
ASDF#9954: Hello. Yesterday I read some strong criticism of the Huggingface implementation of GPT-Neo in Transformers... I just published a Rust port of the same GPT-Neo that is largely based on the Huggingface implementation (https://github.com/guillaume-be/rust-bert/tree/master/src/gpt_neo). I think it's pretty cool that your awesome work is now available to the Rust community natively. If you are interested, I'd also welcome any improvements you can think of. As I based this implementation on Transformers, it likely has good potential for optimization. Please feel free to open an issue or a PR if you see room for improvement. Thanks again for sharing your models - the generation quality is very impressive!
ASDF#9954: The example is available in `./examples/generation_gpt_neo`:
```rust
// Imports added for completeness; paths follow rust-bert's module layout at the time.
use rust_bert::gpt_neo::{
    GptNeoConfigResources, GptNeoMergesResources, GptNeoModelResources, GptNeoVocabResources,
};
use rust_bert::pipelines::common::ModelType;
use rust_bert::pipelines::text_generation::{TextGenerationConfig, TextGenerationModel};
use rust_bert::resources::{RemoteResource, Resource};
use tch::Device;

fn main() -> anyhow::Result<()> {
    // Set-up model resources
    let config_resource = Resource::Remote(RemoteResource::from_pretrained(
        GptNeoConfigResources::GPT_NEO_1_3B,
    ));
    let vocab_resource = Resource::Remote(RemoteResource::from_pretrained(
        GptNeoVocabResources::GPT_NEO_1_3B,
    ));
    let merges_resource = Resource::Remote(RemoteResource::from_pretrained(
        GptNeoMergesResources::GPT_NEO_1_3B,
    ));
    let model_resource = Resource::Remote(RemoteResource::from_pretrained(
        GptNeoModelResources::GPT_NEO_1_3B,
    ));

    // Generation configuration: top-p/top-k sampling combined with diverse beam search
    let generate_config = TextGenerationConfig {
        model_type: ModelType::GPTNeo,
        model_resource,
        config_resource,
        vocab_resource,
        merges_resource,
        min_length: 10,
        max_length: 32,
        do_sample: true,
        early_stopping: true,
        num_beams: 6,
        num_beam_groups: Some(2),
        num_return_sequences: 2,
        device: Device::Cpu,
        ..Default::default()
    };
    let model = TextGenerationModel::new(generate_config)?;

    // Generate continuations for two prompts
    let input_contexts = [
        "It was a very nice and sunny",
        "It was a gloom winter night, and",
    ];
    let output = model.generate(&input_contexts, None);
    for sentence in output {
        println!("{}", sentence);
    }
    Ok(())
}
```
This generates as an output (with top-p/k sampling and diverse beam search):
```
It was a very nice and sunny spring day. I was sitting in the shade of a large tree in the middle of a wooded area. It was
It was a very nice and sunny autumn day, and I was sitting on the steps of the Church of the Holy Sepulchre, in Jerusalem,
It was a gloom winter night, and all the windows were dark. It was cold, and it was dark, and there was no fire in the fireplace.
It was a gloom winter night, and in the city of Chicago there was a fire. The Chicago Fire was the worst fire in the history of the
```
Sphinx#2092: There was this: https://christina.kim/2021/04/11/scaling-laws-for-language-transfer-learning/ . The awkward part is just about how to be faithful to the tokenizer.
Sphinx#2092: Realistically, you would either use a tokenizer that was built with both languages in mind (e.g. some sort of multilingual vocab) or you can start with the 'wrong' one then swap to the 'correct' one later. The first approach may just be impossible practically, if the new language you encountered is rare, but people have found ways around this (https://arxiv.org/abs/2012.15562). For the latter strategies, people have found great success with it in MT (https://arxiv.org/abs/2103.06799)
bmk#1476: ah interesting
bmk#1476: so it looks like a similar pattern holds
bmk#1476: for tokenizer i was mostly thinking of the swapping strategy and hoping the model would somehow reuse the stuff it learned
bmk#1476: also lol it looks like they used our OWT2 :ultrazucc:
Sphinx#2092: Yeah swapping the tokenizers works really well for MT (see paper above) as long as you share embeddings.
Sphinx#2092: though if you don't sample the old data, you'll start to experience some forgetting.
bmk#1476: it seems like another way to get around it would be to freeze the middle chunk of the model initially and slowly allow more and more of it to be tuned
Sphinx#2092: Yeah there's a whole line of research on trying to do things of that sort re: freezing some parts of the model.
Sphinx#2092: It's hard to tell whether this is ossification or if we are just incompetent at fine-tuning pre-trained models.
chilli#5665: I thought we discussed this already, lol
chilli#5665: but it's trained for a lot less tokens than GPT-3
StellaAthena#3530: Oh right. I remember now
StellaAthena#3530: It's 200B but not GPT-3 quality
CRG#8707: Do we know what the Chinese > English token conversion rate is? :thonk:
chilli#5665: It's BPE so I don't think it should matter?
CRG#8707: Why wouldn't that matter?
CRG#8707: Chinese BPEs could end up encoding more "effective information?" than English BPEs
StellaAthena#3530: @CRG We do not. I want to go on a crusade about this but nobody is with me 😢
StellaAthena#3530: If y'all remember that paper where people got their university banned from submitting to Linux projects because they were trying to deliberately introduce bugs into the Linux kernel, IEEE (who had accepted the paper for publication) did an investigation of their review process and found that the authors misrepresented their university IRB's findings rotfl.
bmk#1476: did their university get unbanned as a result?
nostalgebraist#3542: thanks -- btw i applied your change in my bot this morning, it's been live for a few hours |
finetune#0907: o nice
StellaAthena#3530: Not as far as I am aware, and I don't think that's particularly unreasonable either. Three tenured faculty looked at this project and went "yes this is a good idea" or at least didn't stop it.
bmk#1476: hm
EricHallahan#1051: hmm
bmk#1476: does the ban extend to alumni too? or does it only cover currently affiliated people
Sid#2121: what's your bot?
Louis#0144: That tumblr
Sid#2121: ok thanks very helpful
nostalgebraist#3542: https://nostalgebraist-autoresponder.tumblr.com/
nostalgebraist#3542: https://github.com/nostalgebraist/nostalgebraist-autoresponder
nostalgebraist#3542: alas it's currently impossible to talk to it without a tumblr account, i recently turned anon asks off because i was getting a ton of boring spammy ones
ethan caballero#6044: smallest data:
https://discord.com/channels/729741769192767510/785968841301426216/840313672818622525
wyrdc#1871: I'm not sure precisely what you're asking but I did compare a few instances of PanGu token vocab to GPT-2&3's. As you would expect, PanGu is more effective with Chinese characters than GPT; the long single-token strings in PanGu I looked at were each 10-14 tokens long in GPT, and represented concepts that are 2-4 GPT tokens in English. Going the other direction: my full legal name is very English-sounding and can be written with only 3 tokens in GPT, but needs at least 7 tokens in PanGu.
CKtalon#7792: it's around 1 to 1.1
CKtalon#7792: chinese names are generally 2-4 tokens 😛
CKtalon#7792: also PanGu uses jieba first before using their sentencepiece-based BPE
CKtalon#7792: i'm asking more about how they trained the BPE model
CKtalon#7792: but character wise, 1000 Chinese characters will generally be between 600-700 English words
CKtalon#7792: But after tokenization, it's about the same. |
zphang#7252: the one-two OOFs from ACL and ICML 🙃
CKtalon#7792: Awesome. They are working on distilling the model too
CKtalon#7792: The 13B model has been distilled to fit on a single (Huawei) GPU (~28GB)
kindiana#1016: do you have more info on that?
CKtalon#7792: https://git.openi.org.cn/PCL-Platform.Intelligence/Model-Compression
kindiana#1016: ah cool
CKtalon#7792: They already released it. But still on their hardware only. Others would like to know when they will do it for gpus lol
kindiana#1016: looks like mixed precision + embedding factoring (?)
kindiana#1016: wait its just embedding sharing? 🤔
alexandrost#2936: Hey guys!
alexandrost#2936: do you know if I can run the model on two GPUs?
alexandrost#2936: I am trying to maximize the inference speed ... Any ideas?
EricHallahan#1051: Not easily.
EricHallahan#1051: If anyone *does* know, I'm all ears.
AerysS#5558: I wonder, are a torch model's parameters updated by the optim or the Module? Which one stores the parameters?
If at some point I initialize the model again, but use the same optim to update the new model's parameters, what is the behavior?
UnsupervisedLearner#4148: To set the stage for a question:
I was thinking about making a turn based rpg where most of the content including sprites, move sets, stats, etc is sampled from a custom network. |
So that got me thinking about how, when you're playing a game like this, you can usually infer a lot from the sprite itself as to the traits found in its stats
For example if you saw an Eastern looking Dragon with lightning clouds, you might assume it's 'Electric' type, maybe have high speed stat, etc.
And I was wondering how I could represent this if I were to train my generative network, since the two inputs have very different amounts of information in each signal. A sprite conveys way more information than a couple uints.
I would like to have a single network output all the information on an entity - sprite, stats, etc. and train it using a CLIP-like contrast. Does this sound reasonable and feasible? Or should I just train a sprite generator, and a 'stats inference' network separately?
I suppose I could try both, but I wanted to get feedback and see if there's some obvious third method I haven't thought of, or some detail I am missing.
Thanks for reading my rambling
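A minimal sketch of the CLIP-style pairing idea described above, assuming a tiny sprite encoder and a stats encoder (all names, sizes, and architectures here are illustrative, not from the original message):

```python
import torch
import torch.nn.functional as F
from torch import nn

class SpriteStatsCLIP(nn.Module):
    """Contrastively align sprite embeddings with stat-vector embeddings."""
    def __init__(self, n_stats: int, dim: int = 256):
        super().__init__()
        self.sprite_enc = nn.Sequential(  # toy conv encoder, stand-in for a real one
            nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
        self.stats_enc = nn.Sequential(nn.Linear(n_stats, 128), nn.ReLU(), nn.Linear(128, dim))
        self.logit_scale = nn.Parameter(torch.tensor(2.0))

    def forward(self, sprites, stats):
        s = F.normalize(self.sprite_enc(sprites), dim=-1)
        t = F.normalize(self.stats_enc(stats), dim=-1)
        logits = self.logit_scale.exp() * s @ t.T       # (batch, batch) similarity matrix
        labels = torch.arange(len(s), device=s.device)  # matching sprite/stats pairs on the diagonal
        return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2
```

Under a setup like this the sprite generator can stay a separate model; the shared embedding space is what lets stats be inferred from (or conditioned on) a sprite.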
EricHallahan#1051: I recommend reading the PyTorch documentation, it will explain it better than any of us here.
inox#5400: yeah do the pytorch blitz https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html
alstroemeria313#1694: the parameters are stored in the Module, when you create the optimizer it has references to the parameters that are in the Module
alstroemeria313#1694: it will continue to update those same parameter objects
alstroemeria313#1694: if you replace them with new objects that don't refer to the same memory locations it will not update the new ones
alstroemeria313#1694: if you copy into the existing parameters or use an in-place sampling method it will continue to update them after
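A minimal sketch of that behavior (the toy module is just for illustration):

```python
import torch
from torch import nn

model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # optimizer stores references to these exact tensors

old_params = list(model.parameters())

# Case 1: re-initialize in place -- same tensor objects, so `opt` still updates them
with torch.no_grad():
    for p in model.parameters():
        p.copy_(torch.randn_like(p))
assert all(p is q for p, q in zip(model.parameters(), old_params))

# Case 2: build a fresh module -- brand new tensor objects
model = nn.Linear(4, 2)
assert all(p is not q for p, q in zip(model.parameters(), old_params))
# `opt` still points at old_params; opt.step() would keep updating those orphaned tensors,
# not the new model, unless you rebuild the optimizer (or hand it the new parameters).
```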
AerysS#5558: I see, thanks for your help!
finetune#0907: starting a finetune of 2.7b with the gpt-neo codebase somehow starts at 6.2 loss. sounds higher than it should be, right? it's just english text :thonk: |
kurumuz#5695: graph: https://cdn.discordapp.com/attachments/729741769738158194/840655142603980810/unknown.png
Louis#0144: Don’t use fp16
Louis#0144: I had this issue too
Louis#0144: Or just use. A big warmup
finetune#0907: the config doesn't have precision, so i assumed it would be fp32. does it default to fp16?
finetune#0907: that could explain it
finetune#0907: no, looks like it should be fp32 from the code
Louis#0144: Is this Hf?
Louis#0144: or neox
finetune#0907: original gpt-neo
Louis#0144: If not I can’t help
Louis#0144: Ah
Louis#0144: Sorry don’t know then
finetune#0907: thanks anyways
EricHallahan#1051: Stop assuming everyone shares the same problem as you did lol
Louis#0144: I’ve met three people who had that exact issue
AI_WAIFU#2844: if you throw out all your training code and just try to predict the data with the OG model what do you get?
Louis#0144: A lot of people see fp16 as a way to fit a bigger model into their old gpu
Louis#0144: And then confused when it breaks
finetune#0907: actually tried to predict on the original model |
finetune#0907: was kinda funky
finetune#0907: so i kinda suspect that something's wrong
AI_WAIFU#2844: what loss did you get?
finetune#0907: oh, wait, you mean eval on the data
finetune#0907: didn't try that i think, just sampled from the model
AI_WAIFU#2844: yeah don't sample yet, just get eval on your data and a dataset with a known loss to make sure you're doing things right
finetune#0907: the sampling was done before starting the finetune, so should have performed like regular gpt-neo 2.7b, but it kind of turned broken after a few tokens
AI_WAIFU#2844: again don't start with sampling
AI_WAIFU#2844: sampling is difficult to evaluate
AI_WAIFU#2844: look at the loss
kurumuz#5695: It doesn't sample anything meaningful with the regular model, for some reason.
EricHallahan#1051: Sampling is **very** difficult to evaluate.
kurumuz#5695: It pretty much places random tokens
finetune#0907: i get what you're saying, but
https://gist.github.com/finetuneanon/fef4a4c0880126233dcdb3a7c5c77b53
AI_WAIFU#2844: right, so something is broken, so take a step back and start by making sure you eval is working
finetune#0907: kind of suspected it might just be the sampling code being weird tho
kurumuz#5695: Yeah, I will check the sampling
kurumuz#5695: ```
In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English. |
Apparently, ironically, it is 2 teenage, in ajoint, and Jess, fe No wonder, and th np, and thers's - a/. ike por A itmes, and Its "patx, things, a, ian'sre, in Italian's f in A it. one's ime, a' f a it, adn, the Dublin,. a it, that people, was, torn, a, fo, at,. it, its,!,!, a' tits, iaQ, and t A not, a, are Jian, iz iz, izg,
who ier,
of a-1 big, sho, at, but A', a, ia it, and, a, that a franc, in a, iz, thes. lung,. iz, iz, " a In him, us, in t, a, iz, iz,Is a, it, in th, it, a, it, yes,
In' iz, and it, a, it, a, that �, the, it, it, it, an ""a, it, it, it, a, it, it, a, it, iz, is J,
the, it, a, it, a, it, it, a, it, a, it, a, it, a, it, a, it, it, a, a, it, a, it, a, a, it, a, it, a, it, a, it, a, it, a, it, a, a, it, a, it, a, it, a, it, a, a, it, a, it, it, a, th, and
So, the, it, a, a, it, a, it, a, it, a, it, a, it, a, it, it, a, it, a, it, a, it, a, it, a, it, a, it, it, a, a, it, a, it, a, it, a, it, a, it, a, it, a, it, a, it, a, it, a, it, a, it
[, a, it, a, a, it, a, it, a, it, a, it,
```
finetune#0907: let's say we eval on a bit of pile |
finetune#0907: what kind of loss should we expect?
AI_WAIFU#2844: https://huggingface.co/EleutherAI/gpt-neo-2.7B
AI_WAIFU#2844: see the table
finetune#0907: might be a dumb question, but can you get loss from ppl or bpb?
finetune#0907: maybe should just try the eval
finetune#0907: and see what it outputs
finetune#0907: o there's lambada
AI_WAIFU#2844: yeah, that is kinda a dumb question
BPB, BPC, validation loss and perplexity are all closely related. Essentially they each measure the probability that the model assigns to the data. The higher the probability the better. I would recommend getting a primer on information theory to understand what's going on.
The key insight is that the probability a model assigns to a given string shrinks exponentially with its length. So if you take the negative logarithm of that probability and normalize by length, you get the rate at which that probability shrinks.
BPB: -log_2(P(text))/n_bytes(text)
Val loss (usually): -ln(P(text))/n_tokens(text)
BPT: -log_2(P(text))/n_tokens(text)
Perplexity (usually): 2^(-log_2(P(text))/n_words(text))
You can then convert between them.
finetune#0907: thanks that's very helpful |
finetune#0907: so for the same text/tokens, going by the bpb of 0.7165, pile validation loss should be somewhere in the range of 1.76 to 1.86
finetune#0907: very roughly at least
finetune#0907: pretty far away from 6 or even 3
AI_WAIFU#2844: I haven't done the math but that looks about right.
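The rough arithmetic behind that range, as a sketch (the bytes-per-token figures are assumptions about what a GPT-2-style BPE averages on English text, not measured here):

```python
import math

def bpb_to_loss(bpb: float, bytes_per_token: float) -> float:
    """Convert bits-per-byte to average nats per token (i.e. validation loss)."""
    # bpb = -log2(P) / n_bytes  and  loss = -ln(P) / n_tokens,
    # so loss = bpb * ln(2) * (n_bytes / n_tokens)
    return bpb * math.log(2) * bytes_per_token

bpb = 0.7165  # reported Pile BPB for GPT-Neo 2.7B
for bpt in (3.5, 3.6, 3.7, 3.8):  # assumed bytes-per-token ratios
    print(f"{bpt}: {bpb_to_loss(bpb, bpt):.2f}")
# prints roughly 1.74, 1.79, 1.84, 1.89 -- bracketing the 1.76 to 1.86 quoted above
```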
gdawg16#0493: How do I get google colab to stop crashing with the 2.7b model?!?!
alexyz#3459: 1. do ```!nvidia-smi``` and tell what GPU you got
alexyz#3459: there are many GPUs you could have gotten, some aren't good enough (i'm pretty sure)
so there's V100s (top tier)
P100s
T4s
K80s
P4s (bottom tier)
alexyz#3459: and... no response 😐
bmk#1476: i think we need to start enforcing "no tech support" more strictly
Kharr#7888: or move it all to a specific channel
bmk#1476: i don't want to legitimize it by giving it a channel
gdawg16#0493: I’m back!
gdawg16#0493: Tesla t4!
gdawg16#0493: I didn’t know this thanks for the info
gdawg16#0493: @alexyz |
alexyz#3459: I'm usually able to run 2.7B on T4s
gdawg16#0493: Welp
gdawg16#0493: It downloads the model and then crashes 😦
kurumuz#5695: using TF?
Tinytitan#5596: Hope you can get it working without too much dificulty
gdawg16#0493: thank u
finetune#0907: getting logits for 1085 tokens from somewhere in the middle of the Hobbit with hf's implementation, fp32, gpt-neo-2.7B, I get a loss of 2.6168, which is closer to the 2.9 from the finetune than the estimated average pile val loss. maybe these kinds of samples are just further away from the average text in the pile
zphang#7252: related: you should build your own subset of eval texts for computing perplexity on as you do your finetune experiments
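For the HF port, a rough sketch of that kind of fixed eval (the file path is hypothetical; keep the same text across runs so the numbers stay comparable):

```python
import math
import torch
from transformers import GPT2TokenizerFast, GPTNeoForCausalLM

tokenizer = GPT2TokenizerFast.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B").eval().cuda()

text = open("my_eval_subset.txt").read()  # hypothetical held-out eval file, fixed across runs
ids = tokenizer(text, return_tensors="pt").input_ids[:, :2048].cuda()  # trim to context length

with torch.no_grad():
    loss = model(ids, labels=ids).loss  # mean cross-entropy in nats per token

print(f"loss {loss.item():.4f}, ppl {math.exp(loss.item()):.2f}")
```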
Louis#0144: Doesn’t everyone do this....
Louis#0144: Like if you’re making a dataset
zphang#7252: I think they're still getting up and running for their experimental workflow
Louis#0144: Oh ok
finetune#0907: pretty much
Louis#0144: I misunderstood
Louis#0144: But yeah Jason that's common ML practice though right?
finetune#0907: yea, definitely
zphang#7252: yea but in their usecase I presume they're focused more on stories and things
finetune#0907: right now i'm just trying to figure out if we're like even loading the model correctly
Jozef Poniatowski#7589: how do you choose a development set and set size for pretraining big LMs?
Jozef Poniatowski#7589: do you just do ~0.1 of the training data? |
Jozef Poniatowski#7589: (assuming a single source of data like wikipedia)
Jozef Poniatowski#7589: i guess what i'm getting at is since pretraining uses a lot of data, does that mean the dev/test sets also become huge?
zphang#7252: you just want the dev/test set to be large enough so your estimate of out-of-sample loss does not have too much variance
Jozef Poniatowski#7589: ah
bmk#1476: basically you eyeball it
Jozef Poniatowski#7589: gotcha
bmk#1476: a time honored tradition
Jozef Poniatowski#7589: loll
gwern#1782: yeah, the size of the heldout dataset doesn't really need to scale with the training dataset, compute, or model scale. as long as it's a good random subsample, the law of large numbers...
𓅬 gabriel_syme 𓅬#3220: The elethereye approach
Deleted User#0000: well as you scale up, the expected test error goes down, so you want to increase the test set so that the relative error in the test error estimate stays constant
gwern#1782: eh, sure, I guess. I suppose if you make small improvements in the loss and you want good confidence about the deltas, you'd need to scale it up by a decent amount... but I bet it asymptotes at some fairly small amount like gigabytes at most
𓅬 gabriel_syme 𓅬#3220: or maybe your model is just better if your test set is representative?
𓅬 gabriel_syme 𓅬#3220: btw I'm totally not getting grokking, like why it happens. Should I not be looking at the poster?
Deleted User#0000: the error in the estimate is like 1/sqrt(N) with N test set size. If actual error decreases as power law with some parameter, you'd want to scale up the test set as a power law too
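Spelling that argument out, under the (strong) assumption that the per-example loss variance $\sigma^2$ stays roughly constant as models improve:

```latex
\mathrm{SE}\big(\hat{L}\big) \approx \frac{\sigma}{\sqrt{N}}, \qquad
\frac{\mathrm{SE}(\hat{L})}{L} = \text{const}
\;\Longrightarrow\;
N \propto \frac{\sigma^2}{L^2};
\quad\text{with } L(C) \propto C^{-\alpha}, \text{ this gives } N \propto C^{2\alpha}.
```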
gwern#1782: did you see my comment?
𓅬 gabriel_syme 𓅬#3220: I just saw it thanks! I'm in reddit reading now, if that's what you meant
gwern#1782: you're jacked into the matrix and surfing the world wide web eh
bratao#6397: lucidrains X-transformers is on Hacker news front page !
bratao#6397: https://news.ycombinator.com/item?id=27089208 |
𓅬 gabriel_syme 𓅬#3220: unacceptable, not a single image of Ice Cream in there. thankfully quite a few references 🙂
StellaAthena#3530: The person whining about how the repo doesn't explain what transformers are is hilarious
EricHallahan#1051: If you don't know what they are then you probably shouldn't be too interested in it lol
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/840804843002134598/MV5BNDg1NTU2OWEtM2UzYi00ZWRmLWEwMTktZWNjYWQ1NWM1OThjXkEyXkFqcGdeQXVyMTQxNzMzNDI.png
𓅬 gabriel_syme 𓅬#3220: some people literally went there expecting some hand-made robot
finetune#0907: looks like there's really some kind of issue with loading the model tho. using this config (https://pastebin.com/raw/QmhM2tXu) to do eval on just the original released 2.7b model using the gpt-neo codebase:
Eval task 'lambada' results: {'lambada_acc': 0.030467689, 'lambada_log_ppl': 10.135368, 'loss': 7.6950855, 'global_step': 400000}
not loading at all gives lambada_acc of 0%, so shouldn't be just a wrong model path or something
after 8000 steps of bs 16, lambada_acc is at 60%. guess most of the model is loaded, but something gets borked somewhere. stared at the config a bunch, but other than hyperparams for training, it matches what it should look like i think
kurumuz#5695: ```
original model:
step 400000 -> {'lambada_acc': 0.030467689, 'lambada_log_ppl': 10.135368, 'loss': 7.6950855, 'global_step': 400000}
after starting the finetune:
step 401000 -> {'lambada_acc': 0.56374925, 'lambada_log_ppl': 2.0661638, 'loss': 12.263127, 'global_step': 401000}
step 408000 -> {'lambada_acc': 0.600621, 'lambada_log_ppl': 1.8476496, 'loss': 12.034605, 'global_step': 408000}
```
kindiana#1016: expected performance is 62.22% acc, 1.72739871 log ppl
kurumuz#5695: yeah, we're comparing with that.
finetune#0907: 3.04% to 62.22% is kinda a big diff :berk: |
kurumuz#5695: the 2.7B original model is sampling complete garbage though.
finetune#0907: it's clearly not loading right somehow from the eval, so can't really expect sampling to work
StellaAthena#3530: Are you using the TF checkpoint or the HuggingFace API
StellaAthena#3530: It’s weird how you’re getting near-random results but then only 1000 steps of fine-tuning is above 50%
kurumuz#5695: tf checkpoint on gpt-neo repo
kindiana#1016: my guess is something like layernorm weights are not being loaded
kindiana#1016: or biases
StellaAthena#3530: It makes me think that there’s something minor wrong that the model is learning to account for with minimal effort
kurumuz#5695: Eval task 'lambada' results: {'lambada_acc': 0.68794876, 'lambada_log_ppl': 1.4260509, 'loss': 12.253363, 'global_step': 400000}
kurumuz#5695: ok
StellaAthena#3530: You don’t get 50% on LAMBADA with only 1000 steps from actually random numbers
kurumuz#5695: config had n_head at 32
kurumuz#5695: i changed it to 20
kurumuz#5695: now its normal
kindiana#1016: wait that's really high :thonk:
kurumuz#5695: yes
kurumuz#5695: its config at the repo
kurumuz#5695: the first time the model came out, i tried to use it and it was sampling garbage
kurumuz#5695: just like now
kindiana#1016: no like the acc is much more than expected |
kurumuz#5695: oh right
kindiana#1016: oh jk
kindiana#1016: this is last token acc/ppl
kindiana#1016: that sounds about right for that
kurumuz#5695: check
kurumuz#5695: https://github.com/EleutherAI/gpt-neo/blob/master/configs/gpt3_2-7B_256.json
StellaAthena#3530: That’s the file we need to change to say 20 instead of 32?
kurumuz#5695: yes
kurumuz#5695: I think there are some other problems in the config as well
kurumuz#5695: trainsteps too low
kurumuz#5695: won't start if doing a finetune
finetune#0907: the config on the-eye has 20, but only checked there just now
kurumuz#5695: no local attention in config
StellaAthena#3530: Ah
StellaAthena#3530: @kurumuz I see why that’s confusing, but that’s actually not the file you think it is
kurumuz#5695: yeah probably
kurumuz#5695: but i looked at the repo first for the configs
kurumuz#5695: i also looked at the eye, but didnt notice the n_head was different
kurumuz#5695: so I was trying to figure out the problem for a day :D
kurumuz#5695: @finetune restart the run? |
finetune#0907: yea
kurumuz#5695: :berk:
finetune#0907: guess it learned to ignore the garbage heads within the first 1k steps or so
finetune#0907: interesting
StellaAthena#3530: Yeah that is interesting
Louis#0144: Happy Mother’s Day everyone. Pls spend time away from work today with parents if applicable
kurumuz#5695: I guess you find interesting things through pain and suffering
EricHallahan#1051: I expect that it is the case.
Deleted User#0000: wow, how did that happen lol
EricHallahan#1051: ¯\_(ツ)_/¯
nostalgebraist#3542: with the way splitting/merging heads works, wouldn't 20 --> 32 screw up the calculation of all 20 existing heads? rather than getting the 20 intended heads correctly, plus 12 extra random heads
nostalgebraist#3542: so the entire attn part would be garbage
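A quick illustration of why (2560 = 20 × 128 = 32 × 80, so the misconfigured reshape is silently shape-valid; the split-into-(seq, heads, head_dim) layout is the usual one, used here only as an illustration):

```python
import torch

seq, hidden = 4, 2560
x = torch.randn(seq, hidden)  # e.g. the attention input projection for one sequence

heads_right = x.view(seq, 20, 128)  # the split the weights were trained for
heads_wrong = x.view(seq, 32, 80)   # the split a misconfigured n_head produces

# "Wrong" head 1 starts at feature 80, i.e. in the middle of trained head 0, so from the
# second head onward every slice straddles two trained heads and the per-head structure is lost.
print(torch.equal(heads_wrong[:, 1], x[:, 80:160]))   # True
print(torch.equal(heads_right[:, 1], x[:, 128:256]))  # True
```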
finetune#0907: somewhat recovered pretty quickly in that case
Daj#7482: seems like a kinda interesting unintended observation :thonk:
kurumuz#5695: Should've tested the later checkpoints too.
kurumuz#5695: But yeah, it almost caught up to the base model in lambada eval with just 1k steps.
EricHallahan#1051: Seems highly useful.
nostalgebraist#3542: btw, in case this is useful to others, my own work branch when i was finetuning: https://github.com/nostalgebraist/gpt-neo/tree/nost-work
the main changes: |