Louis#0144: @Deleted User
Louis#0144: hes a beast
Louis#0144: his dog helps him code
Napolean_Solo#2907: I see.. anyway I would be grateful if you guys can give a brief idea about what a usual process look like?
Louis#0144: no one knows how phil works
Louis#0144: besides his dog
Napolean_Solo#2907: Stuff you guys do seems pretty cool so just out of curiosity
Napolean_Solo#2907: So what do you guys do?
Louis#0144: cry mostly
Louis#0144: nah
EricHallahan#1051: Lucid is the dog
Napolean_Solo#2907: @EricHallahan
Louis#0144: idk
Louis#0144: We just have a lot of hands
Louis#0144: its mostly a numbers thing
Napolean_Solo#2907: Hmm
EricHallahan#1051: Hmmm
Napolean_Solo#2907: You're a GPT-Neo dev
Napolean_Solo#2907: What does your work look like?
EricHallahan#1051: I haven't done *that much work* IMO, but it is mostly keeping track of dependencies and such for the Docker image we deploy.
Napolean_Solo#2907: So who does the mind crunching work?
mgostIH#0245: @Napolean_Solo I suggest you to start writing simpler models yourself and to understand their core concepts
mgostIH#0245: Attention isn't that difficult for example
mgostIH#0245: The issue comes in implementing things in a way that's also performant
EricHallahan#1051: I played a pretty big role in getting the image down pat.
mgostIH#0245: For that you need to scope into the details of your framework (Pytorch)
EricHallahan#1051: You mean catal?
Napolean_Solo#2907: I am kinda confused about one thing, there was a time like 3 years ago Tensorflow was everything and then all of a sudden PyTorch gets the spotlight. What changed exactly?
alstroemeria313#1694: PyTorch came out :)
mgostIH#0245: A slow steady progress of researchers going "Oh, this new Pytorch thing doesn't suck like Tensorflow"
mgostIH#0245: But I wouldn't stay too attached to any framework
Napolean_Solo#2907: Would you care to elaborate a bit more on why Tensorflow sucks?
mgostIH#0245: Learn coding principles and understand how to see a paper as code
alexyz#3459: Why does everyone hate tensorflow lmao
mgostIH#0245: A lot of sharp edges
mgostIH#0245: It's hard to point at exactly one reason when things go bad, it's usually the tons of inconsistencies that you can't bear in the long run
mgostIH#0245: Which is why I don't code C++ anymore
Napolean_Solo#2907: I have talked to some startups many use Tensorflow
catal#4638: So if I understand it correctly if I have some sequence S for which I want to calculate the attention then I have three matrices A, B, C so that SA = Q, SB = K, SC = V? And during training I learn the matrices A,B, C?
Napolean_Solo#2907: I guess it has something to do with Google marketing it as Production ready
mgostIH#0245: Yeah, per layer, per head
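(A minimal sketch of the projections catal describes, single head and with illustrative sizes: A, B, C are the learned matrices, with Q = SA, K = SB, V = SC feeding scaled dot-product attention. Not any particular library's implementation.)

```python
import torch
import torch.nn.functional as F

d_model, d_head, seq_len = 512, 64, 10
S = torch.randn(seq_len, d_model)            # input sequence

# The learned matrices catal calls A, B, C (one set per head, per layer).
A = torch.nn.Parameter(torch.randn(d_model, d_head) / d_model**0.5)
B = torch.nn.Parameter(torch.randn(d_model, d_head) / d_model**0.5)
C = torch.nn.Parameter(torch.randn(d_model, d_head) / d_model**0.5)

Q, K, V = S @ A, S @ B, S @ C                # SA = Q, SB = K, SC = V
scores = Q @ K.T / d_head**0.5               # scaled dot-product
out = F.softmax(scores, dim=-1) @ V          # (seq_len, d_head) attention output
```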
Napolean_Solo#2907: Businesses love the word "Production ready".
catal#4638: Ohh okay, somehow I was unable to get that from the paper itself. Thank you
EricHallahan#1051: Don't worry, I spent days pulling my hair out doing the same lol
EricHallahan#1051: Luckily I wear my hair really short so that is hard.
chilli#5665: https://thegradient.pub/state-of-ml-frameworks-2019-pytorch-dominates-research-tensorflow-dominates-industry/
chilli#5665: I like this article :^)
Louis#0144: tbf
Louis#0144: TF has always sucked
Louis#0144: lmao
Louis#0144: like I went right from Theano to Pytorch
Louis#0144: I would rather use Theano than TF
elderfalcon#4450: As far as I recall, a few years ago one of the Keras owners (fchollet), who as best as I understand had some codebase authority, did a bit of a hostile takeover of the API that really screwed TF, especially going into 2.0. Lots of people not happy, lots of political posturing and muscling through unpopular changes (if the one infamous GitHub issue + the following API changes were any indication).
Follow that with trying to maintain backwards compatibility between the old TF (rather functional) and the new, rigid Keras-ized TF, plus just some really bad design decisions, and companies, both users and hardware/software development companies (like NVidia), stopped moving to TF 2.0 and kept supporting the latest TF 1.x (can't remember if it's 15 or 16) release.
PyTorch has had a consistent and clean design from the ground up, I think, for the most part. I know there's a few weird corners in PyTorch but it's miles ahead, and way better than what Eager was trying to do.
Also, Eager (TF 2.0's big/primary thing) is pretty bad IMO. Some ideas could be nice, but it looks like Jax is the internal replacement that a lot of projects in Google are using, and the writing is on the wall for TF (thank goodness).
Sphinx#2092: You are not alone, apparently: https://pymc-devs.medium.com/the-future-of-pymc3-or-theano-is-dead-long-live-theano-d8005f8a0e9b
chilli#5665: I disagree that that was the root cause
chilli#5665: TF was already on the way out before 2.0/keras
elderfalcon#4450: Agreed, I think it (for me personally) was the final nail in the coffin of moving away from it. Fragmentation was horrible up until that point and the slim/tflearn/Keras/layers fragmentation was awful.
That may be what stands out for me as I hit my "hang up the hat" point at that point. Human bias and etc, mumble mumble.
chilli#5665: Agreed that TF 2.0 did not help haha
chilli#5665: On the other hand, not sure that Jax would have gotten as much momentum if it wasn't for TF lol
chilli#5665: Neither would pytorch
chilli#5665: So I guess it's all worked out
Napolean_Solo#2907: So I was right about Tensorflow being marketed as production ready.
Napolean_Solo#2907: Yeah I remember that people didn't like that they messed up Keras
Napolean_Solo#2907: This was in 2018 if I remember
Napolean_Solo#2907: This brings clarity on exactly what was the issue
Napolean_Solo#2907: Anyway, is anybody working on implementing Dall-E research?
Napolean_Solo#2907: Is the paper even published yet?
EricHallahan#1051: Yes, everything except for the meat of the model.
Napolean_Solo#2907: Why did they not publish it though? I mean i understand it can create some images of whatever you tell it to but the quality would be bad anyway
EricHallahan#1051: It is in preprint I'm sorry.
Napolean_Solo#2907: Huh? What do you mean?
EricHallahan#1051: They released the preprint to arXiv a little while ago.
Napolean_Solo#2907: So are they planning to publish the main stuff at all?
Sphinx#2092: The paper looks pretty thorough at a glance.
alstroemeria313#1694: same
alstroemeria313#1694: no
alstroemeria313#1694: https://github.com/openai/DALL-E/issues/4
Napolean_Solo#2907: Lol the dislikes said it all
Napolean_Solo#2907: Lot of people unhappy with that decision
Napolean_Solo#2907: But imagine internet will be filled with these images if they do. The amount of misinformation being spread will be huge. Facebook will be the breeding ground for these.
Napolean_Solo#2907: Not liking where the future is headed
Napolean_Solo#2907: Have any of you here read the book by *Arthur C Clark's the light of the other days*?
Teemochu#8740: I don't think these images look realistic yet unless you're generating just a specific face
Teemochu#8740: realistically *art*, sure, but I wouldn't say there's much potential for "misinformation" in whether you or a computer program you ran actually created the image
Napolean_Solo#2907: Yeah but they are still high fidelity
EricHallahan#1051: Not really lol
EricHallahan#1051: They look good, but they are low res and highly smoothed.
Teemochu#8740: (And right now this is what GPT-3 is being used for... very far cry from "ultimate fake news machine") https://cdn.discordapp.com/attachments/729741769738158194/832707693138411631/bjjwkscsxht61.png
Napolean_Solo#2907: GPT-3 is an effective tool to create misinformation
Napolean_Solo#2907: I have used it
Napolean_Solo#2907: Davinci to be precise
Teemochu#8740: valid, but in practice it isn't being used for that at the moment [en masse], and humans can just as easily make false writings
EricHallahan#1051: If I really wanted to spread misinformation using language models is not a good way to do it.
Teemochu#8740: photography would be a different level since being photorealistic is much harder for a human than mere fiction writing
Napolean_Solo#2907: But now you can automate it
Teemochu#8740: The writing itself isn't the part that would benefit the most from automation... it's personalizing the writing to the reader that would be the main benefit of language models
alstroemeria313#1694: vqgan+multimodal transformer wouldn't suffer from the smoothing
thenightocean#6100: I am less worried about misinformation. I mean these days I presume everything written on the internet is misinformation until proven otherwise.
I am worried that AI might soon generate novel images that might affect human brain in unexpected ways.
alexyz#3459: "I am worried that AI might soon generate novel images that might affect human brain in unexpected ways." @thenightocean elaborate
thenightocean#6100: its hard to say exactly cause I am hardly an expert, but it might be possible that there is a space of possible visual phenomena that might trigger dangerous reaction in humans. (thats only a possibility, I am not sure, and hopefully I am wrong)
guac#4716: I think AI porn will be pretty damaging to adolescent neural wiring.
thenightocean#6100: some variation in these theme basically: https://en.wikipedia.org/wiki/David_Langford#Basilisks
Parker#3197: basically a mind virus?
Daj#7482: We already have mind viruses
Parker#3197: like how neural networks are attacked?
Daj#7482: We call them memes/ideologies/religions
Parker#3197: lol
Parker#3197: true
Daj#7482: and porn
thenightocean#6100: It might be more direct than that
alexyz#3459: @thenightocean It's purely science fiction
Daj#7482: I personally don't expect the human visual system to be vulnerable to shock images/basilisks
Daj#7482: It can handle DMT
alstroemeria313#1694: ...didn't we have this exact discussion on this server before
Daj#7482: and (usually) turn out fine
Daj#7482: Probably lol
alexyz#3459: like memetic images or something in SCPs
Parker#3197: epilepsy for everyone
Daj#7482: Infohazards are real but they don't usually make your brain fry
Daj#7482: ~~except for Roko, maybe~~
Daj#7482: :berk:
Daj#7482: or Mufflax, more like
thenightocean#6100: I mean we dont know what will happen when we will have powerful AIs that can create media which has been trained to induce negative effect
Teemochu#8740: The most dangerous "infohazards" in practice are Copenhagen things
Daj#7482: https://discord.com/channels/729741769192767510/730451873613611079/782689000783609876
:berk:
Teemochu#8740: e.g. knowing about a drowning child is dangerous if you aren't a strong swimmer, at least to the extent someone might hold it over on you for doing nothing (as well as the psychological damage from thinking about it in the future... tbh come to think of it a lot of trauma is infohazardous in nature)
Daj#7482: I think the input channel is probably pretty robust to fuzzing and such. The higher level beliefs aren't resistant to certain memetic hazards
alexyz#3459: The only thing that is close is the the McCollough effect
alexyz#3459: where there's an image that if you stare at it for like 15 minutes
Daj#7482: AI will soon be able to just have nanite dust rewire our brain anyways
alexyz#3459: it literally can get stuck in your brain for months
thenightocean#6100: like, I can even imagine images which would affect me in a bad way like that (I wont go into details for obvious reasons)
Teemochu#8740: I can't really imagine much that would if I knew it wasn't real, except for McCullough type things
Teemochu#8740: and I don't have a weak imagination in that regard
Daj#7482: There was a scifi story in Nature once about doctors trying to treat soldiers exposed to nanite dust that settles in their spinal columns and replaces all input to the brain with highly optimized maximally traumatic imagery
alexyz#3459: https://en.wikipedia.org/wiki/McCollough_effect
Daj#7482: s-risks are fun
alexyz#3459: so anyone want to test it for science? lmao
Daj#7482: I have a robust infohazard policy, no thanks lol
alexyz#3459: no but seriously it isn't that big of a threat
Daj#7482: Iknow what it is
thenightocean#6100: whats the name of that move that AlphaGO made that no one thought it made sense as it went against entire tradition of Go play?
alstroemeria313#1694: i've made disturbing-looking things while playing with feature visualization of vision models but they were just disturbing
thenightocean#6100: and it turned to be a brilliant move
alstroemeria313#1694: i also made cuteness-optimized furry characters once
thenightocean#6100: and alphaGO is baby toy compared to the systems we will have in couple of years.
alstroemeria313#1694: like, with a furry StyleGAN and CLIP
Daj#7482: anime people: :guilty:
alstroemeria313#1694: ehehe
Daj#7482: Yea obv
Daj#7482: I'm just saying I think the eyes are a shit input channel
Daj#7482: Give direct neural access and a sufficiently strong AGI can make you think, feel and do _anything_
Daj#7482: gg
thenightocean#6100: I just feel that idea like "there are no visual inputs that can seriously disturb human brain cause we havent seen this in known human history" is similar like "there is no way this move in Go would be good, as we haven't seen that move win in entirety of human history"
Daj#7482: Nah I think it's different
thenightocean#6100: just feel things are going to be much weirder than, "GPT writes fake news"
Daj#7482: But maybe that's because I have pretty intense closed-eyes-hallucinations 24/7
Daj#7482: Which feels like fuzzing
Daj#7482: and I'm fine
Daj#7482: (Or am I 🤔 )
alexyz#3459: OR it could be similar like "there are no unicorns cause we havent seen them in known human history"
alexyz#3459: Because sight is something almost every human has
alexyz#3459: and has had for 200k years
Daj#7482: I think there's a tail probability that images like you describe exist, but I just don't think it's likely
alexyz#3459: everything's possible
bmk#1476: move 37 of game 4 i think
bmk#1476: this is totally off hand though so don't cite me on that
bmk#1476: wait it might be game 2
bmk#1476: game 4 is the one alphago lost
triggerhappygandi#0001: Inb4 Francois Chollet uses this to rag on pytorch
chilli#5665: Wdym?
chilli#5665: He already complained about that article haha
triggerhappygandi#0001: He is always ready to fight about frameworks
triggerhappygandi#0001: Oh lok
triggerhappygandi#0001: Lol
chilli#5665: It was a year and a half ago
thenightocean#6100: my point isn't to focus just on this scenario, I am just generally annoyed with discussion that mostly focuses on stuff like GPT writing fake news, AI training effects on climate change, etc etc... Like if those are the only issues we have to worry about once everyone has access to multimodal systems 100x more powerful than anything that exists today, I say we should be super happy in that case
triggerhappygandi#0001: Game 3, idk what move
triggerhappygandi#0001: The funny thing is that in game 3 it played a one in ten thousand move
triggerhappygandi#0001: In game 4 Lee Sedol played a one in ten thousand move
triggerhappygandi#0001: What a comeback
chilli#5665: I'm gonna do a random retrieval that might be right
chilli#5665: Move 72
Daj#7482: Agreed. I guess your scenario seems quaint compared to even larger x/s-risk scenarios imo lol
thenightocean#6100: yes
chilli#5665: Damn, it was move 78
triggerhappygandi#0001: Will you ping Chollet about how jax looks like pytorch@chilli
chilli#5665: Why lol
triggerhappygandi#0001: For the kek
chilli#5665: And I don't think it's true that Jax looks like Pytorch
chilli#5665: If anything, TF 2.0 looks a lot more like Pytorch
chilli#5665: Haha
Deleted User#0000: as they say, imitation is the best form of flattery
triggerhappygandi#0001: It doesn't function like it, but it sure does _look_ like it
chilli#5665: Hmm, don't agree
chilli#5665: It doesn't have a similar module system, people often need to rely on control flow constructs like `lax.while`, etc.
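(As a hedged illustration of the control-flow point: inside `jax.jit`, a data-dependent loop has to be expressed with `jax.lax.while_loop(cond, body, init)` rather than a Python `while`, because tracing only sees abstract values. The function below is made up for the example.)

```python
import jax
import jax.numpy as jnp

@jax.jit
def halve_until_small(x):
    # cond/body operate on the loop carry (here, the array x).
    def cond(x):
        return jnp.max(jnp.abs(x)) > 1.0

    def body(x):
        return x / 2.0

    return jax.lax.while_loop(cond, body, x)

print(halve_until_small(jnp.array([8.0, 3.0])))  # -> [1.   0.375]
```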
triggerhappygandi#0001: Huh. I guess. From my limited experience flax looks a lot like it.
triggerhappygandi#0001: Programming wise
chilli#5665: I think it's only true on the surface, but it doesn't really feel the same when you're programming it
chilli#5665: For example, there's no easy ways of accessing the intermediate layer in a flax module iirc
triggerhappygandi#0001: I'll have to use it more to comment on that.
chilli#5665: Well, no good ways may be harsh
chilli#5665: More accurately, you need to use flax's ways of accessing the intermediate layers
chilli#5665: Since fundamentally, you're not dealing with a python module object
Sid#2121: I'm curious, what's the famous github issue?
chilli#5665: For example, see https://flax.readthedocs.io/en/latest/howtos/extracting_intermediates.html
chilli#5665: Lemme find it
triggerhappygandi#0001: That's harsh wtf
chilli#5665: It was basically some people complaining that optimizers were being moved under the keras namespace
triggerhappygandi#0001: Ah. So it doesn't play as nice with python as pytorch
chilli#5665: Found it: https://www.reddit.com/r/MachineLearning/comments/9ysmtn/d_debate_on_tensorflow_20_api
chilli#5665: Well, yes, this is all part of Jax's tradeoffs
chilli#5665: In order for you to get (not garbage) performance, you need to jit
chilli#5665: But once you've jitted, the code you're executing is a XLA blob and not python
chilli#5665: Which is great for perf
chilli#5665: But not great for flexibility
chilli#5665: So, the alternative is to not jit, but then it's really slow
Teemochu#8740: cuter than this image? https://www.youtube.com/watch?v=B4BwMRrufRo
chilli#5665: (which is fine for debugging, mostly)
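(A minimal sketch of the jit trade-off chilli describes; the function and shapes are illustrative. The un-jitted call runs op by op and is easy to poke at; the jitted one is traced once and then executes as a single compiled XLA program.)

```python
import jax
import jax.numpy as jnp

def step(w, x):
    return jnp.tanh(x @ w)          # op-by-op: slow, but easy to debug/print

fast_step = jax.jit(step)           # traced once, then runs as one XLA program

w = jnp.ones((512, 512))
x = jnp.ones((64, 512))
y = fast_step(w, x)                 # first call compiles; later calls are fast
```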
alstroemeria313#1694: see #art , i just posted some
thenightocean#6100: agree. All I am saying is that, once the image generation gets really good, I am staying out of the #art channel permanently ๐
elderfalcon#4450: @Sid Chilli found it above: https://github.com/tensorflow/community/pull/24
Not too much drama that I run into, but if you're a sucker for prime time drama in the ML community that's a good place to go, haha.
chilli#5665: Lol if you follow fchollet there's a lot of drama
chilli#5665: There was that one a couple months ago where fchollet found a fchollet parody account and claimed it was a Pytorch dev
elderfalcon#4450: Hahahaha
chilli#5665: Afaict, it was without any evidence
chilli#5665: For some reason, he's constantly made the bizarre claim that Pytorch devs advertised Pytorch a ton (which is why it gained popularity)
chilli#5665: Including 1. A ton of astroturfing on reddit and HN
chilli#5665: And 2. Their marketing was based off of "appealing to users' sense of superiority"
elderfalcon#4450: My senses when I see Python load PyTorch^^
https://c.tenor.com/3Ci5xA64A_oAAAAM/feelit-itscoming.gif
elderfalcon#4450: My favorite bit from the optimizers thread, from fchollet (sorry, can't help but splurge in the drama a bit): https://cdn.discordapp.com/attachments/729741769738158194/832720675055468554/Screenshot_20210416-164926.png
elderfalcon#4450: Then later in the same post:
elderfalcon#4450: https://cdn.discordapp.com/attachments/729741769738158194/832720740108337247/Screenshot_20210416-165408.png
chilli#5665: Actually, I guess it's not that bizarre
chilli#5665: He just really hates Facebook
chilli#5665: Lol
elderfalcon#4450: I'm sure he's not a bad engineer, it's just a shame that technical issues around the project itself seem (from my outside perspective) to become deeply personal issues to him, like he's defending against a personal attack.
I'd hope he can get through that bit, it's just a shame to see good talent and all of the work on certain things go to waste.
In any case... anyone willing to be brave enough to make a pitch for Jax? I haven't tried it yet and don't know if it's something that would be worth doodling around in yet (though I'm generally a later adopter, despite loving basic research itself).
bmk#1476: @elderfalcon what's the tldr of the issue?
bmk#1476: a quick skim and i couldn't see any obvious problems
chilli#5665: @bmk this one
chilli#5665: Or err, that thread sums up the complaints
thenightocean#6100: I thought you gonna post his github comments where he said he wont add support for pytorch cause he doesnt care about whats currently "hip"
chilli#5665: Oh yeah, I was looking for that one
chilli#5665: Ah, found it: https://github.com/keras-team/keras/issues/5299
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/832724564604616784/Screenshot_20210416-140936_Chrome.jpg,https://cdn.discordapp.com/attachments/729741769738158194/832724564940685395/Screenshot_20210416-141001_Chrome.jpg
chilli#5665: @elderfalcon
chilli#5665: He didn't call it a "hip" framework, he called it a "novelty" framework + "hipster"
voxs#0001: anyone know how long colab pro will stay alive if i close the tab
Louis#0144: an hour or two
Louis#0144: not long
elderfalcon#4450: It may never die. If so, congratulations -- you have created the first eternal being.
That, or, errr, an hour or two. That's probably more like it. I don't know.
voxs#0001: lol it died in 30 minutes
EricHallahan#1051: Just use CPU instances lol
voxs#0001: cpu slow af
EricHallahan#1051: Not when you design for it.
https://discord.com/channels/729741769192767510/730484623028519072/818689669067833364
voxs#0001: https://cdn.discordapp.com/attachments/729741769738158194/832736351220531230/image.png
EricHallahan#1051: Well, it is only slow in comparison. It is usable however on CPU and most of the time I just do other things while I run it in CPU instances. It isn't deterministic either on GPU.
EricHallahan#1051: I've had CPU instances last for hours before.
Sora#8531: Please correct me if Im wrong but isn't it counterintuitive to use image sizes of 384x384 for fine-tuning and evaluation of ViTs when the training images were 224x224 since the sequence length of fine-tuning would be higher than the one for pre-training?
From what I understand "standard" transformer models (or at least BERT) can accept variable length inputs by padding (adding a PAD token) to the input, but the max sequence length (number of words for example?) is fixed and therefore the input sequence length cannot be bigger than this max.
Moreover, in Training Tips for the Transformer Model (https://arxiv.org/abs/1804.00247), Popel et al. state transformers do not generalize well to sequences longer than the ones they were trained on. Is the above correct?
Furthermore, if they use padding to make transformers accept variable length "sentences"/sequences in NLP, why dont we use variable resolutions and pad them in CV?
kindiana#1016: you throw away the pos embedding when you increase resolution in the last couple epochs (or not if you use RPE)
kindiana#1016: you could use variable resolutions, but its just kinda annoying to implement lol
CRG#8707: Don't they linearly interpolate the PE?
nz#9710: They do, yea
kindiana#1016: oops my bad
kindiana#1016: seems interesting that you can upsample the pe :thonk:
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/832939738864156732/f296b4ab4977dcb7cc4c20c0ca68ca5e.png
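(A rough sketch of that interpolation step, assuming a DeiT/ViT-style `pos_embed` of shape (1, 1 + N, dim) with a leading class token and 16-pixel patches; the function name and sizes are illustrative, not the exact code from any of the papers.)

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed, old_res=224, new_res=384, patch=16):
    """Interpolate learned ViT position embeddings to a new input resolution."""
    old_grid, new_grid = old_res // patch, new_res // patch
    cls_tok, grid_tok = pos_embed[:, :1], pos_embed[:, 1:]
    dim = grid_tok.shape[-1]
    grid_tok = grid_tok.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    grid_tok = F.interpolate(grid_tok, size=(new_grid, new_grid),
                             mode="bicubic", align_corners=False)
    grid_tok = grid_tok.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)
    return torch.cat([cls_tok, grid_tok], dim=1)

pe = torch.randn(1, 1 + 14 * 14, 768)        # 224/16 = 14x14 patches
print(resize_pos_embed(pe).shape)            # torch.Size([1, 577, 768]): 24x24 + cls
```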
Sora#8531: Im doing experiments where that statement doesnt hold true, like at all
kindiana#1016: which statement?
Sora#8531: And going back at it after being more famiiar with NLP makes sense
Sora#8531: That increasing the resolution for fine-tuning is beneficial
kindiana#1016: the idea is that you can get better results for the same training compute compared to high resolution the whole time
kindiana#1016: I imagine if you have very strong scale constraints on the objects it wouldn't work
kindiana#1016: otherwise its a bit of a :thonk: why it would hurt
Sora#8531: Yeah and Im saying that my hypothesis is that youre increasing the compute while decreasing the end performance
Sora#8531: I may need to verify that claim but I think it makes sense
kindiana#1016: I find it difficult to believe that the performance would decrease
Sora#8531: Ill be back
Sora#8531: Remindme like a month or so
Sora#8531: Is there any large scale vision research being done by this group?
Sora#8531: I see a lot of cool topics but I see most is NLP, with the multimodal, and then proteins
kindiana#1016: there's some people doing that stuff, I think @nz in particular
kindiana#1016: its possible that there will be a unified clip/dalle/gpt model at some point
nz#9710: yup! my current code is here https://github.com/NZ99/self-attention-experiments-vision, so @Sora if you're interested in vision we could collaborate!
nz#9710: several folks have mentioned being interested in contributing, could we maybe have a dedicated channel to better coordinate? @Daj
Daj#7482: We're currently trying to figure out how to better formalize the process of getting resources from EAI, bear with us haha.
Daj#7482: If this is a concrete project with goals, a team, etc (preferably with one or more L5 people attached), we can get a channel and resources set up potentially
Daj#7482: We are trying to only create new channels when there's a demonstrated need
nz#9710: I'm from mobile rn but I wrote a proposal, the other folks who have mentioned being interested in contributing are micpie and ghandi
Sora#8531: What do you have in mind? I guess something related to BotNet transformers? Do we need to use jax for experiments or would pytorch be ok too? And how can I contribute?
nz#9710: Interested in all models with good scaling properties, currently using flax to be able to handle TPUs
Daj#7482: Awesome! We can make this a thing then, yes
nz#9710: It's at a really early stage so there's much to do, the main goals are to 1. reproduce research papers and release pretrained models, 2. evaluate them on common hardware (mainly step time) and 3. scale them up to evaluate their scaling properties (on imagenet 21k)
Daj#7482: You'll have to tell me/us what you need and when
nz#9710: As soon as I'm back home I'll resend the proposal
kindiana#1016: https://docs.google.com/document/d/1cS0DFJu2e5BuKtXSnTtRII-lvw5sNVMyFOyP3lHO7h4/edit
Daj#7482: Looks great
Daj#7482: If you think a channel would be useful, happy to create it. Hardware is also available, just need to hash out what you need and how to get it to you most effectively
Daj#7482: Can also set up Eleuther git repos if you wanna make it official
ethan caballero#6044: I think proposal should be modified to emphasize single epoch unsupervised computer vision. For example, there are 1e14 frames on youtube (i.e. no organization will ever finish a single (compute-optimal) epoch of youtube).
Sora#8531: What do you guys think about pytorch lightning? I have spent all this week reading into lightning in order to port all my research code to that framework since supposedly it should help with the boilerplate and allows for cpu, gpu and tpu (according to them). I can do that meanwhile and hopefully then we can compare if it does perform as well as jax in tpus (though from the faq I thought you were planning to gradually transfer to using gpus due to your provider or something)?
Deleted User#0000: I've been using it, and I think it works well. It's probably not quite as efficient as TPUs with TF or Jax, but it does work. Also I find TPUs to be rather finnicky to optimize to get good performance, and more so with pytorch, but that's independent of lightning
StellaAthena#3530: FWIW, we hate TPUs too and wish we could do everything on GPU. We just have access to hundreds of thousands of dollars worth of compute on TPU and it's a shame to have it go to waste.
Deleted User#0000: yeah same
Deleted User#0000: thankfully i now got access to many gpus from my institute
Deleted User#0000: they were complaining that they had too many gpu hours available, and people weren't using enough. I'm here to solve that xD
StellaAthena#3530: I imagine that must be a huge burden on you
Sora#8531: May I ask, if it's not secret, how did you get so much TPU compute?
StellaAthena#3530: Do you know what TRFC is?
Sora#8531: https://sites.research.google/trc/
This?
StellaAthena#3530: Yeah they just renamed it last month to TRC but nobody knows that name yet
StellaAthena#3530: Basically if you keep sending them emails saying "hey look at all the cool things I'm doing can i have more compute" they tend to say yes
Sora#8531: Huh, for real, how legit do you have to be as a researcher to get them to give you access to compute? And well Im guessing you basically combined all the TPU hours from everyone in here or something?
Sora#8531: So in paper you have enough compute to reproduce ALL the vision transformer papers in one single environment and probably even extend all of their experiments and probably still wouldn't run out of compute ๐ค
Sora#8531: That's amazing
Daj#7482: They're extremely generous with their "base access" (~100 v2-8 and a handful of v3-8 for 30 days), you need close to no qualifications to get that usually. They're usually pretty happy to give more if you can show them some cool project you're working on (they also love you sending bug and experience reports!)
Daj#7482: We have a bit of special treatment and get a _lot_ more since I'm one of the oldest members of the program and have gotten to know the guys in charge
mgostIH#0245: AGI juice is all about sharing
Sora#8531: Would it be part of this proposal to do a LRA (Long Range Arena, the paper where they compare transformers for long sequence tasks) but more focused on the needs for vision tasks, say classification, detection, segmentation, generation, etc. Also maybe with a new VTABv2 that fills this requirement?
nz#9710: Yea a channel would probably be useful, up to now we mainly discussed it through PMs
nz#9710: Once I'm convinced that code quality is high enough sure, but for now (since this is my first time ever doing something like this) I would rather keep it separate
nz#9710: Regarding compute, the code is currently for TPUs (it uses bfloat16) but it should be really easy to adapt for GPUs.
nz#9710: What kind of ready to use datasets are you thinking about? Can Aran's and Ben's one be used for classification pretraining? And do you think it would be enough for a single epoch training run? Also, are you thinking about semi-supervised or really unsupervised methods here?
nz#9710: Are you interested in linear attention for high resolution use-cases? Because if so there have been several ViT variants designed just for those. In any case other tasks (right now the aim is to be able to evaluate on imagenet, v2 and real) would be cool to add too.
StellaAthena#3530: I'm not that familiar with the vision transformers lit., but we are on track to finish a 6.7B GPT-3 model in less than a month.
StellaAthena#3530: And we could do several of those in parallel if we wanted to
Sora#8531: https://arxiv.org/abs/2103.15358
I know this one explores quite a few variants ported from NLP but which others have there been?
Should we begin by first compiling a list of models (maybe a google doc or hackmd or whatever) with all the results on a table so we could have it organized or something?
nz#9710: I'm finishing up a blog post just about vision transformers research
nz#9710: https://hackmd.io/@NZ99/rks7-N7UO this is the latest version
nz#9710: Still need to finish it though, other variants you may be interested in are Pyramid Vision Transformer (also discussed in the one you linked), Swin Transformer and LeViT
StellaAthena#3530: That seems like a good idea
nz#9710: I would like to have a graph summing up parameter and FLOPS efficiency of all models, the issue is that both aren't really indicative (parameters are just memory, and FLOPS are not indicative given that different ops have different hardware utilisation). It's part of why in the project we have an evaluation objective -- step-time and inference time are way more important, and providing 1:1 comparisons on common hardware would be a good contribution to the CV community
Sora#8531: So this Eleuther AI thing has less than a month?? Wow I took you guys would had been working together for at least like half a year by now
Sora#8531: And awesome blog!
Sora#8531: I knew there had been a lot of variants but when you put it all into a single document it can get a little bit disorienting
Sora#8531: Overwhelming is the word my bad
Sora#8531: Like a lot to digest in parallel
nz#9710: Yea, I've been trying to group them up in sections (e.g. some bringing convolutional biases into vision transformers, others focusing on hierarchy) but it's hard since many variants make use of both
nz#9710: (oh and of those I mentioned I would look into swin-transformer in particular -- the authors recently release code + models as well https://github.com/microsoft/Swin-Transformer)
EricHallahan#1051: No, no, we have been doing stuff since last summer (I arrived at the end of January). We have been training 6.7B for a pretty short amount of time all things considered.
Sora#8531: Are you planning to add the video variants, or just the ones used for image?
nz#9710: if there's interest maybe, it's really dependent on interest and how many will be involved
StellaAthena#3530: We started in August 2020, but we really hit our stride around January 2021. I was saying that the 6.7B model will take a month to train as a representation of our compute.
Sora#8531: I've been closely following the development too. CvT is the state of the art only using imagenet21k, isn't it? Though Swin definitely seems more versatile (and sota for every other task)
I meant to the blog mostly, since it's already so complete. Btw I guess you probably know but the FB authors just released their code two days ago:
https://github.com/facebookresearch/TimeSformer
Sora#8531: Oh okay I get it. It's still amazing what you guys have done in such a short period of time
gabriel_syme#3220: thanks looks like a great blog and post!
nz#9710: I think so, and they didn't even use the DeiT training recipe! Yea I heard about timesformer (and related vision transformers for video, there's ViViT too), was thinking about whether I should add a section for those too.
nz#9710: Thank you!
nz#9710: Oh I thought you meant for the replication and scaling up project, but for the blog post it's much easier to add, as I said I may very well do!
Sora#8531: https://arxiv.org/abs/2102.00719
https://arxiv.org/abs/2102.05095
https://arxiv.org/abs/2103.13915
https://arxiv.org/abs/2103.15691
In order of arXiv release (AFAIK)
nz#9710: I had https://arxiv.org/abs/2103.10043 also written down, though I need to check how interesting it is
Sora#8531: TIL apparently Samsung has a research division in America
Sora#8531: They didn't compare in Kinetics-400 so don't know how to feel about those results
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/833001622061711360/image0.png
Louis#0144: omgggg
Louis#0144: @bmk is this u
bmk#1476: what?
Louis#0144: "I came across your profile in the GTC attendee list and had to reach out to get your thoughts on what my company, Aspire, is doing! We are executing a bottom-up approach to AGI (start with n=1 and then generalize across all n) which I believe is much better than the top-down approach that groups like OpenAI are doing. The end goal is the concept of Personal Intelligence (PI).
To do this we are creating a desktop AI assistant that incrementally automates more and more of a user's tasks. It is a multi-modal intelligent agent that learns tasks from demonstrations and natural language instructions. Who knows, eventually, it could even write a novel ;).
I'd love to tell you more and learn about your work and story.
Yours Truly,
Anish"
Louis#0144: Some crank emailed me
Louis#0144: Lmao
bmk#1476: just ignore lol
loganengstrom#9145: Hi, is there a smaller Eleuether model available than the 1.3B parameter model?
Louis#0144: Soon
Louis#0144: Very soon
Louis#0144: Actually wait didnt we already release 117M
Louis#0144: Lemme check for you one sec
loganengstrom#9145: thanks!
Louis#0144: https://huggingface.co/EleutherAI/gpt-neo-350M
Louis#0144: https://huggingface.co/EleutherAI/gpt-neo-125M
loganengstrom#9145: Sweet, thank you very much!
Louis#0144: Np
loganengstrom#9145: Does it have the same tokenizer etc?
Louis#0144: Yes
loganengstrom#9145: As the larger models
Louis#0144: For future reference though we do not do tech support here
Louis#0144: We're all extremely busy
Louis#0144: Just this once since it's a Saturday morning and I have a bit of time off rn
loganengstrom#9145: My bad, I'll do my own research next time for such simple questions
loganengstrom#9145: I'm sorry in advance if this is an easily answerable question, I looked into it in depth and couldn't find a solution with confidence
loganengstrom#9145: Is there a collection of models that has only been trained on Openwebtext?
loganengstrom#9145: (or openwebtext2)
loganengstrom#9145: I saw that on the deprecated info page (https://github.com/EleutherAI/info) that there is at least one, but I couldn't find exactly where these models are
loganengstrom#9145: Looking at https://huggingface.co/EleutherAI it looks like there are only "the pile" trained models readily available
EricHallahan#1051: If you haven't already, check out the FAQ at https://eleuther.ai/faq
EricHallahan#1051: (We deprecated the info page a while back for the FAQ.)
EricHallahan#1051: All the current models are Pile, yes.
loganengstrom#9145: Does that mean there are no webtext datasets available?
loganengstrom#9145: err
loganengstrom#9145: models
loganengstrom#9145: not datasets
EricHallahan#1051: A significant portion of Pile is OpenWebText2.
EricHallahan#1051: See page 2 of the preprint:
https://arxiv.org/abs/2101.00027
loganengstrom#9145: right! unfortunately I'm trying to isolate the impact of training on webtext alone
bmk#1476: we dont have any models trained on only owt2
loganengstrom#9145: ok, thank you!
StellaAthena#3530: The Pile paper does have some experiments about how much the different components of the Pile differ from each other and other common datasets
Daj#7482: I'm pretty busy today, ping me during the week or ping Stella/bmk to get this set up :)
nz#9710: sure! thank you so much!
StellaAthena#3530: GitHub repo, discord channel, anything else? Do they need compute yet?
Daj#7482: Ask @nz
StellaAthena#3530: Fair lol
StellaAthena#3530: GitHub repo, discord channel, anything else? Do you need compute or data storage yet?
Daj#7482: tfw conference in american timezone. 5PM to 2AM rip
nz#9710: compute not yet, need to clean up code (will do this week after finishing blogpost). storage yes but we can do later in the week too since imagenet is just 170 GB (already have it downloaded)
StellaAthena#3530: We have image net.... somewhere
nz#9710: as I mentioned repo probably best to wait until code quality is high enough, but a channel would indeed help coordinate those interested in contributing
nz#9710: that would be cool!
StellaAthena#3530: Project name?
EricHallahan#1051: Should we consider the reorg of the channels?
EricHallahan#1051: This is sounding like a good time for making a decision on that.
nz#9710: Yea I was unsure about that, currently my repo is "Self-Attention Experiments in Vision" (agreed with micpie) as we planned on mainly working on ViT-derivative models, but if there's interest in CNNs too a more general name (such as EleutherVision) may be better
StellaAthena#3530: Yeah I was going to update and repost my suggestion for that later today
StellaAthena#3530: I meant something a little pithier for a channel
nz#9710: would #vision be ok?
StellaAthena#3530: #vision
nz#9710: thank you so much!
StellaAthena#3530: $$\theta = \frac{\pi}{2}(\mathrm{Maximum Sequence Length})$$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/833041601684045844/193204646687408129.png
Lord_Drakostar#9337: you guys cause me pain
StellaAthena#3530: How so
Lord_Drakostar#9337: gpt-neo
Lord_Drakostar#9337: it's something that i would like to run so much
Lord_Drakostar#9337: and yet hours of figuring out ways to run go by
Sid#2121: run as in train, or?
Lord_Drakostar#9337: i joined a discord server and worked on figuring out how to run it raw
Lord_Drakostar#9337: and then i don' have any hardware capabilities
Lord_Drakostar#9337: so lol
EricHallahan#1051: What does raw mean
Lord_Drakostar#9337: Windows PowerShell apparently
Lord_Drakostar#9337: fun fact
Lord_Drakostar#9337: ```Collecting google-api-python-client
Downloading google_api_python_client-2.2.0-py2.py3-none-any.whl (7.0 MB)
|████████████████████████████████| 7.0 MB 2.2 MB/s
Collecting jsonlines
Downloading jsonlines-2.0.0-py3-none-any.whl (6.3 kB)
Collecting lm_dataformat
Downloading lm_dataformat-0.0.19-py3-none-any.whl (5.4 kB)
Collecting mesh-tensorflow==0.1.18
Downloading mesh_tensorflow-0.1.18-py3-none-any.whl (361 kB)
|████████████████████████████████| 361 kB 2.2 MB/s
Collecting numpy
Downloading numpy-1.20.2-cp39-cp39-win_amd64.whl (13.7 MB)
|████████████████████████████████| 13.7 MB 2.2 MB/s
Collecting oauth2client
Downloading oauth2client-4.1.3-py2.py3-none-any.whl (98 kB)
|████████████████████████████████| 98 kB 6.8 MB/s
Collecting ortools
Downloading ortools-8.2.8710-cp39-cp39-win_amd64.whl (42.3 MB)
|████████████████████████████████| 42.3 MB 3.2 MB/s
Collecting pytest
Downloading pytest-6.2.3-py3-none-any.whl (280 kB)
|████████████████████████████████| 280 kB 3.3 MB/s
Collecting sacred
Downloading sacred-0.8.2-py2.py3-none-any.whl (106 kB)
|████████████████████████████████| 106 kB 3.3 MB/s
ERROR: Could not find a version that satisfies the requirement tensorflow==2.4.0 (from -r .\requirements.txt (line 10)) (from versions: 2.5.0rc0, 2.5.0rc1)
ERROR: No matching distribution found for tensorflow==2.4.0 (from -r .\requirements.txt (line 10))
WARNING: You are using pip version 20.2.3; however, version 21.0.1 is available.
You should consider upgrading via the 'c:\users\jon\venv\scripts\python.exe -m pip install --upgrade pip' command.
(venv) PS C:\Users\jon\GPTNeo>```
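(For context: the wheel names in the log are `cp39`, i.e. Python 3.9, and TensorFlow 2.4.0 never shipped wheels for Python 3.9, which is why pip only offers the 2.5.0 release candidates; a Python 3.8 environment, or relaxing that pin, would likely get past this step.)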
Lord_Drakostar#9337: it can do stuff
Lord_Drakostar#9337: anywho after that train wreck here i am
Lord_Drakostar#9337: begging local use of gpt-neo
Lord_Drakostar#9337: because nonpro colab gpus suck
EricHallahan#1051: Literally the most powerful hardware I have ever owned is the four year old laptop I am typing on right now, so I feel you. Colab was my savior.
StellaAthena#3530: You canโt, unless you have chonky GPUs
Lord_Drakostar#9337: i have an rtx 2060
Lord_Drakostar#9337: im pretty sure it's not hardware actually
Lord_Drakostar#9337: lemme rephrase i just don't have the capabilities
Lord_Drakostar#9337: /dev/unknown gave up
cat_#4534: It usually gives me P100s or T4s with the occasional V100, all of which can run it fine
EricHallahan#1051: I just use CPU and let it run all day in the background lol
kurumuz#5695: based
gwern#1782: ('raw' gives me visions of one dude's workstation I saw which had no case, it was just parts sitting there wired together, and he would just reach in to the motherboard with a screwdriver to jump the power off/on pins. so.... *lewd*)
RyanT#5929: https://arxiv.org/abs/2012.08508
gabriel_syme#3220: this is a really cool research domain imo, with implications in many domains that deal with spatial understanding and design
Aspie96#5177: https://discord.com/channels/729741769192767510/794042109048651818/827238281099345960
You guys got me.
Aspie96#5177: I know I am late, just wanted to say you got me.
Parker#3197: @Sam_ https://twitter.com/sharifshameem/status/1282676454690451457
paulbricman#2527: Is it possible to fine-tune GPT-Neo 1.3B through Colab using a GPU rather than a TPU? Or would that not possibly fit in memory, even with deepspeed?
rb#3159: i have seen this video several months back, still the app is not out yet, demo only shows a highly cherry-picked example. and why do i have to fill out the form with what app i would like to build? pretty damn sure this is a scam
Parker#3197: I think I've seen a few other tools that were related to do similar stuff. I'm not sure if anyone has actually released anything, but the context was "is GPT able to write software," we just had to switch channels
rb#3159: say GPT could generate code from a natural language description alone, can you write the description in such a way that the code is exactly what you want?
rb#3159: and also, how would you ensure correctness of GPT-generated code?
rb#3159: generate test-cases from gpt? nope
nev#4905: how much GPU RAM would you need to inference GPT-2 medium at batch size 50000 and sequence length 256?
Parker#3197: generate it until it compiles. I doubt this works very well, but GPT has no problem generating code. it just doesn't always compile, halt, or look like what you want.
Parker#3197: but no, you very likely canโt get it to do what you want
Parker#3197: (unless someone makes some new discovery)
Parker#3197: I think it was literally just in relation to generation though, as they asked "~~what happens if~~ has it been trained on source code yet?"
Parker#3197: https://bellard.org/nncp/gpt2tc.html
Parker#3197: might be of interest to you
nev#4905: the man, the legend
nev#4905: I'm just asking since I did just that on a normal desktop 8GB GPU
nev#4905: 250k still runs
nev#4905: it was a bug 😳
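(A rough back-of-envelope for the original question, assuming GPT-2 medium's 1024-wide hidden states and 16 heads in fp16; a single hidden-state tensor at that batch size is already ~24 GiB, so an 8 GB GPU appearing to handle it did indeed point to a bug.)

```python
batch, seq, hidden, heads = 50_000, 256, 1024, 16   # GPT-2 medium-ish dims
fp16 = 2                                            # bytes per element
hidden_states = batch * seq * hidden * fp16         # one hidden-state tensor
attn_scores   = batch * heads * seq * seq * fp16    # one layer's attention matrix
print(hidden_states / 2**30, attn_scores / 2**30)   # ~24 GiB and ~98 GiB
```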
rb#3159: is it possible for dall-e to be used to generate images from text (not necessarily natural language) like given inchi notation generate image of the molecular structure?
mgostIH#0245: As long as you have training data sure
coozamano#5333: Has anyone tried exporting with the --export flag? I don't think exporting mesh models is supported it seems
coozamano#5333: alternatively: How can I export the 2.7B model as a SavedModel? I tried mesh_shape: 'x:1,y:1' which got me to a Assign shape mismatch error ([512,2560] vs [2048,2560])
nostalgebraist#3542: hello! i left this server for a while, but came back to ask some questions about released gpt-neo tf-mesh checkpoints.
specifically, the questions in my github issue here: https://github.com/EleutherAI/gpt-neo/issues/207 .
briefly: the pretrained release for 1.3B has weights stored as bfloat16. for 2.7B, the weights are stored as float32.
- Why do the two models differ in this way?
- Storage in bfloat16 is generally considered risky, at least for some architectures. Do we have evidence about whether the use of bfloat16 for checkpoints hurts the performance of the 1.3B model, relative to an identical model with float32 storage?
- I haven't used the Huggingface releases, but just glancing at the configs and file sizes, I get the impression the 1.3B there is stored in 32-bit precision.
- Is this true?
- If so, was it cast to 32-bit precision from the original 16-bit checkpoint? Or was it re-trained?
- I would be more likely to use these models in practice if I had a pretrained 1.3B checkpoint in 32-bit precision. (And "natively" so, not via casting from half-precision). Does such a model exist? Is one likely to be released later?
EricHallahan#1051: Hey, I was going to respond the other day to your tweet, but I got distracted by (arguably more important) things.
EricHallahan#1051: 1. Sid messed up when he set up the run for 2.7B, and did it in binary32 rather than bfloat16
nostalgebraist#3542: no worries, and sorry i spammed this in so many channels... i'm just so curious
Louis#0144: Hold the phone
Louis#0144: We released 13b???
Louis#0144: I wasnโt even aware we had a 13b
EricHallahan#1051: It is a typo Louis.
nostalgebraist#3542: my typo, click the github link for something i wrote more carefully
bmk#1476: for the bf16 run: the weights are stored as fp32 in memory, but activations are bf16, and also it's compressed to bf16 when saving
bmk#1476: as to why it's implemented this way, your guess is as good as mine
bmk#1476: i think it's because we just set it up like this and forgot about it lol
kindiana#1016: bf16 checkpointing doesn't make a big difference in practice, tracking the training curves you don't see a difference before/after the restore
kindiana#1016: you only round off to bf16 a very small amount of times compared to training steps
nostalgebraist#3542: btw for my current finetuning work, i was originally using the same setup (via the config), but this morning i went and cast the bfloat16 checkpoint to float32 and am now training that way (still 16 for activations).
i did that b/c i was worried about the before/after restore degradation that Ben brings up, so it's good to hear that may not be a practical issue
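(A minimal sketch of that kind of cast on the Hugging Face/PyTorch side, with hypothetical file names; the TF-mesh checkpoints discussed above would need the TensorFlow checkpoint tools instead.)

```python
import torch

sd = torch.load("pytorch_model.bin", map_location="cpu")    # hypothetical filename
print({k: v.dtype for k, v in list(sd.items())[:3]})         # inspect stored dtypes

# Cast floating-point tensors (e.g. bfloat16) up to float32, leave ints alone.
sd_fp32 = {k: v.float() if v.is_floating_point() else v for k, v in sd.items()}
torch.save(sd_fp32, "pytorch_model_fp32.bin")
```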
kindiana#1016: in general I wouldn't be worried about it, the effect is very minor if it exists at all
nostalgebraist#3542: it matters what you're rounding, though. activations vs gradients/weights
nostalgebraist#3542: if saving/loading a checkpoint doesn't hurt performance, then it's a non-issue
nostalgebraist#3542: it does surprise me, though
nostalgebraist#3542: when i read stuff about mixed-prec training it talks about the cleverness involved in picking the right places to use 16bit, how it won't work for all activation fns, etc
kindiana#1016: its less of an issue with bf16 vs fp16
kindiana#1016: the higher dynamic range really matters much more than precision
nostalgebraist#3542: based on that, my knee-jerk guess for "is everything in the transformer safe for bfloat16 storage" was "no"
bmk#1476: from what I've heard, it's actually not storage where the most problems happen but accumulation
nostalgebraist#3542: interesting. i know bf16 was designed for that to be true, but i also read a blog post somewhere complaining about its adoption in public checkpoints, so i wasn't sure
(the post mentioned gpt-neo actually)
kindiana#1016: well, its annoying if you want to run on something that doesn't support bf16 lol
kindiana#1016: like cpus, gpus
kindiana#1016: (except like a100s)
nostalgebraist#3542: is it? with the activations, you can just say "no do them as f32 now" and it just worksโข
kindiana#1016: its easy to run in fp32
kindiana#1016: but fp16 is nontrivial
nostalgebraist#3542: ohh got it
nostalgebraist#3542: yeah i tried making the gpu sample in f16 and... lol
nostalgebraist#3542: had that experience yesterday
nostalgebraist#3542: f16 is a... very poor representation format for bf16 data
zphang#7252: what's the complication with running fp16?
bmk#1476: low dynamic range
kindiana#1016: if your activations are bigger range than fp16 = nan/inf
bmk#1476: :ptsd:
bmk#1476: speedrun moment
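(Concretely, the dynamic-range gap: binary16 tops out around 6.5e4, while binary32, and therefore bfloat16 with its 8-bit exponent, reaches roughly 3.4e38. Quick numpy check:)

```python
import numpy as np

print(np.finfo(np.float16).max)   # 65504.0: largest finite fp16 value
print(np.float16(70000.0))        # inf -- anything past ~6.5e4 overflows
print(np.finfo(np.float32).max)   # ~3.4e38; bfloat16 keeps this same exponent range
```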
nostalgebraist#3542: oh, another thing i remembered looking into this was the OA summarization paper where they did one model with 16-bit weights and the rest in 32 and had a footnote about it
nostalgebraist#3542: anyway, thanks for the answer!
zphang#7252: oh I thought complication as in code-wise
zphang#7252: (referring to Ben's comment that it's "nontrivial")
kindiana#1016: ah
kindiana#1016: its easy to run and have it give garbage out :berk:
nostalgebraist#3542: tracked down the "blog post" i remembered, it's actually this https://discuss.huggingface.co/t/mixed-precision-for-bfloat16-pretrained-models/5315
which i misremembered as being about storage, but it's not, it's the same issue ben's talking about. with training in bf16 activations and then running in f16 activations
AI_WAIFU#2844: You know, I'm kinda pissed that fp16 is picking up so much steam, because for traditional NNs you can get away with it, but there are a lot of non-standard applications where you do want that extra precision, and because of that you can't use dedicated accelerators for the workloads because they're all meant for fp16
AI_WAIFU#2844: Hot take, since we store everything in fp32 and nobody can seem to get more than ~50% MXU utilization for anything that matters, tensorcores/tpus should just work with fp32 all the way through.
kindiana#1016: :thonk:
kindiana#1016: but... you are going to get even lower mxu with fp32 lmfao
AI_WAIFU#2844: I guess you do lose 50% of your effective bandwidth
kindiana#1016: yeah
kindiana#1016: and 50% cache
EricHallahan#1051: Compromise on bfloat16 lol
EricHallahan#1051: It isn't a silver bullet.
kindiana#1016: pretty close tbh
AI_WAIFU#2844: The thing is though is that for the really big models MXU utilization stops being a limiting factor altogether.
kindiana#1016: how so?
AI_WAIFU#2844: Eventually the internode bandwidth becomes the limiting factor no?
kindiana#1016: depends on how big really big is
kindiana#1016: for hundred B models mxu certainly is a limitation
kindiana#1016: even at Ts
AI_WAIFU#2844: Hmm...
AI_WAIFU#2844: And I guess the next generation of processors/systems are just gonna jack up the amount of internode bandwidth.
AI_WAIFU#2844: I do still want the flexibility that FP32 gives you, + at least most of the speed you get out of MXUs/tensor cores.
AI_WAIFU#2844: I don't want to have to think "is this bullshit I'm about to try gonna NaN out?"
kindiana#1016: well, eventually hw is all going to be bf16 and multiple passes for higher precision
AI_WAIFU#2844: multiple passes for higher precison?
kindiana#1016: https://arxiv.org/pdf/1904.06376.pdf
chilli#5665: intriguing..........
kindiana#1016: that's how jax's precision api works on tpu
chilli#5665: Like, TPUs only compute in bfloat16, and if you want to use higher precision it uses this technique?
kindiana#1016: yeah
kindiana#1016: the mxu only operates in bf16 afaik
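(This is what JAX's `precision` argument selects on TPU: the default is a single bf16 pass through the MXU, while `Precision.HIGHEST` requests the multi-pass, near-fp32 mode. The sketch below uses illustrative shapes; on non-TPU backends the flag may simply fall back to ordinary fp32.)

```python
import jax
import jax.numpy as jnp

a = jnp.ones((256, 256), dtype=jnp.float32)
b = jnp.ones((256, 256), dtype=jnp.float32)

fast    = jnp.dot(a, b)                                       # default: one bf16 pass on TPU
careful = jnp.dot(a, b, precision=jax.lax.Precision.HIGHEST)  # multiple bf16 passes, ~fp32
```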
chilli#5665: imagine working in numerical analysis and writing papers like this :thonk: https://cdn.discordapp.com/attachments/729741769738158194/833529545801859094/unknown.png
chilli#5665: Do you understand the actual technique here?
kindiana#1016: not really tbh
kindiana#1016: more passes with fancy accumulation is the limit of my understanding lol
chilli#5665: I'm kinda surprised that this is used by TPUs actually
chilli#5665: since this paper isn't from google
kindiana#1016: source: https://github.com/google/jax/issues/2161
chilli#5665: I see
AI_WAIFU#2844: Ok this pretty much addresses my concerns
chilli#5665: There's also this haha, although the current pass doesn't work for matmuls IIRC: https://github.com/google/jax/pull/3465
AI_WAIFU#2844: Also I was surprised to find out that bfloat16 actually has lower precision than fp16
chilli#5665: I wasn't actually aware that this could be done on matmuls, although I actually read that issue at some point in the past?
AI_WAIFU#2844: I would have though it was the other way around.
chilli#5665: Why? Isn't BF16 (approximately) just a shorter mantissa
chilli#5665: but more exponent bits?
kindiana#1016: well the idea is "lets just take the first 16 bits of fp32" lol
kindiana#1016: pretty :bigbrain:
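(You can see the "first 16 bits of fp32" idea directly; masking the low mantissa bits is truncation rather than the round-to-nearest that real hardware uses, but it shows what survives: sign, 8 exponent bits, and the top 7 mantissa bits.)

```python
import numpy as np

x = np.array([3.14159265], dtype=np.float32)
bits = x.view(np.uint32)
bf16_like = (bits & np.uint32(0xFFFF0000)).view(np.float32)
print(bf16_like)   # [3.140625]
```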
AI_WAIFU#2844: I figured that precision would be more important for NNs since we can control magnitudes in the NN pretty well.
kindiana#1016: lol controlling magnitude isn't super easy
kindiana#1016: esp if you are doing stuff like softmax where dynamic range matters
chilli#5665: iirc, isn't the main issue that exceeding your dynamic range is often the primary cause of massive training instability?
kindiana#1016: yeah
kindiana#1016: if your activations are consistently nan/inf its game over
chilli#5665: Like, it might be true that after you get to inference the vast majority of your values can be controlled to a smaller dynamic range
kindiana#1016: bigger dynamic range is also important for eps
chilli#5665: but it's just harder to deal with this kind of stuff during inference
kindiana#1016: which is sprinkled everywhere lol
chilli#5665: I wonder how much of this just has to do with our neural networks being developed with fp32 in mind
chilli#5665: lol
chilli#5665: Like, if we'd just started with fp16, I can't imagine that we have as many disasters as we do now
kindiana#1016: lmao
kindiana#1016: should have just started with int8
chilli#5665: hmm, I guess I knew this was true at some level... but when put like this it's quite surprising
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/833532250553319434/unknown.png
chilli#5665: Why is this true in general?
chilli#5665: Like, fp32 matmuls are not 10x harder than fp16 matmuls
kindiana#1016: a hardware multiplier takes mantissa^2 power and area
chilli#5665: I don't really know why that's true haha, but assuming it is, that would imply 4x no?
chilli#5665: wait, are the FP16 multipliers completely separate from the FP32 multipliers?
EricHallahan#1051: ... Yeah?
kindiana#1016: Sometimes yes sometimes no
EricHallahan#1051: It depends.
kindiana#1016: You can split a fp32 into 2 fp16 units kinda
AI_WAIFU#2844: It's a bit more it's 23 vs 10 bits and with bf 16 is 23 vs 7
kindiana#1016: Requires extra hardware but is cheaper than completely separate
kindiana#1016: Only good if you are sharing the data path too (i.e. can't share cuda fp32 with tensor fp16)
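(For reference, applying the mantissa-squared rule of thumb to the bit counts quoted above, with the implicit leading bit included: 24²/11² ≈ 4.8 for fp32 vs fp16, and 24²/8² = 9 for fp32 vs bf16. The quadratic term is in mantissa width rather than total format width, which is why the factor lands closer to 10x than to the 4x you'd get from (32/16)².)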
jekbradbury#2280: this is not the first or last time that the XLA:TPU team came up with something, implemented it, and never really told anyone outside google and then someone else had the same idea and wrote a good paper (weight update sharding, aka ZeRO, is another such time)
modularmind#7576: Gday ๐ New here
dms#2699: TEAM ELEUTHER ROCKS!! KEEP IT UP
triggerhappygandi#0001: Thank you for recognizing ***my*** efforts alone :berk:
Louis#0144: Goose squad
cfoster0#4356: Hi y'all: we're hoping to get the rotary embeddings blog post out this (Western hemisphere) evening. Would be appreciative if y'all could give us some feedback on this draft as we wrap it up today https://cdn.discordapp.com/attachments/729741769738158194/833764836537794610/Attention_Theory.pdf
EricHallahan#1051: (I still need to finish the rest of my part.)
Sid#2121: I'm going to add Phil, Ben and I as authors and reword some of the experiments section if that sounds good
EricHallahan#1051: Are you good with how everything is written? I'll get out the branch to GitHub now.
cfoster0#4356: Please do
Deleted User#0000: hmm, i don't really mind if i'm not on it, if you can sneak ๐จ name somewhere in smallprint, that would be fun
Deleted User#0000: or not, i don't really care |
elderfalcon#4450: Yeah, I think this is the trend a lot more recently, especially on recent hardware. I'm a little bit concerned about the different fp16 types sharding everything around... having stuff be hardware-specific and super-similar like that is rather terrifying to me. :'(
cfoster0#4356: Ice cream as last author plz
EricHallahan#1051: Confirmation that Phil is the dog.
Sid#2121: It's mainly so that we can say "*We* ran these experiments" instead of "*Phil Wang* ran these experiments" as it makes for nicer wording, but we can do Ice Cream as last author lol
Deleted User#0000: i can tell ice cream on her evening walk she made it onto an academic paper
EricHallahan#1051: Archibald Eleuther as first?
EricHallahan#1051: lol
EricHallahan#1051: Are we just posting the PDF?
elderfalcon#4450: Though BF16 seems more promising? Maybe people with better experience could offer their thoughts, it seems to be more idealistically motivated in a good direction w.r.t granularity (vs/at the expense of range, IIRC.)
cfoster0#4356: Nah I think formatting it as a blog would be best
cfoster0#4356: Esp if we can get the visualization in there, even if it's just the version we had earlier, with some accompanying text
EricHallahan#1051: Okay, let me do a few things that I need too before this branch is made public. There are a lot of changes on this branch lol
elderfalcon#4450: Isn't the convention to say 'we' even if there's one author? I think that may be acceptable. We can count the power of EleutherAI as the other authors, amorphously. XD
cfoster0#4356: Yeah, I'd originally worded it awkwardly because I didn't want to claim other people's work ๐
fristiloverke#4159: did the original authors publish a paper on it yet
EricHallahan#1051: Not yet, apparently it will be published within a couple weeks time.
bmk#1476: should i write a section about how quaternions are bad and evil
EricHallahan#1051: If you want?
EricHallahan#1051: ¯\_(ツ)_/¯
cfoster0#4356: I'm deleting the word quaternion on sight |
bmk#1476: can someone send me the overleaf link
fristiloverke#4159: I feel like it'd be a bit rude to publish this before they publish theirs
bmk#1476: ytho
EricHallahan#1051: No, You already have it.
cfoster0#4356: We're publishing a blog post, and we let them know/have their blessing
bmk#1476: can someone send it *again*
Sid#2121: we reached out to them and they're fine with it
Sid#2121: it looks like a paper 'cause it's on overleaf but it's actually just going to be a blog post
bmk#1476: wait so should I *not* write that section
Sid#2121: @Deleted User https://cdn.discordapp.com/attachments/729741769738158194/833770599200325703/Screenshot_from_2021-04-19_20-26-27.png
cfoster0#4356: that is what I'm saying lol
Deleted User#0000: love it
bmk#1476: why tho?
Deleted User#0000: (my name doesn't have to be on it tho)
bmk#1476: or at least i wanna contribute somehow to the blog post, what can I do
EricHallahan#1051: If you want to explain why it doesn't work, go ahead.
bmk#1476: cfoster just said not to do it and that he'll delete it lol
EricHallahan#1051: ¯\_(ツ)_/¯
cfoster0#4356: umm
cfoster0#4356: I just don't want to pull focus |
bmk#1476: maybe as an appendix
bmk#1476: is that ok?
cfoster0#4356: @EricHallahan is there a good way to do end notes? That might be the place for it
bmk#1476: it would be a normal section, it just comes after the conclusion
EricHallahan#1051: There really isn't any infrastructure that exists right now for blogging.
bmk#1476: just add it as a normal section lol
bmk#1476: except it comes after the conclusion
EricHallahan#1051: It is really barebones.
EricHallahan#1051: Like there isn't a good way to even add the author list lol
cfoster0#4356: ok. @bmk If you're confident that you have a satisfying explanation to write up on it, I'd say go ahead
bmk#1476: ok
EricHallahan#1051: I wish we could just write the website in LaTeX and have it be nicely formatted...
EricHallahan#1051: Wait...
EricHallahan#1051: ๐ค
EricHallahan#1051: https://github.com/arxiv-vanity/engrafo
EricHallahan#1051: I guess it exists lol
EricHallahan#1051: No idea if it works, but it has to be better than porting to markdown
triggerhappygandi#0001: Vanity does look good
EricHallahan#1051: Like that is really the look I have in mind for blog posts: they look like papers, but they aren't and are responsive.
EricHallahan#1051: Like I think it is better to make sure it is done right rather than rushing. |
triggerhappygandi#0001: Can I has something to do
triggerhappygandi#0001: Or is it all done
EricHallahan#1051: You can run more tests.
triggerhappygandi#0001: Don't we already have comprehensive results
cfoster0#4356: We've got NeoX, mesh-transformer-jax, and lucid's Performer
EricHallahan#1051: No one wants to verify my claim that using the AIAYN sinusoidal initialization is sub-optimal.
cfoster0#4356: I'd prefer if the formatting makes it look like a blog and not a paper, but understand converting is a pain
triggerhappygandi#0001: Arxiv vanity looks like a blog
cfoster0#4356: Personally disagree but I get what you mean
EricHallahan#1051: As soon as it is responsive it doesn't look like a paper to me.
chilli#5665: would it be worth writing a little bit about performance characteristics?
EricHallahan#1051: https://discord.com/channels/729741769192767510/744116887687790643/831398959238217728
chilli#5665: Like, just benchmarking performance of rotary embeddings vs. regular embeddings.
cfoster0#4356: Yeah that'd be worthwile. If someone does it they should add it right after the pseudocode probably
bmk#1476: ok i wrote a quick draft of my section
bmk#1476: I'll clean up the formatting later today
bmk#1476: any feedback? @cfoster0 @EricHallahan
bmk#1476: and is it good enough to have in the main body of the post? because i really want it in the main body and not just in the appendix lol
EricHallahan#1051: Again, I don't know what this will look like when it is a webpage. ¯\_(ツ)_/¯
Sid#2121: @cfoster0 @EricHallahan should we include the partial rotary embedding results? |
chilli#5665: Cool, I'll do it
cfoster0#4356: I think the sentence about the torus should be moved up into the last paragraph of the main body, with the rest as a footnote explaining *why* the torus is the right solution
cfoster0#4356: @bmk
Sid#2121: are you gonna use neox? RoPE actually looks a little faster than learned abs if only applying it partially, especially with the jitting
chilli#5665: Although sid could probably also do it
chilli#5665: Haha
Sid#2121: I haven't tested it at multiple scales yet
chilli#5665: I was just gonna benchmark in isolation
Sid#2121: it's all yours
chilli#5665: Would probably be worth adding the experimental results from neo-x though
bmk#1476: sure, that makes sense
bmk#1476: I'll finish editing it and do that
EricHallahan#1051: This is rapidly expanding in scope. I am prepared to shut down new developments and start spliting it out into segments.
bmk#1476: i kinda wanna include all of it but meh
EricHallahan#1051: I was under the impression that this was going to be a blog post, not a deep dive fit for a dedicated paper.
guac#4716: this sounds like a paper disguised as a blog post lol poor jianlin su gettin' scooped
EricHallahan#1051: I would like to have *something* out tonight, and with all these new developments it is looking less likely that it will be prepared for that deadline.
Sora#8531: I know this may sound boomer as fuck but I feel like there's a lot higher probability of me taking an arxiv preprint (or even a pdf) seriously than a blog
Sora#8531: And yeah from an outsider's pov it does look like a paper
EricHallahan#1051: Then why take anything we do here seriously lol |
cfoster0#4356: I think keeping partial rotary in our pocket for further investigation
Sora#8531: A really informal and memey one but yeah
EricHallahan#1051: Just use Computer Modern. That will make it serious looking.
bmk#1476: i don't think it looks too bad to include all of it in the main body
bmk#1476: it takes up like a third of a page
cfoster0#4356: *we don't want it to be taken super seriously*
EricHallahan#1051: *except for when we do*
cfoster0#4356: The goal is "here's this cool thing someone else figured out, let's explain it and show what it can do"
cfoster0#4356: lmao
EricHallahan#1051: It is quickly expanding beyond that.
cfoster0#4356: I'm almost tempted to remove the instructions on how to cite our blog
janus#0150: Who could give me access to a TPUv3-8 to further my artistic endeavors? (or preemptible-32)
EricHallahan#1051: ¯\_(ツ)_/¯
Sid#2121: can do, DM me?
chilli#5665: I would ... consider it?
chilli#5665: Like, I would feel pretty crappy if this blog post ends up getting cited more than the upcoming rotary embeddings paper
Sid#2121: yeah we should probably remove the citation thing
EricHallahan#1051: Again, it would have been fine if it didn't become effectively a paper.
EricHallahan#1051: It is paper length now.
chilli#5665: eh, I think it's fine |
chilli#5665: since we explicitly tell people to cite the original blog post as well
EricHallahan#1051: I just think it would be worth splitting into chunks.
guac#4716: if ya'll introduce something novel then ask to be cited else it's kinda weird
Sid#2121: we haven't introduced anything novel and don't ask to be cited
Sid#2121: in fact we explicitly ask the opposite
EricHallahan#1051: We do a lot of testing, and the explanation is heavily developed.
guac#4716: i don't see tht explicitly on the linked pdf but okay
Sid#2121: it's in the very first section https://cdn.discordapp.com/attachments/729741769738158194/833787988147961866/Screenshot_from_2021-04-19_21-35-31.png
chilli#5665: I don't really agree that you need to introduce something novel to be cited
bmk#1476: i think empirical results are worth something too
chilli#5665: if you're contributing something valuable then you can be cited
EricHallahan#1051: I think the citation is good.
chilli#5665: Like, I mean, sure
chilli#5665: if you're doing
bmk#1476: ~~also the multi dimensional extension is novel~~
chilli#5665: "linear regression for dummies"
chilli#5665: it's kinda cringe to have a bibtex entry
guac#4716: in section 6 it doesn't say "cite them" lol
guac#4716: but okay i get it
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/833788356893605969/unknown.png |
guac#4716: the one I linked doesn't have that, it's outdated
chilli#5665: ok
guac#4716: https://cdn.discordapp.com/attachments/729741769738158194/833788458925293669/Screen_Shot_2021-04-19_at_3.37.36_PM.png
bmk#1476: actually I'm serious i can develop the multi dimensional extension further
Louis#0144: shouldnt #carp be directly beneath #multimodal
EricHallahan#1051: No
EricHallahan#1051: Yes
Louis#0144: why
Louis#0144: wut
EricHallahan#1051: ¯\_(ツ)_/¯
cfoster0#4356: I think the next blog post should be about the multi dimensional extension
bmk#1476: i don't think there's enough for a *whole* post
bmk#1476: anyways i don't think there's too much about multi dimension rn
EricHallahan#1051: I say we need to cut this into three sections: the theory, the testing, and future work.
bmk#1476: for the multi dimension post we'd need multi dimensional experiments too
bmk#1476: i can write way more theory for the multidimensional one
chilli#5665: Regular positional embeddings are just `q + self.embeddings`, `k + self.embeddings`, right?
cfoster0#4356: Well you'd typically add them at the very start, before separating into q and k, but basically yes
chilli#5665: mm, you mean, before you do `Q_k`/`Q_q`?
chilli#5665: i.e., the projection matrices? |
cfoster0#4356: Yeah. With most transformers, you add them before the projection matrices, once at the very first layer
cfoster0#4356: Whereas with rotary you're doing it at every layer, after the projections
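A rough sketch of the placement difference being described, in illustrative PyTorch (not the actual x-transformers/NeoX code; `rotate` below is a toy stand-in for the real rotary helper):

```
import torch
import torch.nn as nn

batch, seq, dim = 2, 16, 64
x = torch.randn(batch, seq, dim)
w_q, w_k = nn.Linear(dim, dim), nn.Linear(dim, dim)

# Learned absolute: add a learned position table to x once, before the
# projections (and only at the very first layer of the model).
abs_pos = nn.Parameter(torch.randn(seq, dim))
q_abs, k_abs = w_q(x + abs_pos), w_k(x + abs_pos)

# Rotary: leave x alone; project first, then rotate q and k inside every
# attention layer.
def rotate(t, base=10000.0):
    # pair up feature dims and rotate each pair by a position-dependent angle
    pos = torch.arange(seq, dtype=torch.float32)
    freqs = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    angles = pos[:, None] * freqs[None, :]              # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    t1, t2 = t[..., 0::2], t[..., 1::2]
    return torch.stack([t1 * cos - t2 * sin,
                        t1 * sin + t2 * cos], dim=-1).flatten(-2)

q_rot, k_rot = rotate(w_q(x)), rotate(w_k(x))
```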
cfoster0#4356: Are you talking, like, renaming/regrouping the sections? If so that sounds good to me
EricHallahan#1051: No, seperate posts.
bmk#1476: i don't think there's too much on the post rn
bmk#1476: this isn't very long
triggerhappygandi#0001: 3 posts sounds excessive for it
cfoster0#4356: Yeah, that's where I'm at rn
EricHallahan#1051: Okay, two.
triggerhappygandi#0001: Especially future work lol
EricHallahan#1051: lol
EricHallahan#1051: But I do think it is worth a separate blog post for the experimentation. (It will make it look like we actually use the blog lol)
StellaAthena#3530: Oh boy *how much* have y'all written in the past several hours?
EricHallahan#1051: Not as much as it sounds, but I feel like the scope is expanding and we are getting feature creep.
EricHallahan#1051: Especially considering it isn't even in Markdown yet.
StellaAthena#3530: Maybe my side hasnโt synced but it looks pretty much exactly the scope I intended
bmk#1476: we have just over 6 pages not including references
bmk#1476: that's entirely reasonable imo
cfoster0#4356: It'll be a long blog but a good one :hap:
StellaAthena#3530: I think we should cut the scope off where it is now |
StellaAthena#3530: This looks eminently reasonable to me
bmk#1476: my section can stay right?
StellaAthena#3530: What section
EricHallahan#1051: My opinion is more pre-emptive than anything. When three people ask "Can I write something" a few hours from when we wanted to be done, it feels like it has a case of feature creep.
EricHallahan#1051: And feature creep is never a good sign.
bmk#1476: I've wanted to write about this for the past few days, i just never found time to get around to it
bmk#1476: it should be a surprise to exactly nobody that i want to write about how quaternions bad
StellaAthena#3530: Is this something you have written yet? What is the thing you are talking about?
chilli#5665: @Sid these runtime results are actually quite surprising to me - what are realistic sizes for the Q/K vectors?
bmk#1476: i plan on finishing it very soon
bmk#1476: i typed it up on my phone so some of the formatting is wonky
bmk#1476: and also I'll upload the demo code i wrote later today (since it's on my other computer)
EricHallahan#1051: My problem is that I can't really see how it is going to look until I port it over, and for that to be worth my time I need it to be called done and dusted.
StellaAthena#3530: Ok....
EricHallahan#1051: That is why I am being a downer here.
cfoster0#4356: Shall we put a hard deadline on edits?
EricHallahan#1051: ¯\_(ツ)_/¯
chilli#5665: yeah i think so
bmk#1476: major edits or *all* edits?
Sid#2121: maybe something like [2048, 16, 12, 64] (for a smaller model), and [2048, 16, 24, 128] for a larger one? |
cfoster0#4356: Substantive, non formatting/spelling edits
bmk#1476: i can't get the link until this afternoon
StellaAthena#3530: @EricHallahan youโre the person with the most writing left to do IMO. When do you expect to have it done by
bmk#1476: also can I be added to the author list pls
EricHallahan#1051: You should be able to do that lol
bmk#1476: i am asking for permission because it would be rude not to
EricHallahan#1051: I don't know. Hopefully soon? Problem is that I don't know exactly how this will be framed.
chilli#5665: hmm, I don't really understand why scripting makes such a big difference then, when after scripting RoPE seems about equivalent to the baseline
EricHallahan#1051: I don't know how deep we want to go, and how much I need to explain things.
Sid#2121: did you test it out independently?
cfoster0#4356: I don't expect to change anything from the start through section 3.1
chilli#5665: I'm testing some microbenchmarks
chilli#5665: Like, scripting the "apply rotary embeddings" speeds it up by about 2x
Sid#2121: yep, that's also what i found
bmk#1476: I'm going to take that as a yes
Sid#2121: scripting is a big mystery to me lol so
Sid#2121: i was hoping you could explain it
chilli#5665: Oh, is the baseline not regular positional embeddings?
bmk#1476: jit script?
chilli#5665: No, I get why it's faster |
Sid#2121: yes, that's the baseline
Sid#2121: ye
chilli#5665: What I don't get is why doubling the speed makes the runtime difference go away
bmk#1476: I'm pretty sure jit script records the computational graph sort of like a mini tensorflow graph
chilli#5665: When the original positional embeddings should have basically no perf impact
bmk#1476: this is good because normally, pytorch can't predict what your python code will do
chilli#5665: It's complicated lol
chilli#5665: Not technically in this case
Sid#2121: I'm not particularly sure about this either. I figured it was because learned embeddings add some parameters so maybe you see overhead in the optimizer?
bmk#1476: wait then what's it doing
triggerhappygandi#0001: @Sid if rope is faster than regular learned embedding, then it is faster than every embedding right
Sid#2121: to be clear @chilli you may have seen the performance with partial rope
triggerhappygandi#0001: Since the other ones basically made no difference
Sid#2121: full fat rope is a little slower
chilli#5665: There's two ways to get into torchscript: script mode and tracing
triggerhappygandi#0001: To the point that no embedding was a thing
Louis#0144: shiv isnt it like
Louis#0144: 5am there
Louis#0144: lmao
chilli#5665: Tracing is kinda close to what you're describing |
cfoster0#4356: Can't you optimize away most of RoPE?
triggerhappygandi#0001: 1:30@Louis
bmk#1476: one parses using ast and the other looks at what you do to the tensors right
Louis#0144: o ok
triggerhappygandi#0001: My sleep is fucked
Louis#0144: np
Ward#1738: New deepspeed update https://www.microsoft.com/en-us/research/blog/zero-infinity-and-deepspeed-unlocking-unprecedented-model-scale-for-deep-learning-training/
chilli#5665: But here script mode is being used, which parses the ast
bmk#1476: and both result in a computational graph
chilli#5665: Yes
chilli#5665: There's also some other complexity since even in script mode, runtime information helps
chilli#5665: So the first couple runs it'll record some information about the tensors
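To make the two entry points concrete, a hypothetical illustration (the function is just a stripped-down rotary apply with made-up shapes, not the NeoX code):

```
import torch

# Script mode: parse the Python source/AST into TorchScript. The fuser can
# then merge the elementwise mul/add ops into fewer kernels, which is where
# the ~2x speedup on the rotary apply comes from.
@torch.jit.script
def apply_rotary(q: torch.Tensor, sin: torch.Tensor, cos: torch.Tensor) -> torch.Tensor:
    half = q.shape[-1] // 2
    q1, q2 = q[:, :, :half], q[:, :, half:]
    return torch.cat([q1 * cos - q2 * sin, q1 * sin + q2 * cos], dim=-1)

# Tracing: run the function once on example inputs and record the ops that
# were executed; any control flow is baked in to the path the example took.
def apply_rotary_eager(q, sin, cos):
    half = q.shape[-1] // 2
    q1, q2 = q[:, :, :half], q[:, :, half:]
    return torch.cat([q1 * cos - q2 * sin, q1 * sin + q2 * cos], dim=-1)

q = torch.randn(2, 2048, 64)
sin, cos = torch.randn(2048, 32), torch.randn(2048, 32)
traced = torch.jit.trace(apply_rotary_eager, (q, sin, cos))
```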
chilli#5665: Oh so you're comparing to learned embeddings
Sid#2121: yes
Sid#2121: ah you're comparing to sinusoidal
chilli#5665: Do you also only add learned embeddings at beginning?
Sid#2121: sorry, miscommunication i guess
Sid#2121: yep
chilli#5665: So in inference the performance is negligible
chilli#5665: Ok I think to compare against learned embeddings we need your full model benchmarks |
chilli#5665: Because the tradeoffs are complicated
chilli#5665: You eliminate some amount of parameters
triggerhappygandi#0001: We can also compare to no embedding at all, no? We did it with rpe and learned.
chilli#5665: (and thus, a constant factor of optimizer updates + gradient computations, independent of layer)
chilli#5665: But in exchange, you need to do some pointwise ops at every layer
chilli#5665: what's the easiest way for me to run some full model benchmarks? x-transformers?
Sid#2121: that or neox
EricHallahan#1051: We already discussed it lol
chilli#5665: @Deleted User de23c58c if neither `sinusoidal_emb` nor `rotary_pos_emb` are set, does x-transformers use learned embeddings?
EricHallahan#1051: I am pretty sure the answer is yes.
EricHallahan#1051: But ¯\_(ツ)_/¯
Deleted User#0000: yup, yes
Deleted User#0000: https://github.com/lucidrains/x-transformers/blob/main/x_transformers/x_transformers.py#L472 uses that to turn off absolute positional embedding
Deleted User#0000: there's actually an interesting phenomenon i ran into where rotary embedding suffers (does even worse than baseline) if you have learned absolute turned on in addition to rotary
Deleted User#0000: i thought you could have both on, and you would get both rel pos and abs pos, but that wasn't the case
Deleted User#0000: more research needed there..
Deleted User#0000: or i had a bug ๐คทโโ๏ธ
cfoster0#4356: @bmk Where did you move your quaternion section?
bmk#1476: uhh
bmk#1476: 3.5 |
bmk#1476: it's not that long
bmk#1476: I'm also working on a proof of my main claim that I'm writing in the appendix
bmk#1476: actually wait i can save this proof for a future post/paper
bmk#1476: meh I'm gonna write it out first
cfoster0#4356: I really think it fits better at/towards the end
chilli#5665: interesting
chilli#5665: @Sid what version of pytorch/cuda were you running?
cfoster0#4356: @Sid
chilli#5665: whups
Sid#2121: 1.8.0<some string of numbers and letters>
chilli#5665: ok, so not nightly
Sid#2121: nope
chilli#5665: hmm, I can't seem to replicate your perf results
chilli#5665: were you training in some kind of distributed setup?
chilli#5665: oh, and cuda version?
Sid#2121: in x-transformers? or neox
Sid#2121: i'm using 2 3090s yeah
Sid#2121: 2 3090s, cuda version 11.1
chilli#5665: in x-transformers
chilli#5665: maybe something weird is different between the two |
chilli#5665: ๐ค
chilli#5665: Like, I see a performance improvement from scripting, but it doesn't get to the performance of learned embeddings
bmk#1476: what should I do with my proof of multidimensional rotary stuff?
bmk#1476: i figured out how to prove that quaternions cannot possibly work, and also that the toroidal generalization does work
chilli#5665: @Sid actually nvm, figured it out
chilli#5665: my dim size was too small
chilli#5665: leading the NNC fuser to generate suboptimal code
cfoster0#4356: I'm gonna suggest again that you take the correct method (like from "to represent multiple dimensions..." on) and use it to replace the first two sentences of the last paragraph of the conclusion (from "With relative ease..." to "those sections."). And then pocket the other stuff + the proof for a later post with multi dimensional experiments
chilli#5665: @Sid last question - in your benchmarks, were you benchmarking training performance or inference perf?
bmk#1476: would multi dimensional be novel enough for a paper?
chilli#5665: if it worked well, sure
bmk#1476: i don't think they mention multidimensional in their blog post so I'd guess it's probably fair game for us to do?
bmk#1476: how much work would it be to run those experiments
bmk#1476: like do we have everything set up for that
StellaAthena#3530: @bmk Lucidrains has a pipeline for multidimensional transformers, but we haven't done anything like that with NeoX before
bmk#1476: can we get some 2d results real quick?
bmk#1476: i think we could speedrun a Toroidal Rotary Positional Encoding paper
StellaAthena#3530: sure
Louis#0144: I have another person from my lab joining
Louis#0144: Someone get the Georgia tech tag ready |
Louis#0144: Lmao
Sid#2121: training
Louis#0144: @evenmoregeneric
Louis#0144: Here he is
chilli#5665: But you said you were benchmarking "partial rotary embeddings", right?
chilli#5665: which we aren't talking about in this overleaf
StellaAthena#3530: @StellaAthena Do you want to do an image transformer, or multidimensional text, or what
Louis#0144: Winston is interested in computational creativity
Louis#0144: Are you asking yourself
Louis#0144: Lmao
evenmoregeneric#0542: hello everyone
StellaAthena#3530: @bmk Do you want to do an image transformer, or multidimensional text, or what?
chilli#5665: :thonk:
Louis#0144: WINSTON
Louis#0144: I said wait on that
bmk#1476: image ofc
Louis#0144: Omg
evenmoregeneric#0542: oh
evenmoregeneric#0542: woops
Louis#0144: I did not offer Winston GPUs dw |
evenmoregeneric#0542: brb deleting evidence
bmk#1476: i don't even know what multidimensional text would br
Louis#0144: I said there's projects here with GPUs available and Connor was interested in computational creativity
Louis#0144: Anyway
StellaAthena#3530: I mean, people write text in 2D?
gwern#1782: don't you still have plenty of A100s idle? or did rotary suck them all up?
Louis#0144: No we have plenty idle
Louis#0144: We just need to submit the comp creativity proposal Connor asked for
bmk#1476: @evenmoregeneric we have extra gpus right now, so if you have some experiments ready to go right this moment we can let you use some compute. however our experiments still take precedence so we can't guarantee if and how much you can get
evenmoregeneric#0542: yeah that's understandable, even just brief eval sprints would be really useful
chilli#5665: I'm fairly confident that learned embeddings shouldn't be slower at this point
chilli#5665: haha
chilli#5665: the parameters are negligible
chilli#5665: it doesn't need to store anything for the backwards pass
evenmoregeneric#0542: but I don't want to just come in and take up gpu space from eleuther ppl
chilli#5665: and it's only applied once
Sid#2121: They're only slower than the 1/4 partial rotary embeddings
StellaAthena#3530: @bmk What do you think you are expressing with $S_1\times S_1$? The cross-product of circles?
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/833823878047334430/193204646687408129.png
bmk#1476: yes |
Sid#2121: and on a really small model
Sid#2121: i doubt it applies across the line
chilli#5665: I don't get how that's possible either ๐ค
chilli#5665: hmmm
chilli#5665: maybe
StellaAthena#3530: @bmk That's $S^1$, not $S_1$.
chilli#5665: if you have a very low amount of layers
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/833823981768802354/193204646687408129.png
Sid#2121: actually i think the tests you saw were probably 1/4 dim on half the layers
Sid#2121: https://wandb.ai/eleutherai/neox?workspace=user-sdtblck here are some more recent tests
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/833824218662699018/Screenshot_from_2021-04-19_23-59-34.png
bmk#1476: sorry, got the notation mixed up
bmk#1476: I'm moving all the toroidal stuff to a separate document anyways
StellaAthena#3530: no problem, just wanted to let you know
bmk#1476: as cfoster suggested
chilli#5665: isn't the 100% rotary embedding the fastest in this image?
chilli#5665: ๐ค
chilli#5665: the grey one at the top
Sid#2121: that ones learned
chilli#5665: Overall, I see a 1-3% overhead, depending on the model size |
Sid#2121: clicking through will be easier to see than the screenshot lol
chilli#5665: why is the learned one faster
chilli#5665: lol
chilli#5665: and are there comparisons to the non-rotary runtimes here?
chilli#5665: without fusion, it goes up to about 4-6%
Sid#2121: it's not learned rotary
Sid#2121: it's just learned
chilli#5665: ah
chilli#5665: oh ok, this is totally in line with my results then, no?
chilli#5665: the learned embedding is the fastest one
Sid#2121: the legend is autogenerated and rot pct = 1 because that's the default even when rotary isn't actually on
chilli#5665: by a small (but real) margin
chilli#5665: 18.83/18.37 = 3%
Sid#2121: yes! I did say before that rotary only appeared faster when sparsely applied
chilli#5665: well, none of these seem faster than that grey line, no?
chilli#5665: :thonk:
Sid#2121: because none of them were the same tests i ran before, which were only applying it every other layer
chilli#5665: I see
chilli#5665: well, in my experiments learned embeddings are basically the exact same computational cost as sinusoidal embeddings
chilli#5665: lol |
chilli#5665: (for reasonably deep models)
cfoster0#4356: Are we about wrapped up on edits?
EricHallahan#1051: Okay, about to start writing shortly here.
EricHallahan#1051: Just finished dinner and am making sure everything is good quickly before the branch is pushed.
chilli#5665: ok, I finished my runtime experiments
chilli#5665: I could add a bit about using partial RoPE to optimize a bit more, but we don't talk about that anywhere else in the paper
EricHallahan#1051: I thought we were keeping partial RoPE?
EricHallahan#1051: IDK
EricHallahan#1051: ¯\_(ツ)_/¯
EricHallahan#1051: nvm
chilli#5665: I dunno about if it got removed, I was just looking for it and couldn't find it.
chilli#5665: Added a section on runtime, lmk if anybody wants to see anything else in that section https://cdn.discordapp.com/attachments/729741769738158194/833828228652990474/unknown.png
cfoster0#4356: Looks good to me. I don't think we need to add anything about partial RoPE here
cfoster0#4356: This is the **last call** for edits to everything but section 3.2. Would like to leave lots of time for converting into Markdown etc.
EricHallahan#1051: Let me get it out to GitHub now.
Sid#2121: Running inference with full attention with a model trained with sparse should work fine right? Because there's no learnable parameters in the sparse part
EricHallahan#1051: What do we want the perma-link to be?
EricHallahan#1051: I just made it so that MathJax only loads on pages where it is called for explicitly.
cfoster0#4356: Uhhh. Maybe `https://blog.eleuther.ai/rotary-pe`
EricHallahan#1051: Right now it is `/rotary-embeddings` |
cfoster0#4356: Just as good
cfoster0#4356: Keep it
StellaAthena#3530: @EricHallahan is it live on the website then? Or just on GitHub?
EricHallahan#1051: That was the last thing I wanted before pushing to GitHub. You'll be able to find it in the `rotary-blog` branch.
chilli#5665: btw, any objections if I add myself to the author list :thonk:
StellaAthena#3530: ZeRO objections here
Louis#0144: lol
Louis#0144: next blog post needs to be authored by everyone's pets
chilli#5665: feels kinda weird to ask
chilli#5665: but feels even weirder to not ask
Louis#0144: my cat loves watching me do my research
chilli#5665: I guess same feeling as bmk earlier
chilli#5665: lol
Louis#0144: so she probably knows a lot of NLP
EricHallahan#1051: Published.
EricHallahan#1051: Take a look at the entire thing, this was my working branch for the past few weeks.
Louis#0144: where?
EricHallahan#1051: To a branch.
EricHallahan#1051: I just said that.
Sid#2121: uhh which branch @EricHallahan ? last push i see to the `rotary-blog` branch is 11 days ago |
EricHallahan#1051: I never committed my changes lol
EricHallahan#1051: One sec'
EricHallahan#1051: Pushed.
Sid#2121: hm, how do i get to the blog on localhost lol
EricHallahan#1051: Linux or Windows?
Sid#2121: linux
EricHallahan#1051: Install `hugo` with your package manager.
cfoster0#4356: Looks like an older version of the post from a few days ago
Sid#2121: i have hugo installed, and the site up and running
Sid#2121: but when i click on 'blog' it takes me to the real eleuther url
Sid#2121: i just want to display the blog post instead of the home page
EricHallahan#1051: Oh, just add `--config config-blog.toml` to the end.
EricHallahan#1051: It is in the readme.
Sid#2121: ah, thanks
StellaAthena#3530: Mind sharing a screenshot?
Sid#2121: where all the pics at
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/833842925963706398/Screenshot_from_2021-04-20_01-13-49.png
Sid#2121: looks incomplete / old to me?
EricHallahan#1051: It is an old draft.
EricHallahan#1051: I decided it was more important to get it out somewhere so that it could also be reviewed for any other mistakes. Also, you got upgraded Sid in the FAQ to be the last person before the answer lol |
EricHallahan#1051: Like half of the FAQ was rewritten and expanded.
Sid#2121: the maths looks good
Sid#2121: I would say get a visualization up top lol
StellaAthena#3530: Yeah definitely
StellaAthena#3530: One of the cleaner plots, ideally.
Sid#2121: I think eric's animated visualization / a still version would be cooler
Sid#2121: plots before you've read what they are is kinda weird imo
Sid#2121: it should go eye catching picture -> explanation -> results
EricHallahan#1051: I'll get my visualization in shortly. What is the intuition? That the magnitude is the feature and the phase is the position?
StellaAthena#3530: Yup
EricHallahan#1051: I need to know if that is correct, so please say so now or forever hold your peace.
StellaAthena#3530: Are you using the spirally polarized idea still
EricHallahan#1051: Yeah, unless there is something better you have in mind.
StellaAthena#3530: Are you editing it at all? Or no?
EricHallahan#1051: I will be, I want the visualization good because the visualization will be the basis for anything I say.
StellaAthena#3530: Okay, in my perfect world you would have a series of arrows along a line the way that that gif does. It starts off with them all pointing in the same direction but passes through a box / filter / whatever which spins it
StellaAthena#3530: Like the middle and left side of this https://cdn.discordapp.com/attachments/729741769738158194/833845868410437632/image0.png
StellaAthena#3530: In reality the change in angle is smaller than is shown here and it does not in fact wrap around, but I think spiraling makes for a better picture probably
EricHallahan#1051: I should be able to add the arrows in.
EricHallahan#1051: Change in angle of what? |
StellaAthena#3530: When it passes through the filter
StellaAthena#3530: / the difference between consecutive arrows
EricHallahan#1051: This is a graphic that has a lot of liberties taken, the largest of which being that it is a classical EM wave lol
EricHallahan#1051: It was going to not be accurate anyway lol
cfoster0#4356: lol yeah it's all good in the pursuit of a cool viz
EricHallahan#1051: Yeah, I think we need the arrows.
bmk#1476: when are we targeting getting the post out by?
EricHallahan#1051: ¯\_(ツ)_/¯
cfoster0#4356: idk. Last I checked we just needed to convert to markdown
EricHallahan#1051: Also *please* review the FAQ.
bmk#1476: i thought we were going to get it out sometime around now
EricHallahan#1051: That was the plan, but I didn't have time to port anything.
cfoster0#4356: What's the status? I'd be happy to review the FAQ if that's a bottleneck
EricHallahan#1051: No, I want to make sure it is accurate. ZeRO-Infinity made one of my responses obsolete already lol
kindiana#1016: :thonk: what does zero-inf change about the faq?
EricHallahan#1051: I explicitly rule out caching schemes and say that the model needs to fit into memory.
kindiana#1016: I think its still true, but idk if you want to defend that lol
kindiana#1016: training a model bigger than cpu ram + vram is dumb
EricHallahan#1051: And I know people will inevitably start asking about it if I don't address it.
kindiana#1016: where is the new faq? |
bmk#1476: worry about it later
bmk#1476: i dont think it's a big deal
EricHallahan#1051: `rotary-blog`
bmk#1476: if you really care, hedge your bets by saying "information in the page valid as of 2021-04-10" or whenever you last updated it at the top of the page
EricHallahan#1051: But that makes it obvious when it was last updated lol
cfoster0#4356: I don't see the problem lol
cfoster0#4356: This is also verging on thing-with-the-bike-in-the-outdoor-hut
kindiana#1016: yeah the new faq is very reasonable
kindiana#1016: I don't think anything needs to change wrt zero-inf
EricHallahan#1051: Okay.
bmk#1476: dont be afraid to say The Thing
bmk#1476: also looks like 3.2 is still not finished https://cdn.discordapp.com/attachments/729741769738158194/833884874497589288/unknown.png
EricHallahan#1051: bikeshedding
bmk#1476: @Isaac McHorse you have forsaken us
cfoster0#4356: โ
cfoster0#4356: Tbh I don't actually care exactly when we post the blog, I just have no clue how far we are from posting it
cfoster0#4356: For all I care we can finish in the early morning and only share it on socials the next day
bmk#1476: same tbh
bmk#1476: i thought we agreed we'd post it about now, and it is already now, and nothing has been posted, and no new time has been communicated either
cfoster0#4356: Let's move to #website |
voxs#0001: damn im a hyperparameter tuning addict
cfoster0#4356: We'll be putting the blog post out tomorrow
Jianlin Su#3718: hello everyone. I am the author of RoFormer and RoPE. Thanks for your attention to my work. You are welcome to share your experimental results with me.
Jianlin Su#3718: Our first-version paper will be submitted to Arxiv tomorrow.
Jianlin Su#3718: preprint of roformer https://cdn.discordapp.com/attachments/729741769738158194/833900141194117181/roformer_arxiv_preprint.pdf
Jianlin Su#3718: nothing added in the paper, compared with my blog
Jianlin Su#3718: just an English version.
cfoster0#4356: Hi @Jianlin Su ! Very nice to hear from you, and thanks for sharing your preprint. We've been pretty excited about your work for the past few weeks
Jianlin Su#3718: Thanks a lot. I browsed the chat history and it actually surprised me.
bmk#1476: 你好 (hello)
Deleted User#0000: Hi! Congrats on discovering this really amazing technique!
guac#4716: (that was a much cleaner read compared to the google translated doc lol bravo)
Jianlin Su#3718: haha
Jianlin Su#3718: by the way, I found you talked about cross attention with RoPE a few days ago. My opinion is: does cross attention really need position embeddings if Q, K, V have already integrated the position information?
Deleted User#0000: @Jianlin Su RoPE is performing better and faster than all the other positional encoding solutions we've tried so far
Deleted User#0000: @Jianlin Su yes, I don't believe cross attention requires positional encoding
Jianlin Su#3718: I agree
Deleted User#0000: if needed, would reach for a solution similar to Perceiver https://arxiv.org/abs/2103.03206
Deleted User#0000: in my case, I have a toy task where I copy the source sequence to target in an encoder / decoder
Deleted User#0000: however, RoPE alone seems to have trouble in that scenario |
Deleted User#0000: and interestingly enough, adding learned absolute positional on top of RoPE seems to bring harm
Deleted User#0000: but we found another way to encode absolute position into the system, and it worked fine after that
Deleted User#0000: even with RoPE, eventually it learned
Deleted User#0000: just a bit slower than baseline
Jianlin Su#3718: how about apply RoPE on V?
Deleted User#0000: i did not try that!
cfoster0#4356: ๐ฎ
Deleted User#0000: let me try it now ๐
Jianlin Su#3718: RoPE is actually an absolute position encoding; when applied to Q and K, it becomes equivalent to a relative one. But if applied to V, it stays absolute.
StellaAthena#3530: @Jianlin Su Do you have an intuitive guess at why it performs so well, especially compared to the mathematically similar Sinusoidal encoding?
Jianlin Su#3718: Actually the original motivation of RoPE is just for fun, so I do not have more insights about it.
StellaAthena#3530: Amazing!
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/833907700219117598/WB_Chart_4_19_2021_8_31_00_PM.png
Deleted User#0000: @Jianlin Su yes, that worked better than baseline ๐
Deleted User#0000: thank you
Deleted User#0000: it's perfect!
StellaAthena#3530: I have only done a little experimentation with this so far, but I have found that if you fix $q$ and $k$ and allow $m - n$ to vary, then $F(m-n) = \langle f(q, m), f(k, n)\rangle$ looks very interesting. Doing this with both your embedding and sinusoidal produces very similar pictures, but the sinusoidal one is much more noisy
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/833908102616973392/193204646687408129.png
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/833908139085529098/image0.png
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/833908176247586816/image0.png |
StellaAthena#3530: If you just showed me these two plots and said "the top one does a better job of communicating the signal than the bottom one" I would immediately believe that.
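For reference, a hypothetical sketch of the experiment described above (fix q and k, apply RoPE at positions m and n, and look at the inner product as a function of the offset); `rope` here is an illustrative re-implementation, not the code used for these plots:

```
import torch

def rope(x, pos, base=10000.0):
    # rotate consecutive feature pairs of a single vector by position-dependent angles
    d = x.shape[-1]
    freqs = base ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)
    angles = pos * freqs
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[0::2], x[1::2]
    return torch.stack([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1).flatten()

d = 64
q, k = torch.randn(d), torch.randn(d)
scores = torch.stack([rope(q, float(m)) @ rope(k, 0.0) for m in range(256)])
# plotting `scores` against the offset m - n (here n = 0) gives curves like the ones above
```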
Jianlin Su#3718: sinusoidal means plusing sinusoidal position encoding to q and k?
StellaAthena#3530: Yeah
Deleted User#0000: @Jianlin Su we even pit RoPE against disentangled attention (separate content and position attention) https://arxiv.org/abs/2006.15595 and it performed better
Deleted User#0000: the only thing remaining is to compete it against DeBERTa, which is as over-engineered as you can get for positional encoding
James#6892: lol love the reason.
Deleted User#0000: that's like a Saitama response from One Punch Man
Jianlin Su#3718: I think sinusoidal (<q + pi, k + pj>) will not decay to 0?
Jianlin Su#3718: DeBERTa has more engineering tricks and I am not sure which really brings improvements.
Deleted User#0000: are you planning on GLUE or SuperGLUE benchmarks?
Jianlin Su#3718: English RoFormer MLM is training
zphang#7252: DeBERTa also has some special fine-tuning method that they haven't elaborated on in detail I think
Deleted User#0000: Looking forward to it!
Deleted User#0000: Do you also RoPE values (in addition to queries and keys) as well in most of your models?
Jianlin Su#3718: I never tried it, but I think it will work in theory...
Deleted User#0000: Ok, just wondering!
Deleted User#0000: Thanks for helping me out, and looking forward to seeing the english MLM results ๐
Deleted User#0000: I've been adding RoPE to a lot of my transformer implementations. It's really remarkable
Deleted User#0000: Congrats on uncovering this
gwern#1782: what's the average improvement in general? |
kindiana#1016: ~30% convergence improvement over learned abs baseline
kindiana#1016: ~20% over t5 relative pos encoding
kindiana#1016: with a <5% runtime cost over learned abs
chilli#5665: how confident are we this scales to larger models?
kindiana#1016: this is on 1.4b
chilli#5665: learned abs and sinusoidal have pretty much identical cost
EricHallahan#1051: It is pretty much in silver-bullet territory.
gwern#1782: 30% fewer iterations for same converged quality, or 30% lower loss at same number of iterations?
kindiana#1016: the former
EricHallahan#1051: It seems to have resulted in improvements in anything that is compatible.
kindiana#1016: the latter would be agi :berk:
chilli#5665: well, it'd still only be a constant factor improvement, no?
Jianlin Su#3718: by the way, I also found that using $100^{-2i/d}$ instead of $10000^{-2i/d}$ accelerates the training.
Deleted User#0000: i think Sid did a 1.4B run too
TeXit#0796: **Jianlin Su** https://cdn.discordapp.com/attachments/729741769738158194/833912422690455552/832890084977147926.png
Deleted User#0000: and saw significant improvements
kindiana#1016: that would be like a 400x speedup lmao
chilli#5665: what would be ๐ would be if it reduced the percentage more for bigger models
Deleted User#0000: we were wondering about that!
Jianlin Su#3718: but the final result does not change
Deleted User#0000: im trying it now
Deleted User#0000: lol
chilli#5665: haha, i don't know the loss values well enough to instinctively know how crazy 30% is
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/833913314626895922/unknown.png
chilli#5665: lol
chilli#5665: right, since it's log-linear
gwern#1782: why I asked. I figured it was saving iterations, not 30% loss, because that would be mindblowing and you guys are excited but not *that* excited
EricHallahan#1051: I have a theory that you should only need a minimum number of frequencies to get RoPE to work. You only need 8 bits to describe a 256 token context length. Storing it in a larger type (increasing the model dim) doesn't change the fact that you only need 8 bits of positional information.
Jianlin Su#3718: please post it one day later and then you can quote the arxiv link, lol
cfoster0#4356: Sure thing! We'll wait for your lead
Deleted User#0000: https://wandb.ai/lucidrains/x-transformers-experiments/reports/Project-Dashboard--Vmlldzo2MjM2NTM?accessToken=gy561dpb0xfz31ux37v3se7s799rfj244qnlfmks57lluwgxdwyl2vokwd3h20f5
StellaAthena#3530: @Deleted User is this partial rope?
Deleted User#0000: partial + 100 ^ instead of 10000 ^
EricHallahan#1051: The initialization should not be dependent on *d* but instead on the context length.
Deleted User#0000: its ok if i keep the partial on, it's only a super slight improvement
kindiana#1016: does jianlin know what partial rope is :berk:
Deleted User#0000: tell him!
Deleted User#0000: you discovered it lol
Jianlin Su#3718: not sure
Jianlin Su#3718: please teach me |
StellaAthena#3530: @Jianlin Su We have found that you get better results only applying RoPE to some of the coordinates
kindiana#1016: @Jianlin Su we also found rope works better if you only apply it to part of the qk, with something like a quarter of the qk dimensions shows slightly better results as well as slightly better runtime
kindiana#1016: ```
# split q/k along the feature dim: the first pe_rotary_dims get rotated,
# the rest pass through untouched
k_rot = k[:, :, :self.pe_rotary_dims]
k_pass = k[:, :, self.pe_rotary_dims:]
q_rot = q[:, :, :self.pe_rotary_dims]
q_pass = q[:, :, self.pe_rotary_dims:]
# build sin/cos tables for the rotated slice and apply RoPE to it only
sincos = fixed_pos_embedding(k_rot)
q_rot = apply_rotary_pos_emb(q_rot, sincos)
k_rot = apply_rotary_pos_emb(k_rot, sincos)
# stitch the rotated and pass-through dims back together
k = jnp.concatenate([k_rot, k_pass], axis=-1)
q = jnp.concatenate([q_rot, q_pass], axis=-1)
```
Jianlin Su#3718: that is really a mystery
Jianlin Su#3718: how did you find it?
StellaAthena#3530: RoPE is highly redundant
StellaAthena#3530: On paper, even applying it to a single (pair) of indices would be sufficient |
StellaAthena#3530: (Alternatively, using the same theta for each coordinate)
Deleted User#0000: hmm, not seeing much an improvement for 100 vs 10000 https://wandb.ai/lucidrains/x-transformers-experiments/reports/Project-Dashboard--Vmlldzo2MjM2NTM?accessToken=gy561dpb0xfz31ux37v3se7s799rfj244qnlfmks57lluwgxdwyl2vokwd3h20f5
Deleted User#0000: but perhaps my task is too small
Deleted User#0000: regardless, we'll be playing with the periodicity a bit more
kindiana#1016: my thoughts for why it works is that you don't need everything in the head to care about position, as doing the rope operation kind of halves your qk dimension. the "content" attention can use the unroped dimensions, but the "position" attention can use the roped dimensions
chilli#5665: I wonder if doing PIA type stuff on the unroped dimensions would work well
kindiana#1016: I'm not sure if you need all that much position information tbh
kindiana#1016: lucid tried rope half of the heads
kindiana#1016: we also tried rope half the layers
kindiana#1016: and it was very close
chilli#5665: well, those experiments are just testing whether you need *more* rope information
Deleted User#0000: ill run some more experiments tomorrow to see the effects of RoPE values
chilli#5665: Like, it's completely possible that you get the vast majority of rope's benefit from only a bit of rope
chilli#5665: but that you could still benefit from other kinds of positional information
Deleted User#0000: @Jianlin Su yea, I found that position infused attention works well with RoPE https://arxiv.org/abs/2012.15832
Jianlin Su#3718: okay, I got it. It converges a little faster in my experiments.
Deleted User#0000: there's another slight improvement if you put them together
Jianlin Su#3718: but it is not very elegant
chilli#5665: how are you already using PIA with rope? you just add PIA to all of your QK heads after applying rope?
Deleted User#0000: haha yes, not elegant |
kindiana#1016: roto(to_query(x + sinu), sinu), roto(to_key(x + sinu), sinu)
kindiana#1016: this is the formulation btw
chilli#5665: for roto + PIA?
kindiana#1016: yeah
chilli#5665: the obvious question is whether you've tried the other commutations lol
kindiana#1016: :thonk:
kindiana#1016: I think roto needs to go on the outside?
kindiana#1016: otherwise the math breaks
EricHallahan#1051: So therefore the initialization is a partial RoPE implementation that can uniquely identify every token with minimal information. I propose the following initialization for $i \in [0,\log_2{n_{\mathrm{ctx}}})$:
$$\theta_i=\frac{\pi}{2^{i}}$$
Deleted User#0000: yea, it needs to be on the outside
Deleted User#0000: i've tried it pre-projection
Deleted User#0000: and it doesn't work that way
TeXit#0796: **Eric Hallahan** https://cdn.discordapp.com/attachments/729741769738158194/833917252100423730/304058360893014018.png
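To make the proposed initialization concrete, a purely illustrative comparison against the standard one (variable names here are made up):

```
import math
import torch

n_ctx, d = 256, 64

# standard RoPE init: theta_i = 10000^(-2i/d), giving d/2 = 32 frequencies
standard = 10000.0 ** (-2 * torch.arange(d // 2, dtype=torch.float32) / d)

# proposed minimal init: one frequency per bit of position,
# theta_i = pi / 2^i for i in [0, log2(n_ctx)), i.e. only 8 frequencies here
minimal = math.pi / 2.0 ** torch.arange(int(math.log2(n_ctx)), dtype=torch.float32)
```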
Deleted User#0000: @EricHallahan fork x-transformers and run the experiment!
Deleted User#0000: run it in a colab
Deleted User#0000: @Jianlin Su I also tried RoPE on Performer (the linear attention from Google AI)
Deleted User#0000: works very well
EricHallahan#1051: I wish, I haven't had time. I had and an exam yesterday and I have an exam in 9 hours lol.
Deleted User#0000: didn't see the end result, but converges faster for sure |
Deleted User#0000: dramatically so actually
Jianlin Su#3718: In my initial opinion, as long as RoPE can be comparable to absolute position encoding, I am satisfied. Anyway, an analytical solution that actually works is a rare thing in DL.
zphang#7252: why do you always have exams eric
cfoster0#4356: Amen
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/833918070312796220/WB_Chart_4_19_2021_9_12_34_PM.png
Jianlin Su#3718: I know performer. I also has some novel insights about linear attention and will be posted in few days.
bmk#1476: reject study, retvrn to experiment
Deleted User#0000: looking forward to it ๐
EricHallahan#1051: Ask my professors.
EricHallahan#1051: I wish. I would do it right now but I totally haven't studied for my exam tomorrow :berk:
Jianlin Su#3718: I tried to initialize RoPE with $\theta_i = 10000^{-2i/d}$ and made $\theta_i$ trainable, but I found that $\theta_i$ changed very little. So I decided to fix $\theta_i$.
TeXit#0796: **Jianlin Su** https://cdn.discordapp.com/attachments/729741769738158194/833919235594911794/832890084977147926.png
EricHallahan#1051: We found little use to training thetas IIRC.
Deleted User#0000: yes, i tried that and didn't see any improvements
Deleted User#0000: but i haven't tried different initialization schemes
EricHallahan#1051: It was all run-to-run variance.
Jianlin Su#3718: I have tried uniform initialization but it performed badly
EricHallahan#1051: We also tried *One theta Is All You Need*, but it was nearly identical to no embedding at all.
EricHallahan#1051: Theta was set to pi/(2n_ctx)
Jianlin Su#3718: lunch time. bye~ |
Jianlin Su#3718: ๐
chilli#5665: did OAI ever reveal numbers about how big the MSFT supercluster was?
Alethean#7947: Hello, I'm interested in supporting the project and need some guidance - who do I talk to?
EricHallahan#1051: Welcome!
Alethean#7947: Thanks ๐
EricHallahan#1051: To be honest, it depends upon how you define support.
EricHallahan#1051: Contribute? Donate? It helps if we are all on the same page.
bmk#1476: tldr we're always looking for more hands on deck to write code, we're not currently looking for monetary support unless you're talking high 6 to 7 digits (with no strings attached)
Alethean#7947: ๐
turgutluk#4966: Hi everyone! Such a cool server, if only I knew about it earlier. Thanks @MicPie for the intro! 👋
gabriel_syme#3220: welcome!
jekbradbury#2280: iiuc the original openai azure cluster is 10k 16GiB V100s with 50 Gb/s per GPU; now they probably use a few of the standard azure A100 clusters (4k 40GiB A100s with 200 Gb/s per GPU)
chilli#5665: Damn
chilli#5665: How many of these clusters does msft have?
jekbradbury#2280: cloud providers never talk about that ๐
kindiana#1016: how many tpus does google have ๐ค
jekbradbury#2280: my guess is azure has between 5 and 50 of those A100 clusters
chilli#5665: 50???
chilli#5665: damn
chilli#5665: do they seriously have 200k A100 GPUs |
chilli#5665: I wonder how many TPUs google has...
chilli#5665: the number I've heard is
chilli#5665: "a fuck ton"
nev#4905: is anything close to CLIP's dataset available for research?
Kia#2550: So is #multimodal The Text-to-image project of Eleuther?
Kia#2550: Because im interested how you'd guys going to do it
Kia#2550: More like the process of it
Aran Komatsuzaki#5714: we do text2image there. actually we have several other related projects ๐
Kia#2550: Ow wow
Kia#2550: That's amazing and goodluck for the development
Aran Komatsuzaki#5714: thanks!
jekbradbury#2280: (50 is pretty unlikely)
nev#4905: does a public autoregressive image => text model exist?
nev#4905: I know CLIP authors trained one, but afaik it performed worse and wasn't published
CKtalon#7792: The Chinese trained a 27B parameter GPT-like model using 128 A100 for 120 days (300B tokens) for the Chinese language
https://news.mydrivers.com/1/751/751835.htm
mgostIH#0245: What even are these memes https://cdn.discordapp.com/attachments/729741769738158194/834029955485532200/e360f82d-48f8-4847-9e9f-e9fba4719944.png
CKtalon#7792: the text generated was starting to develop into a romance novel, so the author was saying: did you discover my secret that I like to read romance novels? How embarrassing
CKtalon#7792: it might have to do with their corpus coming from web novels
Ravna#1831: The generated texts are pretty bad. It matches my belief that modern Mandarin books and internet articles are lower-quality in general (compared to English ones), which leads to a lower-quality dataset. |
CKtalon#7792: most of it probably come from web novels (estimated to be around 1 quadrillion characters)
CKtalon#7792: so the prose quality is weaker
CKtalon#7792: their wikis are generally poorly maintained also
mgostIH#0245: tbh it might also be very hard to automate dataset cleaning in Chinese
Ravna#1831: Online news articles too. They are usually written by amateur part-timers instead of journalists.
CKtalon#7792: well language changes anyway
mgostIH#0245: Also idk anything about Chinese but I imagine that they had to use lower token lengths?
CKtalon#7792: would be a horror to read ancient prose for modern articles =x
CKtalon#7792: also it seems like it was meant to generate novels, poems, and Q&A
CKtalon#7792: not much details released yet
Kia#2550: Isn't this like a old news, I read once there model has smaller parameter but better Outputs(Probably cherried picked)
CKtalon#7792: they plan to train a bigger 200B model
CKtalon#7792: this one was released by Alibaba yesterday or so
CKtalon#7792: beat SOTA for the CLUE benchmark it seems
Kia#2550: From Alibaba itself?
CKtalon#7792: yea
mgostIH#0245: They released the model?
CKtalon#7792: they basically created an OAI interface too
CKtalon#7792: nope
CKtalon#7792: same as OAI |
CKtalon#7792: a playground
Kia#2550: You can read mandarin?
CKtalon#7792: yea
Kia#2550: Oww...Well that's interesting
Kia#2550: The news is probably interesting to
mgostIH#0245: You know there's people in China right
CKtalon#7792: https://cdn.discordapp.com/attachments/729741769738158194/834033437856759809/607cf76f8e9f09735427509b_1024.png
CKtalon#7792: this is the benchmark scores
Kia#2550: I know that...im half Chinese, I can't read mandarin
mgostIH#0245: looks pretty much the same perf as BERT
CKtalon#7792: oh interesting.. it uses an encoder-decoder architecture
Kia#2550: Ow wow, What's there current model size?
mgostIH#0245: Oh? If it's autoregressive what's it encoding?
CKtalon#7792: 27B
CKtalon#7792: no idea
Kia#2550: That's amazing
mgostIH#0245: Imagine a 27B NERF model
CKtalon#7792: well funded by alibaba... so not a huge surprise
mgostIH#0245: It could encode all of our planet
Kia#2550: They rich...The Man Rich |
mgostIH#0245: tbh who else has the money to
Kia#2550: Can't remember his name
mgostIH#0245: After all even OpenAI was mostly funded by Microsoft
CKtalon#7792: Ma Yun/Jack Ma
Ravna#1831: 27B github model would be nice too
CKtalon#7792: these capitalists ain't gonna release the models =\
Kia#2550: They still have a hard time Doing shit even with a Group and being funded by Microsoft
Ravna#1831: hardware vendors might though
Kia#2550: Alibaba?
Ravna#1831: Nvidia might release their models.
Ravna#1831: https://www.gwern.net/Complement
CKtalon#7792: alibaba, nvidia, openai, etc
CKtalon#7792: msft
Kia#2550: They're scared people will misuse shit...
CKtalon#7792: too expensive to actually misuse imo
CKtalon#7792: like using it for spam is lame
CKtalon#7792: bert probably is sufficient
Kia#2550: True...
Kia#2550: But nevertheless, while Alibaba is doing work on their GPT-like model, mostly in Mandarin
mgostIH#0245: As time goes equally powerful models will get cheaper |
Kia#2550: Are they going to work with English
Ravna#1831: No, the point of commoditizing the complement is that Nvidia is different to OpenAI. Nvidia makes more money if the models are openly shared, while OpenAI makes less.
CKtalon#7792: doubt they will do english
mgostIH#0245: I think we should just take for granted that in the future all digital media can (and will) be generated by these models
CKtalon#7792: with OAI already at 175B
CKtalon#7792: i think we will just start moving into an AI-assisted state
CKtalon#7792: just like how computers help us now
mgostIH#0245: Artists on a death watch kek
Kia#2550: Hmm, Yeah
Ravna#1831: The majority of these digital media would be consumed and enjoyed by AI too.
Ravna#1831: :berk:
Kia#2550: Honestly true
CKtalon#7792: porn would probably be the biggest usage of AI =x
CKtalon#7792: i think pornhub has a data science team
Kia#2550: Nice hot take
CKtalon#7792: generative porn, no exploitation, etc
Kia#2550: But nonetheless, the Alibaba GPT model has some potential for customer service in their app
CKtalon#7792: https://nlp.aliyun.com/portal#/BigText_chinese
CKtalon#7792: it can be accessed here if you have an alicloud account
CKtalon#7792: i don't |
Kia#2550: Hmm, Same I don't have it
Kia#2550: But the thing is, it has a UI (like editable outputs of the AI)
CKtalon#7792: yea, the top is the prompt, the bottom is the output. It provides some samples for free
Kia#2550: That's Interesting and amazing
CKtalon#7792: lol, it even has a recipe generator
Kia#2550: I'm starting to think someone in the Chinese tech community is gonna get the model one way or another
CKtalon#7792: i think some of them have finetuned models or some form of inner prompt already provided
CKtalon#7792: because one of them is zero-shot
Ravna#1831: lol the poem example degenerates from proper poem in the prompt to doggerel within 2 verses
Kia#2550: Wow
Ravna#1831: :berk:
Ravna#1831: they should cherrypick for better examples
Kia#2550: I would actually think Nvidia is just gonna throw software at the public
Kia#2550: They're such a great software and hardware company, and an AI/ML/DL company too
Kia#2550: Oh well, I'll be back talking to you guys later, I need to pack up some things now
IKEA#9631: @Kia is this your first time on discord :thonk:
Kia#2550: I-
Kia#2550: Actually no, it's been like a year
and a half now that I've been on Discord
Kia#2550: Thanks for asking that |
Kia#2550: I'll be eating now
gulliver#4480: Hello world! ๐
gulliver#4480: Thank you for accepting me in the group
Kia#2550: Hi
Kia#2550: I mean yeah, Have a great time here
Napolean_Solo#2907: How many parameters is GPT-NeoX
Napolean_Solo#2907: ?
StellaAthena#3530: !releases
StellaAthena#3530: ๐ฎ
StellaAthena#3530: RIP the bot
Louis#0144: needs updating
Louis#0144: we released 350M as well
cfoster0#4356: Bad bot lol
Louis#0144: no
Louis#0144: ?
cfoster0#4356: NeoX is the name of the codebase, not the model
cfoster0#4356: But assuming you're asking about what model sizes we've released
Napolean_Solo#2907: What is the largest model you guys are working on?
EricHallahan#1051: Also we don't have any NeoX models yet.
EricHallahan#1051: Currently? |
Napolean_Solo#2907: In future
EricHallahan#1051: It depends on your definition of "working on".
Napolean_Solo#2907: I mean what is the biggest model that you plan to release
StellaAthena#3530: More than $100$ and less than $2^{2^{2^{2^{2^2}}}}$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/834098055996506122/193204646687408129.png
EricHallahan#1051: I would check back here later when the long needed overhaul goes out in maybe nine-or-so hours:
https://eleuther.ai/faq
Napolean_Solo#2907: Are you working on creating GPT-3 davinci model?
StellaAthena#3530: yes
Napolean_Solo#2907: Yeah so that's what I wanted to know
EricHallahan#1051: Our goal is to build something in the range of 150 to 200 billion parameters.
EricHallahan#1051: Also, read the FAQ.
Napolean_Solo#2907: How many parameters would be needed to summarise a dataset at great accuracy provided I fine tune the model on that data.
bmk#1476: ¯\\_(ツ)\_/¯
Daj#7482: 3
Daj#7482: but they need to be _really big_ parameters
Daj#7482: (no one knows, it's purely empirical)
Napolean_Solo#2907: How good is your 2.7B model at summarizing
Napolean_Solo#2907: Did anyone test that?
Daj#7482: There's no objective way to test that |
Daj#7482: It's all eyeballing lol
Napolean_Solo#2907: Empirically at least
EricHallahan#1051: ¯\_(ツ)_/¯
bmk#1476: @Napolean_Solo you can go test this stuff yourself lol
Napolean_Solo#2907: First step is always to find out if a solution already exists. Saves you shit load of time.
Napolean_Solo#2907: Looks like nobody tested it
IKEA#9631: Cursed pfp lol
fristiloverke#4159: looks like peter barakan
fristiloverke#4159: https://cdn.discordapp.com/attachments/729741769738158194/834117932120145930/b733-9b16-47e7-9a78-4956f59c1da4.png
Napolean_Solo#2907: Ayy that's rude
45#2247: hey so i'm doing a podcast with connor tmrw, what I should ask him?
Tinytitan#5596: is he an android made by cyberlife
inox#5400: his favourite anime
finetune#0907: I have a general question about gpt neo, specifically the released 2.7B model. Is there anything in the model architecture that should cause GPU memory use to vary a lot depending on the sequence? Or is this likely a bug in the huggingface implementation? For example, I have a sequence of length 1867 show a peak allocation of 14.7GB, then a sequence of length 1869 peak at 6GB. Sometimes longer sequences with length above 1800 will OOM in the attention matmul trying to allocate 4.5GB. I'm not very familiar with the inner workings, so I'm not sure if this is expected behavior.
EricHallahan#1051: That is an excellent question. ¯\_(ツ)_/¯
Kharr#7888: Hard to say, HF often has bugs in their code. You can try running the model in fp16 by calling model.half(). If that doesn't work, open issue on HF repo ๐
finetune#0907: I'm running it as half already and opened an issue, but I thought I'd ask here if it's expected behavior for the model.
EricHallahan#1051: We do not maintain the Hugging Face transformers release, so we don't really know if it is something in there.
finetune#0907: I understand this, but it could have been some kind of "well, of course attention will allocate O(nยฒ) memory" kind of thing
finetune#0907: In that case I could have closed the issue and just given up on it |
EricHallahan#1051: Yeah, I don't know. I haven't inferenced from the 2.7B with HF except for during testing before release.
finetune#0907: One thing that is interesting is that no funny allocations happen for sequences shorter than 250 tokens
Kharr#7888: GPT style models can technically sample infinite length sequences if you use a moving context window of size N. If you are finding that it's not working past a certain length, cap it to something shorter and cache the generation as you go.
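For example, something like this rough, untested sketch (model name, window size, and greedy decoding are just placeholders, and it skips the kv cache for simplicity):
```python
# rough sketch: sliding-window greedy generation, recomputing the window each step
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/gpt-neo-2.7B"                  # placeholder; use whatever checkpoint you have
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).half().cuda().eval()

max_ctx = 1024                                    # window size; pick whatever fits in memory
ids = tok("Some long prompt goes here.", return_tensors="pt").input_ids.cuda()

with torch.no_grad():
    for _ in range(200):                          # generate 200 new tokens
        window = ids[:, -max_ctx:]                # only feed the last max_ctx tokens
        logits = model(window).logits[:, -1]      # next-token logits
        next_id = logits.argmax(-1, keepdim=True) # greedy pick, just to keep the sketch short
        ids = torch.cat([ids, next_id], dim=-1)

print(tok.decode(ids[0]))
```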
EricHallahan#1051: This sounds like a local attention thing.
bmk#1476: @finetune yeah I've noticed some fishy allocation stuff too, I'm going to assume that's an artifact of the local attention too
bmk#1476: i have no idea how HF implemented local attention
finetune#0907: I see :thonk:
Kharr#7888: It's _weird_. I implemented a different version for my own use.
bmk#1476: but it makes sense that things will change at 256-multiples since that's our local attention span
finetune#0907: I made a plot of allocations over sequence length for one test case. It looks... interesting
https://user-images.githubusercontent.com/82650881/115268890-0a20db80-a13b-11eb-8771-60b47a5f66bb.png
finetune#0907: I do notice that a set of shorter spikes starts occurring after around 500, so I guess there is some issue with multiples of the local attention span. Thanks for pointing that out
EricHallahan#1051: Yeah, that is really important information that zeroes in on the problem area.
Laokoon#9137: Is the 1.3B (GPT-neo) the smallest model EleutherAI has to offer? Or is there maybe a smaller one (planned)?
alexyz#3459: @Laokoon IIRC, there are smaller models on Hugging Face
Laokoon#9137: Oh thanks. I didn't check there.
Laokoon#9137: 125M and 350M nice ๐
EricHallahan#1051: Yes, we didn't make announcements for those. Both 125M and 350M run on my personal laptop locally without any issue at all as far as I can tell.
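If you want to try them yourself, something like this should be enough (rough sketch; double-check the exact names on Model Hub):
```python
# quick check that the small checkpoints run locally (CPU is fine)
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125M")
print(generator("EleutherAI is", max_length=30)[0]["generated_text"])
```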
Laokoon#9137: Yes, that's why I asked for smaller models. To "play" on my local machine
alexyz#3459: There should be an announcement lol |
alexyz#3459: or should have been one
alexyz#3459: because they are useful for stuff sometimes
alexyz#3459: like for local machines
EricHallahan#1051: Ironically, we really *didn't* want to bring attention to them. We had such a flurry of activity after the initial release of the 1.3B and 2.7B checkpoints that we decided not to announce them and get a barrage of questions about how to make them work. (Whether this was a valid reason when this decision was made is murky.)
It is a bit of an open secret that 125M and 350M exist, as they are publicly listed on Model Hub for anyone to see and use; you just need to know that they are there.
Sid#2121: I didn't even know we released them
Sid#2121: I don't think the 350M is very good lol
Louis#0144: its not
Louis#0144: lol
Louis#0144: its ok for ranking
Louis#0144: but thats it
Sid#2121: pretty sure something got fucked during training
Sid#2121: who released them?
alexyz#3459: well the GPT-2 350M isn't that good either
Louis#0144: leo
EricHallahan#1051: It was Leo, he wanted to eval on them.
Sid#2121: i mean
Sid#2121: we can eval on them without releasing them to the public
Louis#0144: leo got excited
EricHallahan#1051: "It was easier" |
Louis#0144: lol
EricHallahan#1051: This was the official argument, and I had said to keep them private.
Sid#2121: can we take them down? lol
Sid#2121: like are speedrun people even still using them
EricHallahan#1051: I don't think they were ever used for #carp.
EricHallahan#1051: I tried to but the code would error.
bmk#1476: louis stop putting words in my mouth
Louis#0144: you were though?
Louis#0144: I didnt say you talked about being excited
Louis#0144: but you seemed excited to put it up
bmk#1476: :wat:
Louis#0144: brainfart
Sid#2121: you literally just said "leo got excited" lmao
Louis#0144: lol
Louis#0144: yes
Louis#0144: fixed
Louis#0144: im not putting words in anyones mouth
cat_#4534: I think the small models are useful for testing stuff, like I was trying to get some code working and just changing the size to 125M was easier than changing the imports to use a small GPT-2
Louis#0144: No
guac#4716: everyone relax it's 420 |
bmk#1476: i put them up because I wanted to eval on them, and it's just easier to eval if it's on hf, and i put it under eleuther org account because i didn't think anyone would notice if we don't announce it lol
Sid#2121: you don't have to release the models to the public to eval on them
EricHallahan#1051: Uh, it was noticed immediately lol
Sid#2121: you can just use the conversion script
Sid#2121: i think the 350m is completely borked
bmk#1476: it makes the pipeline significantly easier
EricHallahan#1051: I don't see how that is the case lol
bmk#1476: i could change it over to my personal account
EricHallahan#1051: You just point to the model directory instead of the name.
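i.e. something like this (sketch; the directory name is just an example):
```python
# load a converted checkpoint from a local directory instead of Model Hub
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("./gpt-neo-350M-converted")
tok = AutoTokenizer.from_pretrained("./gpt-neo-350M-converted")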
bmk#1476: also nobody complained at the time lol
bmk#1476: like it wasn't a secret
Sid#2121: I was never asked at the time
EricHallahan#1051: We should be releasing new models at these sizes eventually anyway.
StellaAthena#3530: > i put it under eleuther org account because i didn't think anyone would notice if we don't announce it
This will always be false.
bmk#1476: you can take them down, i don't care
EricHallahan#1051: It doesn't matter now, it is the internet.
bmk#1476: I'll just put them in my personal account next time
Teemochu#8740: This but somewhat unironically (cartoon/stylized in particular)
bmk#1476: or maybe you can move these to my personal account or something |
Teemochu#8740: How AI Dungeon is still on the app store is beyond me :abteehee:
EricHallahan#1051: No, just put them somewhere else next time. They don't need to be on Model Hub period.
bmk#1476: what's wrong with putting them on my personal account
bmk#1476: nobody is looking at my account lol
bmk#1476: I'll avoid putting gptneo in the name
EricHallahan#1051: Now that we are discussing it openly they are lol
bmk#1476: that.. misses the point completely
bmk#1476: nobody is going to see lg/eval-test-117M and think "hmm yes this is the official eleuther gptneo something or other"
Louis#0144: cant we host private models
bmk#1476: that costs money
EricHallahan#1051: No, you need to pay for that.
Louis#0144: TIL
Fetus Boy#5553: I'm somewhat new to the GPT scene, and I have a question as I set up GPT-Neo. For the dataset, it's giving me the error "IndexError: list index out of range" when tokenizing my dataset. I have one file that's around 10 MB. Am I supposed to split it into smaller pieces, and if so, on what metric should I split it?
The line in the documentation, "Your data must either be in the form of lots of normal .txt files (one document per file)", is a bit abstract for my understanding.
bmk#1476: you probably dont need gptneo
bmk#1476: what hardware are you using?
Fetus Boy#5553: I have a ryzen 5 5600x and a 3070
bmk#1476: you dont need gptneo
Fetus Boy#5553: Why not?
bmk#1476: gptneo is for when you have a lot of hardware |
Fetus Boy#5553: I see. I would like to fine-tune the larger model sizes, is there a way to do that?
bmk#1476: look for other methods of tuning gpt2 models
bmk#1476: some google search terms for you: `gpt2 fine tuning huggingface`
Fetus Boy#5553: Thats what ive been doing previously, but would like to expand the possibilities.
Fetus Boy#5553: I've seen some Twitter bots that use gpt-neo, did they have to rent cloud compute for a task like that?
bmk#1476: the overhead of using gptneo isnt worth the trouble at your scale
bmk#1476: also i dont even know if you can get it running on gpus
bmk#1476: we've only ever tested it on tpus
Fetus Boy#5553: huh. it has documentation for it according to the github repo
StellaAthena#3530: @bmk We have gotten it working on GPUs
bmk#1476: anyways, i wouldnt recommend it
Fetus Boy#5553: In your experience, how much raw text would be needed to make utilizing something like gptneo worth it?
bmk#1476: just use huggingface
bmk#1476: train a neo model through huggingface
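something along these lines should work (untested sketch; the file path and hyperparameters are placeholders, check the HF docs for the current recipe):
```python
# untested sketch: causal-LM fine-tuning of the 125M checkpoint on a single .txt file
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

model_name = "EleutherAI/gpt-neo-125M"            # small enough for one consumer GPU
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = TextDataset(tokenizer=tok, file_path="my_data.txt", block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm=False)

args = TrainingArguments(output_dir="finetuned",
                         per_device_train_batch_size=1,
                         gradient_accumulation_steps=8,
                         num_train_epochs=1,
                         fp16=True)
Trainer(model=model, args=args, data_collator=collator, train_dataset=dataset).train()
```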
Louis#0144: doing beam search with 1.2mil beams
Louis#0144: AMA
Louis#0144: "How long per sentence?"
Louis#0144: 17hrs
zphang#7252: https://cdn.discordapp.com/attachments/729741769738158194/834204092045328434/v1knFps.png
Sphinx#2092: You in 17 hours https://cdn.discordapp.com/attachments/729741769738158194/834205902340227082/giphy.gif |
Louis#0144: LMAO
Louis#0144: nah
Louis#0144: Ive done this before
Louis#0144: I got a paper out of it
Louis#0144: ๐
Louis#0144: the beams are split into groups of 4
Louis#0144: each group has a different prompt
Louis#0144: a disjoint ranker ranks the output every token
Sphinx#2092: Doesn't sound like beam search at that point.
Louis#0144: Its POP
Louis#0144: which is like
Louis#0144: this weird nested DAG beam search
Louis#0144: where every vertex is sorted too
Sphinx#2092: I'll take your word for it. I've always been both disappointed and slightly relieved that anything above beam size 5 is pretty shitty without reranking.
Louis#0144: I am reranking
Louis#0144: stella knows the project im talking about
Louis#0144: she can confirm its cool af
alexyz#3459: https://twitter.com/ak92501/status/1384670341637738496
alexyz#3459: look at this majestic creation
gwern#1782: so, 8 GPUs |
Jianlin Su#3718: roformer: https://arxiv.org/abs/2104.09864
EricHallahan#1051: We just threw it into the blog a minute ago.
gwern#1782: oh, where?
Louis#0144: LMAO
guac#4716: (don't see it on the eleuther.ai either)
Louis#0144: you fucking sniped the original authors
Louis#0144: nice
cfoster0#4356: Nah it's not posted yet
gwern#1782: well hurry up, I have a tweet to make
Kia#2550: :comehere:
Kia#2550: Where's the github link I want to read
Kia#2550: Awesome
EricHallahan#1051: https://blog.eleuther.ai/rotary-embeddings/
guac#4716: interactive visualization good yob eric lol
IKEA#9631: Shame layout is broken on mobile though:zucc:
guac#4716: the code snippets are borked
guac#4716: (chrome)
guac#4716: https://cdn.discordapp.com/attachments/729741769738158194/834235887604072468/Screen_Shot_2021-04-20_at_9.15.29_PM.png
bmk#1476: ~~just get a wider screen lol~~
Kia#2550: I can't read it damn mobile, It's probably lagging |
Kia#2550: But cool research or blog
StellaAthena#3530: @Kia Yeah the site is wack on mobile. Soz, we aren't web devs ๐ฆ
Kia#2550: Oh, that's probably just my phone or something, I'm not really complaining... Sorry for judging
EricHallahan#1051: No, we'll need JPrester to fix it up for us.
Kia#2550: Umm, Yeah...But awesome research
Eigen#6630: Hello. I'd like to start doing research on the intersection of Reinforcement Learning and Graphs, what's the procedure to get started in this group?
Dicky the sexy diesel#7454: Demo for gpt-neo? online?
Daj#7482: Hey! We don't really have a formal process, usually you'd find a few people that wanna work on your project, put together a short google doc explaining the project and what resources you need, and talk to level-5 people about setting things up
Daj#7482: We don't set up demos/APIs, you'll have to look elsewhere
IKEA#9631: What's up with all the 14 year olds and furries joining recently lol
fristiloverke#4159: https://cdn.discordapp.com/attachments/729741769738158194/834374737416421426/unknown.png
Daj#7482: I blame AI Dungeon lmao
mgostIH#0245: I am neither of those btw
kindiana#1016: That's what a 14 year old furry would say
mgostIH#0245: uwu
whale#4187: Hello, just found this server. I am currently working on a model similar to DeepMind's starcraft II model to play the board game Catan. Nothing works yet but could be fun
chilli#5665: Any thoughts on this? https://twitter.com/RishiBommasani/status/1384831275421233158
chilli#5665: (friend of mine)
Sora#8531: May I ask what's level-5 in this context? And how do you know who is level 5?
Daj#7482: Blue and purple names, regulars who have access to the hardware and organize stuff |
Sora#8531: Okay thanks! Also sorry for being annoying, but what do the other colors mean? I noticed there's also dark green, light green, and something that is like light purple/violet
AI_WAIFU#2844: there's stories behind a bunch of those...
Daj#7482: Light Green is just Stella, who is probably our main organizer for lots of things; the rest, as AI_WAIFU says, are generally just vanity roles that have stories/in-jokes attached to them
Daj#7482: Colored names are usually people that have been around for a while, the only notable ones are the L5 roles Blue/Light Green/Purple
Daj#7482: We probably should clarify this more widely
EricHallahan#1051: https://eleuther.ai/faq
Daj#7482: Do we have a FAQ about this?
EricHallahan#1051: There is a very small section that mentions role colors in the context of questions.
IKEA#9631: also maybe sort the member list on the sidebar by rank
Daj#7482: We might wanna add an explicit question then I guess
IKEA#9631: like, you know... every other server in existence lol
AI_WAIFU#2844: From what I gather this seems to put the embeddings before the attention, while the key insight here is exploiting the dot product to get relative position
Daj#7482: Sure I guess, we just don't think about "rank" much lol
Daj#7482: So far everything has been just soft connections, but it might make sense to formalize some stuff as we grow
IKEA#9631: i guess it could at least help with newbies like "who do i need to talk to to do stuff" or "whos in charge"
Daj#7482: We wanna discourage rank hierarchy norms where possible
Daj#7482: but yea tbh it's mostly just us not thinking about it much lol
EricHallahan#1051: ```md
#### Q: *Where can I go if I have more questions?*
|
A: [Discord](https://discord.gg/avxKQUv2fW) is the best place for that. Our founding members appear in {{<discord/handle drole="O5" name="purple">}} and our core contributors appear in {{<discord/handle drole="level5" name="blue">}}. They will be able to provide helpful guidance or answer questions.
However, we ask that you do not expect us to be your tech support; those who contribute to EleutherAI do so in their free time and tend to prefer contributing to projects rather than debugging your problems. We recommend consulting the corresponding documentation before asking us for help. If you think you have found a bug, please consider opening an issue on [GitHub](https://github.com/EleutherAI).
```
Daj#7482: Eric way ahead of us ๐ฏ
Sora#8531: Yeah I just re-read it
Sora#8531: Thanks
Daj#7482: I appreciate feedback like this from newcomers, since as a veteran it can be not-obvious how confusing or not things are to outsiders
EricHallahan#1051: This was recently modified to make it clear that regulars do not speak for EleutherAI. (i.e. people like Gwern shouldn't be asked questions about operations)
StellaAthena#3530: @Daj I set L5 and Regular to display separately to see how it looks. I can't do the same with O5 because I'm not O5.
EricHallahan#1051: That is way more useful IMO.
Daj#7482: I always have the members thing closed so I have no strong opinions, looks fine to me
Daj#7482: I think we can just have O5 be in L5
EricHallahan#1051: It helps in the decision to ping someone.
EricHallahan#1051: You can look at who is online.
mgostIH#0245: If there are O5s then #memes must be the SCP containment chamber
Sora#8531: It certainly looks much prettier and organized.
On another note, is there anyone here who works in AI ethics? Are there any guidelines for "filtering" large piles of text/images scraped from the internet to address things like racism, sexism and so on? As in, if you see that in a particular dataset most or at least the top adjectives describing a group are "inappropriate", should you filter them out manually?
Sora#8531: I don't want to cause any controversy, just seriously curious if there's any guidelines on this area, to have as reference when doing my own work
EricHallahan#1051: Stella would probably be able to comment on that. |
StellaAthena#3530: @Sora Whatโs the context? Where are you getting the data from, what are you using it for?
Sora#8531: Tag generator for pictures
Sora#8531: I guess you guys are familiar with Danbooru20xx
Sora#8531: Anime pictures*
Daj#7482: I'm pretty sure anything involving anime is unethical
Daj#7482: :berk:
StellaAthena#3530: Yeah
StellaAthena#3530: My first pass answer is "if you don't trust the data to not be racist or sexist, are you sure you trust the data to be otherwise accurate?"
StellaAthena#3530: Typically (not always, but typically) the answer is no.
Sora#8531: Ugh. How does this fit into large language models as a whole?
cat_#4534: Danbooru tags should be pretty accurate overall and I believe the tagging policy is that basically only things that can be visually discerned from the image should be tagged, so if a character has green eyes, but their eyes are closed on the image it should not be tagged with green eyes and so on
Sora#8531: Probably we could make a case for anything to be racist or sexist or controversial to a certain degree, and noisy to different degrees
Sora#8531: I mean there's a reason why people have made posts about GPT being racist or something. I understand why the problem happens, but what I'm wondering is more whether there's a solution or if the consensus is "well, that's unfortunate"
AI_WAIFU#2844: Given that danbooru tags tend to be mainly useful for coomers to find the right fap material, you might run into some issues depending on your target audience.
cat_#4534: I believe in the case of gpt-neo one example of an excluded dataset was the US congressional record
AI_WAIFU#2844: On the broader issue, what I gather is that right now it's unfortunate; you can filter, but it's gonna be crude, and we need better methods going forward. Ideally ones where an LM can be exposed to stuff we don't want it to say and actually be able to leverage that data to have a better understanding of what we want it to avoid.
AI_WAIFU#2844: I also personally advocate for decoupling the data/pretraining process from the filtering/tuning step, so that individuals can tune LMs to suit their needs, based on their local community norms.
StellaAthena#3530: Relatedly, see Section 6 of the Pile: https://arxiv.org/abs/2101.00027
StellaAthena#3530: Those two options do not partition the space of possibilities. "Our algorithms suck" and "probably but we don't know how" are other important possibilities that IMO are much more realistic
Sora#8531: Thanks! This is exactly what I was looking for. |
StellaAthena#3530: It's worth keeping in mind that we are collecting data, not training a model. I do think that's a meaningful difference
Jianlin Su#3718: Complex order in https://openreview.net/pdf?id=Hke-WTVtwr is a complex absolute position encoding for complex networks. It is just applied to the input token embeddings. RoPE is a real absolute position encoding, equivalent to relative position encoding when it is applied to Q, K of self attention .
Jianlin Su#3718: Complex order is only for complex networks. It is like Sinusoidal position encoding for complex transformers.
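A minimal PyTorch sketch of applying RoPE to q/k (interleaved-pair form; real implementations cache the sin/cos tables and handle dtype/layout details):
```python
import torch

def apply_rope(x, base=10000):
    # x: (batch, seq, heads, head_dim); head_dim must be even
    b, n, h, d = x.shape
    pos = torch.arange(n, device=x.device, dtype=x.dtype)
    inv_freq = base ** (-torch.arange(0, d, 2, device=x.device, dtype=x.dtype) / d)
    angles = pos[:, None] * inv_freq[None, :]          # (seq, d/2)
    cos = angles.cos()[None, :, None, :]               # broadcast over batch and heads
    sin = angles.sin()[None, :, None, :]
    x1, x2 = x[..., 0::2], x[..., 1::2]                # adjacent dims form rotation pairs
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin               # rotate each pair by its position angle
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# q, k = apply_rope(q), apply_rope(k)  # then compute attention scores as usual
```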
StellaAthena#3530: My reply https://twitter.com/BlancheMinerva/status/1384873999449169923?s=20
DoesThisUnitHaveASoul#7264: so uh, I was wondering if someone could help me find the right person in here
DoesThisUnitHaveASoul#7264: I saw your rotary positional embedding implementation, and I worked on something last year, in the space of associative/relational-style reasoning (i.e. transformers, relational networks etc), which I believe can do better than rotary positional embeddings. I'm actually a recent meta-learning PhD graduate, working as an RA and teaching instructor at the University of Edinburgh. If you are up for it, we could have a nice collab.
Sid#2121: Hey @DoesThisUnitHaveASoul ! Actually the original author of RoPE is in here, so he's probably the best person to talk to @Jianlin Su (hope you don't mind me pinging you Jianlin)
DoesThisUnitHaveASoul#7264: Neat! Gotta love the internet.
Teven#6831: btw the link to the arxiv is broken at the end of the blog
Louis#0144: Teven tell stas we owe him a beer
Louis#0144: pls
Teven#6831: there's an extra comma at the end of the arxiv URL
Louis#0144: He saved our asses
Teven#6831: haha what has he been up to again
Louis#0144: he fixed Neo NaN'ing out in fp16 mode
Teven#6831: ah yes the DS bug ?
Louis#0144: yeah
Louis#0144: going to do the first run without a NaN today
Teven#6831: that sounded annoying from what he said
Louis#0144: it was *awful* |
Louis#0144: we had 3 engineers working on it here too
Louis#0144: we made almost zero progress
Teven#6831: but didn't follow myself too much, most of the discussion with the DS team is on Teams rather than Slack
Louis#0144: now all we need to do is fix the GPUs randomly locking when the world size gets too big
Louis#0144: (not for Stas thats for us dw)
Teven#6831: haha yeah, I'll transmit though !
Louis#0144: ty
EricHallahan#1051: :gameryes:, I haven't had time to fix it.
Louis#0144: lmao
Louis#0144: youre using gameryes as a word
bmk#1476: gameryes
Louis#0144: How the :goose2: are you?
Teven#6831: woops, sorry if that's already been reported
EricHallahan#1051: No, I had noticed it myself.
Deleted User#0000: sorry, misread and thought you said you had something better
Louis#0144: he did
Kharr#7888: the question is not "is there something that can do better" but rather "is there something that will stack" and do better than both ๐
ethan caballero#6044: has @chilli tested whether relational_networks/etc. improve #scaling-laws ?
DoesThisUnitHaveASoul#7264: Also, one more question. Is anyone aware of any transformer papers where the authors tried to reduce the context size as the transformer gets deeper using some kind of soft-attention based pooling layers?
DoesThisUnitHaveASoul#7264: Like, it's a super obvious way to reduce the computational complexity of a transformer |
Deleted User#0000: @DoesThisUnitHaveASoul the closest is probably https://arxiv.org/abs/2005.00581
DoesThisUnitHaveASoul#7264: Like, say, if you start with 100 as your context length. What stops you from using some clever attention pooling layer that reduces it to 10 weighted averages, which now becomes your new context.
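Roughly what I mean, as a toy sketch (arbitrary sizes, not from any particular paper):
```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Compress a sequence to k learned 'summary' slots via cross-attention."""
    def __init__(self, dim, k=10, heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(k, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                              # x: (batch, seq, dim)
        q = self.queries.unsqueeze(0).expand(x.size(0), -1, -1)
        pooled, _ = self.attn(q, x, x)                 # (batch, k, dim)
        return pooled

pool = AttentionPool(dim=512, k=10)
print(pool(torch.randn(2, 100, 512)).shape)            # 100 tokens in, 10 out: (2, 10, 512)
```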
whale#4187: anyone had a look at this? it's one of those "idea" papers, which I know some do not like
whale#4187: https://arxiv.org/abs/2102.12627
Deleted User#0000: https://openreview.net/forum?id=WlT94P_zuHF
Deleted User#0000: similar lines of multi-scale transformer, but recurrently
Deleted User#0000: neither are really used that much, yet
DoesThisUnitHaveASoul#7264: thanks @Deleted User. I'll have a look. I was just kinda surprised this wasn't the first way people tried to make things more efficient
cfoster0#4356: We've had a couple discussions about it! If you search for "GLOM" you'll probably find some of them
DoesThisUnitHaveASoul#7264: I mean if you are going to name your paper Attention is all you need, might as well have attentional pooling in there
DoesThisUnitHaveASoul#7264: Where do you guys get compute from btw? Do you have a grant or some other source of funding other than your own pockets?
StellaAthena#3530: We are generously funded by GPU donations. We are also part of a program that Google runs where they give TPU access to non profits and independent researchers
DoesThisUnitHaveASoul#7264: That is really interesting. Especially the Google TPU part. Can you point me to a link for that? Can anyone apply?
AI_WAIFU#2844: https://sites.research.google/trc/
DoesThisUnitHaveASoul#7264: @AI_WAIFU Love your profile pic
AI_WAIFU#2844: Make sure to use it to do something cool, then email them explaining what you did and ask for more time/compute when your trial is over.
DoesThisUnitHaveASoul#7264: Yeap. Sounds awesome.
DoesThisUnitHaveASoul#7264: meta learning is expensive. meta-learning with transformers is meta-expensive.
StellaAthena#3530: On paper it's a trial period only, but it's pretty easy to get extensions and additional compute if you keep doing cool things with it
AI_WAIFU#2844: ^ |
DoesThisUnitHaveASoul#7264: thanks so much both! This is really neat ๐
DoesThisUnitHaveASoul#7264: my research group has access to about 40 GPUs for all 12 of us
AI_WAIFU#2844: also if you just want to get your feet wet with TPUs, google colab makes it easy to get started
DoesThisUnitHaveASoul#7264: so, not enough, really
DoesThisUnitHaveASoul#7264: Yeah I played with Colab before. Hell, I even wrote a tutorial for MSc students on how to use GCP. Out of 1000 students 978 got it working without a hitch xD
AI_WAIFU#2844: But be warned tpus are cursed, and using them can be non-trivial, especially if you need to do anything non-standard.
StellaAthena#3530: They're definitely the emergency button.
DoesThisUnitHaveASoul#7264: right
DoesThisUnitHaveASoul#7264: wouldn't pytorch lightning alleviate that?
AI_WAIFU#2844: I haven't heard of anyone having a good time using pytorch with TPUs
DoesThisUnitHaveASoul#7264: Apparently pytorch lightning should alleviate the issues with compatibility. Or perhaps that new 'accelerate' library that came out recently.
DoesThisUnitHaveASoul#7264: But I hear you
DoesThisUnitHaveASoul#7264: Will report back if I get to try them with Pytorch.
AI_WAIFU#2844: Yeah, definitely let us know if you get anywhere, pytorch has it's advantages.
DoesThisUnitHaveASoul#7264: After being a hardcore TF guy for 2 years, I switched to Pytorch out of the realization that I could no longer argue with myself in any meaningful way about why I should be using TF, other than that I had learned the 'tf way' really well. I can definitely say that Pytorch is like an inviting warm bath in which to do research, while TF is basically a cold shower that unpredictably turns hot when you least expect it. Yes, TF works better with TPUs, but come on. Even Google basically converted TF into Pytorch with TF2.
DoesThisUnitHaveASoul#7264: Then JAX came out. Which is neat, but its error reporting system is a major pain at this point.
DoesThisUnitHaveASoul#7264: It has certain TF-esque qualities to it that worry me tbh
chilli#5665: the way I view it is: Jax definitely sacrifices some on usability in comparison with PyTorch, but in exchange it gets some cool advantages.
nz#9710: I'm curious, as someone who is just getting his feet wet with JAX, could you go more in depth about the issues you found with it?
nz#9710: I know for example that tools have been developed to better debug JAX programs (see https://github.com/deepmind/chex#fakes-fakepy), did you by chance try them? |
chilli#5665: I mean, that's the problem lol
chilli#5665: you need to develop tools to better debug JAX programs
chilli#5665: you don't need to develop anything to better debug PyTorch programs
DoesThisUnitHaveASoul#7264: Well. Imagine you are trying to build something transformery from scratch in Pytorch. It's pretty intuitive to do so, and you can test, and when you get errors they mostly make sense. When they don't, a google search will reveal context that will help you figure it out. With JAX, most of the error messages, especially related to grads, are really, really unintuitive and generic, and there's next to nothing online at this point. You dive into the codebase, and try your best, and eventually you figure it out. Days go by, and you realise you've spent days doing something that you could have done in half a day in PyTorch.
DoesThisUnitHaveASoul#7264: So you stop.
EricHallahan#1051: A little late to the party, but check out the new and improved ~~Freddy Fazbear's Pizza~~ FAQ, updated yesterday:
https://eleuther.ai/faq
chilli#5665: it's like back when TF was in graph mode, and had their `tf.Print` something
nz#9710: that's fair. still though, I would be curious about whether these tools resolve the issue (and make debugging JAX code easy to do)
chilli#5665: like sure, for a lot of use cases `tf.Print` is the same thing as `print`
DoesThisUnitHaveASoul#7264: Exactly. This is a major part of it.
chilli#5665: but just the fact that you have a completely different paradigm with various strange mismatches is hard to deal with
chilli#5665: btw, I would still encourage everybody to try out Jax ๐
chilli#5665: I think they have some really great ideas, and I think understanding them would help you as a researcher
DoesThisUnitHaveASoul#7264: I have a feeling that the core idea behind JAX, that is jit, is a really important point. vmap especially.
chilli#5665: imo, understanding how vmap works is very cool
DoesThisUnitHaveASoul#7264: Vmap would make meta-learning in parallel tasks and evolutionary algorithms very easy and highly efficient. If you want to do the same in Pytorch you need specialized layers like so https://github.com/pytorch/pytorch/issues/17983
bmk#1476: where can I go to learn how all the different *map s work
nz#9710: the autodidax if you want to understand how they work internally, or jax-101 if you want an API overview
chilli#5665: there's some work on this ๐ |
AI_WAIFU#2844: the jax docs afaict
chilli#5665: the important thing about vmap is just understanding that it's a code to code transform
nz#9710: https://jax.readthedocs.io/en/latest/autodidax.html
chilli#5665: there's also a really cool notebook for xmap
DoesThisUnitHaveASoul#7264: this is also nice https://jax.readthedocs.io/en/latest/notebooks/quickstart.html
bmk#1476: so the whole code as data thing?
AI_WAIFU#2844: also just playing around is arguably the best way
bmk#1476: going full lisp
chilli#5665: yeah, kinda
AI_WAIFU#2844: not quite but getting there
chilli#5665: I think once I understood that I started having a lot of ideas about transforms I wanted
chilli#5665: lol
bmk#1476: so is it the same as jit tracing/ast-parsing?
nz#9710: cool! I hadn't seen it. (https://jax.readthedocs.io/en/latest/notebooks/xmap_tutorial.html for those interested)
bmk#1476: in terms of how it does it
bmk#1476: or is it totally different
chilli#5665: I think the best way of thinking of what it does is as a dispatcher system
bmk#1476: ?
chilli#5665: Like
chilli#5665: `torch.dot(VmapTensor, x)` redispatches to `torch.mm(Tensor, x)` |
chilli#5665: So like, when you're executing your operation on a VmapTensor, you don't execute the original operation, you redispatch to another set of operations.
bmk#1476: err
bmk#1476: so it's sort of like a context manager
chilli#5665: hmm, kinda
chilli#5665: but the contextmanager is on the Tensor itself
chilli#5665: Oh, and also, they can be nested.
bmk#1476: o.O
bmk#1476: so you'd have a VmapVmapTensor?
chilli#5665: So, for example, if you have `VmapTensor(VmapTensor(Tensor))`
bmk#1476: oh
AI_WAIFU#2844: you can do some wild shit
chilli#5665: `torch.dot(VmapTensor(VmapTensor(Tensor)), x)` redispatches to `torch.mm(VmapTensor(Tensor), x)`, which redispatches to `torch.bmm(Tensor, x)`
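in JAX terms the same nesting looks like this (quick sketch):
```python
import jax
import jax.numpy as jnp

def dot(a, b):                               # written for single vectors: (d,) . (d,) -> scalar
    return jnp.dot(a, b)

x = jnp.ones((8, 16))                        # one batch dim
y = jnp.ones((4, 8, 16))                     # two batch dims

print(jax.vmap(dot)(x, x).shape)             # (8,)    one level of batching, mm-like
print(jax.vmap(jax.vmap(dot))(y, y).shape)   # (4, 8)  nested, bmm-like
```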
bmk#1476: so basically it hides the complexity of figuring out how to vectorize something inside of pytorch, rather than leaving it in your code
chilli#5665: yes
chilli#5665: basically
chilli#5665: And also allows you to not need to think about how to batch your code when writing it
chilli#5665: Because you can always autobatch it later
bmk#1476: that sounds like some extreme functional programming stuff
bmk#1476: i like it
chilli#5665: but I think vmap really shows off how cool it can be in the context of other transforms |
chilli#5665: for example, maybe you don't have `VmapTensor(VmapTensor(Tensor))`
chilli#5665: maybe you have
chilli#5665: `VmapTensor(GradTensor(Tensor))`
chilli#5665: for the purposes of this just imagine that `GradTensor` is responsible for doing forward-mode AD
bmk#1476: i dont see how this is different from the other case
bmk#1476: in terms of coolness
chilli#5665: well, it allows you to do something that's not easy to do in say, PyTorch
bmk#1476: isnt this just what you get when batching normally
chilli#5665: no, that's `GradTensor(VmapTensor(Tensor))`
chilli#5665: ๐
bmk#1476: oh
bmk#1476: then what does this do
chilli#5665: So `GradTensor(Tensor)` gets you the gradient of the output wrt your current tensor, right?
chilli#5665: so `VmapTensor(GradTensor(Tensor))` gets you the gradient of each output wrt every single tensor in your batch
chilli#5665: I think the `VmapTensor` notation is getting a bit cumbersome, so I'll switch over to functions instead
chilli#5665: i.e.: `f(x)` takes in a scalar and produces a scalar
chilli#5665: `grad(f)(x)` takes in a scalar and produces another scalar (the gradient of `f(x)` wrt `x`)
chilli#5665: `vmap(f)(x)` takes in a vector and produces a vector
chilli#5665: `grad(vmap(f))(x)` is ill-defined, because `grad(f)` only makes sense for a scalar output `f` (otherwise, you're computing the jacobian)
chilli#5665: `vmap(grad(f))(x)` gets you the gradient of `f(x)` wrt `x` for every element in your batch, and takes in a vector and returns a vector
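concretely, in JAX (tiny sketch):
```python
import jax
import jax.numpy as jnp

def f(x):                                   # scalar -> scalar
    return jnp.sin(x) * x

xs = jnp.arange(5.0)                        # a "batch" of scalars
print(jax.vmap(jax.grad(f))(xs))            # per-example gradients, shape (5,)
# jax.grad(jax.vmap(f))(xs) would error, since vmap(f) doesn't return a scalar
```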