bmk#1476: might be good for finetuning tho
bmk#1476: but it's <1gb if extracted
axiom#3599: :veiAw:
Dwarf#6935: How old is the data in the pile? I know it's a compilation of many datasets, but it's hard to find information on when the data was gathered. I think it could be very useful to list the recency of the datasets.
StellaAthena#3530: This information is in the paper appendix.
Dwarf#6935: Thanks Stella! I didn't think to look there.
𓅬 gabriel_syme 𓅬#3220: ah yes your favorite lab I remember :berk:
𓅬 gabriel_syme 𓅬#3220: all that great work, wasted, no?
StellaAthena#3530: Everything is in the appendix
cfoster0#4356: Incidentally, that lab's PI is also the author of the recent CVPR motion barring social media promotion of papers
cfoster0#4356: Again, great work but feel like some poor calls were made
𓅬 gabriel_syme 𓅬#3220: shocking
𓅬 gabriel_syme 𓅬#3220: imagine being a professor that believes in all this deeply
𓅬 gabriel_syme 𓅬#3220: like it's alright to say 'well, what can I do, I'm forced by this and that' but believe in it lol
Daj#7482: Imagine writing "Experts make scientific progress, not the general public" and thinking you're not the villain lol
StellaAthena#3530: > Elitism? No good sir, you need to be a level 7 wizard to reach this level of the ivory tower, and we weed out elitism at level 4
Dwarf#6935: "Charles! Call my Captain of the Guard. The peasantry are doing science again."
inox#5400: learned about these licenses this week and they're in all the pose modeling code, even in other labs because everyone started using SMPL 🤮
inox#5400: just use a real open source license
inox#5400: gross
cfoster0#4356: gimme dat GPL any day
cfoster0#4356: But yeah it's infectious in all the wrong ways
AI_WAIFU#2844: *grabs popcorn*
𓅬 gabriel_syme 𓅬#3220: Yay gpt-neo-1.3b finetuning running!!
𓅬 gabriel_syme 𓅬#3220: 10h per epoch seems kinda..nice?
6r1d#4829: How many epochs are there?
𓅬 gabriel_syme 𓅬#3220: I'm going to do 2 or 3 I think
𓅬 gabriel_syme 𓅬#3220: we only have the TPU for a week, 3 days left more or less
𓅬 gabriel_syme 𓅬#3220: and I want to try the 2.7b and 6b before thats done, and then check the distillation code in the future
&.#0001: https://arxiv.org/abs/2107.03374
quinn#9100: What's this from?
𓅬 gabriel_syme 𓅬#3220: sounded like Pratchett but it's been a while
StellaAthena#3530: I made it up
quinn#9100: Ah
Daj#7482: https://images-ext-1.discordapp.net/external/D8tLMEBWTwhZ6yfwM_IY96gDZM8sI1GRdc1qmUcaNZU/https/i.kym-cdn.com/photos/images/facebook/002/079/819/e14.jpg?width=493&height=671
kurumuz#5695: lmao
Louis#0144: @EricHallahan where's the blog reeeeee
Louis#0144: I was waiting for it
bmk#1476: there is no blog there is only goose
cfoster0#4356: Chill out. It'll be done when it's done 🤗
bmk#1476: I can say with 80% certainty that the blog post will be done before 175B is
Louis#0144: I'm just memeing
olives#2305: haha looks like someone brought down the web demo for 6b
olives#2305: 😦
kinoc#5731: https://bellard.org/textsynth/ <-- emergency auxiliary backup
Manny96#3437: Cool idea - Conv-GPT... spatial sequential model. Won't get into it, all good.
Manny96#3437: Applies to language models, also.
Maark#6960: oh wow this is really cool. the results seem pretty good! they say the hardest part is collecting the group of volunteers for the distributed deep learning
Kharr#7888: Lots of work published on conv in Transformers. Check arxiv.
charlie#9698: Hi everyone, I know this might not be completely related to GPTJ/neo but I'm getting a very weird bug when fine-tuning a BART model for an HTML formatter (adds HTML tags to any text). When training for more than 3 epochs, it just starts outputting garbage even though the loss and BLEU scores are better... I will appreciate any help/pointers, I explain more about it here: https://discuss.huggingface.co/t/bart-base-generating-completely-wrong-output-after-training-for-more-than-3-epochs/8173
Louis#0144: #off-topic
krigeta#6645: So after the new update is it possible to generate stories on the given situation?
Louis#0144: What
dms#2699: Do you guys have a way to donate sandwich/coffee funds so you can stay fueled up?
Louis#0144: no
EricHallahan#1051: Nope, it just isn't worth it for us to maintain something like that.
𓅬 gabriel_syme 𓅬#3220: sandwich? I'll have a sandwich
nshepperd#2316: launch your sandwich in the air from a catapult while shouting "sandwich for EAI!" and a passing goose will pick it up ~~but not necessarily deliver it~~
Teemochu#8740: Congrats on one weird year! May the next year take you on even more weird turns with the magic of AI.
-Archivist#7336: heyo, still learning and looking to clear up a few things as I read. So, training something like gpt-whatever, I know what a dataset is, got that bit down, but what is a parameter in this case? It's my very basic understanding that with the same base dataset you can train a model to some number of parameters and that's where the 117m, 1.5b bit comes from right? Soooo, what's a parameter? and is the _only_ reason not to go big right away time/cost to train or?
-Archivist#7336: If there's some training language models for dummies feel free to point me there, I'm spending all my time at the moment reading about this stuff
bmk#1476: more parameters means the model has more capacity so it costs more to train, yeah
bmk#1476: I like to think of parameters as knobs on a really really big box
EricHallahan#1051: I would say that is a good way to put it.
bmk#1476: when you turn the knobs, you change the behavior of the box
bmk#1476: in this analogy, a model architecture is a type of box, and of course for different types of boxes the knobs do different things and there are different numbers of knobs, and a trained model is a particular configuration of knob-turns
-Archivist#7336: so it's somewhat of an instruction? or scale, like a parameter to set the amount of... idk, red paint I use in this painting?
bmk#1476: people are sometimes sloppy about equivocation between the idea of a type of box, a particular box of a certain type that doesn't have the knobs dialed in, and a particular configuration of knobs for a particular box, because it's usually clear from context
alembic#5293: The first neural networks did indeed actually have knobs (potentiometers) rotated automatically by motors https://cdn.discordapp.com/attachments/729741769738158194/862741702133153802/iu.png
bmk#1476: these correspond roughly to architecture, model, and pretrained model I guess, though people use model to refer to all three of these things sometimes
-Archivist#7336: and then if so, where do the parameters come from? the numbers in relation to this confuse me, we're talking in billions so it's not like someone decided on these billions of parameters right?
-Archivist#7336: and is it as simple as _there's a nazi parameter_ we will want to turn that one down... ?
bmk#1476: so a model with a billion parameters means it has a billion knobs in total
bmk#1476: and we use this nice handy algorithm called SGD to help us turn the knobs for us until things work how we want
bmk#1476: because nobody has time to turn a billion knobs by hand
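(As a minimal sketch of what that knob-turning looks like, here is plain SGD fitting a two-knob "box"; the data, model, and learning rate are made up for illustration.)

```python
import numpy as np

# A tiny "box" with two knobs: slope w and intercept b.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 100)  # the behavior we want the box to copy

w, b = 0.0, 0.0  # starting knob positions
lr = 0.1

for step in range(1000):
    i = rng.integers(0, 100, 10)       # SGD: look at a random minibatch each step
    err = (w * x[i] + b) - y[i]
    w -= lr * 2 * np.mean(err * x[i])  # nudge each knob downhill on the squared error
    b -= lr * 2 * np.mean(err)

print(w, b)  # ends up near 3.0 and 0.5; a GPT does this with billions of knobs
```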
bmk#1476: well, no, but actually maybe
-Archivist#7336: I feel like this is going to take me awhile, you guys have merely cleaned the dirt off a frosted window so far
(I understand the general complexity, thank you for bearing with me)
bmk#1476: so conventional knowledge is that the knobs do wacky things and SGD has it all figured out but humans can't interpret it
bmk#1476: but there are occasionally papers that identify individual knobs (or PCAs of groups of knobs) that actually do human identifiable things
-Archivist#7336: > SGD has it all figured out but humans can't interpret it
that clicks
-Archivist#7336: down the SGD rabbit hole I go
bmk#1476: see: knowledge neurons for example
bmk#1476: recently people have started caring a lot about figuring out what the knobs actually mean
bmk#1476: actually I think DNA is a good analogy for parameters
Louis#0144: ~~Who said turn it down?~~
Louis#0144: Jkjk
bmk#1476: lots of genes don't do single things
bmk#1476: sometimes you have a weird complicated thing where you need this and that and that other gene together to do a thing
bmk#1476: parameters are generally like that, and it's surprising if they aren't
alembic#5293: Maybe folks disagree, but the Andrew Ng online course isn't bad if you want to learn fundamentals. If you just want to start using DL and learning the details as you go, the fast.ai online course is the way to go. (If you haven't already tried either/both :P)
CRG#8707: Interesting thing is, you can find that the nazi neuron is actually a mustache neuron and it's fused with the nintendo neuron <https://microscope-azure-edge.openai.com/models/contrastive_16x/image_block_4_7_add_1_0/20> https://cdn.discordapp.com/attachments/729741769738158194/862744822586933268/8f0514e31296d91c862146f4afb526e0.png
bmk#1476: also benq for some reason
bmk#1476: and call of duty?
bmk#1476: and also the words beauty, benefit, borders??
bmk#1476: this neuron is wack
bmk#1476: what do mustaches, swastikas, iron crosses, Wiis, beauty, benefit, borders, benq, call of duty, and bumper cars have in common????
CRG#8707: Wii -> mario -> mustache <- hitler <- nazis
Hasdino#6050: first time reading the story of eleutherai, gratz to all involved
bmk#1476: what about benq
CRG#8707: Something something "concepts so orthogonal/so unlikely to coincide that it's fine to reuse the same neuron"
EricHallahan#1051: ~~The logo looks like a mustache~~
CRG#8707: <https://distill.pub/2020/circuits/zoom-in/> https://cdn.discordapp.com/attachments/729741769738158194/862755230454382612/fa971f67bc4f32e39cfb02395d9f5d4f.png
bmk#1476: well now you have a fully general explanation for any neuron at all, and you have no explanatory power anymore
bmk#1476: neurons can represent both things that are conceptually related, and also things that are totally orthogonal? that's, like, *everything*
bmk#1476: everything is either conceptually similar or not conceptually similar
CRG#8707: It's pretty strange, I'd say that these models are just too tiny to not mix concepts, but apparently the bigger CLIP models didn't work well with feature visualization. (The opposite of what the "large models are disentangled / internally sparse" hypothesis would predict) https://cdn.discordapp.com/attachments/729741769738158194/862758244787552277/a8a3fb9e2a11461f4605fab7ad5f5807.png
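(For anyone who wants to poke at neurons themselves, feature visualization is a short gradient-ascent loop over the input image. A rough sketch on a torchvision ResNet rather than CLIP; the layer and channel index are arbitrary choices, not anything from the discussion above.)

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1).eval().requires_grad_(False)
acts = {}
model.layer3.register_forward_hook(lambda mod, inp, out: acts.update(feat=out))

img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    model(img)                                # the hook stores layer3's activations
    (-acts["feat"][0, 42].mean()).backward()  # push channel 42's activation up
    opt.step()
# img now shows whatever channel ("neuron") 42 likes; the multimodal CLIP neurons
# above were found with fancier, regularized versions of this loop
```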
𓅬 gabriel_syme 𓅬#3220: I love how an intuitive description of NNs turned into a deep dive on knowledge neurons :tribalism:
TruGerman#6672: Just woke up and saw the ping. I have to say, this blog post is one of the greatest and funniest things I've ever read coming from any kind of scientific organization. You'd think ML researchers are a bunch of grumpy middle aged people with no sense of humor whatsoever, but it seems like that's not quite accurate. Here's to another year of [*frantically points fingers in various directions*] whatever this is, except this time I'll be here to **o b s e r v e**, cheers :aPES_Beer:
But seriously, thanks for all the...stuff you've been doing, I'm pretty sure I can speak for all the people over at NAI/HAI/Whoeverelsemightbeusingyourmodels when I say you guys gave us hope and saved our AI-deprived asses amidst OAI's tyrannical reign. Keep doing...whatever it is you're doing. Well, time for me to fade back into obscurity:pepepoof:
Louis#0144: honk @TruGerman
TruGerman#6672: @Louis :goose: 🐣 🐦 honk
Louis#0144: confirmed goose
TruGerman#6672: Crap.
Teemochu#8740: As I said, "May the next year take you on even more weird turns with the magic of AI." :smugS:
EricHallahan#1051: But that isn't possible, it is only 🇩🇪.
TruGerman#6672: I should really get my head out of my ass and start learning about...this, sounds like [fun]
Dromarion#3383: I've been studying since the beginning of the year. It's hard since the coursework is pretty dense but being able to understand the conversations here makes it worth the effort.
TruGerman#6672: Yeah, summer break is coming up which means I'll have a lot of free time
Dromarion#3383: If you want to do the self study route like me, I'm taking the machine learning course by a Daniel Bourke on Udemy. Supplemented by the resources here
https://whimsical.com/machine-learning-roadmap-2020-CA7f3ykvXpnJ9Az32vYXva
TruGerman#6672: That is one hell of a roadmap, but it should give me an idea of what to do next, thanks!
Dromarion#3383: There's a video that walks through everything on it so you can follow along with that.
https://youtu.be/pHiMN_gy9mk
chilli#5665: This is too big a roadmap imo
TruGerman#6672: This is why I prefer physics :luls:
TruGerman#6672: Math is way too cursed
Dromarion#3383: Roadmap is kind a misnomer here since it's basically a list of resources in the form of a mind map. But yeah it's pretty thicc, it might be easier to navigate in another format.
gdawg16#0493: WHAT A GOOD BLOG POST
Louis#0144: lmao
Louis#0144: WHOS A GOOD GOOSE
Manny96#3437: None on Eleuther.ai publications and codebase
Louis#0144: I am home so I will retype this here. I am considering making a CV task where I take tuples of sequential pages from manga (3 or 4) and I split them up by panel. Each of these pages is a sentence and each panel is a token.
I encode each panel (independently) using a ViT and use the ordering as basically a ground truth. Given sentence permutation, masking, and sentence shuffling, I try to get some visually grounded BART to re-order them in the correct order.
Does this make sense as a task? It should capture *something* about grounding I think- if you can understand the ordering of visual events.
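(A rough sketch of that objective with stand-ins: a linear layer plays the role of the ViT, and a small transformer predicts each shuffled panel's original position. The panel size, dimensions, and encoder are placeholders, not a real implementation.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_panels, d = 4, 256
encode_panel = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, d))  # stand-in ViT
context = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2)
position_head = nn.Linear(d, num_panels)  # logits over original positions

panels = torch.randn(num_panels, 3, 64, 64)       # one "sentence" of panels
perm = torch.randperm(num_panels)                 # shuffling is the corruption
tokens = encode_panel(panels[perm]).unsqueeze(0)  # each panel encoded independently
logits = position_head(context(tokens))[0]        # (num_panels, num_panels)

loss = F.cross_entropy(logits, perm)  # ground truth: where each panel came from
loss.backward()
```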
Louis#0144: (I put this in the DALL-E discord on accident lmao)
Louis#0144: I think the requirement that the panels have to be disjointly encoded is the real benefit here
Louis#0144: I also think that varying size panels in the manga is gonna screw me over
Dromarion#3383: Which manga?
Louis#0144: Manga109
Louis#0144: its a big dataset of manga scans
Louis#0144: they look pretty high quality
Louis#0144: I want ERNIEv3 to be really good at anime and manga
Louis#0144: fwiw
Louis#0144: gonna weeb it really hard
Louis#0144: maybe add AO3
Louis#0144: (kidding about the last part dw)
Louis#0144: but yeah theres a lot of untapped potential in anime and manga for grounding
Louis#0144: nvm apparently manga 109 is just *covers*
Teemochu#8740: AO3 isn't manga
Louis#0144: no but its fanfic
Teemochu#8740: Yeah true
Louis#0144: and crossover stuff actually helps storytelling models a lot
Louis#0144: lol
Teemochu#8740: AO3 isn't the one that came to my mind first to match with a manga set
Louis#0144: ye the ao3 thing was a joke
Louis#0144: Im not adding ao3
Louis#0144: y'all can add smut after
Teemochu#8740: ~~Add the one with the numbers~~
TruGerman#6672: Louis contributing to the weeb community again, I see
Louis#0144: yes.
Louis#0144: im deciding to use manhau
Louis#0144: theres lots of stuff available
Louis#0144: and the paneling sizes are good
TruGerman#6672: So you gonna allow us to generate a fitting manga for the garbage we produce in [Insert AI storyteller here]?
Louis#0144: i dont work for nai
Louis#0144: also I am not interested in image generation
Louis#0144: lol
TruGerman#6672: Fixed it for you
Louis#0144: lmao
Louis#0144: im not working at latitude
Louis#0144: if thats what youre asking
TruGerman#6672: Nah, I was just joking
Louis#0144: anyway they have a DALL-E stack
TruGerman#6672: And NAI is a well known example, that's why I used it
Louis#0144: im doing grounding research right now
Louis#0144: literally exclusively for me
Louis#0144: #carp is controllable NLG + eval. I helped there and thats finishing up. I'm also writing a paper on CLIP. I dont intend to go back to image generation. GANs are a nightmare and I havent learned diffusion models yet
TruGerman#6672: Not just GANs
Noa Nabeshima#0290: Seems related to https://arxiv.org/pdf/2104.07143.pdf
EricHallahan#1051: https://arxiv.org/abs/2104.07143
Noa Nabeshima#0290: did you have the same thought as me at the same time?
EricHallahan#1051: No, I just wanted the abstract link. :berk:
Noa Nabeshima#0290: ah, that makes sense
nostalgebraist#3542: huh, this paper does not mean by "neuron" what i expected it to mean by "neuron"
nostalgebraist#3542: (nor what anyone *should* mean by "neuron," imo)
nostalgebraist#3542: > We used the final layer hidden state of each sentenceโs [CLS] token as its embedding. [...] For convenience, we identify a neuron with a basis vector in BERTโs 768-dimensional embedding space
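(Concretely, the paper's usage amounts to this sketch: a "neuron" is one coordinate of the final [CLS] vector, with no threshold or nonlinearity anywhere. The model and sentence below are placeholders.)

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

batch = tok(["a question about a song title"], return_tensors="pt")
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state  # (1, seq_len, 768)

cls_vec = hidden[0, 0]     # the [CLS] token's final hidden state
print(cls_vec[42].item())  # "neuron 42" is just coordinate 42 of that raw vector
```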
𓅬 gabriel_syme 𓅬#3220: You mean the brain? :berk:
nostalgebraist#3542: normally people mean "something with a threshold and a nonlinearity"
nostalgebraist#3542: so it would make sense for it to "fire" on a very small subset of the data
Noa Nabeshima#0290: I think in 'the building blocks of interpretability' (and I think related works?) they optimize the preactivations, before the nonlinearity
Noa Nabeshima#0290: I think this is the standard usage of neuron in interpretability papers
nostalgebraist#3542: of course an arbitrary basis of 768 dim will mix up all kinds of concepts, you can't parameterize all sentences with 768 knobs
nostalgebraist#3542: this is different though, there's not even an activation afterwards (in training there's linear + softmax over 2 dimensions)
nostalgebraist#3542: the relevant pre-activation would be in the 2d space right before the softmax
Noa Nabeshima#0290: Hmm in my internal definition for 'neuron', an activation/nonlinearity afterwards doesn't seem important. Is there a reason it seems important to carve up wordspace that way as opposed to this way?
Noa Nabeshima#0290: Also apologies if I'm misunderstanding you somehow
nostalgebraist#3542: (tangent?: looking at "building blocks of interpretability," it sounds like they optimize post-activation, given the comment *"As the name suggests, non-negative matrix factorization (NMF) constrains its factors to be positive. This is fine for the activations of a ReLU network, which must be positive as well."*)
nostalgebraist#3542: i have a few related intuitions about this... one of them is that the nonlinearity picks out a preferred basis
nostalgebraist#3542: so there's actually a "neuron 1," "neuron 2," etc
nostalgebraist#3542: (admittedly there are a few other things that break the symmetry in NNs, like adam and layernorm)
bmk#1476: hot take: neurons are the wrong abstraction in NNs and it's a bad thing that people focus so much on them
Noa Nabeshima#0290: Ah, good catch!
I don't know where that belief came from. Maybe there's a later blogpost where they switch over or something in the lucid documentation or maybe I'm remembering writings about optimizing pre-softmax floats instead of probabilities.
janus#0150: Looking only at the post-activation risks losing a significant amount of information, no?
bmk#1476: for one, when weights are shared, like in a transformer and a CNN, people often equivocate between whether each individual activation is a neuron
nostalgebraist#3542: also, when the nonlinearity is not symmetric, it defines a concept of "being activated by an input" with no corresponding concept of "being anti-activated"
nostalgebraist#3542: which makes sense for concepts, there's generally not like, exact Anti-Questions-About-Song-Titles and such things
not in the same way that Questions-About-Song-Titles are real things
janus#0150: Like if I want to search for 'concepts' in a networks brain, it could make sense to look at post-activation values, as these are aggregates of upstream information, or at pre-activation vectors, because this is data the network has learned to group together.
bmk#1476: also if you have any activation function that doesn't look vaguely like a sigmoid, which is basically every activation function that anyone still uses these days (sorry schmidhuber), the analogy doesn't make much sense either
bmk#1476: also while we're at it, the idea of layers is too ill defined and outdated as well
janus#0150: Could you elaborate? I think of a transformer as a feedforward network which is happening in multiple, discrete, sequential steps.
bmk#1476: do you count an entire transformer block as a layer, or each ff layer inside those blocks as a layer? is attention a layer? are activation fns layers? if you think that something has to have parameters to be a layer, what about parametric activation fns? what do you do about skip connections, do they count as not adding any layers? if so, what about something with tons of weird connections like inception? do you add up layers that are in parallel too? in which case, any linear layer can always be broken into two linear layers with a concat. oh and also what the heck do you do about RNNs, is each time step a layer? there are so many edge cases
bmk#1476: I can think of like 5 different justifiable answers to "how many layers does gpt3 have"
nostalgebraist#3542: i think of "layer" as "the unit such that the network looks like [input adapter] + N * [layer] + [output adapter]"
nostalgebraist#3542: some networks don't look like that, but it's a meaningful concept
janus#0150: Great list. That makes sense. I think there is a useful abstraction splitting the transformer into discrete blocks, but the word layers could be imprecise and not generalize well. Could you explain how the skip connections work in GPT-3?
bmk#1476: a resnet38 has 19 pairs of conv layers with skip connections, does that make a resnet38 have 19 layers?
bmk#1476: (or something like that, I don't remember the exact details of what the ends of it look like, but you get the point)
nostalgebraist#3542: maybe? i'm not too tied to the terminology
nostalgebraist#3542: just, you know, there really is a sense in which gpt3 consists of *something* copied 96 times
bmk#1476: yeah but there are also many other numbers that could be argued to reasonably represent a number of something in gpt3 in a way that it's hard to draw a crisp line to say which of the numbers are admissible
bmk#1476: like I think counting the ff and attn layers each as one layer is entirely reasonable
nostalgebraist#3542: do they all have the property where you can write most of the network like `[thing() for _ in range(N)]`?
nostalgebraist#3542: with the FF and attn you need to pass an arg telling `thing()` which one it is
kindiana#1016: what if they really are one layer (with parallel ff+attn) :berk:
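(The two block styles being contrasted, sketched with torch built-ins; dimensions and head counts are arbitrary. The parallel form is the GPT-J-style block where attention and FF read the same input, which is what makes "one layer or two?" genuinely ambiguous.)

```python
import torch
import torch.nn as nn

def make_ff(d):
    return nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

class SequentialBlock(nn.Module):
    """GPT-2 style: attention sublayer, then FF sublayer, each with its own residual."""
    def __init__(self, d=64):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.ff = make_ff(d)

    def forward(self, x):
        h = self.ln1(x)
        x = x + self.attn(h, h, h)[0]   # one "layer"? two?
        return x + self.ff(self.ln2(x))

class ParallelBlock(nn.Module):
    """GPT-J style: attention and FF both read the same normed input and are summed."""
    def __init__(self, d=64):
        super().__init__()
        self.ln = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.ff = make_ff(d)

    def forward(self, x):
        h = self.ln(x)
        return x + self.attn(h, h, h)[0] + self.ff(h)

x = torch.randn(2, 16, 64)
stack = nn.Sequential(*[ParallelBlock() for _ in range(4)])  # "[thing() for _ in range(N)]"
print(stack(x).shape)
```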
Noa Nabeshima#0290: https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside
bmk#1476: well, that implies resnet38 has 19 layers and VGG19 has 1 layer
kindiana#1016: also, I don't think layer counts really mean anything lol
bmk#1476: that's .. what I've been trying to argue
kindiana#1016: theres almost always a more precise way to characterize whatever quantity you are trying to say
nostalgebraist#3542: fair enough
regal-algorithm#6085: hey! The intro in #rules tells me to introduce myself if I want to get involved, so going ahead and doing that. I am Andrey Kurenkov, a 4th year PhD student at the Stanford Vision and Learning lab. I have mainly done research on learning algorithms for robotic manipulation (deep RL sort of stuff, with some supervised learning more recently). I just read the one year retrospective piece and found it pretty inspiring, so thought I'd get on here and see if I can get involved.
More info here: https://www.andreykurenkov.com/
Louis#0144: Firstly welcome!
Louis#0144: Secondly, what tickles your fancy?
Louis#0144: what would be ideal for you to be involved in?
Louis#0144: I do storytelling research mostly so I dont think I am personally a good match for a project for you to get involved in
Louis#0144: but theres plenty of projects going on
regal-algorithm#6085: hmm good question. I am not really aware of what projects there are. I guess something I could do with a commitment of a few hours per week that is useful, get my toes wet so to speak, so perhaps helping with some maintenance grunt work to start with.
Louis#0144: >maintenance grunt work to start with.
Louis#0144: someone get the infra work out
Louis#0144: lol
Louis#0144: jkjk
Louis#0144: Youre probably better off talking to bmk then I think
Louis#0144: @bmk
Louis#0144: oh wait actually we have short term ish interpability projects if that is of interest
Louis#0144: theres a job board
Louis#0144: "job"
Louis#0144: lol
Louis#0144: https://github.com/EleutherAI/project-menu/projects/1
regal-algorithm#6085: oh well well looks like I should check that out
AI_WAIFU#2844: Also feel free to pitch your own project if you're willing to put the work in.
AI_WAIFU#2844: We have tonnes of compute and a lot of it sits idle.
AI_WAIFU#2844: The main bottleneck is individuals willing to see projects through from start to finish
janus#0150: (fyi looks like the project-menu is out of date. There are many things missing)
regal-algorithm#6085: aint that always the case...
Louis#0144: oh uh that reminds me
Louis#0144: I think rotoBART is ready to train if theres any compute sitting around
Louis#0144: I'll talk with Stella tmrw
regal-algorithm#6085: actually, I did have one idea for a side project i've wanted to do... basically train a GPT-type model to go from short (~1 paragraph, like on rotten tomatoes) summaries of movies to their full plots (like on wikipedia). Not too hard to set up, just some scraping to get the dataset and then presumably fine-tuning a pre-trained model. Would that be something that fits? Idk how ambitious a project should be lol
Louis#0144: 1) wikiplots exists and is a really good dataset
Louis#0144: no need for scraping
Louis#0144: 2) I've tried this and you get really weird results
Louis#0144: what works way better is to go from summary + plot outline to wikipedia plots
Louis#0144: so using 6b as a seq2seq model
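(What "using a decoder-only model as a seq2seq model" can look like, sketched with a small stand-in model; the prompt format and the single made-up triple are illustrative only, and wikiplots itself would need to be parsed into such triples separately.)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")  # small stand-in for 6B
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

# hypothetical (summary, outline, plot) triples parsed out of wikiplots
triples = [("A farm boy joins a rebellion.",
            "boy finds droid; boy meets mentor; boy destroys battle station",
            "Luke Skywalker lives on Tatooine with his aunt and uncle. ...")]

model.train()
for summary, outline, plot in triples:
    text = f"SUMMARY: {summary}\nOUTLINE: {outline}\nPLOT: {plot}{tok.eos_token}"
    batch = tok(text, return_tensors="pt", truncation=True, max_length=1024)
    loss = model(**batch, labels=batch["input_ids"]).loss  # ordinary causal LM loss
    loss.backward()
    opt.step()
    opt.zero_grad()
```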
regal-algorithm#6085: ah interesting...
Louis#0144: plot outlines are also available in wikiplots
Louis#0144: but the results you get are weird nevertheless...
Louis#0144: not really publishable I think
Louis#0144: 🤷‍♂️
Louis#0144: still would be a very fun side project if you want me to guide you through it
Louis#0144: you could do it in a week easily
regal-algorithm#6085: yeah, maybe a good one to get my toes wet / get a better idea of what else I could do
Louis#0144: awesome
regal-algorithm#6085: and even if not publishable might be fun to write up in a blog post? just sayin, I enjoy doing that too
Louis#0144: yeah probably
EricHallahan#1051: Welcome!
regal-algorithm#6085: in any case, a good test to see if I can actually contribute meaningfully given time constraints etc.
Louis#0144: does anyone have that recent paper
Louis#0144: that uses a game theoretic based tokenizer
Louis#0144: its really weird
Louis#0144: I cant find it
Louis#0144: but they show how well it performs against a TF-IDF tokenizer
Louis#0144: ah found it
Louis#0144: https://aclanthology.org/2021.naacl-main.223/
Noyal#0385: By coincidence, I'm also a PhD student working on (sometimes robotic) deep RL who read the anniversary post today and was inspired to get involved!
Name's Riley Simmons-Edler, I'm a 6th year at Princeton. I like EleutherAI's mission and I'm curious about side projects to distract myself from thesis writing and the job search.
Noyal#0385: I've had some ideas about trying to get two language models to prompt-engineer each other into saying specific things kicking around for a while, though that might be a big project to take on.
Noyal#0385: *via RL
chilli#5665: Anything that has a "via RL" in it usually ends up becoming a big project haha
EricHallahan#1051: Friendship ended with RL, sequence learning is my best friend.
bmk#1476: RL is kil
Louis#0144: oh cool
Louis#0144: my coworker did this
Louis#0144: do you know Zhiyu Lin?
Noyal#0385: Oh cool! I don't think I know them, no
Noyal#0385: Did it work?
Louis#0144: id have to ask him
Louis#0144: I cant find his paper
Louis#0144: but i am pretty sure it did
Louis#0144: it was as an ablation in something else though
Louis#0144: so it didnt work particularly well
Louis#0144: LOL
Noyal#0385: Lol, guess that's pretty telling
Noyal#0385: Outside the rare case where the ablation ends up becoming the main method
Noyal#0385: The general thought was that it could be a cheap way to bootstrap a conversational agent that can have a conversation with some objective in mind (get the other guy to give you information/think positively of you, etc)
Em Elle#8886: Anyone ever think we might not need really intelligent chat bots or companion bots? because it's very likely that human conversational abilities will degrade over time, because of a deteriorating culture? and outside factors such as social media ?
My prediction is that something like GPT3 or GPT2 would converge to acceptance as Companion* AGI as we move forward through time. Thoughts? this is what I am finding building what I am building*
Louis#0144: really smart chatbots is going to be *massive* for the sex industry
Louis#0144: unironically
bmk#1476: > deteriorating culture
warning that this topic often leads to politrib
Fessus#9563: I maintain that this is going to lead to the end of the human race
Em Elle#8886: that's the funny part, if we ask the people we would be building these bots for they really don't care about that. At least from my sample size, of 20 who did the poll
Louis#0144: lol
Louis#0144: storytelling AI is going to be all erotica
Louis#0144: and ive accepted this
Louis#0144: im working on getting a dataset for ERNIEv3 rn
Em Elle#8886: yeah I can see that being a good use case but sex bots probably could run on 2010's technology
Louis#0144: and im setting it up so it works really well with manga/manhua
Louis#0144: for uh
Louis#0144: reasons
Louis#0144: because I know the audience thats going to be using it
Louis#0144: lol
Louis#0144: like a storytelling seq2seq model that is knowledge driven? pfft id be lying to myself if I said the main use case for that isnt obvious
bmk#1476: manhau? you mean manhua?
Louis#0144: I was actually *floored* today when in a grant proposal my advisor actually mentioned and discussed sex work chatbots
Louis#0144: lmao
Louis#0144: yeah typo
Em Elle#8886: that's probably what this tech will be used for tbh, thats sorta what I am using it for
Fessus#9563: "If you want a picture of the future, imagine virtual anime titties smothering a human face - for ever." -George Orwell, probably
Em Elle#8886: LOL yeah that's the future for sure, atleast in my project eventually
Noyal#0385: IIRC The big AI Dungeon user prompt leak a while back suggested ~50% of user content on that site might be pornographic (or at least NSFW), so the internet has already shown that this will happen.
inox#5400: showed this has happened and people won't pay much for it
Em Elle#8886: I guess what I am trying to say is that we are probably at the pinnacle of the technology, we are waiting for people to catch up culture wise and mind set wise.
Em Elle#8886: If we hit an AI wall or winter we're fine, people will catch up
inox#5400: the pinnacle of sex chatbot technology?
bmk#1476: I never thought people could get off to a chatbot, guess I need to refine my world model
inox#5400: feels like sex workers already got priced out of generating unbranded content a long time ago, now they have to manufacture authenticity with the porn to make money
Em Elle#8886: probably 2010's rule's based technology or just using GPT2 in a very short chat conversation mode
bmk#1476: what happens to the porn industry once we can generate videos end to end?
bmk#1476: of all industries related to video, seems like porn will die first
Em Elle#8886: I mean that can be done today, the company that builds the pipeline process and automates most of it with programming will win, not the AI researchers who cracked the model
bmk#1476: don't remind me :withered:
Noyal#0385: "the AI researchers who cracked the model" were still a necessary prerequisite for this to happen, don't forget ๐
Em Elle#8886: I know the sad part is, it won't be their names in the news
bmk#1476: tfw nobody cares about the researchers
Noyal#0385: Researchers gotta look out for each other
bmk#1476: meh who cares about the news, the part that matters is that they will probably get paid a pittance for their hard work
Em Elle#8886: This is why I left my ML masters program I saw that worrying trend, idgaf about research mentality in the business side
bmk#1476: can't pay rent with news mentions anyways
Em Elle#8886: this too
Em Elle#8886: you get paid with the networking and social opportunities for delivering
bmk#1476: it's a crime how little grad students get paid
inox#5400: there's zero money in random generated videos, all the performers have onlyfans and other revenue streams
Em Elle#8886: it's because it's a pyramid scheme
bmk#1476: the entire academic system is totally fucked
EricHallahan#1051: We just need to make sure that we are not paperclipped, that's all. It isn't like that is hard or anything.
Em Elle#8886: I agree, was a part of it haha
Em Elle#8886: what is paper clipping ?
bmk#1476: Eric explain paperclipping!
bmk#1476: the best way to learn is to teach
inox#5400: oh no this discord is dedicated to optimising a utility function to maximise alignment memes
EricHallahan#1051: We are referencing the Paperclip Maximizer thought experiment.
Em Elle#8886: OHH I remember this story
bmk#1476: go on, explain what a paperclip maximizer is
bmk#1476: and why we'd expect one to happen
EricHallahan#1051: Man am I rusty on this.
EricHallahan#1051: That is what I get for staring at HTML for two weeks.
bmk#1476: you need to be able to answer questions about alignment in your sleep
Em Elle#8886: I remember the story, I disagree with it, I think it's more likely that an AI will become an automated taste maker since that's more of an important aspect to society, and we will be making decisions not on our own, but be primed by an AI to make the decision
EricHallahan#1051: I haven't done much alignment work unfortunately.
bmk#1476: then start now
Em Elle#8886: I think the thought leader in that field is disconnected from reality
bmk#1476: never too late
bmk#1476: I disagree with your disagreement
EricHallahan#1051: I am planning to sooner rather than later.
Em Elle#8886: but... that's just a theory
Em Elle#8886: that is okay, I accept your disagreement and understand your stance
inox#5400: Yudkowsky is a very stable fanfic author!
bmk#1476: just say whatever you know and try to think up answers as you go along
AI_WAIFU#2844: wait wat
bmk#1476: [sudden interest from AI_WAIFU]
Em Elle#8886: @AI_WAIFU you didn't respond to my message, I guess it's not what you are interested in eh ?
AI_WAIFU#2844: I'm really bad about that sorry
Em Elle#8886: I figured you were busy no worries!
EricHallahan#1051: IIRC, it is an underdefined objective function (the simple concept of "maximize paperclips") that leads to an ill-fated outcome, since the implicit constraints that we would expect to be there were never defined?
bmk#1476: good - what are some examples of these "implicit constraints"?
EricHallahan#1051: One could be "we only need as many paperclips as sheets of paper?"
EricHallahan#1051: Or not turn everything to paperclips lol
chilli#5665: "don't kill humans"
bmk#1476: shhh
EricHallahan#1051: Well that is the obvious one.
EricHallahan#1051: I was trying to think deeper
bmk#1476: what chilli said is what I was hoping you'd say lol
bmk#1476: ok so I want to drill into this one
EricHallahan#1051: Oh, that was my gut response, I just failed to say it. :grimberk:
bmk#1476: let's say I ask the AI to maximize the following function:
min(1000, number of paperclips)
bmk#1476: so basically I want it to make at most 1000 paperclips
bmk#1476: how could this go wrong?
bmk#1476: once it makes 1000 paperclips it's done and finished, right? how is it still unsafe
inox#5400: paperclip stockpile security?
EricHallahan#1051: It destroys them.
EricHallahan#1051: It can then create as many Paperclips as it wants.
bmk#1476: uhh
bmk#1476: no?
EricHallahan#1051: ¯\_(ツ)_/¯
bmk#1476: because if it destroys paperclips its reward goes back down again
Em Elle#8886: does the AI know what the paper clip is made from? and constraints on the supply? I guess you could just tell it to do something like make 1000 paper clips without impacting the environment or human life.
EricHallahan#1051: I am pulling everything out of my ass here.
bmk#1476: I'm trying to make you think
EricHallahan#1051: And I am thinking hard.
bmk#1476: let's assume for the moment that what a paperclip is is well defined
bmk#1476: which isn't trivial but makes this case easier
chilli#5665: || might still be easier to make paperclips in an unsafe manner? ||
Em Elle#8886: so stop once 1000 paper clips is achieved ?
chilli#5665: not sure
Em Elle#8886: LOL this ^
bmk#1476: that's one thing yeah
bmk#1476: there's also another thing that can go wrong
chilli#5665: hmm
StellaAthena#3530: It could destroy paper clips until its negative counter overflows
bmk#1476: that's way too out of the box lol
chilli#5665: lol, I was considering saying that as a joke
bmk#1476: think more inside the box
someKindaBean#8471: That's the whole premise (kinda) of this web game:
someKindaBean#8471: https://www.decisionproblem.com/paperclips/index2.html
bmk#1476: hint: what happens to the expected reward if the AI isnt certain of how many paperclips it'll have
chilli#5665: is it that its behavior is unpredictable?
chilli#5665: Like, once you hit 1000 paperclips the AI still has no incentive to do one thing or another
bmk#1476: remember, it's taking actions to maximize expected state-action value
chilli#5665: hmm, in that case
chilli#5665: what somebody said above about stockpiling
chilli#5665: or more extreme options to reduce the variance
bmk#1476: yeah that's another problem but even if we can make it shut down the moment it gets to 1000 expected reward
bmk#1476: yup
chilli#5665: seem accurate
bmk#1476: so when the model thinks it has a 0.99 chance of having 1000 paperclips, its reward is 990
bmk#1476: but it could be 0.999 certain by taking more precautions
bmk#1476: or 0.9999 certain
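(The arithmetic being gestured at, in a couple of lines: with reward min(1000, paperclips), expected reward is roughly 1000 × P(the paperclips really exist), so one more nine of certainty is always worth buying.)

```python
for p in [0.99, 0.999, 0.9999, 0.999999999]:
    print(f"P(paperclips >= 1000) = {p}  ->  expected reward = {1000 * p:.6f}")
# 990, 999, 999.9, 999.999999...: every extra precaution (redundant sensors,
# planet-sized verification systems) still buys a little more expected reward
```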
someKindaBean#8471: You could also extrapolate to a problem that isn't as easily defined with a simple min(x1,x2) statement
chilli#5665: hmm, this assumes it's an RL agent though
bmk#1476: i mean expected reward is pretty fundamental for any kind of agent
chilli#5665: hmm
bmk#1476: unless youre talking about non agentic optimization which idk
chilli#5665: yeah, I guess you're right
chilli#5665: it's not so obvious to me that the methods of minimizing uncertainty are necessarily dangerous
Teemochu#8740: I'm mildly convinced AID would have eventually had a similarly-sized competitor even if the only thing they did was (with few false positives) filter out a couple of subsets of NSFW.
chilli#5665: but I agree that they *could* be dangerous
bmk#1476: well it might think there's a 0.0001% chance its sensors are faulty so itll buy a redundant set of sensors to observe the paperclips
bmk#1476: but there's a chance that those are faulty so itll buy even more
bmk#1476: and then it turns the entire earth into a gigantic system for being 99.9999999999999999999% certain that there are more than 1000 paperclips
chilli#5665: yes, but there's also the chance that by doing such suspicious behavior it'll be shut down or impaired in its goal
Teemochu#8740: It wouldn't have been nearly the lightning-in-a-bottle NAI was though; AID shot themselves in both feet and then took a stimulant to increase the bleeding.
Em Elle#8886: What I don't like about the story is that it's unrealistic and doesn't even capture where the state of the art of robotics is heading; robots like Boston Dynamics' SPOT and its humanoid equivalent are not even being used to do any labor, and the company is a money sinkhole despite being state of the art.
Louis#0144: Do stimulants increase bleeding
Louis#0144: lol
Louis#0144: I was not aware
Teemochu#8740: Just a guess because heart rate/BP
Em Elle#8886: if they are cut with some kind of rat poison yeah
bmk#1476: who said anything about robotics? that's just a convenient way to talk about something concrete
bmk#1476: in reality it'll probably be something boring like "make more money on the stock market"
inox#5400: robotics profs I know all say boston dynamics is just tuned control theory that won't scale (although that was 4 years ago they said that)
Louis#0144: It is impressive tbh
Em Elle#8886: For the control system yes, but not the vision system I am sure that uses some kind of DL
bmk#1476: i think there's a really good chance the first ai will have some variant of "maximize money in this bank account" as an objective lol
EricHallahan#1051: #off-topic, but Honda ending its robotics program is kind of a big deal. They are either totally calling that humanoid robots are not going to be useful within the next 20 years or they stand to lose a lot in opportunity costs.
EricHallahan#1051: That kinda means a lot to that industry.
guac#4716: humanoids seem so inefficient for a task as streamlined as car manufacturing lol
EricHallahan#1051: They were never for manufacturing?
guac#4716: what were the humanoids for?
Em Elle#8886: yeah I don't think humanoids are efficient for much, I think businesses in the robotics industry optimize for fast deployment and specialized motions
bmk#1476: anyways I never liked robotics at all so nothing of value was lost
AI_WAIFU#2844: fucking, obviously
Em Elle#8886: this would be true, and a better story to tell that isn't unrealistic or make the industry sound crazy, unless that was the author's intent to go viral. I personally couldn't relate to the paper clip story mainly because it was so detached from reality.
guac#4716: ah population control. i see i see
Em Elle#8886: haha
Louis#0144: He isnโt wrong
EricHallahan#1051: https://en.wikipedia.org/wiki/Honda_P_series
Em Elle#8886: haha it's just funny that it was said
Louis#0144: Lmao
bmk#1476: I think "industry" is the wrong word lol
bmk#1476: and alignment has never been good at outreach
bmk#1476: ~~because all people who work on alignment are antisocial nerds like me~~
Em Elle#8886: Sorry by industry, I am referring to Machine Learning field in general
bmk#1476: oh
bmk#1476: most of ML doesnt give a shit about alignment lol
bmk#1476: it's sad but it is
Em Elle#8886: this is true
triggerhappygandi#0001: On the bright side, joining eleuther made me focus more on safety/alignment. Hopefully it did the same for all of us here :hap:
Louis#0144: Safety a lot
Louis#0144: For me
EricHallahan#1051: I had no interest in AI until arriving here lol
triggerhappygandi#0001: Before eleuther my view on AI safety was "cranks on twitter trying to politicize the shit out of a niche field"
Now it is "better study that lest we all die painfully"
marksaroufim#6706: what were your favorite references on AI safety? I had trouble finding good stuff googling
bmk#1476: "cranks on twitter trying to politicize the shit out of a niche field" is very much still a thing lol
bmk#1476: those people just happen to be different from the people who actually care about not dying a horrible painful death from 📎
bmk#1476: rob miles' videos
bmk#1476: hands down the best resource
marksaroufim#6706: Thank you @bmk !
Louis#0144: @bmk why does venture beat keep saying we're releasing neox in august
bmk#1476: i dunno
bmk#1476: we arent lmao
Louis#0144: I've seen two articles with this now
Louis#0144: Yeah wtf
alexyz#3459: what is the chance that bmk is training neox rn secretly and going to release it in august
Louis#0144: 40%
bmk#1476: I am the least likely person to do that lmao
bmk#1476: lazy af
bmk#1476: also, busy with interview prep
bmk#1476: but it's funnier to say that I'm lazy
triggerhappygandi#0001: yes but my view on safety isn't affected by them now
triggerhappygandi#0001: stuff like this is what those cranks don't discuss https://cdn.discordapp.com/attachments/729741769738158194/862927580049965086/unknown.png
triggerhappygandi#0001: rather they will talk about social issues which aren't the primary concern
bmk#1476: this is what you think of as a very good central example of alignment?
triggerhappygandi#0001: no it is one that is very apparent
bmk#1476: ?
triggerhappygandi#0001: and exists _right now_. Misaligned superintelligence is still aways to go
triggerhappygandi#0001: We haven't even solved this comparatively easier problem
bmk#1476: .. i really need to write that blog post someday
bmk#1476: "we're still far away from being able to go to the moon! plus, we cant even safely build a building 2km tall without it collapsing, and the moon is way more than 2km away! we need to solve safe building before we can even think about going to the moon safely"
Louis#0144: I bet there's more geese on the moon
triggerhappygandi#0001: I didn't say that. I said aligning LMs today is the step towards that.
Louis#0144: :3goose:
triggerhappygandi#0001: And we haven't even figured out how to align a 12B model
Louis#0144: @triggerhappygandi we have an Ernie discord now with a special channel just for geese
bmk#1476: you say that as if aligning a 12B model is a prerequisite to superintelligence alignment
Louis#0144: Maybe it's too dumb to align
triggerhappygandi#0001: It isn't? I always assumed aligning LMs is a step in that direction
Louis#0144: Is that what you mean?
bmk#1476: "we cant even safely build a building 2km tall without it collapsing, how can we possibly get 300000km to the moon safely?"
EricHallahan#1051: Or we could just ask the model to be nice.
bmk#1476: im not saying that aligning LMs is totally useless, but it's not at all a priori obvious that it will be useful, and youd have to make a clear explicit argument why it would be the case
triggerhappygandi#0001: Based off of this
https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models
and also, _I feel this is the case_. Since GPT-N could probably be AGI.
bmk#1476: that post argues for something way more nuanced
Louis#0144: Personally I still think scaling laws for transformers are gonna shit themselves before AGI even though all evidence says otherwise
Louis#0144: Iโm hopeful lol
triggerhappygandi#0001: Plus, my original argument was that rather than focusing on this topic, the twitter crank researchers focus on how much pollution Switch Transformer _might have caused_ because google's datacenters already are carbon neutral.
Louis#0144: What confuses me is how much bigger cv is even though I feel like NLP is closer to AGI
Louis#0144: Itโs probably just Bc itโs hard to monetize NLP
Louis#0144: lol
triggerhappygandi#0001: yeah but one of the points is specifically that smaller aligned models will guide us through to next stage.
bmk#1476: yes but not all alignment work on small models is created equal
𓅬 gabriel_syme 𓅬#3220: well maybe because everyone thinks of it as text alone
nshepperd#2316: what they don't realize is everything is text, even people
𓅬 gabriel_syme 𓅬#3220: yeah that works too, if I understand how you mean it
sea_snell#0243: NLP is operating on more abstract data, so if it's closer to intelligence, with cv you have to deal with the raw signal more
𓅬 gabriel_syme 𓅬#3220: my point is that everything can be language, but the actual 'a LM writing a blog post!' is not the only application of language. but all that, we're only now starting to find out (at least I am)
sea_snell#0243: The mind blowing thing in the clip paper was deep in the appendices
𓅬 gabriel_syme 𓅬#3220: I never thought I'd be making designs using LMs for example, I was all in DALLE just 3 months ago. Now I'm totally on the LM camp
sea_snell#0243: They had non trivial sentence embedding from text rendered on an image, like it could represent sentiment
sea_snell#0243: https://cdn.discordapp.com/attachments/729741769738158194/862959693927153664/image0.png
sea_snell#0243: What if in a couple years all NLP will just be processed via rendered text on screen, don't even have to think about tokenization. But ig there's a trade off cause you would want to think about font
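(The rendered-text idea in a sketch: draw the sentence onto an image and hand that to an image encoder instead of a tokenizer; font choice is exactly the trade-off mentioned. This uses PIL's default bitmap font.)

```python
from PIL import Image, ImageDraw

img = Image.new("RGB", (224, 224), "white")
ImageDraw.Draw(img).text((8, 8), "I am somewhat disappointed.", fill="black")
img.save("sentence.png")  # feed this to an image tower (e.g. CLIP's) rather than tokenizing
```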
chirp#4545: curious, what makes nlp hard to monetize?
triggerhappygandi#0001: how so
chirp#4545: https://twitter.com/isosteph/status/1413410298472456194?s=21
ethan caballero#6044: yes, videogpt will be able to do everything that dall-e, clip, & gpt-3 can do simultaneously.
CRG#8707: Why would videogpt be as good as text as GPT-3?
Teemochu#8740: The flip side is this may represent a limitation on superintelligence rather than an alignment problem per se
Teemochu#8740: basically the opening stanzas of the GIGO problem of generative AI
Teemochu#8740: and I wouldn't say it's particularly wrong to give a user what they want... in this case it's a poor assumption that the user actually wants buggy code (rather, the user wants the AI to be above his/her own intelligence), but fixing that issue from an "alignment" lens rather than a "capability of writing good code" one feels dangerously adjacent to cases where some would prescribe using the "alignment" lens to intentionally not give the user what they want.
Tinytitan#5596: @Daj https://news.ycombinator.com/item?id=27780786
Teemochu#8740: Is that by *the*... yeah looks like it is
Daj#7482: Absolutely wild
Teemochu#8740: from a reply
> This is far-fetched sci-fi problem invention. The only real danger of AI in the next 1000 years is in things no one in the field is seriously addressing: use of AI in things like law enforcement, trained on bad data, to accelerate and justify existing systemic biases.
I'm not exactly sure what line of thinking produced this actually. (The "things no one in the field is seriously addressing" part specifically, as pretty much every popsci/journo resource I've seen on "AI safety" other than Miles focuses on exactly the thing this person is claiming isn't being taken seriously)
Daj#7482: ~~I hope we didn't make him update _too_ much since we're still an outlier in taking alignment seriously lol~~
Daj#7482: But crazy to think Eliezer himself knows about us now (and probably isn't _maximally_ happy about it but not maximally _un_ happy either)
Daj#7482: The answer is :tribalism2:
Teemochu#8740: I guess it's the same line of thinking that says "too many rubes, why won't anyone solve the Rube Problem" when there's one rube among 99 bleggs
thenightocean#6100: IMO I would say he is more or less at peace with the future and how it will unfold. He did his contribution and now it depends on everyone else to try to avoid a bad outcome. I doubt he really obsesses about good or bad actors anymore.
Daj#7482: idk I conceptualize Eliezer as pretty agentic
Daj#7482: but who knows
Teemochu#8740: there's nothing to be proud of until we turn catgirl theory into catgirl practice
thenightocean#6100: I kinda got the impression he stopped being very agentic. he now just wants to grill and occasionally shitpost on twitter/facebook
Teemochu#8740: >titter
Daj#7482: as hilarious as it is to imagine Eliezer saying "I just wanna grill", I don't think that's likely, I'm pretty sure he's actively working on stuff with Nate, or at least was until that got axed
thenightocean#6100: fair. He is still reasonably young (and in best shape in his life lol)
CSEdd#5494: Hey! Which groups/projects are working on AI alignment or security? Keen to get involved!
Daj#7482: We have a bunch of general discussions in the alignment channels. I personally lead the #deleted-channel project which is something of empirical prosaic alignment work
Daj#7482: Related to the kind of work Christiano did at OAI
Teemochu#8740: nice cutoff google https://cdn.discordapp.com/attachments/729741769738158194/862988635409874944/VDDhbSt.png
Daj#7482: There are also a handful of interpretability projects (or at least ideas) floating around
Teemochu#8740: (was looking up his age and saw this in the knowledge card)
Daj#7482: (unfortunately I need to hop on a bus now, if you have any questions @CSEdd hmu any time)
distractedm1nd#2062: Hey everyone, I work on DNA Data Storage Research in Germany but have a CS/ML + Neuroscience background. Going to be lurking over the next few days to see how I can maybe fit in ๐
triggerhappygandi#0001: title of the story harry imagined in his head for 7 years
suh#2879: LOL @copilot
spirit-from-germany#1488: @Louis i also like the idea of ernie 3, but i am wondering what kind of knowledge graphs are available as training data sets freely
goolulusaurs#1571: I just read the retrospective. Damn what a year y'all. What you guys have accomplished is amazing. :bigbrain:
ethan caballero#6044: Hm, Eleuther has all the transhumanists excited as of late:
https://twitter.com/anderssandberg/status/1413259464862539781
https://twitter.com/tobyordoxford/status/1412442608245293060
Daj#7482: Well I guess we know what Anders is into now
rygaard#8558: Hi want to introduce myself here - I am here because I am working on establishing an art/tech festival in Denmark. I saw the Anders Sandberg VQGAN-CLIP article and was amazed. I am curious whether someone is combining these techniques and artistic approaches with study of biases / diversity in AI - eg. by visualizing gendered / political statements. I would love to host such experiments at the festival.
I will have a look around here - bear with me if my technical insights are limited at times.
Daj#7482: You should introduce yourself in #art too. You can also use our bot in #the-faraday-cage-archive to make art, but we'd ask you to please not try to make too risque stuff in public lol
rygaard#8558: Thanks @Daj for suggestions
glucinater21#0869: Hey everyone I'm Adam, an incoming college freshman going into computer science engineering. If you guys ever need some extra man hours, I'd be happy to help on a periodic basis (I intern full time at the moment). My skills include webscraping with scrapy and requests/beautifulsoup, basic api creation with fastapi, and basic machine learning with PyTorch, TensorFlow, and scikit-learn. I know I may not be of much use but I'm always happy to learn!
bieker#8988: Hey everyone, I'm Jacob, I'm a research engineer at Open Climate Fix working on using ML for solar forecasting with satellite images. I've worked quite a bit with PyTorch, building data pipelines, and using multi-modal models. I've mostly worked with vision stuff, and am happy to help wherever!
joaogui1#8461: Starting in 16 minutes!
https://www.youtube.com/watch?v=NfvYufQwA_o
Deleted User#0000: Hi! I'm Jamie. I helped co-author layernorm, show attend and tell, etc. I left google 5 months ago to start a company and a small farm. I'm slowly getting back into research and would love to collaborate in the future (multimodal learning, generative models, RL). Currently playing with vector analogies in clip
Sid#2121: we ❤️ layernorm
Kia#2550: Wow We have a whole lot of new people
thenightocean#6100: Welcome @bieker , @glucinater21 and @Deleted User ! Looks like you have some great skillsets.
triggerhappygandi#0001: Hot damn. What's your current startup about?
Deleted User#0000: yay! when we wrote the paper transformers didn't exist yet and all the focus was on improving RNNs. interesting how things worked out
CRG#8707: Layernorm seems to hold up against the other x-norm variants for training GPT-style transformers. https://discord.com/channels/729741769192767510/795089627089862656/823887913955360818
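(Layernorm itself is only a few lines: normalize each vector across its features, then rescale and shift with learned gain and bias. A numpy sketch, with eps chosen arbitrarily.)

```python
import numpy as np

def layer_norm(x, gain, bias, eps=1e-5):
    # normalize each row over its features, then apply learned gain and bias
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gain * (x - mu) / np.sqrt(var + eps) + bias

h = np.random.randn(4, 768)  # e.g. four token vectors
out = layer_norm(h, np.ones(768), np.zeros(768))
print(out.mean(-1), out.std(-1))  # ~0 mean and ~1 std per vector
```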
Deleted User#0000: we are building a few niche products (not directly ML related, but uses some ML in the background). we're operating on a long-ish time horizon so there won't be anything interesting soon
Louis#0144: welcome to the club
Louis#0144: we're writing a vector analogy paper
Louis#0144: well we should be
Louis#0144: I kinda got burned out on it
Louis#0144: LOL
Louis#0144: all the experiments are done though
Louis#0144: idk why everytime i open that overleaf
Louis#0144: my brain kinda shuts down
EricHallahan#1051: If you are interested in CLIP vector analogies, definitely pop into #art.
Louis#0144: that too
Louis#0144: @alstroemeria313 et al (including me) made a great notebook
Louis#0144: it is pinned in art
Deleted User#0000: nice! I will take a look
triggerhappygandi#0001: Also yeah layernorm is pog.
Deleted User#0000: we worked on this a bit back in 2014, using a BoW text encoder and image retrieval from pre-trained feature vectors. nothing really came out of it though. so CLIP got me excited about this again https://cdn.discordapp.com/attachments/729741769738158194/863076485057216532/Screen_Shot_2021-07-09_at_6.10.44_PM.png
triggerhappygandi#0001: Quite impressive given it's pre-transformers :berk:
Louis#0144: @Deleted User if we ever want to suffer through another analogies paper
Louis#0144: I'll let you know
Louis#0144: LOL
triggerhappygandi#0001: Cat in the box is awesome though
Deleted User#0000: it usually only worked with a single, centered object in the image. anything more complex and it was finicky. image generation also didn't really work yet ๐
generic#8192: hullo, I'm Brendan, a prof at NYU Tandon (but please don't assume this means I know anything). I'm working on using language models for code generation, particularly interested in generating intentionally buggy code and transforming code to add bugs. will probably be mostly lurking but thought I'd say hi
Sid#2121: Welcome @generic ! Great to see lots of new interesting people joining and introducing themselves - where did you all find out about us?
generic#8192: I'd heard about EleutherAI from following gwern for a while, but the writeup yesterday pushed me to finally join :)
Deleted User#0000: also the writeup ๐ I'm really impressed with what this group has done
Louis#0144: :hap:
Louis#0144: Youโll soon see our geese addiction
glucinater21#0869: The write up as well, I actually already knew about you guys when I tried to finetune gpt-neo for a Hackathon a while back with Google colab but it didn't work
Sid#2121: I guess you saw this from the codex paper? https://cdn.discordapp.com/attachments/729741769738158194/863085983745966120/Screenshot_from_2021-07-09_17-54-38.png
generic#8192: yep! I'm really curious about what influences that, exactly. one idea I'm interested in looking at is if there are adversarial triggers that can induce buggy code, similar to this paper https://arxiv.org/abs/1908.07125
StellaAthena#3530: We have some models trained specifically on Python code you should look at! They're on HF here: https://huggingface.co/models?search=ghpy
They're not really *released* and are therefore undocumented but you can ping bmk for help using them
generic#8192: oh this is excellent, thanks!
StellaAthena#3530: @generic I'm also a bit in the computer security space, and if you're interested in talking about malware production or malicious bugs I can introduce you to people with similar interests.
bmk#1476: I need to get ghpy6B up again right after interviews
generic#8192: very interested! we have done a little bit of work on ML for malware but not using LMs so far
Louis#0144: Interesting… I'm working on an ERNIEv3 model (similar to T5 with just a few extra tasks). I think a corrupted-code-fixing pretraining task could fit well into a T5 model or a BART model
Louis#0144: @gabriel_syme is generating room layouts for me to fix
Louis#0144: Lol
Louis#0144: Like architectural stuff
generic#8192: yeah I've been wondering whether GPT models might not be a great fit for code since they only do forward-prediction
Louis#0144: No I think Bart is better for what you want to do tbh
Louis#0144: I'm scaling Bart right now
Louis#0144: Well
Louis#0144: I'm debugging our Bart code
Louis#0144: lol
generic#8192: many such cases ;)
Louis#0144: https://github.com/morganmcg1/rotobart
Louis#0144: Now obviously GPT J would walk all over a 1b parameter Bart
Louis#0144: So in theory Bart would be better
Louis#0144: But in practice I doubt it
generic#8192: right, makes sense
generic#8192: I've been clinging to my 774M param C/C++ GPT2 model because it took a month to train but at some point I'll want to start trying to train bigger versions
Louis#0144: Jax
Louis#0144: Use TRC
Louis#0144: Also doesnt tandon have its own super computer
generic#8192: this was on NYU's cluster! 4xRTX8000s. but it's hard to reserve them for very long, too many other people doing ML :)
generic#8192: I also have a 2x3090 system at home that I can use for smaller experiments but it gets a bit loud
StellaAthena#3530: I'm bad at GPUs… what's the equivalent computational power in A100s or V100s?
Louis#0144: Oh wow
generic#8192: the RTX8000s are pretty close to the V100s
generic#8192: same RAM, similar speed
EricHallahan#1051: Can we just drop this "V100/A100 days" business? It is a terrible unit IMO.
EricHallahan#1051: I know it is useful, but still.
StellaAthena#3530: IDK, what would be a better unit in your mind
Manny96#3437: Funny you mention that - this morning I produced an essay and a Git repo for Transhumanism here in Australia, and mentioned Eleuther.ai.
Manny96#3437: Would have liked to talk about it more, though
EricHallahan#1051: FLOP?
EricHallahan#1051: ¯\_(ツ)_/¯
StellaAthena#3530: I use it because I work with those GPUs and hours/days/months is an easily tractable unit of measure for me
Manny96#3437: PFLOP
StellaAthena#3530: My issue is that I don't know what a PFLOP *is*
StellaAthena#3530: Maybe that's on me tho
EricHallahan#1051: I am not saying it isn't useful, I am saying that it shouldn't be used in papers as the only unit of measure.
StellaAthena#3530: Oh yeah. I wouldn't use A100-days in a paper unless I was literally talking about how I ran 48 A100s for 10 days
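(For a rough back-of-the-envelope conversion between the two units being debated: GPU-days times peak throughput gives an upper bound in FLOP. A sketch, assuming NVIDIA's published peaks of ~312 TFLOP/s for A100 BF16 tensor cores and ~125 TFLOP/s for V100 FP16; the 30% utilization figure is an assumption, and real runs vary widely.)

```python
A100_PEAK = 312e12  # BF16 tensor-core FLOP/s (dense), per NVIDIA's spec sheet
V100_PEAK = 125e12  # FP16 tensor-core FLOP/s

def gpu_days_to_flop(gpu_days, peak=A100_PEAK, utilization=0.3):
    # One GPU-day = 86400 seconds at the assumed sustained fraction of peak
    return gpu_days * 86400 * peak * utilization

# e.g. 48 A100s for 10 days at 30% utilization:
print(f"{gpu_days_to_flop(48 * 10):.1e} FLOP")  # ~3.9e+21
```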
generic#8192: I had some benchmarks of training GPT2 models with the 3090s as well as the RTX8000s if you want to calibrate: https://github.com/huggingface/transformers/issues/9371
StellaAthena#3530: Thanks
Manny96#3437: https://www.tensorflow.org/probability/api_docs/python/tfp/substrates/jax/mcmc/SliceSampler?hl=he - distillation?
Manny96#3437: Is source utilising tfp?
StellaAthena#3530: @Manny96 FYI you linked to the Hebrew page
StellaAthena#3530: Or, at least it's partially in Hebrew?
Manny96#3437: Not in my end
StellaAthena#3530: Weird, I see this https://cdn.discordapp.com/attachments/729741769738158194/863091269013864478/image0.png
StellaAthena#3530: The bulk of the text is in English, but the website framing it is in Hebrew
Manny96#3437: Oh, wait it is, lol
Louis#0144: My dad just ate an entire watermelon in one sitting
Louis#0144: Wtf
Louis#0144: Oh, thought this was off topic
generic#8192: it's the `?hl=he` at the end
Louis#0144: can someone translate some chinese for me for the ernie project
Louis#0144: pls
Louis#0144: its just a paragraph
Louis#0144: apparently its middle chinese
Louis#0144: so google translate is shitting itself
Louis#0144: novel generation and couplet generation https://cdn.discordapp.com/attachments/729741769738158194/863102344799453214/Screen_Shot_2021-07-09_at_1.00.26_PM.png
Louis#0144: if anyone has a chance
aze#1010: was this https://github.com/EleutherAI/github-downloader used in the pile?
StellaAthena#3530: yes
aze#1010: would fine tuning gpt-j on data from this ^ be dumb in that case
Sid#2121: @bmk already did it iirc
StellaAthena#3530: I thought he did that from scratch
aze#1010: how are the results
kurumuz#5695: im really curious about this aswell
generic#8192: has anyone tried using distillation in "reverse" to get better initialization for training a larger model, assuming one already has a smaller model?
generic#8192: I found some refs on layer-wise pretraining which is similar but seems to assume you're incrementally adding layers rather than increasing parameters at each layer as well
Sid#2121: hm, like interpolating the weights of a smaller model to the size of a larger one as initialization?
inox#5400: yes there is definitely a paper like this and I'm trying hard to remember the name, they design a network architecture that can be progressively grown while keeping the output the same at every growth step
generic#8192: something like that, yeah - or something more sophisticated like trying to transfer gradients from the smaller model to the larger (?? somehow ??)
generic#8192: I found this Bengio paper from Neurips which seems relevant https://papers.nips.cc/paper/2006/file/5da713a690c067105aeb2fae32403405-Paper.pdf
bmk#1476: about like 2 years ago I tried doing model surgery by expanding all of the layers of the model with identity inits
bmk#1476: it didn't work very well and I'm not sure if that was because it just doesn't work or because my implementation was borked
bmk#1476: this was with tensorflow so the code was horrifying
bmk#1476: but I managed to get a smaller model resumed as a bigger model + padding
generic#8192: hmm I can sort of imagine how to do that by surgery yeah. I've played around some with directly tweaking weights in GPT2 (I made a "brain damage" script that progressively mutated more and more weights with gaussian noise and then sampled the outputs); and it's not too difficult
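(A minimal sketch of that kind of weight-mutation experiment, assuming PyTorch; `frac` and `sigma` are illustrative knobs, not values from the script described above.)

```python
import torch

@torch.no_grad()
def brain_damage(model, frac=0.01, sigma=0.02):
    # Perturb a random fraction of each parameter tensor with gaussian
    # noise; rerun generation afterwards to see how outputs degrade.
    for p in model.parameters():
        mask = (torch.rand_like(p) < frac).float()
        p.add_(mask * torch.randn_like(p) * sigma)
```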
Sid#2121: like just zero padding? did you try interpolation / tiling?
bmk#1476: I don't really remember this was way too long ago
bmk#1476: but I remember being disappointed and giving up lol
bmk#1476: what would I interpolate with?
bmk#1476: also I'm pretty sure it was consecutive gpt2 sizes - might have been 117M -> 345M, so I couldn't really tile it
generic#8192: hmm, maybe new weights would be average of 2-3 randomly selected weights from that layer in the old network
generic#8192: ?
generic#8192: (I have no principled reason to think that would be good)
bmk#1476: why would I do that?
generic#8192: I guess the idea is that just something numerically close to the weights of the original network might be better than random init. as I said though, I don't have any principled reason to think so
inox#5400: I found it! https://arxiv.org/abs/1511.05641
generic#8192: nice, thanks!
kindiana#1016: Zero padding would work if you are careful
inox#5400: I wanted to combine this with SGDR back in 2016 and I'm not sure I ever did, like on every gradient schedule restart you change the network size
kindiana#1016: It's like free rezero lol
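(A minimal sketch of the zero-padding variant for one linear layer, assuming PyTorch: old weights fill the top-left block and the new rows/columns start at zero, so the expanded layer computes the same function on the original units. The function-preserving scheme in the Net2Net paper linked above instead replicates units and splits their outgoing weights.)

```python
import torch
import torch.nn as nn

@torch.no_grad()
def widen_linear(old: nn.Linear, new_in: int, new_out: int) -> nn.Linear:
    # New units contribute nothing at initialization, so outputs on the
    # original dimensions are unchanged right after expansion.
    new = nn.Linear(new_in, new_out)
    new.weight.zero_()
    new.bias.zero_()
    new.weight[:old.out_features, :old.in_features] = old.weight
    new.bias[:old.out_features] = old.bias
    return new

wider = widen_linear(nn.Linear(768, 768), 1024, 1024)  # e.g. growing a hidden dim
```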
chilli#5665: hmm
chilli#5665: there was a paper kinda like this
chilli#5665: where they augmented a new part of the network
chilli#5665: and froze the old part
chilli#5665: and trained on a new task
inox#5400: not sure how architecture search should work with transformers/MLP-mixer-likes now
inox#5400: on the one hand: it probably still works a bit, go for it!
on the other hand: it's useful that these architectures are simple and generic
dms#2699: n00b Q of the day: can anyone point me to some tips for handling large inputs with seq2seq transformers given token limits? I'm trying to chunk the input but coherence goes the way of the dodo
dms#2699: best thing I've found is this but it seems to be a WIP https://www.machinecurve.com/index.php/2021/03/12/transformers-for-long-text-code-examples-with-longformer/
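(One common workaround, short of switching architectures, is overlapping windows so each chunk shares context with its neighbor. A hedged sketch; `window`/`stride` are arbitrary values, and the per-chunk outputs still need to be stitched back together afterwards.)

```python
def chunk_ids(token_ids, window=1024, stride=768):
    # Each chunk overlaps the previous one by window - stride tokens,
    # which helps coherence at the seams (but doesn't fully solve it).
    chunks = []
    for start in range(0, max(1, len(token_ids) - window + stride), stride):
        chunks.append(token_ids[start:start + window])
    return chunks
```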
Louis#0144: Transformer xl is the only way I see rn
Louis#0144: (Most) linear attentions are really bad
Louis#0144: :/
Louis#0144: If you wanna implement transformer xl in Jax let me know
Louis#0144: :^)
Louis#0144: I've been hoping to avoid having to do it myself
Louis#0144: Looks like a nightmare
Louis#0144: :berk:
Louis#0144: I kid I kid
Louis#0144: If you wanna do it I'd be down to help
Louis#0144: I need it for Ernie anyway
Louis#0144: I've got like two months free before classes start
dms#2699: I set up the project with pytorch but if jax is the way to go so be it ๐ฆพ
Louis#0144: Oh uh
Louis#0144: Pytorch is much easier
Louis#0144: Jax is hard for txl because recurrence is hard with fixed computation graphs
Louis#0144: @kindiana is it not?
Louis#0144: People discussed this here before I thought
Louis#0144: s2g at least react with geese
Louis#0144: LOL
EricHallahan#1051: Local attention lol
Louis#0144: He linked long former
Louis#0144: I assumed he was looking into linear
Louis#0144: 🤷‍♂️
Louis#0144: I'll just go now I feel like I'm saying something that's not technically correct
Louis#0144: :berk:
someKindaBean#8471: What's wrong with LongFormer? Longformer Encoder-Decoder should be able to do long seq2seq stuff
Louis#0144: As in generation?
Louis#0144: Isn't longformer masked?
Louis#0144: https://sshleifer.github.io/blog_v2/
Louis#0144: Ok who tf named their blog tensorgoose and why didn't I think of this
Louis#0144: @sshleifer found you
Louis#0144: I like your blog
Louis#0144: It's so funny that like almost any ML scientist I run into is probably in this server
txizzle#6710: hey folks, really cool 1 year retrospective! and awesome work. do you guys work on any RL here?
Lord Parfington#0012: i'm so happy this thing was invented.
Louis#0144: #deleted-channel is basically the learning to summarize stuff
Louis#0144: If that interests you
Louis#0144: I'm not sure if there are any openings on that project though (?)
Louis#0144: cc @Daj
Louis#0144: I want to do hierarchical RL but my project #carp isn't there yet
Louis#0144: Maybe in a few months
Louis#0144: Idk
Lord Parfington#0012: are there any visual transformers that can accurately show written words and have been exposed to actual scripts and things?
txizzle#6710: ah ok thanks, i will eavesdrop on those channels
someKindaBean#8471: i just meant for summarization and yeah, it's sliding window attention
Louis#0144: RL + language models is never a fun time
bmk#1476: :ptsd:
bmk#1476: also
RL ~~+ language models~~ is never a fun time
bmk#1476: ftfy
bmk#1476: every experience ive had with RL is :ptsd:
EricHallahan#1051: Friendship ended with RL, sequence modeling is my best friend.
bmk#1476: granted most of that involves LMs but
Louis#0144: @bmk what's worse
Louis#0144: Mesh tensorflow
Louis#0144: Or RL
EricHallahan#1051: RL in Mesh Tensorflow
Louis#0144: That's actually what kinda confuses me tbh
Louis#0144: Why didn't we use Jax in the very beginning
Louis#0144: Whose idea was it to do mesh TF
Louis#0144: I still remember when I first joined
Louis#0144: And I spent like
txizzle#6710: so... trajectory transformer or decision transformer? ๐
Louis#0144: A few hours helping Leo debug some random topology thing
Louis#0144: Decision transformer
txizzle#6710: call me sutton-pilled but TD is GOAT
StellaAthena#3530: TPU Jax didn't exist at the time, at least not publicly
Louis#0144: Oh ok
cfoster0#4356: Lol two different meanings of Sutton-pilling clash
EricHallahan#1051: https://blog.eleuther.ai/year-one
Louis#0144: O yeah
Louis#0144: I remember when we trained mesh tensorflow there were tons of issues with efficiency
Louis#0144: Does Jax actually solve any of these
Louis#0144: Or nah
Louis#0144: Like we couldn't get above 50% efficiency or something (?)
glucinater21#0869: What is a good Jax ml framework to try? I saw gpt-J used elegy
Daj#7482: We are currently doing RL with LMs yes. We're...actually surprised how well it's working atm
Daj#7482: but we don't expect RL to long term be the most stable method, so we're interested in testing a lot of other stuff
Daj#7482: RL seems to maybe actually be not so bad as long as your implementation isn't horribly broken (most on github are) and your model is large enough :morelayers:
Daj#7482: but I still expect sequence modelling to be better
kurumuz#5695: o, interesting
Daj#7482: Yeah but there are just like 1000 small subtle things that can completely break it
Daj#7482: Which is why I'm happy we decided to implement from scratch lol
Daj#7482: Instead of relying on broken public repos
kurumuz#5695: we can build a RL pipeline i guess
kurumuz#5695: messing with hidden state AR stuff rn though
kurumuz#5695: too much fun
Daj#7482: Yeah, what are you doing with that? That's also something I've worked on
kurumuz#5695: Calculating distance between sequences to figure out which sequence is related to the last sequence that is submitted
kurumuz#5695: to build a long term memory system
kurumuz#5695: should be able to use it for classification as well
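(Illustrative only, not the implementation being described: the core of such a memory is just cosine similarity between fixed-size sequence embeddings, e.g. pooled hidden states from the AR model.)

```python
import torch
import torch.nn.functional as F

class EngramStore:
    def __init__(self):
        self.embs, self.texts = [], []

    def add(self, emb, text):
        self.embs.append(F.normalize(emb, dim=-1))
        self.texts.append(text)

    def nearest(self, query, k=5):
        # Cosine similarity between the newest sequence and stored memories
        sims = torch.stack(self.embs) @ F.normalize(query, dim=-1)
        idx = sims.topk(min(k, len(self.texts))).indices.tolist()
        return [(self.texts[i], sims[i].item()) for i in idx]
```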
Daj#7482: Oh that idea aero had?
kurumuz#5695: it is aero lol
kurumuz#5695: it works pretty well
Daj#7482: Yeah
kurumuz#5695: he works with us
Daj#7482: Neat, I was hoping someone would try that
Daj#7482: Seems so simple in retrospect
kurumuz#5695: Didn't implement to the novelai yet, it is on sigurdbot
kurumuz#5695: which people talk to in our Discord
kurumuz#5695: it seems to be getting grounded...
kurumuz#5695: it has like 1 gig of these engrams/memories
Daj#7482: Cool
kurumuz#5695: getting pretty crazy, it was a really nice experiment
kurumuz#5695: I want to do question answering with hidden states
kurumuz#5695: not sure how that would work though
kurumuz#5695: might not even need QA, but would be cool to get working :P
Daj#7482: People finally seeing that hidden states are cool :berk:
Daj#7482: And you don't need to use BERT for it
kurumuz#5695: everything in one model
Daj#7482: I wonder where the meme that AR states are bad came from
kurumuz#5695: i kinda noticed MLMs are overrated
Daj#7482: Exactly
Daj#7482: We found the same
kurumuz#5695: causal masking should learn better world models
kurumuz#5695: there should be a reason why most of the MLM parameters can be pruned and it keeps the same performance
Daj#7482: Yep
kurumuz#5695: i feel like ARs have an interface problem
kurumuz#5695: its not they being incapable
Daj#7482: That's what I've been banging on about for a while lol
kurumuz#5695: we just dont interface correctly
Daj#7482: Exactly
kurumuz#5695: I am completely AR scalepilled rn lol
kurumuz#5695: its good
Daj#7482: Welcome to the party :berk:
kurumuz#5695: who else is in the party?
kurumuz#5695: is it just us :berk:
Daj#7482: Eleuther, OpenAI...
Daj#7482: uhh
Daj#7482: maybe Cohere?
kurumuz#5695: o cool
Daj#7482: I'm still surprised more people haven't caught on
kurumuz#5695: yeah
kurumuz#5695: kinda crazy
kurumuz#5695: so much potential
Daj#7482: As the saying goes, to get ahead, you don't have to predict the future, just realize the present
kurumuz#5695: it took me a while to do that tbh
EricHallahan#1051: Those are all I can think of. ¯\_(ツ)_/¯
EricHallahan#1051: Google?
EricHallahan#1051: I don't think so.
kurumuz#5695: they like T5 and bert though
kurumuz#5695: lol
Daj#7482: Yeah it goes against a lot of instincts, glad we finally bullied you enough :berk:
Daj#7482: Google is not one monolith. Google Brain is scale pilled
Daj#7482: They have Noam
kurumuz#5695: well i kinda knew it, but took some time to admit it and change things
kurumuz#5695: I was always an end to end fan otherwise
kurumuz#5695: lol
Louis#0144: Interesting!
Louis#0144: Very surprised
kurumuz#5695: feature engineering is a fool's errand tbh, i might be fine with end to end KGs
Daj#7482: same tbh. It's not like it's working great, small models and stuff atm
kurumuz#5695: not feature engineering though
Daj#7482: goose BTFO
kurumuz#5695: just learn it all
kurumuz#5695: lol
Daj#7482: :berk:
Daj#7482: Welcome to the future
kurumuz#5695: elon is end to end pilled too btw
kurumuz#5695: they're gonna crush waymo so good
Daj#7482: You mean Karpathy
kurumuz#5695: elon had tweets about it
kurumuz#5695: switching from hand coded planning to end to end learning
Daj#7482: Sure, but it's Karpathy doing it
Daj#7482: And Karpathy is :bigbrain:
bmk#1476: ~~if GB is scalepilled then why do they keep training useless MoEs~~
kurumuz#5695: oh yeah ofc
kurumuz#5695: he is big brain
bmk#1476: Elon tweets about everything tho
guac#4716: *sad hinton noises*
Daj#7482: ~~they took the off-brand scale pill~~
kurumuz#5695: i think elon personally understands end to end is the way to go
kurumuz#5695: well i think he learned it
kurumuz#5695: he spoke about spending time on hd mapping and feature engineering and that being a waste of time
kurumuz#5695: and changing direction after that
bmk#1476: in any event I don't think Musk being interested in something provides much bayesian evidence for something being a good idea, for anyone with a ratpilled prior
Daj#7482: Controversial opinion: I think Elon being interested in something is a much stronger signal than baseline
Louis#0144: Tbf I am doing Ernie now which is somehow both KGs and end to end
kurumuz#5695: I agree
kurumuz#5695: I love elon tbh
Louis#0144: Which I love to no end
bmk#1476: baseline means normie or rat?
kurumuz#5695: KGs are corrupting that model
kurumuz#5695: i swear
kurumuz#5695: LMAO
Daj#7482: Normie up to and including Very Smart Normie (prestigious professor or the like)
bmk#1476: oh
kurumuz#5695: I am pretty convinced MLM head of that model is making it worse
bmk#1476: ok I agree with that
bmk#1476: but I think when compared to rats, he doesn't provide much signal
Daj#7482: also better than average rat, but not average "well known" rat
kurumuz#5695: the fuck is a rat
Daj#7482: rationalist lol
kurumuz#5695: what are you guys hiding
Daj#7482: LessWrong poster
kurumuz#5695: ohhh
bmk#1476: I trust gwern's opinions 10x more than musk
Daj#7482: I mean, duh
Louis#0144: It actually has an MLM head similar to BART where it can just fill in any amount of text in place
Daj#7482: ~~though not on _all_ topics~~
Louis#0144: It's not like BERT
kurumuz#5695: hmmm
Louis#0144: It's some AR + MLM hybrid
kurumuz#5695: based
bmk#1476: musk likes anime too
Daj#7482: I'm not saying I trust Musk on that either
kurumuz#5695: i just dont think having a MLM head contributes much to the model, they think it got better because they have a good interface for NLU which is MLM
kurumuz#5695: I don't think it learns any better world models
Daj#7482: Seems likely, but I do wonder if an MLM head might be a useful feature on top of a strong world model 🤔
Daj#7482: Never thought about that
Daj#7482: not sure, might also fuck things up
Daj#7482: Since an AR model needs to really learn a strong causal model, and MLM breaks causality in a way
Daj#7482: maybe not idk
kurumuz#5695: i think it might corrupt it
kurumuz#5695: its my concern
kurumuz#5695: do you know what you can do though?
Louis#0144: I'll run ablations
Louis#0144: lol
Louis#0144: I'm not worried
Teemochu#8740: Elon being interested in something is a very strong signal that it has a potential to be both revolutionary and marketable IMO.
Teemochu#8740: "interested" here being a tighter definition than the casual one
bmk#1476: only if your prior is normie
Teemochu#8740: "This is a rock I haven't looked under, and he is looking under it" is a decent sign to me that I should at least look under that rock.
kurumuz#5695: in elon we trust
Teemochu#8740: in a similar way that seeing a slightly off-the-beaten-path place/game/etc recommended to me by someone whose recommendations tend to be good is a pretty strong sign I should at the very least put it on my to-someday list
mgostIH#0245: bruh, mars
txizzle#6710: newb question, what is MLM / AR? masked LM and autoregressive?
cfoster0#4356: Yes
txizzle#6710: thanks
txizzle#6710: just an e2e RL lurker dont mind me ๐
joaogui1#8461: Very much
joaogui1#8461: Why Decision and not Trajectory?
Louis#0144: 🤷‍♂️ I really like decision tbh
Louis#0144: It just feels more natural
Louis#0144: Also I haven't had too much time to read into trajectory in detail
Louis#0144: But I've played with trajectory sequence modeling before
Louis#0144: Back in 2019
Louis#0144: And it was weird
Louis#0144: Especially in transformers
Louis#0144: I was using XLnet
joaogui1#8461: Huuum
kurumuz#5695: hmmm
zphang#7252: lol why people hating on MLM
bmk#1476: My Little Model
kindiana#1016: Everyone is sick of masks nowadays
cfoster0#4356: Causal mask: :guilty:
kurumuz#5695: to be edgy ofc
kurumuz#5695: :berk:
kurumuz#5695: jk
Louis#0144: I love MLM
chilli#5665: hmm
bmk#1476: where's that image of a facial mask with the word [MASK]
chilli#5665: What's the current consensus on MLM vs AR?
Louis#0144: MLM is inherently harder to do right I feel
Louis#0144: So it has to catch up to AR
chilli#5665: MLM is better for understanding tasks?
Louis#0144: Yeah
cfoster0#4356: Consensus? Idk if there is one
chilli#5665: AR is better for genration?
Louis#0144: Personally I prefer masking the way BART does it
Louis#0144: 🤷‍♂️
kurumuz#5695: AR is good at NLU too
kurumuz#5695: we use it for NLU
Louis#0144: You use Bart
Louis#0144: Lol
kurumuz#5695: no
kurumuz#5695: hidden engrams are NLU
Louis#0144: Ohhhh
Louis#0144: Yeah Iโd agree with that
cfoster0#4356: Feel like some folks are thinking about their MLM sunk costs
Louis#0144: Sure
Louis#0144: For sure
zphang#7252: I'm not caught up on ERNIE, but deberta matched T5 performance on SuperGLUE with a tenth the parameters, and the LMs weren't even in the top contenders
kurumuz#5695: i feel like the NLU problem for AR is not that the model is not capable, but that we don't interface with AR models correctly
kurumuz#5695: so it's an interface problem
joaogui1#8461: ?
Louis#0144: He's implying retrieval is an NLU task
Louis#0144: Which I am inclined to agree with
kurumuz#5695: yes
zphang#7252: I think that's a plausible hypothesis, but not one that's confirmed yet
joaogui1#8461: Oh, got it
joaogui1#8461: MLM generates better representations tho
joaogui1#8461: Which is good for stuff like clustering
kurumuz#5695: how so
chilli#5665: This is the part that people are discussing
chilli#5665: and disagreeing on
Louis#0144: I don't know if people agree with this in general @joaogui1
kurumuz#5695: I would say causal masked models learn better world models
Louis#0144: I had a lot of success doing retrieval with AR models in February
joaogui1#8461: NLU is not the same as embeddings
Louis#0144: When DPR was not performing well
joaogui1#8461: I mean the Sentence-Transformer folks seem to think so
Louis#0144: Sentence transformer is so good tbh
Louis#0144: And DeCLUTR
joaogui1#8461: And GPT embeddings are like super pathological
Louis#0144: it depends what layer you take the embedding from
Louis#0144: Last layer embeddings kinda suck
Louis#0144: I had more success on second to last
Louis#0144: Back when I tried doing visual grounding stuff
cfoster0#4356: Using the embeddings straight out the box shouldn't work well, right?
Louis#0144: Yeah
joaogui1#8461: Same for BERT though
Louis#0144: That too
cfoster0#4356: There's a huge task mismatch
cfoster0#4356: BERT etc happen to have a prefix task that's closer
joaogui1#8461: But BERT's work better than GPT's
Louis#0144: It always surprises me how well Bert held up
Louis#0144: All these years later
Louis#0144: Vanilla Bert is still really good
joaogui1#8461: Yeah
joaogui1#8461: Just remove NSP
Louis#0144: Mhm
chilli#5665: I agree that the common consensus is that MLM generates better embeddings
zphang#7252: there's no reason to use BERT over RoBERTa in that case lol
chilli#5665: but I think there's a lot of people who don't believe that should be the case
chilli#5665: and there've been several papers that do some small tweak/fine tuning on AR models to have them match MLM models for these kind of embedding based tasks
cfoster0#4356: I've heard this anecdotally but can't easily recall any apples to apples comparisons
zphang#7252: if you're doing anything that's token-wise, LMs already come out of the gate with a huge disadvantage
joaogui1#8461: We did some SentEval benchmarking at cohere and the difference is pretty big
cfoster0#4356: Yeah, and as kuru mentioned, that creates an interface mismatch
joaogui1#8461: https://paperswithcode.com/paper/isotropy-in-the-contextual-embedding-space
chilli#5665: I think this is a common belief
chilli#5665: or you can just look at the leaderboards for most of these NLU leaderboards, which are mostly still dominated by non-AR models
joaogui1#8461: The difference between RoBERTa and BERT is kind of weird really, it's just a BERT that you train for longer and without NSP
cfoster0#4356: What representation did you take, out of curiosity?
chilli#5665: https://arxiv.org/abs/2103.10385
joaogui1#8461: I believe just last layer
chilli#5665: This is one of the papers that's trying to demonstrate that AR is comparable to MLM in terms of embeddings
joaogui1#8461: Again, embeddings ≠ NLU
cfoster0#4356: Like, which token(s)?
joaogui1#8461: Average of last layer
zphang#7252: yea, but it's a drop-in, almost dominant replacement. Nothing comes to mind for any case where BERT is better than RoBERTa, so I wonder why people still use plain BERT lol
chilli#5665: how so
cfoster0#4356: Oh that'll be an issue
cfoster0#4356: Imo
cfoster0#4356: Because information can't propagate properly
chilli#5665: for the most part, NLU is reliant on embedding quality
joaogui1#8461: Oh sorry, I just meant that sometimes people say BERT but they're using Roberta
chilli#5665: in the typical pretrained model -> fine tuning setup
joaogui1#8461: It's just the difference is weird
joaogui1#8461: We don't rename ResNets just because we train them for longer or something like that
joaogui1#8461: Because when I say embeddings I mean what you can do with the embeddings in an unsupervised way
joaogui1#8461: So no fine-tuning allowed
zphang#7252: ah I see. no that's fair
joaogui1#8461: For example clustering
zphang#7252: The GPT Understands, Too paper does have some issues, but it's headed in the right direction of "if we properly exploit LMs, we can get much closer to MLM performance in these task formats"
chilli#5665: when I talk about embeddings I'm talking about their usefulness for downstream tasks
joaogui1#8461: Again just look the at Sentence-Transformers examples
zphang#7252: That said, I think it's premature to call MLM old and busted
kurumuz#5695: I can set up a classification pipeline on GPT-J 6B soon and compare that to BERT with some prompt tuning
zphang#7252: why prompt tuning though?
kurumuz#5695: BERT gets finetuned for specific tasks
kurumuz#5695: why shouldnt GPT
chilli#5665: well, i think another reason people don't like MLM is that it's a lot .... hackier than AR
joaogui1#8461: In fact they even found that the correlation between GLUE/SuperGLUE and out of the box embeddings performance is surprisingly low
zphang#7252: Right, why not full fine-tuning?
kurumuz#5695: oh, dont think its necessary
chilli#5665: Like, AR is a lot more fundamental than MLM
zphang#7252: lol I would say that's unfairly disadvantaging the MLM models
kurumuz#5695: how so
kurumuz#5695: I can do a finetune sure, if that is a concern
zphang#7252: I guess it depends on what claim we're trying to test.
The current and common use-case for MLM models is fine-tuning the whole thing
joaogui1#8461: You're going to compare 6B GPT with 300M Bert?
kurumuz#5695: deberta
kurumuz#5695: and yes
zphang#7252: still 1.5 to 6 though
kurumuz#5695: we can do gpt-neo i guess
kurumuz#5695: if that is fine
Sphinx#2092: You could compare finetuning only the t5 encoder
Sphinx#2092: That could be fun.
zphang#7252: the fairest would be GPT-2 vs. DeBERTa, no?
kurumuz#5695: GPT-2 is awful
joaogui1#8461: Yeah that's fairer
kurumuz#5695: deberta is trained good
kurumuz#5695: gpt-2 is not
joaogui1#8461: And then you should do full fine-tuning
zphang#7252: is Neo-1.5B better than GPT-2?
kurumuz#5695: yes
kurumuz#5695: absolutely
zphang#7252: which metrics
joaogui1#8461: Or you could do Neo-1.5B fine-tuning Vs DeBERTa fine-tuning vs GPT-J prompt-tuning
kurumuz#5695: lol just me playing with it for storytelling purposes
joaogui1#8461: That would be interesting
kurumuz#5695: can run some evals
ersatz#0001: It's actually very useful for many things
kurumuz#5695: GPT-J would probably destroy both lol
zphang#7252: oh, generation I wouldn't know, but for NLU tasks I wouldn't be surprised if the Pile diversity actually hurts it somewhat
joaogui1#8461: But again, this is all testing for NLU, not the same as embeddings
kurumuz#5695: might be wrong, pretty wild speculation
kurumuz#5695: sorry
joaogui1#8461: Depends
joaogui1#8461: What do you mean by prompt-tuning?
zphang#7252: anyway, my bet would be that GPT-Neo-1.5 or GPT-2 would underperform DeBERTa on NLU tasks under the fully fine-tuned format, but I would be interested if I'm proven wrong!
kurumuz#5695: https://arxiv.org/abs/2104.08691
joaogui1#8461: If it's this paper here
zphang#7252: (would be happy to collaborate on such an experiment lol)
kurumuz#5695: no, prompt tuning is really cool
kurumuz#5695: we deployed it in novelai
kurumuz#5695: people love it
kurumuz#5695: really effective
joaogui1#8461: Huuum, cool
kurumuz#5695: better than we imagined tbh
chilli#5665: how expensive is prompt tuning?
kurumuz#5695: yeah sure, will let you know if i really do it
chilli#5665: pretty cheap?
kurumuz#5695: pretty cheap yeah.
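(For reference, the core of the Lester et al. paper linked above is just a small matrix of trainable "soft prompt" embeddings prepended to the input while the LM stays frozen, so a tune is only n_tokens × d_model parameters. A minimal sketch assuming an HF-style causal LM:)

```python
import torch
import torch.nn as nn

class SoftPromptLM(nn.Module):
    def __init__(self, model, n_tokens=20):
        super().__init__()
        self.model = model
        for p in self.model.parameters():
            p.requires_grad = False  # the LM itself stays frozen
        d_model = model.get_input_embeddings().weight.shape[1]
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.01)

    def forward(self, input_ids, **kwargs):
        tok = self.model.get_input_embeddings()(input_ids)
        prompt = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        return self.model(inputs_embeds=torch.cat([prompt, tok], dim=1), **kwargs)
```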
cfoster0#4356: It's a very smart usage of causal attention models, ngl
zphang#7252: let me know if I can help!
kurumuz#5695: we will let users do their own prompt tunes
joaogui1#8461: Also DeBERTa will be normal fine-tuning right? Not SIFT
kurumuz#5695: I can do normal finetunes on all models.
zphang#7252: SIFT is just a slightly more robust fine-tuning method, I don't think it makes much of a difference unless you're clawing up the leaderboard
cfoster0#4356: My recommendation would be to take the embedding of the last token, for the GPT models
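(A quick sketch of the options being debated, assuming HF transformers: mean-pooling a layer vs taking the last token's state, which under causal attention is the only position that has seen the whole input. The layer choice is a judgment call, per the second-to-last-layer comments above.)

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

@torch.no_grad()
def embed(text, mode="last_token"):
    inputs = tok(text, return_tensors="pt")
    states = model(**inputs, output_hidden_states=True).hidden_states
    layer = states[-2]  # second-to-last layer, shape (1, seq, d_model)
    if mode == "mean":
        return layer.mean(dim=1).squeeze(0)
    return layer[0, -1]  # only the last token attends over the full text
```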
joaogui1#8461: I think it makes a big difference to be honest
zphang#7252: I think there are several ways you can approach it actually
kurumuz#5695: would be nice to have a 6B deberta...
joaogui1#8461: Like to me it's the reason DeBERTa beats a model 8 or so times bigger (T5)
zphang#7252: you can take the "single token representation format" which I think advantages the MLM models
zphang#7252: or you can take the "use any weird task format you want" which I think will give the LM models a (fair) boost
zphang#7252: but we should have both
chilli#5665: what's the largest MLM model?
chilli#5665: T5?
Sid#2121: probably, maybe even ByT5
Sid#2121: can't remember the largest T5 at this point
Sphinx#2092: They are seq2seq models, so it's not even a fair comparison.
zphang#7252: 11B was the largest, unless there have been larger since
kurumuz#5695: yeah should be 11b
Sphinx#2092: mT5 XXL is 13.
Sphinx#2092: dat vocab.
zphang#7252: such multilingual vocab
zphang#7252: time for the 2021 NLP-lympics!
Sphinx#2092: Either way, seems like the comparison should be between encoder-only models.
chilli#5665: hmm
joaogui1#8461: For context: the second citation when they describe SIFT is from another MS paper that uses pretty much the same method to get a BERT-large to roughly the same performance as T5-3B IIRC
chilli#5665: what's the largest encoder-only model then?
Sphinx#2092: I wouldn't be surprised if shit like Rembert would crush some of these things.
Sphinx#2092: Though I would also like to see people finetuning encoders from seq2seq models.
chilli#5665: akronomicon says megatron-BERT?
Sid#2121: what's akronomicon
chilli#5665: https://lair.lighton.ai/akronomicon/
joaogui1#8461: Serious is DeBERTa, Megatron is 8B though
zphang#7252: you mean SMART?
joaogui1#8461: Are the Megatron models public?
Sid#2121: damn this is cool lol
chilli#5665: Megatron-BERT is 3.9B according to akronomicon
joaogui1#8461: Yeah, that one
joaogui1#8461: My bad
chilli#5665: Are you referring to the Megatron-LM (decoder model?)
Sid#2121: largest megatron model is ~11B model trained by FAIR - but it sucks and it's AR
chilli#5665: also, it's not on the leaderboard
Sid#2121: because no one can get it working lmao
joaogui1#8461: Yeah, that's why I don't count it as serious haha
joaogui1#8461: So for serious models it's DeBERTa
Sid#2121: :tribalism: https://cdn.discordapp.com/attachments/729741769738158194/863201434493648936/Screenshot_from_2021-07-10_01-33-58.png
chilli#5665: Sid's referring to the Megatron-11B model (from FAIR), which is different from the Megatron-8B (from Nvidia).
chilli#5665: And these are all decoder models
chilli#5665: AFAIK, Megatron-8B works fine
Sid#2121: i don't think nvidia ever released megatron-8b though?
chilli#5665: oh really?
Sid#2121: afaik the biggest model released by the megatron team is 300M, but i'd love to be proven wrong
EricHallahan#1051: They didn't release anything IIRC.
joaogui1#8461: And I'm referring to the 3.9B Bert in Megatron which wasn't released
EricHallahan#1051: Or anything large enough to matter at least.
chilli#5665: ?
chilli#5665: well, except the code
Sid#2121: Pinned a message.
Sid#2121: pinning that so i remember it :berk:
chilli#5665: yeah it's quite useful
kurumuz#5695: based enough if you ask me
chilli#5665: yeah the code's been useful for a lot of people
kurumuz#5695: pangu-a is crazy lol
joaogui1#8461: You asked about the largest encoder model
chilli#5665: and everybody calls model-parallelism on transformers "megatron-LM style parallelism"
joaogui1#8461: Yeah
kurumuz#5695: i call it model sharding :berk:
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/863202272518340628/unknown.png
zphang#7252: Hmm, I think the picture is a lot more mixed. If I'm looking at the results right, it's a combination of SMART and MT-DNN, and the way they slice the results makes it hard to parse where SMART helps.
Where they do have direct comparisons, SMART usually adds a couple points, with larger improvements on adversarial tasks
Also I think they have a newer SIFT lol
chilli#5665: I was just referring to this
chilli#5665: there's a megatron-11B that's released (that sucks) from FAIR
bmk#1476: why is it US flag lol
chilli#5665: and a megatron-LM-8B that isn't released from Nvidia
chilli#5665: and then a megatron-BERT-3B that also isn't released
chilli#5665: lol
chilli#5665: what annoying naming schemes
CRG#8707: English text?
cfoster0#4356: SIFT isn't specific to MLM/encoder models, right?
joaogui1#8461: Nope
joaogui1#8461: It's barely specific to NLP
guac#4716: `us-central-1`
bmk#1476: *angry "US != english" noises*
kurumuz#5695: was eating my popcorn when people were using 11b on their product just because the number beeg
kurumuz#5695: :ultraberk:
bmk#1476: actually our models are trained in euw4a
chilli#5665: hmm
Louis#0144: @chilli what's your thoughts on Ernie gen
chilli#5665: I feel like models should be sorted by Flops tbh
Louis#0144: https://arxiv.org/abs/2001.11314
joaogui1#8461: A newer SIFT, damn
chilli#5665: haven't read it
chilli#5665: should ask somebody else ๐
Sid#2121: @chilli is torch ever gonna get pmap
Louis#0144: True
kurumuz#5695: what is the difference between erniev3
Louis#0144: Ernie gen is something you add to T5 or Ernie to improve generation quality
chilli#5665: depends on what you mean by pmap
Louis#0144: You add it as a finetune step I think
Louis#0144: No one has scaled it tho
Louis#0144: So I have no idea how it does
Sid#2121: I want to remove all the megatron mp primitives and just do :chad: wrapping my model code in pmap like in mtj
Louis#0144: Thereโs like 20 Ernie papers that all seem promising but no one has combined or scaled them yet
Louis#0144: Rip
joaogui1#8461: So we can eventually get Erniev3-15B-Gen
kurumuz#5695: that would be beautiful
chilli#5665: the main bottleneck is having an XLA-equivalent that does the MP primitives I think
joaogui1#8461: That's xmap not pmap
kurumuz#5695: @finetune was mad he couldnt print() the weights though
chilli#5665: xmap is basically just a wrapper above pmap
chilli#5665: well, wrapper isn't really the right word
chilli#5665: but like, another interface above pmap
chilli#5665: As in, the actual operations performed by xmap can be duplicated by pmap + vmap
Louis#0144: rotoERNIEv3-15B-GEN-ViL
Louis#0144: lol
kurumuz#5695: god
Louis#0144: + one other thing
kurumuz#5695: what is that name
joaogui1#8461: Lol
Louis#0144: I think ERNIE needs like
Louis#0144: months of ablations
Louis#0144: :berk:
chilli#5665: there are some shardedtensor RFCs floating around if you're interested @Sid
zphang#7252: just shorten it to ERNIE-ViL, or EViL for short
bmk#1476: still better than eleuther-alm-gpt-neo-mtf-pile400B-2.7B-v1.2-base-tuned-default-framework-system-library
Deleted User#0000: I used pmap on GPUs and it worked perfectly
Louis#0144: perfect TRC project
chilli#5665: but I think it's something that's a work in progress
Sid#2121: How about just not starting with a shit model in the first place :thonk:
Louis#0144: ye
Louis#0144: we are
Louis#0144: rotoBART
Sid#2121: AR tribalism :tribalism2:
Louis#0144: o
Louis#0144: no I want to explore seq2seq
Louis#0144: thats the purpose of this
chilli#5665: data-parallel only or model-parallel too?
Sid#2121: i am!
kurumuz#5695: AR is all you need in a year
Deleted User#0000: just data parallel
Louis#0144: the entire purpose is to focus on seq2seq
Deleted User#0000: Ben told me to just use pjit instead of xmap
Louis#0144: could ERNIE be made autoregressive tho?
Louis#0144: I dont entirely think so?
chilli#5665: I don't think pjit works on GPUs right now right?
Sid#2121: dunno, still haven't read the paper
joaogui1#8461: This is the 8th place in GLUE
Microsoft D365 AI & MSR AI & GATECH MT-DNN-SMART
kurumuz#5695: ofc
kurumuz#5695: :berk:
Deleted User#0000: Yeah, it would be for TPUs
chilli#5665: @Sid https://github.com/pytorch/pytorch/issues/55207
Deleted User#0000: If the Jax devs solve it for GPUs
zphang#7252: JAX devs plz
Louis#0144: link?
chilli#5665: I think it's mainly bottlenecked on XLA devs
Deleted User#0000: That would be crazy
Sid#2121: awesome, will be following closely
Louis#0144: im not convinced bc the ERNIE architecture is entirely based around T5 and theyve invested a lot in improving gen quality
Louis#0144: lol
zphang#7252: oh that's just adding the team names though
kurumuz#5695: i wish jax didnt have slow gpu inference
kurumuz#5695: should investigate that
zphang#7252: wait does it?
Louis#0144: no?
Louis#0144: lol
kurumuz#5695: does what
Sid#2121: this would be fucking awesome https://cdn.discordapp.com/attachments/729741769738158194/863204542770446356/Screenshot_from_2021-07-10_01-46-24.png
zphang#7252: have slow GPU inference
kurumuz#5695: i compared it with huggingface pytorch
Louis#0144: Its just bc Ben's code isnt optimized for inference I thought
kurumuz#5695: it was much slower
Louis#0144: lol
kurumuz#5695: idk, its fast on TPUs
kurumuz#5695: pretty fast
joaogui1#8461: Just reading the leaderboard really: https://gluebenchmark.com/leaderboard
joaogui1#8461: But there's an evil repo that barely runs with all of the code
kurumuz#5695: are you serious
zphang#7252: ya I would assume well written JAX / PyTorch code would basically run at the same speed on GPUs
kurumuz#5695: god
kurumuz#5695: GOD
kurumuz#5695: please
kurumuz#5695: imagine a future where sharding isnt pain
chilli#5665: I feel like it isn't that bad with pjit/GSPMD
chilli#5665: :thonk:
Louis#0144: this web page is actually just awful
Louis#0144: LMAO
Louis#0144: wtf
Sid#2121: or, even nicer https://cdn.discordapp.com/attachments/729741769738158194/863204873221439508/Screenshot_from_2021-07-10_01-47-41.png
Louis#0144: Its *so laggy* on mobile
zphang#7252: I TA'ed for a class where one project was doing some experiments on SMART and they tried using the mt-dnn code base. In the first meeting I told them to just reimplement the SMART algo with HF/T as a base rather than touch mt-dnn lol
joaogui1#8461: Yeeeah
chilli#5665: One thing I wonder about for XLA code, is that if you're not happy with the performance after you jit your model
chilli#5665: what do you do?
joaogui1#8461: I've tried to use that, it's painful
joaogui1#8461: Don't think I'll ever recover
chilli#5665: maybe @joaogui1 has experience with this?
joaogui1#8461: There's a bunch of stuff to check
zphang#7252: ehh, I don't think it's the worst ML code base I've worked with...
joaogui1#8461: Like if you're recompiling stuff
joaogui1#8461: If you can't just jit more of your code
Louis#0144: ERNIE ablations would be blog post worthy? or paper worthy
chilli#5665: yeah, so let's say you're not doing stupid things like that
Louis#0144: im thinking blogpost
chilli#5665: like, what do you do if you're not happy with how XLA compiled some subgraph
Louis#0144: tbh
Louis#0144: no reason to make it more than just a giant table
joaogui1#8461: No idea tbf
zphang#7252: write more papers, flood the short paper market
Louis#0144: :berk:
chilli#5665: hmm
zphang#7252: monthly submissions to ARR
Louis#0144: lul
Sid#2121: I'm guessing there's not three separate compilers for XLA like there is for jit lol
joaogui1#8461: @jekbradbury
chilli#5665: I guess if you're on TPUs you don't really have a choice anyways
chilli#5665: It's not like you're gonna write faster TPU code (or can you?)
bmk#1476: oh shit emnlp author response is in like 2 days
zphang#7252: what'd you submit
bmk#1476: i get to know whether my short paper was total garbage!
bmk#1476: the, uh, dataset filtering thing
zphang#7252: oh right
zphang#7252: resubmit pile with Neo + 6B results :thonk:
jekbradbury#2280: internally: you file a bug and the XLA oncall looks at it
externally: one of us files the bug for you, at least for now
zphang#7252: and add in all the random important sentences that we deleted
bmk#1476: eh im not in a hurry to get pile published atm
zphang#7252: like definitions
bmk#1476: we should submit to a journal isntead
bmk#1476: so we can get the full length
chilli#5665: Has the XLA team considered exposing some more performance knobs?
jekbradbury#2280: they expose a ton of knobs as flags, and we haven't yet figured out how we're going to expose those flags for OSS users
chilli#5665: Is there documentation for those flags somewhere?
chilli#5665: concretely, I've been benchmarking some ML compilers on some toy tasks, and I want to try to optimize XLA perf
kurumuz#5695: huh, might be interesting for me too.
inox#5400: submit to JMLR, the premier machine learning journal that runs on a potato in a grad student's office somewhere
bmk#1476: honestly i might
joaogui1#8461: I really like the JMLR
inox#5400: same, unironically I love that potato
inox#5400: who would win: JMLR with zero resources vs Nature
inox#5400: JMLR every time
jekbradbury#2280: each flag has a docstring in the code where it's declared
AI_WAIFU#2844: ๐
jmerizia#4039: Seems like it would take a long time to implement sharded tensors, but it seems like an elegant solution once it works. Pretty much every layer would have to change I think. Or you'd need separate sharded layers
jmerizia#4039: Sharded layers are a thing I'm working on in my research lab
chilli#5665: From my understanding, that's basically how XLA works
chilli#5665: except of course, with a far smaller set of primitives compared to PyTorch
jmerizia#4039: Yea. For things like cnns, there would need to be a lot of communication still
chilli#5665: well, yeah, that's why sharding is mainly only useful if you have fast interconnect
jmerizia#4039: So I think it will be several months before we can pass a ShardedTensor into a Conv2d
chilli#5665: want to elaborate on what you're working on?
jmerizia#4039: https://github.com/distdl/distdl
jmerizia#4039: It's a collection of parallel implementations of existing pytorch layers
jmerizia#4039: (very early research software lol)
chilli#5665: How are you defining your parallel tensors?
jmerizia#4039: They are just normal tensors. The sharded tensor abstraction is nice, but it's expensive in terms of dev hours. Instead, the layers are given Partition objects, which carry info on how the input is sharded
chilli#5665: yeah, but how do you actually shard your tensors across different devices?
jmerizia#4039: Oh under the hood it's MPI (and soon NCCL)
Spy#9778: @alstroemeria313 @gabriel_syme This isn't super great but since you guys helped so much when I was getting this working I figured I should upload it: https://github.com/davisyoshida/vqgan-haiku/tree/master
Spy#9778: Thanks again!
Spy#9778: I had to unpack a bunch of stuff from my davis-utils package to put in the utils.py
Spy#9778: and it was really boring
Spy#9778: so instead I wrote a thing which takes files which import davis-utils and puts all the used functions/classes into a single utils file
Spy#9778: it only took like a day to save 10 minutes ๐
kindiana#1016: Sharded conv does work on xla I believe
kindiana#1016: It does a halo exchange
jmerizia#4039: Oh that's interesting
jmerizia#4039: Do you know if GPU is on the roadmap for the jax team?
kindiana#1016: @jekbradbury
jekbradbury#2280: yes, absolutely (in fact the sharding features like pjit and xmap already support GPUs, the missing thing is open source/documented support for multihost GPU)
kurumuz#5695: I should get into JAX soon and see why it's slower compared to huggingface pytorch for GPT-J 6B.
kurumuz#5695: if that is solved, really useful for model sharding
chilli#5665: oh, pjit/GSPMD works on GPU now? nice
jekbradbury#2280: yeah, just recently
chilli#5665: cool, I was trying it out previously on a project and got some weird errors that made me unsure about whether it was supported
gabriel_syme#3220: You mean finetuning or inference?
Louis#0144: To be clear
Louis#0144: I think sharding would be slow for inference in general
Louis#0144: If the model can already fit on a single card
Louis#0144: No?
kurumuz#5695: no
kurumuz#5695: with a good interconnect linear speedup.
kindiana#1016: to be more precise you need a low latency interconnect
kindiana#1016: and parallel ff + attn also helps
kindiana#1016: lol
Louis#0144: O yeah
Louis#0144: True
Louis#0144: lol
Louis#0144: We literally discussed this kuru
Louis#0144: Ur right
kurumuz#5695: @Louis you remember the 3090 x2 benchmark?
kurumuz#5695: it was linear 2x speedup
jmerizia#4039: Was that with nvlink?
kurumuz#5695: not sure, might be pcie4.
jmerizia#4039: do you have the paper/website?
kurumuz#5695: no
Louis#0144: True
kindiana#1016: I think theres also a way by modifying the model architecture slightly to totally make it not communications bound
kindiana#1016: but I won't elaborate for now :berk:
kurumuz#5695: decoder transformer?
kurumuz#5695: wow
kurumuz#5695: that is crazy
kurumuz#5695: i will try to figure it out lol
Louis#0144: Wut
Louis#0144: Wait elaborate @kindiana
Louis#0144: Iโm curious now
kurumuz#5695: you gave me a giant hint
kindiana#1016: :goose6:
kurumuz#5695: that is good enough
Louis#0144: I've been reading your code base so much lately LOL
bmk#1476: is it local attention
kindiana#1016: no
bmk#1476: dang
Louis#0144: I'm trying to do T5 in your code base but the dual heads of T5 are making that hard
Louis#0144: I'll figure it out eventually
Louis#0144: @kurumuz this isn't a hint FYI
Louis#0144: lol
kindiana#1016: that drops half your model parallel communication
kindiana#1016: but I think you can drop all of it ๐ค
bmk#1476: only put attention in half of the layers :bigbrain:
kindiana#1016: that doesn't reduce communication volume lol
kurumuz#5695: louis, i implemented parallel ff + attn in neox
kurumuz#5695: i know lol
jmerizia#4039: put attention in sqrt(n) of the layers?
Louis#0144: Ok lol
jekbradbury#2280: ok iโm intrigued
kindiana#1016: alright I'll spill the beans due to popular demand :berk:
instead of adding the output of a layer to the residual immediately, add it after the next layer
jekbradbury#2280: transformer decoding is highly memory bandwidth bound, and the only way i know to avoid that is to make it communication bound instead
kindiana#1016: that way MP communications is non blocking
jekbradbury#2280: ah yeah, ok, itโs about overlap
jekbradbury#2280: there are many ways to improve overlap ๐
kindiana#1016: I'd be curious if GPT-J weights tolerate being run with late-residual-add
kindiana#1016: maybe thats something you want to investigate @kurumuz lol
kurumuz#5695: yeah definitely
kurumuz#5695: after i get my sleep will try
kindiana#1016: it also depends on pytorch/xla/whatever being smart enough to actually do the overlap
kindiana#1016: I'm not sure if they are lol
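(A sketch of the late-residual-add idea in PyTorch-ish pseudocode; illustrative, not GPT-J's actual forward pass. Block i+1 runs on a residual stream that does not yet contain block i's output, so block i's model-parallel all-reduce can overlap with block i+1's compute.)

```python
def forward(x, blocks):
    pending = 0  # previous block's output, not yet folded into the residual
    for block in blocks:
        out = block(x)   # does not have to wait on `pending`'s all-reduce
        x = x + pending  # fold in the previous block's (now-arrived) output
        pending = out
    return x + pending
```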
Louis#0144: What are others?
jmerizia#4039: that's also interesting for interpretability
Louis#0144: How
jekbradbury#2280: mostly different kinds of op splitting; imagine writing a natively distributed matmul algorithm or something
jekbradbury#2280: XLA has historically not been good at this, but itโs getting a lot better
kurumuz#5695: XLA seems really promising
jmerizia#4039: such ablations are along the lines of thinking about interpretability. i.e., can I cut something out and not get garbage. I think Gurkenglas will say something to this point tomorrow. But it's unrelated to performance (sorry to distract lol)
Dupelet#9080: Just wondering - would any of the core team be interested in doing an AMA on Reddit about GPT-Neo?
Dupelet#9080: I'm not quite sure who to reach out to to invite
Louis#0144: @StellaAthena often answers questions on Reddit
Louis#0144: But we recently found out she does not partake in gooseposting.
Louis#0144: Don't know if that matters for the AMA
Louis#0144: ๐
EricHallahan#1051: We have been asked in the past, and we turned it down at the time.
EricHallahan#1051: Though I don't know if things have changed.
gabriel_syme#3220: Is that a napkin space moment? :berk:
gabriel_syme#3220: Will that be available at some point? :)
AI_WAIFU#2844: I bet it works, but I also bet there's a minimum amount of serial computation you have to do to avoid diminishing returns.
kindiana#1016: well, currently its ~28
kindiana#1016: as opposed to 56 of a regular transformer of the same arch
kindiana#1016: idk if 14 will work
kindiana#1016: lol
AI_WAIFU#2844: Eh, you can just up the batch size to compensate.
AI_WAIFU#2844: More serial -> more time per iteration due to latency. More batch -> more examples for same latency
kindiana#1016: well, not if you are latency bound for inference or something
kindiana#1016: none of these are a huge deal for training
kindiana#1016: you have n_layers serial steps when training
kindiana#1016: but n_layers * tokens when generating
AI_WAIFU#2844: that makes sense.
AI_WAIFU#2844: Idea: distill a model using the reverse kl loss, which is mode seeking rather than mode covering.
AI_WAIFU#2844: That way it should behave like a GAN, making high-quality output but dropping most of the distribution
AI_WAIFU#2844: So you can get away with a smaller model for generation purposes.
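(A minimal sketch of that loss, assuming student/teacher logits over the same vocabulary. Reverse KL(student‖teacher) punishes the student for putting mass where the teacher has little, so it seeks modes rather than covering the whole distribution.)

```python
import torch.nn.functional as F

def reverse_kl(student_logits, teacher_logits):
    log_q = F.log_softmax(student_logits, dim=-1)  # student
    log_p = F.log_softmax(teacher_logits, dim=-1)  # teacher (detach in practice)
    return (log_q.exp() * (log_q - log_p)).sum(dim=-1).mean()
```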
Manny96#3437: We should make a DevOps channel - Kanban style?
zphang#7252: what do you mean by dual heads
zphang#7252: you mean after the next whole transformer block?
kindiana#1016: yes
zphang#7252: interesting, and when you say late residual add, you mean making this change without re-tuning and hoping that because of residuals it still kind of works?
Louis#0144: Span head and AR head
kindiana#1016: yup
zphang#7252: er elaborate?
Louis#0144: Maybe Iโm over tired and confusing T5 for something else
Louis#0144: lol
zphang#7252: or maybe I'm missing something lol
zphang#7252: it should just be a standard encoder-decoder? (other than the pos encoding)
bmk#1476: uh not sure I see how this cuts down on bandwidth
kindiana#1016: this does not
kindiana#1016: but it lets you overlap communication with computation
kindiana#1016: so the wallclock is max(comms, compute)
kindiana#1016: instead of comms + compute
zphang#7252: ben, is it easy for you to share the pretraining scripts for gpt-j?
kindiana#1016: wdym?
kindiana#1016: should be able to do it from the repo
bmk#1476: also what about the backward pass?
kindiana#1016: what about it?
kindiana#1016: (you can overlap in the backwards pass too)
bmk#1476: can you use the same trick backwards?
zphang#7252: I'm a little confused about what the entry point is. Would I run `train.py` on the GCE machine and `device_train.py` on each TPU VM?
kindiana#1016: train.py handles all that
kindiana#1016: although currently the repo is broken because all tpu pods have mismatched python versions lol
kindiana#1016: like, node 0 has python 3.8.10 but all the other nodes have 3.8.5 :mesh:
zphang#7252: oh so I would only need to run `train.py` on the GCE VM, and that does all the things? neat
kindiana#1016: yes
kindiana#1016: it creates the tpu, ssh's into them to install the deps, starts the ray workers etc
suh#2879: hi everyone, just wondering if it's possible to minibatch or break each step into batches for VQGAN+CLIP, can't get higher resolution on a v100 bc it runs out of memory
alstroemeria313#1694: We never did figure out how to do tiles well
suh#2879: hmm, it's all good will use topaz atm and look into it later, ty tho
alstroemeria313#1694: If you have two GPUs you can model parallelize VQGAN
suh#2879: yeah don't really have access to two gpus atm
suh#2879: been using gradient notebooks
chilli#5665: By who? :thonk:
EricHallahan#1051: https://www.reddit.com/r/EleutherAI/comments/l7uuy4/is_anyone_actually_from_eleutherai_here/
Daj#7482: I don't really know how AMAs happen, but I'm not opposed to doing it if there would be enough interest
Sid#2121: I think /r/MachineLearning might be a better venue tho
kurumuz#5695: I'm sure there would be interest if you guys decided to do it
kurumuz#5695: AMAs are really tiring to do haha
kurumuz#5695: you need to answer like, maybe hundreds of questions
Daj#7482: I can take a day off to do that, it sounds fun
Daj#7482: yeah
Dupelet#9080: I'm from r/futurology, and we're on the lookout for topics of interest, which is why I'm asking
Dupelet#9080: AMAs (Ask Me Anything) are Reddit's version of Q&A interviews. AMAs on our sub are generally less intense, over a few days. Answer at your convenience, no set time necessary.
Daj#7482: lol you had David Kelley on
Daj#7482: But yeah, at least for me personally sounds fun, guess I should check with the others
StellaAthena#3530: Iโm down.
StellaAthena#3530: Thereโs also an official AMA subreddit that tbh we are probably enough of a Big Deal to get a platform on if we wanted
pebbles#7130: you may get more interesting questions on some subreddits compared to others
Dupelet#9080: Well talk it over, and ping me if you're interested ๐
|
Manny96#3437: I used to develop applications using Python libtorrent. Pile and libtorrent
Manny96#3437: Libtorrent is really cheap and has data integrity
Gurkenglas#7362: What happens if, instead of skip connections, one applies the attention and feedforward layer in parallel?
paws#3311: i had no idea that apple had an ai residency program ๐ฎ
Sid#2121: that's what gpt-j does
Gurkenglas#7362: neat.
Gurkenglas#7362: aw man theres like 50% chance this is source attribution error.
Gurkenglas#7362: does gpt-j get logit lens?
Sid#2121: dunno, don't think anyone ever tried it
kindiana#1016: Thought it did?
StellaAthena#3530: @nostalgebraist is the person to ask about this. I know it works with GPT-Neo, IDR about GPT-J
alstroemeria313#1694: btw https://github.com/zzd1992/Image-Local-Attention is a thing
alstroemeria313#1694: i'm trying it rn in ipython, it's actual sliding window attention over 2D feature maps
alstroemeria313#1694: CUDA only
alstroemeria313#1694: (It has custom ops)
alstroemeria313#1694: Also it works w/ double backward() (needed for GAN gradient penalties)
alstroemeria313#1694: Not multihead though.
alstroemeria313#1694: I wish it were
alstroemeria313#1694: Uh, it should let you use self-attention at every resolution of a convnet w/ downsampling or upsampling blocks?
alstroemeria313#1694: Instead of only at low resolutions.
|
alstroemeria313#1694: As in they didn't even provide a CPU impl
EricHallahan#1051: That drove me insane in the original StyleGAN.
alstroemeria313#1694: CUDA only?
EricHallahan#1051: It was.
alstroemeria313#1694: oh
alstroemeria313#1694: there is a TorchLocalAttention in there too
alstroemeria313#1694: which is a pure pytorch slow version
alstroemeria313#1694: this runs on cpu and torch.isclose checks out w/ the CUDA version
alstroemeria313#1694: i... i would feel better if they had an optimized C++ version
alstroemeria313#1694: for cpu
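For reference, a pure-PyTorch (CPU-friendly) sketch of the same sliding-window idea — single-head, with the q/k/v projections omitted for brevity, so it's a toy rather than a drop-in replacement for the repo above:
```python
import torch
import torch.nn.functional as F

def naive_local_attention(x, window=7):
    """Each pixel attends to a window x window neighborhood. x: (B, C, H, W)."""
    b, c, h, w = x.shape
    pad = window // 2
    # gather each pixel's neighborhood: (B, C, window*window, H*W)
    neigh = F.unfold(x, kernel_size=window, padding=pad).view(b, c, window * window, h * w)
    q = x.view(b, c, 1, h * w)
    attn = (q * neigh).sum(1, keepdim=True) / c ** 0.5  # (B, 1, k*k, H*W)
    attn = attn.softmax(dim=2)
    out = (attn * neigh).sum(2)                         # (B, C, H*W)
    return out.view(b, c, h, w)
```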
EricHallahan#1051: OpenCL/Vulkan/SPIR-V seems to be entirely absent from ML for some reason.
kurumuz#5695: tinygrad should be mainly OpenCL
kurumuz#5695: https://github.com/geohot/tinygrad
EricHallahan#1051: Like there is the Vulkan backend for PyTorch, but that is really just meant for inference on Android.
kurumuz#5695: lol
kurumuz#5695: maybe CUDA is just faster
kurumuz#5695: ยฏ\_(ใ)_/ยฏ
EricHallahan#1051: You have to compile from source to get it lol
Louis#0144: OpenCL is dead
Louis#0144: thats why
|
Louis#0144: lol
Louis#0144: besides legacy systems
Louis#0144: almost all scientific computing stuff Ive seen is CUDA
Louis#0144: with very few exceptions
EricHallahan#1051: https://www.khronos.org/blog/opencl-3.0-specification-finalized-and-initial-khronos-open-source-opencl-sdk-released
Louis#0144: ๐คทโโ๏ธ
alstroemeria313#1694: yeah but what uses it
Louis#0144: in 2016-2018 when I was really into super computing
Louis#0144: I never saw anyone use open cl
Louis#0144: like at all
Louis#0144: even now I dont know anyone who usesit
Louis#0144: and I still have friends in distributed computing
alstroemeria313#1694: oh, blender can use it?
alstroemeria313#1694: ok
alstroemeria313#1694: ...does it actually work well
ersatz#0001: is CUDA a thing on Android?
EricHallahan#1051: If you have an NVIDIA GPU, I am sure it is.
Louis#0144: Tegra?
Louis#0144: is Tegra still a thing?
Louis#0144: oh yeah the switch
|
triggerhappygandi#0001: Is tegra not a thing in android phones anymore?
triggerhappygandi#0001: I don't keep up with smartphone tech
Louis#0144: smartphone hardware tech is basically solely apple
Louis#0144: lol
Louis#0144: qualcomm is so incredibly far behind
Louis#0144: they arent really worth considering
alexyz#3459: for CPUs, yea
alexyz#3459: but it depends on what category of hardware you choose to compare
EricHallahan#1051: I'm pretty sure that Qualcomm sells multiple times more chips than Apple lol
alexyz#3459: but performance wise...
kurumuz#5695: CPUs, GPUs and neural engines
kurumuz#5695: lol
kurumuz#5695: apple is completely smacking everyone else
alexyz#3459: when tf do you need a mobile GPU lmao
EricHallahan#1051: It helps a lot when accelerating graphics lol
alexyz#3459: and "neural engines" are a thing that apple literally made up
alexyz#3459: like of course no other smartphone company would have it, it's part of the term
EricHallahan#1051: Everyone and their dog is getting on on the trend of dedicated units for the acceleration of ML tasks.
alexyz#3459: and by other hardware I meant stuff like screens and other hardware components
alexyz#3459: like Samsung rules the smartphone display space, they were supplying Apple exclusively up until a few years ago
|
ersatz#0001: inference
ersatz#0001: Computational photography example
kurumuz#5695: i meant any processor that accelerates matrix math.
kurumuz#5695: other chips have them too, they're just much slower.
tjroxx#2664: hey yall
tjroxx#2664: Jamaican here
alexyz#3459: ๐ฏ๐ฒ
Siyris#0001: There's Colabs in a lot of the frequently used channels, what about making a Colab dedicated channel under resources where people could share them and pin the most frequently requested ones?
someKindaBean#8471: I recently saw a paper about optical matrix math co-processors
someKindaBean#8471: That was kind of cool, because it allows operations at a lower time complexity
nshepperd#2316: i used to use opencl, but it was a lot of effort to make an entire ml library, so i said screw it and used pytorch instead
nshepperd#2316: and we've been living under the boot of the cuda monopoly since...
mega b#6696: This has been on my mind: Are humans more of a GAN, or a Transformer?
๐ ฌ gabriel_syme ๐ ฌ#3220: I'm definitely not a gan
thenightocean#6100: humans are more of a GOFAI expert system IMHO
๐ ฌ gabriel_syme ๐ ฌ#3220: on top of a huge NN?
bmk#1476: humans arent really gofai
bmk#1476: humans are a NN pretending to be a gofai pretending to be a NN
๐ ฌ gabriel_syme ๐ ฌ#3220: I think essential to all this is the fact that analytical descriptions of whatever the system is will always fall short of what it is.
๐ ฌ gabriel_syme ๐ ฌ#3220: that's all I have though lol, no insights on what it is
|
๐ ฌ gabriel_syme ๐ ฌ#3220: by that I don't mean model vs reality nonsense, just that the idea of absolute dichotomies is not the way to decompose things
suh#2879: any ways to stop a pytorch model without memory leaks
applepi#4437: in general purpose-built anything is useful for .... the purpose lol
mega b#6696: perhaps on a portable vr headset
mega b#6696: maybe throw a couple 3090tis, and a a100 why not
mega b#6696: now you can train a gpt model and run minecraft shaders + third degree burns
mega b#6696: good luck holding that thing up, too
applepi#4437: @someKindaBean can you link paper; i'm curious what is meant by "lower time complexity"
Kia#2550: Run an image model :thonk:
Louis#0144: Restricted Boltzmann machine
natedog#8669: Hi y'all! My team is trying to finetune GPTNeo (125M) on some additional GitHub data, but the training loss hasn't changed much over our initial test of 200 steps. We're wondering if it's okay to not see much change over such a short period, or if something might be wrong with our warmup or something. Would appreciate advice from anyone who's tried finetuning GPTNeo for other purposes
Kharr#7888: The model tunes fine. You're going to have to give more information around warmup schedule, learning rate, optimizer, # of tokens/step. What do you mean by 200 steps? 200 optimizer steps or epochs over data or ? E.g. if you're using a linear warmup schedule over 10k steps and you're 200 steps in.. your lr at 200 is lr * 200/10k which means it's barely going to budge
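To make that warmup math concrete, a two-line version of a linear schedule (base_lr here matches the 2.5e-4 mentioned below):
```python
def linear_warmup_lr(step, base_lr=2.5e-4, warmup_steps=10_000):
    return base_lr * min(step, warmup_steps) / warmup_steps

linear_warmup_lr(200)  # 5e-6 -- only 2% of base_lr, so the loss barely moves yet
```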
EricHallahan#1051: Warmup?
Arto#7478: So here are some details: we are training GPTNeo from the HuggingFace model hub with raw GitHub data, with all params as per the config here https://github.com/EleutherAI/gpt-neo/blob/master/configs/gpt2_small.json except we are using Adafactor, and we did 10k steps with batch size 512 and seq_len 2048, which were all warmup from 0 to 2.5e-4 (because I multiplied the number of warmup steps by grad accumulation twice accidentally ๐ฌ). The eval loss was fluctuating around 0.92 rather than decreasing. So we wonder if this is expected because GPTNeo is already quite good with code, or should we search for some bugs?
natedog#8669: Even more additional info, by step we refer to number of batches seen and we are using a linear warmup scheduler. If you'd like to see some pretty charts of the behavior we are talking about, here is a link to our wandb run: https://wandb.ai/wandb/hf-flax-gpt-neo-copilot/runs/2yr36dg0?workspace=user-
Kharr#7888: Looks normal to me. Your train loss is at ~ 1.5 which means the model already understands the task (assuming you're using the normal tokenizer + pretrained model). It will go down very slowly as you tune and the model refines a bit. There's also a chance that it will get worse since it was already trained on github data and your finetune could break the model.
natedog#8669: okay, awesome! Is there anything special we could do to lower the chance that we break it lol?
Kharr#7888: Keep lr < 1e-4 and use a big batch size. Noisy training + high LR will wipe out some of its encoding.
CRG#8707: Refreshing the optimizer state might be hurting it. (Is there a way to transfer the Adam buffers into adafactor?)
Kharr#7888: using slow warmup is fine too
|
CRG#8707: Yeah, referring to <https://discord.com/channels/729741769192767510/851918317039255592/857653027233333248>, (lower loss but higher over fitting risk)
Kharr#7888: For 6B model, yes, but for 125M model overfitting is really hard unless dataset is tiny
Arto#7478: Hmm, we can just try to switch to Adam and load the state; will try and see if it'll fit in memory with our current setting
so if loading the optimizer state, should we decrease the number of warmup steps?
also we don't mind losing some performance on general language modeling to make it better with code
natedog#8669: Yeah, we are essentially trying to create an open version of GPT Codex, and we are using quite a large but noisy dataset from GitHub (~209GB compressed), so we mostly care about the ability to complete code similar to GitHub Copilot. Thank y'all for your help and suggestions. We will try a few different experiments and see which works best
bmk#1476: you should try our code model too
bmk#1476: https://huggingface.co/lg/ghpy_20k
CRG#8707: If trying to replicate everything, it's likely that copilot uses something like the codex-s variant (fine-tuned on function implementations that pass unit tests on a curated problem dataset). <https://github.com/openai/human-eval>
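For reference, the harness in that repo expects completions in a jsonl, roughly like this per its README (generate_completion is a placeholder for your model's sampling function):
```python
from human_eval.data import read_problems, write_jsonl

problems = read_problems()
samples = [
    dict(task_id=task_id, completion=generate_completion(problems[task_id]["prompt"]))
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)
# then score with the repo's CLI: evaluate_functional_correctness samples.jsonl
```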
Kharr#7888: These are pretty :chonk: models.
bmk#1476: it's not chonk if it's less than 100B
Kharr#7888: How much data was it trained on?
bmk#1476: not a lot, it's a fine tune
bmk#1476: I don't remember exactly how much
Orz#3023: Is it a model similar to copilot?
bmk#1476: I'd assume so
Kharr#7888: Looks like 2.7B Neo with all global attention?
EricHallahan#1051: It is literally just Neo tuned on Python IIRC.
Kharr#7888: Unless config is incorrect, it also did drop the local attention
bmk#1476: screw local attention
|
nostalgebraist#3542: sorry if i'm just out of the loop, but is ghpy-6b going to happen?
bmk#1476: it will whenever I get around to it
Louis#0144: based and ghpy pilled
Orz#3023: is it possible to combine multiple collabs/kernals at the same time to finetune gpt-j?
Kharr#7888: No. Finetuning requires a lot more resources than any single Colab offers and training asynchronously would be quite an amazing feat if you can somehow figure that out.
sweg#8920: ~~i am a decision tree~~
sweg#8920: actually im a markov chain
sweg#8920: if i see the word among us in any sentence i say sus
sweg#8920: intelligence is a myth
sweg#8920: Unironically though there might be something simpler in humans. Identity and world view are constructed later in life, so they are learned and can vary widely from person to person (especially when looking at different time periods), but base drives like survival, reproduction, and pleasure-seeking behavior are common among all animals.
sweg#8920: I think *learned* behaviour/stuff is in neocortex in a kind of world model
sweg#8920: and that is where a lot of *intelligent* stuff happens
sweg#8920: but all that does is create a representation of the world
sweg#8920: idk where motivations come from
Louis#0144: Tbh I think most of neuro cog sci is confirmation bias because we as humans believe this is how our brain works
Louis#0144: But it is in fact just an artifact of the distributed representation
sweg#8920: Agree
sweg#8920: yeah its emergent
sweg#8920: most things are emergent
sweg#8920: but i guess thats kind of stating the obvious
|
Louis#0144: There was a paper where they asked neuro cog sci scientists to try to understand the cognitive processes of a microchip
Louis#0144: And they massively failed
Louis#0144: It wasnโt even close
Louis#0144: I donโt think itโs obvious at all
Louis#0144: lol
sweg#8920: well its not really a full answer
sweg#8920: "everything is emergent!"
sweg#8920: i mean for the neocortex it's kind of understandable
sweg#8920: where different sensors wire up is determined by genes -> chemical gradients, so the neocortex areas for certain sensors are fixed
sweg#8920: they work together to create a latent space of representations
distractedm1nd#2062: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268
This is it btw, it's actually a really great read for anyone who likes to think about "decoding the brain"
distractedm1nd#2062: Theres been some pretty good rebuttals though, I'd have to look for them
natedog#8669: Yeah we are, if you are interested you can check out our github: https://github.com/ncoop57/gpt-code-clippy and we are planning on evaluating on that human eval benchmark as well as the apps dataset https://github.com/hendrycks/apps
kindiana#1016: do you have your dataset somewhere?
natedog#8669: Nowhere yet besides the TPUs we got access to. We were actually gonna try and see if we could get them put on "the-eye", similar to how y'all have the Pile
natedog#8669: because it is a similar format as well, since we use the same download script
chilli#5665: We already evaluate a fine tuned version of gpt-neo in APPS I believe
bmk#1476: @-Archivist is the guy to talk to for that
|
bmk#1476: also I'd be interested in the details of your dataset
bmk#1476: how big, what kind of filtering do you do, etc
bmk#1476: the biggest problems with our current GitHub set for pile is a) it's only like 600GB and there's a lot more code than that and b) a lot of it is garbage
natedog#8669: We use this tool: https://seart-ghs.si.usi.ch/, it only has about ~1 million repos max to filter on. We added some additional filtering:
- >= 10 stars
- >= 2 contributors
- > 1 commit
- no forks
- must have a license
- > 70708 bytes repo size
This gives us about 500,000 repos, and then we merge these with the original repos from the Pile (removing dups), which gives around 670,000 repos that we ended up downloading (around a 99.6% success rate). We did a bit of testing for duplicate code in a subset of our dataset and found it was quite bad, ~28% near duplicates, but we haven't finalized the deduplication process yet to see how bad it will be for the entire dataset. Yeah, I'm guessing there are tons more, I'm just not sure how to get it easily. Maybe a ton of personal tokens to do all of the API calls to GitHub?
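As a sketch, the filter above boils down to something like this (field names are hypothetical; the real seart-ghs export may differ):
```python
def keep_repo(repo):
    return (
        repo["stars"] >= 10
        and repo["contributors"] >= 2
        and repo["commits"] > 1
        and not repo["is_fork"]
        and repo["license"] is not None
        and repo["size_bytes"] > 70708
    )

kept = [r for r in candidate_repos if keep_repo(r)]  # ~500k of ~1M candidates
```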
natedog#8669: Will hit him up, thanks!
triggerhappygandi#0001: I think it must've been said already, but if you have large enough data to fine-tune on, fine-tuning won't be very effective
natedog#8669: From the recent OpenAI paper, they found that if you finetune, the model will converge faster even though it won't get any improvement on metrics
triggerhappygandi#0001: Yeah I was thinking about that itself.
triggerhappygandi#0001: But didn't remember the convergence part ๐
-Archivist#7336: Yes, I'll host whatever it is.
-Archivist#7336: @bmk Did anyone do anything with the libgen to text output I did for you?
> https://the-eye.eu/eleuther_staging/lg_pdf2txt.7z
|
> https://the-eye.eu/eleuther_staging/lg_epub2txt.7z
I joined here thinking @shawwn was some lead and haven't seen him active on anything since ๐คท๐ผโโ๏ธ
bmk#1476: one problem I noticed is there are a lot of source files that are just a wall of hex constants or something
bmk#1476: they're all really big too, so they take up a disproportionate amount of the data
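A cheap heuristic for flagging those (thresholds are guesses, not what Pile v1 used):
```python
import string

HEXISH = set(string.hexdigits + "xX, \n\t{}();=")

def looks_like_hex_wall(source, min_len=50_000, threshold=0.95):
    """Flag big files that are almost entirely hex constants and separators."""
    if len(source) < min_len:
        return False
    return sum(ch in HEXISH for ch in source) / len(source) > threshold
```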
bmk#1476: I don't think anyone has used it for anything yet, but if we ever make a pile v2 that's going in there
bmk#1476: also the pdf ones would need some cleaning
natedog#8669: Dude awesome, thanks!! I'll reach out once we've done a bit more finalizing of it
bmk#1476: and by some I mean a lot
-Archivist#7336: > pile v2 that's going in there
A v2 should happen now I'm here and daft enough to pull in new large sources for you
-Archivist#7336: sound
-Archivist#7336: true
bmk#1476: the main bottleneck on v2 so far has been nobody had time to work on it, but if you wanna take up the lead role on that I'd love to provide help where I can
bmk#1476: I can give you a list of things that would probably be worth including that nobody got around to the first time
-Archivist#7336: do that asap and I'll get on it. I'm right in thinking it's just **lots of** coherent plaintext you need, right?
bmk#1476: yeah basically
bmk#1476: and the more the better, the cleaner the better
-Archivist#7336: okie dokie
-Archivist#7336: I guess that now we're _done_ libgen, I should do scihub too
-Archivist#7336: there's still a lot to get out of libgen as I only did the two formats, but scihub being lots of technical text and being much larger would be good to get done too
|
-Archivist#7336: mailing lists?
bmk#1476: so here are the individual-set things i can think of off the top of my head:
- libgen
- scihub
- FF.net and AO3
- reddit comment data
- the hendrycks AMPS pretraining set (i can get this one)
- all the training sets in eval harness (don't worry, ill get this for you, just remind me sometime)
- bigger github training set (the github set in pile v1 is pretty smol)
- APPS training set (i can get this one)
- NaturalQs (i can get this one)
- multilingual wikipedia (only english is in v1)
bmk#1476: there's also one other big thing: if we can figure out how to extract clean multilingual text from all of commoncrawl, that's an ungodly amount of data and so probably far outweighs everything else on the list in importance
bmk#1476: unfortunately that's really nontrivial and i spent a lot of time bashing my head into the wall trying to make it work
bmk#1476: so yeah if you decide to tackle any of this i can tell you all about what ive tried so far to hopefully save you some time
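For the record, one shape such a pipeline could take — a sketch with warcio + trafilatura, not the approach actually tried here; language ID and quality filtering would still be needed on top:
```python
from warcio.archiveiterator import ArchiveIterator
import trafilatura

def warc_texts(warc_path):
    """Yield extracted main text from each HTTP response in a WARC file."""
    with open(warc_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "response":
                continue
            html = record.content_stream().read().decode("utf-8", errors="replace")
            text = trafilatura.extract(html)  # boilerplate removal, multilingual-ish
            if text:
                yield text
```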
kindiana#1016: just ask the model nicely :berk:
-Archivist#7336: reddit comment data is done, I have the original pushshift data on hand; adding long post body texts would be good too, got those already
bmk#1476: the CC one is the one i have most of my hopes pinned on but it would also be a massive massive undertaking
bmk#1476: i think multilingual&filtered CC + libgen + scihub + v1 + filtered github could get to 100TB which is a really nice round number
|
-Archivist#7336: I'll need some dev to bash out some code for that, but will happily run, compress and host it
bmk#1476: the problem is that extracting text from arbitrary multilingual websites is really really hard
-Archivist#7336: fair
bmk#1476: also running that at scale is hard too but i bet you have that down to a t
-Archivist#7336: for sure ๐๐ผ working with the CC data is fun, had my network hitting upwards of 500Gbit/s before, amazon love it!!
AI_WAIFU#2844: Frankly with TPU VMs we're not short of cpu's either, although it's kind of a waste
bmk#1476: i mean like the logistics, not the resources
bmk#1476: but archivist has both
kindiana#1016: https://github.com/src-d/datasets/tree/master/PublicGitArchive
kindiana#1016: this is kinda sadge
bmk#1476: ?
kindiana#1016: src-d went out of business lol
Sid#2121: there's also https://www.usenetarchives.com/ @-Archivist - I emailed the guy a while back and he seemed open to sharing the whole db
-Archivist#7336: HA! It's already on archive.... he just took those and rehosted it
bmk#1476: i think figuring out multilingual clean CC is by far the most productive use of time personally
Sid#2121: ah ok
Sid#2121: well, if it's already available
bmk#1476: the amount of data is simply absurd
bmk#1476: i downloaded the usenet archive and decided it kinda sucked tbh lol
Sid#2121: how so
|
Sid#2121: it's text
-Archivist#7336: Aye, I'll start shoving such things into a pile_v2 staging dir for us
Sid#2121: also how do people feel about... twitter
bmk#1476: you'd need to parse it to get rid of all the header gunk, and at that point you don't have much data left
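The header-stripping half is the easy part, since usenet articles share the RFC 822 header format with email — a minimal sketch (quoting and signature cleanup would still be needed):
```python
from email import message_from_string

def usenet_body(raw_article: str) -> str:
    """Drop usenet headers, keep the article body."""
    msg = message_from_string(raw_article)
    body = msg.get_payload()
    return body if isinstance(body, str) else ""
```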
kindiana#1016: 50 tokens is all you need? :berk:
bmk#1476: i do not feel
Sid#2121: there's threads tho
Sid#2121: i mean, better for images
bmk#1476: i wish not to feel about twitter
-Archivist#7336: news articles? I've got scrapes from 50+ news sites over the last 7 years
Sid#2121: hm, we already have *a lot*, but more can't hurt as long as we deduplicate well
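A sketch of the usual near-dup filter with datasketch's MinHash LSH (thresholds illustrative):
```python
from datasketch import MinHash, MinHashLSH

lsh = MinHashLSH(threshold=0.8, num_perm=128)

def minhash(text, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for token in set(text.split()):
        m.update(token.encode("utf-8"))
    return m

def is_new(doc_id, text):
    m = minhash(text)
    if lsh.query(m):  # something already indexed looks ~80% similar
        return False
    lsh.insert(doc_id, m)
    return True
```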
StellaAthena#3530: A bunch of the datasets in the Pile we found by going around USG websites and looking for bulk download buttons
StellaAthena#3530: Maybe replicate that with Germany, Russia, Spain, India, China, Japan, ...
StellaAthena#3530: Like, there's gotta be at least a dozen versions of the US PTO dataset
StellaAthena#3530: This is over a TB of patent applications in a variety of european languages: https://www.epo.org/searching-for-patents/data/bulk-data-sets.html
StellaAthena#3530: Israeli legal datasets: https://en-lawlib.tau.ac.il/israeli_databases
bmk#1476: i think getting CC is much higher leverage
StellaAthena#3530: ~700,000 UN publications: https://digitallibrary.un.org/collection/Documents%20and%20Publications?ln=en
StellaAthena#3530: And transcripts of every speech given at the UN: https://digitallibrary.un.org/search?ln=en&cc=Speeches
bmk#1476: literally microscopic next to the size of CC
|
bmk#1476: 700k publications is, what, a few dozen GB at most?
StellaAthena#3530: I'm just assuming that filtering multilingual CC is a fool's errand tbh
nostalgebraist#3542: my 30-day TRC trial is going to expire next week. if possible, i'd love to get it extended.
what's the recommended approach for that? i'll fill out the survey they sent me -- just wondering if i should do something beyond that
Louis#0144: Email
Louis#0144: Talk to them
nostalgebraist#3542: thanks!
Louis#0144: As long as it isnโt for profit stuff
Louis#0144: (duh)
Louis#0144: Did u try that @Lucas Nestler (ClashLuke)
Louis#0144: lol
nostalgebraist#3542: i recently set up a continually running scraper to extend my tumblr dataset, and i really want to be able to finetune again once i've got a nontrivial increase in the dataset size
Louis#0144: Continual learning stuff with engrams like what @aero does is pretty promising tbh
nostalgebraist#3542: the train loss curve is so beautifully linear in the first epoch... i want to finally have enough to see where it bends
Louis#0144: I was considering helping aero with setting up baleen too
Louis#0144: Just been busy
Teemochu#8740: I noticed when I was dling the pile that you limit to 32 connections from one user at once (not that I needed more, I saturated my gigabit link with about 10, just decided to do it very naively and ran into that error lol)
-Archivist#7336: why would one ever need to or mistakenly be running 32+ connections...
Teemochu#8740: every file in parallel lol
|
-Archivist#7336: every... ๐
-Archivist#7336: donut
Teemochu#8740: it's called pasting "start wget" commands
-Archivist#7336: That dir is only 10Gbit capable, only sees it when we release something new and hn/reddit gets hold of it mind you https://cdn.discordapp.com/attachments/729741769738158194/863901609905750045/AI10g.mp4
Teemochu#8740: ffnet https://archive.org/download/updateablefanfic
ao3 https://archive.org/download/AO3_story_dump_continuing
Teemochu#8740: just in case you hadn't run across them
pacman829#0801: where'd you get those gauges?
olives โ#2305: OH WAIT BELLARD HAS GPT-J 6B MODELS NOW ๐ซ
THAT'S SO AMAZING ๐คฉ TYTYTY
olives โ#2305: probably https://the-eye.eu/traffic/ maybe?
๐ ฌ gabriel_syme ๐ ฌ#3220: the 125M definitely fits in memory with adam, I only had to use adafactor for 1.3b and 2.7b
๐ ฌ gabriel_syme ๐ ฌ#3220: The EU has translations of most documents in almost all the member languages, could be interesting although I'd imagine that's already been used for translation datasets.
olives โ#2305: Google Translate uses it ๐
\o/#8499: Hi! I don't really know much about AI, but I'd like to join the community and contribute to the open source community
\o/#8499: is there any resource where i can start?
bmk#1476: https://discord.com/channels/729741769192767510/729741769738158194/801630685525835787 check this out
bmk#1476: it has all the resources you need
\o/#8499: thank you!
\o/#8499: do people here communicate just by chat?
|
bmk#1476: yeah
\o/#8499: nice
EricHallahan#1051: All useful communication is textual.
\o/#8499: i hope someday ill be of help
\o/#8499: : )
\o/#8499: gotta learn at the moment
aero#1357: ๐
EricHallahan#1051: What are you ๐ing at.
Kia#2550: Probably another ground breaking thing They found in there Bot๐
One#5919: why?
EricHallahan#1051: Textual communication is archivable and traceable.
kurumuz#5695: attention ๐
EricHallahan#1051: is
One#5919: thought processes interacting
One#5919: can be done across many different media
One#5919: but i concede the point
aero#1357: all
EricHallahan#1051: you
kurumuz#5695: need
One#5919: and a supercomputer
|
mkualquiera#3484: https://cdn.discordapp.com/attachments/729741769738158194/863950934714089482/5g6e9c.jpg
pacman829#0801: who is bellard?
One#5919: whoa
EricHallahan#1051: https://en.wikipedia.org/wiki/Fabrice_Bellard
One#5919: https://bellard.org/textsynth/
olives โ#2305: ^
pacman829#0801: https://tenor.com/view/oh-snap-dave-chappelle-saturday-night-live-oh-no-omg-gif-19226541
One#5919: "The best way to make money from a great idea is to present it to the public.
There are many ways in which you can present your product to the public.
There is no way to tell how they will react to your product.
The best way to invent something original is to follow the guidelines of the world so you don't get confused.
All innovators need to be aware of their surroundings and how they can be improved.
If you are trying to invent something for the first time then you need to understand the world around you.
The best way to"
I've been running this prompt on Bellard's site and it gives text that is as good as AI Dungeon's Dragon model. GPT-J is HEAVY yo
pacman829#0801: that probably explains why it runs inferences so fast
pacman829#0801: thank you
|
bts-dui#4424: I downloaded the Pile dataset and found that it is only 458GB in total. Is this normal? I expected it to be about 800GB.
StellaAthena#3530: It's compressed
kurumuz#5695: god bless markov chains
olives โ#2305: > The best way to achieve this is to study the world and understand how things work.
> **A man who is looking for gold must always be prepared to follow his mother. **
> You need to think about the world before you design something or invent anything.
olives โ#2305: https://cdn.discordapp.com/emojis/857359266863775796.png?v=1
One#5919: You changed the template
One#5919: "this" instead of novel thing to get advice for
olives โ#2305: https://cdn.discordapp.com/attachments/729741769738158194/863968706677440522/unknown.png
olives โ#2305: i just copied it lol
One#5919: ohh, what i'm saying is you need to provide what you want advice for
One#5919: my prompt was incomplete, should've had "X" at the end
One#5919: "The best way to X ..."
olives โ#2305: ohh
olives โ#2305: > The best way to invent something original is to invent something that somebody already needs.
> When someone asks you about your idea, say how the problem that you're addressing is a problem that's unsolved.
> Be honest about how hard it is.
olives โ#2305: that makes much more sense
EricHallahan#1051: This is not the place to bicker about this.
|
One#5919: we're discussing proper prompting technique and its effects but i concede the point
One#5919: #prompting
๐ ฌ gabriel_syme ๐ ฌ#3220: woah look at you finishing each other's sentences. true nerd love right there
One#5919: domestic bliss
someKindaBean#8471: There's a few papers on optical processors that I looked at recently. One on using them for FFTs that talks about time complexity explicitly - https://core.ac.uk/reader/156827770, here's one about using optical processing for convolution that doesn't explicitly discuss complexity - https://openaccess.thecvf.com/content_CVPR_2020/papers/Pad_Efficient_Neural_Vision_Systems_Based_on_Convolutional_Image_Acquisition_CVPR_2020_paper.pdf, here's a company trying to use optical processors for their special sauce - https://neurophos.com/
One#5919: @someKindaBean https://news.ycombinator.com/item?id=27738029 too
One#5919: so non-discrete
applepi#4437: ah interesting, ok, it's using optics as a coprocessor to drop the time-work complexity
someKindaBean#8471: yep, it's an idea that was first used back in the 70s (or earlier, but I have a paper on optical FFT computation from the early 70s) but got dropped when computing power improved. And now it's making a resurgence, as the link from The One discusses
someKindaBean#8471: thanks for that link too - cool stuff
applepi#4437: makes sense ;; physical world is confusingly good at doing things haha
applepi#4437: did you see the bacteria for TSP https://www.theguardian.com/science/blog/2009/jul/24/bacteria-computer
someKindaBean#8471: I haven't seen that one, but I've seen other stuff with using fungi for routing problems
pacman829#0801: What's ai dungeon dragon ?
One#5919: an RPG text-adventure site that lets you use GPT-3's Da Vinci model (175B parameters) for $10/mo, but it's not "the real thing"
chirp#4545: Curious if thereโs an ML-powered app that helps you improve the fluency of your spoken language
chirp#4545: The app could show you a question, and you would respond out loud, and then the app could rate your response quality & fluency and suggest ways to improve
chirp#4545: To me at first glance this app seems (1) really useful, at least to me, and (2) well within the capabilities of current ML tech
Parker#3197: doesn't duolingo do that somewhat already?
chirp#4545: With spoken language?
|