Aran Komatsuzaki#5714: P stands for pernicious
gwern#1782: and 'E' of course is for 'Evil'
bmk#1476: i thought the B stood for BPE
Aran Komatsuzaki#5714: Maybe I'll tweet this
gwern#1782: there are many kinds of evil. around 51k, or so
bmk#1476: BPE Pernicious Evil
gwern#1782: they're an improvement over the Vermicious Knid Encoding, I'll admit
Logan Riggs#7302: This is giving me Series of Unfortunate Events flashbacks.
bmk#1476: it's almost as recursive an acronym as **H**umongous lang**U**age **MO**delling inter**N**et **G**eneral purp**O**se **U**se data**S**et
gwern#1782: (that one is so inelegant)
bmk#1476: that's part of the joke
bmk#1476: ~~postmodern acronyms~~
gwern#1782: if it was, it wouldn't be so inefficient as to use short words like 'Use' or waste fully 2 initials on 'MOdelling'
bmk#1476: the joke isn't that it's a *long* acronym
bmk#1476: the joke is that it's a massive stretch
gwern#1782: _has never liked lampshading like that, when a better joke could've been done_
bmk#1476: if you have a better acronym please propose
bmk#1476: i'd love to hear
StellaAthena#3530: What if we made it something like GIIIIIIIANT
StellaAthena#3530: Now *that* would be a stretch
bmk#1476: ~~the *real* purpose of the acronym is to catch the ire of gwern enough to get a better acronym~~
gwern#1782: I'd only try if I could think of a good way to make gpt-3 do it
Louis#0144: why does phil leave an join every 5min
3dprint_the_world#6486: he doesn't want anyone sending him DMs or highlighting him
3dprint_the_world#6486: is my best explanation
gwern#1782: https://twitter.com/AndrewMayne/status/1319329854815285249 whaaaa
cfoster0#4356: Is that surprising?
gwern#1782: yes. no one suggested anything remotely like that
gwern#1782: a compression classification? I've seen it done, but usually as a stunt. nobody thought you'd use *GPT-3* like that
StellaAthena#3530: Hey guys, if you’re looking for a way to contribute, our current major blocker is downloading and processing the evaluation datasets from the GPT-3 paper.
Once we have those all, we can deduplicate them against our training data and then begin training. Check out #lm-thunderdome or https://github.com/EleutherAI/lm_evaluation_harness to learn more.
3dprint_the_world#6486: @gwern am I missing something? this seems like a very obvious thing to do to me.
bmk#1476: I second this
bmk#1476: This seems kind of obvious and I've been using it in my projects (unfortunately none of it published) for ages now
thenightocean#6100: maybe its obvious for people here, but probably less obvious for normies. OTOH i doubt that google will just ignore this development. If this might really threaten their dominance I feel they might soon throw their infinite money into beating OAI in transformers wars.
Aran Komatsuzaki#5714: This isn't just OAI vs Google Brain. It's actually a proxy war between Microsoft and Alphabet. 🧐
thenightocean#6100: good point!
thenightocean#6100: it would be ironic that just at the point when US gov is suing google for being a monopoly it wont really matter as the basis of their monopoly is now on deathwatch.
thenightocean#6100: and to add to the irony, the company who will kill them is the original monopolist that got creamed by google in 2000s 🙂
Daj#7482: > This isn't just OAI vs Google Brain. It's actually a proxy war between Microsoft and Alphabet. 🧐
@Aran Komatsuzaki Funnily enough, some people have been warning me in earnest that Google is using me/us
Daj#7482: We've done it, we're cyberpunk protagonists
Aran Komatsuzaki#5714: HuggingFace is using me too
Aran Komatsuzaki#5714: Google is using HuggingFace so
Daj#7482: It's all part of the conspiracy
Aran Komatsuzaki#5714: We are not a lone hacker. We're just quasi-Googler who wants to feel edgy.
Daj#7482: Wait you're getting paid?
Daj#7482: Hah
Daj#7482: but fwiw I think this is not some grand conspiracy and more just organic bullshit
Daj#7482: We know that the MTF team has ridiculously huge transformers in house
Aran Komatsuzaki#5714: yeah
Daj#7482: They're just waiting for OpenAI to test the market fit
thenightocean#6100: @Daj conspiracy goes deeper my friend. We are probably controlled by some entity in parallel universe who is simulating us in attempt to influence the Solomonoff prior
Daj#7482: I think someone has drunk the acausal kool aid
thenightocean#6100: (take that, Alex Jones 😆)
Daj#7482: Malignant Universal Prior is just mid-tier infohazard tbh
Daj#7482: Though most infohazards worth their salt are honestly just "if hypercomputers exist, all sanity is lost"
thenightocean#6100: I am actually too dumb to understand it completely, I just like the creepy feel it gives me heh
Daj#7482: Yea I'm addicted to finding lovecraftian lore as well
Daj#7482: But I've reached the stage that that one at least isn't scary anymore
thenightocean#6100: my whole life I wanted to become smarter, but once I discovered LW infohazards I started to appreciate my intellectual limitations
Daj#7482: Don't be a pussy
Daj#7482: Infohazards are only dangerous if you then don't become smarter
Daj#7482: :D
Daj#7482: It's like nihilism
Daj#7482: Nihilism is baby's first infohazard
Daj#7482: Some people read two paragraphs of Nietzsche and flee back to New Age feel good or whatever, others get stuck in depression, and some :yes: come full circle, realize you can just make up your own meaning, and end where they started, but stronger
thenightocean#6100: nah, let other people try their luck in solving the Lemarchand Configuration: https://en.m.wikipedia.org/wiki/The_Hellbound_Heart
Daj#7482: Don't threaten me with a good time
Daj#7482: ~~The Cenobites are just unaligned AI tbh~~
thenightocean#6100: seriously for some unexplainable reason I just started getting into Hellraiser fiction this week and reading that LW article was like... "ah shit, no I dont F...ing want to invite cenobites, thank u!!"
Daj#7482: ~~Then stop working on AI, or work on alignment :3~~
3dprint_the_world#6486: Nietzsche was just depressed cause he couldn't get laid
gwern#1782: @3dprint_the_world it doesn't seem obvious to me. 'concatenate every possible pair and measure compression ratio' is a stunt, and inefficient. think about how inferior that is to, say, computing an embedding: the embedding is small, reusable for a ton of other things, it can be cached for both the query and all possible matches, and scales easily to millions of queries or matches. this approach... does not.
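(A minimal sketch of the generic "concatenate the pair and measure the compression ratio" trick gwern is describing, using zlib as a stand-in compressor; the function names and toy strings are purely illustrative, not anyone's actual system. It also makes the scaling complaint concrete: every query/candidate pair needs its own compression pass, whereas an embedding can be computed once and cached.)
```python
import zlib

def compressed_size(s: str) -> int:
    """Compressed size of a string, in bytes."""
    return len(zlib.compress(s.encode("utf-8")))

def compression_distance(a: str, b: str) -> float:
    """Normalized compression distance: smaller means the texts share more structure."""
    ca, cb = compressed_size(a), compressed_size(b)
    cab = compressed_size(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def best_match(query: str, candidates: list) -> str:
    """Pick the candidate whose concatenation with the query compresses best.

    Note: every pair is recompressed from scratch -- nothing is reusable,
    unlike cached embeddings.
    """
    return min(candidates, key=lambda c: compression_distance(query, c))

# With realistic-length documents the shared phrasing dominates the compressor
# overhead; on tiny strings like these the result is noisier.
print(best_match("the cat sat on the mat and watched the rain",
                 ["a cat sat upon a mat and watched the rain fall",
                  "stock prices fell sharply in early trading today"]))
```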
bmk#1476: > Nietzsche was just depressed cause he couldn't get laid
@3dprint_the_world to steal a joke from connor: looks like I have something over nietzsche then: at least I'm not dead
Daj#7482: The original is "The great genius Isaac Newton died a virgin. So I have at least one thing over Newton. I'm not dead."
thenightocean#6100: https://cdn.discordapp.com/attachments/729741769738158194/769228475172388894/unknown.png
linuxnerd#0753: This server is about to explode in activity due to Von
linuxnerd#0753: (I came earlier)
grugposter#2530: can confirm, just joined
Sid#2121: 👋 👋
Sid#2121: Unfortunately we are about 10x less organised than the eye server lol
Sid#2121: with less nice spinning graphics and news channels
linuxnerd#0753: That's fine, it isn't always needed lol
Sid#2121: but we have cool convos about AI and a ton of big brain researchers in here
bmk#1476: we're *very* loosely organized
bmk#1476: and yeah we have very cool convos
Sid#2121: too bigbrain for organisation
bmk#1476: we should start doing greeting again
Louis#0144: https://twitter.com/lcastricato/status/1319669959430184960?s=21
Louis#0144: Top fucking notch
giordanista#6557: 👋 hi
dracogrid#4240: Hey
bmk#1476: hey!
bmk#1476: what brings you to these parts
bmk#1476: other than the eye post
giordanista#6557: just the eye post 😩
bmk#1476: i mean nothing wrong with that
bmk#1476: enjoy your stay here
giordanista#6557: 😉 😄
bmk#1476: we have some very spicy memes in #memes
bmk#1476: you can catch up on research stuff in #research
bmk#1476: the most active project channels are #gpt-neox-devs and #the-pile and #the-rad-lab and #interpretability
bmk#1476: if youre here for data specifically come to #the-pile
giordanista#6557: for books?
bmk#1476: we have all kinds of data
giordanista#6557: thank you
Louis#0144: @argeslw THERE CAN ONLY BE ONE
Louis#0144: 😡
bmk#1476: haha
Daj#7482: Why am I always stuck in a train or a party when we have a wave of new people
Sid#2121: Now fight to the death
Daj#7482: I can't focus on my silly greetings
bmk#1476: haha
bmk#1476: *you get invited to parties!?*
Sid#2121: Maybe it's time we actually get gpt to do it lol
Daj#7482: No bully :(
Louis#0144: Brb getting the guillotine
Louis#0144: we’re going Louis v Louis style
cfoster0#4356: This is better than pay per view
cfc#2691: Hello everyone
cfc#2691: Love your work
bmk#1476: thanks! 😄
Louis#0144: No we love YOUR work
bmk#1476: or in other words
bmk#1476: *no u*
Haxxardous#9240: super naive question - why can’t a dataset be constructed with the same procedure in “language models are few shot learners” section 2.2?
Haxxardous#9240: i looked around and it _looks_ like everything is public
Haxxardous#9240: also, tensorflow? because it’s being trained on TPUs, or¿
Haxxardous#9240: really love what this community is doing, btw. just heard about it today. ^^not serious questions, more curious than anything
cfoster0#4356: Good question. CC and Wikipedia are public (but not Books* or WebText2) but there's no public, reproducible pipeline.
bmk#1476: books2 is the subject of much mystery
Haxxardous#9240: reproducible pipeline = filtering?
Sid#2121: > also, tensorflow? because it’s being trained on TPUs, or¿
@Haxxardous yes to this - we use mesh-tensorflow for model and data parallelism too
Sid#2121: reproducible pipeline = every step from downloading to filtering
Sid#2121: the procedure for how they gathered webtext is public, but no actual code
Sid#2121: (aside from open sourced reproductions)
Sid#2121: we also wanted to extend the dataset to be applicable to larger models in the future
Haxxardous#9240: oooo that last bit, that’s extremely interesting
Haxxardous#9240: that makes a lot of sense though
bmk#1476: **1T or bust**
Haxxardous#9240: have y’all done any benchmarks on v100s vs TPUs yet?
bmk#1476: cant afford v100s
Haxxardous#9240: for this particular task, lots of parallelism
Haxxardous#9240: i see
bmk#1476: we'd kill for enough v100s to do this
bmk#1476: tpus are such a pita
Sid#2121: actually we should have some available @bmk , I just need to send an email 😆
Sid#2121: not *enough* but some
bmk#1476: not *enough* tho
bmk#1476: yeah
Sid#2121: mesh-tensorflow seems questionable with GPUs, too
cfoster0#4356: @Haxxardous Theoretically, you could build a similar dataset to the paper from the Pile by combining CC with the Wikipedia, BookCorpus, Bibliotik, and OpenWebText2 components
Sid#2121: the only real advantage to GPU training would be sparsity
bmk#1476: and much less pain and suffering
Sid#2121: some people are into that 😉
bmk#1476: utilitarians atm: ;-;
zphang#7252: can you have a model so sparse that CPUs are more efficient for training :thonk:
gwern#1782: like NEAT?
StellaAthena#3530: > oooo that last bit, that’s extremely interesting
@Haxxardous hey, the sky's the limit, right? We already have gotten a much larger curated dataset than exists open source and easy-to-use anywhere else. 1.42 TiB really is a fuckton of data, and we haven’t even tried to get non-English data in there yet.
gwern#1782: I'm a little surprised how small publicly distributed datasets tend to be. I wonder if ML devs/researchers just aren't aware that you can easily serve multi-terabyte datasets for <$30/month now? or are they just mentally crippled by the poverty of being a grad student even long afterwards?
gwern#1782: I mean, even if you can't usefully use an entire 1.5T of text, you can still benefit - it lets you test out generalization and unsupervised / semi-supervised approaches very easily by creating a lot of arbitrary splits/subsamples, and if you have a specific topic or criteria, even a gargantuan dataset may wind up having a fairly small relevant subset
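(One minimal way to get the "arbitrary splits/subsamples" gwern mentions out of a corpus too big to shuffle in memory: hash each document ID and assign splits deterministically. The fractions and names below are made up for illustration; this is not the Pile's actual tooling.)
```python
import hashlib

def split_of(doc_id: str,
             fractions=(0.98, 0.01, 0.01),
             names=("train", "val", "test")) -> str:
    """Deterministically assign a document to a split from a hash of its ID.

    The same document always lands in the same split, so any subsample can be
    re-derived from the raw corpus later without storing separate copies.
    """
    h = int(hashlib.sha256(doc_id.encode("utf-8")).hexdigest(), 16)
    x = (h % 10**8) / 10**8  # pseudo-uniform value in [0, 1)
    cumulative = 0.0
    for name, frac in zip(names, fractions):
        cumulative += frac
        if x < cumulative:
            return name
    return names[-1]

print(split_of("wikipedia/en/Albert_Einstein"))
```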
bmk#1476: Quick, invent a new field called "Big Data Machine Learning" to capitalize on the trend
gwern#1782: 'Actually Big Data Machine Learning'
bmk#1476: It'll fit nicely along "High Energy ML"
bmk#1476: I'm still disappointed the term hasn't caught on
gwern#1782: or 'ABD ML', as in, 'oh, I don't have my phd because I spent all my time scraping the internet for EleutherAI'
gwern#1782: I remember when I was planning out Danbooru2017. 'wait, I can just... let anyone download a few terabytes and it'll barely cost $25/month? well heck, I can afford *that*. why does everyone jump through these absurd hoops in distributing datasets of a few gigabytes, then...?'
bmk#1476: Honestly, our partnership with The Eye is perfect wrt that
gwern#1782: what is The Eye? I assume not the ferris wheel
bmk#1476: r/Datahoarder people mostly
bmk#1476: They're hosting the pile for us
gwern#1782: if it's only a terabyte or two why not host it yourself? instead of relying on sketchy datahoarders
gwern#1782: don't you guys have some dedicateds already?
bmk#1476: They have better infra than us
bmk#1476: Yes we do but the bandwidth is abysmal and even worse for people in NA because the servers are in europe
gwern#1782: oh, they have US servers? yeah, good dedicated US servers do seem harder to come by. not sure why
bmk#1476: Anyways our code tries each data source in order
bmk#1476: So even if the eye goes down it'll just fallback to our servers
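(The fallback bmk describes is just "try the mirrors in order"; a minimal sketch with placeholder URLs rather than the real hosts — this is not the actual Pile download code.)
```python
import urllib.request

# Placeholder URLs, in priority order -- not the real mirrors.
MIRRORS = [
    "https://primary-mirror.example/pile/chunk-000.jsonl.zst",
    "https://backup-mirror.example/pile/chunk-000.jsonl.zst",
]

def download_with_fallback(mirrors, dest):
    """Try each mirror in order; return the one that worked."""
    last_error = None
    for url in mirrors:
        try:
            urllib.request.urlretrieve(url, dest)
            return url
        except OSError as err:  # URLError/HTTPError are subclasses of OSError
            last_error = err
    raise RuntimeError(f"all mirrors failed; last error: {last_error}")

# download_with_fallback(MIRRORS, "chunk-000.jsonl.zst")
```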
es#4913: The eye isn’t really sketchy tbh
bmk#1476: ^
3dprint_the_world#6486: @gwern oohhh I didn't realize you were behind Danbooru. Nice work!
3dprint_the_world#6486: I've used it in a bunch of my stuff
gwern#1782: (I'm behind the dataset, not the website itself, so we're clear)
bmk#1476: It would be the ultimate crossover
plomdator#5072: Hello. I just got a master degree in data science, i don't know if I'm able to help with your project I feel like a beginner
bmk#1476: Don't worry about it, all help is appreciated
bmk#1476: We're not super into the whole credentialism thing either
StellaAthena#3530: > I'm a little surprised how small publicly distributed datasets tend to be. I wonder if ML devs/researchers just aren't aware that you can easily serve multi-terabyte datasets for <$30/month now? or are they just mentally crippled by the poverty of being a grad student even long afterwards?
>
> I mean, even if you can't usefully use an entire 1.5T of text, you can still benefit - it lets you test out generalization and unsupervised / semi-supervised approaches very easily by creating a lot of arbitrary splits/subsamples, and if you have a specific topic or criteria, even a gargantuan dataset may wind up having a fairly small relevant subset
@gwern I strongly agree with all of this. I think that the *dataset itself* is a significant contribution to the literature. It's an order of magnitude larger than anything I've seen, and we are dreaming of hitting 10 TiB one day.
gwern#1782: yes, I agree with that. I wasn't too thrilled with the whole idea of 'let's replicate gpt-3 mostly out of spite', but the datasets are a different question
gwern#1782: creating datasets is still undervalued in ML, imo
gwern#1782: and you guys seem to've gone well beyond what I expected early on
StellaAthena#3530: I think that's a rather uncharitable description of our motivations.
cfoster0#4356: Does anyone here care what OAI thinks? I'm here bc this shits interesting
gwern#1782: it definitely describes some individuals' motivations... in any case, the dataset may long outlive any gpt-3 clone you happen to train. although I admit I was also expecting to see more progress by now from both OA and other groups in scaling models, and that needing to train an open gpt-3 might have been mooted by this point, half a year later
bmk#1476: > in any case, the dataset may long outlive any gpt-3 clone you happen to train
@gwern this is precisely why I decided to work primarily on Pile instead of opengpt3
bmk#1476: That and the fact that I was so burned out from spending entire days fixing stuff that broke during the move to mtf that I don't want to touch mtf with a ten foot pole ever again
zphang#7252: idk how the math works out, but apparently my group started getting charged for GLUE hosting costs, and the bill was like $1k/mo
gwern#1782: sounds like they did something stupid like host on cloud 🙂 *very* easy to spend $1k/month on bandwidth on GCP etc...
gwern#1782: (did you know a single imagenet training run on a tpu pod, if you don't get the region exactly right, costs $550 in bandwidth alone?)
zphang#7252: I think it actually was GCP...
zphang#7252: what do you have in mind for hosting?
gwern#1782: ovh, hetzner, those sorts of companies
zphang#7252: Is there a reason they're that much cheaper? Or is it just cloud being cloud
gwern#1782: I serve like 10-15tb/month off hetzner for $30 for comparison
gwern#1782: and yeah, it's mostly just cloud being cloud. the bandwidth quality isn't as great, cloud likes to boast, but I haven't much noticed anything worth the 10-100x premium...
zphang#7252: it looks like GLUE is now being hosted by Facebook, which probably doesn't cost us anything lol
gwern#1782: (gwern.net is like 1tb/month, ThisWaifuDoesNotExist is maybe similar now, and then a few copies of Danbooru2019 is like 3tb each)
zphang#7252: I gotta check these hosts out
gwern#1782: https://twitter.com/gwern/status/1319728074330615813
alexyz#3459: @gwern What ya doing here? Love your posts. 👏
alexyz#3459: And also, how can I help?
alexyz#3459: I have GPT-3 access, but it took tooth and nail to get it, I was on the waitlist since June.
alexyz#3459: I highly believe it should be more open.
gwern#1782: I just hang around and chitchat and watch how progress goes... I have too much gwern.net and tensorfork stuff to actually do any coding or work for eleutherai - making anime real is much more important than cloning GPT-3!
gwern#1782: _spent pretty much all day grinding his way through copy-pasting all of his anime and movie reviews together to make https://www.gwern.net/Anime-reviews and https://www.gwern.net/Movie-reviews and updating links and fixing formatting and... oy vey_
bmk#1476: that is.. several
gwern#1782: i haz opinions
bmk#1476: i despair as i ctrl+f the 2 or 3 different anime that i've actually watched and nothing comes up
gwern#1782: yes, well, I didn't see much point in dumping in just my numerical ratings from https://myanimelist.net/animelist/gwern?order=6&order2=-4&status=7
bmk#1476: that's.. *several* severals
bmk#1476: huh
bmk#1476: maybe i just watched *really unpopular* anime
bmk#1476: still not on the long list either
bmk#1476: i mean i read death note and also your death note post so i guess i'm in the clear
StellaAthena#3530: @alexyz Welcome!
Right now our biggest blocker is on the data side. We need to obtain and process all of the datasets used in the GPT-3 paper to evaluate the model so that we can ensure that there’s no overlap between the training and testing datasets before we begin training the model.
Head over to #lm-thunderdome and check out https://github.com/EleutherAI/lm_evaluation_harness to learn more.
FractalCycle#0001: > _spent pretty much all day grinding his way through copy-pasting all of his anime and movie reviews together
i feel bad for not being as productive as gwern, but i feel good because i sometimes prioritize better. This just evens out to me doing nothing, but like, *really effective* nothing.
bmk#1476: im the most efficient procrastinator i know
bmk#1476: i can get 3 days behind on work every single day
FractalCycle#0001: rookie numbers
researcher2#9294: > i feel bad for not being as productive as gwern, but i feel good because i sometimes prioritize better. This just evens out to me doing nothing, but like, *really effective* nothing.
@FractalCycle You joke but I massively suffer from "is this productive". Recently I'm just trying to do everything and hopefully something sticks.
researcher2#9294: Everything may include talking about politics on the internet for hours...
bmk#1476: :guilty: oops
FractalCycle#0001: ya, luckily i've started getting mental health treatment, which should eventually lead to me having more energy/motivation to do stuff
bmk#1476: awesome
bmk#1476: ~~stuff like join in the politics~~
trevyn#4202: I think it’s hilarious that I pop on a “replicating GPT-3” Discord and immediately see a discussion on motivation — like, once we realize that all of our magical humanness is computation, we just go “eh, why am I doing this again?”
trevyn#4202: Triggers some sort of cognitive dissonance, maybe
StellaAthena#3530: > ya, luckily i've started getting mental health treatment, which should eventually lead to me having more energy/motivation to do stuff
@FractalCycle that’s awesome. It’s hard, but it’s really important.
gwern#1782: @FractalCycle this is all part of a big push to rewrite the gwern.net infrastructure and fix longstanding problems I've neglected ( https://www.reddit.com/r/gwern/comments/jefj9x/recent_gwernnet_bugfixes_paying_off_technical_debt/ ). in this case, the problem is that a lot of my content is trapped on GoodReads and MyAnimeList, which is bad for me in the long run, but in the sort of sunk-cost one-day-at-a-time-wait-why-am-I-suddenly-out-of-socks way where it never *seems* like a good idea to fix it. it's incredibly tedious but I believe it's worth it in the long run in terms of readership/Patreon/interlinked-writing etc since I do do some of my writing & thinking in the form of reviews, and having writing trapped elsewhere makes them much harder to maintain, invisible, and sends traffic/revenue elsewhere.
FractalCycle#0001: @gwern ah, i see what you mean. And i'm not saying that's not important, i just thought it was kinda interesting / mildly funny. But it is important to keep control and organization of your writing, so you're right about the long-term value.
(Unrelated: I'm pretty sure i go to the same college you went to. The thing in your online-assignments article was pretty clearly the thing we use.)
@StellaAthena thanks! Yeah, i think it's underrated because of like stigmas and stuff. For the longest time, in my head, it's like "I know what it feels like to get stuff done, I just gotta stop being lazy/selfish and work that hard all the time", and then i don't do that, and nothing gets done. Or like "I'm a bad person" when it's just i forgot to pay attention when other people are speaking.
xen0#3601: am just SO MAD with trying to get russian gpt-2 large to work.
first i can't use dataset cuz it said "num samples 0 what the hell man"
okay, block size argument was missing. that's fine.
then out of memory. had to use nvidia apex O2 mode. okay, everything's fine with memory
now i get this and i have no idea what to do with it https://cdn.discordapp.com/attachments/729741769738158194/769916301346209822/97105796-cccd1b00-16ce-11eb-8d1d-cbd8bfc25b45.png
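(For anyone hitting the same OOM: the Apex "O2" setup xen0 mentions boils down to a few lines. This is a generic sketch with a stand-in linear layer instead of the actual ru-gpts training script, assuming Apex and a CUDA GPU are available.)
```python
import torch
from apex import amp  # NVIDIA Apex must be installed

model = torch.nn.Linear(768, 768).cuda()   # stand-in for the GPT-2 model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

# O2 keeps fp16 weights/activations on the GPU with fp32 master weights in the
# optimizer, which is what frees enough memory to get past the OOM.
model, optimizer = amp.initialize(model, optimizer, opt_level="O2")

x = torch.randn(8, 768, device="cuda")
loss = model(x).pow(2).mean()              # dummy loss, just to show the pattern
optimizer.zero_grad()
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()                 # loss scaling avoids fp16 gradient underflow
optimizer.step()
```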
xen0#3601: may we just PLEASE use less hacks when doing ML?
xen0#3601: this whole thing drives me mad, considering that's only gpt-2 large model finetuned on my language
Sid#2121: @xen0 if you're interested and can get a dataset I can run you through how to finetune our model on a new dataset
Sid#2121: the weights aren't *technically* released yet but I could send them over
xen0#3601: @Sid nah, that's sberbank project. and not GPT-3 but GPT-2 large, which is still great in quality
xen0#3601: it has little relation to eleuther project, i'm just trying to set this thing up for finetuning but not able to
Sid#2121: looks to me like it's a problem reading the dataset, but idk the codebase so not sure how much i can help
Sid#2121: i'm just saying if you can get a dataset, our model is fairly straightforward to finetune
Sid#2121: although i'm not sure how long it would take to learn a new language
xen0#3601: yeah, block size system is behaving REALLY weirdly here
i do have a dataset, but it's foreign language, so i don't think it'll go well
xen0#3601: yeah exactly
xen0#3601: https://github.com/sberbank-ai/ru-gpts this is the one where they claim to have trained a large-size (774m) model on the gpt-3 architecture
xen0#3601: but they also have a gpt-2 large model there trained on the same dataset, and that's the one i'm trying to run, but in vain :p
xen0#3601: no need to help tho, i'll probably figure it out
*probably...*
wificat#1043: hi I'm new
wificat#1043: saw the books3 dataset
wificat#1043: wondering before i download if it is indeed in separate txt files? or all one blob
Sid#2121: the download is a tarfile, inside the tarfile it's all separate txt files
Sid#2121: Welcome, btw 🙂
wificat#1043: thank you!
wificat#1043: i want to try the compressive transformer (book length texts) but it wouldn't work if it were just a huge text blob
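(A minimal sketch of reading the per-book .txt files straight out of the tarball with Python's tarfile module; the filename below is a placeholder for whatever the actual download is called.)
```python
import tarfile

with tarfile.open("books3.tar.gz", "r:*") as tar:   # "r:*" auto-detects compression
    for member in tar:
        if member.isfile() and member.name.endswith(".txt"):
            text = tar.extractfile(member).read().decode("utf-8", errors="replace")
            # each `text` is one book, ready to chunk for a compressive transformer
            print(member.name, len(text))
```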
Airatak#7842: Hi Guys!
Airatak#7842: I love the initiative!
bmk#1476: hey!
Airatak#7842: I know a lot of people would be asking you this but how is the progress on GPT-neo and do you need any help?
Airatak#7842: I mean I would love to volunteer
bmk#1476: awesome
bmk#1476: how experienced are you with ML
Airatak#7842: I've been doing ML for about 2 years
bmk#1476: awesome
Airatak#7842: I'm not super experienced with transformers tho
bmk#1476: our main thing rn is we're rushing to meet the deadline for NAACL
bmk#1476: which is.. in about a month
Airatak#7842: Oh ok
Airatak#7842: so you trying to train a model by then or something?
bmk#1476: it's a dataset paper
bmk#1476: lemme link it
Airatak#7842: oh ok
bmk#1476: https://www.overleaf.com/read/wgmnfqvzckjz
bmk#1476: we still have a lot of work we need to do
bmk#1476: #lm-thunderdome is probably what needs the most help rn
bmk#1476: tl;dr we're writing code to evaluate transformers
Airatak#7842: Cool
Airatak#7842: I'll check it out
Airatak#7842: Oh ok
Airatak#7842: So Something like Superglue?
Airatak#7842: Also is GPT-neo useable yet? Can I pitch in some help or compute or something?
Sid#2121: Yep, it's usable
bmk#1476: i'm a bit busy rn but
bmk#1476: https://github.com/EleutherAI/lm_evaluation_harness
Sid#2121: we'll be releasing a GPT2 size model soon and larger ones as they're trained
bmk#1476: here's the eval code so far
bmk#1476: take a look around the code; a lot of it is unfinished
bmk#1476: a lot of things only have dataset implemented but not evaluation
Airatak#7842: Btw the paper is super super super cool!
Airatak#7842: I would love to checkout the dataset once it is published
Airatak#7842: I really would love to join the org
bmk#1476: you dont have to do anything special to join
bmk#1476: just start.. working on stuff, i guess
Sid#2121: well, write a few lines of code haha
bmk#1476: ^
bmk#1476: for now maybe look over the code we already have
Airatak#7842: lol i'd love to
Airatak#7842: I'll go over your github
bmk#1476: https://github.com/EleutherAI/The-Pile/ this is the main dataset repo if you have time after looking at lm_eval_harness
Airatak#7842: btw where are you releasing the gpt models?
bmk#1476: we need to get lm_eval_harness going for a couple important metrics to unblock ablations for the paper
bmk#1476: we're going to be evalling on mostly LAMBADA (?) and some other things i dont remember i believe
bmk#1476: based on the gpt2 paper
bmk#1476: uh, we haven't planned it all out yet tbh
bmk#1476: we have some ideas but everything is still tbd
Sid#2121: @Airatak can invite you to the current github if you're interested, we could do with some user testing
Airatak#7842: @Sid Yes please
Airatak#7842: my github is Clickative
Sid#2121: @Airatak added
Sid#2121: the readme is 99% up to date but some things are old
Airatak#7842: Thanks
Airatak#7842: Btw you guys want to add more stuff to your dataset?
Airatak#7842: I can help scrape fanfic or news articles
Sid#2121: it would be useful if you could go through the process and let us know if you can get everything running smoothly, run into any problems, or if anything's confusing
Airatak#7842: I think a corpus will be online tho
Airatak#7842: @Sid Cool. Will do.
Sid#2121: We're fine for both of those genres tbh, there's an Ao3 dataset already available and most of webtext/CC is news
Airatak#7842: oh ok cool
Sid#2121: but we'll be adding stuff to v2 of the pile at some point, if you can think of a dataset that wouldn't have as much overlap
bmk#1476: ao3 isnt in v1
Sid#2121: mostly the pipeline for multilingual CC processing needs improvement
bmk#1476: just ftr
bmk#1476: let's not worry about that for now
Sid#2121: yeah i know, but it's out there
bmk#1476: right now the crunch is NAACL and open gpt2 eval (which overlaps with NAACL ablations)
Airatak#7842: Got it
Airatak#7842: I'll think of some stuff for v2
Airatak#7842: with no overlaps
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/770011495026655242/unknown.png
bmk#1476: this is the top priority right now, we can always work out v2 later
Sid#2121: @Airatak the readme for GPTNeo should be mostly up to date but some stuff is outdated, ping me if you have any questions
Sid#2121: the colab notebook is a fairly good walkthrough for getting started
Airatak#7842: Cool, I'll check it out
Airatak#7842: I'll also read up a little about the eval methods for gpt
bmk#1476: the evals in the gpt2 paper are the ones we care the most for NAACL
Airatak#7842: I'm more used to image processing. I've worked on tons of GANs and CNNs. This is a nice change.
bmk#1476: though it doesnt need to be exact, and more eval is always better
bmk#1476: nice! i did GAN work for a while
Airatak#7842: got it
bmk#1476: very long ago, in ML-time
Airatak#7842: wow cool
Sid#2121: https://twitter.com/mrtnlhrr/status/1320404899016986625 guys MrtLhrr from twitter says our dataset too smol
bmk#1476: lMAO
Sid#2121: he won me over in the second half tho
bmk#1476: it literally isn't though
bmk#1476: if you're looking to pirate, do you know where you go
bmk#1476: to the other section of the eye website where you can easily find all the books alphabetized and in original epubs
Sid#2121: I pirate all my books by downloading a 37GB chunk of random books and grepping through them wbu
bmk#1476: it's literally the exact same data as elsewhere on the site but less good for piracy lmao
bmk#1476: i'm debating responding with either a troll response or a serious one
bmk#1476: or just not responding
bmk#1476: probably not responding is best
Sid#2121: This gives me hope for The Pile publicity lol, it seems to be toeing the perfect line between being very interesting and pissing people off
bmk#1476: "how big would a dataset have to be to be useful for meaningful training on GPT?"
bmk#1476: should i respond that
Sid#2121: sure
Sid#2121: my bet is he never responds
bmk#1476: if he does respond this is a valuable gauge of public opinion
bmk#1476: >inb4 he cites the 700GB figure from my blog post
Airatak#7842: Too small? lol
Sid#2121: this is a more interesting and likely more representative concern https://twitter.com/marian_nmt/status/1320446408093151232
Airatak#7842: Btw are there any pretrained models I can try on colab?
Sid#2121: @Airatak we have one, we'd have to send you the weights
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/770013554316410951/unknown.png
bmk#1476: this is literally why i asked shawwn not to mention us directly
Airatak#7842: How big is it?
Sid#2121: @Daj @bmk can we send weights?
bmk#1476: i dont see why not
Sid#2121: I... have no idea
Sid#2121: I'll download them to the hetzner
Sid#2121: and let you know @Airatak
bmk#1476: xl model?
Sid#2121: if you mean number of parameters, it's 1.3B
bmk#1476: probably.. 8GB upper bound?
Daj#7482: > @Daj @bmk can we send weights?
@Sid yea why not
Airatak#7842: Oh that should be fine
Airatak#7842: I plan on using colab, so I was hoping under 30-40 GB
bmk#1476: yeah most certainly under
Airatak#7842: Btw I should share this: https://www.infoq.com/news/2020/10/training-exceeds-gpt3/
bmk#1476: oh no not this paper
bmk#1476: tldr it's complete and utter horseshit
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/770014742991208478/unknown.png
Daj#7482: Eat it, LMU!
bmk#1476: > already cited by 4
bmk#1476: i feel dead inside
Airatak#7842: oh lol
Airatak#7842: The article seems promising
bmk#1476: it is not promising in the slightest
Airatak#7842: I didn't check the paper
Sid#2121: slightly longer tldr "exceeds GPT3 performance" is a massive stretch - they develop a specialist framework for a single task and finetune a smaller model specifically to excel at that task. The novel thing about GPT-3 performance is not that it's so good at cloze QA (there are already better models for that), it's that it can generalize so well across a wide range of tasks
bmk#1476: i've done some experiments with gpt3 that underperformed a custom gpt2-based system
Airatak#7842: Oh so it is not general purpose, that is deceiving
StellaAthena#3530: Yeah
bmk#1476: i should publish that and call the title "GPT3 COMPLETELY FUCKING FAILS AT TASK DONE BY 1000x SMALLER MODEL"
bmk#1476: and watch as i get 100000000 citations in a week
StellaAthena#3530: For most tasks you can build a better language model *for that specific task*
Daj#7482: "We developed a system called 'Human' that performs well on a very broad range of tasks..."
"Hah suck it! We invented 'calculator' that totally _creams_ human in arithmetic! We're so smart!"
Airatak#7842: I don't even have GPT3 access 😦
StellaAthena#3530: But that’s not what GPT-X is even trying to do
Airatak#7842: But the general tasks are what GPT-3 is good at
StellaAthena#3530: Not quite. GPT-3 is approximately human level at a lot of different tasks. For each of those tasks, there are better AIs but there aren’t better AIs at *all* of them.
StellaAthena#3530: It’s not that some tasks are general and some are not. It’s that GPT-3 is good in general across a wide variety of specific tasks
Airatak#7842: hmmm yea
Airatak#7842: Well my primary use is text generation, and GPT-3 seems to be the best at that
Airatak#7842: I tried to finetune smaller models, but they don't come close to GPT 3
StellaAthena#3530: What sort of text?
Airatak#7842: Large text, like stories
Airatak#7842: Small models seem to lose the context
StellaAthena#3530: Ah yeah
bmk#1476: gofai researchers: dl algos suck they dont generalize at all to tasks theyre not tuned on theyre just glorified curve fitting trying to push sota
gpt3: does reasonably good on a load of tasks it's not tuned on
gofai researchers: gpt3 sucks because it doesn't push sota
Airatak#7842: GPT-3 did not
Airatak#7842: because of its size
StellaAthena#3530: GPT-3 is *better*. It's not amazing at that.
Airatak#7842: > gofai researchers: dl algos suck they dont generalize at all to tasks theyre not tuned on theyre just glorified curve fitting trying to push sota
>
> gpt3: does reasonably good on a load of tasks it's not tuned on
> |
> gofai researchers: gpt3 sucks because it doesn't push sota
@bmk 🤣
Sid#2121: it still only has a context window of 2048 tokens. For novels etc, we still have a few improvements to make
Airatak#7842: > GPT-3 is *better*. It not amazing at that.
@StellaAthena Agreed. But it seems to be the best one yet.
StellaAthena#3530: You can probably use the AI Dungeon Master for your task actually. You said you didn’t have access to GPT-3 right?
Airatak#7842: Yea, I used philosopherai and AI Dungeon for experiments
StellaAthena#3530: Ah
StellaAthena#3530: Yeah
Airatak#7842: I think finetuning a bit may help
StellaAthena#3530: The AI Dungeon has some extra features that can help with consistency across stories
Airatak#7842: I do want to try making a model with a large context window
StellaAthena#3530: You can assign yourself a name and a role, you can add people to your party, stuff like that
Airatak#7842: Yea, it has the remember text feature but that is around 1000 characters
bmk#1476: I have access to gpt3 but I don't think OA will be happy if I just let random people use it
Airatak#7842: Yea I don't think that is a good idea
StellaAthena#3530: Right. I’m saying that AI Dungeon is probably the best thing that exists and can be accessed for free. Probably better than pure GPT-3 access
bmk#1476: They've been very mad at previous projects that could be used as "backdoor" into the unencumbered gpt3
Airatak#7842: Well AI Dungeon does have a limit to the max size of the text gen
Airatak#7842: So what would you guys recommend, training a custom model for story generation, or finetuning one? |
gwern#1782: remember that AID is definitely inferior to regular GPT-3 for non-narrative questions. like marcus's commonsense questions, where GPT-3 got right 50% of AID's failures
bmk#1476: unrelated but @Airatak 日本人ですか?
Airatak#7842: @gwern I think I know you from Reddit
bmk#1476: gwern is our resident Famous Internet Person™
Sid#2121: 👀 https://cdn.discordapp.com/attachments/729741769738158194/770018454370648155/Screenshot_2020-10-25_Notifications_Twitter1.png
Airatak#7842: > unrelated but @Airatak 日本人ですか?
@bmk lol no, I'm not
bmk#1476: ah
bmk#1476: i thought your username sounded vaguely japanese
Airatak#7842: Yea, I'm an Anime fan and stuff
bmk#1476: ah
Airatak#7842: I do know a bit of Japanese but still learning
bmk#1476: 日本語を出来ましたか?
Airatak#7842: Yea, I don't understand that much yet
Airatak#7842: I'm doing the basics man
Airatak#7842: Still need to open the hira and kata charts to even read this out, and I'm hopeless with kanji
bmk#1476: ah
bmk#1476: actually i made a grammar error it should be 日本語を出来ますか?
bmk#1476: but yeah im also just learning
Airatak#7842: You seem to be way ahead of me |
bmk#1476: nah
Airatak#7842: I think I'll give the N5 thing next year
bmk#1476: i'm probably a fraction of the way to N5
bmk#1476: japanese isn't a language i'm putting a lot of resources into learning
Airatak#7842: Same
Airatak#7842: I'm really swamped with work
bmk#1476: Ah yeah same here
Daj#7482: bmk is our resident chinese-canadian (I don't know actually, just a guess), that has his entire computer set to german and randomly speaks japanese
Airatak#7842: ok :thonk:
Airatak#7842: Chinese-Canadian, with laptop in german and learning Japanese
Daj#7482: I'm the amazing German with an Irish name and a Californian accent
bmk#1476: I'm also trying to learn French and kind of failing at that
bmk#1476: In my defence I'm supposed to be rotating languages on my phone monthly but I got lazy
Airatak#7842: I'm from India, with an Indian accent, who knows English, Hindi, Spanish and just started learning Japanese
Airatak#7842: so yea..
Airatak#7842: > In my defence I'm supposed to be rotating languages on my phone monthly but I got lazy
@bmk My friend uses this trick
Sid#2121: I'm the resident Brit who apparently likes peppermint on everything ?
Sid#2121: (seriously is that a stereotype people outside of germany have of brits too?)
Daj#7482: > I'm the resident Brit who apparently likes peppermint on everything ? |
@Sid wtf fr tho
Daj#7482: I know I bully you constantly for being british
Daj#7482: but come on man
Airatak#7842: peppermint on everything? even pizza?
Daj#7482: > peppermint on everything? even pizza?
@Airatak This is an infohazard
Sid#2121: Yes, mint pizza is a classic british dish
Airatak#7842: ewww
Daj#7482: > Yes, mint pizza is a classic british dish
@Sid We nuked the wrong country
Airatak#7842: brits being brits
Airatak#7842: > @Sid We nuked the wrong country
@Daj +1
Daj#7482: One day, I want someone to write an actual academic philosophy paper on whether we nuked japan too hard, or not hard enough
Daj#7482: I'm pretty sure radiation led to anime
Daj#7482: Only explanation
Airatak#7842: ummm.. ok 🤣
Sid#2121: > I'm pretty sure radiation lead to anime
@Daj imagine the godawful peppermint-based cartoons you'd get if you had nuked britain
gwern#1782: (part of why I call connor the anti-thiel is because thiel still has a slight german accent despite living in california for so many years) |
Daj#7482: > @Daj imagine the godawful peppermint-based cartoons you'd get if you had nuked britain
@Sid https://www.youtube.com/watch?v=MtTBqIAQ434
Airatak#7842: Anyway, it is 2 am here and I've got to wake up at 6, so bye peeps
Daj#7482: > (part of why I call connor the anti-thiel is because thiel still has a slight german accent despite living in california for so many years)
@gwern I would gladly take this nickname hah
Daj#7482: Night @Airatak !
Sid#2121: > @Sid https://www.youtube.com/watch?v=MtTBqIAQ434
@Daj this is better than anime, you should've nuked us
Daj#7482: > @Daj this is better than anime, you should've nuked us
@Sid I am unnerved by how perfectly fitting this video is for the extremely specific scenario you asked for
Daj#7482: _slightly updates beliefs towards simulation_
Sid#2121: The first thing I thought was "How long has Connor been attempting to set up this exact scenario in order to post this link"
Daj#7482: > The first thing I thought was "How long has Connor been attempting to set up this exact scenario in order to post this link"
@Sid My true reward function has been maximized. I must now move on to the next universe.
Sid#2121: Good luck setting up an extremely specific herbal / nuclear warfare based conversation with your next subject
Daj#7482: This is the 2^6943-5th universe I have eventually succeeded with
Daj#7482: this time I didn't even have to poison alignment research to find the "strong" utility function, which in reality is mathematically rigorously this exact scenario
bmk#1476: EleutherAI is where only the best philosophical discussions take place
Sid#2121: @Daj @bmk model weights are on the VM now - how do we want to share?
Daj#7482: _not_ through the VM |
Daj#7482: GCP egress is crazy lol
Sid#2121: I'll pop them on the hetzner and probably just upload them to transfer.sh for @Airatak
Sid#2121: they're 15GB btw
bmk#1476: Why are they so big? O.o
Sid#2121: 🤷♂️
Daj#7482: probably protobuf stores FP16 as FP64 or smth
Sid#2121: sounds about right
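(A hedged back-of-the-envelope on the 15 GB figure, for 1.3B parameters: the size is also consistent with fp32 weights plus Adam's two moment tensors being saved, not necessarily an fp64 upcast — an assumption, not a statement about what the file actually contains.)
```python
params = 1.3e9  # parameter count of the XL model

estimates = {
    "fp32 weights only":           params * 4,      # ~5.2 GB
    "fp32 weights + Adam moments": params * 4 * 3,  # ~15.6 GB (weights, m, v)
    "fp64 weights only":           params * 8,      # ~10.4 GB
}
for label, nbytes in estimates.items():
    print(f"{label}: {nbytes / 1e9:.1f} GB")
```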
gwern#1782: ("But rest assured, this will have been the 2^6943-5th we have destroyed the human universe with memes, and we have become exceedingly efficient at it.")
Ad31#5897: Hi
bmk#1476: hello
bmk#1476: what brings you to these parts
Airatak#7842: > I'll pop them on the hetzner and probably just upload them to transfer.sh for @Airatak
@Sid Oh cool thx
Airatak#7842: Quick question, how are the bigger models going?
Airatak#7842: This one is similar size to GPT2 XL right?
StellaAthena#3530: Right now they’re not – we don’t have the compute to train a full scar GPT-3. We are part of a google program called TFRC where they give TPUs access to indie researchers and non-profits, but at our current rate it would take more than a year to train GPT-3. We are currently hoping to impress Google with our XL model and related work on data processing (#the-pile) so that they’ll agree to give us more.
Airatak#7842: Oh I know that, the TensorFlow Research Cloud thing
Airatak#7842: Is it not possible to use one of the distributed training architectures and then simultaneously use as many resources as are available?
Airatak#7842: I think this approach might work upto training the 13B model
Airatak#7842: Not sure about the 175B, cuz of its size |
cfoster0#4356: Possible yes
cfoster0#4356: IMO there are a bunch of possible endgames for GPT-Neo.
cfoster0#4356: But the TFRC-plus option seems the most natural and attractive
Airatak#7842: Well I mean I don't know if Google will give enough resources for the 175B model for free
Airatak#7842: I think distributing the load would be the best idea
Airatak#7842: Perhaps across 5-6 compute instances
Airatak#7842: So you can use Colab, as well as multiple TFRC TPUs
cfoster0#4356: We'll see. They're already giving away resources for free, so there's precedent. Plus getting an independent, open replication of GPT-3 on Google's own cloud may be a good look for them
cfoster0#4356: I'm no expert on our architecture for GPT-Neo, though. We might already have plans for that kind of load distribution. @Airatak
kindiana#1016: training across multiple tpu devices/pods is a currently unsolved problem haha
Airatak#7842: Can't we just have a parameter server and then make the TPUs individually compute the gradients, then send over each batch to the parameter server which would update them and then redistribute them?
kindiana#1016: yes, but due to the way TPUs are architected, the speed at which you can aggregate gradients is very slow compared to the computation
kindiana#1016: https://www.shawwn.com/docs/2020-01-swarm-training.pdf
kindiana#1016: this is the best attempt so far, but its not as time/compute efficient as using larger tpus
Airatak#7842: Hmm.. makes sense
Airatak#7842: But we can just use the TPUs for the compute. Gradients can be aggregated on a GPU based server?
kindiana#1016: you still need to get the gradients out of the tpus
Airatak#7842: Actually, I'm not very experienced with TPUs, so It is possible I am missing something here
kindiana#1016: the tl;dr is that tpus execute tensorflow graphs, and for high throughput you want to minimize the amount of dispatches you do
kindiana#1016: to get data in and out of TPUs, you need to either use google cloud storage (pretty slow), or by dispatching the data with the kernels (slow if you want fine grained synchronization) |
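(To make the bottleneck concrete: a toy, framework-free sketch of the synchronous parameter-server step being discussed — not EleutherAI's training code. The aggregate-and-redistribute step in the middle is exactly what becomes slow when gradients have to leave the TPUs via cloud storage or per-step dispatches.)
```python
import numpy as np

def server_step(params, worker_grads, lr=1e-3):
    """Average the workers' gradients and apply a single SGD update."""
    avg_grad = np.mean(worker_grads, axis=0)   # the expensive gather step in practice
    return params - lr * avg_grad              # updated params are then re-broadcast

params = np.zeros(10)
worker_grads = [np.random.randn(10) for _ in range(4)]  # pretend 4 TPU workers
params = server_step(params, worker_grads)
print(params)
```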
Airatak#7842: Oh that makes sense
Airatak#7842: But isn't distribution inevitable, I mean even the fastest TPU won't be able to train the 175B model
kindiana#1016: tpu pods are a bunch of tpus with high speed interconnects
kindiana#1016: up to like 8192 tpu chips
Airatak#7842: Yea, I just checked, it can give 100 petaflops+
Airatak#7842: Going back to the GPT-3 Paper, I think this would be very easy to train on the V3 Pod
kindiana#1016: eleuther has quota for up to a preemptable v3-2048 I think
kindiana#1016: subject to capacity constraints
kindiana#1016: (realistically 100-500 tpu chips or so is what capacity allows)
Airatak#7842: Oh ok
Airatak#7842: I mean a 512-core pod is $384 per Hour
kindiana#1016: tfrc is free 🙂
Airatak#7842: yea tfrc is awesome
Airatak#7842: I like how google gives away stuff
kindiana#1016: the marginal cost for google to run idle TPUs is pretty low, so its a great way to support research
Airatak#7842: yea
Airatak#7842: Btw till google reviews and decides if they want to give access to TPUs for this project, would it be possible to at least train the 2.7B and 6.7B?
Airatak#7842: Cuz if the 1.3B can be done on Colab, these should be doable
Airatak#7842: and they would be significantly more impressive to show to google
Sid#2121: @Airatak 2.7B and 6.7B are absolutely possible, we already have working configs |
Sid#2121: they would just take a little while, TPU pod availability is spotty these days
Airatak#7842: oh ok
Sid#2121: we plan to start training both these sizes soon, but are prioritizing ablations for our data gathering paper (#the-pile)
Airatak#7842: yea I got it
B1SH0P#0913: Hi, I've just been forwarded info about this research group and would like to know some more info about how to contribute.
cfc#2691: me too
cfc#2691: i got nerd chills reading about GPT3 as a wise being
cfc#2691: https://medium.com/@kirkouimet/my-mind-blowing-conversations-openais-latest-ai-gpt-3-235ba5fb9453
cfc#2691: as you probably already seen
cfc#2691: AGI is near
cfc#2691: loved the ethics document
Deleted User#0000: btw I should have thought/said this earlier, but i could probably help with CPU compute stuff (up to a few hundred cores) for eleuther, from my uni cluster.
I could probably also offer 8xV100 machines with 40cores assuming my account didnt expire for that
cfoster0#4356: @cfc Hey! Nice to meet ya. The most immediately helpful way to contribute, code-wise, would probably be implementing the various evaluation tasks in this repo https://github.com/EleutherAI/lm_evaluation_harness/projects/1
cfc#2691: Thanks!
cfc#2691: I will look into that
cfoster0#4356: In the longer term, we're also on the lookout for large non-English text datasets, so if you know of any, definitely create an issue for it on the Pile repo https://github.com/EleutherAI/The-Pile
bmk#1476: Where large means >100GB preferably
StellaAthena#3530: **Pipeline Update:**
1. The model architecture is built. |
2. The training data has been collected.
3. The evaluation data is *almost* finished being collected. There are a couple synthetic datasets we need to generate and that's it.
4. @bmk will begin deduplication whenever he gets around to it.
5. Once the data is deduplicated, we can start training GPT-2 scale models, investigate scaling laws, and do ablations.
6. We are working on getting the compute to do GPT-3 scale models but currently don’t have it.
7. To hasten the process, most of the evaluation datasets are uploaded without the code to actually evaluate the results. This obviously needs to be implemented in the near future, but we can get as far as training models before we actually need to use it.
**Research Updates**
1. Our paper on the 1.4 TiB text dataset we built is coming along nicely. Still needs work, but we are planning on submitting it to NAACL in a month.
2. Other than needing deduplication, the Pile is ready for release.
3. Version 1 of the Pile is almost exclusively English text. We would like to extend the Pile to non-English datasets, so if you have large datasets that are not in English we are *very* interested. If you speak a relatively obscure language there’s probably a lot of low hanging fruit, like epubs and government records that have been made public. We have trouble finding data in languages we don’t speak for obvious reasons. There is now a branch of the Pile repo for version 2: https://github.com/EleutherAI/The-Pile/tree/version2
StellaAthena#3530: @Sid @Daj Did I get anything wrong?
Daj#7482: Seems good to me
bmk#1476: I don't think 5 is blocked on 4 necessarily
bmk#1476: It depends on whether we have enough time tbh, we might just do it with the undeduped data
StellaAthena#3530: We need to dedupe the eval data out of the train data at a minimum
bmk#1476: Yes but that's *considerably* simpler
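(What "dedupe the eval data out of the train data" mostly comes down to is an n-gram overlap check — the GPT-3 paper used 13-grams. A rough sketch, not the actual Pile deduplication code; the placeholder documents are just to show the shape of it.)
```python
def ngrams(text: str, n: int = 13) -> set:
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(train_doc: str, eval_ngrams: set, n: int = 13) -> bool:
    """True if the training document shares any n-gram with the eval sets."""
    return not ngrams(train_doc, n).isdisjoint(eval_ngrams)

# Build the eval n-gram set once, then filter training documents against it.
eval_docs = ["placeholder eval passage " * 5]          # stand-ins, not real data
eval_ngrams = set().union(*(ngrams(d) for d in eval_docs))

train_docs = ["placeholder training document " * 5]    # stand-ins, not real data
clean = [d for d in train_docs if not is_contaminated(d, eval_ngrams)]
```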
cfoster0#4356: We have all the data for dedupe. The remaining evaluation datasets are synthetic tasks
bmk#1476: We don't need synthetic tasks to dedupe, right?
bmk#1476: It's not like we're going to find them in the data |
cfoster0#4356: Nah those we generate ourselves
cfoster0#4356: (I should say, all the data modulo LAMBADA, which we have just in a different repo IIRC)
StellaAthena#3530: 1. I disagree that synthetic data is necessarily not in the training data
2. ~~There are a couple non-synthetic datasets that have been claimed but not finished~~
StellaAthena#3530: Wow we've made so much progress in the past week my brain hasn't caught up with the PRs I've merged yet lol.
StellaAthena#3530: Yeah it looks like we have all of the non-synthetic data other than needing to move a copy of LAMBADA over
cfoster0#4356: Mk we can figure this out back in #lm-thunderdome
cfoster0#4356: I like the summary and think it's worth pinning
StellaAthena#3530: Reposting with corrections to bump the important update
StellaAthena#3530: **Pipeline Update:**
1. The model architecture is built.
2. The training data has been collected.
3. The evaluation data is *almost* finished being collected. There are a couple synthetic datasets we need to generate and that's it.
4. @bmk will begin deduplication in the near future.
5. Once the data is deduplicated, we can start training GPT-2 scale models, investigate scaling laws, and do ablations.
6. We are working on getting the compute to do GPT-3 scale models but currently don’t have it.
7. To hasten the process, most of the evaluation datasets are uploaded without the code to actually evaluate the results. This obviously needs to be implemented in the near future, but we can get as far as training models before we actually need to use it. If you want to contribute to the evaluation code, come hang out at #lm-thunderdome
**Research Updates**
1. Our paper on the 1.4 TiB text dataset we built is coming along nicely. Still needs work, but we are planning on submitting it to NAACL in a month. |
2. Other than needing deduplication, the Pile is ready for release.
3. Version 1 of the Pile is almost exclusively English text. We would like to extend the Pile to non-English datasets, so if you have large datasets that are not in English we are *very* interested. If you speak a relatively obscure language there’s probably a lot of low hanging fruit, like epubs and government records that have been made public. We have trouble finding data in languages we don’t speak for obvious reasons. There is now a branch of the Pile repo for version 2: https://github.com/EleutherAI/The-Pile/tree/version2
4. We should begin talking about what actual experiments we want to run. If you have ideas for interesting things to study, we are interested!
Teven#6831: NAACL's anonymity period has just started right - how do you plan to release The Pile then?
StellaAthena#3530: > NAACL's anonymity period has just started right - how do you plan to release The Pile then?
@Teven Correct. We are currently under the impression that we can release the data as long as we don't talk about the fact it's connected to a NAACL publication under review. I have been meaning to double check this with the ACs though, just haven't gotten around to it. If we can't do that, then we'll put releasing the Pile on hold until the spring I guess. Less than thrilling, but *c'est la vie*. We could also bump our submission to ACL which hasn't had its anon period begin yet.
Teven#6831: that's what I thought you'd go for actually 🙂
Teven#6831: probably more important to have a nice paper out to present the data than to get into a conference
Aran Komatsuzaki#5714: yup agreed
Teven#6831: .... especially when a likely outcome is "oh yeah you should submit this to LREC instead"
StellaAthena#3530: Also, “release” isn’t quite right. The data is already public, just not advertised and in a place that nobody will find by accident.
Teven#6831: Yeah, I mean having a proper release with good PR and an easy entry point
Aran Komatsuzaki#5714: pretty much all NAACL accepted papers don't get as much attention as even Book3 got
StellaAthena#3530: Right
StellaAthena#3530: NAACL does have a focus this year that makes them more likely to accept our paper. Most of us are pretty new to NLP though (at least academically speaking) so if you have suggestions we are all ears.
Teven#6831: I mean it's definitely something to think about for a bit; what I mean is that the advertising value of NAACL is not necessarily worth waiting for, and that the academic value (the main reason you submit to conferences) is not necessarily something Eleuther cares for at the moment. I give a lot of value to peer review, but I am not convinced it matters that much here/for that kind of throw-everything-into-the-pot big dataset paper
Aran Komatsuzaki#5714: if it's not possible to release a paper during the anonymity period, then it'll be better to release a blog post.
Aran Komatsuzaki#5714: or just make the github page look like one lol
Teven#6831: I was certain that would break the rules actually
Aran Komatsuzaki#5714: @Teven Jason Phang and I said something similar 12 hours ago lol |
Teven#6831: Yeah I'm going through that, I still feel like that makes for worse PR
Aran Komatsuzaki#5714: yeah suboptimal
StellaAthena#3530: Yes, we are aware that a blog post or public announcement would break the rules
Teven#6831: It's important to have faces of the project to answer questions on Twitter for example
StellaAthena#3530: We are not considering doing that during the anon period
Aran Komatsuzaki#5714: the best possible PR is to release everything at once, and everyone tweet at once, then getting retweeted by hardmaru and gwern -> success!
Aran Komatsuzaki#5714: well that's what just happened to book3, so the future is pretty bright
Aran Komatsuzaki#5714: along with a blog post, i guess.
Teven#6831: haha if you're talking about the bibliotik dataset I'm a bit worried about the potential for backlash on that
StellaAthena#3530: Shawn announced that, not us. We were not planning on that happening and did not choose or approve how it was announced.
Teven#6831: but yeah what you want is social media exposure + attaching the project to your brand in the minds of people so they remember you for later + an easy entry point for people to read/use it
StellaAthena#3530: That’s an important distinction
Teven#6831: Yep, I get that
StellaAthena#3530: We have a website that I need to do some work on, which is going to be where we direct people
StellaAthena#3530: We have a little behind-the-scenes planning on the PR front, but we definitely need more. That’s part of why we don’t mind delaying the announcement through the NAACL anon period: we can use the time to plan and get things in order.
Teven#6831: I'm not saying it was a bad idea to do the release (I don't have an informed opinion on this) I'm just pointing out that bad buzz also happens sometimes - at HF we've been surprised before, it's faster than we thought
Teven#6831: I guess the question is 1. what do we lose by waiting for more time 2. what do we gain from the chance of getting accepted to NAACL
Aran Komatsuzaki#5714: didn't know that. interesting.
StellaAthena#3530: I don't think we lose much by delaying. The future is a very long time and I don't see any particular reason to need to announce now vs in March.
StellaAthena#3530: AFAIK there's nothing even remotely similar in the works (though to be fair, I wouldn't know if there was) |
Aran Komatsuzaki#5714: it's pretty possible that a third party will release a big dataset tho
Aran Komatsuzaki#5714: like The Pile
Teven#6831: I think the main time pressure on 1. is OpenAI's next move; when they eventually release the next generation/multi-modal stuff. I really don't see what the gain is in 2. especially since it is possible to do a proper release now and send the paper to a conference later
Aran Komatsuzaki#5714: C4 was one. It's possible that Google may
Aran Komatsuzaki#5714: release something broader
Teven#6831: FB has been building a lot of multilingual resources too; that would overshadow a big English-only dataset
Aran Komatsuzaki#5714: well there's The Pile v2, which is a 10TB multilingual dataset
Aran Komatsuzaki#5714: ofc it'll be after The Pile v1, so yeah you're right
Teven#6831: step by step haha
StellaAthena#3530: re: paper, the major motivations for writing it are:
1. Several of us are grad students and junior researchers. Being involved in a paper like this looks good on our resumes.
2. We are a new organization without any particular street cred or past performance. We got into Google's TFRC program largely because @Daj released the first open source reimplementation of GPT-2. Being able to point to a paper in a high-profile venue is helpful when talking to people who are more established than us.
StellaAthena#3530: I don't know how likely being scooped is. Despite my role in EleutherAI, this is simply not my field (yet?). If that's a serious concern then yeah 6 months could matter. I simply don't know.
Teven#6831: Definitely agree on the resume part as a junior researcher. I feel like it's important to see conferences as a means to an end for visibility; and it is unclear that at the moment, for a project like this that is easy to communicate about, they are the best or only way to get visibility. For a pure research project, that's not a question. For something like this that anyone can understand? I don't think so. If you need publication count, however, as you do in some systems, that's something else.
Teven#6831: I want to point out that in this case, though, you don't have much to lose by submitting later. If anything it's good to have already built visibility online before sending the paper in
StellaAthena#3530: Side note: Shawn's Tweet has 300 retweets and Aran's has 100, so maybe I'm underestimating our current visibility.
Teven#6831: At least this strategy has worked very well for HF; the transformers paper only appeared now in EMNLP.
Teven#6831: 1.2k likes is pretty good
StellaAthena#3530: How have you gone about promoting the work? Simultaneous announcements on social media?
Teven#6831: at least at HF we wouldn't be unhappy with that |
Aran Komatsuzaki#5714: I didn't do any work on The Pile, but I got like 50 followers just by tweeting it lol
StellaAthena#3530: I've gotten... 10? It looks like.
Aran Komatsuzaki#5714: Yeah but those 10 are rather high-impact accounts who found you from my tweet promoting your name.
Aran Komatsuzaki#5714: They aren't random twitter strangers.
StellaAthena#3530: Oh definitely. I'm not saying otherwise.
StellaAthena#3530: I was tagged in the comments of one of the two posts and still picked up 10 followers already. That's big, not small, especially as your tweet has gotten less attention than Shawn's
Teven#6831: So it's nice to have a simultaneous announcement on several social media (although Twitter and Linkedin have been our main platforms) ; tweet-writing is a skill in itself, and I'd wager Shawn could have gotten a lot more with a more appealing picture (HF's CEO is really good at this for example) ; fishing for retweets and mentioning well-known organizations works well ; and finally, if you're not starting from a high base or if you're trying to reach an audience that's different from your usual one, collaborations with other orgs solves that problem
Aran Komatsuzaki#5714: Yeah I could've got more attention than Shawn's if I tweeted earlier. It's all about who tweets first.
StellaAthena#3530: @Teven No pressure to have a concrete answer now, but is there a non-zero possibility we could get HF to promote our work as cool?
gwern#1782: @Teven I didn't watch the kaplan presentation. did it sound like they have scaled multimodal models to anywhere near gpt-3?
Teven#6831: @gwern i haven't either ! @Aran Komatsuzaki is the man
Aran Komatsuzaki#5714: @gwern i'll give the slides to you if you want
Aran Komatsuzaki#5714: and a summary
Teven#6831: @StellaAthena oh we're pretty big fans we would definitely have retweeted I think
gwern#1782: hm. did you say that he was going to release a paper soon or did I imagine that?
Teven#6831: but something that would really work well is integrating The Pile with HF-datasets and doing a big joint release
Aran Komatsuzaki#5714: yeah he will
gwern#1782: in that case, don't fash yourself
Aran Komatsuzaki#5714: alright 🙂
Teven#6831: we got a lot of early traction by doing that with Microsoft for example |
Teven#6831: it also has the perks of solving the easy-access-point part
Aran Komatsuzaki#5714: @Teven I don't know why but I think I'm pretty good (only) at twitter promotion, so that's not a huge concern, I guess.
Teven#6831: ... and I'd wager that for a 1TB dataset that's gonna be pretty important
StellaAthena#3530: > ... and I'd wager that for a 1TB dataset that's gonna be pretty important
@Teven 100%
Aran Komatsuzaki#5714: well organization def helps
Aran Komatsuzaki#5714: Maybe we should ask HF? lol
Teven#6831: @Aran Komatsuzaki oh yeah I've noticed haha - but you know it's also nice to use an account with 31k followers haha
Aran Komatsuzaki#5714: exactly lol
Daj#7482: Just catching up on the logs. HF dudes are cool dudes, so pretty sure we'd be happy to collaborate any time. And my experience with Twitter mirrors Aran's: I'm by no means a "big" account, but my followers are unusually high quality, which is great
Aran Komatsuzaki#5714: oh we have gwern here
Aran Komatsuzaki#5714: gwern has 28k
Teven#6831: yeah I feel like you guys have good reach into hardmaru-type amplifiers
Daj#7482: Apologies if this has been raised already, but have we ever concretely considered what we want out of more attention?
Teven#6831: I think that's also a way to get resources
bmk#1476: We can plug our ~~soundcloud~~ patreon
Aran Komatsuzaki#5714: attention is all you need is our tenet, right?
Teven#6831: You had a lot of attention in the spring; it's important to follow up and show people that you've delivered something
bmk#1476: But really the thing we need right now is to convince google to give us more TPUs
bmk#1476: Getting some donations would be nice but it's not going to be a game changer |
Daj#7482: > But really the thing we need right now is to convince google to give us more TPUs
@bmk I guess this makes sense, though I have a feeling attention has a kind of log scaling
Daj#7482: Just playing a bit of devil's advocate
bmk#1476: And it's *certainly* not going to pay for enough TPUs
Teven#6831: attention has log scaling but also long tail distribution
Daj#7482: And a negative flip sometimes
StellaAthena#3530: I've said this before, but getting a lot of public attention is a good thing to point to when talking to Google about getting more TPUs
Daj#7482: We already can't reliably onboard even a few people over a week, I feel heh
Teven#6831: Idk at least it seemed obvious to me that the way to convince Google to do that was to establish yourself out there
Teven#6831: @StellaAthena exactly
Daj#7482: > I've said this before, but getting a lot of public attention is a good thing to point to when talking to Google about getting more TPUs
@StellaAthena Yes this as said makes sense
Daj#7482: Just checking for the record
Daj#7482: I'm all in favor, just checking for completeness
StellaAthena#3530: The three concrete goals IMO are:
1. Being able to go to Google to ask for TPUs and tell them that we are famous and awesome and they want us to put "thanks so much to Google for making this possible" on our work.
2. Recruitment (we need better organization to exploit this though)
3. Many of us are PhD students or junior researchers. This kind of attention (esp. with the associated publication) is good for career building.
Daj#7482: 1. Yeah probably, doesn't quite fit my models of Zak and Jonathan but that's fuzzy
2. Organisational capacity seems like a real bottleneck |
3. Totally agreed, can I be senior researcher/last author? lol
Aran Komatsuzaki#5714: I think you guys can just apply for HF eventually.
Aran Komatsuzaki#5714: maybe an internship
Daj#7482: I get paid to do hands-on alignment all day, I'm good hah
Daj#7482: Well, maybe more capabilities than alignment
StellaAthena#3530: 1. Even if Zak and Jonathan aren't like that, it's a good thing for them to be able to tell their bosses if they need to go up the chain.
2. Lol yeah. I'm working on it, but I haven't really paid attention to onboarding organization since I revamped the intro doc. Realistically we need more people who are designated PMs to get to a reasonable place.
3. We need to have a larger conversation about authorship I think. I've noticed that people have been reorganizing the authorship list on the paper.
Daj#7482: My job does not pay for blog posts about numbers experiencing suffering lol
gwern#1782: forget numbers, do electrons experience suffering?
Daj#7482: > 1. Even if Zak and Jonathan aren't like that, it's a good thing for them to be able to tell their bosses if they need to go up the chain.
> 2. Lol yeah. I'm working on it, but I haven't really paid attention to onboarding organization since I revamped the intro doc. Realistically we need more people who are designated PMs to get to a reasonable place.
> 3. We need to have a larger conversation about authorship I think. I've noticed that people have been reorganizing the authorship list on the paper.
@StellaAthena 1. Jup, makes sense
2. Yep, honestly amazing we even have you
3. Yes, do this asap
Daj#7482: > forget numbers, do electrons experience suffering?
@gwern I feel numbers is even crazier than electrons
StellaAthena#3530: lol. Right, but I'm 27 and am currently getting my papers rejected from conferences because they're "too theoretical" whatever that means and BMK has talked about wanting to get an industry research job.
bmk#1476: Forget electron**s**, are there even multiple electrons? |
StellaAthena#3530: Wow. My birthday was last week and that's the first time I've said the words "I'm 27." Feels weird.
Aran Komatsuzaki#5714: @StellaAthena age is just a number. forget about it
Daj#7482: I have no stable memory of my age until the wavefunction is collapsed by me having to fill out a form
gwern#1782: you're almost over the hill. like christmas cake. terrifying. soon, you will be, one of us, one of us
bmk#1476: One electron universe 🤝 The Egg
Aran Komatsuzaki#5714: i'm 25 and i have no paper accepted to any conference and i'm totally fine.
Daj#7482: We've derailed pretty hard...but I just have to share that recently I've had a lot of pretty good thinking progress by replacing the word "computation" with "magic"
Daj#7482: > i'm 25 and i have no paper accepted to any conference and i'm totally fine.
@Aran Komatsuzaki I'm 25 and I never wrote a single paper hah
StellaAthena#3530: You're also a PhD student, right? I'm a junior researcher without an advanced degree who lucked her way into a very cool research job that is too hush-hush to publish most of the time. 😛
Aran Komatsuzaki#5714: @Daj haha all you need is recognition
StellaAthena#3530: If my boss had his way I would just never publish anything tbh.
Daj#7482: Which is why it would be such a powermove to start as a last author haha (this is memes, I know I deserve small to no credit for the pile)
Aran Komatsuzaki#5714: Yeah, I'm a 3rd-year student in the GaTech ML PhD program.
StellaAthena#3530: Anyways, to draw the conversation back...
Daj#7482: > Anyways, to draw the conversation back...
@StellaAthena Yes, _mom_
bmk#1476: ~~Stella is Angela Merkel confirmed~~
Aran Komatsuzaki#5714: > Which is why it would be such a powermove to start as a last author haha (this is memes, I know I deserve small to no credit for the pile)
@Daj That's kinda similar to my situation lol |
bmk#1476: If any of you guys want to hitch onto the Pile there's still a lot of work that need to be done
bmk#1476: I'd love to make the author list even longer
cfc#2691: good hyping
Daj#7482: I might be too busy and ADHD to write code, but I love editing papers
bmk#1476: Editing would be helpful
Daj#7482: I should do that
Aran Komatsuzaki#5714: i love giving feedback and editing
Daj#7482: After all, what paper couldn't use a dank double pun about violence AND sex? (a bang, if you will)
StellaAthena#3530: 1. It seems like we are agreed that getting promoted by HuggingFace and allowing people to access our data through their interface is good.
2. We **really** need to plan a PR strategy. Hopefully the HF people can lend a hand, since they've done this before.
3. It seems like people with more knowledge than me think there's a possibility of being scooped, so announcing the dataset is more pressing than publishing it. This may mean we don't submit to NAACL due to their anon policy
4. We should think about who else we can reach out to and secure publicity from. We have Gwern and Shawn in our camp already. @Louis and I can reach out to some people at GTech who do NLP (Louis has already downloaded the Pile to his lab). Who else is in our social network?
5. We need to work on the website again. It's fine for where we currently are, but not ready for public announcement.
StellaAthena#3530: 1'. Is there someone we would rather partner with than HF? (No offense @Teven) If so, who and why?
Aran Komatsuzaki#5714: Let's ask some OpenAI people to try to stop us from releasing The Pile
Daj#7482: This is not my comparative advantage but I will of course help where I can
Aran Komatsuzaki#5714: That'll be a huge PR
Daj#7482: hahaha
Daj#7482: We definitely need to reach out to OA before we release Neo btw
Daj#7482: The Pile less so |
bmk#1476: Re: 3: didn't Jason say that the anon policy would still allow us to advertise the dataset as long as we don't mention the paper?
Aran Komatsuzaki#5714: @Daj You can threaten them for an access to GPUs
Aran Komatsuzaki#5714: or employment
bmk#1476: What could possibly go wrong
Daj#7482: tbh I'm not sure if I have an unusually high or low chance of ever working for OA lol
StellaAthena#3530: > Re: 3: didn't Jason say that the anon policy would still allow us to advertise the dataset as long as we don't mention the paper?
@bmk Jason said he *thought so*. Teven said he didn't think so. I'm going to email the ACs today and ask explicitly.
StellaAthena#3530: I can promote us within the AI Village, but AIV is at a similar point as EleutherAI in terms of becoming a "real thing." A little further along, but their endorsement will likely not carry much weight. Also we're computer security people, not NLP people.
Aran Komatsuzaki#5714: We can partner with Google.
bmk#1476: Yeah good idea emailing the ACs
Daj#7482: Again a bit of devil advocacy: Does Eleuther even _have_ a future path/goal?
StellaAthena#3530: Nope!
Daj#7482: I still feel like this is more of a digital water cooler
Daj#7482: I'm not sure what "real thing" would entail
asparagui#6391: if i was to write the tfrc peeps re getting moar compute who would be best to send them to over here
StellaAthena#3530: Other than me and @researcher2 hanging out and working on the copyleft in our free-time-from-our-free-time-activity and whatever goes on in #lm-thunderdome we don't have any plans past GPT-3 AFAIK
StellaAthena#3530: @asparagui Do you have personal contacts there? We (as an org) do have contacts there we've been talking with.
Daj#7482: I do have that one big thing I wanted to do past Neo, but still a bit off
Louis#0144: honestly the pile is kickass
Louis#0144: now that I have it working |
Louis#0144: lol
StellaAthena#3530: I'm excited to be cited in every paper you write for the next ten years @Louis
asparagui#6391: nobody personally but i can ask them for a favor sorta
Louis#0144: LOL
Daj#7482: I have met the guys in charge of TFRC a year ago and have their emails
asparagui#6391: well former tfrc people
StellaAthena#3530: It sounds like our existing contacts are stronger, but I appreciate the offer.
Daj#7482: But could be interesting if you think it's worth investigating aspara?
asparagui#6391: writing emails is easy, doing research is hard 😛
Daj#7482: Strong disagree
Daj#7482: lol
StellaAthena#3530: That has not been my experience so far lol.
StellaAthena#3530: @Louis Can you find out from Mark what it would take to get the lab to promote us? Ease of use seems to be his big concern, but maybe since you have the data already you can show him some cool things and we can work on ease of use?
bmk#1476: The next thing after GPT3 and Pile is 1T, Pile v2, and HUMONGOUS
StellaAthena#3530: Damn. those spikes are us?
Teven#6831: neat visualization
bmk#1476: O.o
Daj#7482: > The next thing after GPT3 and Pile is 1T, Pile v2, and HUMONGOUS
@bmk I've mentioned this several times before, but my next goal is making the first "actual" open source amplification-ish trained AGI
Louis#0144: > @Louis Can you find out from Mark what it would take to get the lab to promote us? Ease of use seems to be his big concern, but maybe since you have the data already you can show him some cool things and we can work on ease of use? |
@StellaAthena the bibliotik dataset is interesting to him but other lab members had questions on legality (quoting bookcorpus)
bmk#1476: That many people are downloading our stuff??
bmk#1476: I wasn't expecting anyone to download any of it until the Pile was actually functional
bmk#1476: None of the data is even documented yet, what are people even doing with it
bmk#1476: Awesome
Daj#7482: Who needs mainsteam media, we have reddit data hoarders
bmk#1476: We're hopefully going to have a much bigger spike once the actual Pile is done
StellaAthena#3530: I'm glad we can bring you guys more traffic. We appreciate the storage help you've given us and frankly this is probably the best way we can give back.
bmk#1476: Yeah, it was VonChair who helped us
Daj#7482: > lots of talk on hackernews and twitter though, fair initial documentation I'd say. This ingest was handled by one of my staffers, so I just woke up to all of this now
@-Archivist Really? Got a link?
StellaAthena#3530: @Daj Shawn's tweet has 1.2k likes and over 400 retweets.
Daj#7482: Ahh
StellaAthena#3530: Aran's tweet has ~500 likes and 100 retweets too
Louis#0144: @bmk check ur DMs for a sec
Louis#0144: lmao
bmk#1476: I feel like the Pile is going to get a lot more attention than Books3, in part because we're going to be coordinating an actual proper release
StellaAthena#3530: 100%
StellaAthena#3530: We should be prepared for tens of thousands of interactions.
StellaAthena#3530: (we won't be, but we should try) |
Sid#2121: > We should be prepared for tens of thousands of interactions.
@StellaAthena I am barely prepared for a single interaction, send help
bmk#1476: > @StellaAthena I am barely prepared for a single interaction, send help
@Sid this tbh
StellaAthena#3530: I love all the HackerNews Lawyers who don't know that fair use exists
bmk#1476: Pile will hopefully become the ImageNet of Language Modelling
StellaAthena#3530: There are legit legal questions (and in the US they're currently open questions), but "can researchers use copyrighted text" is not one of them.
StellaAthena#3530: > The question is..if a AI reads a book is it against copyright? Or is the trained model a derived work of those books?
StellaAthena#3530: r/badlegaltakes
-Archivist#7336: I'll be keeping an eye on our dmca inbox, but all those files are just txt versions of books we're already hosting in various formats under `/public/Books/` still, will be interesting....
> https://the-eye.eu/dmca.mp4
bmk#1476: @Louis btw I responded in dm
Louis#0144: Yeet
Louis#0144: Ty
Daj#7482: Guys, will I go to prison for reading a book and remembering its content?
Louis#0144: We start training in an hour! First real language model using the pile
bmk#1476: No but you are now a derivative work @Daj
bmk#1476: > We start training in an hour! First real language model using the pile
@Louis awesome
Daj#7482: All art is derivative, so I'm an artist, nice |
cfoster0#4356: @Daj brought to you by Viacom International
bmk#1476: Once the deduped version is done you'll be switching to that right?
Louis#0144: Yeah
Aran Komatsuzaki#5714: @Daj It's a thought crime, apparently.
bmk#1476: > @Daj depends on the book, don't start quoting mein kampf
@-Archivist he's in germany it's literally illegal don't worry
Daj#7482: > @Daj depends on the book, don't start quoting mein kampf
@-Archivist I read NRx if I want to masturbate to thinly veiled homosexual power fantasies
bmk#1476: Oh God no not the gay Nazis discussion again
Daj#7482: Mein Kampf isn't illegal
Daj#7482: Or wait maybe you can only buy the version with tons of comments
Daj#7482: I read parts of it while reading other history books
Daj#7482: It's just like, genuinely bad
Daj#7482: NRx shit is just as morally wrong but at least it's pretty
-Archivist#7336: never read it, but probably host it 🤷🏼♂️
StellaAthena#3530: > The dmca.mp4 file is a video of performance art in which about eight women in fancy dresses chant while vigorously mimicking male masturbation. I didn't watch to the end, but the activity appears to go on for a full ten minutes, and I bet there's a suitable finale.
What the fuck is this talking about.
Daj#7482: Everyone was super afraid of it when I was in highschool lol
Daj#7482: Like it was some kind of mindcorrupting infohazard |
Daj#7482: And it's basically an old timey wordpress blog
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/770334173243113523/unknown-15.png
bmk#1476: Just gonna remind y'all of this
Sid#2121: > And it's basically an old timey wordpress blog
@Daj Now I'm imagining Hitler posting cookie recipes and plugging his instagram account
Daj#7482: Absurdity is part of my personal brand
Daj#7482: > @Daj Now I'm imagining Hitler posting cookie recipes and plugging his instagram account
@Sid After reading some actual history books, I am again and again surprised how thoroughly unimpressive impactful dictators throughout the ages were as people
Aran Komatsuzaki#5714: @Sid I'd say twitter like our current Fuhrer is doing
Daj#7482: Not just evil but just...lame
Daj#7482: something something banality of evil, I'm off topic again
Daj#7482: > What the fuck is this talking about.
@StellaAthena hahahah
StellaAthena#3530: No seriously. That's apparently in the data? People are talking about it on HN
bmk#1476: Only the greatest dmca policy of All Time
bmk#1476: It's the dmca policy of The Eye
gwern#1782: (imagine an ai as better at culting than the best human cult leaders ever (like jesus or muhammed or hitler or mao) as muzero is better at chess/go than the best human players ever)
bmk#1476: Some say this has already happened |
StellaAthena#3530: I thought this was the Eye's DMCA policy: https://the-eye.eu/dmca/
bmk#1476: Zuccbook: :guilty:
bmk#1476: @StellaAthena it's a joke
Daj#7482: > (imagine an ai as better at culting than the best human cult leaders ever (like jesus or muhammed or hitler or mao) as muzero is better at chess/go than the best human players ever)
@gwern Don't worry guys! We'll just align AIs to human morals and make them a million times stronger, what could go wrong?
bmk#1476: https://the-eye.eu/dmca.mp4
Daj#7482: oh god why have I seen this video before
Daj#7482: where, how
gwern#1782: I assume because it's modern high art
Daj#7482: Ah yes, I'm subscribed to modern high art weekly
gwern#1782: people centuries from now will study it and works like _hamilton_ as the highwater mark of western civilization, long after vile racists and sexists like beethoven have been forgotten
bmk#1476: > EDIT: there is unfortunately no "suitable finale."
Daj#7482: My favorite art is the kind that is just degenerate optimizer edgecases on social games
bmk#1476: Paperclip optimizer but it's art
Daj#7482: > people centuries from now will study it and works like _hamilton_ as the highwater mark of western civilization, long after vile racists and sexists like beethoven have been forgotten
@gwern Hot take: Marvel movies are the Shakespeare of our time and mean far more than any "high art" ever made
gwern#1782: nah, the marvel movies have practically already been forgotten
Daj#7482: I don't know in what bubble you live in lol
Daj#7482: I'm derailing again
gwern#1782: their complete disappearance wasn't quite as abrupt as _game of thrones_, but if anything is going to be the shakespeare of our time, it'll be lord of the rings or harry potter |
bmk#1476: I've never watched a single marvel movie
zphang#7252: `why does avatar have no cultural legacyyyyyyy`
gwern#1782: (because it was shit?)
StellaAthena#3530: Anyways, again you children are very **very** off topic. Short term tasks:
1. We need to decide if we want to go with HF or if we want to shop around for collabs. Relatedly, will HF be annoyed if we collab with them and others @Teven?
2. We (I) need to clean up our website. If anyone has graphic design experience that would be **exceptionally useful** as we currently use shitty free internet graphics. I would like to make real card icons.
3. @Daj, you still owe me an "about us" for the website.
4. **DONE** I need to email NAACL and get details about their policies.
5. We need to make a list of people with large networks who are going to promote us. @gwern, @shawwn, and @Aran Komatsuzaki is the current list. Who else do we know / who can we reach out to?
Daj#7482: > their complete disappearance wasn't quite as abrupt as _game of thrones_, but if anything is going to be the shakespeare of our time, it'll be lord of the rings or harry potter
@gwern Yep I can agree with that
bmk#1476: HF seems great and possibly the most suitable partnership we could find for Pile
cfc#2691: what`s HF?
cfc#2691: i'm kind of lost
Teven#6831: HF has no issues with multiple collabs but bigger orgs will for sure
StellaAthena#3530: @bmk I don't disagree, but I want at least the appearance of consensus before I unilaterally make decisions for us.
StellaAthena#3530: @cfc HuggingFace. Teven can explain who they are best (they work there)
Daj#7482: @StellaAthena https://youtu.be/YFUXJ0MRwTU?t=22
cfc#2691: thanks!
zphang#7252: quick point before going for PR mode: I'm a dummy who doesn't understand any of the rights or legality of the datasets work so far. Is there a tl;dr (both for me, as well as any newcomers?) |
bmk#1476: Ok well my vote is cast
Daj#7482: And I do? I'm sorry I completely blanked on the About Us
bmk#1476: > quick point before going for PR mode: I'm a dummy who doesn't understand any of the rights or legality of the datasets work so far. Is there a tl;dr (both for me, as well as any newcomers?)
@zphang Books3 is the most legally questionable component
Teven#6831: Dealing with big tech lawyers is a significant part of our time before releases; it's going to be a slog to get them to agree on anything as a multiple-org release
bmk#1476: Aside from that, I'm pretty sure everything else is either entirely or mostly legal (or at least if there's any trouble we won't be the only ones affected)
bmk#1476: > Dealing with big tech lawyers is a significant part of our time before releases; it's going to be a slog to get them to agree on anything as a multiple-org release
@Teven which big techs might we be interested in co releasing with?
gwern#1782: _adds "What is more likely to be the 'Shakespeare of our era': Lord of the Rings / Harry Potter / the Marvelverse / Star Wars" to his list of questions to survey gwern.net readers on someday_
Daj#7482: > _adds "What is more likely to be the 'Shakespeare of our era': Lord of the Rings / Harry Potter / the Marvelverse / Star Wars" to his list of questions to survey gwern.net readers on someday_
@gwern Oh boy this will get some ratio
StellaAthena#3530: Okay, if you're interested in having legit organizational discussion let's move to #the-pile. That is both more appropriate a location and will avoid *\*gestures in Connor's general direction\**
bmk#1476: Yes
gwern#1782: @Daj who said I was going to tweet it? I am mindful: https://www.penny-arcade.com/comic/2020/10/26
Daj#7482: > Okay, if you're interested in having legit organizational discussion let's move to #the-pile. That is both more appropriate a location and will avoid *\*gestures in Connor's general direction\**
@StellaAthena I'll let it slide
gwern#1782: anyway, as far as Jason's question goes - these datasets are all flagrantly illegal and copyright violations, is there really any tldr beyond that? you just do it because the risk is fairly minimal and the gain so great
zphang#7252: that makes it complicated for orgs to collab, right?
gwern#1782: yes. generally, orgs just pretend to not notice
gwern#1782: 'gosh, we didn't collect imagenet images. no one told us that there might be any copyright problems' |
zphang#7252: "we did train on imagenet though, don't ask how"
Daj#7482: :books2:
cfoster0#4356: All of the datasets in the Pile have different situations. USPTO is almost certainly in the clear
cfoster0#4356: Same with FreeLaw, I believe
cfoster0#4356: Anyways, a discussion for #legal
bmk#1476: @gwern except for Books3 I'm pretty sure everything else is mostly legal
Louis#0144: waiting for lucidrain to get online so I can be annoyed at him about his routing transformer implementation
Aran Komatsuzaki#5714: @Louis you'd better send an email to him
Louis#0144: tbh im so used to doing all my academic discussions either over discord or twitter
Louis#0144: that emailing doesnt even occur to me as an option
Aran Komatsuzaki#5714: he likes it for some reason
Aran Komatsuzaki#5714: maybe he's old-school
gwern#1782: @bmk I am deeply skeptical that all of these datasets come with copyright license terms permitting unlimited redistribution perpetually
Louis#0144: there has been no such court case yet
Louis#0144: we have ~10yrs imo
Louis#0144: lol
Louis#0144: anyway if we really cared about this, there are so many books in the public domain
Louis#0144: like theres *so many*
Louis#0144: much more than we have in our dataset rn
Louis#0144: imho I think this is an issue with copyright and IP laws. Not actually an issue with NLP research
Louis#0144: using data to train an AI should make the data a derivative work
gwern#1782: yes, if you process the data, it's a derivative work, and that's the problem! (the AI is a transformative work, so at least that's safe)
bmk#1476: > @bmk I am deeply skeptical that all of these datasets come with copyright license terms permitting unlimited redistribution perpetually
@gwern at least if there are legal issues we won't be the only ones afflicted
circuit10#0158: Am I getting kicked or is Discord just being buggy?
gwern#1782: sure. as I said, everyone does it. but let's not go around saying silly things like "oh yeah it's totes legal this dataset uploaded only for commentary purposes under fair use"
Veedrac#0443: Not sure where to throw these ideas, but I want to dump them somewhere.
1. On context stuffing in training
Normally you train by filling the context with random documents from the corpus. To improve cases where pages are split over multiple URLs, as well as encourage longer-distance learning, you might find it's better to fill the context with lexicographically contiguous URLs, provided their domain names match.
Care should be taken if the dataset contains almost-duplicates with almost-duplicate URLs, but that needs to be handled anyway.
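A rough sketch of that packing strategy, purely illustrative: `docs` as a list of `(url, text)` pairs, `tokenize`, and the context length are stand-ins for whatever the real training pipeline uses, and in practice the short per-domain remainders would be padded or separated with EOT tokens rather than emitted as-is.

```python
from urllib.parse import urlparse

def pack_contexts(docs, tokenize, ctx_len=2048):
    """Pack training contexts from (url, text) pairs, keeping same-domain,
    lexicographically adjacent URLs together instead of shuffling randomly."""
    docs = sorted(docs, key=lambda d: d[0])          # lexicographic URL order
    contexts, current, current_domain = [], [], None
    for url, text in docs:
        domain = urlparse(url).netloc
        if domain != current_domain and current:     # don't stuff across domains
            contexts.append(current)
            current = []
        current_domain = domain
        current.extend(tokenize(text))
        while len(current) >= ctx_len:               # emit full windows, carry the rest
            contexts.append(current[:ctx_len])
            current = current[ctx_len:]
    if current:
        contexts.append(current)                     # short remainder; pad or join with EOT in practice
    return contexts
```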
Veedrac#0443: 2. On BPEs
BPEs are terrible, terrible things, but they're also necessary to make use of a small context window. Here are some downsides:
a) ‘cat’ and ‘ cat’ are disconnected BPEs.
b) ‘cat’ and ‘Cat’ are disconnected BPEs.
c) ‘1995’ is a BPE, which makes math really hard. |
d) Word structure is hidden, making rhymes and such hard.
e) ‘aluminium’ and ‘aluminum’ are disconnected BPEs.
Let's try to design a better BPE encoding. First, consider this sentence triplet. We can solve a) and b) by encoding it like so, with implicit spaces.
JOIN_CAP let ' JOIN s try to design a better CAPS bpe encoding . CAP first , consider this sentence triplet . CAP we can solve a ) and b ) by encoding it like so , with implicit spaces .
The rule is that there is an implicit space before every alphabetical BPE, except where suppressed by `JOIN`. This has very little overhead in the general case, and avoids the need for separate BPEs for capital forms of words or words with spaces. It degrades gracefully for other languages.
Capitals can be encoded by `CAP`, which capitalizes the first letter of the next BPE, and `CAPS`, which capitalizes until the next space.
There are also `JOIN2`, `JOIN3`, etc., which apply to N successive BPEs, as well as `JOIN_CAP` and `JOIN_CAPS`, to increase efficiency.
Veedrac#0443: ‘BPE’ can also be encoded as `CAPS b JOIN2 p e`, which you'd use if ‘bpe’ was not an available BPE, or suboptimally as `CAP b JOIN_CAP p JOIN_CAP e`, which you'd never use.
Multi-BPE words will use `JOIN`, like `multi JOIN part`, which means multi-part words have an extra BPE, but as they should be less common than spaces, efficiency should remain better than having spaces separate.
For c), I claim numbers are sufficiently infrequent, and are particularly harmed by BPEs, so they should simply be excluded from BPEs altogether.
For d), perhaps it is better to avoid a lot of the arbitrariness of joining BPE pairs at arbitrary locations. Instead, do a simultaneous forward and backward pass, balanced so P(prefix) × P(suffix) is maximal, and read inwards to meet in the middle. I'd hope this would encourage more stable BPE prefixes and suffixes, which should help with rhyming and morphology. Note that if the prefix or suffix is multiple BPEs, adding onto the last BPE in the stack might need to merge recursively.
|
For e), this is what BPE dropout is meant to do. Going further, if using BPE dropout, then the network is trained to generate any valid BPE decomposition. So it seems like during training you should accept any prediction that is in accordance with the source text, eg. rather than training to generate only the token `dropout` at position `k`, you'd also accept the generations `drop`, `dro`, and `dr` and `d`, with discounted probabilities. This might not work without BPE dropout, lest the resulting network only learn the first part of an alternate segmentation, and get itself stuck during inference.
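Purely as an illustration of the scheme sketched above (my reading of it; the literal token strings and the choice that every BPE, alphabetical or not, counts toward a pending `JOINn` are assumptions), a toy decoder might look like this:

```python
def decode(tokens):
    out = []
    joins = 0            # how many upcoming BPEs have their implicit space suppressed
    cap_next = False     # CAP: capitalize the first letter of the next BPE
    caps_on = False      # CAPS: uppercase everything until the next emitted space
    caps_used = False
    for tok in tokens:
        if tok in ("CAP", "CAPS", "JOIN_CAP", "JOIN_CAPS"):
            cap_next = cap_next or tok.endswith("CAP")
            if tok.endswith("CAPS"):
                caps_on, caps_used = True, False
            if tok.startswith("JOIN"):
                joins = max(joins, 1)
            continue
        if tok.startswith("JOIN"):                   # JOIN, JOIN2, JOIN3, ...
            joins = max(joins, int(tok[4:] or 1))
            continue
        if tok[0].isalpha() and joins == 0 and out:  # implicit space before alphabetical BPEs
            if caps_on and caps_used:
                caps_on = False                      # CAPS stops at the next space
            out.append(" ")
        joins = max(joins - 1, 0)
        if caps_on:
            out.append(tok.upper())
            caps_used = True
        elif cap_next:
            out.append(tok[0].upper() + tok[1:])
        else:
            out.append(tok)
        cap_next = False
    return "".join(out)

# decode("JOIN_CAP let ' JOIN s try".split())  -> "Let's try"
# decode("CAPS it ' JOIN s".split())           -> "IT'S"
```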
Aran Komatsuzaki#5714: We pretty much have the same opinion and similar argument with BPE (2). About (1), I'm thinking retrieval based LM will be prominent soon. Time will tell.
Veedrac#0443: It's easy to see why BPEs are bad, it's kind'a crazy they work so well anyway. I've not seen another proposal that helps the issue without burning context space (‘just use characters!’) or changing the architecture (characterBERT). My proposal should be pretty drop-in I think.
gwern#1782: I'm not sure context stuffing in training is a good idea. that does not reflect how you use it at sample time, after all. you don't grab random pages from the same domain just to fill up the context window completely!
Aran Komatsuzaki#5714: @gwern right
Veedrac#0443: Do you mean vs. empty context, or vs. completely random pages?
Veedrac#0443: I sort of agree, but you really want to encourage the model to learn long-distance relations.
Aran Komatsuzaki#5714: That's what retrieval-based LM is for
Aran Komatsuzaki#5714: I'm going to talk about this with interns of Noam Shazeer and Aurko Roy. It's a hot topic right now.
Aran Komatsuzaki#5714: You can check my most recent paper for the idea.
Veedrac#0443: Yah, but then you're getting back into the sweeping architectural changes. I'm trying to talk about things you could do largely drop-in.
gwern#1782: you can want it, but stuffing vaguely related data in is not an obvious way to do it...
Veedrac#0443: Uh, why not?
Veedrac#0443: If it reduces perplexity, which like, surely it must, then it must be learning long-distance relations.
gwern#1782: because you're constructing a weird distribution which is not like the actual distribution of texts, nor what people want to use it for
gwern#1782: it's sort of like a weird crippled retrieval model which retrieves 1 random document and doesn't improve
gwern#1782: imagine prepending a random Wikipedia article to every text. will the wikipedia article be useful sometimes? undoubtedly. is that a good idea? well...
gwern#1782: do you have any examples of a gimmick like this working in research?
Veedrac#0443: This is why I asked vs. empty context, or vs. completely random pages. GPT-3 already stuffs context.
Veedrac#0443: It just does it *completely* randomly. |
gwern#1782: if you can't come up with a good strategy, doing something at random is usually better than half-assing it, since it'll average out
gwern#1782: and anyway, doesn't the implementation train all lengths simultaneously? so it does still do useful training within the first passage
Veedrac#0443: Yes, sure, that's why they do it.
Veedrac#0443: I think it's obvious that neighboring URLs will very frequently be related, often extremely, even in the case of Wikipedia.
Veedrac#0443: But even if you disagree, or think it'll be too infrequent to meaningfully help, I don't understand why you'd think the other cases would penalize the model such that it turned out worse than random.
gwern#1782: so, you don't have any examples of anyone doing this successfully?
Teven#6831: Well what you really don't want is the model finding correlations in training that are not present in the real world
bmk#1476: @Veedrac my big issue with this is it's very language specific
Teven#6831: which is what the random strategy approach is trying to avoid
bmk#1476: What about japanese, for instance
alstroemeria313#1694: i noticed that gpt sometimes wanders off-topic and starts generating text that looks like it should be part of a different type of document. is random context stuffing potentially responsible for this behavior?
Veedrac#0443: @bmk It'll be fine for Japanese, since the implicit spaces won't happen.
bmk#1476: What would caps mean if put before a token that can't be capitalized
Veedrac#0443: It goes until there's a word boundary. So it'll do nothing.
gwern#1782: @alstroemeria313 unless it generated an EOT (it knows extremely well what EOTs are since it sees them in practically every training sample/batch), I would think not. that's probably reflecting messed up formatting within documents, especially in dumping html to plain text, which seems to strip out a lot of important semantic formatting, like blockquotes or sections or authors
Veedrac#0443: Eg. `CAPS it ' JOIN s` gives `IT'S`
bmk#1476: Would everything have to be joined for Japanese to prevent spaces
gwern#1782: @alstroemeria313 I've noticed a lot of artifacts in GPT-3 which I *think* are due problems like that. for example, gpt-3 often generates blank lines where an image ought to go. or it'll repeat the same comment twice in a row before 'replying' to it (I assume that this is because the first copy is the 'original' and the second is a 'quoted' reply before the second commenter writes 'their' reply)
Veedrac#0443: No because implicit spaces only happen for a subset of BPEs.
bmk#1476: This seems incredibly ad hoc |
Veedrac#0443: Well it's just a space optimization for latin alphabets that's meant to be easy for the model to learn.
Veedrac#0443: as opposed to having duplicates of every BPE with spaces in
bmk#1476: Yeah but there are other alphabets
Veedrac#0443: which is hard to learn
Veedrac#0443: Yes but the worst case is that they have to encode spaces manually, no?
Veedrac#0443: Even if there was some latin or latin-esque alphabet language that doesn't use spaces, it'd just result in the occasional JOIN10
bmk#1476: This is also even less homomorphic than BPE
bmk#1476: To decode part of a sequence of tokens you'd have to handle a load of cases
gwern#1782: (is using an efficient attention and then a character encoding really that hard? there's a bazillion of them now, and plenty of implementations floating around)
gwern#1782: (all you need is to increase the context window to like 2048*3=6144. that's not hard. plenty of papers show equivalence to dense attention at lengths that small)
Veedrac#0443: Yah but then you risk issues if the efficient attention turns out not to scale well to GPT-3 sizes.
Veedrac#0443: Whereas a better BPE is just a better BPE
bmk#1476: What about codepoint encoding
bmk#1476: 1 token = 1 codepoint
Veedrac#0443: I had a point about that but I removed it
bmk#1476: There's only like, what, 2 million codepoints?
Veedrac#0443: I don't think it helps you much, since most codepoints are barely used
bmk#1476: So?
bmk#1476: Just do hashing trick to get token embeddings
bmk#1476: Also only 140k codepoints are actually assigned |
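A minimal sketch of the hashing trick bmk mentions, purely illustrative: the bucket count, number of hashes, and embedding dimension are made-up values, and in a real model the table would be a learned parameter rather than a fixed random matrix.

```python
import numpy as np

NUM_BUCKETS = 65536   # shared rows, far fewer than the ~1.1M possible codepoints
NUM_HASHES = 2        # summing a couple of hashed rows reduces collision damage
DIM = 512

table = np.random.normal(0, 0.02, size=(NUM_BUCKETS, DIM)).astype(np.float32)

def codepoint_embedding(ch):
    """Embed one character: hash its codepoint into the shared table a few times and sum."""
    cp = ord(ch)
    rows = [hash((cp, seed)) % NUM_BUCKETS for seed in range(NUM_HASHES)]
    return table[rows].sum(axis=0)

def embed(text):
    return np.stack([codepoint_embedding(ch) for ch in text])  # one token per codepoint
```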
Veedrac#0443: You want to support arbitrary bytes anyway IMO \*shrugs\*
Veedrac#0443: I don't really understand your homomorphism concern @bmk. As long as it's easier to learn, I don't see why it matters.
bmk#1476: It makes working with token sequences hell
bmk#1476: BPE is already annoying enough, we don't need more weird edge cases
Veedrac#0443: You just mean on the programming encode-decode side?
Veedrac#0443: because there's carried context?
bmk#1476: You have a sequence of tokens, you want to decode the last n words
bmk#1476: That's annoying with BPE, absolute hell in yours
Veedrac#0443: count = 0
for i, tok in enumerate(reversed(tokens)):
    count += (" " in autospace(bpes[tok])) - join_bpe[tok]
    if count > N + 10: break
last_n_words(decode(tokens[-i:]), N)
bmk#1476: This is horrifying
Veedrac#0443: Not entirely trivial to decode backwards, but it's pretty much the same. You just need to buffer a bit so the largest JOIN is off the end.
Veedrac#0443: But really I'd never do something like this. Just decode your tokens upfront.
bmk#1476: But say I need to know which tokens correspond to the last two words
bmk#1476: I'd have to do that
bmk#1476: And it would be a big mess
Veedrac#0443: This feels like a really weird problem.
Veedrac#0443: Like you ran a hundred-billion parameter model to generate those tokens.
Veedrac#0443: Surely you can afford a decode pass.
Veedrac#0443: Then it's just text. |
bmk#1476: It's not computational cost
bmk#1476: It's inelegant as hell
Teven#6831: > FB has been building a lot of multilingual resources too; that would overshadow a big English-only dataset
so they've actually already released mC4 with their translation model, although the model overshadowed the dataset 🙂
Veedrac#0443: But the elegance is identical with BPEs or with my encoding or with GZIP or whatever.
Veedrac#0443: You decode, then you have text, then you find the last two words.
Veedrac#0443: Decode *is* more complex, but we're talking a few lines you only need to write once.
bmk#1476: I need to figure out which tokens in the original correspond to the last n words
Deleted User#0000: @shawwn start a books run on GPTNeo! It's 3 commands to start training
Deleted User#0000: i want to see what falls out the other end 😄
Sid#2121: I would sell my soul if people other than me were to actually start training runs lmao
Sid#2121: (it's much more useable now than when you last tried it @shawwn 😆 )
bmk#1476: Since tokens will be able to affect other tokens far away that completely and utterly breaks any kind of multi word logprob stuff
Veedrac#0443: Tokens can't affect past word boundaries
bmk#1476: JOIN10
Veedrac#0443: Well, space boundaries
Veedrac#0443: if you've JOIN10, you've removed the boundaries
Veedrac#0443: which is the same problem you get with BPE
Veedrac#0443: like what's the probability of ‘cat’ if your BPE is ‘ cat’
Veedrac#0443: or ‘ cat.’ |
Veedrac#0443: And tbh you probably don't need more than JOIN3, maybe JOIN4
Logan Riggs#7302: Anyone have luck getting the activations of each layer of GPT2 on tensorboard (tensorflow v1)? tf.summary doesn't seem to cut it due to the while loops, and I haven't gotten tf.contrib.summary to work yet.
gwern#1782: shawwn was doing that... or was it the weights?
AI_WAIFU#2844: Hey, what happens if 2 people host data that is statistically random, but when XOR'd together outputs copyrighted works?
gwern#1782: same thing that happens if you distribute _titanic_ AES-encrypted
gwern#1782: _hands @AI_WAIFU some reading material: https://ansuz.sooke.bc.ca/entry/23_
bmk#1476: > Hey, what happens if 2 people host data that is statistically random, but when XOR'd together outputs copyrighted works?
@AI_WAIFU someone alreday did this
bmk#1476: lemme try and find it
gwern#1782: indeed, and my link discusses why Monolith doesn't protect you legally
bmk#1476: ah
gwern#1782: ai_waifu is very, very far from the first nerd to wonder 'what if we used XORed files or one-time pad versions of copyrighted files'
bmk#1476: i remember reading that colored bits post
StellaAthena#3530: > That sounds profound only if you're a Colour-blind computer scientist; it would be boring nonsense to a lawyer because lawyers are trained to believe in and use Colour, and it's obvious to a lawyer that the Colour doesn't magically bleed to the entire universe through the hypothetical random files that might be created some day. *You could create the file randomly, but you didn't.*
StellaAthena#3530: I wonder if there are people who read this and go “yeah those dumb lawyers” the way I read it and go “yeah those dumb computer scientists”
bmk#1476: that's me lol
bmk#1476: even after reading that post many times i still feel like color is a bit of a dumb concept
bmk#1476: like i *sort of* get it but i still think the cases where coloring your bits makes sense are very limited
StellaAthena#3530: Have you read "Reflections on Trusting Trust"?
bmk#1476: nope |
AI_WAIFU#2844: *Reads furiously* I mean, I get that. What I'm proposing is the equivalent of 2 identical twins flipping a coin to decide who will commit a murder, and then smugly telling the cops they can't prove who did it.
AI_WAIFU#2844: Both parties can claim they're just posting random numbers.
StellaAthena#3530: It’s a classic computer security paper that describes functionally undetectable backdoors in any system.
AI_WAIFU#2844: Link?
StellaAthena#3530: The paper then goes on to discuss ways to “solve” this problem (quotes because it’s not really a solution from a technical POV) via color
StellaAthena#3530: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf
StellaAthena#3530: The abstract simply reads
> To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.
StellaAthena#3530: This was a Turing Award lecture, not a traditional research paper. The author created Unix, grep, endgame databases for chess computers, and the programming language B. Most people haven’t heard of B, but the programming language C was so named because it was the successor to B.
StellaAthena#3530: I finally got to the end of the article (a bit distracted) and the security discussion especially the bit about randomly generated numbers hits it spot on
StellaAthena#3530: What do you have to say about that @bmk? Do you see the color there?
bmk#1476: the random number one?
bmk#1476: so i think that one's pretty reasonable
bmk#1476: but i also feel like it's potentially *overgeneralizing*
bmk#1476: i.e this argument might not apply to much more than random numbers and some other specific cases
gwern#1782: don't you solve it by using different independent compilers?
StellaAthena#3530: @bmk that makes sense to me. Hard to say if I agree or not without looking at a list of examples.
StellaAthena#3530: @gwern can you obtain two python compilers whose ancestry is disjoint?
StellaAthena#3530: Or two Linux kernels
gwern#1782: there are plenty of unices and other oses which are able to run a c compiler |
gwern#1782: https://arxiv.org/abs/1004.5534
betteropsec#0514: I keep getting removed from the server, any idea why?
StellaAthena#3530: @betteropsec discord is bugged. That happens sometimes.
Louis#0144: validation loss is explosively increasing
Louis#0144: noice
gwern#1782: have you tried frogblasting the vent core?
Louis#0144: omg tru
Louis#0144: i'll mention it to the team thanks
bmk#1476: in particular, you could make some kind of argument like (i haven't fleshed this out yet) when you generate random numbers, you're sort of counterfactually imagining all the potential universes you couldve gone into had you generated a different number, and you have confidence that this process will indeed send you into universes, say, uniformly, and your confidence is in the *process* and those bits are just some way to get evidence about which universe you ended up in
bmk#1476: in particular, the absurd example of the 4'33 stays absurd because you get no evidence about the process from the bits
bmk#1476: this isnt a rigorous argument but i feel like theres an argument hiding in there somewhere
AI_WAIFU#2844: Technically since physics is reversible, the color information is encoded somewhere, just maybe not in the bits.
bmk#1476: i mean in a practical sense
cfc#2691: > https://arxiv.org/abs/1004.5534
@gwern finally i can sleep right knowing there's a protection against this
cfc#2691: i thought i'd have to someday write my own C compiler to compile a GCC i've read
cfc#2691: i think it wouldn't work for this compiler https://github.com/xoreaxeaxeax/movfuscator
ben waldner#6938: > I keep getting removed from the server, any idea why?
@Ninjinka#2073
|
Same here, but I just continue to rejoin each time
StellaAthena#3530: @ben waldner @betteropsec I believe there’s a bug that will kick people sometimes without reason. This was definitely a problem a couple months ago and kicked a lot of people out of the DEF CON discord. I actually got kicked in the middle of a talk I was giving lol.
StellaAthena#3530: It’s also possible that you joined via a temporary invite. Where did you get the invite to this server? If you close the app and then reopen it, does that kick you?
StellaAthena#3530: Before testing this, copy this link somewhere safe: https://discord.gg/PPssEr
StellaAthena#3530: This is a non-temporary invite link.
StellaAthena#3530: You have that?
bmk#1476: We were originally doing that but it was a bit too much work so we put it off
StellaAthena#3530: We’re lazy mo-fo’s but if you have it we’ll take it
bmk#1476: The main issue is that processing the data into the format we need is slow and difficult and we didn't have the time to do it
bmk#1476: The problem is we were using the pushshift data
bmk#1476: And we need to join it into trees of comments
bmk#1476: And the data isn't in tree form
bmk#1476: It's just a bunch of comments dumped
bmk#1476: And we don't have the engineering time or cpu or disk space to make it work
bmk#1476: We want entire comment trees
bmk#1476: But ps dumps are ordered randomly
Sid#2121: GPT is trained to mimic whatever distribution you feed in. The problem with the tree structure is it only works in the context of the reddit architecture. If you straight converted that to plain text and read it in order, the conversation wouldn't make sense
bmk#1476: Long context is good
bmk#1476: Our AI likes having a lot of context
Sid#2121: If people are going to be using your trained model for chatbots for example, this means it will generate conversations that have no causal connection to each other and where one message doesn't necessarily follow on from the next
bmk#1476: So 1 book is way better than n tweets that add up to the same length
Sid#2121: reddit is fine, it's just a bit of effort to parse the conversation trees into meaningful threads
Sid#2121: anything clean and with a long ish context is pretty good
Sid#2121: @-Archivist we have two subtitle sets in the-pile v1
Sid#2121: always happy to have more tho
Sid#2121: we have a script to gather youtube subtitles that could essentially be extended to infinity
Veedrac#0443: Is the problem with Reddit just that you need someone to clean it? & It's only like ~300-400 GB of data?
StellaAthena#3530: > so articles would be better than reddit? what about tv news transcripts?
@-Archivist Yes. We have included many of these things deliberately. I couldn't find a large movie/play script dataset though, if you have something like that that would be exciting.
StellaAthena#3530: > Is the problem with Reddit just that you need someone to clean it? & It's only like ~300-400 GB of data?
@Veedrac partially, but also reddit comment threads (also twitter comment threads) are not well structured to be run through GPT-3. The longer and more coherently organized the text is, the better: we would much rather have post -> response -> reply to response all in one document than randomly shuffled reddit comments.
Veedrac#0443: Yeah that's part of what I meant by clean.
Veedrac#0443: Is there a date you'd want this before, if it were to be useful?
gwern#1782: @-Archivist re reddit threading: the way I would put it is, if you dump in random comments, in arbitrary order, what GPT will learn is to generate individual disconnected standalone reddit comments - short pieces of text. sometimes as trivial as "Yes." or "No." If you dump in serialized threads (say, every path through the comment trees), GPT has to learn much much more interesting things: how to take turns, how to model opinions of different commenters and track them over the course of a long conversation, how to make and rebut arguments, what to quote from a previous comment, how to reason and infer. even a trivial comment like "Yes." is meaningful and educational if it comes at the end of a comment chain where people are asking clarifying questions or summarizing/restating. generating a random "Yes." is boring and uneducational. just emit that 0.X% of the time at random. but generating "Yes." at the end of a long discussion of Constitutional law and what the Second Amendment does permit (and correctly predicting that it's "Yes" as opposed to, say, "No"), *that* is very nontrivial and requires deep intelligence
gwern#1782: it's sort of like asking GPT to model text where you sorted the letters in a sentence, vs the original sentence. which is more meaningful, the preceding sentence or its sorted version: " ',.GPTaaaccddeeeeeeeeeeeeeeeefgghhhiiiiiikkllllmnnnnnnnooooooorrrrrssssssssttttttttttttuvwxy"?
gwern#1782: as Democritus says, '"comedy" and "tragedy" come to be out of the same letters'. the order is where all the information is, and this is true at every level. most of the meaning of comments comes from the context and being in the proper order. a comment on its own often means little
gwern#1782: the more the original true semantic ordering and context are there for GPT to learn, the more it's able to learn, and the more interesting the things it will learn
Hatchling#4049: For some weird reason, this server and only this server keeps removing itself from my server list, so I keep having to visit https://discord.com/invite/MjSbyKa to rejoin
aquajet#7800: We had thought about going for a hybrid approach for tree-structured data: make a single comment chain for every top level comment
aquajet#7800: https://cdn.discordapp.com/attachments/729741769738158194/770681166168588328/991538 |
aquajet#7800: For example this is the parsing of item 991538 on hacker news, top-level comments are separated by `------` and subcomments are separated by `~~~`
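As a sketch of the serialization gwern describes (every root-to-leaf path through a comment tree becomes one document), assuming pushshift-style rows with `id`, `parent_id`, and `body` fields; this is an illustration, not the Pile's actual processing code, and the HN format above is a variant of the same idea.

```python
from collections import defaultdict

def thread_documents(comments):
    """Join flat comment dumps into trees and emit every root-to-leaf chain as one document."""
    by_id = {c["id"]: c for c in comments}
    children = defaultdict(list)
    for c in comments:
        children[c.get("parent_id")].append(c["id"])

    def walk(cid, path):
        path = path + [by_id[cid]["body"]]
        kids = children.get(cid, [])
        if not kids:                      # leaf: the whole chain is one training document
            yield "\n\n".join(path)
        for kid in kids:
            yield from walk(kid, path)

    # roots are comments whose parent isn't another comment (i.e. it's the submission itself)
    for cid, c in by_id.items():
        if c.get("parent_id") not in by_id:
            yield from walk(cid, [])
```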
StellaAthena#3530: @Hatchling does this link solve the problem: https://discord.gg/PPssEr
Hatchling#4049: I guess I'll find out soon enough
StellaAthena#3530: @Hatchling Let me know if it stops happening
gwern#1782: so a Nature reporter is interested in GPT-3. would it be a problem if they showed up here?
StellaAthena#3530: Nature has reporters?
gwern#1782: yes, who do you think writes their media articles
Sid#2121: please send them our way
gwern#1782: @Matt Hutson
Sid#2121: Hey @Matt Hutson 👋
Sid#2121: me, @Daj , @bmk or @StellaAthena are the people to ask if you have any questions about this project specifically
Matt Hutson#9263: Hello. First time using Discord. I have no idea what any of this is or how to use it!
Sid#2121: if you have general GPT questions, everyone here is pretty knowledgeable about it, so I would shout into the void
Sid#2121: hah, that was me a few months back, dw, it's mostly pretty simple
gwern#1782: (discord is like slack but edgier)
StellaAthena#3530: Howdy @Matt Hutson 🙂 Welcome to the garage we are building GPT-3 in.
Matt Hutson#9263: Ha. Thanks.
StellaAthena#3530: So what can we help you with?
Matt Hutson#9263: Just getting situated. I'm reading the Google Doc. I reached out to Gwern for an interview after reading his GPT-3 coverage and he sent me here.
bmk#1476: Sorry if the Google Doc is a bit out of date, we've been too busy working on things to document them well haha |
bmk#1476: But we can get you up to speed with the happenings around here
StellaAthena#3530: Gotcha. Well, welcome. We are a group of NLP researchers, data scientists, and AI aficionados who decided that we were going to try and build an open source replica of GPT-3. That has since spun off into several loosely connected research projects. We also talk about language modeling research in general.
Our main research channels are:
#gpt-neox-devs: for discussing the model. This is done and we have trained GPT-2 scale models.
#the-pile: for discussing data. OpenAI didn't publish their training data, so we made our own. Version 1 (1.5 TB) is complete and we are prepping for public release. We hope to develop a 10 TB multilingual version in the future.
#lm-thunderdome: for discussing model evaluation. We haven't quite finished implementing all the evaluation tasks GPT-3 was evaluated on, but we're most of the way there.
Right now our largest limitation is compute. Training GPT-3 is *very* expensive. We are part of Google's TensorFlow Research Cloud (TFRC) program, but so far can only train GPT-2 scale models. We are discussing getting more compute so that we can train a GPT-3 scale model. To be blunt, we have no hope of affording it unless someone donates massive amounts of high power computing.
bmk#1476: I'm in charge of the data stuff, so I'll give a summary of that
StellaAthena#3530: @Daj Founded the group and runs the whole thing
@Sid heads up modeling stuff
@bmk heads up data stuff
And I'm our head (read: only) project manager. My job is to remind people we are a channel for doing research rather than posting memes. I also work on data stuff under BMK, especially data ethics stuff.
StellaAthena#3530: Many other people are highly knowledgeable about our work and about GPT-3 as well.
bmk#1476: We've been collecting all sorts of data: websites, research papers (math, medical, philosophy, even ML), books, etc. The repo is here https://github.com/EleutherAI/The-Pile and we're currently putting together a paper for NAACL (although I don't think we're allowed to publicize it too much because of the anonymity period?) that will give a very thorough analysis of the data, as well as provide very detailed info about how the dataset was constructed. (The paper stuff is still subject to change) Our goal after this dataset is to create a 10x larger fully multilingual dataset.
Matt Hutson#9263: My first question is: Why?
Matt Hutson#9263: On the data: How much text data do you think is out there (Web, books, etc.)? Will that ever become a bottleneck?
gwern#1782: (note to eleutherians: "because we can" is not an acceptable answer)
zphang#7252: I feel like Connor had a pretty solid answer a while back |
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/770701683977879592/image0.png
StellaAthena#3530: That was rude Gwern
bmk#1476: I can answer the second part of that question
cfc#2691: isn't it all ultimately for AGI?
bmk#1476: So our biggest source of potential data is Common Crawl, a publicly available dataset of crawled websites, and it's *really big*. I don't remember off the top of my head, but the total size (raw HTML) is on the order of *petabytes*. Even extracting the text, we can get tens to possibly a hundred TB of usable text data. This is multiple orders of magnitude bigger than current training data, and using the scaling curves in Kaplan et al. we know that we can train a model much much bigger than GPT3 (I think we have a table in our document showing exactly how big a model you can train for a given amount of data)
bmk#1476: Additionally, as models get that big we can start training on other modalities like images or video simultaneously, and there's a *lot* more of that data. (This would also help with grounding the model)
bmk#1476: So data doesn't look likely to be a bottleneck for a long time
bmk#1476: (where a long time = multiple years to around a decade, because time passes faster in the world of AI research progress, haha)
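As a rough illustration of those scaling curves (not the exact table from the doc): Kaplan et al. 2020 suggest you need roughly D ≳ 5×10³·N^0.74 tokens to train an N-parameter model for one epoch without serious overfitting, and inverting that gives a ballpark for how big a model a given amount of text supports. The constant and exponent below are recalled from the paper, so treat the outputs as order-of-magnitude estimates only:

```python
def max_params_for_tokens(tokens, const=5e3, exponent=0.74):
    """Invert D ~ const * N**exponent (Kaplan et al. 2020) to estimate
    the largest model N that `tokens` can support without overfitting."""
    return (tokens / const) ** (1 / exponent)

for t in (0.3e12, 1e12, 10e12):  # 0.3T, 1T, 10T tokens
    print(f"{t/1e12:.1f}T tokens -> ~{max_params_for_tokens(t):.1e} params")
```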
StellaAthena#3530: Since many of us are inclined to say “because we can” in response to “why,” perhaps it would be helpful if you explained why someone might **not** do this.
gwern#1782: (as far as multimodality is concerned, OA's Kaplan apparently has a multimodal scaling paper coming out very soon which will offer a lot of insight into the costs and benefits of making Transformers work on video/images plus text)
bmk#1476: > Since many of us are inclined to say “because we can” in response to “why,” perhaps it would be helpful if you explained why someone might **not** do this.
@StellaAthena to add to this, aside from the tongue in cheek answer of "because we can," we all have different answers to "why"
bmk#1476: So we'd probably make more progress looking at why *not*
zphang#7252: my thoughts:
One argument I've seen (paraphrasing Connor, probably badly) is that OpenAI has already demonstrated the fruitfulness of this approach, and whatever you think the danger associated with that is, the cat is already out of the bag. A number of people here are particularly interested in AI Alignment - ensuring that our AI systems are aligned with human interests. Given the relatively strong restrictions around access to GPT-3, likely one of the most powerful models that currently exist, one way to push the boundary on alignment research is to *do it ourselves*.
Another argument is, if the concern is about the potential harms of such models existing, well, if a disparate bunch of coders can scrape together a decent version of a comparably powerful LM, a sufficiently motivated state entity or corporation certainly could. So it's not "because we can" but "because if we can, so can others".
There are other reasons people have joined in the effort. Some want an open version of GPT-3, some are interested in the academic merit, some are interested in doing good research and this has been a very productive group thus far. So the reasons are many. (For myself, I'm more interested in the research angle, as many of the questions being tackled here are directly relevant to my research.) |
aquajet#7800: Also on Why The Pile: Another benefit of The Pile over using something like Common Crawl or C4 (a cleaned version of Common Crawl) is that it offers a lot of **high quality** data. A lot of Common Crawl is boilerplate and you can miss out on some valuable information. Whereas with The Pile, you can have your language model learn straight from research papers or ~~Literotica~~ literary classics. This provides a lot more information, and some of it may not even be represented in Common Crawl
Daj#7482: Hello @Matt Hutson welcome! The "why" question has indeed several different answers, and my own is somewhat nuanced, I have been attempting to formalize it into a blogpost lately. Would be happy to talk about this some more if you are interested, I wouldn't endorse any of the other reasons given here wholesale
zphang#7252: darn, so much for my attempt to summarize your prior views lol
Daj#7482: Yea I've not done the best in giving my full reasoning, because it requires several steps of reasoning heh
Daj#7482: I've had personal discussions with Jack Clark (head of policy at OA) about this before, and there's some nuanced argumentation from both sides
cfoster0#4356: FWIW I have a slightly different motivation. EleutherAI and our replication attempt represent a counterpoint to the trend of large, private tech companies dominating the development of new models. Sort of akin to the GNU project. It also sets a precedent for community development of AI systems.
cfoster0#4356: ^ in re: why?
Daj#7482: Yea I think there is no "one" answer to why
Daj#7482: My answer is a conjunction of several arguments, ranging from "democratization of access" and "hands-on testing of scaling hypothesis stuff to check how worried we should be about imminent AGI" to "Any tech that can be licensed to Microsoft cannot be that unsafe" and of course, the tongue-in-cheek "Picking on Microsoft is a time-honored hacker tradition"
Matt Hutson#9263: So you have the model and data and just need the compute to scale it up? What will that cost? $5m? How likely are you to get it?
betteropsec#0514: also curious ^ it seems like all of this is kind of futile without a chance at the computer power
Sid#2121: @Matt Hutson Our compute is provided by TFRC https://www.tensorflow.org/tfrc which provides cloud TPUs for research purposes. It doesn't cost us anything, however they don't provide us quite enough for GPT3 training, and they're pre-emptible TPU pods only. We're not a business so couldn't really afford the compute if we did have to pay for it.
Deleted User#0000: btw Im friends with the head of google cloud security uk. Im not sure how/if she could help, but i told her about this, and she asked me for the names of ur contacts at tfrc. what do you think?
gwern#1782: (I speculate that the marginal cost to Google of TPUs is a lot less than you'd think from the list prices on GCP, which is how TFRC can be so generous with them)
Louis#0144: how long should you let a language model warm up for
bmk#1476: > btw Im friends with the head of google cloud security uk. Im not sure how/if she could help, but i told her about this, and she asked me for the names of ur contacts at tfrc. what do you think?
@Deleted User Zak Stone and Jonathan Caton; @Daj is already going to be communicating with them about getting more TPUs through TFRC
Louis#0144: like maybe first few thousand documents?
Louis#0144: rn im saying 10k
gwern#1782: https://arxiv.org/pdf/2005.14165.pdf#page=44 https://cdn.discordapp.com/attachments/729741769738158194/770745270547120138/xwd-160383037445623.png |
gwern#1782: 375m tokens sounds like a lot more than 10k docs
Sid#2121: yeah i was surprised by how short their warmup was
Sid#2121: since their batch size is something like 1 million tokens, it's actually only about 300 steps
Sid#2121: scaling laws paper showed it doesn't matter too much tho
betteropsec#0514: anyone know how to get an openai key? I applied a few times, but no luck. Kinda figure I've gotta wait till release at this point
Sid#2121: doesn't really seem like they're giving them out any longer
Sid#2121: someone please correct me if i'm wrong though, but i've heard from both researchers with super legit use cases and businesses that have been met with silence
gwern#1782: they give out very, very few keys
cfoster0#4356: I haven't discerned any rhyme or reason to who gets keys
cfoster0#4356: Folks all from the same company, one gets a key and the rest don't
gwern#1782: it's inconsistent, isn't it? I've never seen any rhyme or reason either. random business bloggers get one while famous developers I recommend repeatedly are passed over
cfoster0#4356: Yeah. I don't have any inside knowledge, but wouldn't be surprised if there's a tactical element to it
bmk#1476: Somehow I managed to get one and I'm a completely random nobody
cfoster0#4356: Any affiliation or just as an independent researcher?
cfoster0#4356: @bmk
bmk#1476: Nothing
cfoster0#4356: Wow
gwern#1782: inorite
gwern#1782: there definitely seems to be an element of 'greg brockman noticed your tweet' to it
Teven#6831: > scaling laws paper showed it doesn't matter too much tho |
@Sid If I remember well, that was actually one of the only elements of the learning rate schedule that actually did something
Teven#6831: "We conclude that the choice of learningrate schedule is mostly irrelevant, as long as the total summed learning rate is sufficiently large, and theschedule includes a warmup period and a final decay to near-vanishing learning rate."
Teven#6831: unless you mean that the choice of how many steps doesn't matter much
Sid#2121: I guess I misphrased, *as long as there is some warmup period* it seems like the length doesn't matter too much
Sid#2121: exactly, yeah
bmk#1476: > there definitely seems to be an element of 'greg brockman noticed your tweet' to it
@gwern which tweet
gwern#1782: any
bmk#1476: Oh
Louis#0144: does it make sense to steal word embeddings from a different LM if I am training a new one
Louis#0144: for instance stealing word embeddings from GloVe
Louis#0144: or BERT
zphang#7252: I don't see any issue with that
Louis#0144: would it be beneficial
gwern#1782: it'll save you some compute, one assumes, compared to learning from scratch
zphang#7252: Might train faster? And might benefit e.g. rare words if your old embeddings are trained on more data
zphang#7252: There might also be some counter-intuitive effect (hypothetically) from lower layers being far better trained than upper layers leading to weird impact on learning, but I've not seen anything pointing to that
Overall I assume it works/helps unless we see evidence to the contrary
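For what it's worth, a minimal PyTorch sketch of reusing pretrained word embeddings: parse a GloVe text file and copy vectors into a fresh embedding table wherever the vocabularies overlap (the file path and vocab are placeholders):

```python
import torch
import torch.nn as nn

def load_glove(path):
    """Parse a GloVe .txt file (format: word v1 v2 ...) into {word: tensor}."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *vals = line.rstrip().split(" ")
            vectors[word] = torch.tensor([float(v) for v in vals])
    return vectors

def init_embeddings(vocab, glove_vectors, dim):
    """Random init, then overwrite rows for words that have a pretrained vector."""
    emb = nn.Embedding(len(vocab), dim)
    with torch.no_grad():
        for idx, word in enumerate(vocab):
            if word in glove_vectors:
                emb.weight[idx] = glove_vectors[word]
    return emb
```

The remaining rows stay randomly initialized, and the whole table can still be finetuned as usual.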
Louis#0144: im using a batch size of 5k just so the model can converge
Louis#0144: lmao |
Deleted User#0000: Hello everyone, i am very new in field of Ai research, i interested to learn it as part time hobby so what can i do or what good thing to start
aquajet#7800: Hello! If you want to help towards GPT-3 replication the best way to start is in #lm-thunderdome. We are currently building an evaluation harness to run a bunch of tests on a language model. We need this so that we can evaluate a GPT-2 sized model. Once those evaluations are done we can start scaling up to GPT-3 and beyond.
aquajet#7800: The project board for what needs to be done is located here:https://github.com/EleutherAI/lm_evaluation_harness/projects/1
3dprint_the_world#6486: I'd argue that even if EleutherAI doesn't get access to the required compute, the effort is still highly worth it, because eventually costs are going to come down and even if OpenAI releases GPT-10, GPT-3-level language models are still going to be amazingly useful for many tasks.
Hatchling#4049: Is it just me, or is it unusually quiet today?
cfc#2691: https://andrewmayneblog.wordpress.com/2020/10/20/the-best-kept-secret-about-openais-gpt-3/
cfc#2691: I'm really interested in the semantic search from gpt3
cfc#2691: Do we have a test harness for that?
StellaAthena#3530: @cfc No, we don't. We have only looked at implementing the tests that *Language Models are Few Shot Learners* uses so far. That does sound very interesting though, and I encourage you to open an issue on the repo.
cfc#2691: I will do so :)
gwern#1782: it shouldn't be too hard to code up. you concatenate the prompt with each possible hit, do a forward pass, average the per-token losses, and pick the concatenation with the lowest average
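A sketch of that scoring loop using Hugging Face `transformers`, with GPT-2 standing in for GPT-3 (which is API-only); the idea is just to rank candidates by the average per-token loss of the concatenation:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def score(query, candidate):
    """Average per-token cross-entropy of candidate + query; lower = better match."""
    ids = tokenizer(candidate + "\n" + query, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def semantic_search(query, candidates):
    return min(candidates, key=lambda c: score(query, c))
```

(Whether you condition the query on the document or vice versa, and whether you score only the query tokens, are knobs worth experimenting with.)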
Matt Hutson#9263: Whose idea was this? And how did you all come together?
twittypoet#9692: Hello all
I am a new member here — I’m happy to join this community 🙂
Daj#7482: > Whose idea was this? And how did you all come together?
@Matt Hutson Yea I'm basically the founder and de facto person ~~to blame~~ in charge
circuit10#0158: Why does this server keep disappearing from my server list? Am I getting kicked?
Daj#7482: No, it seems to be a recurring bug
Daj#7482: I think @StellaAthena
Daj#7482: has a solution |
circuit10#0158: Oh
StellaAthena#3530: @circuit10 how did you join the slack channel
circuit10#0158: Slack?
circuit10#0158: I'm on Discord
circuit10#0158: It was on the website
circuit10#0158: https://www.eleuther.ai/get-involved
circuit10#0158: Here
StellaAthena#3530: Can you help me test something?
StellaAthena#3530: Close the app completely, then reopen it. This will probably make the channel go away.
Then I’ll give you a different link, close the app again, and I think it’ll stay.
circuit10#0158: OK, thank you, I'll try it
circuit10#0158: Yes, it went away and I had to rejoin @StellaAthena
circuit10#0158: Oh, is it a temporary invite?
circuit10#0158: https://cdn.discordapp.com/attachments/729741769738158194/771024865704411136/unknown.png
StellaAthena#3530: Yeah I think so
StellaAthena#3530: That’s our bad. I’ll update the link on the website
StellaAthena#3530: This is a perm link: https://discord.gg/vtRgjbM
circuit10#0158: Thank you!
StellaAthena#3530: Hey guys, we made a mistake and put a temporary link on the website. If you joined this channel through our website, copy this link, quit the discord channel, and then join with it. If you don’t do this, you will get randomly kicked out from time to time. I’m going to edit the website to use this link to prevent it from being a problem in the future.
|
Sorry for the inconvenience.
https://discord.gg/vtRgjbM
ben waldner#6938: > Hey guys, we made a mistake and put a temporary link on the website. If you joined this channel through our website, copy this link, quit the discord channel, and then join with it. If you don’t do this, you will get randomly kicked out from time to time. I’m going to edit the website to use this link to prevent it from being a problem in the future.
>
> Sorry for the inconvenience.
>
> https://discord.gg/vtRgjbM
@StellaAthena
Perfect I hope it works
ben waldner#6938: I think it worked
ben waldner#6938: I left the server and joined it with the new link to debug it.
FractalCycle#0001: Does anyone here work for OpenAI directly?
bmk#1476: ~~only in my dreams~~
bmk#1476: Jack Clark is in here
aquajet#7800: theres also a user with the same uname as Jeffrey Wu's github account
researcher2#9294: > https://andrewmayneblog.wordpress.com/2020/10/20/the-best-kept-secret-about-openais-gpt-3/
@cfc Ok that is crazy
cfc#2691: Really good, right? |
researcher2#9294: If only we had gpt3 to analyze our pile for gpt3
bmk#1476: token cost: `a lot`
Louis#0144: wtf
Airatak#7842: Hi guys!
Airatak#7842: Can you please share the GPT Neo Repo with my friend
Airatak#7842: His github is yashsinghal
MarcoLustri#8650: I was wondering what that was 😄
cfc#2691: https://github.com/UCSBarchlab/OpenTPU
cfc#2691: let's just kickstart some tpus
cfc#2691: haha
StellaAthena#3530: Hmmm. Maybe I’ll try that. I had been using this website but it doesn’t help much: https://downloadmoreram.com/
andyljones#7746: Is there any broad-strokes overview of the field of language models that you'd recommend over the Lil Log one?
https://lilianweng.github.io/lil-log/2019/01/31/generalized-language-models.html
Airatak#7842: > https://github.com/UCSBarchlab/OpenTPU
@cfc This technically would work but will be very very inefficient
StellaAthena#3530: @andyljones there is a pinned post with links to several major papers.
StellaAthena#3530: They’re more detailed than a high-level overview, but not completely in the weeds.
andyljones#7746: @StellaAthena If you mean bmk's post, yeah that's unfortunately a lot more detailed than the level I'm after. But you've indirectly reassured me that the Lil Log one is about as good as is there is out there, cheers 🙂
bmk#1476: https://discord.com/channels/729741769192767510/729741769738158194/736374402366832681 this one |
bmk#1476: I don't know if there actually exists a good single source that lays out the background, the scaling hypothesis, and an overview of the technical advances that might help us get there
bmk#1476: I might write one Eventually™
StellaAthena#3530: @bmk that sounds like fun. @Aran Komatsuzaki has surveyed some of that that stuff IIRC, but non-comprehensively and with a focus on attention.
bmk#1476: The closest thing I know is my LM/AGI post, but that's more about the *path to AGI* from GPT3
bmk#1476: It doesn't really help if you don't already buy the scaling hypothesis, or if you don't know much about LMs
bmk#1476: I guess part 1 of that can be covered by my GPT3 post and part 2 by my LM post
bmk#1476: *it's a trilogy*
bmk#1476: I still need a post about the latest advancements in LM stuff though
bmk#1476: I guess arans paper covers that
bmk#1476: I might write a summary of it
Aran Komatsuzaki#5714: Yeah, I guess my focus is on the latest stuff.
Aran Komatsuzaki#5714: Both my blog and paper.
Wafflecat306#8443: Hai
Noa Nabeshima#0290: So in Jared Kaplan's talk (https://www.youtube.com/watch?v=QMqPAM_knrE&feature=emb_title) he keeps mentioning how the laws look like a power law plus a constant
Noa Nabeshima#0290: I don't see where he gets the constant? Can someone clarify this for me?
chirp#4545: the constant is the entropy of the true distribution
chirp#4545: like, real-life text has typos and people's names and other idiosyncratic info
chirp#4545: which no language model can predict
Noa Nabeshima#0290: Yes, but where do you see it in the empirical laws?
tin481#5221: It's much clearer in the non language tasks. According to the latest scaling paper, even their largest model was too small to get a good estimate of the entropy of natural language |
tin481#5221: They don't give a constant for language specifically
tin481#5221: See page 7, table 1 https://arxiv.org/pdf/2010.14701.pdf (and caption)
Noa Nabeshima#0290: Thank you!
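For reference, the "power law plus a constant" form under discussion looks roughly like the following (notation as I recall it from Henighan et al. 2020, where the constant is the irreducible loss, i.e. the entropy of the true distribution):

```latex
% x is the scale variable (parameters, data, or compute);
% L_\infty is the irreducible loss; the second term is the reducible loss.
L(x) = L_\infty + \left(\frac{x_0}{x}\right)^{\alpha_x}
```

For language, the fit can't pin down L_∞ well because even the largest models are still far from it, which is why the table only reports constants for the other modalities.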
gwern#1782: remember, you can link to specific pages in PDFs: `https://arxiv.org/pdf/2010.14701.pdf#page=7` to save readers the hassle
Deleted User#0000: btw ive seen discussion of MoE, saving on compute. Is that because of the reduced communication overhead between experts vs parts of a big model?
Deleted User#0000: or if not, what is it?
Veedrac#0443: You only activate a small portion of a MoE model each time you use it
Deleted User#0000: ah wait i may have been confusing MoE with deep ensembles
Deleted User#0000: what's a good paper on MoE for deep learning?
kindiana#1016: https://arxiv.org/abs/1701.06538
kindiana#1016: https://arxiv.org/abs/2006.16668
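For anyone skimming later, a toy sketch of the sparse top-k gating idea from those papers (no load-balancing loss, no capacity limits, and looping over experts instead of doing proper dispatch), just to show why only a fraction of the parameters is active per token:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Toy mixture-of-experts layer: each token is processed by its top-k experts only."""
    def __init__(self, d_model=64, d_ff=256, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                   # x: (tokens, d_model)
        weights = F.softmax(self.gate(x), dim=-1)           # (tokens, n_experts)
        top_w, top_i = weights.topk(self.k, dim=-1)         # keep only the top-k experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = top_i[:, slot] == e                  # tokens routed to expert e
                if mask.any():
                    out[mask] += top_w[mask, slot, None] * expert(x[mask])
        return out

# y = TinyMoE()(torch.randn(10, 64))  # only 2 of the 8 expert FFNs run per token
```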
Deleted User#0000: thanks
gwern#1782: (deep ensembles are really expensive because each version has to learn more or less the same thing)
Deleted User#0000: yeah but on out-of-train-sample prediction they typically generalize somewhat better from averaging [edit: tho this is at the expense of compute as u say so actually meh:P]
Deleted User#0000: but yeah didnt strike me as a particularly useful thing
Deleted User#0000: MoE now that I see what it is seems like a much more interesting idea
gwern#1782: you could see MoE as kind of the exact inverse of deep ensembles. ensembles require N times more compute than the baseline model to ensemble, while MoEs require 1/n the compute to 'de-ensemble'? 🤔
Deleted User#0000: yeah except that the experts arent learning the same thing most of the time?
StellaAthena#3530: It's generally more instructive to think of an expert as the base unit of computation rather than the whole model.
StellaAthena#3530: IMO |
StellaAthena#3530: > yeah except that the experts arent learning the same thing most of the time?
@Deleted User Correct.
tin481#5221: I have a question. How much do you think openai "sandbags" its research, waiting to publish to maintain an edge?
bmk#1476: they absolutely do
bmk#1476: I know for a fact they had 13B months before the gpt3 paper release
tin481#5221: That seems like something you only do if your timelines are short. Are we starting to see race dynamics?
bmk#1476: (or maybe they just wanted to release all their models at the same time?)
StellaAthena#3530: @tin481 Race dynamics involves pushing timelines **up**
gwern#1782: > yeah except that the experts arent learning the same thing most of the time?
@Deleted User that's why I say it's de-ensembling. if in ensembling you have many models learn the same thing but somewhat differently to average out the errors, maximizing overlap of the 'experts', then presumably the inverse is to make many sub-models learn as maximally different things as possible and minimize overlap of the 'experts'
StellaAthena#3530: Delaying publication indicates you don't think anyone can race you.
gwern#1782: it could just be the difficulties of writing up papers. look at how enormously long and complex these scaling papers are
gwern#1782: given how sluggardly y'all have been in setting up evaluation harnesses, I'd think you'd be *much* more sympathetic to OA here
Deleted User#0000: ^ i notice that my group's papers tend to accumulate as many things per paper as possible to minimize the paper-writing overhead
Deleted User#0000: Appendix M
StellaAthena#3530: I don't think anyone is being unsymathetic?
zphang#7252: oh this reminds me
StellaAthena#3530: Certainly nobody here would say that what they're doing isn't incredibly hard
bmk#1476: > it could just be the difficulties of writing up papers. look at how enormously long and complex these scaling papers are
@gwern I mean they have 20 authors or whatever, surely 2 pages per person isn't too much? |
gwern#1782: I see no reason to think they aren't publishing these papers pretty much as soon as they are even half-baked, and all the delays are more than adequately explained by the sheer difficulty of doing all of those scores of different evaluations and writing up latex files which don't barf overlapping boxes everywhere
gwern#1782: (it's a long way from looking at a model's final loss on a tensorboard and having a writeup like brown et al 2020)
bmk#1476: I presume they'd be doing at least *some* of the writing beforehand
bmk#1476: It's not like they have absolutely no clue where the curve is going a week before the big run finishes
StellaAthena#3530: > @Deleted User that's why I say it's de-ensembling. if in ensembling you have many models learn the same thing but somewhat different to average out the errors, maximizing overlap of the 'experts', then presumably the inverse is to make many sub models learn as maximally different things as possible and minimize overlap of the 'experts'
@gwern Traditional ensemble learning is like when your study group goes off and solves the pset on their own and then compares answers. MoE is more like working in small groups and coming together at the end to collaboratively write the answers.
zphang#7252: ```
model.train()
results = model.eval()
print(results.to_latex())
```
StellaAthena#3530: It's pair-wise (or n-wise) ensemble
gwern#1782: (this perspective, along with the blessings of scale and lottery ticket hypothesis, is also part of why I'm skeptical that the MoE advantage is anything but a constant factor gain relevant mostly to the the small model / high loss regime)
StellaAthena#3530: I have a theory about how exactly this works, and I've been meaning to float it so now's as good a time as any.
StellaAthena#3530: It's a way of trying more random reinitializations and more shuffles of the data without needing much more data.
Deleted User#0000: hm? interesting
StellaAthena#3530: Statistically speaking, for large datasets each expert sees the same thing: they are looking at repeated large samples from a distribution.
Deleted User#0000: do they see the same thing?
Deleted User#0000: but different things are routed to them no?
StellaAthena#3530: They don't see *exactly* the same thing, but the law of large numbers says that the answer is effectively yes |
Deleted User#0000: > (this perspective, along with the blessings of scale and lottery ticket hypothesis, is also part of why I'm skeptical that the MoE advantage is anything but a constant factor gain relevant mostly to the the small model / high loss regime)
@gwern in the L(D) or L(C)?
Deleted User#0000: > They don't see *exactly* the same thing, but the law of large numbers says that the answer is effectively yes
@StellaAthena i thought that there would be experts specialized to different types of inputs
Deleted User#0000: so that maybe one only sees nouns and another only verbs or whatever
gwern#1782: @Deleted User L(C)... I think
Deleted User#0000: hmm
StellaAthena#3530: @Deleted User It's a trainable parameter, but it's not like one sees only nouns and one sees only verbs. It's that as time goes on we learn what types of datapoints each expert is good at.
Deleted User#0000: yeah but i imagined they will fixate early on
Deleted User#0000: but i donno
Deleted User#0000: how long thats just like my intuition man
StellaAthena#3530: That's why I compared it to divvying up a problem set. Each person gets the questions they're good at.
StellaAthena#3530: I don't know.
StellaAthena#3530: I haven't seen any info on that. @bmk, @Sid have you?
Deleted User#0000: so u are talking about the period before they decide what to focus on? however long it takes to do that?
StellaAthena#3530: I was oversimplifying and planning on getting to the learning 😛
StellaAthena#3530: Let's suppose for a sec that we give them out randomly.
StellaAthena#3530: Then everyone sees the same "kind of stuff", though they do see different samples.
StellaAthena#3530: Since they represent different pathways through the NN they're also initialized differently.
Deleted User#0000: this is smelling like lottery ticket stuff |
StellaAthena#3530: You would expect that some of these initializations to do better than others.
Deleted User#0000: [grr i cant find my notebook; my room is at maximum entropy]
StellaAthena#3530: One way you could build an ensemble out of this would be to figure out which expert does the best on different subsets
kindiana#1016: (this is further complicated by the gating network, which has to choose which expert to send each sample to, and that doesn't get nice smooth losses)
StellaAthena#3530: The problem is that this isn't very efficient - you're effectively training independent models in parallel.
StellaAthena#3530: And that's where the gate learning comes in.
kindiana#1016: usually in moe you only send things to the top-k (usually 2) experts, which is kinda iffy
StellaAthena#3530: You can get more out of the model by "artificially inflating the dataset"
StellaAthena#3530: Let's say S is good at math problems and M is good at reading comp.
StellaAthena#3530: In a normal ensemble, reading comp problems we show S are wasted compute. M is just better and we will go with M's opinion
StellaAthena#3530: By learning to allocate and tying that allocation back to the model performance, you're now allowing yourself to get more out of the specialization
StellaAthena#3530: Let's say that 80% of the data a normal ensemble sees is not going to matter much (because it teaches about something the expert is bad at). By only showing the expert things in its specialty, you can show the expert the same number of datapoints but get much better learning, because there are as many helpful examples as if you had 5x the data under the ensemble model.
Deleted User#0000: what do u mean by "a normal ensemble" here?
Deleted User#0000: like deep ensembles, without gating?
StellaAthena#3530: Yeah
StellaAthena#3530: (I think?)
Deleted User#0000: i agree with ur account, tho i wouldnt call them experts in a deep ensemble
StellaAthena#3530: I'm using the term "ensemble" in the generic sense it's used in ML
Deleted User#0000: coz they are too similar
StellaAthena#3530: Yeah but switching between "expert" in one context and "submodel" in the other felt clunky |
StellaAthena#3530: You're right though
StellaAthena#3530: That was somewhat sloppy
StellaAthena#3530: > this is smelling like lottery ticket stuff
@Deleted User I hadn't thought about it in these terms, but maybe
StellaAthena#3530: I think that comparison would make more sense if we were learning the wirings instead of the weights though
Deleted User#0000: hmm gating sounds so similar to attention though. Sounds to me like the routing transformer (which i just looked at the abstract now) is similar to the MoE/gating idea
Deleted User#0000: > I think that comparison would make more sense if we were learning the wirings instead of the weights though
@StellaAthena it was your comment that they start with different initializations which is what will make one tend more toward one type of input or another, just from the initialization
Deleted User#0000: > hmm gating sounds so similar to attention though. Sounds to me like the routing transformer (which i just looked at the abstract now) is similar to the MoE/gating idea
and now im imaginging some crazy thing where expert l at token j can sparesly attend to any expert k at token i, creating a sort of 2D sparse transformer monster xD
StellaAthena#3530: I think that if you allow arbitrary attention you blow up the parameter space too much
StellaAthena#3530: A major part of why we save compute is the sparsity of the model.
Deleted User#0000: parameter space is the same? if anything could blow up compute, but thats why im imaginging something adaptive like routing/sparse gating
StellaAthena#3530: I was assuming each expert would learn who to attend to
StellaAthena#3530: Is that not what you have in mind?
kindiana#1016: (actually attending to something vs evaluating if you want to attend to something costs about the same, so you need to pull a lot of tricks to make dynamic sparsity faster)
StellaAthena#3530: is that contra me or contra @Deleted User
Deleted User#0000: hm yeah true
Deleted User#0000: contra me
kindiana#1016: just in general for sparse attention |
StellaAthena#3530: Okay, that's what I thought just wanted to double check
bmk#1476: I might be a bit difficult to reach for the next 24 hours, just a heads up
bmk#1476: Dm me if there's something really pressing
StellaAthena#3530: Ditto. Got a doctor's appt with some recovery time and then the work I've been putting off in favor of hanging with all y'all this month
gwern#1782: I wonder if it would be good to set up a subreddit, like `/r/mlscaling` or something? I have all these scaling papers, and there are tweets like CC100 https://twitter.com/alex_conneau/status/1321507120848625665 which are important news to a niche of ML researchers/developers, but /r/machinelearning doesn't want all this, I expect, and it's certainly not appropriate for any other subreddits I can think of - it's usually not `/r/reinforcementlearning`, many links are not openai-related so `/r/openai` is wrong, it's certainly not `/r/mlnoobs` or mlmemes or decisiontheory...
gwern#1782: (I know bmk would prefer `/r/highenergyml` or something, but unfortunately, how do you spell or punctuate that, or remember it?)
gwern#1782: (the tags would be something like`R`(esearch), `T`(ransformer), `C`(NN), `RNN`, `MoE`, `Data`(set), `M`(odel release), `History`, `OP`(inion), `Forecasting`), `Meme`. Affiliations: `OA/DM/MS/FB/NV/AL/EA/TF`...
gwern#1782: and other tags as topics come up... `Smol` (distillation/compression)?
gwern#1782: probably good to divide between `Theory` papers on why bigger=better and what the inductive biases are, etc, and `Empirical` for papers actually doing scaling...
gwern#1782: in part, such a subreddit would help get all the eleutherai-relevant links in a public place. discords (like any chat or mailing list or email kind of communication) are never great for long-term archiving or visibility. nothing in it serves the role of an FAQ or anything. see https://www.ribbonfarm.com/2010/10/27/warrens-plazas-and-the-edge-of-legibility/
cfoster0#4356: FWIW I've been working on an information repo with some of the above, including an FAQ
cfoster0#4356: I do like the idea of collecting all of these High Energy ML papers
gwern#1782: I've begun setting up https://www.reddit.com/r/mlscaling/ . pls 2 provide reddit usernames for mods
aquajet#7800: Should there be a pinned post with the discord link?
gwern#1782: I was going to pin kaplan & henighan et al 2020 because they're the most required reading right now
StellaAthena#3530: How hard do y’all think replicating “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale” would be?
Aran Komatsuzaki#5714: it's pretty easy
StellaAthena#3530: I have some hypotheses that I think are incredibly interesting to explore related to that.
StellaAthena#3530: I’ve chatted about this before – I think we can mathematically estimate how the results change when you change languages using some really cool computational linguistics work.
gwern#1782: @aquajet if you want to write up a text selfpost describing EA with a link to the discord etc, go ahead |
gwern#1782: I've added some user flairs to make affiliations clearer
bmk#1476: @gwern can add me too pls, leogao2
gwern#1782: already did, I believe
gwern#1782: don't know connor's account
bmk#1476: Ah thanks
gwern#1782: everyone should feel free to suggest new links, moderators, subreddits to link in the sidebar, etc
bmk#1476: I'll make a "High Energy ML" sub banner
gwern#1782: but will that be memey enough
bmk#1476: If you have ideas what to add to it you can lmk
bmk#1476: I was going to make it look vaguely like particle trails from colliders but with an ML twist
StellaAthena#3530: > it's pretty easy
@Aran Komatsuzaki wait, do you mean on a purely technical level? Because I’m looking through it and it mentions 2.5k TPU days of computing...
bmk#1476: We don't have that kind of compute atm, lol
bmk#1476: Any TPUs we can get are going towards other projects
StellaAthena#3530: IK
Aran Komatsuzaki#5714: oh i meant on a purely technical level
StellaAthena#3530: Ah
StellaAthena#3530: Yeah
StellaAthena#3530: I wonder if I email the authors with “hi I don’t have $30,000 to drop on an experiment but I have a cool idea plz coauthor with me” I’ll get a response.
Aran Komatsuzaki#5714: highly unlikely |
StellaAthena#3530: Lol yeah
StellaAthena#3530: I was being sarcastic
Aran Komatsuzaki#5714: you can do a similar experiment at a smaller scale
Aran Komatsuzaki#5714: scaling law means you can scale down your problem in a robust way
Aran Komatsuzaki#5714: which is why i meant in a technical level, since you don't have to completely replicate the results to try any novel idea.
StellaAthena#3530: True
Sid#2121: I just signed up to reddit so i can join this and it suggested i join a few of the subreddits i'd browsed in the past month. :thonk: is it scraping my history somehow or am i *really* that predictable?
Sid#2121: it was like, strangely specific ones, like r/opendirectories and r/thinkpad which i went on to find out why the fuck anyone would use the nub on the thinkpad instead of the trackpad haha
bmk#1476: Yer cookies
Sid#2121: I dun like 😠
Sid#2121: > it was like, strangely specific ones, like r/opendirectories and r/thinkpad which i went on to find out why the fuck anyone would use the nub on the thinkpad instead of the trackpad haha
@Sid all nub users make yourselves known so i can ban you
gwern#1782: I'd think cookies, yes. reddit and advertisers are hardly above shadow profiles and other tricks
gwern#1782: (you can test it by registering with incognito. automated systems don't use IPs as much as you'd assume)
Sid#2121: anyway @gwern my user is sdtblck if you wanna make me mod although i have no idea how the fuck reddit works, honestly
gwern#1782: what about connor?
Sid#2121: i don't know if he has a reddit
gwern#1782: _gasps_
Ken#8338: > I've begun setting up https://www.reddit.com/r/mlscaling/ . pls 2 provide reddit usernames for mods
@gwern Really enjoying the collection you are providing. |
gwern#1782: it's a very scattered literature right now... hopefully pulling it all together should be interesting
Ken#8338: Good wide spectrum approach - including good historical ones such as Moravec.
zphang#7252: I am `zphang` on reddit
guac#4716: might want to put the tag descriptions in the sidebar 🤷♂️
bmk#1476: Wow, 46 members already?
cfoster0#4356: Lots of folks lurk here 👀
gwern#1782: I've added tag definitions to the submission page
StellaAthena#3530: @bmk thats 46 people *currently online*. Only 6 people follow the subreddit
gwern#1782: where do you see 6? I see 47 subscribers / 84 browsing online
StellaAthena#3530: huh, it seems like it's slow to update subreddit info on mobile.
StellaAthena#3530: It also shows you and I as the only mods on my phone.
StellaAthena#3530: On desktop I see 47/84
StellaAthena#3530: I strongly recommend a dictionary of post flairs, and cutting some too. I can intuit some of them but seeing five on a post is overwhelming.
gwern#1782: you'll appreciate them as the subreddit scales
StellaAthena#3530: Very possibly.
gwern#1782: (104 submissions!)
StellaAthena#3530: I'm going to work on seeding the submissions with high quality comments (perhaps shamelessly stolen from here!). I left one that I thought all y'all might be interested in about "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale" in cross cultural contexts.
https://www.reddit.com/r/mlscaling/comments/jl0tse/r_an_image_is_worth_16x16_words_transformers_for/gamyag9?utm_source=share&utm_medium=web2x&context=3
gwern#1782: 137 submissions. _J'y suis, j'y reste_ |
StellaAthena#3530: https://twitter.com/wahbamo/status/1127114025781927937?s=20
gwern#1782: "Dimitri, didn't you know? There is no god. But when we're done there will be."
Louis#0144: weird q
Louis#0144: when training a large LM like this, you sample a substring from the doc and give that as a training example?
Louis#0144: Or do you just give the entire doc up to some seq len and ask it to generate the last token
Louis#0144: lol
cfoster0#4356: Correct me if I'm wrong, but I believe for efficiency sake you can give it the whole sequence and use a triangular mask along the batch dimension so that you can predict multiple positions in a single step
cfoster0#4356: Ie 1st item in the batch generates the 1st token of the doc, 2nd item generates the 2nd token of the doc, and so on
gwern#1782: the way I understood it was that you always take a substring of context-window-length sampled at random from your dataset (treated as a single giant string), with documents separated by an EOT token and no effort made to align. then if you have 2048 tokens, each minibatch predicts tokens 1-2048 simultaneously, reusing the model's internal calculations, so you get 2048 predictions: 2nd conditional on 1st, 3rd conditional on 1-2, 4th on 1-3, etc
gwern#1782: and conceptually this is like predicting the 2048 one token at a time, one token longer each time, but vastly faster
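A sketch of that in PyTorch, with `model` assumed to be any causally-masked transformer mapping token ids to logits (the sampling of fixed-length chunks from the dataset-as-one-long-string happens upstream):

```python
import torch
import torch.nn.functional as F

def lm_loss(model, tokens):
    """tokens: (batch, 2048) chunk of token ids. One forward pass yields a
    prediction at every position; the causal mask inside `model` (assumed)
    stops position t from peeking at tokens > t."""
    logits = model(tokens)                                   # (batch, seq, vocab)
    # predict token t+1 from positions <= t: shift the targets left by one
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )
```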
gwern#1782: @cfoster0 you should submit any links I missed for /r/mlscaling
gwern#1782: _waits patiently for bmk's banner_
StellaAthena#3530: Has anyone trained language models on text backwards?
bmk#1476: I have a paper deadline tomorrow, you're not getting that banner for a few days
andyljones#7746: > Has anyone trained language models on text backwards?
@StellaAthena someone must've done an ablation of a bidirectional model at some point
andyljones#7746: ...but having said that, my Googlings around "bidirectional" "ablation" "backward-only" are failing me
StellaAthena#3530: Yeah I would have assumed so but they failed me too
StellaAthena#3530: I couldn’t even find a “sliding window” paper that looked ahead k-words
andyljones#7746: bingo, found from a BERT reference |
https://arxiv.org/pdf/1705.00108.pdf https://cdn.discordapp.com/attachments/729741769738158194/771881730930901092/unknown.png
bmk#1476: @gwern what does the TK affiliation on r/mlscaling mean? ... Tpu podKast?
gwern#1782: TensorforK
gwern#1782: (if I used 'TF', everyone would assume it meant 'TensorFlow')
shawwn#3694: _uses tfk_
Deleted User#0000: https://github.com/botupdate/botupdate

I'm confused. Did AI create a repo for itself?
StellaAthena#3530: > ANN (adversarial neural network)
Plz no
Louis#0144: LOL
Louis#0144: That’s so funny
bmk#1476: ANN (ANN Neural Network)
Brooks#6128: > I’ve chatted about this before – I think we can mathematically estimate how the results change when you change languages using some really cool computational linguistics work.
@StellaAthena In all seriousness, I'd like to see that work.
Louis#0144: how do I stop xorg from using 500MB of VRAM
Louis#0144: I cant kill it
Louis#0144: it keeps causing kernel panics
Louis#0144: :/
StellaAthena#3530: @Brooks This paper finds that the *information content per syllable* is approximately constant across languages despite very different speech speeds, number of symbols, and number of possible phonemes: https://advances.sciencemag.org/content/5/9/eaaw2594
This paper contrasts the speaking speed and information content of native and non-native speakers: https://www.isca-speech.org/archive/Interspeech_2019/pdfs/1150.pdf
This paper looks at how information is distributed across positional location in a sentence: https://arxiv.org/pdf/1609.07681.pdf
This paper examines how the order of the basic building blocks of a sentence (English is Subect-Verb-Object, but other languages are different orders) influence the distribution of information in a sentence with a special focus on non-uniform information density is in Object-first languages. They connect this to the fact that such languages are extremely rare: https://papers.nips.cc/paper/4085-why-are-some-word-orders-more-common-than-others-a-uniform-information-density-account
This paper examines the specific example of Chinese, where there is some disagreement among linguists about what the basic building blocks of meaning are in Chinese: http://dsd.future-lab.cn/research/publications/2011/MOL-SpringerVersion.pdf
Louis#0144: @StellaAthena erick might like that paper
Louis#0144: have u sent it to him
StellaAthena#3530: The embedded plot shows (left) the speaking rate of different languages and (right) the information rate of the same languages. This sums up the first paper rather well: while there is significant variation between speakers of the same language, the plot on the right shows significantly more consistency across languages.
StellaAthena#3530: @Louis I just did 🙂
Louis#0144: noice
Noa Nabeshima#0290: What's the ratio between the price of GPT-n and GPT-n+1 inference ignoring changes to cost of compute with time
kindiana#1016: inference cost is proportional to parameter count
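(Roughly: a dense transformer's forward pass costs about 2 FLOPs per parameter per token, so per-token inference price scales linearly with N. A back-of-the-envelope sketch using that rule of thumb from the scaling-laws papers:)

```python
def inference_flops_per_token(n_params):
    """~2 FLOPs per parameter per generated token (rule of thumb; ignores attention overhead)."""
    return 2 * n_params

print(f"175e9 params -> ~{inference_flops_per_token(175e9):.1e} FLOPs/token")
```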
Noa Nabeshima#0290: oh god, that makes sense
Noa Nabeshima#0290: wondering about future monetization of large models
gwern#1782: aren't we all
gwern#1782: speaking of which: https://www.reddit.com/r/mlscaling/comments/jln8xr/how_compute_bound_ml_may_affect_the_startup/ |
Noa Nabeshima#0290: I made a bet with my friend that Metaculus would evaluate as Robin Hanson winning this bet
https://www.metaculus.com/questions/5118/will-robin-hanson-win-a-bet-that-the-gpt-line-of-language-models-will-generate--1bn-in-customer-revenue-by-2025/
Noa Nabeshima#0290: Anyone have probabilities they're willing to share?
Noa Nabeshima#0290: mine were 70~80 some months ago
cfoster0#4356: Interesting. Not sure what would constitute the GPT family, but I assign a relatively high probability that encoder-decoder models will eclipse GPT-Ns in not too long
cfoster0#4356: In both scale and economic value
kindiana#1016: what would be the justification?
cfoster0#4356: I think they'll have the same scaling curves and be more flexible in use
kindiana#1016: fair enough
kindiana#1016: depends on you belief on how useful long form text generation is
kindiana#1016: anything that isn't purely autoregressive training seems to fail at that
cfoster0#4356: Hmm is that so?
cfoster0#4356: I'd seen a couple attempts and they seemed in the same ballpark as GPT-2
cfoster0#4356: Maybe not 1.5B but definitely the smaller models
kindiana#1016: hrm, which models?
cfoster0#4356: At moments like these I wish I wrote notes on the papers I read 🙃
cfoster0#4356: May well be misremembering
gwern#1782: I think the first bidirectional which seemed to match gpt-2-1.5b was like T5. not a favorable comparison
bmk#1476: i'm personally not a fan of encoder-decoder models but i don't have any strong justifications for it
gwern#1782: and incidentally, nick walton tried to use t5 for AID text generation, but gave up, went back to gpt-2-1.5b, and used T5 for classification instead |
gwern#1782: I think this is pretty anomalous and interesting, myself
cfoster0#4356: Hmm. Interesting. To my eyes they seem underexplored, but maybe no one's gotten them working well enough
gwern#1782: seems like something of a bootstrap problem. no one uses them for text gen because they suck, but why should they suck? maybe they only suck because no one has quite figured out the trick
gwern#1782: (there's so many stupid little problems that can happen in DL. like forgetting to use some normalization at sample time and gosh now the samples look much worse than they should)
bmk#1476: (or using dropout 0.1 instead of 0.9)
gwern#1782: haha oh yes
cfoster0#4356: Of course, for multimodal tasks, it seems like *everyone* uses them
zphang#7252: I find it interesting that BART and T5 came out within a week of each other
zphang#7252: and both fundamentally do the same thing, but were applied differently to NLU tasks
gwern#1782: (poor BART! it gets a tenth the attention of t5)
zphang#7252: (a tenth the size too!)
Brooks#6128: > speaking of which: https://www.reddit.com/r/mlscaling/comments/jln8xr/how_compute_bound_ml_may_affect_the_startup/
@gwern This was the first thought that crossed my mind after reading Kaplan’s paper - a lot of startups are going to their boards to ask for larger investments. Also, if the capital requirements are anything like they are now to create a model of GPT-3 size (or a couple of orders of magnitude larger), this is going to rather quickly become (continue to be) an oligopolistic market. This might cause the pendulum to swing the other direction in a search for less capital-intensive model infrastructure. But that is a second-order effect, and that may leave the path open to the bigs to have a one to two year head start.
chirp#4545: I wonder if pretrained-model-providers could turn into something akin to semiconductor foundries. In that industry, a few big players dominate, because no one else can afford the R&D
chirp#4545: in this analogy OpenAI could become like intel in the 1990s, at least until other people catch up (like TSMC did eventually)
chirp#4545: and in this analogy, a pretrained model is like a semiconductor fab
chirp#4545: this would be... very different from the usual economics of software
chirp#4545: my understanding is that except for stuff like the cloud (which isn't just pure software), software companies don't have a lot of supply-side moats
Noa Nabeshima#0290: are GPT3 scaling laws in bits or nats?
kindiana#1016: nats I think |
kindiana#1016: I dont think anyone reports loss in bits
cfoster0#4356: What exactly is the upside of using nats instead of bits for things?
bmk#1476: I think the main reason is just because everyone else uses them
bmk#1476: Also because e is nice I guess but it's not really *that* nice here
bmk#1476: Tbf, bits has the major advantage of everyone having an intuitive idea of what the hell a bit actually is
bmk#1476: I guess some people report bpc
cfoster0#4356: Yeah I have no mental model for what a nat is
cfoster0#4356: Nothing to ground it other than the definition
bmk#1476: Me neither, i just think "chonker bits" because they're bigger I guess
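(For the record, the conversion is just a constant factor: 1 nat = 1/ln 2 ≈ 1.4427 bits, so a loss reported in nats divides by ln 2 to give bits per token:)

```python
import math

loss_nats = 2.00                       # e.g. per-token cross-entropy in nats
loss_bits = loss_nats / math.log(2)    # 1 nat = 1/ln(2) ≈ 1.4427 bits
print(f"{loss_nats} nats = {loss_bits:.3f} bits per token")
```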
Deleted User#0000: i guess its coz all the nice mathemaical identities involving logs are with lns
bmk#1476: I mean, information makes more sense with base 2 imo
bmk#1476: Mostly because we work with bits all day
Deleted User#0000: yeah but derivatives and stuff?
bmk#1476: It's literally a constant factor it doesn't really change much
bmk#1476: And we have autodiff
bmk#1476: Nobody is painstakingly running gradients by hand
bmk#1476: (this reminds me of a certain gwernpost about impoverished sheep herders)
Deleted User#0000: yeah donno im just playing devils advocate
bmk#1476: Let's split the difference
bmk#1476: log_{(2+e)/2} |
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/772366547133661194/pi_vs_tau.png
bmk#1476: Damn beat me to it
bmk#1476: Natbits
bmk#1476: The Kelly-Bootle Logarithm
Deleted User#0000: ln in base 2
Veedrac#0443: My pedantic side is screaming to tell y'all you should be using log_√(2e) instead
Ravna#1831: 2nd order optimizers are lame too, let's do the 2.718th order of derivatives
Louis#0144: ROUTING TRANSFORMER IS CONVERGING 😮
shawwn#3694: @FishofFlight welcome to the server!
shawwn#3694: out of curiosity, how'd you hear about this discord?
FishofFlight#3096: GodAI
Louis#0144: LOL
Louis#0144: I knew it
Louis#0144: someone would eventually make a religion around LMs
Louis#0144: smh
shawwn#3694: interesting. what's GodAI?
shawwn#3694: googling doesn't bring up much
FishofFlight#3096: https://awk.itch.io/godai
Aran Komatsuzaki#5714: Is he the real one?
FishofFlight#3096: hm? |
Louis#0144: on the note of jordan peterson
Louis#0144: did you guys see Gary Marcus' AI group has like a bunch of eugenics people as advisors
Louis#0144: LMAO
Louis#0144: Nvm apparentlt some of the advisors have followers who support eugenics
Louis#0144: but the advisors do evo psych
Louis#0144: (i did not properly check my sources)
bmk#1476: > Nvm apparentlt some of the advisors have followers who support eugenics
@Louis but eugenics is only bad if eugenics is bad
AI_WAIFU#2844: We're like 50% transhumanists here. Mildly altering the gene pool through artificial selection is pretty tame by our standards.
Sid#2121: veering very close to eugenics defense here
Sid#2121: let's not tell people when and where to breed please
Sid#2121: that's not exactly what eugenics is @AI_WAIFU
cfoster0#4356: 50%? I dunno about that 😅
AI_WAIFU#2844: I mean, if you want to have this discussion, you can't use the term "eugenics" as it encompasses everything from genocide to embryo selection.
bmk#1476: are catgirls eugenics?
AI_WAIFU#2844: Only if they're genetically engineered.
bmk#1476: but they remove people from the gene pool
AI_WAIFU#2844: Hmm, you're right. Even robot catgirls do that.
AI_WAIFU#2844: Yes, catgirls are eugenics.
bmk#1476: :yes: |
Sid#2121: > I mean, if you wan't to have this discussion. You can't use the term "eugenics" as it encompases everything from genocide to embryo selection.
@AI_WAIFU doesn't the term normally refer to human genetic selection on a societal scale? I agree it's broad but i don't think embryo screening falls under the umbrella of eugenics per se
StellaAthena#3530: Eugenics is a historical pseudoscientific and sociocultural movement to improve the human race by breeding certain ancestral groups out of existence. There are a not insignificant number of people who believe the core idea (selective breeding to cultivate desirable social distributions) is desirable even if they don’t want to do it at gun point the way people did in the early 1900s
AI_WAIFU#2844: I mean, I've always thought of it as any attempt at modifying the human genome or gene pool through artificial methods.
Sid#2121: > I mean, I've always thought of it as any attempt at modifying the human genome or gene pool though artifical methods.
@AI_WAIFU is cockblocking eugenics?
Bedebao#4842: Perhaps I should've mentioned upon arriving that I'm the one who dropped a link to this server on GodAI's, which is why there was this little wave of arrivals. As for how I came to learn of Eleuther, it was mentioned on 4chan.
StellaAthena#3530: @AI_WAIFU I strongly disagree, for the same reason I disagree with labeling anyone who is nationalistic a fascist
Sid#2121: oh boy, we've spread to 4chan
Sid#2121: > Perhaps I should've mentioned upon arriving that I'm the one who dropped a link to this server on GodAI's, which is why there was this little wave of arrivals. As for how I came to learn of Eleuther, it was mentioned on 4chan.
@Bedebao what's GodAIs?
Bedebao#4842: The /aidg/ threads on /vg/ tend to attract related machine learning topics.
Bedebao#4842: A link about that was posted above. GodAI aims to be a new take on what AI Dungeon tries to do.
Bedebao#4842: A competitor of sorts.
StellaAthena#3530: > My grandmother was not a highly educated woman, but she told me as a small child to quit feeding stray animals. You know why? Because they breed.
> You’re facilitating the problem if you give an animal or a person ample food supply. They will reproduce, especially ones that don’t think too much further than that. And so what you’ve got to do is you’ve got to curtail that type of behavior. They don’t know any better.
AI_WAIFU#2844: @Sid I wouldn't say so.
AI_WAIFU#2844: I think it makes sense to think of eugenics the same way you think of color of bits.
Bedebao#4842: And since GodAI wants to make use of GPT-3 as well, it's no wonder that an open source alternative would be interesting.
StellaAthena#3530: That’s a quote from a man running for governor of South Carolina in July about why he wants to end state welfare programs.
bmk#1476: > Perhaps I should've mentioned upon arriving that I'm the one who dropped a link to this server on GodAI's, which is why there was this little wave of arrivals. As for how I came to learn of Eleuther, it was mentioned on 4chan.
@Bedebao i feel accomplished
AI_WAIFU#2844: If you're trying to alter the gene pool, it's eugenics.
StellaAthena#3530: That’s not true
AI_WAIFU#2844: That was an attempt at putting forth a definition
bmk#1476: 4chan must feel right at home among the discussions of whether catgirls are eugenics
StellaAthena#3530: Ah.
You’re doing the same thing that people who call everyone on the right a “Nazi” are doing
AI_WAIFU#2844: Kinda? I'm trying to negotiate a definition of the word eugenics.
StellaAthena#3530: Eugenics is a specific sociopolitical agenda created largely by Francis Galton. It was very popular in the US and the UK around 1900, and was later embraced by the Nazis to justify genocide.
StellaAthena#3530: There are people today who directly link to the ideals and history tied up in that term
StellaAthena#3530: I don’t see why diluting it to mean “modifying the gene pool” is desirable
StellaAthena#3530: Why do you want to do that?
StellaAthena#3530: There are legitimate critiques of some emerging medical practices as eugenics (preemptively preventing people who are autistic from being born is popular in certain circles, for example)
cfoster0#4356: #off-topic This isn't for general consumption imo
StellaAthena#3530: And there are people who openly advocate for letting the poor starve and deny them medical care so that they die off and don’t breed. Those people are eugenicsits.
AI_WAIFU#2844: Let's take this to #off-topic
Bedebao#4842: Hmm, do you guys want to know more about the whole deal with 4chan, AI Dungeon and GodAI?
bmk#1476: i'd love to hear more
bmk#1476: now that the "are catgirls eugenics" convo has moved
Sid#2121: yes me too
StellaAthena#3530: @Bedebao @FishofFlight so what’s going on on 4chan about us?
FishofFlight#3096: ?
Bedebao#4842: It goes without saying that anons are fond of AI Dungeon because it allows them to generate porn tailored to their various fetishes. However, the owner of AID (nicknamed Mormon on 4chan) and OpenAI have a complete monopoly on the stuff. They keep adding stupid features, censorship, and can charge however they want. Another major flaw in their eyes is that AID uses a model that was finetuned on Choose Your Own Adventure stories.
And so someone (nicknamed AWK) started GodAI to try to make things right. Better UI, better features, a pure model not tainted by CYOA... AWK recently got access to the GPT-3 beta, but the prices are ludicrously high, as a result of the OpenAI monopoly. AID has a special deal with them.
This is why GodAI and GPT alternatives such as EleutherAI are starting to get mentioned. Anons are tired of Mormon and OpenAI's bullshit and want to become free of their grasp.
bmk#1476: interesting
bmk#1476: unfortunately for them we're months away from actually having a gpt3
moonfire8#8768: Very interesting....
bmk#1476: where by months i mean possibly a year depending on how things go
bmk#1476: we have no concrete roadmap yet
StellaAthena#3530: That’s a little misleading
StellaAthena#3530: The major blocker is compute
bmk#1476: yeah and we have no clue how much tfrc will give us
StellaAthena#3530: We don’t have a concrete roadmap for how we are going to get the compute.
bmk#1476: fair
moonfire8#8768: This whole convo.... is misleading
StellaAthena#3530: If we were to wake up tomorrow with it we could start training right away. |
bmk#1476: we don't have pile dedupe done yet but yes i see what you mean
bmk#1476: we could start a week from now
Bedebao#4842: I'm sure it was already asked, but... distributed computing?
StellaAthena#3530: @Bedebao Our current plan is to try to work out a deal with Google by impressing them with runs on smaller scales. We have created a training dataset that we expect to do much better than what OAI used because it’s seeded with a great diversity of data sources, many of them information-dense (GitHub, arXiv, medical research, US legal records)
StellaAthena#3530: We are planning to release that by the end of the year, train GPT-2 scale models, and try to impress the people who run Google’s TFRC program
bmk#1476: TFRC*
Bedebao#4842: Be careful however to not become a slave to Google.
StellaAthena#3530: Lol
Bedebao#4842: I mean, it just sounds like they could keep you hostage with their computing power and force things on you.
StellaAthena#3530: TFRC is a program where they give indie groups and non-profits access to TPUs they’re not currently using
StellaAthena#3530: We are currently using it to train GPT-2 scale models, but GPT-3 is so much bigger that it’ll take forever at the rate at which we currently get access
StellaAthena#3530: We are asking for a higher priority
StellaAthena#3530: I mean, you’re not wrong abstractly
cfoster0#4356: We can just... Walk away. Like they don't have anything to force our hands with?
bmk#1476: @Bedebao where is eleuther discussed? i can literally only find a single post on /vg/ mentioning eleuther
Bedebao#4842: Well yes, it's only been sparsely mentioned for now.
Bedebao#4842: But it's a start.
bmk#1476: https://boards.4channel.org/vg/thread/312167906 there is exactly one post mentioning eleuther and nobody has replied
StellaAthena#3530: And if you know about 100 people who own a DGX-1 or more powerful device that we can use 24/7 for several months we would be happy to look into distributed computing.
bmk#1476: and if we can get all the dgx1s into the same physical location, probably |
bmk#1476: aint gonna be figuring out swarm training
StellaAthena#3530: True. Networking is its own issue
StellaAthena#3530: So yeah. Somewhere between 64-128 DGX-1s, ideally in the same physical location.
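A rough back-of-envelope for where an estimate like 64-128 DGX-1s for several months comes from. The inputs are assumptions rather than project measurements: the ~3.14e23 training FLOPs (~3640 PF-days) reported in the GPT-3 paper, 8 V100s per DGX-1 at 125 TFLOPS fp16 peak, and ~30% sustained utilization.

```python
# Back-of-envelope: DGX-1s needed to train a GPT-3-scale model.
# Assumed inputs (not measurements from this project):
#   total training compute ~3.14e23 FLOPs (~3640 PF-days, GPT-3 paper),
#   8x V100 per DGX-1 at 125 TFLOPS fp16 peak, ~30% sustained utilization.
TOTAL_FLOPS = 3.14e23
V100_PEAK_FP16 = 125e12
GPUS_PER_DGX1 = 8
UTILIZATION = 0.30

sustained_per_dgx1 = GPUS_PER_DGX1 * V100_PEAK_FP16 * UTILIZATION  # ~3e14 FLOP/s

def training_days(num_dgx1):
    """Days to accumulate TOTAL_FLOPS at the assumed sustained rate."""
    return TOTAL_FLOPS / (num_dgx1 * sustained_per_dgx1) / 86400

for n in (1, 64, 128):
    print(f"{n:>3} DGX-1(s): ~{training_days(n):,.0f} days")
# ->   1 DGX-1(s): ~12,114 days (~33 years)
# ->  64 DGX-1(s): ~189 days
# -> 128 DGX-1(s): ~95 days
```

At 128 machines that works out to roughly three months of continuous training, which is where "several months" comes from.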
StellaAthena#3530: It’s really just not feasible to crowd source training models like this unfortunately
bmk#1476: yes, believe me, the idea of crowdsourcing model training has been mentioned *many* times
bmk#1476: it's extremely technically challenging
bmk#1476: we don't have the resources to develop it
StellaAthena#3530: Google recently released a paper that would have cost someone ~300k to train by leasing the TPUs.
Bedebao#4842: I guess the crazy exascale stuff like Folding@Home mostly came from supercomputers, with regular users only being a fraction.
Ravna#1831: Let's just make the transformer architecture 1000 times more efficient instead. We still have two months to do so.
cfoster0#4356: Honestly that's not the worst idea
bmk#1476: it's not the best idea either though
StellaAthena#3530: @Bedebao neural network training doesn’t parallelize very well.
bmk#1476: *gestures wildly at all of the CNN papers*
StellaAthena#3530: Folding@home and SETI are incredibly parallelizable. But neural networks are not. It’s the wrong type of computation.
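A sketch of why, under illustrative assumptions: data-parallel SGD has to exchange a full set of gradients every optimizer step, whereas a Folding@home work unit computes for hours and ships back only a tiny result. The figures below (175B parameters, fp16 gradients, a 100 Mbit/s home uplink, a ~300 Gbit/s datacenter fabric) are assumptions; real systems compress gradients and use smarter all-reduce topologies, but the gap stays many orders of magnitude.

```python
# Why gradient synchronization kills crowdsourced training.
# Assumed, illustrative numbers -- not a model of any specific system.
PARAMS = 175e9               # GPT-3-scale parameter count
BYTES_PER_GRAD = 2           # fp16 gradient per parameter
HOME_LINK = 100e6 / 8        # 100 Mbit/s home connection, in bytes/s
DC_FABRIC = 300e9 / 8        # ~300 Gbit/s aggregate datacenter interconnect, bytes/s

grad_bytes = PARAMS * BYTES_PER_GRAD   # ~350 GB of gradients per step, uncompressed

print(f"gradients per step: {grad_bytes / 1e9:.0f} GB")
print(f"home link:         ~{grad_bytes / HOME_LINK / 3600:.1f} hours per step")
print(f"datacenter fabric: ~{grad_bytes / DC_FABRIC:.1f} seconds per step")
# -> ~350 GB per step; ~7.8 hours/step over a home link vs ~9 s/step in a
#    datacenter, and a training run needs hundreds of thousands of steps.
```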
cfoster0#4356: Moonshot concepts aren't a bad secondary direction, since we're pretty much just going to wait for a reply from Daddy G
bmk#1476: fair
bmk#1476: but we'd have to be doing something completely new
Bedebao#4842: Darn, that complicates things.
bmk#1476: all the existing transformer variants are maybe 2x more efficient but also worse in some way? |
bmk#1476: getting a cnn -> resnet magnitude change would be very hard
cfoster0#4356: :yes:
StellaAthena#3530: > Darn, that complicates things.
@Bedebao yup! Trust me, some people here care about democratizing tech a lot. I do research on developing protocols for trustworthy distributed neural network training myself.
StellaAthena#3530: Shit’s really hard
StellaAthena#3530: The best models are currently... very very bad
StellaAthena#3530: So bad that the results don’t even make conceptual sense in an applied context.
StellaAthena#3530: For example, the best approach to “verifying” a neural network computation without rerunning it takes about 8 hours and 100 GB of space to verify a single inference on VGG-16
StellaAthena#3530: It’s probably not an exaggeration to say that if we tried to verify a single GPT-3 inference everyone alive would be dead before we finished
Ravna#1831: This paper from Amazon did a very simple search on dimensions of transformers such as layers and embedding size, and came up with this. Seems to be a direct contradiction to the OpenAI scaling paper that claims total weight count matters much more than weight allocation.
Ravna#1831: https://cdn.discordapp.com/attachments/729741769738158194/772512248811880478/Screen_Shot_2020-11-02_at_12.26.01_AM.png
Ravna#1831: https://arxiv.org/pdf/2010.10499.pdf
StellaAthena#3530: Interesting
bmk#1476: what is W-coeff?
Ravna#1831: https://cdn.discordapp.com/attachments/729741769738158194/772513123533193276/Screen_Shot_2020-11-02_at_12.30.56_AM.png
Ravna#1831: I'm still trying to figure out what they are talking about too
Ravna#1831: The main take from their conclusion seems to be that the ff width should be closer to 1x embedding width instead of 4x embedding width
Ravna#1831: In the Bert case
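For context on the knob being searched over: in a standard transformer encoder layer the attention projections cost roughly 4·d_model² parameters and the feed-forward block 2·d_model·d_ff (biases and layer norms ignored). A minimal sketch with an assumed BERT-large-ish width, using standard parameter accounting rather than the paper's own search space:

```python
# Rough per-layer parameter count for a standard transformer encoder layer,
# ignoring biases, layer norms and embeddings. Shows how moving d_ff from
# the usual 4*d_model down to 1*d_model shifts the parameter budget.
def layer_params(d_model, d_ff):
    attn = 4 * d_model * d_model   # Q, K, V and output projections
    ffn = 2 * d_model * d_ff       # two FFN matrices: d_model -> d_ff -> d_model
    return attn + ffn

d = 1024  # assumed BERT-large-ish embedding width
for ratio in (4, 1):
    p = layer_params(d, ratio * d)
    print(f"d_ff = {ratio}x d_model: ~{p / 1e6:.1f}M params/layer, "
          f"{4 * d * d / p:.0%} of them in attention")
# -> d_ff = 4x d_model: ~12.6M params/layer, 33% of them in attention
# -> d_ff = 1x d_model: ~6.3M params/layer, 67% of them in attention
```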
Sid#2121: lol did they actually call the model BORT
Sid#2121: i fucking love ml research |