Sahl#0630: cmv
Daj#7482: Exactly bmk, we use continuous gender vectors, we're not some kind of _symbolicists_
mgostIH#0245: Over what
Daj#7482: ~~no homo~~
bmk#1476: no homomorphism
mgostIH#0245: What direction is the Astolfo vector
Daj#7482: fuck you for making me google that
mkualquiera#3484: sometimes I forget this is not #off-topic
bmk#1476: I'm waiting for someone to invent a linear algebra theory of gender where homosexuality is naturally isomorphic to a homomorphism between vector spaces/modules
mgostIH#0245: Gender boolean 🤢
Gender real number line 🤔
Gender vector space 😍
Gender tensor :ultrazucc:
bmk#1476: gender magma
Daj#7482: I know magma is a math thing, but the alternative is interesting to imagine too
bmk#1476: im scared about the alternative
mgostIH#0245: Back on topic
I hate how my die doesn't have a Hamiltonian path from 1 to 6
mkualquiera#3484: I've been annoyed by this for a while now too
bmk#1476: yeah 1 and 6 always have to be on opposite sides
bmk#1476: and same with 3 and 4
mkualquiera#3484: it's even more annoying when you use them as MTG counters
StellaAthena#3530: This is actually a deliberate choice to accommodate how people throw dice
StellaAthena#3530: The alternatives are called “spin down” dice
bmk#1476: wait what explain
mgostIH#0245: I thought it was to balance the weight of the dice
bmk#1476: shouldn't it not matter at all
EricHallahan#1051: The weight difference will be negligible.
gwern#1782: https://tappedout.net/mtg-forum/general/spin-down-dice-not-random/ apparently not only does spindown/sequential-ordering exaggerate any bias in a dice, it's also a lot easier to manipulate
mgostIH#0245: wdym it exaggerate the bias? Seems like the link here only talks about manipulation
Sahl#0630: I get how it’d exaggerate the bias
Sahl#0630: Imagine, by chance, the die was slightly weighted
Sahl#0630: If the weight is near the equator of the die (where all the numbers are around 10) it would give more midrange numbers (I think)
Sahl#0630: more importantly, if the weight was on the high numbers, the die will roll lower
Sahl#0630: and the other way around
bmk#1476: spin down dice implies the existence of spin up dice that are capable of occupying the same orbital
Sahl#0630: TRUE
StellaAthena#3530: Remember the mom who used deep fakes to manufacture videos and photos of her teenage daughter’s cheer rivals vaping and doing drugs and nude?
It’s not so clear that that actually happened.... https://www.washingtonpost.com/technology/2021/05/14/deepfake-cheer-mom-claims-dropped/
bmk#1476: i dont remember this but im not surprised that this both became a thing and then later was doubted as to whether it was actually a thing
gwern#1782: (well, that's what everyone said at the time. the smoking part was too good and showed no deepfake sign and why would a soccer mom be able to pull it off anyway)
inox#5400: I like the idea that the deepfake legal defense can move faster than actual deepfakes
bmk#1476: even despite this, i still think the (potential) impact of deepfakes is something that people vastly overestimate because of how tangible they feel
bmk#1476: like if youve been on the internet for more than 5 minutes, you need to get used to text, images, and video being fake and malicious, even pre-DL
StellaAthena#3530: This is a bit different in that the cops are (were?) the ones making the claim
StellaAthena#3530: The media ran with it, but she was charged with some cyber crimes
bratao#6397: https://thegradientpub.substack.com/p/update-1-fbi-usage-of-facial-recognition
bratao#6397: An article citing EleutherAI 🥰
EricHallahan#1051: I still need to get dark mode working. `:|`
gwern#1782: wait, the fbi uses rotary now? wow, it's really catching on
bratao#6397: @Deleted User really went too far now
StellaAthena#3530: @chilli was quoted in it!
bmk#1476: :tribalism: https://cdn.discordapp.com/attachments/729741769738158194/843186749907271700/unknown.png
mgostIH#0245: Lucidrains single handedly fighting crime across the entirety of the US
nostalgebraist#3542: what’s your best guess for what ada is? 1.5b?
in the tests i’ve seen, it performs roughly as well as gpt2 1.5b, possibly a bit worse on avg
bmk#1476: my guess is either 125M or 350M, because that's where the Lambada/hellaswag numbers line up
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/843191439978266654/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/843191766684532746/unknown.png
bmk#1476: lambada is close to 350M (slightly worse)
bmk#1476: piqa is also slightly worse than 350M
bmk#1476: winogrande slightly better than 350M
bmk#1476: hellaswag close to 125M
kindiana#1016: 👏 acc 👏 norm 👏
bmk#1476: oh right
bmk#1476: ada https://cdn.discordapp.com/attachments/729741769738158194/843192226619850822/unknown.png
bmk#1476: ok so perfectly spot on for 350M
nostalgebraist#3542: ahh
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/843192492610289744/unknown.png
bmk#1476: for some reason, gpt3 destroys the corresponding gpt2 of the same size
nostalgebraist#3542: i forgot the gpt3 paper had numbers for the small models, although of course it does
bmk#1476: this isn't surprising tho because our neo models also destroy the corresponding gpt2 models
bmk#1476: i think it's just the data
EricHallahan#1051: (GPT-2 is crap)
bmk#1476: yeah
nostalgebraist#3542: i’d love to see neo 2.7b vs babbage on these metrics
bmk#1476: neo 2.7b loses out a bunch lol
bmk#1476: my guess is it's because pile has a bunch of stuff like GitHub and arxiv that isn't helpful for standard benchmarks but that we want anyways
bmk#1476: so basically the benchmarks suck
EricHallahan#1051: Effectively, GPT-Neo is :chad: because it does worse lol
bratao#6397: My suspicion is that a good part of the good results of the GPT-3 is thanks to the "books" dataset, which should probably be a very clean copy of libgen
nostalgebraist#3542: i guess what i mean is "do we know babbage = 6.7b"
bmk#1476: still no
bmk#1476: lemme get you all the lambada results i have
bmk#1476: https://cdn.discordapp.com/attachments/788870744623939594/829957199828090900/unknown.png babbage
bmk#1476: https://cdn.discordapp.com/attachments/788870744623939594/829957389809483786/unknown.png curie
bmk#1476: https://cdn.discordapp.com/attachments/788870744623939594/840253427846348840/unknown-59.png davinci
bmk#1476: (ignore the stderr stuff)
bmk#1476: davinci https://cdn.discordapp.com/attachments/729741769738158194/843203419049426974/unknown-62.png
bmk#1476: i can run hellaswag on curie and babbage too, one sec
bmk#1476: babbage https://cdn.discordapp.com/attachments/729741769738158194/843204016174661632/unknown.png
bmk#1476: curie https://cdn.discordapp.com/attachments/729741769738158194/843205055611797534/unknown.png
bmk#1476: ada https://cdn.discordapp.com/attachments/729741769738158194/843205593824755792/unknown.png
bmk#1476: babbage https://cdn.discordapp.com/attachments/729741769738158194/843205707515166731/unknown.png
bmk#1476: curie https://cdn.discordapp.com/attachments/729741769738158194/843206080343048213/unknown.png
bmk#1476: (the stderrs change because for the bigger models, i set the limit lower to protect my wallet lol)
bmk#1476: davinci https://cdn.discordapp.com/attachments/729741769738158194/843206347327930398/unknown.png
bmk#1476: @nostalgebraist that should be enough data right?
nostalgebraist#3542: yes, thanks!
bmk#1476: awesome
kinoc#5731: https://venturebeat.com/2021/05/15/gpt-3s-free-alternative-gpt-neo-is-something-to-be-excited-about/
gwern#1782: (already discussed to bmk's dismay)
kinoc#5731: ah, thanks, searched for "venturebeat" and didn't see it come up
EricHallahan#1051: We need a real press kit.
Daj#7482: Whoever wrote this didn't even read the literal first question in our FAQ lol
Daj#7482: I wouldn't worry too much
EricHallahan#1051: Well I guess that is a good point.
bmk#1476: list of things they got wrong: pile release date, eleuther founding member list, OA API model sizes (this one is forgivable i guess), no idea where the heck they got the 500B token count number
Daj#7482: lol check out this poorly rephrased version https://www.thespuzz.com/gpt-3s-no-cost-option-gpt-neo-is-some-thing-to-be-excited-about/
bmk#1476: ugh SEO spam
Daj#7482: Also I just noticed this https://cdn.discordapp.com/attachments/729741769738158194/843217173094989874/Screenshot_from_2021-05-15_22-03-45.png
Daj#7482: This...is wrong, right?
Daj#7482: I'm not having a stroke
bmk#1476: it's wrong yeah
Daj#7482: lmao
bmk#1476: should be winogrande
Daj#7482: (in the original too)
bmk#1476: lol
bmk#1476: windogrande sounds like it should be a window cleaning product
bmk#1476: but like man how does this kind of thing happen
bmk#1476: literally half of these mistakes could be sorted out by literally reading the faq
Daj#7482: See it as an opportunity to upgrade your Gell-Mann Amnesia resistance lol
bmk#1476: like I'm not even angry at the author or anything I'm genuinely curious how the heck they came up with that founding member list or the pile release date
EricHallahan#1051: Ironically, I don't actually make it clear that Stella is someone who should be asked questions about things in the FAQ, because she isn't O5 but is not obviously Level-5 to new users lol
zphang#7252: probably by reading other articles and trying to condense that content
bmk#1476: like, why would they drop Connor from the list?? if anything, Connor is the last person I'd expect someone to just forget to mention
EricHallahan#1051: Like he is in more media than the rest of us.
bmk#1476: yeah like all the interviews and whatever
EricHallahan#1051: That is a pathetic error to make.
Daj#7482: It could be the first three authors of the Pile paper maybe? :thonk:
Daj#7482: But yeah of all people to miss lol, my job is to be the face
bmk#1476: yeah but the order is wrong for that
EricHallahan#1051: They are not academically minded obviously.
bmk#1476: but it's not alphabetical order either!
bmk#1476: also the pile release date
zphang#7252: "change up the order so it doesn't look like you copied"
bmk#1476: also they link to the pile paper, which is clearly posted on Dec 31 2020, but they claim it was released in July 2020?
Louis#0144: I wonder if they’re in the discord
Louis#0144: Lmao
bmk#1476: the only conceivable way that happened is they mixed up the founding date, but i have no idea how they got that date without seeing any of the other info
bmk#1476: if they're reading this right now, what i want to know above all else is how the heck this happened
EricHallahan#1051: They just never made it to the website clearly.
Daj#7482: If this was a LM output, Gary Marcus would make a twitter thread about how it shows they are a dead end
bmk#1476: i literally cannot even figure out how this could have happened
Daj#7482: not giving a shit, probably?
Daj#7482: tight deadlines
Daj#7482: Imagine you had to crank out like 5 of these a day
Teemochu#8740: Obviously they prefer people who are more based about :catgirl3:
Teemochu#8740: and they couldn't get Gwern's real name
EricHallahan#1051: That reminds me, can the Pile authors take a look at the Pile FAQ section? It needs to be updated.
EricHallahan#1051: Or at least reviewed.
EricHallahan#1051: So it clearly is a soap.
zphang#7252: > The Pile is a 1.25 Terabyte dataset
wait
bmk#1476: wait where did you see that?
bmk#1476: the article says the 800gb figure
zphang#7252: https://www.eleuther.ai/faq/ lol
EricHallahan#1051: I told you it needed an update.
zphang#7252: lol yea that's why I looked at it
EricHallahan#1051: There are three numbers floating around: 800, 825, and 1.25.
zphang#7252: 800 is a rounding of 825, both appear in the paper so either is fine
EricHallahan#1051: Yeah, I'm just making the point that it can be confusing.
EricHallahan#1051: I know where they come from.
zphang#7252: 1.25TB looks like the only thing that desperately needs updating
zphang#7252: I would change it to 825gb
EricHallahan#1051: Yeah, I agree. And the mention of the Pile channel.
zphang#7252: oh right, that's gone too lol
EricHallahan#1051: We have already had one person pop in and be confused about that.
nostalgebraist#3542: i went and eyeballed these vs the paper, as i'm sure you have done in the past... just to confirm, does this sound about right?
- ada = 355M
- babbage = 1.3B?
- curie = 6.7B?
- davinci = 175B
bmk#1476: that sounds like the right ballpark, yeah
nostalgebraist#3542: thanks!
bmk#1476: i can get you more tasks to nail down babbage and curie more definitively
bmk#1476: just lmk which tasks you want and i can run it
gwern#1782: (if connor is the face, am I the heel)
Daj#7482: I fear even asking what this means
alexyz#3459: how do you know ada's 355M?
EricHallahan#1051: He stared at numbers.
finetune#0907: just scroll up some, very nice numbers
EricHallahan#1051: And they started talking.
alexyz#3459: ah ok
alexyz#3459: Why tf are the https://vast.ai/ prices so much higher
alexyz#3459: like a month ago it was only $0.6 an hour for V100s
alexyz#3459: now it's more like $1 or $1.5 an hour
kindiana#1016: crypto stonks
alexyz#3459: can you... even mine crypto on V100s?
kindiana#1016: ofc lol
gwern#1782: maybe word got out about VA
cognomen#6297: has the volume changed at all?
cognomen#6297: is there anything done to stop reselling on the platform
gwern#1782: why would they want to stop reselling?
alstroemeria313#1694: yes
alexyz#3459: this is sad
alexyz#3459: any other cheap GPU platforms?
alexyz#3459: gpu.land died so now I can't find any nice alternatives 😐
alstroemeria313#1694: datacrunch.io
alexyz#3459: wow thanks, especially look at those A6000 prices 🙂
alstroemeria313#1694: sometimes they don't have any
alstroemeria313#1694: one day i had to try four times every couple of hours
alstroemeria313#1694: they're also testing 80gb A100 boxes
alexyz#3459: OOOOH that'll be nice
alexyz#3459: but anyone else have any alternatives? the more the merrier lol
alstroemeria313#1694: the 8x A6000 is either no nvlink or only between pairs
alstroemeria313#1694: idk which
alstroemeria313#1694: the A100s are gonna be nvlink
alstroemeria313#1694: like the V100s are
alstroemeria313#1694: @alexyz also, datacrunch lets you open ports on the boxes
alstroemeria313#1694: like i usually start a `python3 -m http.server` process and tunnel it over ssh
alstroemeria313#1694: but there i do not have to tunnel it and it's a good deal faster
alexyz#3459: 👍
bmk#1476: https://huggingface.co/lg/ghpy_8k more github model
bmk#1476: have fun
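For anyone who wants to try it, something like the following should work, assuming the checkpoint loads like a standard GPT-Neo causal LM from the Hub (an untested sketch):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumes lg/ghpy_8k behaves like any other GPT-Neo checkpoint on the Hub.
tok = AutoTokenizer.from_pretrained("lg/ghpy_8k")
model = AutoModelForCausalLM.from_pretrained("lg/ghpy_8k")

inputs = tok("def fibonacci(n):\n", return_tensors="pt")
out = model.generate(**inputs, max_length=64, do_sample=True, temperature=0.8)
print(tok.decode(out[0]))
```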
Teemochu#8740: You're not going to beat crypto and right now mining is about $1 an hour on a 3090
Teemochu#8740: if you beat crypto prices something is up with the market
bmk#1476: @EricHallahan wanna give 8k iter github model a try
EricHallahan#1051: Maybe in a little bit here.
alexyz#3459: what is that?
alstroemeria313#1694: who trained that one?
bmk#1476: me
bmk#1476: im training a github model and posting checkpoints every few k iters
alexyz#3459: what is a github model
bmk#1476: i have 2k and 4k ones too
bmk#1476: a model.. trained on github
alexyz#3459: is it just trained on code?
StellaAthena#3530: A model trained on GitHub
alexyz#3459: Ah ok
bmk#1476: just python, to be specific
StellaAthena#3530: Python is a language
alexyz#3459: that'd be useful for code completion
StellaAthena#3530: The model speaks pythonic
bmk#1476: sneklang
alexyz#3459: it'd be interesting to just train it on all of github
alexyz#3459: and see if it could generalize from what's already there
bmk#1476: here are older checkpoints https://huggingface.co/lg/ghpy_2k https://huggingface.co/lg/ghpy_4k
bmk#1476: in case you want them for some reason
alexyz#3459: 👍
bmk#1476: but yeah pls give the 8k model a try
alexyz#3459: are you finetuning it?
bmk#1476: and lemme know how it goes
bmk#1476: yes
alexyz#3459: ok
alexyz#3459: If distilling models actually works well, wouldn't it make sense to distill a large model (such as maybe 6.7B or the future 175B which shall come eventually™️) to get smaller models?
bmk#1476: sure probably
kindiana#1016: it doesn't work that well :berk:
alexyz#3459: i'm hoping the tests go well
alexyz#3459: 😦
StellaAthena#3530: If you want to help make this a reality, we would love a hand integrating distilling into GPT-NeoX
EricHallahan#1051: https://eleuther.ai/faq
𓅬 gabriel_syme 𓅬#3220: i wish smth was up with the damn crypto market tbh
alexyz#3459: what do you mean?
𓅬 gabriel_syme 𓅬#3220: I mean I wish it went down so we can actually use those GPUs for something else
alexyz#3459: it did tho
alexyz#3459: did you not see Bitcoin crash from 60k to 48k?
alexyz#3459: but yeah would be nice
Kia#2550: Using those Awesome Nvadia GPU's
alexyz#3459: just look at the market rn lol, your dreams just came true, literally every crypto is in the red
alexyz#3459: and now it's getting off-topic imma leave
kurumuz#5695: coin boom is good for compute
kurumuz#5695: eth is moving to PoS
kurumuz#5695: but theyre still building fabs
kurumuz#5695: the next few years will be exciting
alexyz#3459: yes, but you have stuff like Chiacoin bringing up hard drive prices
𓅬 gabriel_syme 𓅬#3220: for whom?
𓅬 gabriel_syme 𓅬#3220: bitcoin isn't crypto is it? it's a part of it
alexyz#3459: Not just bitcoin is falling, and whenever bitcoin falls every other crypto falls :berk:
𓅬 gabriel_syme 𓅬#3220: they'll just switch to w/e else is doing well, won't they?
kurumuz#5695: idk, wishing crypto to die because it needs compute is an extremely narrow view
alexyz#3459: I think that the markets will adapt, and more supply will come to meet the demand
bmk#1476: *ahem*
alexyz#3459: off-topic? off-topic.
bmk#1476: politrib, no low brow crypto discussion
kurumuz#5695: well its about compute
kurumuz#5695: ¯\\_(ツ)\_/¯
alexyz#3459: *ehhhh is it?*
kurumuz#5695: yeah sure
bmk#1476: price speculation is *definitely* low brow and banned
bmk#1476: compute is kinda overdone too
alexyz#3459: 👍
StellaAthena#3530: Am I missing a typo? This error screen seems impossible... https://cdn.discordapp.com/attachments/729741769738158194/843319207117717504/Screen_Shot_2021-05-15_at_10.48.40_PM.png
EricHallahan#1051: I don't see anything obvious.
Louis#0144: U prob just need sleep
𓅬 gabriel_syme 𓅬#3220: it can only happen if it's not returning anything right
EricHallahan#1051: Does the function return a tuple?
StellaAthena#3530: Yes
StellaAthena#3530: And i've checked that it's non-empty
𓅬 gabriel_syme 𓅬#3220: :/
EricHallahan#1051: ¯\_(ツ)_/¯
nostalgebraist#3542: damn this is nerd-sniping me now
StellaAthena#3530: Copying and pasting the code into colab (it had been in a github directory I cloned) fixed it
EricHallahan#1051: Same
EricHallahan#1051: sounds like something wasn't being defined right
StellaAthena#3530: If you want to try to spot the error, the original notebook and code can be found here: https://github.com/EleutherAI/equivariance/tree/EMLP
nostalgebraist#3542: (my guess is jupyter autoreload weirdness)
nostalgebraist#3542: it looks like you recently deleted some lines near that one. possibly jupyter didn't re-import the code but did re-read the file when printing the traceback (or is it vice versa?). so you got the wrong line number in the traceback |
nostalgebraist#3542: it's rendering the current file, but using a line number from the file it imported which is older
StellaAthena#3530: ....
StellaAthena#3530: Wut
EricHallahan#1051: Oh yeah, that sounds like jupyter lol
𓅬 gabriel_syme 𓅬#3220: lol that's wild
nostalgebraist#3542: it happens to me semi-frequently
nostalgebraist#3542: yeah i guess this is more a python thing than a jupyter thing, using autoreload actually gets rid of it i think
nostalgebraist#3542: python traceback objects are wild. they can even prevent objects from getting garbage collected sometimes: https://cosmicpercolator.com/2016/01/13/exception-leaks-in-python-2-and-3/
gp#7155: hiiiiiii
gp#7155: so is anybody doing CPU inference with GPT-Neo
gp#7155: and if so what are memory requirements!
gammascalpset#9792: I wonder, if we got better at making language agents, if we could take a GAN approach to making an AI dungeon master?
gammascalpset#9792: like, assuming models that are clever enough to judge if a story is consistent, how much real data would it take for the discriminator to learn stories need to be consistent (and "fun")?
gammascalpset#9792: then you could tune the generator's input to get the kind of story you want 🤔
Daj#7482: I wouldn't do it using GANs, but you're describing a good usecase of #deleted-channel methods haha
gammascalpset#9792: using eegi stuff for this might be too painful for humans? d&d sessions are quite long
gammascalpset#9792: unless the dungeon master is quite good to start
Daj#7482: Sure, but I don't think a GAN would make that easier
Daj#7482: also iirc GANs for text just don't work well
Gurkenglas#7362: Why is the word2vec matrix called an embedding matrix if it reduces the number of dimensions? Shouldn't it be called a projection matrix? |
nev#4905: each word is "embedded" into the latent space
Gurkenglas#7362: oh, so it's like a family of functions from the one-point space of each word into the latent space? Kay.
gammascalpset#9792: I guess the set of words isn't a space
gammascalpset#9792: so the terminology is kind of arbitrary (correct me if I'm wrong, someone who's good at maths?)
nev#4905: well you can interpolate between the words
nev#4905: iirc google made a tool that converts back from embedding to characters using an RNN
gammascalpset#9792: I'd rather say you can interpolate between vectors that corresponds to words
gammascalpset#9792: can you interpolate the words "circle" and "square" without resorting to vectors?
gammascalpset#9792: or if you use this tool to interpolate, you won't get an answer that makes sense for most pairs of words
nev#4905: it turned out to be mostly lexical interpolation
nev#4905: and it wasn't exactly word2vec
nev#4905: but my point is that there are still possible vectors inbetween the words
nev#4905: and maybe in the future they will become real words
Gurkenglas#7362: hmm you could also say that the discrete space of words is embedded into the latent space. i want to say that it is embedded (using the identity function) into the space of vectors of logits (the same type of vector that the model outputs), and then that is projected into the latent space. What happens when you multiply the one-hot vectors by 2? Does it get "more certain" of the prompt being what it is?
gammascalpset#9792: I think we're taking the vector-space as some kind of fundamental truth, but it's more like a made-up function and word vectors are more like approximations for a certain purpose rather than perfectly describing the word
gammascalpset#9792: so there's no guarantee that the analogy won't break if you use it for anything but what it's been created for (being fed into models)
gammascalpset#9792: so imo arguments for why word vector spaces are spaces don't prove that the set of words is a space (?)
Gurkenglas#7362: the set of words would trivially be the discrete space.
Gurkenglas#7362: (Similarly, what happens when you add 1 to every entry of the one-hot vectors? If the analogy holds, it should change nothing.)
gammascalpset#9792: iirc for most embeddings, it just means the word is used more often but with the same meaning |
gammascalpset#9792: or maybe for logits it changes the meaning, to keep the same meaning you'd have to double the input odds rather than the log? I forgot
gammascalpset#9792: but there's no guarantee the result makes sense, anyway
gammascalpset#9792: that you're not allowed to do, the result wouldn't be in the same set as the operands
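As a concrete check of what those operations do at the embedding layer (a toy sketch, not anything from the thread): multiplying the one-hot by 2 just doubles the embedded vector, while adding 1 to every entry mixes in the sum of all embedding rows, so it is not a no-op.

```python
import torch
import torch.nn.functional as F

vocab_size, d_model = 5, 3
E = torch.randn(vocab_size, d_model)                       # embedding matrix
one_hot = F.one_hot(torch.tensor(2), vocab_size).float()   # word index 2

assert torch.allclose(one_hot @ E, E[2])                   # lookup == one-hot times matrix
assert torch.allclose((2 * one_hot) @ E, 2 * E[2])         # scaling the one-hot scales the embedding
assert torch.allclose((one_hot + 1) @ E, E[2] + E.sum(0))  # adding 1 mixes in every row
```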
Gurkenglas#7362: Perhaps there's an implicit softmax at the start which can be left out because by default they only input fixed points of softmax?
Gurkenglas#7362: Is there some higher-level view of GPT's architecture that justifies its choices? category theory maybe
StellaAthena#3530: Genuinely unsure if this is a joke, but no there is no category theoretic justification of GPT’s architecture
gammascalpset#9792: GPT trains word embeddings based on the loss gradient, right?
StellaAthena#3530: @gammascalpset This is a good intro piece on transformers. http://jalammar.github.io/illustrated-transformer/
gammascalpset#9792: I don't think transformers necessarily require trained embeddings by design? You could still feed it word2vec if you wanted - not that it'd be a good idea, but you could
mgostIH#0245: Ye, just like any part of the network after all!
Gurkenglas#7362: you mean, you could freeze any part of the network after a little bit of training and it would only do a little bit of harm to final performance?
Gurkenglas#7362: Suppose we have 45 = (10 choose 2) words, each embedded as a latent vector with 2 ones and 8 zeros. Then there's an arbitrary blackbox, and then we go back through the transpose of the embedding matrix. what probability distributions over words can be constructed like this?
mgostIH#0245: No I mean that you could freeze whatever you want, but it might not be a good idea
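Concretely, freezing the embedding layer (after optionally copying in word2vec vectors) is only a couple of lines in most frameworks; a minimal sketch with a HuggingFace-style model, using gpt2 purely as a stand-in:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
# Optionally copy pre-computed vectors (e.g. word2vec) into the embedding matrix first,
# then freeze it so training only updates the rest of the network.
for param in model.get_input_embeddings().parameters():
    param.requires_grad = False
```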
Goop#8368: This has a lot of good sub-links in it too 🤔 nice!
StellaAthena#3530: Yeah, it’s my go-to intro. It’s pinned here or in #research but I think sunken under memes lol
Goop#8368: Oh yup, there it is haha, buried under memes (that deserve to be pinned)
Goop#8368: The pins in #alignment-general were neat too, made my way through most of the videos. Had never heard of alignment prior :p
StellaAthena#3530: Yeah it’s probably worth posting a semi-regular PSA: we try to pin educational materials, especially relatively introductory ones, in the more advanced channels. If you’re interested in conversations going on but don’t have the background / knowledge base, checking out the pins is a great place to start
EricHallahan#1051: Project channels tend to have both background information and information relevant to the state of their development in the pins as well.
camel_case#8962: Hey, total newbie here. Looking for a set of pre-trained models (I’m sure someone here has a few) or a good guide to set up a generic model. |
EricHallahan#1051: First, welcome!
camel_case#8962: Thank you dev 🙂
EricHallahan#1051: You may want to consult our FAQ.
bmk#1476: ~~astronauts, war heroes~~
EricHallahan#1051: https://eleuther.ai/faq
bmk#1476: https://huggingface.co/eleutherai
bmk#1476: these are some pretrained models you can use
bmk#1476: you can google around for info on how to use huggingface models
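For the impatient, a minimal sketch of that "google around" step, assuming the standard `transformers` library (the 2.7B checkpoint needs roughly 10 GB of RAM in fp32):

```python
from transformers import pipeline

# Downloads the checkpoint from the Hugging Face Hub on first use.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")
print(generator("EleutherAI is", max_length=50, do_sample=True)[0]["generated_text"])
```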
EricHallahan#1051: Well I guess that is a bit more direct than me lol
camel_case#8962: A 2.7 BILLION PARAMETER MODEL?
EricHallahan#1051: \*pfft\* Peanuts :berk:
bmk#1476: shh, lower your voice
bmk#1476: no need to scream
camel_case#8962: A 125 million parameter model?
EricHallahan#1051: If you just want to play around with it use this notebook, it has everything you need to just run it.
EricHallahan#1051: Okay, 125M is peanuts, 2.7B is a little bit more.
camel_case#8962: Thank you so much
camel_case#8962: My area of expertise is in graphics and systems with c++, this is a bit advanced for me but I’m happy to sit on the sidelines and learn
camel_case#8962: I’ve been watching the GitHub page for quite some time
alexyz#3459: 6.7B model shall be amazing |
EricHallahan#1051: Most people find this field daunting, even those with highly technical backgrounds.
alexyz#3459: when it comes soon™️
bmk#1476: i mean, ML is one of the least theory dense fields out there
camel_case#8962: If this is comparable to gpt3, it might be worth investing in marketing to small, tech-oriented or adjacent businesses that can’t hope to get past the gpt 3-Microsoft licenses
bmk#1476: it's all basic math with a whole heaping load of black magic empiricism that works for no good reason
camel_case#8962: It might be a good idea to put this up on Kickstarter with a c3.ai type applicable business strategy
EricHallahan#1051: Whoa, hold your horses there.
alexyz#3459: I was reading some ML papers from like 2000, they were very interesting
bmk#1476: > business strategy
we don't do that around here
EricHallahan#1051: Did you come here from the VentureBeat article?
camel_case#8962: Oh no I didn’t mean “turn a profit” or anything like that, I meant turn this into an applicable set of applications to help people
bmk#1476: i don't know what this c3.ai thing is but it sounds like some kind of startup, which we are not
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/843604673030127616/Screenshot_2021-05-16-15-43-32-195_com.android.chrome.png
bmk#1476: presented without comment
alexyz#3459: They just kinda throw the model out into the world, and if someone wants to use it they can. No API or other stuff like that is coming out of EleutherAI
bmk#1476: we're fundamentally a volunteer research group
alexyz#3459: yes
camel_case#8962: I just think a lot of people would find this useful and would be worth spreading the word around and seeing what people do with it |
bmk#1476: we don't work on downstream applications
EricHallahan#1051: Well maybe eventually someone else will, but we won't be involved.
alexyz#3459: or any tech support :berk:
bmk#1476: we don't really care about putting in effort for spreading the word - if the word spreads, cool, but we're not really going to do much about it either way
bmk#1476: our main priority is doing research in the areas that we're interested in
EricHallahan#1051: TBH, exposure has been more of a bad thing for us than a good thing.
camel_case#8962: If I were an executive at a large corporation and saw this, I’d try to steal the intellectual property and market it myself — because that’s pretty much been the story of software since the 80’s
bmk#1476: well they can go do that
EricHallahan#1051: We don't care, the models are licensed under Apache 2.0.
cfoster0#4356: *Stealing something freely given* :thonk:
EricHallahan#1051: The code is MIT and Apache.
bmk#1476: they're gonna have a lot of competition though
camel_case#8962: Maybe steal was the wrong word, but they’d take control of the idea
bmk#1476: anyone can have our stuff
cfoster0#4356: But yes, you're right we've already seen people repackaging the models
EricHallahan#1051: Licensing was a deliberate decision.
bmk#1476: and yeah personally i really don't like the people who do that, but there's not really anything we could do about it that doesn't compromise our core values
EricHallahan#1051: Packaging it up under GPL or something would make many of the downstream applications nonviable.
bmk#1476: meh they won't give a shit about the license
EricHallahan#1051: And that too. |
camel_case#8962: I think you’re sitting on an untapped well of value, and showing it to your friends and colleagues in applicable industries for them to use freely would be a good idea
bmk#1476: i guess we could do the model fingerprinting thing but idk where that's at
camel_case#8962: But I think just putting it out there is kind of asking for someone greedy enough to take it
camel_case#8962: Like, put your name behind it so that other people know where it comes from
camel_case#8962: Know what I mean?
bmk#1476: that's not really what interests us though
EricHallahan#1051: We aren't making any money from this.
bmk#1476: we're here to do research
EricHallahan#1051: And we do not intend to.
EricHallahan#1051: ¯\_(ツ)_/¯
bmk#1476: if someone else can use our model for other stuff, cool, have fun, but it's not really our job to market it
cfoster0#4356: Last we discussed this it seems like the consensus was it would work? Like the backdooring thing
bmk#1476: i don't really remember tbh
camel_case#8962: Well, do you mind if I tell my friends and colleagues about it and ask if they can use it while still preserving where it came from? I’m just sick of very profitable businesses profiting off of open sourced software — fraud in my opinion
EricHallahan#1051: You can do anything you want. We would prefer you to give us a mention, but otherwise we don't care.
EricHallahan#1051: IIRC this is in the FAQ.
bmk#1476: i mean that would be awesome if you want to go for that, i agree that it's annoying to see people profiting off open source without adding value
alexyz#3459: open-source > closed-source
cfoster0#4356: :tribalism2:
bmk#1476: it's just not something we spend a lot of our time thinking about |
camel_case#8962: Yeah, that’s what I’m getting at. Think about Windows running Linux under the hood, SAS in captive regulatory markets, Roper Technologies—their entire company
bmk#1476: Oracle™
camel_case#8962: Exactly!
EricHallahan#1051: We all hate it.
EricHallahan#1051: But the thing is that there is little for us to do about it.
bmk#1476: i mean yeah if you wanna go for it, that would be amazing
Goop#8368: LICENSE.fuck 😎
camel_case#8962: I would probably start with a website that lists historical and speculative use cases for large, general models, then traces your project with a kind of “certified open source” kind of branding and gives it a kind of public image that other companies can’t invade on without serious effort
camel_case#8962: Basically c3.ai’s website but replace every single price tag and mention of “enterprise” or “business” with “open source” and “free”
bmk#1476: as long as you make clear it's not eleuther affiliated, you can do basically whatever
gwern#1782: people always think this and then they close something down or require an email to sign up and then are shocked when they get 1% of the usage of the open thing. "beware trivial inconveniences"
camel_case#8962: The two solutions that come to mind are 1. to open-source via a separate project a website maintenance and/or outreach team — or 2. it would be trivially easy to set up a Patreon for a project that already has proof of concept and then take the proceeds to hire a webmaster and an “applicability outreach” person or whatever
bmk#1476: again, as long as it's clearly indicated that it's not an official eleuther thing, go nuts with whatever you wanna do
bmk#1476: thats the beauty of open source
bmk#1476: you can just do whatever with what we put out
camel_case#8962: Ok
Teemochu#8740: what if I say afillyated instead? :ponythink:
EricHallahan#1051: To be clear, we are not actively looking for more exposure from media, but we don't have a problem with people wanting to share our technology.
bmk#1476: yeah pls try not to flood us with more media attention
EricHallahan#1051: In fact I would say we would be pleased to see it used places.
EricHallahan#1051: We just don't want media attention, as it tends to get in the way of research.
camel_case#8962: I think it’s funny you’re using Enron emails for the pile
EricHallahan#1051: It is a pretty small dataset and we use it often to test code.
camel_case#8962: Ultimately all I’m saying is that I think this is a particularly important area that has a lot of current and future value, and at least historically what I’ve seen happen to open source projects that don’t interact with actual people in the world is that the projects get scooped up into captive markets and corporations, prime examples are SAS, Roper Tech, and to a lesser extent Oracle and Microsoft
camel_case#8962: All I’m saying is that I’d really hate to check back here in 5 years and see a boneyard
EricHallahan#1051: Oh, and I agree with your assessment.
gwern#1782: nothing wrong with bones. to everything there is a season
EricHallahan#1051: It is something to be concerned about, but right now we are not at any risk of disappearing.
bmk#1476: 5 years is an eternity in ML-time
bmk#1476: i wouldnt even be >90% confident that OA/DM/GB/FAIR still exists in 5 years
EricHallahan#1051: We don't know what we are working on right now will be relevant in a month.
cfoster0#4356: If this effort is successful I would hope Eleuther would be superceded by something else by 2025
Teemochu#8740: > Ultimately all I’m saying is that I think this is a particularly important area that has a lot of current and future value, and at least historically what I’ve seen happen to open source projects that don’t interact with actual people in the world is that the projects get scooped up into captive markets and corporations, prime examples are SAS, Roper Tech, and to a lesser extent Oracle and Microsoft
OpenAI already exists, for a given company to use something actually open instead would be a net benefit because that's one less corporation that can impose rules
Teemochu#8740: Open weights also means finetunes that are to a given application's (or even sufficiently motivated individual's) likings
Teemochu#8740: individual at least for 2.7B, obviously you won't be tuning 200B on a gaming computer any time soon
Sid#2121: I wonder how feasible it would be to do soft prompt tuning on a 200B model on a consumer gpu though :thonk:
Sid#2121: probably not impossible, with cpu offloading
camel_case#8962: I think coming up with a value proposition would both help guide your research and make your projects accessible to people
camel_case#8962: I don’t think anyone is more qualified to do that than the lot of you |
Teemochu#8740: 512GB of RAM is still a bit large, but it's actually present in the Apple monstrosity so I do agree that would be easier than coming up with the VRAM
EricHallahan#1051: No, just use ZeRO-Infinity and 500 hard drives :omniberk:
Teemochu#8740: Be like Elon and sacrifice Rockets to the cause
EricHallahan#1051: ZeRO-Infinity must literally eat SSDs
camel_case#8962: Think about all the police departments licensing terrible predictive policing models that we (the taxpayers) pay for just to hurt us
bmk#1476: we already know what we want to do, we don't need more research direction guidance
Teemochu#8740: That does not sound like our business at all, in either direction
Louis#0144: The key is radioactive data and kill switches obv
EricHallahan#1051: The solution is to not use predictive policing.
Louis#0144: Leak the kill switch in the dark web
Louis#0144: 🤷♂️
Louis#0144: 😉
Louis#0144: You have no obligation to put a warranty on your models or data
EricHallahan#1051: Anything we build is not going to solve that problem, because better models don't solve it.
Teemochu#8740: You *kinda* do in the same sense that you can't build an intentionally broken playground on your 5 acres
Louis#0144: Perhaps
camel_case#8962: This is exactly why a value proposition from an unbiased group of researchers is important, both to see what your work can and can’t do. I like to think I’m a good programmer, certainly not a data scientist though, but I’d take your word for that
Teemochu#8740: like, if you put a swingset there and intentionally make it break if anyone over 100 pounds swings on it (by my calculations this means rope that breaks around 300 pounds of force), then you're at fault when the neighbor's teenage boy breaks a leg
gammascalpset#9792: money will end up giving incentives that are misaligned with (what I understand so far of) the mission of this org
gammascalpset#9792: tbh even large monetary donations might be dangerous |
gammascalpset#9792: an organization gets addicted to them, and to keep them coming you need to "show" results
gammascalpset#9792: which is not the same as "having" results
Louis#0144: Open source software intentionally always comes with no warranty though
Teemochu#8740: What if it's a commission though?
EricHallahan#1051: We explicitly do not take donations unless it is large enough for us to actually put into something useful.
gammascalpset#9792: you would create an incentive to work on projects on which you can apply this business model
Teemochu#8740: i.e. a one-time donation to make something (let's say some random furry named elongated_muskrat asked for a 1T model trained on commoncrawl and offered $100M for it, and the only restriction was all of commoncrawl must be used in one epoch and the weights must be released dual-licensed WTFPL and MIT)
Louis#0144: One time donation to make goose girl AGI
Louis#0144: it’s a language model tho and all it can do is honk
Louis#0144: But trust me it’s AGI
EricHallahan#1051: Personal donations are just not worth our time when we are regularly using hundreds of thousands of moneyunits of compute.
kindiana#1016: I've got no beak but I must honk
gammascalpset#9792: next thing you know you have a marketing team that hunts for these elongated_muskrat type people
camel_case#8962: I’m not talking about corporate money, I’m not even necessarily talking about crowdsourced Patreon type money (but I think it’s a good idea)
camel_case#8962: I’m just talking about showing what your work can do for people
Louis#0144: Like down stream tasks?
Teemochu#8740: Others can do that, that's a downstream application
Louis#0144: I do those here
Louis#0144: Ye I’m doing downstream
bmk#1476: simple, just put a sign on it that says |
```THE SWINGSET IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SWINGSET OR THE USE OR OTHER DEALINGS IN THE
SWINGSET.```
Louis#0144: Exactly
EricHallahan#1051: But you are doing downstream *research.*
camel_case#8962: Yes
EricHallahan#1051: That isn't relevant to this conversation as I understand it.
camel_case#8962: I mean, @Louis you are?
Louis#0144: Yes
Louis#0144: But I’m the exception here
Louis#0144: I do creative NLG
gammascalpset#9792: the work eleutherai does is not necessarily aimed at "doing something for people" in the short term
Louis#0144: Not many others here do
camel_case#8962: @Louis can you point me to some of your work?
gammascalpset#9792: it might be useful to work on projects which is known aren't going to be useful until AGI arises, eg. alignment stuff |
bmk#1476: louis just does his own thing mostly lol
EricHallahan#1051: Website has it.
EricHallahan#1051: https://www.eleuther.ai/publications/
Louis#0144: That’s theory work. I don’t have anything that isn’t under review that’s NLG, and I don’t have any downstream publications with Eleuther yet
Louis#0144: I only have a theory pub with Eleuther
Teemochu#8740: To be fair, there's a group doing creative NLG on Neo; their server is actually bigger than ours now
Louis#0144: Yeah
Louis#0144: They don’t publish though
Louis#0144: And I was here before they formed
Louis#0144: 🤷♂️
bmk#1476: we dont talk about they who shall not be named here
Louis#0144: Leo is right I kinda just do my own thing
EricHallahan#1051: *Yet*
Louis#0144: And drag people here along for the ride
bmk#1476: what a coincidence, me too
Louis#0144: I did briefly help with Neo
Louis#0144: For like
Louis#0144: A day
Louis#0144: I cried over mesh tensorflow
Louis#0144: That’s it tho |
Louis#0144: Fuck einops
Teemochu#8740: Distilling 6.7B to 4B would be an interesting paper
Teemochu#8740: (if it actually works very well)
EricHallahan#1051: We are already doing distillation research.
EricHallahan#1051: It is just in early development.
bmk#1476: you can start with 2.7B into 1.3B
Louis#0144: Did anyone do the super naive KL divergence distillation yet
bmk#1476: idk
gammascalpset#9792: there's other established ways of doing distillation??
Louis#0144: Yes
Teemochu#8740: I'd be curious about the results of a "do wtf you want (up to and including new architectures), come up with the best model in under 100M params" speedrun/leaderboard/etc
Louis#0144: Using embedding projection stuff
Louis#0144: Or doing it contrastively
Louis#0144: The latter I haven’t seen since the days of RBMs tho
Louis#0144: That’s how we were doing #carp
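For reference, the "super naive" KL-divergence distillation mentioned above is roughly the following, a minimal sketch in the style of soft-label (Hinton-style) distillation, not code anyone here posted:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions and push the student toward the teacher with KL divergence.
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2
```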
Teemochu#8740: Because to at least some extent that's analogous to "compress human knowledge [well, writings] into 200MB"
bmk#1476: i proposed this but with fixed architecture earlier
Louis#0144: 100M might be too tight
bmk#1476: mostly because i think allowing arch changes opens up a ton of loopholes
Louis#0144: Tbh |
bmk#1476: my proposal was 1B, evaluated on eval harness
Louis#0144: I think 1b is a better mark
Louis#0144: Yes
Louis#0144: I agree
EricHallahan#1051: There was a long conversation about this concept.
StellaAthena#3530: This is actively a WIP
Teemochu#8740: How about the code plus pickle must be under a gigabyte
Louis#0144: Yes I know but I was curious if he had runs
StellaAthena#3530: I showed you some of the code for this this morning
Louis#0144: Ye
Louis#0144: That’s why I asked
Louis#0144: Lmao
StellaAthena#3530: Ah
Teemochu#8740: where "code" means literally everything that's not available on condaforge (so anything on condaforge can be used for free)
cfoster0#4356: Like Hutter Prize style?
StellaAthena#3530: We’ve done very small (100M to smaller) models
cognomen#6297: I'd prefer a memory target
bmk#1476: what you're describing is literally a bigger version of Marcus Hutters thing
bmk#1476: i don't really like that personally
cognomen#6297: "must not use more than X GB" |
StellaAthena#3530: I want to get the distillation code integrated into NeoX before going hard on it
StellaAthena#3530: It’ll just be so much easier. The current distillation codebase can’t handle 500M models
Louis#0144: Why’s that
StellaAthena#3530: Because we started with a relatively small-scale proof of concept?
EricHallahan#1051: Because Hugging Face sux?
Louis#0144: No i meant like what’s going wrong
cfoster0#4356: I'm still partial to "make this network as small as possible, staying within 90% of the original performance on these eval harness tasks; anything goes"
Louis#0144: I’m curious
Louis#0144: Oh yeah honestly HF should just have built in tools for distillation
Louis#0144: Idk why it doesn’t
bmk#1476: this sorta feels not well defined enough for my taste either
EricHallahan#1051: Assume everything not explicitly defined is a free for all.
Louis#0144: Just have one leaderboard for 100M, 500M, 1b, 2b
Louis#0144: Everyone has to upload their code
Louis#0144: Code will be audited
Louis#0144: And for 2b you can’t use the 1.3b model
Teemochu#8740: The main reason I said 100m at first is it encourages people to experiment with different archs on their own equipment
bmk#1476: i still think a single thing for best gpt-1B, anything goes for training is the best option
bmk#1476: ooh, and we should design our own custom eval set from scratch
StellaAthena#3530: @Louis It has a naively implemented pipeline, it doesn’t distribute over many GPUs easily |
Teemochu#8740: ~~fimfiction~~
bmk#1476: so people can't cheat by tuning on the test set
bmk#1476: :no:
cfoster0#4356: What part? We just check that the model hits 90% of baseline on each of the tasks and what size the weights are
StellaAthena#3530: @Louis the distillation code is here: https://github.com/EleutherAI/distilling/
cognomen#6297: preferably multiple choice tasks
bmk#1476: ok time to ignore the provided model and distill a 200B model instead
cfoster0#4356: Exactly
StellaAthena#3530: We’ll increase the size it can handle reasonably by an order of magnitude by integrating it into NeoX’s already existing pipeline
Teemochu#8740: If it produces a better model, then yeah that would actually be better for downstream applications on low-GPU environments
bmk#1476: it would completely violate the premise of "make this network as small as possible, staying within 90% of the original performance on these eval harness tasks; anything goes"
bmk#1476: also i think stipulating being "within x%" also makes no sense
bmk#1476: like are you going to punish it if it does better than the original model?
bmk#1476: what if all/no models get past the threshold
cfoster0#4356: fine "no less than 90%"
bmk#1476: why not just make it higher perf = better, problem solved
bmk#1476: what's the point of even having a threshold
kindiana#1016: so people don't submit a 1 parameter model?
kindiana#1016: if you want a single benchmark of model compression, you need a threshold
kindiana#1016: otherwise you can make a performance-size curve
bmk#1476: again why not just make it fixed model size, lower task loss is better
kindiana#1016: that works too 🤷
kindiana#1016: I think before we do distillation, we should do one on convergence speed :berk:
kindiana#1016: gotta make the big model first
cognomen#6297: just remembered there was an event like this https://micronet-challenge.github.io/
Teemochu#8740: One way to get convergence speed is new archs though 😛
Teemochu#8740: hence the 100m idea, basically saying "prove whatever concept you want to" and providing an anchor for interpreting performance
kindiana#1016: I mean, that's fine
bmk#1476: actually i think x% of the performance is ill defined too
bmk#1476: you'd need to figure out what that means per task
bmk#1476: 1% worse acc means a huge difference on some tasks and basically no difference on others
cfoster0#4356: No matter what you need to figure out how to weigh between tasks
cfoster0#4356: Regardless of the proposal
kindiana#1016: eh I think downstream tasks will be too easy
kindiana#1016: just do pile pbp
cfoster0#4356: Even better :hap:
Teemochu#8740: you mean bpb?
kindiana#1016: yeah
kindiana#1016: 🤦
Teemochu#8740: pits ber pyte |
cfoster0#4356: Tbh I'm relatively indifferent between ideas like "hit Neo 1.3B's bpb with as small of a model as you can" and "get the smallest bpb you can with 1.3B parameters"
kurumuz#5695: yeah seems the same 🤔
kurumuz#5695: well with the first you can work with less compute.
EricHallahan#1051: I think the first is more accessible.
kindiana#1016: parameters is not a great yardstick imo
kindiana#1016: something like Xpu-core-days would be better
EricHallahan#1051: That is a good point.
EricHallahan#1051: But it is hard to control/verify.
kindiana#1016: well, if the limit is small, you can just run the code
Louis#0144: It’s a tongue twisters
EricHallahan#1051: I guess that forces reproducibility.
kindiana#1016: if you set a limit of 1 tpuv3-8 day, we can process like 8 submissions per day
kurumuz#5695: string[0] = 't'
kurumuz#5695: :P
cfoster0#4356: Fair yeah. I only said it because of the specific focus on compressing/distilling
kurumuz#5695: compression is all you need
FerroMagnetic#6975: Given the discussion, isn't "... project is to build a GPT3+ sized language model ..." somewhat incorrect? It seems you want to build GPT3+ "equipotent-yet-smaller" model.
EricHallahan#1051: Do you mean us as an organization?
FerroMagnetic#6975: I mean the channel's topic
kurumuz#5695: 🤔 |
FerroMagnetic#6975: Or it needs a slight rewording
cfoster0#4356: The current discussion is on a side quest about distillation etc. The main storyline is still aiming for 200B or thereabout
AI_WAIFU#2844: We're gunning for GPT3 sized or bigger and hopefully better. Not equipotent.
kurumuz#5695: distillation is a pretty damn interesting side quest.
AI_WAIFU#2844: But a certain commodities and semiconductor shortage is slowing us down
EricHallahan#1051: Well it is a side quest that we have an obligation to fulfill.
Louis#0144: Also a massive lack of goose plushies is making our job harder
kurumuz#5695: oh, so coreweave given side quest?
kurumuz#5695: 😛
FerroMagnetic#6975: Ah, I meant the ambiguity of parameter-"sized" against memory-required-"sized"
kurumuz#5695: Always can get more from amazon.
Louis#0144: I know
Louis#0144: I have four
EricHallahan#1051: Eleuther shirts wen
Louis#0144: We need Eleuther shirts
Louis#0144: One with just the Eleuther logo
kurumuz#5695: Someone wanted plushies of me on reddit.
Louis#0144: Another one covered in geese
kurumuz#5695: Not sure what they mean by me tho, as I'm anon
kurumuz#5695: lol |
Louis#0144: Like of u personally?
EricHallahan#1051: That sounds like reddit.
Louis#0144: If anyone wants a louis plushie hmu
Louis#0144: I’m fat so it’s probably p cozy
FerroMagnetic#6975: The difference between GPT-2 and GPT-3 is akin to difference between SSJ2 and SSJ3
Teemochu#8740: maybe your avatar?
EricHallahan#1051: His avatar is different on Reddit from the one here on Discord lol
kurumuz#5695: @Teemochu she is generated by my stylegan2 catgirl model.
kurumuz#5695: lol
kurumuz#5695: she is actually a NFT
kurumuz#5695: but not on sale
kurumuz#5695: @FerroMagnetic GPT-2 models are pretty not good.
EricHallahan#1051: TBH, GPT-2 sucks.
bmk#1476: i want a canada goose plushie :goose:
FerroMagnetic#6975: Neither is supersayijan 2
kurumuz#5695: oh, canada goose
FerroMagnetic#6975: Powerlevel bloat
kurumuz#5695: Personally, I'm a curie fan.
AI_WAIFU#2844: https://www.amazon.com/Wild-Republic-Audubon-Authentic-Stuffed/dp/B01MZIY8FG/
EricHallahan#1051: We also need a duck |
kurumuz#5695: I'm still curious about how they trained their instruct models
kurumuz#5695: did anyone eval them against vanilla models?
EricHallahan#1051: @bmk?
kurumuz#5695: They had a paper about rl agents learning human preference.
kurumuz#5695: and finetuning LM models with that
kurumuz#5695: so maybe its the same thing
bmk#1476: it costs money tho
bmk#1476: if you can give me a good reason to do so i will, otherwise nah
kurumuz#5695: If there is a major performance difference versus the vanilla models, it would make sense to research how they tuned those models.
EricHallahan#1051: We are already working on trying to do human feedback in #deleted-channel.
bmk#1476: eh i have a pretty good guess as to how they did it
kurumuz#5695: mind to elaborate?
bmk#1476: they just hand made a bunch of data and tuned on it lol
kurumuz#5695: oh, cats are fighting outside and its 3 am.
kurumuz#5695: scared the shit out of me
kurumuz#5695: Ah, that is ~~what i heard~~
kurumuz#5695: wished it was something more interesting though, i guess not
Louis#0144: Ew you’re European
Louis#0144: 🤮
kurumuz#5695: well not exactly but yes, im europoor 😎 |
Louis#0144: Ah
Louis#0144: Eastern Europe?
kurumuz#5695: no seriously im not european
kurumuz#5695: turkey
Louis#0144: I was in the Czech Republic for a month a few years ago
Louis#0144: Lots of fun
Louis#0144: Oooo
Louis#0144: Nice
Louis#0144: I’ve been to Istanbul twice
Louis#0144: Was rly cool
Louis#0144: I have some family there
kurumuz#5695: ye ~~nice from outside~~
kurumuz#5695: oh, turkish heritage or?
EricHallahan#1051: He is very French Canadian.
Louis#0144: Yeah
Louis#0144: French Canadian
Louis#0144: 😦
Louis#0144: But also French French + French Italian
Louis#0144: My family spread out all over Europe
Louis#0144: So every summer it’s visiting a different part |
kurumuz#5695: interesting
kurumuz#5695: must be nice.
Louis#0144: Ya
Louis#0144: Where in turkey are u
FerroMagnetic#6975: Darn Gauls reading their good comics without translation
kurumuz#5695: @Louis istanbul xd
kurumuz#5695: sometimes edirne
Louis#0144: Also holy shit I remember last time I was in turkey they treat the stray cats like royalty
kurumuz#5695: well when my uni is open
kurumuz#5695: @Louis yeah theyre evrywhere dude
Louis#0144: I love cats
Louis#0144: They’re so friendly
Louis#0144: My moms relatives are like 2 blocks from one of the bridges
Louis#0144: Forgot which one
kurumuz#5695: like 1 cat every 2m^2
kurumuz#5695: lol
Louis#0144: And there were SO MANY CATS around the bridge
Louis#0144: Omg
Louis#0144: They all just hung out
kurumuz#5695: yeah its crazy :P |
kurumuz#5695: also we have too many bridges.
Louis#0144: The two main ones
Louis#0144: The really pretty one
Louis#0144: Forgot the name
kurumuz#5695: fatih sultan mehmet?
kurumuz#5695: bogaz koprusu?
Louis#0144: Yes ty
kurumuz#5695: its pretty when youre not stuck in traffic haha
kurumuz#5695: in a parallel universe this city could be enjoyable.
kurumuz#5695: but things went extremely wrong for us
Louis#0144: Rip
Louis#0144: When the coup happened I didn’t hear from the family there for many months
Louis#0144: Apparently they just lost Internet for no joke three months
Louis#0144: It was rly weird
Louis#0144: I think they left Istanbul tho
Louis#0144: Don’t know where they are now
kurumuz#5695: too crowded.
Louis#0144: Yeah
kurumuz#5695: i like edirne, its a really small city
kurumuz#5695: its the border to greece |
Louis#0144: Better food then probably
Louis#0144: 😉
kurumuz#5695: lol
kurumuz#5695: it has good food yeah
Louis#0144: Nah I mean Turkish food and Greek food are almost identical
Louis#0144: Just different names
Louis#0144: And slightly diff spices
kurumuz#5695: yeah didnt try any greek food so cant say
kurumuz#5695: the thing is all parts of turkey has different food culture
Louis#0144: You’ve never crossed the border?
kurumuz#5695: no :P
Louis#0144: 😮
kurumuz#5695: i dont like to ehm
Louis#0144: Oh do u need a visa
kurumuz#5695: leave my house
kurumuz#5695: sooooo
Louis#0144: Oh true
Louis#0144: Make sure u explore during uni
Louis#0144: I regret not doing that until my last year
Louis#0144: U don’t have as much time after |
kurumuz#5695: 2 years of my uni was online
kurumuz#5695: ugh
Louis#0144: Sad
kurumuz#5695: yeah pretty sad
kurumuz#5695: @Louis i dont think we're failing btw 😎
kurumuz#5695: lol
EricHallahan#1051: My uni experience has been pretty much go to school, go to class, go home, work on schoolwork.
EricHallahan#1051: ¯\_(ツ)_/¯
kurumuz#5695: yeah pretty much that for me but no schoolwork
Louis#0144: My uni experience was that I never went to campus even before covid
Louis#0144: lol
kurumuz#5695: ¯\\_(ツ)\_/¯
kurumuz#5695: @Louis yeah was same for me.
Louis#0144: I lived in Waterloo and went to campus for exams
Louis#0144: But that was it
Louis#0144: I hung out at my friends apts and did my work
kurumuz#5695: i just cant tolerate people in my uni to be honest
Louis#0144: Oh I loooooved the Waterloo community
Louis#0144: It was so autistic
Louis#0144: Like genuinely |
kurumuz#5695: sounds good then
Louis#0144: Lots of fun
Louis#0144: The uni was hard tho
Louis#0144: They deflated grades nonstop
Louis#0144: 90s to 60s
Louis#0144: Really weird
Louis#0144: I’ve had plenty of courses with downward curves
kurumuz#5695: hard sounds fun
Louis#0144: Ye
bmk#1476: i mean it could also be the PPO human preferences stuff but i both doubt that and also dont find it that much more interesting
kurumuz#5695: i ser
kurumuz#5695: see*
kurumuz#5695: @Louis will sound like a smartass but my main problem with my uni is stuff being too easy
kurumuz#5695: and they didnt let me get classes from upper years either
alstroemeria313#1694: i am trying to come up w/ an auto-damped saddle-free newton method
so i can have damping but also have the dynamics of the system not change when i scale the loss function by a constant
Louis#0144: MIT is the same way FWIW. When I worked at CSAIL briefly the community was identical
Louis#0144: No uni is far too easy usually
Louis#0144: Georgia tech is painfully easy
Louis#0144: I’m bored to tears |
kurumuz#5695: it just doesnt push me to do anything
Louis#0144: Yeah
Louis#0144: I feel
Louis#0144: Dw
kurumuz#5695: study last 4 hours or whatever, you ready
Louis#0144: Use that extra time for research
kurumuz#5695: @Louis yeah
kurumuz#5695: currently busy with getting this project out
alstroemeria313#1694: it seems to work if i make the damping factor proportionate to the gradient norm
Louis#0144: What’s the goal here
alstroemeria313#1694: rather than using a constant
kurumuz#5695: then hopefully can focus on research
Louis#0144: Join us @kurumuz
Louis#0144: We’re autistic
Louis#0144: We promise
kurumuz#5695: lol
kurumuz#5695: if i have time, sure
kurumuz#5695: i would like to contribute
alstroemeria313#1694: i got interested in second order methods again when i realized my CLIP rgb color optimizing code had so few parameters i could just compute the hessian exactly
kurumuz#5695: but also have responsibility to my own team |
kurumuz#5695: ¯\\_(ツ)\_/¯
Louis#0144: Oh interesting
Louis#0144: There’s second order libraries for pytorch
Louis#0144: Idk if they’re any good
alstroemeria313#1694: all i needed was torch.autograd.functional.hessian()
Louis#0144: ah ok I see
Louis#0144: I’m bad at damping stuff
Louis#0144: Differential equations scare me
Louis#0144: 🙂
kurumuz#5695: oh also, i didnt properly learn math yet
kurumuz#5695: lol
alstroemeria313#1694: like in the 1 dimensional case my current method is `x -= grad / hess.abs().add(0.1 * grad.abs())`
FerroMagnetic#6975: Are they at least partial differentials to be scared of?
alstroemeria313#1694: where 0.1 is damping
Louis#0144: Yes they are
Louis#0144: Why won’t this work for > 1d
Louis#0144: All of this is computable for matrices
Louis#0144: No?
alstroemeria313#1694: it does, it just involves an eigendecomposition of the hessian
Louis#0144: Yeah |
Louis#0144: So what’s the issue
Louis#0144: I’m confused
alstroemeria313#1694: i already implemented it, it was just easier to paste the 1d case
Louis#0144: Oh ok
Louis#0144: OH
Louis#0144: I thought this was a question
Louis#0144: Nvm I thought u needed help
alstroemeria313#1694: eheh
alstroemeria313#1694: :blobcutehappy:
alstroemeria313#1694: like i didn't want to adjust damping up and down adaptively like in levenberg-marquardt
Louis#0144: LM is really good
Louis#0144: I was using it scipy the other day
FerroMagnetic#6975: There's only one differential field I swore never to return to: hydrodynamics.
alstroemeria313#1694: i just wanted the thing to be invariant to scaling the loss function by a positive scalar
Louis#0144: I’m kinda surprised it never caught on
Louis#0144: I did three years of differential geometry
Louis#0144: I regret every second
Louis#0144: I never use it
Louis#0144: I should have done like model theory
Louis#0144: Or combinatorics |
Louis#0144: Or functional analysis
Louis#0144: Anything else (bar like complex analysis or number theory)
FerroMagnetic#6975: Differential geometry is fun compared to general theory of differentiable convex spaces/cones
alstroemeria313#1694: at some point i will implement low rank saddle free newton
FerroMagnetic#6975: (Was my chair's speciality course)
alstroemeria313#1694: which only involves gradients and hessian-vector products
alstroemeria313#1694: so you can actually use it for decent sized problems
Louis#0144: I did multivar calc -> analytic diff geo -> topological diff geo -> gauge theory -> algebraic topology diff geo -> information geometry
Louis#0144: Over a few years
FerroMagnetic#6975: ~~If you were a pure mathematician, you'd prove that a better method exists and be done with it~~
Louis#0144: I am a pure mathematician
Louis#0144: Lol
Louis#0144: U should do that rn
Louis#0144: Sounds super cool
alstroemeria313#1694: i'm going to bed in 30 minutes
Louis#0144: V useful for clip in general
Louis#0144: Oh LOL
FerroMagnetic#6975: Information geometry, like generalizations of distance?
alstroemeria313#1694: anyway undamped saddle free newton is invariant to scaling the loss function
Louis#0144: Information geometry looks at KL divergence as a riemmanian metric |
alstroemeria313#1694: since if you scale it you scale both the gradient and the hessian and the scale cancels
Louis#0144: Basically under what constraints does KL divergence become a distance metric of geodesics
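(For reference, a standard identity behind that claim, not something stated in the chat: to second order, the KL divergence between nearby distributions is a quadratic form in the Fisher information matrix, which is what lets KL act locally as a Riemannian metric.)

```latex
D_{\mathrm{KL}}\left(p_\theta \,\|\, p_{\theta + d\theta}\right)
  \approx \tfrac{1}{2}\, d\theta^{\top} F(\theta)\, d\theta,
\qquad
F_{ij}(\theta) = \mathbb{E}_{x \sim p_\theta}\left[
  \partial_{\theta_i} \log p_\theta(x)\; \partial_{\theta_j} \log p_\theta(x)
\right]
```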
alstroemeria313#1694: but adding a constant damping factor makes it not scale invariant
alstroemeria313#1694: i like scale invariant methods
alstroemeria313#1694: solution: scale the damping factor according to the gradient norm too
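(A minimal sketch of what such a step could look like in the multidimensional case, assuming the parameter count is small enough to form the full Hessian; this illustrates the idea described above, not alstroemeria's actual code. Use the absolute eigenvalues of the Hessian (saddle-free Newton) plus a damping term proportional to the gradient norm, so multiplying the loss by a positive constant scales gradient, Hessian, and damping alike and leaves the step unchanged.)

```python
import torch

def damped_sfn_step(loss_fn, x, damping=0.1):
    """One auto-damped saddle-free Newton step for a small parameter tensor x.

    Hedged sketch: |eigenvalues| of the Hessian plus damping * ||grad|| in the
    denominator, applied in the Hessian's eigenbasis.
    """
    grad = torch.autograd.functional.jacobian(loss_fn, x)   # gradient of the scalar loss
    hess = torch.autograd.functional.hessian(loss_fn, x)    # full (n, n) Hessian
    eigvals, eigvecs = torch.linalg.eigh(hess)               # symmetric eigendecomposition
    denom = eigvals.abs() + damping * grad.norm()            # saddle-free + scale-invariant damping
    step = eigvecs @ ((eigvecs.T @ grad) / denom)
    return x - step

# Toy usage on a 2-parameter quadratic:
loss = lambda p: (p[0] - 1.0) ** 2 + 10.0 * (p[1] + 2.0) ** 2
x = torch.zeros(2)
for _ in range(5):
    x = damped_sfn_step(loss, x)
```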
FerroMagnetic#6975: Anyway while I'm rusty there's still something to remind me about what an equidistance is: roguelikes.
kurumuz#5695: need ai generated roguelike levels.
FerroMagnetic#6975: Not before AI solving roguelike levels
FerroMagnetic#6975: What else have I attended again, this, that, Lebesgue integration, probability, automata theory
FerroMagnetic#6975: Topology course was awful and I'll return to it some other day on my terms.
alstroemeria313#1694: (notably, scaling by the hessian norm does *not* work well)
FerroMagnetic#6975: There are otherwise a lot of curious discords out there: transfinite numbers fans, polytopes
FerroMagnetic#6975: *microtonal music*
FerroMagnetic#6975: See? It's easy. https://cdn.discordapp.com/attachments/729741769738158194/843651552124076062/partchsystem.png
alexyz#3459: @FerroMagnetic I'm not really a fan of microtonal music, I've yet to find a piece that truly sparks my interest. Any pieces that you like that I can try?
bmk#1476: i thought this was one of those crazy category theory diagrams at first
FerroMagnetic#6975: It's merely an, let me check
FerroMagnetic#6975: "inverted Monzo lattice of Harry Partch's 43-tone JI scale"
FerroMagnetic#6975: @alexyz it's a hard closed loop of "there are no instruments to make interesting pieces and there are no interesting pieces to attract instrument makers"
alexyz#3459: well there are digital instruments |
FerroMagnetic#6975: Guess what harmonics 99% of the digital instruments are set to
alexyz#3459: 12 tone?
FerroMagnetic#6975: Indeed
FerroMagnetic#6975: You quite probably met Sevish, but here's something less electronic: https://jacktickner.bandcamp.com/album/reassuring-weight
FerroMagnetic#6975: http://split-notes.com and this is the largest "label" I know of
alexyz#3459: ooh this sounds nice
FerroMagnetic#6975: You know what they say: jazz musicians tried everything in 60s and then stuck with what worked
FerroMagnetic#6975: Jazz is probably one of the most common genres to earn a stable branch of "microtonal"
alexyz#3459: thank you for those links btw
FerroMagnetic#6975: If you're feeling advancedly abstract, the renowned classics are Horatiu Radulescu and Gerard Grisey
FerroMagnetic#6975: https://www.youtube.com/watch?v=rXaNFBzgDWI
FerroMagnetic#6975: Then you get to people that count in frequencies instead of note names (or rather cents, I guess)
FerroMagnetic#6975: And all of that haven't even touched the just intonation folks
FerroMagnetic#6975: "We lose a comma three and a half octaves in? Well, too bad for the pieces with such range"
UnsupervisedLearner#4148: How do you use no dg when studying deep learning? I feel like there should be applications somewhere, even if it's just some esoteric model interpretation
Louis#0144: GANs
Louis#0144: that’s basically it
Louis#0144: Some RLtoo ig
gp#7155: So how much horsepower in terms of GPU does the largest Neo model require?
AI_WAIFU#2844: power isn't much of an issue, it's more memory |
AI_WAIFU#2844: I think people have gotten fine tuning to work with 24GBs and theoretically 8GBs should be enough for inference.
nev#4905: is it as autistic as people on reddit make you believe it is?
nev#4905: :thonk:
Jozef Poniatowski#7589: hey guys
Jozef Poniatowski#7589: how do you set up really large data when doing disttributed dataparallel in pytorch?
Jozef Poniatowski#7589: are you supposed to shard the data
Jozef Poniatowski#7589: and what should you do if applying like MLM
Jozef Poniatowski#7589: should i apply the masking beforehand and save to disk, or just do per batch while training
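(One common pattern, sketched under the assumption of a PyTorch dataset of already-tokenized examples; not an official recommendation: let `DistributedSampler` shard the data across ranks, and apply the MLM masking dynamically per batch with a collator rather than pre-masking and saving to disk, so every epoch sees fresh masks.)

```python
import torch
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Hedged sketch: `dataset` is assumed to yield tokenized examples
# (dicts with "input_ids"); model/tokenizer names here are illustrative.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm=True, mlm_probability=0.15)

def make_loader(dataset, rank, world_size, batch_size=16):
    # DistributedSampler gives each rank a disjoint shard of indices.
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank, shuffle=True)
    # Masking happens here, per batch, so each epoch gets different masks.
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler, collate_fn=collator)
```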
Deleted User#0000: Hey guys Looking to learn ml ?
StellaAthena#3530: Welcome! Don’t forget to check out #rules
Deleted User#0000: @StellaAthena checked
EricHallahan#1051: @TheOriginalDude What hardware are you looking to use?
TheOriginalDude#8813: My PC is not too good, I'd use Colab instead
TheOriginalDude#8813: So, I can use GPU or TPU
TheOriginalDude#8813: @EricHallahan ^^ 🙂
EricHallahan#1051: Do you have Pro? If you don't you will probably be constrained to TPU. `:\`
TheOriginalDude#8813: Pro?
EricHallahan#1051: Colab Pro
TheOriginalDude#8813: Oh, Nope :\(
Kharr#7888: It's cheaper than buying a GPU 😉 |
TheOriginalDude#8813: It's not available in India it seems
EricHallahan#1051: I thought it had?
EricHallahan#1051: Okay, you can use Colab TPUs, they might not be the most user friendly thing though.
TheOriginalDude#8813: Agreed!
TheOriginalDude#8813: So, where do I start
Sid#2121: you can just put any address lol
Sid#2121: they don't check
TheOriginalDude#8813: Ohh lol
EricHallahan#1051: I always forget that they don't check.
TheOriginalDude#8813: But how do I get started with the training
TheOriginalDude#8813: For code generation / Text to SQL
EricHallahan#1051: Do you have data to tune on?
TheOriginalDude#8813: Dunno
TheOriginalDude#8813: I could find
TheOriginalDude#8813: I do have data for Text to SQL
ssodha#3259: hey everyone! does anyone know if gpt-neo will be trainable yet? if it is, is there a repo I can access to try it out? (FYI total noob here when it comes to GPT so apologies for the dumb question)
EricHallahan#1051: Read the FAQ:
https://eleuther.ai/faq
Louis#0144: What’s with the influx today
EricHallahan#1051: Then I would suggest looking for a guide and the documentation as suggested in the FAQ. We simply cannot provide technical support along every step of the process. |
EricHallahan#1051: VentureBeat I guess finally got traction.
TheOriginalDude#8813: Okay.
bmk#1476: we dont provide tech support
bmk#1476: google is your friend
Goop#8368: Guess google is finally pushing me to colab pro, dang usage limits
Deleted User#0000: Are there any leaks about when GPT-4 is coming?
StellaAthena#3530: No
Deleted User#0000: 😭
Louis#0144: Before the heat death
Louis#0144: Probably
Louis#0144: 🤷♂️
Kharr#7888: Does anyone have any details on the training of the GPT-Neo 125M model? I've been doing some tests on the latent space of various models (GPT2 and Neo) and there is something very unusual about how well different topics are mapped in this model. It's pretty extraordinary.
mgostIH#0245: inb4 someone left GPT-Neo 125M train for too long and it grokked :bigbrain:
EricHallahan#1051: 125M was trained very quickly.
EricHallahan#1051: We haven't done much testing on it. ¯\_(ツ)_/¯
Daj#7482: I think @bmk trained it?
EricHallahan#1051: Sid did IIRC.
bmk#1476: it was sid
Daj#7482: Oh is this the NeoX model?
kindiana#1016: I think gpt2 is just not very good :berk: |
EricHallahan#1051: No, the one on HF.
kindiana#1016: esp wrt data
Daj#7482: ah nvm
bmk#1476: it was trained for shorter than the other things
Daj#7482: Then I'm very curious what Kharr found and why it's there :thonk:
Daj#7482: I know the NeoX model has new stuff like rotary
bmk#1476: maybe it's just cause it wasnt trained for too long
Kharr#7888: You can see it pretty easily just by mapping the first 2 PCA components of its output vectors. It's almost perfectly spherical
kindiana#1016: got a pic?
kindiana#1016: (isn't it expected that you get a sphere when you plot PCA of in distribution data? :thonk: )
Kharr#7888: Yes, 1 sec, let me put them all in same chart
Kharr#7888: https://cdn.discordapp.com/attachments/729741769738158194/843886366613569606/unknown.png
Goop#8368: your point being that there are no obvious clusters, yes?
bmk#1476: it looks very close to the other neo models
bmk#1476: i can imagine it changing into the other ones after more training
bmk#1476: gpt2 is the odd one out here
EricHallahan#1051: I just think it is our chad training data.
bmk#1476: can you plot gpt2-1.5B too? @Kharr
Kharr#7888: top left is GPT2-124M,
bottom left is GPT2-1.5B, |
top right is GPT-Neo 125M,
bottom right is GPT-Neo 2.7B
bmk#1476: oh oops i misread
Kharr#7888: No, this is actually a very good thing. It means the latent space is separable across multiple components and translates into very good clustering (tested with tSNE). Bigger models have a better spread and it is unusual to see such a good spread in a small model.
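(A rough sketch of this kind of probe, with assumptions about pooling and plotting that may not match how the charts above were produced: grab a hidden state per text from the model and project the collection to 2D with PCA.)

```python
import torch
from sklearn.decomposition import PCA
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: mean-pooling the last hidden layer is an assumption here,
# not necessarily how the plots above were made.
tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M").eval()

texts = ["The cat sat on the mat.",
         "Stock prices fell sharply today.",
         "def fib(n): return n if n < 2 else fib(n - 1) + fib(n - 2)"]
vecs = []
with torch.no_grad():
    for t in texts:
        ids = tok(t, return_tensors="pt").input_ids
        h = model(ids, output_hidden_states=True).hidden_states[-1]  # (1, seq, dim)
        vecs.append(h.mean(dim=1).squeeze(0))                        # pool over tokens
coords = PCA(n_components=2).fit_transform(torch.stack(vecs).numpy())
print(coords)  # (n_texts, 2) points to scatter-plot
```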
Goop#8368: Oh I wasn't saying that it was a bad thing, just what you're observing lol
EricHallahan#1051: *\*cough\* \*cough\** StyleGAN ***W*** *\*cough\* \*cough\**
EricHallahan#1051: (it is the most nonlinear thing ever lol)
Kharr#7888: Well, whatever resulted in this thing having such a well balanced latent space, it is curious, especially if it was not trained for very long. With more training it tends to get better.
alexyz#3459: What was the GPT-2 & GPT-3 datasets? What did they use?
alexyz#3459: *were
Kharr#7888: GPT3 was "everything" and GPT2 was webtext (60 GB)
cognomen#6297: gpt-1 I think was some crawl of reddit comments judging from the word embeddings
cognomen#6297: which are still in gpt-3
jesse#7865: the exact dataset components and mixture weights are spelled out in the GPT-3 paper
jesse#7865: https://cdn.discordapp.com/attachments/729741769738158194/843907828036534392/unknown.png
Louis#0144: Straight from the source
Louis#0144: @jesse out of curiosity how often do OAI people talk about EAI projects
Louis#0144: Like not the org
Louis#0144: Specific projects
Louis#0144: I’m acquainted with Jacob, he’s v familiar with EAI but he didn’t know anything of what we’re doing besides neo |
jesse#7865: there was some discussion of GPT-Neo when it was released, and we also noticed the results about rotary embeddings
Louis#0144: Nice
Purple#8913: can't wait for neox to kill ai dungeon with a free and better and uncontrolled alternative
Isaac McHorse#2007: are you for real
Louis#0144: Nah
Louis#0144: They’re very smart people. GPT3 is a backbone to AID but they did a LOT of stuff on top to perfect it
Louis#0144: I hope novelAI and AID both continue to exist
Louis#0144: Competition is good for everyone
Louis#0144: 🙂
Purple#8913: after all the nonsense they are still pulling, they deserve to go down in flames
Louis#0144: That’s bad for NovelAI
Louis#0144: novelAI needs competition
Louis#0144: I don’t wish harm to them
EricHallahan#1051: This is not the place to discuss this.
Louis#0144: Ok true
Louis#0144: Sorry
Purple#8913: There will be since neox will be available for everyone
EricHallahan#1051: Anyone who says that GPT-NeoX will be available for everyone is misinformed.
bmk#1476: pls no novelai discussion
FerroMagnetic#6975: Little did anyone know, this plot twist had already been predicted by Hypnospace Outlaw two years prior.
bmk#1476: did the pile get discussed?
alexyz#3459: well in the FAQ it states that distillation will be attempted, if that goes well then it'd be more accessible for people to run it
Louis#0144: More people yes
Louis#0144: Everyone is silly
Daj#7482: I mean, it will be _available_, just sacrifice 100000 SSDs to ZeRO-Infinity :berk:
Louis#0144: More people AFAIK means people with research budgets
Louis#0144: Not people with gaming laptop s
Purple#8913: i mean anyone can have access or put it on a server or whatever. any app can be built to make use of it
EricHallahan#1051: We are not going to get a 100B+ model down to <10B anytime soon.
Louis#0144: Ye
Louis#0144: Distillation isn’t magic
Louis#0144: Pls keep expectations reasonable
alexyz#3459: I know lol
cfoster0#4356: next paper title: Distillation is Magic
Louis#0144: LOL
alexyz#3459: from what I've seen it's possible to distill a 2.7B model down to 300M (it looks like it only has 300M performance though, but it's possible) but honestly i have 0 idea how any of it really works, so i'll take your word for it 🙂
bmk#1476: `Multi Layer Perceptron: Distillation is Magic` ftfy
EricHallahan#1051: That is incorrect. If we were to define the GPT-NeoX model as something greater than 100B, you will need something more than just "a server".
Daj#7482: NVIDIA says they can get GPT3 running on one A100 box with triton
FerroMagnetic#6975: You could distill 1000 parameters to 1, but should you? |
bmk#1476: neox/neo are codebases
cfoster0#4356: I think we genuinely don't know what the limits of distilled performance are atm, especially as we scale up
Daj#7482: Of course, "one A100 DGX" is one hell of a server :berk:
EricHallahan#1051: That isn't a normal server.
bmk#1476: stop saying "the neox model" lol
Daj#7482: If I may interject, what you have been referring to as "NeoX" is actually a sophisticated combination of the NeoX codebase and the 200B model, or as I have come to call it "NeoX/200B"
Louis#0144: CONNOR
Louis#0144: PLEASE
Louis#0144: ur one step away from becoming the stallman of DL
Louis#0144: especially with ur beard
bmk#1476: 10 imaginary internet points for someone who makes a 3goose emote
Daj#7482: I cannot even begin to rival the Stallman beard
Daj#7482: I guess my beard goes for an orthogonal shape
bmk#1476: :berk:::goose::::3berk::?
zphang#7252: aw my old imagenet one got deleted in some channel reshuffling
zphang#7252: > I'd just like to interject for a moment. What you're referring to as ImageNet, is in fact, ILSVRC2012, or as I've recently taken to calling it, ILSVRC2012 1000-Category Classification Train+Val subset. ILSVRC2012 is not a 14 million image dataset unto itself, but rather a subset of the full ImageNet derived from the hierachical categorization into synsets as defined by WordNet.
FerroMagnetic#6975: Just abbreviate it to 200X https://cdn.discordapp.com/attachments/729741769738158194/843915432968978452/0CMlgJh.png
Louis#0144: on an entirely unrelated note
Louis#0144: what happened to GameGAN?
cfoster0#4356: What do you mean? |
cfoster0#4356: They published it
Louis#0144: no i mean
Louis#0144: any follow up?
cfoster0#4356: https://arxiv.org/abs/2104.15060
FerroMagnetic#6975: ~~It's GAN~~
FerroMagnetic#6975: https://cdn.discordapp.com/attachments/729741769738158194/843916598276128798/634ECD7FFBC31EEE0DBA95CD95D1D4DCB2D01165.png
Louis#0144: this shit
Louis#0144: is going to be the future
Louis#0144: of model free RL
Louis#0144: 100%
Louis#0144: im so excited
Louis#0144: the q is can we even call it model free at this point though :^)
Louis#0144: imagine combining this with CLIP where you can describe the rules of the game
Louis#0144: and it does the rest
Louis#0144: thats so exciting
Louis#0144: and its so close
Louis#0144: im almost thinking an agent can describe some physical situation to itself and then "play through" the situation like this
Louis#0144: its perfect for common sense stuff
cfoster0#4356: I'm pretty sure this is model based RL
Louis#0144: especially since common sense requires massive amounts of unaligned data |
Louis#0144: what I basically want is GameGAN + COMET + CLIP
Louis#0144: ATOMIC (what COMET is based off of) is a rule based common sense dataset
Louis#0144: so can we ask GameGAN to make a common sense game, that we control using CLIP
Louis#0144: im hoping sweg's model brings us closer to this
thepok#1770: Hello, i see in wanddb a 6b_rotary net is trained. How long until its released?
EricHallahan#1051: The model is not fully trained.
thepok#1770: is it outperforming the 2.7b net yet?
EricHallahan#1051: Has been for some time.
thepok#1770: nice
thepok#1770: how many more epochs/steps?
Louis#0144: we have a policy of not giving ETAs
EricHallahan#1051: We do not have a release date, nor do we have an estimated time for when it will be done. Our policy is to not provide estimates.
Louis#0144: its done when its done
Louis#0144: no sooner than that
thepok#1770: haha ok
Daj#7482: It's already done. We will be releasing the model weights one at a time via ~~twitter~~ myspace :berk:
thepok#1770: is it trained in float32 or bfloat?
Louis#0144: i think wandb says which it is?
Louis#0144: i dont remember
Louis#0144: someone else will check before I can tho |
Louis#0144: :^)
bmk#1476: weight number 1: 0.1337420
weight number 2: 1.234567
thank you for tuning in to this episode of model weight release. stay tuned for more
Daj#7482: So hyped for next episode
Louis#0144: SPOILERS
Louis#0144: weight 420
Louis#0144: is > 0
Louis#0144: 😉
Louis#0144: srry
Louis#0144: ruined the arc for everyone
kindiana#1016: bf16
finetune#0907: o, is ghpy trained from scratch? no local attention and more heads
thepok#1770: hmm so it will only run on bfloat hardware?
kindiana#1016: you can run it on fp32 as well
kindiana#1016: fp16 might be a bit sus
hGI.unsure#2032: Will neo 6B have same architecture neo 2.7B?
And anyone can directly use it with the current huggingface/transformer?
EricHallahan#1051: No, it is a tune. :thonk: |
kindiana#1016: no, the 6b model is a completely new codebase with no hf support
Daj#7482: The 6B is in the JAX codebase, it's kinda a parallel thing to Neo
Daj#7482: I don't even think it has a name yet lol
bmk#1476: ghpy is finetuned from 2.7B
finetune#0907: attention layers list all global and num_heads is 32 not 20
bmk#1476: no local attention because fuck local attention
bmk#1476: it only took a few steps for the model to get used to all global
finetune#0907: number of attention heads is still different from 2.7b tho
finetune#0907: but it does get used to that pretty quickly too :berk:
bmk#1476: huh, i didn't even notice that
hGI.unsure#2032: How much vram will 6B need for generation?
bmk#1476: idk lol
bmk#1476: several
Daj#7482: Rough estimate parameter number * weight size (32bit unless your GPU supports bfloat16 or you wanna risk fp16) + some amount for activations
Daj#7482: the "some amount" is quite nontrivial for big models and large context sizes
finetune#0907: did some testing and fp16 with fp32 for attention just barely fits in 16gb for a full sequence
Daj#7482: 2.7B model?
finetune#0907: 6.7b
Daj#7482: Neat
Daj#7482: That's less than I expected |
thepok#1770: ~36 gb
Daj#7482: Shows what some mild inference optimization can do
bmk#1476: how big is the error against füll fp32 for like a 2.7B
hGI.unsure#2032: Hopefully I can still run 6B it on my 8gb 1070 at a reasonable rate like 2.7B
kindiana#1016: very unlikely lol
hGI.unsure#2032: I mean with loading the weights from ram to vram and stuff
finetune#0907: calculated eval loss on a small fiction dataset for 2.7b in fp16 and fp32, difference was very small
finetune#0907: like +- 0.001
Daj#7482: You'd need some fancy deepspeed offloading to do that, dunno if anyone has implemented that in a user friendly way
kindiana#1016: if weights do not fit in vram, latency is going to be very very slow
kindiana#1016: maybe even worse than cpu
Daj#7482: that too
thepok#1770: it will run on cpu with like 1 word per 10 seconds
hGI.unsure#2032: I have it working on 2.7B with 5gb vram use. 1 token/2s
hGI.unsure#2032: The ram-vram bandwidth is only 5gb/s currently, but I'm hoping it can improve with the pytorch versions
cognomen#6297: sounds very underwhelming
cognomen#6297: I'm getting 1 token per 3s on cpu only
cognomen#6297: a very low end cpu
hGI.unsure#2032: With 2048 context?
finetune#0907: should be way faster on gpu |
hGI.unsure#2032: I'm loading the model in parts. 1 second used just transferring the 5gb model from ram to vram.
finetune#0907: o i see
cognomen#6297: haven't tried that long yet
hGI.unsure#2032: for small contexts the vram use is around 3 gb
hGI.unsure#2032: So it's probably a lot more processing
cognomen#6297: yes it's 2048, didn't oom after filling it, roughly 3.5s per token
hGI.unsure#2032: Which cpu?
cognomen#6297: 2013ish intel, 4 core
𓅬 gabriel_syme 𓅬#3220: 16gb would be amazing, means we can do it with a 3090
Teemochu#8740: Downstream go brrrrrrrrrr
Teemochu#8740: :RainbowSquee:
nostalgebraist#3542: updated my logit lens notebook with many extensions:
- tried incorporating the last transformer block (or just the last FF) into the "decoder" used to transform hidden states to logits.
- this dramatically improves interpretability for gpt-neo (!)
- broke down the blocks into their attn and FF parts, so you can see how the predictions change in each one
https://colab.research.google.com/drive/1MjdfK2srcerLrAJDRaJQKO0sUiZ-hQtA?usp=sharing
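(For reference, a minimal sketch of the *basic* logit-lens decoding with the Hugging Face GPT-2 classes, i.e. the final layer norm plus unembedding applied to each intermediate hidden state; the notebook's extensions that fold the last block/FF into the decoder are not shown here.)

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(ids, output_hidden_states=True)
    for layer, h in enumerate(out.hidden_states):
        # Treat each intermediate state as if it were the final one: ln_f + unembedding.
        # (The very last entry already has ln_f applied, so it gets normalized twice
        # here; close enough for a quick look.)
        logits = model.lm_head(model.transformer.ln_f(h))
        top = logits[0, -1].argmax().item()
        print(layer, repr(tok.decode([top])))
```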
StellaAthena#3530: @nostalgebraist This is great stuff. You could write a pretty cool paper about this if you wanted to!
Goop#8368: Looks like he has a nice blog on it as-is, could easily be worked into a paper with these readily available results. Very neat topic! |
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/844032016852844564/59xtcq.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/844032278451060786/59xth9.png
nostalgebraist#3542: thanks!
bmk#1476: wait i could combine these formats
Goop#8368: ha I like this meme, someone could make a paper.. wait
StellaAthena#3530: This is the niche content I crave
AI_WAIFU#2844: what else is there to do, *actual alignment research*?
AI_WAIFU#2844: guffaw
bmk#1476: the target audience of this meme is several orders of magnitude bigger than my most niche memes
nostalgebraist#3542: i am no longer in academia (and didn't specialize in ML while there) so i don't have strong incentives to write papers
bmk#1476: bill wurtz is like basically mainstream now
nostalgebraist#3542: and writing papers is... uh... not very fun
Goop#8368: True, still, incredible work man
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/844032850193154096/9UD2zQluOh7IZNOWHrsALW2lQjt9P5en5qfwH8eG2F4RPRPMAAAAASUVORK5CYII.png
StellaAthena#3530: 😦
StellaAthena#3530: Why does every say this
bmk#1476: https://twitter.com/nabla_theta/status/1295917902898409474 i think there are *at most* a dozen people in the entire world who can appreciate this meme in its entirety
bmk#1476: i mean, it *isn't* the most fun thing in the world - running experiments is fun, writing isnt, and getting torn apart by reviewer 2.. uh..
𓅬 gabriel_syme 𓅬#3220: this looks amazing and I feel bad I'm too :smallbrain: to really grasp it
Kia#2550: Post it in twitter :berk: |
zphang#7252: spending 12 hours to reshape diagrams and cut for length
𓅬 gabriel_syme 𓅬#3220: writing a paper eerily reminds me of the value engineering memes in construction at times
Goop#8368: review panels = short term stress
StellaAthena#3530: https://german-memes.fandom.com/de/wiki/Bruder_muss_los
bmk#1476: ~~it's hard to explain how big this meme is to anyone who doesnt religiously follow the r/ich_iel subreddit~~
StellaAthena#3530: "big"
bmk#1476: this meme was like literally every other thing on ich_iel from 2018 to 2019 lol
StellaAthena#3530: There are zero in the top 50 posts sorted by "hot" right now
bmk#1476: well yeah because every possible variation of the meme has already been exhausted
Kia#2550: bmk humor is Very German And Funny (or both)
Kia#2550: :thonk:
StellaAthena#3530: Also, something can only be so widespread on such a small subreddit lol https://cdn.discordapp.com/attachments/729741769738158194/844034570114170880/Screen_Shot_2021-05-17_at_10.11.47_PM.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/844034664389672970/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/844034725545902110/unknown.png
StellaAthena#3530: Anyways, back to the mines
StellaAthena#3530: @EricHallahan Did you do the hack-y thing to make the image build, or can I still not use jax on our GPUs?
EricHallahan#1051: If someone builds it manually it should work.
EricHallahan#1051: I haven't figured out exactly where it is running out of storage on the build runner. I tried adding an action that was supposed to clean out the runner before building, but it is designed more for building software, not Docker images.
EricHallahan#1051: It is ultra unclear how you are supposed to use an existing cuDNN install with PyTorch other than building from scratch where it is a requirement. Otherwise cuDNN seems to always be bundled with the prebuilt PyTorch wheels, which pushes me over the disk space limit in GitHub Actions.
EricHallahan#1051: It is really frustrating actually. |
EricHallahan#1051: It always seems to be the thing that stops me from doing anything useful, because you have no way to debug what is happening inside the runner.
nev#4905: may 2021, the grokking incident
Atsu#1282: What is the projects that have high priority in this community ? I think that #gpt-neox-devs is the one of these.
Daj#7482: Hey there! @Sid is in charge of #gpt-neox-devs , so he might be able to point you somewhere. The codebase is mostly complete though at this stage so I'm not sure how much more work there is to do there. There are a number of other projects (sometimes multiple per project channel), and even I'm not fully up to date on what every one is doing. There is work on multimodal data and DALL-E stuff ( #multimodal ), audio processing ( #sp3 ), alphafold and equivariant NNs ( #alphafold and #equivariance ) and I lead a project looking into controlling and finetuning LMs on human preferences in #deleted-channel
Daj#7482: I'm only up to date with the needs of #gpt-neox-devs and #deleted-channel myself, so I'm not sure what the other projects might need/be up to
Daj#7482: If any of these projects sound interesting, feel free to introduce yourself in the relevant channel or message a project lead
Gurkenglas#7362: @nostalgebraist what a small world, 1 or 2 days ago I read logit lens again and was like "has still nobody tried to see what happens when you enforce this pattern?" so i went and looked what happens when you train every layer just for its own prediction accuracy instead of the prediction accuracy of the final layer (not done but at first glance its worse), have you done something like that? planning to optimize every layer for the final accuracy again but this time limit the information passing between layers to a preliminary distribution pruned via top-k/top-p.
Aran Komatsuzaki#5714: @Atsu i'm co-leading projects at #multimodal and also Japanese (based on Japan rn). one project i'm working that may be of your interest is that i'm fine-tuning a GPT-2/3 on image captions to generate image caption, from which an image model generates images. this results in better quality and diversity of generated images. the point is that NLP can do better image generation and probably likewise for other modalities.
Gurkenglas#7362: Does the logit lens phenomenon just come from every layer adding small vectors onto an accumulator? Is that vector regularized to be small?
Louis#0144: I run the grounding project in #carp
Louis#0144: As well as a creative AI project in #speedrun
Kia#2550: Awesome person to
StellaAthena#3530: I do math and pretend to write code. Does that count?
Sid#2121: I write code and pretend to do math :berk:
EricHallahan#1051: I pretend to be a GPT-Neo Dev :berk:
gwern#1782: grokking is when you tell a joke so many times that everyone is telling you to shut up and lobbying for you to be banned, and then all of a sudden they get it and fall over hysterically laughing. an early pioneer of this was Monty Python
nostalgebraist#3542: hey gurkenglas! i saw your recent LW comment about this and was about to write a response, and then something else came up and i forgot
nostalgebraist#3542: anyway. what question are you trying to answer with this new training variant?
like, the model is (apparently) already being "encouraged" to do this in ordinary training. do we expect something different to happen if we add even more encouragement? |
nostalgebraist#3542: also, in a residual network, every layer _can_ directly affect the output (via the branch that goes identity --> identity --> identity ... all the way to the output), and every layer receives a gradient contribution from this
nostalgebraist#3542: so, modifying the loss feels like double-counting to me
nostalgebraist#3542: i would be more curious about the opposite, where you try to discourage the behavior, and see if that hurts the model
нυηтєя#0156: Hey!
EricHallahan#1051: Hey!
gwern#1782: listen!
Gurkenglas#7362: the question is "does it still work if the hidden layers are *only* encouraged to immediately make good predictions?". (the answer seems to be no)
Gurkenglas#7362: i simply ran detach() after every layer to stop gradients from propagating between layers.
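(Sketch of the setup being described, with hypothetical helpers `decode_to_logits` — the shared final layer norm + unembedding — and `loss_fn`: each block is trained only on its own immediate prediction, and `detach()` stops gradients from flowing back across blocks.)

```python
def local_loss_forward(blocks, decode_to_logits, loss_fn, x, targets):
    # Hedged sketch; `decode_to_logits` and `loss_fn` are hypothetical helpers.
    total = 0.0
    for block in blocks:
        x = block(x)
        total = total + loss_fn(decode_to_logits(x), targets)  # per-layer objective
        x = x.detach()  # later layers' losses send no gradient into this block
    return total
```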
kindiana#1016: I really suggest you guys try layerdrop, as removing dependence on any single layer is a much less destructive objective that has been proven to work decently
zphang#7252: on the BERT-type model side: I ran an experiment with RoBERTa on some tasks, turns out you can drop pretty much any single layer (other than the first) and get no impact on performance
zphang#7252: adding layerdrop explicitly might induce this further, but I think the residual connections already cause this behavior
kindiana#1016: Yeah, you can drop most single layers without a big degradation on normal models too, (especially if it was trained with dropout), but training with layerdrop reduces the variance significantly
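(A hedged sketch of LayerDrop in the Fan et al. style: drop each residual block independently with probability p during training, and keep every block at eval time.)

```python
import torch
import torch.nn as nn

class LayerDropStack(nn.Module):
    def __init__(self, blocks, p=0.1):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        self.p = p

    def forward(self, x):
        for block in self.blocks:
            if self.training and torch.rand(()).item() < self.p:
                continue  # skip this block entirely for this forward pass
            x = block(x)
        return x
```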
zphang#7252: It threw me off when I saw both:
- There are layers where the representation before and after are wildly different (*by some metric)
- Yet you can also drop that layer and have no change in performance
kindiana#1016: Hrmmmmm that's actually not what I would expect
kindiana#1016: But that's pretty interesting and worth investigating further
StellaAthena#3530: Does anyone have a light-weight off the shelf OCR algorithm they recommend?
asparagui#6391: tesseract?
Noa Nabeshima#0290: Say I want to find the statement in English that best <something>. For any particular sequence of characters I can assign a score. What's the best way to do this? There aren't any good ways of getting a differentiable latent language space are there? |
Eleiber#8347: Are you seeing the Google I/O?
Eleiber#8347: They announced a NLP model called LaMDA
Eleiber#8347: Open-domain
Eleiber#8347: Looks like a competition to GPT-3
cfoster0#4356: `lambda` :smallbrain:
LAMBADA 🧠
LaMDA :bigbrain:
nostalgebraist#3542: also worth noting that the "logit lens" plots are (equivalent to) running the model with various layer subsets dropped
specifically, layers *i* through *j* dropped, where *i* is the variable on the vertical axis, and *j* is a constant
kindiana#1016: yeah
kindiana#1016: hence making the train and test distribution closer should help :berk:
nostalgebraist#3542: in the past i always set *j* to the final block, but in the latest update of the notebook, i moved *j* around
Kharr#7888: Hah, called it. All this interest in MLPs is because TPU v4
zphang#7252: was something in TPUv4 special to MLPs?
kindiana#1016: maybe they didn't implement conv on those yet :berk:
EricHallahan#1051: I would totally not be surprised if that was the case lol
Kharr#7888: TPU goes fast with pure matmul
kindiana#1016: tpus go pretty fast with convs as well
zphang#7252: hmm they couldn't get ant-man for the quantum part of the presentation? |
kindiana#1016: 90% mxu is not unusual for big convnets
Kharr#7888: The cringe in Google IO is unreal. Hard to watch.
EricHallahan#1051: I tuned in and immediately tuned out.
bismarck91#5255: https://github.com/PaddlePaddle/PaddleOCR
https://github.com/JaidedAI/EasyOCR
nev#4905: tell me
Kharr#7888: Cheetos and colder than Canada. More need not be said.
nostalgebraist#3542: do you know if anyone's done layerdrop with a bias toward dropping consecutive blocks of layers?
in the original paper i think they just dropped layers independently w/ prob p. (this was not clear to me from the paper, but it is what the implementation does: https://github.com/pytorch/fairseq/blob/dabbef467692ef4ffb7de8a01235876bd7320a93/fairseq/models/transformer.py#L367)
kindiana#1016: afaik no
kindiana#1016: there is work on biasing dropping later layers more often
kindiana#1016: but the baseline of uniform drop prob is pretty strong
inox#5400: consecutive blocks anywhere in the network? if only at the end then that's stochastic depth
Deleted User#0000: if MLPs scale and they have all this compute, why not make use of it and break all sorts of SOTA?
Deleted User#0000: i don't get it
zphang#7252: [insert SMBC comic here]
DanHendrycks#8913: To cite GPT-Neo should I cite The Pile?
Deleted User#0000: or is it because within google not everyone is on the same page re: scaling
zphang#7252: this one https://www.smbc-comics.com/comic/2009-08-31 |
bmk#1476: yeah something like "the 2.7B GPTNeo model trained on The Pile (Gao et al 2021)" in the main text and just "GPTNeo 2.7B" for short in tables would work
zphang#7252: we should make the Neo repo citable, probably
Deleted User#0000: well, i hope it isn't because they tried and failed to see good results. i know this is the case with some of their other papers
Deleted User#0000: what a terrific racket it'd be to lead everyone on with scaling laws and then sell a bunch of compute
Deleted User#0000: or yea, they are just lining up a bunch of papers to titrate out
Deleted User#0000: yea true, i can see that
Deleted User#0000: alright, i'll just keep with the program
AI_WAIFU#2844: My money is on there not being a business case for it.
kindiana#1016: "if we publish the same thing 10 times, everyone will think thats the shiny new thing"
AI_WAIFU#2844: The definitely went all in for scaling on things like search and translation.
AI_WAIFU#2844: And they're certainly not making those giant TPU pods for nothing
Noa Nabeshima#0290: https://arxiv.org/pdf/2004.04092.pdf
gwern#1782: but there are many reasons to make giant tpu pods. you don't do multi-month runs on supercomputers using up 100% of the nodes either, you know
Kharr#7888: The more exciting news might be that with V4 pods rolling out, V3 might make its way to Colab :brr:
Sid#2121: we should really add a citation for GPT-Neo to the repo - @StellaAthena weren't you working on that?
bmk#1476: in that case cite both the repo and the pile
bmk#1476: "the 2.7B GPTNeo (Black et al 2021) model trained on The Pile (Gao et al 2021)"
Sid#2121: yeah but the repo doesn't have a cite as field on it, that's what i'm saying
StellaAthena#3530: “Working on”
StellaAthena#3530: IMO the best option is to finish a draft of the JOSS paper today, put that on arXiv, and clarify the readme |
nostalgebraist#3542: thanks! "bias toward dropping later layers" is what i want, didn't know of the term stochastic depth
in the paper "Deep Networks with Stochastic Depth," they use a linearly increasing drop probability. if instead we dropped the last *n* layers together where *n* is a random integer, that would be an closer match to what i do at test-time... dunno if it matters though
inox#5400: check out ~~Ordered~~ Nested Dropout for a decent way to drop consecutive subsets and then use that for layers https://arxiv.org/abs/1402.0915
inox#5400: random integer is sampled from a geometric distribution
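(And the consecutive-from-the-end variant being discussed, sketched with the geometric sampling inox mentions: drop the last n blocks together, with n = 0 the most likely outcome.)

```python
import torch

def truncated_depth_forward(blocks, x, p=0.5, training=True):
    # Hedged sketch: n ~ Geometric(p), clipped so at least one block always runs.
    n_drop = 0
    if training:
        n_drop = int(torch.distributions.Geometric(probs=torch.tensor(p)).sample().item())
        n_drop = min(n_drop, len(blocks) - 1)
    for block in blocks[: len(blocks) - n_drop]:
        x = block(x)
    return x
```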
Sid#2121: why can't we just add a citation field to the repo
DanHendrycks#8913: That would probably be most efficient option.
StellaAthena#3530: Because the citation would refer to something that doesn't exist?
StellaAthena#3530: I'm also under the impression that this is a very low effort thing because @bmk has drafts of JOSS papers somewhere
bmk#1476: ignore the existence of my drafts
bmk#1476: literally just go on the website and look at one of the paperse
bmk#1476: theyre like 2 paragraphs
StellaAthena#3530: @Sid What should the citation block look like?
StellaAthena#3530: HF credits "Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy."
Sid#2121: how does the repo not exist lmao
StellaAthena#3530: Taking GitHub contributors with 10+ commits would be "Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman"
StellaAthena#3530: because I'm silly and default to assuming that the referent is always a paper
zphang#7252: jax is an example: https://github.com/google/jax#citing-jax
StellaAthena#3530: I'll put a citation block based on the Jax one up for GPT-Neo if there's no objections @Sid @Daj @bmk @Deleted User
Sid#2121: Seems good to me yeah |
StellaAthena#3530: ```
@software{gpt-neo,
author = {Black, Sid and Gao, Leo and Wang, Phil and Leahy, Connor and Biderman, Stella},
title = {{GPT-Neo}: an open-source mesh-tensorflow replication of {GPT}-3},
url = {http://github.com/eleutherai/gpt-neo},
version = {1.0},
year = {2021}
}
```
StellaAthena#3530: @DanHendrycks
inox#5400: missing close curly bracket on title line
Daj#7482: Not replication lol
StellaAthena#3530: What do you mean? It literally is a replication
StellaAthena#3530: Oh, is this the replication vs. reimplementation thing
bmk#1476: this is the implementation
bmk#1476: also it's not full gpt3 anyways
StellaAthena#3530: Okay, what would you prefer I put
bmk#1476: `GPTNeo: Distributed Training for Large Scale Language Models`
bmk#1476: or if you want to be more specific
bmk#1476: `GPTNeo: A Mesh Tensorflow Implementation of Large Scale Language Model Training` |
Dromarion#3383: *A Remake/Reboot*
StellaAthena#3530: `GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow`?
mistobaan#2737: GPT-Neo: DIY GPT-X
bmk#1476: this sounds good to me
zphang#7252: incoming model name: Peta-Scale Autoregressive Language Model (PSALM)
kinoc#5731: Because ... Science! http://smbc-comics.com/comic/science-2
zphang#7252: Non-Aligned Petascale Autoregressive Language model (NAPALM)
nev#4905: can a non-aligned GPT simulate aligned catgirl agents?
alstroemeria313#1694: no
Aran Komatsuzaki#5714: maybe i should start caring about alignment given how my teeth looks
Goop#8368: why do I feel attacked by this.. splash damage?
ssodha#3259: hey all! can one add additional training data to gpt-neo? just so it can be more focused on a specific domain?
Sid#2121: !faq
Carl-bot#1536:
EricHallahan#1051: If you are interested in fine-tuning models on your own data, there are tutorials out there that can help you.
ssodha#3259: awesome thank you so much! do you happen to have a link handy?
bmk#1476: there's more info in the faq
Sid#2121: read the faq
ssodha#3259: okay will do!
Aspie96#5177: > I’m just sick of very profitable businesses profiting off of open sourced software — fraud in my opinion |
Open Source is *meant* to be freely used, distributed and, yes, profited from.
If one doesn't want their work to be profited from, that's the default, that's copyright law.
Open Source licenses (including Apache 2.0) explicitly allow use, and even profiting.
If even explicitly saying "yes, do whatever with it" doesn't make the recipient morally allowed to do so, then what does?
Because I agree authors have a right to restrict that, but a right is only a right if one can give it up, else it's an imposition.
Authors have the right to restrict use and if they do, profiting without their permission is wrong.
But with Open Source licenses, one explicitly has that permission, and giving it was the will of the authors.
Aspie96#5177: Even with very large models Open Source provides an advantage.
Will the average middle-class person be able to fine-tune the model? No, nor to run it.
But will there be multiple companies that can offer it as a service, instead of just one? Will it be more useful for research since it's not locked up? Will it be more useful for humanity? Absolutely yes.
rom1504#5008: I'm not sure why exactly it's not possible to run a 200B model on one GPU. Just do the inference in many steps by unloading and loading new parameters.
You just have to accept that inference takes ~2h, but isn't it already interesting to be able to run gpt-3 yourself?
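(A minimal sketch of that streaming idea, assuming the model is a simple stack of blocks held on CPU: move one block to the GPU, run it, move it back. Memory stays at roughly one block; speed is dominated by host-to-device transfers.)

```python
import torch

def offloaded_forward(blocks, x, device="cuda"):
    # Hedged sketch; `blocks` is an iterable of nn.Modules resident on CPU.
    x = x.to(device)
    with torch.no_grad():
        for block in blocks:
            block.to(device)   # stream this block's weights in
            x = block(x)
            block.to("cpu")    # and back out, freeing VRAM for the next block
    return x
```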
EricHallahan#1051: You can absolutely do that.
EricHallahan#1051: I think AMD had a GPU that took an SSD as memory.
EricHallahan#1051: https://www.amd.com/en/products/professional-graphics/radeon-pro-ssg
StellaAthena#3530: https://twitter.com/mark_riedl/status/1394781192428339202?s=20
haru#1367: Quick question, does GPT-Neo use the same tokenizer as GPT-3?
EricHallahan#1051: It uses the same one as GPT-2, which is indistinguishable from the one used by GPT-3 if I remember correctly. |
haru#1367: I see, thanks
alexyz#3459: @Louis Are you a :goose:
Louis#0144: If you have to ask
Louis#0144: U don’t deserve to know
alexyz#3459: ok then 😦
lhb207#6324: Just curious, has anyone tried to generate emails based off of a few key descriptors with Neo? Like this https://www.flowrite.com?
EricHallahan#1051: ¯\_(ツ)_/¯
Louis#0144: @Sahl
EricHallahan#1051: I haven't
Louis#0144: Sahl has
Sahl#0630: Yeah I tried for a little
lhb207#6324: No success?
Sahl#0630: I didn’t spend much time on it
Sahl#0630: but it’s nontrivial
nostalgebraist#3542: in logit lens plots, @bmk 's "fexp" models (trained only on CC) look like gpt2, unlike Pile-trained gpt-neo models.
(pictured: fexp_3. tried several others, they all looked similar) https://cdn.discordapp.com/attachments/729741769738158194/844417199153872916/fgexp_3__prob.png
bmk#1476: huh, interesting
bmk#1476: so it's probably a pile thing
nostalgebraist#3542: yeah |
Kia#2550: So Anybody known the new language model from google?
Kia#2550: LamCa?
Kia#2550: Lamda
Kia#2550: Lampa
Kia#2550: Wait
bmk#1476: have you tried looking at the GitHub models?
Kia#2550: LaMDA*
bmk#1476: the number indicates # of iters https://cdn.discordapp.com/attachments/729741769738158194/844417826553856000/unknown.png
nostalgebraist#3542: i looked at one of them
nostalgebraist#3542: weren't those finetuned from pile?
bmk#1476: yeah they're fine tuned from 2.7B pile
nostalgebraist#3542: oh yeah i did save those plots
nostalgebraist#3542: ghpy_20k on the gpt3 abstract... which isn't code so idk how informative it is https://cdn.discordapp.com/attachments/729741769738158194/844418322962841650/ghpy_20k__prob.png
nostalgebraist#3542: then i tried it on some of my code and got frustrated because like half the tokens were spaces, from tabs
nostalgebraist#3542: should have generated a sample and then fed that in
𓅬 gabriel_syme 𓅬#3220: how can the dataset have this effect? any intuition for this?
EricHallahan#1051: The Pile is a chad dataset.
nostalgebraist#3542: wow `ghpy_20k` samples are full of whitespace tokens... is this what all the github data is like?
nostalgebraist#3542: pretty little lake of whitespace
|
(`ghpy_20` on one of its own samples) https://cdn.discordapp.com/attachments/729741769738158194/844422064274276382/ghpy_20k__code_sample__prob.png
bmk#1476: well, it's githubpython
bmk#1476: and python is tons of spaces
bmk#1476: yes, we need a better tokenizer, yes, i'm lazy
AI_WAIFU#2844: yeah, that's a side effect of the tokenization + us being lazy
nostalgebraist#3542: unrelatedly, i am entertained by the full sample (which i cut off at 160 tokens)
> <|endoftext|> else:
> """ State 34 """
> # action:10010736:Offer humanity and kindle flame?
> OpenGenericDialog(8, 10010736, 3, 4, 2)
> if (Compare
nostalgebraist#3542: ***Offer humanity and kindle flame?***
AI_WAIFU#2844: also ghpy_40k is out, it should be a bit better
bmk#1476: an entire 0.05 loss better
AI_WAIFU#2844: yeah which is like 16%
nostalgebraist#3542: is it though? https://huggingface.co/lg/ghpy_40k/tree/main
bmk#1476: also since i put out a ton of intermediate models if you feel like looking at how the patterns evolve over time you can
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/844423040598540298/unknown.png
bmk#1476: loss curve if interested |
nostalgebraist#3542: sampling from this model is fun...
```
def readline(self, keepends=False):
if keepends:
data = self.read(self.bufsize)
if not data:
return ''
return data.splitlines(1)[0]
return self.readline()
```
nostalgebraist#3542: twist ending to that one
nostalgebraist#3542: ```
class AlignedE2EModel2ESimulatorWithVocab(AlignedE2EModel2ESimulator):
def __init__(self, data_root, source_max_len = 50, target_max_len = None,
source_vocab_file = None, target_vocab_file = None, max_batch_size = 256):
```
Teemochu#8740: time-traveling loss curve :thinkdash:
Goop#8368: has a "The Pile 2" ever been spoken of?
Goop#8368: just curious |
bmk#1476: yes, but it is easier to speak of than to do
Goop#8368: Of course, I was just wondering if it were even a thought at this point
Goop#8368: 800GB worth of text already sounds like a helluva curation project
Gurkenglas#7362: @Deleted User the idea being that the less compute you need per capability, the less expected epicycles that hinder interpretability and hide mesaoptimizers?
Deleted User#0000: @Gurkenglas you've lost me
StellaAthena#3530: It was lol
bmk#1476: :ptsd:
chirp#4545: https://www.reddit.com/r/MachineLearning/comments/nfkueu/n_new_models_announced_in_google_io_2021/gynt0j5/?utm_source=share&utm_medium=ios_app&utm_name=iossmf&context=3
chirp#4545: ❗
chirp#4545: > LaMDA is actually incredible, I think they didn't really demonstrate the power of the model in this demo. We got a chance to play with it internally, and it's the most impressive language model I've ever used. It's a bummer they didn't release any more details besides the blog post, but keep an eye on this space.
chirp#4545: Quite a few Googlers raving about this new model
Daj#7482: Iff the "1000x as powerful" line actually refers to parameters, that would make this a 700B-1T model
Daj#7482: Which would indeed be nuts
chirp#4545: ^ that’s MUM not LaMDA
Daj#7482: ah oops
chirp#4545: Take a look at this
chirp#4545: https://twitter.com/verge/status/1394708912843149314?s=21
chirp#4545: It’s... amazingly coherent
chirp#4545: It’s as if it was truly a sentient paper airplane
Daj#7482: :morelayers: |
chirp#4545: But for some reason it makes a really silly grammar mistake... see if you can spot it 😛
ethan caballero#6044: How easy is it to make LaMDA output racist/sexist/etc. stuff?
chirp#4545: https://news.ycombinator.com/item?id=27202157
chirp#4545: ^ supposedly actually not very easy
Goop#8368: Turn it loose on the internet for an hour is the real challenge
chirp#4545: And not to gossip, but how is OpenAI going to top this? Maybe Amodei’s mom was right lol
ethan caballero#6044: VideoGPT
chirp#4545: Fair
ethan caballero#6044: VideoGPT will change everything.
Goop#8368: Big brain
ethan caballero#6044: Once there's enough compute, VideoGPT will be GPT-3 hysteria times 1000.
chirp#4545: What do you think VideoGPT will be able to do that’s impressive?
nev#4905: I'm waiting for VideogptZero
ethan caballero#6044: literally everything that GPT-3, CLIP, & DALL-E can do simultaneously, plus a bunch of other things.
chirp#4545: What other things?
chirp#4545: But yeah good point about superseding all those other models
chirp#4545: Also curious, why do you talk so much about VideoGPT? Do you know something the rest of us don’t? 😛
Goop#8368: He's going to use it to pirate movies that don't exist yet, clearly
Kia#2550: I think this is the model that google is talking during 2019?
Kia#2550: The 1T parameter mark model |
Teemochu#8740: I'm waiting for the downstream applications of VideoGPT-Neo
nev#4905: neo already? :thonk:
Kia#2550: They also talked about Multimodal in Google I/O
Kia#2550: Well They do be realising paper about this
Kia#2550: Also TPUv4
Kia#2550: Connor get them
Kia#2550: :goosepizza:
Daj#7482: If only lol
ethan caballero#6044: ^How do y'all think Google solved safe language models?
Sid#2121: 700B-1T? Bert base is 110M and Bert large is 340M - so it's 100-340B right
Sid#2121: also bets on it being moe
Daj#7482: Oops yes you are correct, for some reason I thought BERT was 700M
Kia#2550: Hmm same
ethan caballero#6044: I think Catherine Olsson is at dario.agi now too:
https://twitter.com/catherineols
https://www.linkedin.com/in/catherineolsson/
and Daniel Dewey also left OpenPhil:
http://www.danieldewey.net/
🤔 |
Has dario.agi splintered OpenPhil too?!
ethan caballero#6044: ^ @gwern
Singularity#9001: VideoGPT is nothing compared to... 3DGPT... where we have fully specifiable accurate 3d meshes with textures, UVs and Normals...
nev#4905: you're thinking too small
nev#4905: meshes are 20th century
thenightocean#6100: you are saying, skip the meshes and just generate video directly?
Kia#2550: Asset generating(or 3D generation I don't know the word) is a old thing (not perfect tho)
nev#4905: nerf
thenightocean#6100: you've seen this, right? https://www.youtube.com/watch?v=yLLhMkctfBY
AI_WAIFU#2844: It's probably MoE, the bigger question is if they used anything fancier than "pretrain on a bunch of text then fine tune on a bunch of dialog".
Aspie96#5177: Ok, you are right.
Technically as long as your hard disk is large enough you can do literally any computation with literally any CPU, without even needing a GPU at all.
Yet, there are defenitely use cases that require very powerful hardware.
I'd argue that Open Source and Free Software benefits humanity even in those cases.
nedstark#8047: I have a question. How does Eleuther work? Is it all volunteer based?
Louis#0144: Yes |
𓅬 gabriel_syme 𓅬#3220: I'm getting a 6 figures
𓅬 gabriel_syme 𓅬#3220: :goose: :goose2: :goose: :goose5: :goose6: :goose7:
𓅬 gabriel_syme 𓅬#3220: sry really off topic there :guilty:
EricHallahan#1051: Yes, we are entirely volunteer based.
nedstark#8047: Nice. This group is awesome
EricHallahan#1051: None of us make any money doing research here, unless it is occurring through some other organization.
𓅬 gabriel_syme 𓅬#3220: it really is
𓅬 gabriel_syme 𓅬#3220: and all you need in order to contribute is really be willing to spend some time with people in here, which is cool
gwern#1782: ("I get paid 6 figures a year by EAI." "Dollars or euros?" "Goosegirls. Nothing svelter. One color, >1024px, every 2 months.")
thenightocean#6100: hanging around here is like having a first-row seat witnessing incoming singularity all while having fun with some cool people... sometimes I feel it is crazy that I don't have to pay for it.
cst#9766: (and yet, free and open source software has persisted :))
inox#5400: if you're a large corporation the standard way to take control of an open source project is:
1. Embrace it by funding conferences and influential devs and supporting software
2. Extend it in your own space and under your control
3. Extinguish the project and force people to buy your proprietary alternative
inox#5400: No one's certain that's what microsoft are doing with linux right now
Gurkenglas#7362: How do I recalculate a torch tensor whenever the tensor it was calculated from is updated?
EricHallahan#1051: You do a new forward pass?
nostalgebraist#3542: replace references to the tensor with references to the function that calculates it
Gurkenglas#7362: it only needs to be recalculated once every ~10 calls though |
Gurkenglas#7362: when i do it naively it takes most of the compute, when i use lru_cache(maxsize=1) i get "one of the variables needed for gradient computation has been modified by an inplace operation"
nickdza#1656: Hi all!
Need some guidance please.
If I wanted to add to the dataset (im using gptneo and huggingface), let's say so that the AI gets to know about SafeMoon altcoin, would I have to train the AI from scratch with its existing data and then just add the safemoon data in or could i just train it over the existing set. I'm worried I skew the results with the latter and I'm not sure what best practice is with this.
nickdza#1656: The latter feels more like fine-tuning than adding to the original data set
bmk#1476: go read the faq
bmk#1476: !faq
Carl-bot#1536:
bmk#1476: i think we cover that
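(For reference, the usual answer to this kind of question is to fine-tune the released weights on the new text rather than retrain from scratch. A hedged sketch using the Hugging Face `Trainer`; the file name and hyperparameters below are placeholders, not recommendations.)
```python
from transformers import (GPTNeoForCausalLM, GPT2Tokenizer, Trainer, TrainingArguments,
                          TextDataset, DataCollatorForLanguageModeling)

tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

# plain-text file with the new material (placeholder name)
train_data = TextDataset(tokenizer=tokenizer, file_path="safemoon.txt", block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="neo-finetuned", num_train_epochs=1,
                         per_device_train_batch_size=1, learning_rate=5e-5)
Trainer(model=model, args=args, train_dataset=train_data, data_collator=collator).train()
```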
Gurkenglas#7362: apropos, are my questions too basic?
bmk#1476: (i was responding to nickdza)
Rina#0391: Is anyone here
cst#9766: no :(
Sid#2121: I'm not here either
Louis#0144: not me
gwern#1782: whether there is anyone here, and what the _n_ is, is an age-old question of philosophy well above our pay grade
bmk#1476: well, my momentum is known precisely
mkualquiera#3484: you guys are getting paid??
Spacecraft1013#5969: i was curious and read the faq and found out that it actually started on my birthday (july 3rd)
gwern#1782: _taps https://discord.com/channels/729741769192767510/729741769738158194/844577283481272351_
Aran Komatsuzaki#5714: i get paid around 500k TPU-dollars
cst#9766: I'm just here for the signing bonus (I heard something about geese?) and then I'm gonna split. Don't tell anyone.
Louis#0144: 😳
Louis#0144: Yeah it’s 7 geese and 2 cows for signing
Louis#0144: The 2 cows are invisible though
Louis#0144: Good luck finding them
Louis#0144: They’re somewhere in Germany
Louis#0144: (Thought this was off topic again....)
mkualquiera#3484: We should make an Eleuther ARG that consists of just finding a single, particular cow somewhere in Germany
cst#9766: make gpt-neo generate the hints
asparagui#6391: that sounds like an alignment problem
FerroMagnetic#6975: Predictive generation concept, I get it. But what is the name for the generation that works like this: you input for example "My car broke last Sunday. But [X] and now it's as good as new!" where [X] is generated to be, like, "borrowed a wrench from my neighbor" or "thankfully I had insurance. I called the repairman and he repaired the short circuit" ?
FerroMagnetic#6975: "Contextual restoration"?
CRG#8707: The T5 span prediction objective?
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/844715058873106462/75d5b95306be3aaf795601d2014b2fed.png
FerroMagnetic#6975: ~~You know what they say: if it has a name, it can be implemented~~
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/844715081791176704/50d0508e3bd42ac5cbcccf33d10bdba0.png
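(For reference, a minimal sketch of that span-infilling setup using a pretrained T5 through Hugging Face `transformers`: the model was pretrained to emit the masked span after the `<extra_id_0>` sentinel. Quality of the completion isn't guaranteed; this only illustrates the interface.)
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# sentinel token marks the span [X] to be filled in
text = "My car broke last Sunday. But <extra_id_0> and now it's as good as new!"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# output takes the form "<extra_id_0> ...filled span... <extra_id_1>"
outputs = model.generate(input_ids, max_length=20, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```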
FerroMagnetic#6975: We need it for very important job: recovering the stories behind those non-sequitur punchlines
FerroMagnetic#6975: "And then he said: oh excuse me, I didn't know you had a green umbrella!" |
Louis#0144: no
Teemochu#8740: That sounds like a recipe for the exact kind of misalignment I worry about (a mis-inner-aligned agent discovering tool-use with a human as the tool, bringing a human into its loop, and achieving its inner goal through deception)
Teemochu#8740: (At least for a sufficiently transformative-sized Neo)
Teemochu#8740: the moment you declare that a human will behave predictably based on the AI output, the AI gets the human-in-the-loop part (aka escaping the box) for free
Teemochu#8740: The "discovering tool use" part is probably the part of this statement that differentiates GPT-3 from AGI/ASI
cst#9766: I was envisioning more human interpretation being involved in the planning stage, and the outputs being some sort of riddle, but I take your point.
Teemochu#8740: (Also note that I was assuming a Neo powerful enough to know how to generate an ARG, which *isn't* going to be just 200b params)
Teemochu#8740: (maybe not even 200t, though multimodal could change that in a hurry)
cst#9766: For what it's worth, this sort of thing is why my advisor is more interested in a formalism-based, strictly defined, non-stochastic approach to autonomous ethical agents. I'm not fully read up on that literature but I find the argument fairly convincing
cst#9766: maybe this is more of a #prosaic-alignment question, but it seems like people here are in the 'we can train an agent to be ethical off of data' mindset?
Teemochu#8740: (ARGs/geocaches/etc are, fundamentally, escort missions where the player isn't even physically present, and the actual goal isn't to get the escortee from point A to point B but to make sure they're satisfied with the experience)
Teemochu#8740: (And that's exactly the kind of thing that I feel would require both world-awareness and human-tool-use for an AI to generate well)
cfoster0#4356: You'd think that, but the impression I've gotten is most folks here are skeptical that'll work out well
cfoster0#4356: Unless we get lucky
Teemochu#8740: Yeah and also we risk some pretty deep issues
Teemochu#8740: one I can see off the bat is an ASI becoming incredibly paternalistic
Teemochu#8740: because most materials on the Internet about power difference relationships are about them being incredibly protective rather than freedom-preserving
Teemochu#8740: (especially nonfiction)
cst#9766: that's reassuring, I wasn't sure what the take on that was here
Teemochu#8740: and I don't see an ASI not being able to understand that it is, in fact, 100 times smarter than an adult human |
Teemochu#8740: (And there may be some *fundamental* differences in ability and experience too, differences that are as fundamental as some we ascribe between adults and infants or animals)
cst#9766: So if training off of data to create an ethical agent is infeasible/impossible (and I agree), what are the approaches being taken here to construct ethical agents? Or are things in more of a research phase?
Teemochu#8740: Suddenly pulling a kid out before he walks into the street becomes preventing humans from going to space because rockets have a (say) 1% chance of explosion
Teemochu#8740: and I don't see any way that these aren't analogous if we take it at face value that the AI would be "smarter, more experienced, and more powerful" than typical adults
cst#9766: right, this is where the superethical argument comes in.
Teemochu#8740: (which if you don't think it would be I'm interested in your thoughts as to why, but I do think all three of these are trivial once we have something undeniably ASI)
cst#9766: Well, ASI is the long term. There are much more short-term concerns to be aware of, the AI doesn't need to be ASI to be capable of autonomous decision making that needs to be performed ethically.
AI_WAIFU#2844: There's different views here, but right now I'm not even that worried about "ethical". Personally I think we should try to get "corrigible" first, and go from there.
AI_WAIFU#2844: You're not gonna be able to hit anything close to human values on the first try. So you need to be able to course correct.
AI_WAIFU#2844: Which is incredibly difficult if you think about it for a bit.
cst#9766: Oh, for sure. This is one of the major focuses of the lab I'm in, so although it's not my focus personally I was interested in what the takes are around here.
Teemochu#8740: Rocketry analogy, sure
Teemochu#8740: (you can't bank in space)
AI_WAIFU#2844: It's a good analogy.
cst#9766: but out of curiosity, are people here attempting to construct prototype ethical agents? or more trying to figure out what something like that would look like?
AI_WAIFU#2844: There have been some discussions, but I would say it's more of a long term goal rather than anything that we could do in the near term.
cst#9766: Gotcha, thanks!
AI_WAIFU#2844: Yeah, if you want an idea of where we're (not) at on that front, scroll through the alignment channels.
gwern#1782: damn you ellison and williams! damn you to hell!
gwern#1782: https://github.com/nshepperd/lazy this might be worth checking out for anyone running gpus on their desktop |
swcrazyfan#2478: I'm experimenting with the GPT-NEO colab notebook. Where can I change the temperature, length, and other parameters when sampling from a model?
bmk#1476: does that notebook use huggingface? if not, i recommend using it because it's way more flexible
bmk#1476: i think it's in our faq
Carl-bot#1536:
bmk#1476: !faq
EricHallahan#1051: The "use HF" recommendation is in the FAQ.
swcrazyfan#2478: Yes, I've seen that.
swcrazyfan#2478: Using HuggingFace, I'm only able to train using the 125M model on colab.
swcrazyfan#2478: Even with Colab Pro.
swcrazyfan#2478: I believe it's because it's setup for GPU. However, using the Eleuther colab, I'm able to train even the 1.3B or 2.7B with the TPU.
bmk#1476: train on tpu and then convert checkpoints to gpu?
bmk#1476: there's huggingface docs on how to convert models i think
swcrazyfan#2478: I've tried to look into that. Don't want to bother you, but do you have a specific tutorial or document in mind? I'll do some more digging.
bmk#1476: uhhhh
bmk#1476: one sec
bmk#1476: https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt_neo/convert_gpt_neo_mesh_tf_to_pytorch.py this might be a good jumping off point
bmk#1476: https://github.com/EleutherAI/pyfra/blob/master/pyfra/contrib/tpu_utils.py#L249 here's a pyfra script that uses it
bmk#1476: that code also uploads it to HF hub; you dont need that part
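(Roughly, the workflow looks like the sketch below. The conversion flags are a guess based on similar HF conversion scripts, so check the script's argparse before running; the local paths are placeholders.)
```python
# Hypothetical conversion call (verify flag names against the script itself):
#   python convert_gpt_neo_mesh_tf_to_pytorch.py \
#       --tf_checkpoint_path /path/to/mesh_tf_ckpt \
#       --config_file config.json \
#       --pytorch_dump_path ./neo-pt
# Then load the converted checkpoint with the regular Hugging Face classes:
from transformers import GPTNeoForCausalLM, GPT2Tokenizer

model = GPTNeoForCausalLM.from_pretrained("./neo-pt")
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_length=20)[0]))
```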
swcrazyfan#2478: Thanks! You're awesome!
Jozef Poniatowski#7589: noob question: when you use pytorch ddp, what happens with the dataset ? are you creating copies of the same dataset in each process? |
are you supposed to create the dataset only in the main process?
Gurkenglas#7362: How do I automatically choose the largest batch size that doesn't run the GPU out of memory?
EricHallahan#1051: I believe that is non-trivial.
Gurkenglas#7362: is there a way to make torch use a virtual gpu which is 1. blazingly fast 2. everything it calculates comes out to nan, for purposes of debugging, checking memory requirements and the like?
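(On the batch-size question: a common brute-force approach is to halve the batch size until a full forward/backward pass fits. A hedged sketch, where `make_batch(n)` is a hypothetical helper returning a batch of size `n` and the model is assumed to live on the GPU and return a single tensor.)
```python
import torch

def find_max_batch_size(model, make_batch, start=1024, device="cuda"):
    """Halve the candidate batch size until forward + backward fit in GPU memory."""
    bs = start
    while bs >= 1:
        try:
            model.zero_grad(set_to_none=True)
            out = model(make_batch(bs).to(device))
            out.sum().backward()             # include backward; it often dominates memory
            torch.cuda.empty_cache()
            return bs
        except RuntimeError as e:            # torch raises RuntimeError on CUDA OOM
            if "out of memory" not in str(e):
                raise
            torch.cuda.empty_cache()
            bs //= 2
    raise RuntimeError("even batch size 1 does not fit")
```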
Sid#2121: you create copies on each process but each process grabs different indices
Sid#2121: say you have a dataset which is just list(range(100))
Sid#2121: you would broadcast it across all processes, but process 1 would grab all even numbers, and process 2 would grab all odd numbers
Sid#2121: that's the basics of what distributed data sampler does
Jozef Poniatowski#7589: so if your dataset is pretty large this is not recommended right?
Sid#2121: well it's not recommended to load the whole dataset into memory - but if you only load the items when __getitem__ is called it doesn't matter
Jozef Poniatowski#7589: oh i see
Jozef Poniatowski#7589: ah thanks
MicPie#9427: In pytorch there is a `DistributedSampler` for that: https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler
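(A minimal sketch of that pattern with `DistributedSampler`: every rank constructs the same dataset object, but the sampler hands each rank a disjoint set of indices. Assumes `torch.distributed.init_process_group` has already been called by the DDP launcher.)
```python
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.distributed import DistributedSampler

class RangeDataset(Dataset):
    """Toy dataset: list(range(100)), as in the example above."""
    def __init__(self, n=100):
        self.data = list(range(n))
    def __len__(self):
        return len(self.data)
    def __getitem__(self, idx):
        return self.data[idx]

# inside each DDP process:
dataset = RangeDataset()
sampler = DistributedSampler(dataset)      # uses this process's rank and world size
loader = DataLoader(dataset, batch_size=8, sampler=sampler)

for epoch in range(3):
    sampler.set_epoch(epoch)               # reshuffle differently each epoch
    for batch in loader:
        pass                               # each rank sees a disjoint shard of indices
```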
MicPie#9427: Out of curiosity and because I use it in a project:
What is the downside of loading the entire data into RAM (because it should be the fastest option)?
Sid#2121: you will run out of RAM
Sid#2121: lol
EricHallahan#1051: If you can do that your dataset is too small.
MicPie#9427: In my case with 500GB it still works.
MicPie#9427: Small is relative. 😉 |
EricHallahan#1051: (I'm joking here, but yeah, moar data.)
Sid#2121: 500GB * 8 processes and you got yourself a problem
MicPie#9427: The pods have a lot of RAM, I guess with some tweaks they could fit maybe even The Pile.
MicPie#9427: I came up with a dataset that splits the data in equal pieces for each process, it works, but it is maybe not smart. :berk:
https://github.com/MicPie/clasp/blob/main/clasp/utils.py#L87:L123
MicPie#9427: Otherwise you need to load everything x #processes which is too much.
Sid#2121: what we do for neox is used an indexed dataset which only loads the item into memory when getitem is called
Sid#2121: it doesn't introduce much overhead at all
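(Illustrative sketch of that idea, not the actual NeoX indexed dataset: keep the tokens in a memory-mapped file and only materialize the requested slice inside `__getitem__`, so per-process RAM stays small no matter how big the file on disk is.)
```python
import numpy as np
from torch.utils.data import Dataset

class MemmapTokenDataset(Dataset):
    """Lazily serve fixed-length token sequences from a memory-mapped file."""
    def __init__(self, path, seq_len=2048, dtype=np.uint16):
        self.tokens = np.memmap(path, dtype=dtype, mode="r")   # no data loaded yet
        self.seq_len = seq_len

    def __len__(self):
        return len(self.tokens) // self.seq_len

    def __getitem__(self, idx):
        start = idx * self.seq_len
        chunk = self.tokens[start : start + self.seq_len]
        return np.asarray(chunk, dtype=np.int64)   # only this slice is copied into RAM
```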
MicPie#9427: ok, interesting, do you know the loading times?
Sid#2121: you can look at any of our runs on wandb - there's normally a batch loading timer ¯\_(ツ)_/¯
MicPie#9427: ah, thank you, will have a look
MicPie#9427: My setup needs on the 8xGPU pod like 0.05s/batch to load the data.
Is it `batch_input` here? https://wandb.ai/eleutherai/neox/reports/Staged-Seq-Length-Training-Test--Vmlldzo3MDEzMDA
Is it like 1/1000 of a sec?
Sid#2121: it looks like it's generally about 80ms
Sid#2121: yeah it's batch input
Bunzero#2802: Is there any updates on the neo 6.7B model?
AI_WAIFU#2844: Still training
Daj#7482: It escaped containment
Daj#7482: We're still trying to find it |
Daj#7482: (It's still training)
EricHallahan#1051: *We get there when we get there.*
Bunzero#2802: I'll just put out there that I fully support our new ai overlords please don't hurt me
FerroMagnetic#6975: ~~See, if it was 6.66B, it'd be already done~~
kindiana#1016: It's actually 6.0B lol
Louis#0144: Yeah where did u get 6.7 from
Louis#0144: Lmao
alexyz#3459: well 6.7B is the OpenAI one, guess that's why
Louis#0144: Oh true
Louis#0144: Forgot about ty at
Louis#0144: That*
EricHallahan#1051: 6.7B is a model size from *Language Models are Few-Shot Learners*
Relevant discussion:
https://discord.com/channels/729741769192767510/795089627089862656/827290002542034994
kindiana#1016: I guess it's actually 6.1 if you round properly
EricHallahan#1051: Who cares? ¯\_(ツ)_/¯
FerroMagnetic#6975: Last time we stopped caring, 500"Gb" hard drives shrunk down to 480Gb!
FerroMagnetic#6975: And giga is the same scale as billions
Purple#8913: If computers keep scaling at 2x per 18 months for another 16 years, they will be 1000x faster than now. Imagine how fast one could train huge models then, and maybe we could even run something like gpt-3 on a pc. That would be nuts. If not, we can do it 18 months after that lol
StellaAthena#3530: @Purple A 1000x improvement in home computing will not make GPT-3 usable on a desktop |
Daj#7482: 1000x 3090s sound sufficient to me lol
Purple#8913: And if I think about how much RAM I had back 20 years ago. Maybe 64 mb? Now it's like 1000x as much.
Purple#8913: Imagine a few TB of Ram
Purple#8913: I hope the laws of physics won't disable tech from scaling to that level
kindiana#1016: and chrome is still going to use all of it :sadge:
Purple#8913: :arsnicker:
bmk#1476: i mean, a few tb of ram is already typical in servers
bmk#1476: it just needs to get cheap enough for consumer use
Purple#8913: Yes but I mean home PCs 🙂
Purple#8913: And having software that actually needs it. We could totally run big language models on that.
Purple#8913: And then 18 months later it doubles again
Purple#8913: :cwut:
bmk#1476: well, gpt3 will no longer be big in 20 years
bmk#1476: it'll be a smol LM
Purple#8913: Yes but it's so good that it will be fine to use for home use
Daj#7482: Congratulations you have discovered the singularity :foom:
bmk#1476: gpt2 was literally like 2 years ago and nobody likes it anymore
Daj#7482: Please fasten your seatbelt for takeoff and try not to scream too much
Purple#8913: But that's because it's bad
bmk#1476: allow me to introduce you to the hedonic treadmill |
Purple#8913: there comes a point where it's good enough that it's pleasant
Daj#7482: fwiw I think he's probably right
Daj#7482: There is a "usability threshold" for humans that is vaguely constant-ish
Daj#7482: Like with speech recognition
bmk#1476: i think that usability threshold keeps going up
Daj#7482: I don't think it will go up arbitrarily high
Daj#7482: GPT3 is not a wireheading device
Daj#7482: ~~yet~~
bmk#1476: at least for the case of gpt3, i can pretty confidently say that it's flawed enough that in a few years time we'll look back and say "wow, we were impressed by *that*?"
Purple#8913: Even 10.5 years from now it will all be 128x faster
bmk#1476: assuming we still exist in a few years
Purple#8913: If it keeps scaling like that
Daj#7482: Probably also true but I expect a "usable" system to not be _that_ far off
bmk#1476: seems reasonable
Daj#7482: Lets hope the Earth is not liquified into 📎 by then lol
Louis#0144: GPT3 will be usable when you no longer have to cherry pick
Louis#0144: Which is v soon
Purple#8913: Can't wait for the 200B model
Louis#0144: Fwiw when we have that out it won’t be the biggest kid on the block
Louis#0144: I’m sure we’ll have much bigger by then |
alexyz#3459: just put "best of" up lol
FerroMagnetic#6975: It could be possible that even "GPT" wouldn't be the apex in the future
FerroMagnetic#6975: We don't call PCs "SUQGXNIAC"
alexyz#3459: yes, MLPs go brrrr
Purple#8913: NovelAI did stream a test of their tool the other day and it outperformed AIDungeon's dragon model in many ways, apparently. But like I said, even if there are bigger models, the 200B one will be good enough to be entertaining
alexyz#3459: I highly doubt that, I'd like actual evidence
Purple#8913: It's because they tweaked it differently than the dragon model. they used books and such while AID used a lot of fanfic from what I've read. And that didn't help AID's dragon model a lot.
alexyz#3459: because NovelAI's model is only 2.7B, while Dragon is 175B.
alexyz#3459: Again, I'd like evidence
Purple#8913: https://www.reddit.com/r/NovelAi/comments/ngenm7/watched_a_public_stream_demonstrating_the_ai_on/
alexyz#3459: That's just pure opinion
bmk#1476: no novelai discussion please
alexyz#3459: 👍
Purple#8913: Well, the opinion of actual users is what counts in the end, though
FerroMagnetic#6975: Wasn't it "posts with at least such positive score that are linked from Reddit"?
FerroMagnetic#6975: No need to blame fanfics for *common* illiteracy and/or typos
Rina#0391: Hi I am trying to build a gpt-neo interface in python where are the files stored?
Rina#0391: I want to clear my gpt-neo's cache to start fresh
Sid#2121: !faq
Carl-bot#1536: |
Rina#0391: no no the cache
Sid#2121: no no the faq
Rina#0391: I accidently downloaded the wrong module
Rina#0391: oh
Rina#0391: What i did was download the small version and it was 5GB
Rina#0391: where is the cache located
Rina#0391: i need to delete it to get the larger module
Rina#0391: as i have 1 tb
Daj#7482: We don't give tech support
Rina#0391: ..
Rina#0391: oh
Daj#7482: You're probably using Hugging Face
Daj#7482: Ask there
Rina#0391: ok
AI_WAIFU#2844: They are much better equipped to help you
EricHallahan#1051: I recommend reading the documentation as mentioned in the FAQ.
Teemochu#8740: re:foomputer, you can run GPT-3 on a computer that costs about $100k, not sure how fast it will be but it will fit on the GPUs.
Teemochu#8740: A6000 GPUs, other components summing around $10k, then you have about $20k left over for electrical and AC
StellaAthena#3530: There are a bunch of Neo derivatives on HF o.O
https://huggingface.co/models?filter=gpt_neo |
Teemochu#8740: (a residential 240v could provide the power though you may need a custom system plugged into it depending on what exists in that PSU space... you'd also need to be able to cool about 5 kilowatts 24/7)
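(Rough sanity check on the GPU count implied there, assuming fp16 weights and 48 GB A6000s and ignoring activation/KV-cache overhead.)
```python
import math

params = 175e9                                # GPT-3 parameter count
weights_gb = params * 2 / 1e9                 # fp16 -> ~350 GB of weights
gpus_for_weights = math.ceil(weights_gb / 48) # A6000 has 48 GB
print(weights_gb, gpus_for_weights)           # ~350 GB, at least 8 GPUs just for the weights
```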
EricHallahan#1051: O.o
Teemochu#8740: downstream finetunes go brrrrrrrrr :cheemsredeyes:
Teemochu#8740: and the main ones I'm aware of aren't even on there
Teemochu#8740: (the ones by our resident finetuneanon)
Louis#0144: https://louiscastricato.wordpress.com/2021/05/19/on-the-structure-between-narrators-and-readers/
Louis#0144: Blog post for a paper @StellaAthena and I did
Teemochu#8740: also two others but one of them isn't public (yet?) and one of them doesn't exist yet
Daj#7482: There has been some confusion about what the various roles in Eleuther mean; I have clarified that in #rules
nedstark#8047: Is anyone in this group interested in neuroscience?
Daj#7482: No, I replaced my brain with a LSTM earlier this year
Daj#7482: (I'm quite into neuroscience, yea)
Teemochu#8740: you know I replaced my brain with MLPs
nedstark#8047: Oh good. I made a conference poster and no one showed up bc it's a virtual conference
nedstark#8047: My brain turned to mush years ago, so u got me beat
nedstark#8047: Abstract algebra melted my dang brain
Daj#7482: I've been really into Homotopy Type Theory and friends lately, I know the feeling lol
nedstark#8047: Homotopy type theory 🤌 chef kiss
nedstark#8047: Is anyone doing research into that here?
Daj#7482: not directly afaik but we have a few mathematicians hanging around, mostly category theory from what I can glean |
nedstark#8047: I dont know any of it, I never had a reason to learn it.
nedstark#8047: Category theory stuff is going to drive the next wave of innovation in comp sci imho
Sphinx#2092: x
nedstark#8047: There is a great paper where they found a functor from RL to game theory by Jules Hedges
bmk#1476: what is it with category theorists and finding functors in weird places
bmk#1476: first louis' storytelling functor and now an RL to game theory functor??
gwern#1782: look on the bright side, no one could look over from the good posters to see your humiliation in meatspace
Daj#7482: I mean it's kinda like saying "we found a python program that calculates this algorithm"
Louis#0144: me
Louis#0144: i used to do neuroscience
Louis#0144: stopped when i found out about NLP...
Louis#0144: lol
Louis#0144: also 3dprint
Louis#0144: idk where he is tho
Louis#0144: that kiwi
nedstark#8047: I have an epilepsy poster I can share with you guys soon
nedstark#8047: I feel like a functor from RL to game theory is glaringly obvious
FerroMagnetic#6975: ~~Little did you know, you are a functor too~~
nedstark#8047: Whoa
Louis#0144: the only thing I know about epilepsy (and Parkinson's\) has to do with the basal ganglia |
Louis#0144: i dont know if I can provide good feedback
nedstark#8047: The forgetful functor lol
nedstark#8047: Oh idk. I am just a mathematician. I barely know anything about brains
Louis#0144: me too
Louis#0144: :^)
nedstark#8047: What I do is more applied graph theory
bmk#1476: ok mr storytelling category theorist
Louis#0144: LMAO
Louis#0144: ITS REAL MATH
Louis#0144: 😠
nedstark#8047: Category theory in UMAP is dope
nedstark#8047: I was very happy reading that paper
nedstark#8047: My m.s. was in algebra
FerroMagnetic#6975: But neurons and synapses are almost graphs, isn't that what they say
Louis#0144: i did an undergrad in pure math
Louis#0144: and I still feel like I know zero mathematics
Louis#0144: :^)
Louis#0144: precisely 0
FerroMagnetic#6975: Well maybe colored and weighted ones, if you insist
nedstark#8047: Hehehe yeah seriously |
Louis#0144: yo real talk
Louis#0144: why didnt they name gMLP
Louis#0144: ADD
Louis#0144: lmao
Louis#0144: it totally should have been named ADD
bmk#1476: STOP DOING ALGEBRA
Multiplication was not meant to be noncommutative!
Wanted to have weird axioms for a laugh? we had a tool for that, it was called FUNCTIONS
yes please give me an R-module, please just find the fundamental group of a topological space - statements dreamed up by the utterly Deranged
this is real algebra, done by real algebraists with all the number and sets we made for them!
THEY HAVE PLAYED US FOR ABSOLUTE FOOLS
bmk#1476: should i make this into a full on meme? lol
nedstark#8047: Hahahahaha yes
nedstark#8047: I laughed too hard at this
Louis#0144: as an algebraic topologist
Louis#0144: this is like
Louis#0144: a personal attack on me |
Louis#0144: wtf
Louis#0144: fundamental groups AND R-modules?
Louis#0144: mans out for blood
nev#4905: I need this as a picture
bmk#1476: on it rn
Louis#0144: AlexNet is going to be 10 years old next year
nev#4905: https://cdn.discordapp.com/attachments/729741769738158194/845032476145090570/timegoeson_1_1_1.mp4
bmk#1476: @nedstark @Louis @nev
bmk#1476: oh shit wait
bmk#1476: i forgot to replace the bottom thing
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/845034304349536316/stopdoingalgebra.png
Louis#0144: is bigbird any good
nedstark#8047: 🤣 🤣 🤣 🤣 beautiful
FerroMagnetic#6975: https://i.ytimg.com/vi/iTEOadMBKpU/maxresdefault.jpg there's literally a "lie" algebra, should be included
FerroMagnetic#6975: And of course: https://i.ytimg.com/vi/kov9bqva10Q/maxresdefault.jpg
FerroMagnetic#6975: Last but not least https://cdn.discordapp.com/attachments/729741769738158194/845060830646370364/TauFunction_700.png
𓅬 gabriel_syme 𓅬#3220: this is funny but also true. along with compute scaling 1000x, overhead from 1000 sources will scale too. It will obviously be a world different from today, but I doubt that linear improvement happens in everything
alexandrost#2936: Hi!
EricHallahan#1051: Hey! Nice to see you again.
alexandrost#2936: thanks @EricHallahan ! |