EricHallahan#1051: Curious.
Kia#2550: Next year...Idk I just feel it
EricHallahan#1051: Hopefully before January.
Kia#2550: Maybe I thought when DALL-E is out, Gpt4 will be announced and Gpt-neo is just out and about
Kia#2550: But who am I kidding ... the chip shortage will affect ClosedAI and probably slow down their projects
EricHallahan#1051: I would say my current estimation looks like this:
No earlier than August.
Ideally by the end of the year.
Hopefully within a year.
Before the heat death of the universe.
Kia#2550: Hmm Haha true
EricHallahan#1051: It took us around a month to train GPT-Neo Pile 2.6B IIRC.
Kia#2550: But all honesty
Kia#2550: Thank you for your help for providing GPT-neo for the masses/Public
Kia#2550: I hope you get rest and sleep
Kia#2550: And wish you the best for the end of the year and your hard work
EricHallahan#1051: We hope to release a 10B model sooner rather than later (I think before the end of the summer), so stay tuned.
Kia#2550: Wow
Kia#2550: Uh
Kia#2550: Unbelievably big
EricHallahan#1051: Yeah, that is being run out of the multimodal group, so I don't have many details about that. 10B is our estimate of what we expect people with limited resources to be able to fine-tune.
EricHallahan#1051: We haven't started training that yet, but I would say that would be out within six months looking at the progress.
Kia#2550: Ow wow
Louis#0144: No release date
Louis#0144: We do not give dates
EricHallahan#1051: Dates are illegal.
Louis#0144: Yeah
Kia#2550: Wish for the best for the Dev teams
Louis#0144: Ty
EricHallahan#1051: But timelines sure.
Louis#0144: I’m a theorist, not on the dev team but ty anyway
Kia#2550: Also I hope the chip shortage doesn't affect you guys in a way
Louis#0144: Not at all
Louis#0144: Zero effect
EricHallahan#1051: Interesting...
Louis#0144: Wonder why
EricHallahan#1051: Any theories as to why?
Louis#0144: @Daj
Daj#7482: ~~it actually is affecting NeoX a lot lol~~
Daj#7482: I'm sorry this is happening to your server, but that's sort of cyberpunk/cool as hell lmao
Kia#2550: Damn 👀
EricHallahan#1051: Is it? When did they order their A100s?
EricHallahan#1051: But why would they?
Daj#7482: I don't know the details, but buying hundreds of A100s isn't easy atm
EricHallahan#1051: It really depends on *when*.
Daj#7482: May I tweet this because it's cool?
Daj#7482: @-Archivist
Daj#7482: There's just something neat about the idea that someone in china is trying to DDOS our work
Kia#2550: Guys... https://medium.com/syncedreview/chinas-gpt-3-baai-introduces-superscale-intelligence-model-wu-dao-1-0-98a573fc4d70 have you checked the GPT-3 Chinese version?
Kia#2550: DAMN DON'T SHOW
Daj#7482: Maybe it can drum up a bit more donations for the eye
Kia#2550: But yeah...While reading the article
Kia#2550: Ow yeah that
Kia#2550: Nonetheless it's surprisingly low in parameters but has better outputs
Daj#7482: strange
Kia#2550: Maybe they cherry-pick some outputs...But it's promising
mooneater#1086: Hey yo, here from the AID (ai dungeon) discord server
EricHallahan#1051: Welcome!
EricHallahan#1051: You may want to get started in #rules, where we have some information about what we do here.
mooneater#1086: We will watch your career with great interest
Jokes aside, GPT-neo is a very promising project indeed
EricHallahan#1051: If you have questions, you are more than welcome to ask them.
Kia#2550: Ow hey
Kia#2550: Uh
Daj#7482: https://twitter.com/NPCollapse/status/1374704198856671234
@-Archivist
Kia#2550: Any ideas on their Chinese GPT model?
mooneater#1086: Very interested in how you guys would go about handling finetuning, especially for something like AI dungeon
Currently the devs at latitude are using Choose your own adventure and fanfics for finetuning data, which I doubt is that good to be honest 😆
Kia#2550: Are they trying to steal data from you guys?
Daj#7482: Nah this is all public data
Kia#2550: Like personal data?
Daj#7482: Probably trying and failing to bring down the host
mooneater#1086: From what I can gather no, it's just a DDOS
mooneater#1086: Or at least an attempt
Daj#7482: Though I would honestly expect us all to be on watchlists
Kia#2550: Damn watchlists
Kia#2550: Better be safe guys
thenightocean#6100: I guess no more chinese visas for me 😦
mooneater#1086: Have you guys considered using cloudflare's DDOS protection?
EricHallahan#1051: I (personally) don't expect anything in the range 100-200B to be realistically fine-tuneable by most organizations, if at all; I see 10B as the realistic limit for that.
Kia#2550: Wow
mooneater#1086: I see
EricHallahan#1051: But 10B should be plenty for that application IMO, but I'm not too familiar with AID.
Kia#2550: That's a ridiculous amount of finetuning
mooneater#1086: I thought that was GB at first, actually, not B
Kia#2550: I think it's Billion parameters
Kia#2550: Maybe I'm dumb...but that's how I interpreted it
EricHallahan#1051: We believe AGI will be achieved at 1.6 parameters.
Kia#2550: 1.6 of what (I'm actually a dumbass)
EricHallahan#1051: (It is an in-joke)
Kia#2550: Back to me :wojak_despair:
mooneater#1086: The finetuning data latitude used at one point was only 30mb, which I thought was absurd for a model that big
StellaAthena#3530: @Kia is correct, that’s billions of parameters
EricHallahan#1051: I'm not that familiar with the fine-tuning process and the methods and scales involved.
StellaAthena#3530: TBH, it’s a sign we are doing something right
mooneater#1086: Ah
Kia#2550: Thanks...Considering the small version of GPT-neo was much faster in development than Gpt-2 from ClosedAI
Daj#7482: Someone try to get a chinese VISA and see if they get denied
StellaAthena#3530: AFAIK nobody is. Like, has anyone fine-tuned a model > 10B for anything?
Kia#2550: What do they even want from You guys
mooneater#1086: 🤷
Kia#2550: Personal information?
mooneater#1086: no, I highly doubt that
Kia#2550: Maybe other than that
Kia#2550: ...
Kia#2550: They won't actually like GPT-neo bc they're already developing one for themselves
EricHallahan#1051: They want nothing other than us to slow down.
mooneater#1086: Yeah, probs
Kia#2550: Uh
EricHallahan#1051: Which isn't going to happen.
Daj#7482: It might also just be a vendetta against the eye
mooneater#1086: ~~or it's OpenAI ninjas~~
mooneater#1086: I joke
Kia#2550: Weird...But wish for the best
Daj#7482: but it's very :jc: to imagine chinese hackers are trying to slow us down
Kia#2550: They're already developing Chinese GPT version
Kia#2550: ...
Kia#2550: And wanting to slow you down guys
mooneater#1086: A few people in the AI dungeon discord have expressed interest in using GPT-neo instead of GPT-3, myself included
I'm unsure if the devs themselves have commented on it yet though
Kia#2550: They probably have contract with ClosedAI
Daj#7482: We'd love to see people experiment with it
Daj#7482: Currently our biggest model is 2.7B, we plan to have a 10B (Griffin size) model soonish
Daj#7482: Eventually 200B will be bigger than Dragon
Kia#2550: Most people in the AI dungeon Server just actually join the community bc of GPT-3
Kia#2550: Like me
Sid#2121: I'm pretty sure @WAUthethird has been here a while and expressed interest since almost the beginning
Kia#2550: Damn
EricHallahan#1051: TBH I've never used AI Dungeon.
Kia#2550: It's fun and horny...But more on the fun side :wat:
Sid#2121: SHHH lol
Kia#2550: Also Using the App is a bit hard
StellaAthena#3530: We also have academics with similar interests.... @Louis and several members of this lab hang out here http://eilab.gatech.edu/mark-riedl
mooneater#1086: Sorry back, internet went out for a sec
mooneater#1086: Oh yeah I keep forgetting WAU is a dev for AI dungeon
Kia#2550: Ow yeah
mooneater#1086: It'd be nice if AI dungeon ditched OpenAI as soon as possible
Kia#2550: You know switching models is a bit hard...
mooneater#1086: So that microsoft and closedAI stop breathing down latitude's neck
mooneater#1086: It'd be worth it though
Kia#2550: But it's not when patience is in place
EricHallahan#1051: The most experience I have is with Write with Transformers, and that is really the farthest I've gotten in terms of size.
Kia#2550: I can show you some gameplay
Daj#7482: Unfortunately the infrastructure costs are _enormous_
Daj#7482: We'll see how the future develops
EricHallahan#1051: Thanks, but I have other things to do right now for school lol
mooneater#1086: Yep, that's definitely a problem
Latitude's already losing a lot of money right now from all the people who play nonstop lol
Kia#2550: Ow lol
Kia#2550: Take your time
Kia#2550: And wish for the best for you
Kia#2550: Have a great day and stay safe guys...Also be careful of unwanted links
Kia#2550: Cya guys
Louis#0144: Hi
Louis#0144: What’s the question
EricHallahan#1051: :goose:
Louis#0144: Yes
Louis#0144: Honk
Louis#0144: Glad I could help
Louis#0144: 🙂
jrowe#5371: anyone been able to run neo on cpu yet?
jrowe#5371: n/m, #gpt-neox-devs has some comments on that already
EricHallahan#1051: It runs, not fast IIRC.
jrowe#5371: <https://discord.com/channels/729741769192767510/747850033994662000/824048377053577226>
jrowe#5371: im gonna spin up a virtual machine with 40gb ram and 100 storage, would more cpus help or hinder?
EricHallahan#1051: *shrug*
jrowe#5371: experiment time!
jrowe#5371: @aero hey, are you around?
jrowe#5371: Eric, should I just grab @aero 's repo, you think?
EricHallahan#1051: IDK
jrowe#5371: thats where I'll start, will report soon
jrowe#5371: soon ish, lol, downloading the model gonna take a while
jrowe#5371: alright, basic trouble: no module named mesh_tensorflow
jrowe#5371: pip3 install mesh_tensorflow or do i need a particular version?
jrowe#5371: snapshots gonna make this much easier to revert
EricHallahan#1051: Check `requirements.txt`
jrowe#5371: already installed it per instructions
jrowe#5371: hmm
jrowe#5371: i might need to wait on aero so i dont reinvent the wheel
jrowe#5371: afk for lunch and vaccination, hurray
triggerhappygandi#0001: https://tenor.com/view/whyareyougay-uganda-gay-gif-14399349
aero#1357: @jrowe im around now
aero#1357: https://cdn.discordapp.com/attachments/729741769738158194/824378311211614228/message.txt
aero#1357: heres my package list, currently using mesh-tensorflow==0.1.18 installed with pip
jrowe#5371: I'll be back at desktop in about an hour, just had my vax and I have to wait 15 minutes
jrowe#5371: then I gotta stop and grab some ptp radios and cable from a rooftop
jrowe#5371: ty - i cloned your repo on fresh Ubuntu 20.04 install, then pip3 install requirements.txt
jrowe#5371: placed a config and prompt file in the same directory as main.py, and it choked on mtf
aero#1357: pip3 - are you using anaconda or ubuntu's python+pip
jrowe#5371: Ubuntu
aero#1357: ive always had super bad luck with ubuntu's python, try installing anaconda and pip via anaconda
aero#1357: that also lets you install cuda libraries easier
jrowe#5371: alright. this is also a vm, no gpu
EricHallahan#1051: I've never used Anaconda, and I don't ever plan to.
aero#1357: 👀 why it makes things so easy
Daj#7482: also gotta throw my support behind anaconda
EricHallahan#1051: Because most things I do don't need it.
Daj#7482: especially when dealing with CUDA garbage
jrowe#5371: having never used it, how much of a learning curve is there?
EricHallahan#1051: I don't have a GPU so :berk:.
Daj#7482: Anaconda is kinda like super pip
alstroemeria313#1694: I never understand how to use conda
Daj#7482: It lets you install specific versions of python and supporting libraries like CUDA
aero#1357: @jrowe think pip but slower and more packages
Daj#7482: Which is _extremely handy_ if you have complex environments
alstroemeria313#1694: I just use homebrew's Python on my laptop
aero#1357: and virtual environments built in
Daj#7482: Your default install is fine for normal uses
Daj#7482: but if you're developing or doing complex installs, conda is very nice
alstroemeria313#1694: And virtualenvs per-project
aero#1357: thats the best part of anaconda, I have like 7 environments for different projects, some use tensorflow 1.15, others use tensorflow 2.x and anaconda lets you have both easily
EricHallahan#1051: I have never used virtual envs either.
Daj#7482: the most useful thing that virtualenv can't do natively is install different python versions with one click
Daj#7482: or different CUDA versions
alstroemeria313#1694: pyenv
Daj#7482: That's why I said virtualenv lol
Daj#7482: pyenv works
Daj#7482: I used to do the virtualenv + pyenv route
Daj#7482: conda was just easier after the first time I had to compile torch from scratch
Daj#7482: conda also handles e.g. gcc version, CUDA, BLAS implementation etc
Daj#7482: but it's big
alstroemeria313#1694: conda never has the packages i need
aero#1357: use conda for installing python+pip+cuda, use pip to install everything else
they work well together
Daj#7482: I mean, conda installs non-pip stuff
Daj#7482: you still use pip to install python libraries
jrowe#5371: ok, I'll hit you up when I'm back at desk
jrowe#5371: thank you!
Teemochu#8740: Limited resources as in a single 3090? (which is the most you'd probably expect someone to have locally)
Teemochu#8740: Or does this mean ordering a v3-8?
EricHallahan#1051: I think the plan was to have it be possible on a Colab instance.
aero#1357: just make sure to use bfloat16 for 10B 😅
EricHallahan#1051: Resources was meant to be in the sense of monetary expense. Colab, and even Colab Pro, are probably the cheapest way to access compute on the market.
EricHallahan#1051: 11B was our dirty estimate for that.
aero#1357: I wonder how openai is able to offer the full gpt3 to people, things like aidungeon making very heavy use of it. there's gotta be something funky going on
aero#1357: that kind of hardware cant be cheap enough for that kind of load
kindiana#1016: large bs inference isn't much more expensive than low bs
𓅬 gabriel_syme 𓅬#3220: this was a huge thing when it became possible, simply being able to one-click install cuda drivers inside the environment
aero#1357: even cuDNN, without nvidia developer account somehow
jrowe#5371: maybe inference only on cpu is being used?
jrowe#5371: back at my desk
kiwi#6114: O nice
kiwi#6114: @Louis hi
Louis#0144: yo
jrowe#5371: ok, aero - should i revert to my fresh install of ubuntu with git, only?
Louis#0144: get this man a Georgia tech tag @bmk
kiwi#6114: Wait who
thepok#1770: i got it working with cpu on windows and only 16g of ram ;D
thepok#1770: with aeros help
bmk#1476: what
jrowe#5371: I've been doing regular snapshots so i can revert with just a couple minutes between
thepok#1770: ~20 seconds per token
Louis#0144: More invasion
jrowe#5371: @thepok awesome
thepok#1770: aero is the awssome guy to thank
jrowe#5371: starting to smell like mint juleps around here
jrowe#5371: you all speak with a geeawwgian accent?
jrowe#5371: foghorn leghorn gifs are on point 😛
aero#1357: @jrowe fresh install might be safer, cuda libraries really like to break
jrowe#5371: sounds good
jrowe#5371: reverted
jrowe#5371: fresh + git
jrowe#5371: anaconda now?
aero#1357: yeah, just writing up the commands I used to build my env
jrowe#5371: I'm doing this via cli and saving piecewise each line, i'll send them to you when done
jrowe#5371: ack, wait
jrowe#5371: i dont want to redownload the model
jrowe#5371: reverting the revert
aero#1357: then something like
```
conda create --name tensorflow
conda activate tensorflow
conda install python==3.8.5 pip cudatoolkit
# then: pip install tensorflow or tensorflow-gpu
```
aero#1357: you might not need to revert
jrowe#5371: ok
jrowe#5371: getting anaconda going
jrowe#5371: updating, successful install, snapshot momentarily
EricHallahan#1051: `␆ ␀`
jrowe#5371: ␆ ␆
jrowe#5371: ~~␆ ~~🪤
jrowe#5371: ok, anaconda installed, updated, snapshotted, setting up tensorflow environment
jrowe#5371: pip install tensorflow or tensorflow-gpu - no gpu, is tf-gpu for the case someone does have one?
aero#1357: tensorflow-gpu is for gpu, just tensorflow is cpu only afaik
jrowe#5371: perfect
jrowe#5371: ok, I'm done up to that point
jrowe#5371: from there just clone your repo, or clone from eleuther?
aero#1357: up to you, if you want live_output make sure to get the patch-1 branch
aero#1357: then
```
pip install -r requirements.txt
```
but you might want to remove "tensorflow==2.4.0" from requirements.txt
jrowe#5371: so if im in the tensorflow environment, how does having run pip3 install requirements.txt affect me?
aero#1357: dont use pip3, thats the system pip
use just pip. you should see like (tensorflow) something@user ~
at the start of your prompts too
aero#1357: that's anaconda pip and installs in the environment you made
jrowe#5371: right, i mean from before - am i effectively isolated, then?
aero#1357: you _should_ be isolated but it doesnt always work out that way 😅 sometimes it bugs out
aero#1357: always good to keep the base python environment clean (imo)
aero#1357: you could pip3 remove if there are issues
jrowe#5371: cool
jrowe#5371: alright, done
jrowe#5371: afk for meeting
StellaAthena#3530: @aero is your script good to go? Would you mind walking me through it?
aero#1357: live_output? yeah it's in a PR here <https://github.com/EleutherAI/gpt-neo/pull/165>
basically you just add --live_output and it should work. I haven't tried it in jupyter though, im not sure how sys.stdout.flush() works in that case
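(A toy illustration of the buffering point above, not the actual PR code: generated tokens only appear immediately if stdout is flushed after each write, which is what live output depends on.)
```Python
import sys
import time

# Toy sketch: without the flush, buffered stdout may hold the tokens back
# and print them all at once when generation finishes.
for token in ["The", " quick", " brown", " fox"]:
    sys.stdout.write(token)
    sys.stdout.flush()
    time.sleep(0.5)
```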
𓅬 gabriel_syme 𓅬#3220: I totally missed this, did you share the script/walkthrough yet? thanks!
aero#1357: as for "how to get gpt-neo to work on a gpu" that's still not quite ready, jrowe has been figuring that out with me today so once it works for him I can finish putting that together
bmk#1476: may i suggest working on getting local layers working in HF as an alternative?
Daj#7482: HF is already working on that
Daj#7482: don't just tell people doing nifty work to stop doing it lmao
Daj#7482: especially if someone else is already literally doing what you ask for
bmk#1476: ok, ok
aero#1357: im also not familiar with hf at all 😅
thepok#1770: We need a new channel
EricHallahan#1051: For what?
thepok#1770: Gpt inference
thepok#1770: Questions
Daj#7482: No, we don't
Daj#7482: Because we're a project focused discord
Daj#7482: We're not here for tech support
Daj#7482: Waste of valuable dev time
Daj#7482: Happy to help here and there
Daj#7482: But don't wanna encourage it as a norm
thepok#1770: Hmmm
thepok#1770: Is there some place in the internet yet?
bmk#1476: ¯\\_(ツ)\_/¯
Daj#7482: there are some more beginner focused discords in #communities , but yea dunno
EricHallahan#1051: Maybe the subreddit?
Daj#7482: This is an advanced discord for people that wanna work on projects
Daj#7482: ~~and shitpost in #off-topic ~~
Teemochu#8740: Yeah I agree with the current norm that there's *not* a "people who can't read backscroll and just want to use the thing" space here
thepok#1770: But can't send them anywhere else ...
Daj#7482: Sorry, but it's not really our responsibility
EricHallahan#1051: ^
Daj#7482: We do this for fun in our spare time
thepok#1770: Ok ok
Daj#7482: as said, I think people here are usually quite reasonable with answering simple questions
Daj#7482: But I wanna make clear that's not our raison d'être
Kia#2550: Guys what is your final Benchmark for GPT-neo? Like in size
EricHallahan#1051: Ideally match DaVinci in terms of performance.
Kia#2550: Ow cool cool
Kia#2550: Bc I thought you guys would hit the 200B parameters that GPT-3 has...But that's great tbh
Kia#2550: That takes too much time
EricHallahan#1051: If we can get away with less parameters, I hope we would consider that... but yeah, towards 200B.
Kia#2550: What...
StE_gUy#5856: When we say "performance" though, what is the means of measuring that? LAMBADA score?
Kia#2550: That's insanely big...
Kia#2550: And... Takes too much time
EricHallahan#1051: That's DaVinci for you. It is far larger than Curie, and we assume Cushman is somewhere between them.
EricHallahan#1051: We are outperforming GPT-3-XL and GPT-3 Ada by our metrics right now.
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/824454408397389874/Screen_Shot_2021-03-24_at_9.27.24_PM.png
StellaAthena#3530: By a little bit at the same size, yes
EricHallahan#1051: Why are they called "Small" and "Mid"?
EricHallahan#1051: Don't you end up in a cushman situation?
StellaAthena#3530: Hmm
StellaAthena#3530: Maybe it'd be better to give them names parallel to GPT-3's
EricHallahan#1051: That's like marketing 101.
StE_gUy#5856: Do we have any documentation as to what BPB, PPL mean and how they're calculated?
StellaAthena#3530: We can have GPT-Neo XL
EricHallahan#1051: Leave space for product-line expansion.
Teemochu#8740: Also do we know that these benchmarks weren't more-present in the training set than in OAI's?
StellaAthena#3530: And then GPT-Neo Alan
EricHallahan#1051: Bits per byte, Perplexity
StellaAthena#3530: For lambada we do
StellaAthena#3530: Wikitext is more questionable
StellaAthena#3530: GPT-3 doesn’t eval on it because GPT-3 was trained on Wikipedia, as were we
EricHallahan#1051: I suggest calling them out numerically.
Teemochu#8740: So the Lambada ~~ppl~~ perplexity wiping the floor with GPT-2 is fully real?
StellaAthena#3530: Yea
EricHallahan#1051: Yes
EricHallahan#1051: Actually
EricHallahan#1051: :gameryes:
StellaAthena#3530: And the Wikitext isn’t unreasonable. We aren’t beating it way more on Wikitext than we are on Lambada
StE_gUy#5856: @Teemochu do you have any more context/a link?
StellaAthena#3530: Which surprised me, I thought we would
Teemochu#8740: Stella's posted picture
StE_gUy#5856: Sorry for a moment I interpreted PPL as people 🤦♂️
StellaAthena#3530: I think the GPT-3 architecture is just better than GPT-2
StellaAthena#3530: Notice how GPT-3 1.3B also beats GPT-2 1.5B on Lambada
kindiana#1016: also training data
zphang#7252: yeah, more books
kindiana#1016: :books2:
Kia#2550: That's humongously big...And probably needs a lot of time
EricHallahan#1051: A few months at least.
Kia#2550: But nonetheless I think the architecture of GPT-neo is much more efficient and more easily optimized for the specific work it's intended for
EricHallahan#1051: Not really. It is architecturally very similar.
Kia#2550: But...200B parameters...is Big
bmk#1476: nearly identical*
Kia#2550: Ow cool
EricHallahan#1051: I wasn't confident enough to say that.
bmk#1476: it's not completely identical but it's about as identical as we can get using the public info
Kia#2550: But still amazing that in just a few months you guys punched past the 1-billion-parameter mark
EricHallahan#1051: How long did it take to train the smaller model from scratch? two weeks?
bmk#1476: i mean.. 1B hasn't been impressive for a long time
bmk#1476: I think like 3 or 4 but i don't remember for sure
Kia#2550: But *ClosedAI* takes a few years
bmk#1476: er.. what?
EricHallahan#1051: I wasn't around here when it was trained. One took closer to a month definitely.
bmk#1476: I'm somewhat sure that i was around though
EricHallahan#1051: I thought the one took roughly half the time of the other.
Kia#2550: GPT-2 likely took a few years to develop...and GPT-neo just hit the billion parameters in a few months
Kia#2550: Also weeks
Kia#2550: ...?
EricHallahan#1051: Well, we aren't alone?
bmk#1476: well, GPT2 was the first of its kind, we're just following in their footsteps
EricHallahan#1051: :thisup:
Kia#2550: Hmmm make sense
bmk#1476: also i heavily doubt GPT2 was developed over *years*
bmk#1476: one year tops
bmk#1476: and ours has been in development for like 6 months anyways
Kia#2550: Nonetheless fascinating work
Kia#2550: Hmm I hope someone will create a Bread recipe generator with GPT-neo
Kia#2550: But ow well bye guys and have a great day
zphang#7252: all things considered, progress-wise I think GPT-2 was less meaningful than GPT-1
bmk#1476: oddly specific
zphang#7252: people out here trying to solve general intelligence and fractals, and one guy just wants bread recipes. that's wholesome
bmk#1476: holesome
Kia#2550: Because I'm a Baker
Kia#2550: And have a obsession with AI
Kia#2550: 🍞
aero#1357: we will live to see the day where AI perfects bread, what a time to be alive
zphang#7252: will it be the greatest thing since sliced bread
Kia#2550: Hmm true...There's already a bread recipe generator that uses GPT-3...but you know
Kia#2550: Gpt-3
aero#1357: proprietary elon bread
EricHallahan#1051: -- Károly Zsolnai-Fehér
Kia#2550: Not wrong...
Singularity#9001: Let's get GPT-neo to trillion params
EricHallahan#1051: Whoa, slow down, we aren't reproducing Switch Transformers. :berk:
Kia#2550: Damn...But We Need the whole internet as Data/J
StellaAthena#3530: Naw, we already have enough data for that
Singularity#9001: We need to make an entirely new internet that's just a bunch of GPT-neo's talking to each other... they have an entire alternate history that develops
Singularity#9001: We can have some that are actually purposefully trained on less data to represent people who are less informed
Singularity#9001: There are very few of them who are trained on the full dataset, and we can play it out and see what happens
Kia#2550: So AGI...that teaches him self...uhhhhh
Kia#2550: It can work in a way/j
Louis#0144: Ending a review like: “I strongly encourage the authors to not resubmit this work without a thorough rewrite and self reflection.”
Kia#2550: Damn...Great writing
Kia#2550: Also check #off-topic I post some
thenightocean#6100: Shouldnt this be like... bigger news in worldwide ML circles? I mean holy shit, thats amazing!
Sid#2121: did anyone test OA's GPT3 models on the pile? can we fill in the missing pile BPB/ppl sections?
StellaAthena#3530: Yeah, those numbers are in the Pile paper
StellaAthena#3530: Well, some of them are
StellaAthena#3530: GPT-2 has 1.0468 BPB on the Pile
GPT-3 Ada has 0.9631 BPB on the Pile
GPT-3 DaVinci has 0.7177 BPB on the Pile
StellaAthena#3530: I don’t see PPL numbers, though maybe @bmk or @cfoster0 has them somewhere?
StellaAthena#3530: @Sid https://github.com/EleutherAI/gpt-neo/blob/release-patch/README.md
StellaAthena#3530: It was pointed out that a parallel naming scheme to GPT-3 would be clearer to people who don’t have parameter counts memorized. That’s why I called the 2.7B model “Allen” though I don’t have a full naming scheme in mind yet
EricHallahan#1051: > ```GPT-Neo Allen```
StellaAthena#3530: RIP me and knowing how to spell
StellaAthena#3530: I fixed it
EricHallahan#1051: I suggest leaving it at the size, or if you have to have names, trying to knock-off the existing names as much as possible.
StellaAthena#3530: I do still include the size
EricHallahan#1051: Not in the name.
StellaAthena#3530: And the plan would be to keep the “first names of scientists in alphabetical order” theme
StellaAthena#3530: Huh
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/824641228695928872/image0.png
EricHallahan#1051: OpenAI: *Laughs in Cushman*
StellaAthena#3530: Eh whatever
EricHallahan#1051: What model is comparable to 10B?
EricHallahan#1051: Curie?
StellaAthena#3530: Curie is 13
StellaAthena#3530: Babbage is 6.7
StellaAthena#3530: It goes 2.7, 6.7, 13, 175
EricHallahan#1051: Do we have plans for a Babbage sized model? I haven't heard of any plans for that.
StellaAthena#3530: And then cushman is something
StellaAthena#3530: No, we were going to leapfrog that.
EricHallahan#1051: So push directly for Curie, got it.
StellaAthena#3530: A little bigger is the current thinking
StellaAthena#3530: 20B would be the largest trained autoregressive non-MoE transformer after GPT-3 DaVinci
StellaAthena#3530: Ada -> Allen
Babbage -> Bostrom
Curie ->
DaVinci ->
StellaAthena#3530: Names are hard
EricHallahan#1051: (Turing-NLG is 17B)
kindiana#1016: they didn't release any models tho iirc
StellaAthena#3530: @kindiana it’s funky. NVIDIA didn’t release a model, but Facebook did
kindiana#1016: fb released a 11b one trained with megatron-lm
kindiana#1016: that's the largest released I believe
StellaAthena#3530: Yes
StellaAthena#3530: If you drop the qualifications then the fb multilingual translator is bigger, but that’s fundamentally a different kind of transformer
kindiana#1016: you can drop non-moe without changing anything?
kindiana#1016: I don't think there's a bigger moe transformer that you can download
EricHallahan#1051: If you want a one to one relationship then maybe just call it GPT-Neo XL, GPT-Neo A, GPT-Neo C
kindiana#1016: not a fan of names tbh just call it by the parameter count lol (and append the parameter counts to the openai ones for people who don't remember)
EricHallahan#1051: I don't like naming things sequentially when they are not best expressed that way.
EricHallahan#1051: How many products out there follow the system of `3, 5, 7`?
EricHallahan#1051: A lot.
StellaAthena#3530: I’m definitely not going to recommend dropping the parameter counts. If you check the readme I list the parameter counts for every model
StellaAthena#3530: @EricHallahan “GPT-3 Ada (2.7B)"
StellaAthena#3530: Or it can say GPT-3 2.7B (Ada) if people prefer. I’m not attached to the ordering
kindiana#1016: I'm saying the "alan" for the corresponding neo model doesn't provide much value, given you have to give parameter counts for both anyways
StellaAthena#3530: Ah
Sid#2121: from what i can remember BPB is just ppl * some_constant lmao
kindiana#1016: yeah exp(bpb * avg tokens per byte)
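(A rough sketch of that conversion, assuming the reported loss is in nats per token; the bytes-per-token figure below is made up purely for illustration.)
```Python
import math

# bits_per_byte = (loss_per_token / ln 2) * (tokens / bytes)
# perplexity    = exp(loss_per_token)
# => perplexity = exp(bits_per_byte * ln 2 * bytes_per_token)
def ppl_from_bpb(bpb: float, bytes_per_token: float) -> float:
    return math.exp(bpb * math.log(2) * bytes_per_token)

print(ppl_from_bpb(0.7177, 4.0))  # hypothetical ~4 bytes per token
```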
EricHallahan#1051: Let's see...
Intel: i3, i5, i7, i9
AMD: Ryzen 3, 5, 7, 9 (Copying Intel), Radeon R5, R7, R9, RX
BMW: M1, M3, M5, 7 Series
iRobot: i7, i9
EricHallahan#1051: That is why product SKUs are separated by so much a lot of the time.
Louis#0144: Wild
AI_WAIFU#2844: I think you should just name them based on the number of parameters. GPTNeo-2.7B Way less confusing
EricHallahan#1051: It looks like that is the overwhelming opinion here.
Louis#0144: I wonder why they did a 10x increase tbh, is it literally just for the name that they have the biggest? They put such a big gap between them and any competitors. Even their own results showed diminishing returns going to 175
AI_WAIFU#2844: OAI coming up with 10 different names for their GPTs and calling the biggest one GPT-3 is madness.
AI_WAIFU#2844: especially when you get stuff like GPT-3 XL being much smaller than GPT-3
EricHallahan#1051: Cushman :guilty:
StellaAthena#3530: Okay, I dropped it
EricHallahan#1051: I like the concept, but there are a lot of downsides.
jrowe#5371: Ada -> Mouse
Babbage -> Switch
Curie -> Trinity
DaVinci -> Architect
(1TorBust) -> Oracle
jrowe#5371: sticking with The Matrix theme
jrowe#5371: might coincide a release with Matrix 4?
jrowe#5371: and then API subsections could get other designations like Merovingian and The Trainman, etc
EricHallahan#1051: We pretty much decided that it isn't worth the potential hassles of giving them names. OpenAI readily demonstrated that it becomes messy quickly.
EricHallahan#1051: The only benefit is in the marketing, and that really isn't a concern for us.
jrowe#5371: true
EricHallahan#1051: I wanted to call XL -> LX
EricHallahan#1051: But then again, what's the point of that?
EricHallahan#1051: Not much.
jrowe#5371: projected equivalency, but that's marketing again
jrowe#5371: ok, sanity check - I have a directory, gpt-neo in my root folder, so specifying the model location goes like:
~/gpt-neo/blah/blah
right?
EricHallahan#1051: I would think.
jrowe#5371: ty
jrowe#5371: hmm, how do i specify config_name?
python main.py --predict --prompt 'example_prompt.txt' --model 27b.cfg
jrowe#5371: it keeps trying for 27b.cfg.json
jrowe#5371: should i just overwrite whats in configs folder?
jrowe#5371: err, not overwrite - add my config to*
jrowe#5371: yay
EricHallahan#1051: \*insert Kermit yay\*
StellaAthena#3530: @jrowe yes, you need to add your config file to the config folder. That’s where the code looks for it
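(A minimal sketch of what that ends up looking like, with a hypothetical config name; `--model` takes the config's file name without the `.json` extension, so the file has to live in `configs/`.)
```
cp my_27b_config.json ~/gpt-neo/configs/27b.json
cd ~/gpt-neo
python main.py --predict --prompt example_prompt.txt --model 27b
```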
jrowe#5371: 👍
jrowe#5371: what do you recommend for permissions ? getting permission denied, https://pastebin.com/9WG4HUYL
StellaAthena#3530: That’s weird
StellaAthena#3530: That’s not the usual permission denied error
StellaAthena#3530: What permissions are being denied exactly?
StellaAthena#3530: It looks like writing to the Colab?
jrowe#5371: its a local setup, not for colab
jrowe#5371: one sec, i think im missing something
StellaAthena#3530: Well, it looks like the permission you lack is writing to a file.
jrowe#5371: https://pastebin.com/su0i9DXb
jrowe#5371: might be cpu related and my not having a package
jrowe#5371: i gotta get the cpu binary properly set up, possibly
jrowe#5371: it tries to write a log to the tensorflow directory
StellaAthena#3530: Yeah, this looks like a problem on your side
jrowe#5371: yup
jrowe#5371: btw, anyone wanting to follow along, here's the instructions so far: https://pastebin.com/hU686kZM
Fresh Ubuntu install to running on cpu
jrowe#5371: no cuda driver
jrowe#5371: ill wait til its working, then repeat it, then share it again
aero#1357: 👀
jrowe#5371: cuda driver permission issue failed call to cuInit: UNKNOWN ERROR (303)
jrowe#5371: sup aero, sorry about last night - ended up stuck on a work thing
aero#1357: all good 😄 I was busy in tbc beta anyway
Are there any .so load errors in the log?
jrowe#5371: i didnt have the cuda driver installed, 20 minutes left with that lol
aero#1357: oh jeez
jrowe#5371: bwahaha
aero#1357: theres a nvidia PPA you can add which makes it easier
aero#1357: forget what that is though
jrowe#5371: its all good - added to the setup instructions |
jrowe#5371: i want to make all the dumb mistakes, it helps people later on
aero#1357: for inference on CPU, 2.7B does work but @thepok just found out the hard way that the 1.3B model doesn't, since it's correctly trained with bfloat16 (that doesnt work on cpu)
aero#1357: still haven't been able to convert 2.7B to bfloat16, mesh tensorflow seems to make that a lot harder. At least with the things I tried, it wasn't cooperating
thepok#1770: i may or may not have made another error
thepok#1770: not an expert
thepok#1770: ill post the errorlog one moment
jrowe#5371: ok, how do i specify cpu?
jrowe#5371: getting this:
tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
thepok#1770: log of small model https://cdn.discordapp.com/attachments/729741769738158194/824680283549990922/message.txt
jrowe#5371: @aero , @thepok - do i need to use a flag or modify a config anywhere to run on cpu?
thepok#1770: i installed tensorflow
thepok#1770: thats the cpu one
thepok#1770: and theres tensorflow-gpu
thepok#1770: and i hide my (too old) gpu with
thepok#1770: os.environ["CUDA_VISIBLE_DEVICES"]="-1" in main.py
aero#1357: yeah if you have tensorflow-gpu installed you can prevent it from seeing your devices with (then it will fall back to cpu)
CUDA_VISIBLE_DEVICES="" python main.py ...
EricHallahan#1051: Can you cast the weights to single-precision floating point? |
thepok#1770: i dont know how
EricHallahan#1051: `:\`
thepok#1770: in the config?
EricHallahan#1051: No, I would presume it needs to be a script. It should be pretty trivial.
EricHallahan#1051: If you understand the mess which is TF.
thepok#1770: well no
thepok#1770: i am a c# guy
thepok#1770: there everything simply works ;D
EricHallahan#1051: Yeah, I would think casting everything to single-precision floating point would fix it.
thepok#1770: so load it cast it save it
thepok#1770: shouldnt it be possible to load it cast it use it
EricHallahan#1051: What is the best way to download the weights?
thepok#1770: torrent
aero#1357: I was trying that earlier, but I kept getting C-level errors about type mismatches from mesh tensorflow, probably did something wrong
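(For reference, a rough and untested sketch of the kind of cast-and-resave script being discussed; paths are placeholders, and as aero notes the mesh-tensorflow graph may still complain about dtype mismatches even after the checkpoint itself is converted.)
```Python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Untested sketch: rewrite a checkpoint with every bfloat16 tensor cast to
# float32, keeping the original variable names. Paths are placeholders.
ckpt_in = "/path/to/model.ckpt"
ckpt_out = "/path/to/model_fp32.ckpt"

reader = tf.train.load_checkpoint(ckpt_in)
with tf.Graph().as_default(), tf.Session() as sess:
    new_vars = []
    for name, _ in tf.train.list_variables(ckpt_in):
        value = reader.get_tensor(name)
        if value.dtype == tf.bfloat16.as_numpy_dtype:
            value = value.astype("float32")
        new_vars.append(tf.get_variable(name, initializer=value))
    sess.run(tf.global_variables_initializer())
    tf.train.Saver(new_vars).save(sess, ckpt_out)
```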
thepok#1770: at the weekend i have some spare time ill look into it and learn a lot ;D
jrowe#5371: ok, what version of cuda do i want?
jrowe#5371: just latest?
EricHallahan#1051: Are you running CPU?
jrowe#5371: yes
jrowe#5371: trying to anyway |
EricHallahan#1051: Why do you need CUDA?
jrowe#5371: <https://pastebin.com/tSprQawU>
jrowe#5371: just trying to troubleshoot from the errors I'm seeing
EricHallahan#1051: You shouldn't install CUDA, it is saying that there is no GPU.
jrowe#5371: right, and I've specified CUDA_VISIBLE_DEVICES=-1
jrowe#5371: so it should ignore them
EricHallahan#1051: What version of TF are you running?
thepok#1770: CUDA_VISIBLE_DEVICES="-1"
jrowe#5371: yes, CUDA_VISIBLE_DEVICES="-1" is the exact line
jrowe#5371: tensorflow 2.4.0
EricHallahan#1051: Oh, that explains why you are interested in Windows.
EricHallahan#1051: You running `tensorflow-gpu`?
jrowe#5371: nope
EricHallahan#1051: :thonk:
thepok#1770: the error looks strange
thepok#1770: it wants to write a log i think
thepok#1770: in the models folder
EricHallahan#1051: I agree.
EricHallahan#1051: What are the permissions on the ~~folder~~ directory?
aero#1357: nvidia kernel driver can only load if you have a nvidia device right? I dont think tensorflow requires it |
jrowe#5371: thats the tf log directory trying to log the first error
jrowe#5371: ~~is there a way to check if tensorflow-gpu is installed somehow?~~
jrowe#5371: pip show tensorflow-gpu
WARNING: Package(s) not found: tensorflow-gpu
EricHallahan#1051: ```2021-03-25 10:37:30.761580: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0```
:thonk:
thepok#1770: pip show tensorflow ?
thepok#1770: mine says (GTP237) F:\GPT\AERO\gpt-neo>pip show tensorflow
Name: tensorflow
Version: 2.4.0
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: [email protected]
License: Apache 2.0
Location: d:\programme\anacondapy3\envs\gtp237\lib\site-packages
Requires: gast, flatbuffers, absl-py, termcolor, opt-einsum, keras-preprocessing, typing-extensions, tensorboard, numpy, tensorflow-estimator, grpcio, google-pasta, six, wheel, h5py, astunparse, protobuf, wrapt
R
thepok#1770: and that one works
aero#1357: you have an nvidia device though (you have the driver loaded) |
that kernel driver error might just be a warning though.
the permission denied bit is weird
did you clone on another user or with sudo?
thepok#1770: i too think it may be an ignorable error. just fix the log error
jrowe#5371: basically the same tensorflow
thepok#1770: make the "models" folder writable
jrowe#5371: alright
thepok#1770: if your model is there .... 😉
jrowe#5371: done, same error
aero#1357: ```Python
tf.summary.FileWriter(f"{logdir}/config", sess.graph)
```
the error is when its trying to create the log dir, IMO: re-clone in a new folder you definitely own
aero#1357: like mkdir ~/gpt/ && cd ~/gpt/
thepok#1770: i as a windows user say, run it as sudo ;D
thepok#1770: 😛
jrowe#5371: i cloned aero's repo
thepok#1770: good luck have to go now "Ill be back"
jrowe#5371: should i create a new anaconda environment? |
aero#1357: its not related to that I dont think, you dont have write permissions for the log dir which means something is messed up
aero#1357: permissions can be hell better to just do a fresh clone in a folder you are sure you own
jrowe#5371: chmod 777ed it
jrowe#5371: fuck it, lets do it live!
aero#1357: just make sure to get the subdirs too 🙈
jrowe#5371: -R 4tw
jrowe#5371: whats your model_path ?
aero#1357: should point to the folder with all the .ckpt files
jrowe#5371: "Could not find trained model in model_dir"
aero#1357: do you have read permissions? 😅
jrowe#5371: problem with my path, could you show me yours so i can adjust?
jrowe#5371: "~/gpt-neo/models/the-eye.eu/eleuther_staging/gptneo-release/GPT3_2-7B" is wrong
aero#1357: "model_path": "/home/aero/mnt/munt/HData/gptneo/fp16"
you might need the full path sometimes python doesnt like ~ in my experience
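(On the "~" point: tilde expansion is a shell convention, so a path taken from a JSON config is used verbatim unless something expands it explicitly. A quick sanity check, with a hypothetical path:)
```Python
import os

# "~" is only expanded if you do it yourself; otherwise TF treats it as a
# literal directory name and fails to find the checkpoint.
print(os.path.expanduser("~/gpt-neo/models/GPT3_2-7B"))
```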
jrowe#5371: there we go
jrowe#5371: now getting a config error, almost working
jrowe#5371: <https://pastebin.com/rB8aHCdZ>
aero#1357: you need to adjust your mesh shape |
aero#1357: "mesh_shape": "x:1,y:1",
jrowe#5371: seems like maybe its working now lol
jrowe#5371: 8 cpus and 64gb memory, looks like its working now
jrowe#5371: so should i have specified a stop condition, or will predict run and then end on its own?
jrowe#5371: 20ghz cpu and 40gb ram consumed,
jrowe#5371: predict batch size 8
aero#1357: are you using --live_output?
aero#1357: 20ghz 👀
EricHallahan#1051: 1.6 parameters @ 20 GHz
jrowe#5371: yeah, compute cluster with a vm
jrowe#5371: ~8* 2.4ghz
jrowe#5371: set a batch of 1, restarted, 25 minutes running so far - gonna leave it through lunch and see
jrowe#5371: cpu is really slow hah
aero#1357: my i7 6700k generates at about ~2.5 seconds / word
jrowe#5371: it should be at around 600 words
jrowe#5371: hmm, live output definitely needed, it might be in a nonsense loop or something
jrowe#5371: alright, victory!
jrowe#5371: There are many meaningful things in life, but the most important are:
output:
language and priorities. |
A lot of people confuse God and religion. religious people believe that God exists. God has appeared to them in their dreams...
jrowe#5371: 4-5 seconds per word
thepok#1770: Great now put it in an install script
jrowe#5371: yes soon
jrowe#5371: lunch now, also work after, got some switches and radios to turn up today
StellaAthena#3530: @jrowe @aero How is it going?
aero#1357: last I heard everything was working for jrowe, im a bit locked up at work today
StellaAthena#3530: *\*what kind of scrub lets work get in the way of science\**
Louis#0144: fight them
Louis#0144: do it
Louis#0144: coward
jrowe#5371: everything is working, but i also have actual work lol
StellaAthena#3530: No worries lol
StellaAthena#3530: I do too, I'm just not doing it.
jrowe#5371: I need to speed it up, I think, 4-5 seconds per word
StellaAthena#3530: Even a proof-of-concept would be valuable to add to the repo IMO.
StellaAthena#3530: That said, yeah 4-5 seconds per word is slow
StellaAthena#3530: Just wanted to check in and see what cool stuff y'all're up to 🙂
jrowe#5371: i have the line by line setup instructions, gonna redo things from scratch so I dont inflict people with permissions issues |
jrowe#5371: chmod 777 'ed the whole repo to skip troubleshooting, but everything else is clear
jrowe#5371: https://pastebin.com/Z2LEXNKD
jrowe#5371: good output!
jrowe#5371: just a little preachy on this run, but thats ok
jrowe#5371: i love the misspellings
jrowe#5371: looks like it picked up from some transcribed sermons somewhere in The Pile
ersatz#0001: is the notebook broken?
mkualquiera#3484: ```God is so much happier than in-love people. God is so much happier than
people who love someone too much.``` :guilty:
jrowe#5371: lol
jrowe#5371: "God doesn't care about other people's happiness. God is so much happier
than happy people."
mkualquiera#3484: HAHA
jrowe#5371: theres a whole southern preacher vibe going on
Spy#9778: I just got GPT-2 1.5b training working on my 24 GB GPU with adam
Spy#9778: 🎊
Spy#9778: JAX is OP
Ward#1738: GPT-3 Powers the Next Generation of Apps https://openai.com/blog/gpt-3-apps/
trigger757#1830: Could someone please explain the differences between neo vs neox (It says in neo that you can train etc on GPU as well as TPU). I get it that neo is on tensorflow mesh and neox on megatron.. but the big models' released checkpoints only work with the neo, not the neox.. why the new neox?
Sid#2121: tensorflow bad, pytorch good |
trigger757#1830: Ok, so just to make it easier to maintain code then... got it
Sid#2121: well, it's not just that. We didn't have enough tpu compute to train a GPT3, then coreweave came along and offered us a ton of GPUs. Mesh tensorflow is untested with GPUs and we wanted to integrate some stuff from deepspeed, so we moved over to torch / deepspeed instead
trigger757#1830: Yeah, but I guess it wouldnt be so hard getting the mesh to work with GPUs instead of changing core framework, which seems like a lot more work. I still get it, I use pytorch every day 😉
EricHallahan#1051: Well mTF still isn't compatible with DeepSpeed.
Kia#2550: Yeah...probably not in the near future
Sid#2121: mesh tensorflow's parallelism strategy is a bit brute force, using torch makes it much easier to 1) integrate improvements from elsewhere and 2) have more manual control over how you parallelize the model
Kia#2550: ClosedAI...is still closed
Sid#2121: we have a pretty small team of devs so, it's important to be able to get things done quickly
Sid#2121: changing framework actually went pretty smoothly, at least compared to building the initial mtf codebase :berk:
trigger757#1830: Ok, Then I get it 😉 Since I suspect most devs will move over to neox.. is there a way of using the checkpoint models on the neox (I guess not, since they were being run on tensorflow, right)?
trigger757#1830: (I guess I could pull out the weights and get it into pytorch)..
trigger757#1830: Should have done my homework, 4 anyone else: https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28
EricHallahan#1051: In an ideal world, I would hope that most people will not need to touch the NeoX codebase and will simply use another, more inference-focused codebase with a more intuitive API (e.g. Hugging Face).
trigger757#1830: Of course, personally I am interested in seeing if it is possible to optimize the adjoint differentiation for improving training speed
StellaAthena#3530: What do you mean “most devs”? There’s between five and ten of us depending on what exactly you’re counting
EricHallahan#1051: Most people who use it externally.
trigger757#1830: It was an answer to Eric, yeah like he said external devs
StellaAthena#3530: We have... two? external devs
trigger757#1830: Ok... most people who are working as developers will most probably use neo from hugging face, basically just importing it, and don't need to change anything inside neo's repository..
EricHallahan#1051: Well it isn't in HF. |
EricHallahan#1051: Right now.
StellaAthena#3530: What you’re saying makes sense, I’m just pointing out that this is an *extremely* small outfit of people working in their free time.
StellaAthena#3530: We don’t have a user base 😛
trigger757#1830: I know but we are talking about the future right..
trigger757#1830: I know
trigger757#1830: So anyway.. as mentioned, inside the small group of developers who are focusing on neo, I guess they will mostly move over to neox (Since that is what the next features probably will come to, like zero 3 etc).. then I thought it was a little weird that the models just released were for neo (note without x). Hence, is someone working on converting the models?
Sid#2121: they'll be converted into a huggingface model but not into neox, no. The models are a little different and it's not really worth it, we'd rather just train new ones.
trigger757#1830: Ok got it
trigger757#1830: Thanks for clarifying
chilli#5665: How does jax help with this :thonk:
Spy#9778: JIT is overpowered
Spy#9778: in particular donate_argnums saves memory
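(A minimal toy sketch of what donate_argnums buys you, with a made-up linear model: donating the params argument lets XLA reuse those buffers for the updated params instead of keeping both copies alive.)
```Python
import functools

import jax
import jax.numpy as jnp

# donate_argnums=(0,) tells XLA it may reuse the memory of `params` for the
# output, roughly halving peak parameter memory inside the step.
@functools.partial(jax.jit, donate_argnums=(0,))
def train_step(params, x, y, lr=1e-2):
    def loss_fn(p):
        pred = x @ p["w"] + p["b"]
        return jnp.mean((pred - y) ** 2)
    grads = jax.grad(loss_fn)(params)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = {"w": jnp.zeros((16, 1)), "b": jnp.zeros((1,))}
params = train_step(params, jnp.ones((8, 16)), jnp.ones((8, 1)))
# After the call, the old params buffers are donated and must not be reused.
```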
zphang#7252: https://twitter.com/kearonis/status/1375200936021393408
Spy#9778: like I can't run the model without jitting it
Spy#9778: since it runs out of memory
kindiana#1016: :thonk: its like 1.2 tokens per word not 4 words per token
Daj#7482: Hanson rn: :guilty:
zphang#7252: lol I'm more interested in the openai api usage stats
chilli#5665: Shouldn't make that big of a difference 🤔
Spy#9778: could be idk |
Spy#9778: I have been unable to run GPT2-XL from huggingface on the GPU
Kia#2550: It's still considerably high in my opinion...
trigger757#1830: Perhaps this fix for the memory issue with gpt2-xl here, or just that you have less than 33 gb video ram
trigger757#1830: https://github.com/huggingface/transformers/issues/7152
Spy#9778: I have 24 GB
Spy#9778: and yeah it's without checkpointing in either the jax or tf version
trigger757#1830: I meant the part about testing this from the link ”Therefore, I switched to the RMSprop since its memory requirement is much smaller. ”
Spy#9778: oh yeah but
Spy#9778: my point was using adam
Spy#9778: which is indeed pretty expensive
Spy#9778: I could probably do SGD on 24 GB
jrowe#5371: are there any options in the config file that i can change to make cpu run more quickly?
trigger757#1830: Ok got it, love adam.. I wonder who that guy is 😉
jrowe#5371: can only get it working on one of 8 cpus, would "mesh_shape": "x:1,y:1", have any impact?
EricHallahan#1051: I assume that is for topology?
...
*No Louis, this isn't that kind of topology!*
jrowe#5371: lol
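(On the one-busy-CPU observation: mesh_shape describes how the model is split across devices, not how many threads TensorFlow uses. The thread-pool knobs below are TensorFlow's own settings, shown for reference only; whether main.py exposes them is a separate question, and the values are illustrative.)
```Python
import tensorflow as tf

# TensorFlow CPU thread pools; these must be set before any ops run.
tf.config.threading.set_intra_op_parallelism_threads(8)  # threads within a single op
tf.config.threading.set_inter_op_parallelism_threads(2)  # ops executed in parallel
```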
trigger757#1830: Could I ask what's the usage on the cluster you got.. are you basically calculating with all GPUs with as many FLOPS as possible.. or are there usually a couple of them free if I for example wanna do some finetuning or a new training etc?
trigger757#1830: (I know its just for members) 😉 |
jrowe#5371: ok, it works
jrowe#5371: <https://pastebin.com/XpNmWDC4>
jrowe#5371: run on cpu, works on fresh install of ubuntu
EricHallahan#1051: What are the requirements?
jrowe#5371: anaconda, git, python 3.8,
jrowe#5371: ssh for convenience
jrowe#5371: everything else is default gpt-neo repo
jrowe#5371: so other than anaconda, i dont think it requires anything more on top?
EricHallahan#1051: I meant in terms of hardware, but that is also useful.
jrowe#5371: its super slow, im thinking because virtualized, 96gb ram changed nothing - after about 40gb it stops slurping it down
jrowe#5371: 2.4ghz "virtual" 8 core , but ends up running on only one during generation, probably something between tensorflow and how virtualization is implemented
jrowe#5371: Prompt
There are many meaningful things in life, but the most important are
Output:
most likely invisible. We learn what it is not by thinking about what we
can see, but by careful focus. Einstein has shown us that light doesn't
pass through material objects; we discover this by focusing on what we can
see and ignore what we can see not-see.
Therefore, we should all become students of Zen. |
jrowe#5371: I dont want not-see Zen
jrowe#5371: it's now generating Dogens Third Mindfulness Meditation lol
jrowe#5371: I highly prefer Zen Neo to southern preacher Neo
mkualquiera#3484: They both say their share of curious statements tho :berk:
mkualquiera#3484: > Einstein has shown us that light doesn't pass through material objects
jrowe#5371: don't harsh his vibe! he's like...chill, man.
inox#5400: https://twitter.com/colinraffel/status/1375186049081741321?s=20
ethan caballero#6044: ^I wonder if this is where dario will announce what dario.agi startup does? Dario is a speaker at this.
zphang#7252: oh that's an interesting take
Aran Komatsuzaki#5714: yeah dario's mom was saying something like that
Sid#2121: i can't tell if you're making a mum joke or if you've actually spoken to dario's mum
Aran Komatsuzaki#5714: i was referring to the fact that we had a user named "dario's mom" a while ago lol
Sid#2121: oh, i missed that lore
nz#9710: *dario's mom has escaped containment*
StellaAthena#3530: https://twitter.com/gwern/status/1375248981677244417
bmk#1476: wouldnt be surprised. does this mean, though, that their hardware is all hogged up by this and so they dont have resources to train an even bigger model (barring major improvements in efficiency)?
bmk#1476: wait, shit, i think OA might actually have a "major improvement in efficiency" up their sleeves
bmk#1476: that would be kinda :firealarm:
mkualquiera#3484: How much did gpt3 cost to train?
bmk#1476: less than the number that comes up when you google "How much did gpt3 cost to train?" |
bmk#1476: significantly less
bmk#1476: how much less? ¯\_(ツ)_/¯
mkualquiera#3484: Because they might have already made that back and thus can use the surplus to train a bigger model ig
mkualquiera#3484: I mean I don't know if OAI own any actual machines for inference
mkualquiera#3484: They probably just have a different company do that and all the scaling that having a public service entails
chilli#5665: well, we know that OAI has been optimizing the inference
chilli#5665: there's a lot of stuff that could have done to their models
kindiana#1016: how do we know :thonk:
chilli#5665: quantize them
chilli#5665: sparsify them
chilli#5665: I thought they've talked about it
chilli#5665: they've definitely hired for it :thonk:
kindiana#1016: well I'm out of the loop lol
ethan caballero#6044: https://i.kym-cdn.com/entries/icons/mobile/000/028/740/Screen_Shot_2019-02-27_at_2.52.06_PM.jpg
Louis#0144: You know what would be really fun
Louis#0144: Eleuther debate panel
Louis#0144: Just so we can bully Leo tho
Louis#0144: No other reason
Louis#0144: (Jkjk, doing a debate panel on multimodal stuff or alignment stuff could be cool)
bmk#1476: why me tho |
zphang#7252: if it's like the previous virtual *CLs, you can submit a proposal for an Alignment Social
bmk#1476: how would that work
bmk#1476: also would anyone show up other than eleuther people? lol
zphang#7252: I think it varies from event to event since virtual conferences are still evolving
zphang#7252: actually maybe I was thinking of the ML conferences, but same deal
zphang#7252: https://iclr.cc/Conferences/2020/CallForSocials
bmk#1476: but at that point why bind it to iclr instead of just doing our own thing?
bmk#1476: it's not like anyone who might want to come to an alignment social would only come if it was affiliated with iclr, right
zphang#7252: the benefit would be discovery: it'd be listed on the socials page
bmk#1476: yeah, but i have a feeling that very few alignment researchers would look at the socials page and also not at, say, LW
StellaAthena#3530: Hi @!🔞LoveOSGames🔞! Welcome
Kia#2550: Hi
Kia#2550: Stella is a Dev
Kia#2550: Maybe they can help you
bmk#1476: we might need some help with full stack with EEGI in a while (not sure when exactly)
bmk#1476: see #deleted-channel for more info, though right now not much is happening
bmk#1476: https://docs.google.com/document/d/1n8ALlG5F3EQ37-8j35YQSX1vhcj6jNOCp24pMXitlwo/edit?usp=sharing here's the document
bmk#1476: awesome
bmk#1476: @Daj @kip are the main people to talk to
bmk#1476: i'm not sure of the status of the project rn |
Kia#2550: Still interesting you guys have time to talk even when (not always) there are things to do
StellaAthena#3530: There will be, I just need to sit down and set it up
Kia#2550: Have fun working tbh...Or take time in some way
StellaAthena#3530: Have you used the Colab notebook yet?
bmk#1476: EEGI is in super early stages afaict
bmk#1476: kip has some code from a different project that's being reused i think but it still needs major modifications
bmk#1476: anyways yeah probably ask them for more info
Louis#0144: @!🔞LoveOSGames🔞 welcome to the club. I don’t think we have had a dedicated web dev, just someone who does it on the side? I’m not sure
Louis#0144: Anyway you’re more than welcome to join on research projects as well
Louis#0144: Tons of work to do
Louis#0144: Nw
Louis#0144: Out of curiosity has anyone proposed HCI or UX research here
Louis#0144: I don’t think so
Louis#0144: Right?
Louis#0144: @bmk I feel you’d know?
bmk#1476: nope
bmk#1476: nobody has done it yet
bmk#1476: here
Louis#0144: Hmmm ok
bmk#1476: why would we do HCI? |
bmk#1476: at most we'd use results from other HCI people to help us design EEGI experiments or something
Louis#0144: Because hci could be a key part of empirical alignment work
bmk#1476: but the UX itself isnt really our focus
bmk#1476: are u saying u wanna do it?
Louis#0144: No
Louis#0144: I’m just saying it seems interesting
Louis#0144: I’m too busy
𓅬 gabriel_syme 𓅬#3220: had a great presentation about Space Industry/Travel the other day, a lot of interesting HCI work in the field
bmk#1476: @Napolean_Solo i don't think gptneo is what you're looking for
Napolean_Solo#2907: I have seen some startups using BERT models for various language related tasks like I mentioned
Napolean_Solo#2907: Also folks at openAI have made some models of GPT-3 available and claim that they are production ready.
bmk#1476: what are you trying to do
bmk#1476: what do you want
bmk#1476: why do you need to use gptneo and not bert
bmk#1476: if you can't really give an answer to these questions, gptneo is probably not what you need
cfoster0#4356: Anyone who claims to sell you something prepackaged as production ready is likely lying to you
Napolean_Solo#2907: Hmm I have access to the private beta of OpenAI's GPT-3. But they aren't allowing fine tuning of that model on our own data yet. So I read a tweet that said your model can be fine-tuned on our own data.
Napolean_Solo#2907: That's why I reached out to you guys
bmk#1476: @Napolean_Solo so this is for businesses purposes?
Napolean_Solo#2907: Most likely yes |
jrowe#5371: <https://pastebin.com/XpNmWDC4>
you can try it out on cpu. 30gb model download, takes several hours. instructions unsupported.
Napolean_Solo#2907: Folks at OpenAI have found a lot of creative ways that help you achieve the same type of accuracy that a fine tuned model does without actually fine tuning it
bmk#1476: yeah i don't think we can help you too much
bmk#1476: we've already put everything out there that we can
Napolean_Solo#2907: Yeah I really appreciate what you guys have been doing
Napolean_Solo#2907: Haha you gotta apply for their invite and mention a use case
Napolean_Solo#2907: But folks at OpenAI have been working very hard to make it production ready and trying out new things with GPT-3 like they recently launched a new model called instruct series that allows you to do a lot of tasks with zero shot learning
Napolean_Solo#2907: You should really apply for their invite
Napolean_Solo#2907: As of now only 45k people have access to their models
Napolean_Solo#2907: Anyway I guess BERT is the closest to production ready, is that right to say so?
Napolean_Solo#2907: Yep you're right. But are folks here okay with guiding me in case I need help?
jrowe#5371: hire a machine learning expert - pay for a couple hours of their time to discuss your ideas and develop a concrete list of things you need to learn and do to pull it off
cfoster0#4356: Not really, no. Not any moreso than GPT
jrowe#5371: $200 would probably be enough
Napolean_Solo#2907: You're right but finding the right expert is another challenge
jrowe#5371: it will save you weeks of frustration - just search for machine learning consultant
Napolean_Solo#2907: Yeah I do that thanks for your suggestion
Napolean_Solo#2907: *will do
Napolean_Solo#2907: @!🔞LoveOSGames🔞 you want access to GPT-3 openAI beta? |
Napolean_Solo#2907: I can try and talk to the employees there
Napolean_Solo#2907: Alright
Napolean_Solo#2907: Hmm GPT-3 can give you much more powerful results
Napolean_Solo#2907: You don't need to fine-tune it
Napolean_Solo#2907: See this example one of the folks in the beta posted
Napolean_Solo#2907: https://cdn.discordapp.com/attachments/729741769738158194/824879053768228874/image.png
Napolean_Solo#2907: The bold text is the prompt
Napolean_Solo#2907: Yes and expensive as hell lol
Napolean_Solo#2907: You get trial of $18
Napolean_Solo#2907: 1k tokens cost $0.06 for the most capable model
Napolean_Solo#2907: That is a result of most capable model
bmk#1476: can confirm, is expensive
Napolean_Solo#2907: Comparing GPT-2 with 3 is like comparing an ant with a human XD
bmk#1476: but 3 is only 50% more GPT than 2
Napolean_Solo#2907: Yeah but results are mind blowing
bmk#1476: the next one will only be 33% more
bmk#1476: at some point, each GPT will barely be more GPT than the last one
Napolean_Solo#2907: Wait 3 is not 50% more
Napolean_Solo#2907: 2 is 1.5 billion parameters
Napolean_Solo#2907: 3 is 148 billion |
bmk#1476: see: GPT101 is only 1% more GPT than GPT100
bmk#1476: source pls
Napolean_Solo#2907: That's not a 50% increase
Napolean_Solo#2907: https://en.m.wikipedia.org/wiki/GPT-3#:~:text=GPT-3's%20full%20version%20has,of%20pre-trained%20language%20representations.
Napolean_Solo#2907: *175 billion
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/824881190522716190/unknown.png
Napolean_Solo#2907: GPT-2 has 1.5 billion parameters while GPT-3 has 175 billion parameters
bmk#1476: i never said it was a 50% increase in *parameters*
bmk#1476: just.. eh, nvm
Napolean_Solo#2907: https://cdn.discordapp.com/attachments/729741769738158194/824882017933983754/holys-2.png
bmk#1476: (i was trying to make a joke about the numbering but i guess it was a bad joke)
Napolean_Solo#2907: Haha
Napolean_Solo#2907: I guess so
Napolean_Solo#2907: You know, imo BERT models are production ready. If I fine tune multiple BERTs for various NLP tasks I guess that would work out to be production ready. I can train a BERT for sentiment analysis and another one for classification and so on..
Napolean_Solo#2907: Currently I need models to carry out 3 main tasks, that is sentiment analysis and keyword extraction and multilabel classification
Napolean_Solo#2907: OpenAI has a bit cheaper model however I feel it's unreliable compared a model that is fine tuned for the same stuff
Napolean_Solo#2907: yeah i feel having multiple models each fine-tuned for a certain task would be a better option and more reliable as compared to a single model for multiple tasks
Napolean_Solo#2907: but that's just my opinion
Napolean_Solo#2907: https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html
Napolean_Solo#2907: is this what you were referring to? |
zphang#7252: I shall wave my hand from the fine-tuning world!
zphang#7252: single-task fine-tuning will generally outperform multitask unless they overlap
Napolean_Solo#2907: yep that's what I was thinking
Napolean_Solo#2907: @zphang how would you rate BERT for production ready environments?
Napolean_Solo#2907: for summarization I am using GPT-3
zphang#7252: I love BERTs. (Also if you haven't decided which to use, ELECTRA seems the most solid from my recent experience)
zphang#7252: but also I'm in the research world so I don't really know anything about production/deployment
Napolean_Solo#2907: fair enough
Napolean_Solo#2907: most important thing for production ready environments is reliability and costs
zphang#7252: for costs I assume that the distillberts would be useful
Napolean_Solo#2907: I can use GPT-3 for various tasks and achieve impressive results but costs don't make sense when it comes to production environments
Napolean_Solo#2907: but that shouldn't come at a cost of decreased reliability I hope Distillberts are reliable
Napolean_Solo#2907: I understand that they are cheaper but it doesn't make sense to make a trade-off between the two. Although that might be acceptable if the trade-off is marginal.
Napolean_Solo#2907: you know just fcuk it, i will use pre trained BERT models and fine-tune them and see how the results catch up to be
Napolean_Solo#2907: there are two reasons for that:
1) BERTs are being used a lot by startups as per my knowledge
2) Resources for BERTs are more compared to any other models I feel
Napolean_Solo#2907: is it an organization or a model?
zphang#7252: specifically https://github.com/huggingface/transformers is the library you want
Napolean_Solo#2907: Aha |
Napolean_Solo#2907: So basically it's like keras for all these models
Napolean_Solo#2907: Am I correct?
Napolean_Solo#2907: Damn
Napolean_Solo#2907: Oh so it's built on pytorch?
zphang#7252: it also has TF implementations, but most people use the pytorch ones
Napolean_Solo#2907: Holy this is what I was looking for
Napolean_Solo#2907: So huggingface takes out all the bs and makes it easy to use models right?
zphang#7252: yea huggingface basically became the hub
zphang#7252: where if you had a new better performing model, you'd want to port it in
zphang#7252: so people will use it
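A minimal sketch of what that ease of use looks like in practice, using the Transformers pipeline API. The default sentiment checkpoint is whatever the library selects; a fine-tuned model would replace it for anything production-facing.
```python
# Hedged sketch: one-line sentiment analysis via the pipeline API.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # library picks a default checkpoint
print(sentiment("This library makes shipping NLP features much easier."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```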
Napolean_Solo#2907: What happened to keras?
zphang#7252: keras is in TF land
Napolean_Solo#2907: Hmm
Napolean_Solo#2907: I see
Napolean_Solo#2907: I have some experience with tensorflow but not Pytorch
Napolean_Solo#2907: What do you suggest? Huggingface or keras?
zphang#7252: I've not heard of anyone in NLP using keras
cfoster0#4356: *Francois Chollet noises*
Napolean_Solo#2907: Which would make it easier to implement huggingface, Pytorch or Tf?
Napolean_Solo#2907: It won't make much difference right? |
Napolean_Solo#2907: Okay I will check them out
Napolean_Solo#2907: And hit up some of my investors
Napolean_Solo#2907: Haha I'll let you know, we definitely need some more talent
Napolean_Solo#2907: What's your background? You could DM
zphang#7252: To my dying days https://cdn.discordapp.com/attachments/729741769738158194/824891982526152724/unknown.png
zphang#7252: lol (he has not)
jrowe#5371: mysterious deep learning hipster group eleutherai
zphang#7252: well, more precisely
zphang#7252: he has ideological reasons for not supporting pytorch/facebook
zphang#7252: I think that's a pretty non-biased way to put it
zphang#7252: yea he's had tweets on why google isn't as bad as fb too
zphang#7252: but I think every couple months or so he'll still post charts showing "TF/Keras still has more installs than '''Library #2'''"
chilli#5665: I think that fchollet truly believes it
chilli#5665: Like, for some time I thought that he was just being mercenary about PyTorch being a competitor to TF, so therefore FB bad
chilli#5665: but I think the causal relationship is truly the other way around for him
zphang#7252: oh I agree. I think he would come across better if he just said "I don't like FB so I will not be supporting any FB output"
zphang#7252: but he sometimes still tries to make it about the libraries? which is odd
jrowe#5371: ideological thinking is a lazy filter - insert principles, run a first pass over some topic, fire off a hot take and stick to it, because you know your principles are good
jrowe#5371: then when faced with an argument opposing the hot take, it's framed by your brain as an argument against your principles, hilarity / suffering ensues
Sid#2121: Depending on your level of expertise, you can try our colab notebook https://github.com/EleutherAI/gpt-neo/. We're thinking of putting together an API, but not sure how soon that'll happen. |
Sid#2121: awesome! yeah the blocker rn is mostly the backend stuff, (i.e serving the model quickly), but if I or anyone else ever get round to doing that we'll ping you for the rest.
Sid#2121: unless you wanna help with that too
Sid#2121: by backend i meant the ML part hah
Sid#2121: super-backend
Sid#2121: the problem is our current setup uses tf.estimator which reloads the graph every time it's called, because of course it fucking does
Sid#2121: so for an api that would be way too slow
Sid#2121: so we need to export the model, or hack tf.estimator to not reload the graph
Sid#2121: yep, nor have I lol. I'll get round to it at some point
Sid#2121: if you want to have a go, i'd be happy to help you through steps where you get stuck
Sid#2121: i'd suggest getting familiar with the codebase by running through the colab first
Sid#2121: but the basic idea is to do something like this with tf estimator: https://github.com/marcsto/rl/blob/master/src/fast_predict2.py
Sid#2121: or go the proper route of exporting the model, which i don't know much about, but there should be resources hidden about the internet, and we do already *technically* have a method for it in gpt-neo but apparently it doesn't work well (cc @Daj )
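Roughly, the fast_predict trick looks like the sketch below: call `estimator.predict` once with an input_fn backed by a long-lived generator, then keep feeding that generator so the graph is never rebuilt. The names here (`FastPredict`, `input_fn_builder`) are illustrative, not the linked file's exact API, and the builder is assumed to wrap the generator in a `tf.data.Dataset`.
```python
# Sketch of the "keep predict() alive" workaround for tf.estimator.
class FastPredict:
    def __init__(self, estimator, input_fn_builder):
        self.estimator = estimator
        self.input_fn_builder = input_fn_builder  # assumed: wraps a generator into an input_fn
        self.next_features = None
        self.predictions = None

    def _generator(self):
        while True:
            yield self.next_features

    def predict(self, features):
        self.next_features = features
        if self.predictions is None:
            # The graph is built once here; later calls just pull the next
            # item from the long-lived predictions generator.
            self.predictions = self.estimator.predict(
                input_fn=self.input_fn_builder(self._generator))
        return next(self.predictions)
```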
Daj#7482: But HF is already working on this
Sid#2121: they're working on converting to huggingface
Sid#2121: i wouldn't count on that coming any time soon lol
Daj#7482: Fair
Sid#2121: fast_predict.py should work out of the box on GPUs, it could be like a five minute deal
Daj#7482: Well stop tagging me on that because I have no idea how it works lol
Sid#2121: i only ran into problems with tpu estimator
Daj#7482: TF TPU stuff really is quite cursed |
Sid#2121: GPU
Daj#7482: It would be cool if it ran on TPU but that's most likely hell
Sphinx#2092: TF is quite cursed in general.
Sid#2121: our new codebase is pytorch, part of the reason we moved
Louis#0144: The other reason was to just keep the engineers busy so that they’re happy and well fed with a constant stream of memes
Louis#0144: But we aren’t supposed to say that 😉
Louis#0144: What’s going on here https://cdn.discordapp.com/attachments/729741769738158194/825004608430145556/video0.mp4
inox#5400: new cat just dropped
ethan caballero#6044: This implies that the internet now contains more (webtext quality) text generated by GPT-3 than not generated by GPT-3 (300 billion words); Right?
https://twitter.com/gdb/status/1375169852889919488
Sid#2121: it's not like all that text is just going straight to websites, lol. Most of it's probably private.
Dromarion#3383: *Half is AI Dungeon generated erotica*
Sid#2121: probably literally this
AI_WAIFU#2844: Seriously, that's the only use for AI dungeon.
AI_WAIFU#2844: The primary application of GPT-Neo will be to give us a way to get said erotica without it getting linked to your CC#.
Dromarion#3383: The thing with erotica is that the writing quality doesn't necessarily need to be good, it just needs to turn you on. So a half coherent text generator that does the job precludes the need to pay some degenerate to write it, or to write it yourself.
jrowe#5371: catgirl go :brr:
mkualquiera#3484: I would even argue it being slightly incoherent is actually better
nz#9710: erotica dataset when
mkualquiera#3484: There's probably a fair share of that on the pile, right? |
Sid#2121: @bmk aren't we already hosting 500gb of literotica somewhere lol
Sid#2121: oh, not 500 unfortunately but here you go https://the-eye.eu/public/AI/pile_preliminary_components/Literotica.jsonl.zst
mkualquiera#3484: yeah there it is
nz#9710: o shit
jrowe#5371: there goes your weekend
bmk#1476: if you can read all of literotica in a weekend, then.. wow, congratulations
jrowe#5371: just use that spritz speed reader at 1k wpm
jrowe#5371: and don't shake too much
bmk#1476: that doesn't sound fast enough
bmk#1476: you'd need to read at like 800k wpm
bmk#1476: assuming you don't need to eat or sleep
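Back-of-the-envelope check of that figure, assuming the dump is on the order of 12 GB of text (the size quoted further down) and roughly 6 bytes per word:
```python
words = 12e9 / 6        # ~2 billion words in ~12 GB of text (assumed average word length)
minutes = 48 * 60       # one weekend, no eating or sleeping
print(words / minutes)  # ~700,000 words per minute, same ballpark as the 800k wpm estimate
```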
mkualquiera#3484: Oh wow
mkualquiera#3484: first time learning about spritz
mkualquiera#3484: this seems great
mkualquiera#3484: https://github.com/pasky/speedread
daster#4021: Hey all!
Doing a refresh of a job posting - specifically for an MLOps candidate to help us support EleutherAI. Description below!
We are CoreWeave (https://www.coreweave.com/), the infrastructure team behind the Open Source GPT3 Training efforts here. Here is the link to the ML Ops role (https://apply.workable.com/coreweave/j/8CABC79205/) we are looking to fill. Please note that remote is perfectly acceptable. |
Thanks!
EricHallahan#1051: I was all the rage.
Dicky the sexy diesel#7454: some sites to play with question answering?
Dicky the sexy diesel#7454: ai question answering?
StellaAthena#3530: The web demo that exists is the Colab file that you’ve been pointed to multiple times.
StellaAthena#3530: It doesn’t matter how much space you have on your computer – Colab doesn’t run on your computer. It runs on Google’s computers
triggerhappygandi#0001: Dear God.
triggerhappygandi#0001: Is it not ALL of Literotica?
StellaAthena#3530: I’m pretty sure it is. Or, it was supposed to be
StellaAthena#3530: It’s ~12 GB
dadnoithurts#5425: hey guys
triggerhappygandi#0001: The way Sid said it I thought even Literotica had 1000GB text
dadnoithurts#5425: any tips for fine-tuning gpt2 on a really small dataset? ~2500 training examples
Dicky the sexy diesel#7454: I need a question answering website to try online
bmk#1476: we're the wrong people to ask
triggerhappygandi#0001: Colab is a website. It is online. It has question answering (pull up any repo). Literally all there.
triggerhappygandi#0001: Lmao moloch?
triggerhappygandi#0001: Who changed bot name
EricHallahan#1051: AGI is dead. |
triggerhappygandi#0001: Look for tensorfork's gpt-2 Colab notebook
bmk#1476: this is not a beginner discord, check #communities for some places to look
triggerhappygandi#0001: Enter this discord only if >160 iq or <10 iq
dadnoithurts#5425: thanks a lot man, I guess its this one? https://colab.research.google.com/drive/1QE4LVEYITjIkjXxosahHVZPsSHtYZy7x
triggerhappygandi#0001: Yeah
triggerhappygandi#0001: It has fine-tuning iirc
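As an alternative to that notebook, here is a hedged, minimal Hugging Face Transformers sketch for fine-tuning GPT-2 on a small plain-text file. The file name and hyperparameters are placeholders; with only ~2500 examples, a low epoch count and a small learning rate help limit overfitting.
```python
# Minimal fine-tuning sketch with the Trainer API (illustrative settings only).
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# "train.txt" is a placeholder: one long plain-text file of training examples.
dataset = TextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="gpt2-finetuned", num_train_epochs=3,
                         per_device_train_batch_size=2, learning_rate=5e-5)
Trainer(model=model, args=args, data_collator=collator,
        train_dataset=dataset).train()
```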
EricHallahan#1051: Or both simultaneously.
triggerhappygandi#0001: That would be epic@EricHallahan
dadnoithurts#5425: anyone know why there's no adafactor in tf core? only implementation I found is the Tensor2Tensor one and the thing is already deprecated lol
EricHallahan#1051: Adam is all you need.™️
triggerhappygandi#0001: Because adam rulez everything else droolz@dadnoithurts
dadnoithurts#5425: lololol
triggerhappygandi#0001: For real though. Adam/AdamW are pretty much best. If you want to go fancy you can do Lamb/Ranger etc.
bilal2vec#1816: this still haunts me
bilal2vec#1816: https://twitter.com/bilal2vec/status/1331078883412623360?s=21
bmk#1476: you don't need adamw if you aren't doing weight decay
alstroemeria313#1694: No Ranger pls
triggerhappygandi#0001: Obviously
triggerhappygandi#0001: @alstroemeria313 ytho
alstroemeria313#1694: > So RAdam is sensitive to the loss function being multiplied by a scalar |
> Quite badly
> Like what were they thinking
> And you can't change the learning rate accordingly like with SGD
> Because when it switches back to Adam-like mode it'll be way too low/high now
dadnoithurts#5425: @bilal2vec F
triggerhappygandi#0001: Oof. Didn't know that
alstroemeria313#1694: I'm trying Lookahead + AdamW now
triggerhappygandi#0001: Tried PowerSGD?
alstroemeria313#1694: What's that
Louis#0144: Lmao top notch
Louis#0144: You know she’s in this server
alstroemeria313#1694: Like my main problem is the occasional bad step that it can't recover from easily?
triggerhappygandi#0001: SGD, but _powerful_ lol
Louis#0144: I forgot her username
Louis#0144: She’s talked here before
dadnoithurts#5425: what kind of lr schedulers do you guys use when running Adam?
alstroemeria313#1694: Isn't PowerSGD just for distributed optimization?
triggerhappygandi#0001: I mean, yeah
Louis#0144: @helen 🐳 your GitHub issue comes up again
triggerhappygandi#0001: But apparently it is kinda decent. |
alstroemeria313#1694: I'm optimizing over GAN latents, they're tiny
triggerhappygandi#0001: How about Lamb?
triggerhappygandi#0001: Never actually tried it.
alstroemeria313#1694: I've never looked at it
Louis#0144: https://arxiv.org/pdf/1503.07589.pdf this paper has TEN PAGES of authors
Louis#0144: Ok I need to go back to work https://cdn.discordapp.com/attachments/729741769738158194/825080881239949362/video0.mp4
triggerhappygandi#0001: How many pages did the Higgs Boson paper have?
bilal2vec#1816: lmao the pain of trying to make it compatible with tpus/keras/schedulers was enough to make me jump off the tf ship
triggerhappygandi#0001: Playing this on repeat feels like the cat is singing
Louis#0144: I KNOW RIGHT
Louis#0144: omg
helen 🐳#5160: HAHAHAH i'm so sorry i really am going to maybe make a PR for this
Louis#0144: Speed it up!
Louis#0144: It’s so funny
bilal2vec#1816: haha this should be my intern project lmao
Louis#0144: Did u ever convert whalefacts to GPT3
Louis#0144: Where tf did you even find the data for that too
helen 🐳#5160: whalefakes now runs on another secret big language model, but not gpt3 :))))) i scraped the original @awhalefact twitter acct
Louis#0144: Ooo
Spy#9778: @helen 🐳 re: your pinned tweet about getting GPT-2 onto a single GPU |
mkualquiera#3484: It runs on underpaid workers typing very fast
Spy#9778: I got GPT-2 XL training with adam on a 24GB one as of yesterday
Spy#9778: but not full context
Spy#9778: what context size were you able to get on the 32GB ones?
Spy#9778: I realize I'm a bit behind the times but alas I am not a secret language model haver
helen 🐳#5160: i forget now tbh! i also had to chop the context length to get it to fit. luckily you can fit a lot of tweets into even a cropped context
Spy#9778: ah okay
alstroemeria313#1694: Um why can't Lamb converge on a simple convex test problem?
alstroemeria313#1694: Like MSE loss between two vectors, which just tries to make the first one equal to the second
aro#1177: Happy to answer questions about distributed Shampoo. author here.
alstroemeria313#1694: ...Lamb doesn't debias the Adam momentum/squared grad buffers? What?
alstroemeria313#1694: But it still initializes them to 0...
alstroemeria313#1694: Either you should init to 0 and debias or you should init to the first gradient and not debias
StellaAthena#3530: @helen 🐳 I'm low-key jealous of the tag `mathemakitten`
alstroemeria313#1694: Um, I just tried Shampoo on the same simple convex problem and can't get it to converge? It's even worse than Lamb?
Louis#0144: Shampoo is trash
Louis#0144: Don’t bother
Louis#0144: I have had nothing but a bad time with it
Louis#0144: I tried it extensively for a month or so
jrowe#5371: that was nice of you to tell the author |
jrowe#5371: lol...
aro#1177: Shampoo is an approximation of full-matrix AdaGrad. It forms statistics based on the dimensions of the gradient tensors. G : [5, 6, 8]. It will form [5,5], [6,6], [8,8]. For convex problems it makes less sense, as your parameters are generally vectors. In that case full-matrix AdaGrad works exceedingly well. https://twitter.com/_arohan_/status/1304623387499610112?s=21
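Concretely, the statistics for a [5, 6, 8] gradient look like the sketch below: one square matrix per dimension, formed by contracting the gradient with itself over all the other dimensions. Illustrative only; the real implementations accumulate these over steps and add epsilon * I before taking inverse roots.
```python
import torch

G = torch.randn(5, 6, 8)  # stand-in for a gradient tensor
stats = []
for dim in range(G.ndim):
    other = [d for d in range(G.ndim) if d != dim]
    # e.g. for dim=0 this is einsum('iab,jab->ij', G, G), shape [5, 5]
    stats.append(torch.tensordot(G, G, dims=(other, other)))
print([tuple(s.shape) for s in stats])  # [(5, 5), (6, 6), (8, 8)]
```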
alstroemeria313#1694: @aro ah
Louis#0144: Oh
Louis#0144: lol
Louis#0144: I had issues with GGCNs and shampoo
aro#1177: Sorry Louis you had a bad experience 😆 - we just released a correct implementation. Your experience jives with everyone who used external code that had all kinds of bugs
Louis#0144: Oh ok
Louis#0144: I’ll check it out
Louis#0144: Thanks
Louis#0144: It would actually be a life saver for me if it works well
Louis#0144: 🙏
Louis#0144: Have you looked at all the new second order methods that have come out over the last year?
aro#1177: Unfortunately we only have the Jax implementation right now. We will see what we can do about releasing pyrotechnics as well as tensorflow
alstroemeria313#1694: The learning rate seems to be drastically different in meaning from Adam?
Louis#0144: There was one that tried to directly rebut shampoo
aro#1177: Not pyrotechnics, pytorch.
Louis#0144: Using some weird fractional root thing
Louis#0144: It got rejected
Louis#0144: But I tried it myself and it works really well
Louis#0144: WAIT
Louis#0144: maybe I am conflating shampoo with something else
Louis#0144: LMAO
alstroemeria313#1694: oh no
Louis#0144: https://openreview.net/pdf?id=Sc8cY4Jpi3s
Louis#0144: This is the one I really liked
Louis#0144: It’s by you as well though
aro#1177: No problem! I hate working on optimization stuff because the community can’t make forward progress anymore with 100 Adam variants
Louis#0144: I was referring to the 2018 paper above
alstroemeria313#1694: @aro Shampoo has Adagrad-style monotonically decreasing learning rates?
aro#1177: Oh yeah, I am the author of that paper.
aro#1177: The latest version doesn’t!
Louis#0144: It’s really well written!
Louis#0144: I liked it
aro#1177: Thanks 🙏
Louis#0144: “Thank you for changing your words to be professional.”
Louis#0144: Was that you?
Louis#0144: LOL
aro#1177: The previous feedback from the AC was quite upsetting
Louis#0144: Can I see edit history on open review |
aro#1177: Rejection was totally fine
aro#1177: But they said it was useless
aro#1177: All papers get rejected including original adagrad
alstroemeria313#1694: @aro so if I have a 1x18x512 parameter tensor, say, does that mean it's using a 1x1, an 18x18, and a 512x512 matrix to store statistics in?
aro#1177: Yes, but we actually don't store the 1x1 in the new implementation, instead store 18x18 and 512x512. We also do things like [5, 10, 1024] into [50, 1024] to get more correlations
alstroemeria313#1694: Oh
alstroemeria313#1694: Which does the pytorch_optimizer version do
StellaAthena#3530: This is quite thought provoking https://www.technologyreview.com/2021/03/26/1021318/google-security-shut-down-counter-terrorist-us-ally/
Louis#0144: @aro did anyone ever use shampoo for a big SOTA model
Louis#0144: Or a large LM
aro#1177: You mean > 1b parameters? Unless you count embeddings (dlrm). No
aro#1177: Though I am looking at it now with Jax impl.
alstroemeria313#1694: OK so a run with Shampoo was about 40% as fast as a run with Adam?
alstroemeria313#1694: Like for the same number of iters
aro#1177: 2x in iters. Each step is a bit more expensive due to the matrix multiplies needed to compute the preconditioned gradient instead of coordinate-wise multiplication
alstroemeria313#1694: Ahh.
alstroemeria313#1694: I am also using LR of 5 vs 0.07 with Adam.
aro#1177: Think of Adam and Shampoo as approximating the same thing. One is diagonal (one learning rate per parameter) and the other is a Kronecker product of matrices (allowing correlation between parameters)
aro#1177: If you use grafting, you can use the same Hparam setting as Adam (or only search locally)
aro#1177: Grafting is this idea that you can run one optimizer to get the scale of the updates, and use direction from another. https://rosanneliu.com/dlctfs/dlct_210312.pdf has details slide 49 |
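A rough per-layer sketch of grafting as described there (and matching the formula given further down, update * ||grafting step|| / (||update|| + eps)). Both arguments are assumed to be the raw, pre-momentum updates for one parameter tensor.
```python
import torch

def graft(direction_step: torch.Tensor, magnitude_step: torch.Tensor,
          eps: float = 1e-16) -> torch.Tensor:
    # direction_step: e.g. the Shampoo (preconditioned) update for this layer
    # magnitude_step: e.g. the SGD/AdaGrad update run "in the shadow"
    return direction_step * (magnitude_step.norm() / (direction_step.norm() + eps))
```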
alstroemeria313#1694: Ah
aro#1177: Of course some of the variants of Adam probably have equivalent matrix versions. But improvements with those variations might be just marginal.
aro#1177: Yes! Testing it 🙏
alstroemeria313#1694: @aro Oh wow, I get *vastly* different results with a 1x4 parameter tensor instead of a 4 parameter tensor
aro#1177: Do you change exponents of the matrix inverse based on the rank? May want to just override it and try -1, -0.5, -0.25
aro#1177: Could you share your code? I am wondering what you are doing for inverse pth roots
aro#1177: Don’t do that!
dadnoithurts#5425: now that torch has native complex number support Im running away from TF
aro#1177: https://github.com/google-research/google-research/blob/f06d25db7de870cad822a46c5ab69705dd384de8/scalable_shampoo/jax/shampoo.py#L343 make a pytorch version of this, use fp32
alstroemeria313#1694: This? https://cdn.discordapp.com/attachments/729741769738158194/825100317602742324/optimizers-Copy1.html
alstroemeria313#1694: Or did you mean to reply to someone else
aro#1177: Oh oops, was curious about your results. Pastebin will be better don’t know how to use this phone 😂
alstroemeria313#1694: https://pastebin.com/1cz7k8Ku
alstroemeria313#1694: The 1x4 version converges, the 4 version doesn't
aro#1177: Oh man, there is a shampoo in torch_optimizer!?, let me take a look
alstroemeria313#1694: Since the real thing I'm trying it on is a 1x18x512, I wondered if the 1 mattered
aro#1177: Run it on the GPU!
aro#1177: It’s super fast
aro#1177: On the cpu
alstroemeria313#1694: Why is it on the CPU |
aro#1177: This can run on the GPU!
aro#1177: This = inverse pth root
aro#1177: And it runs super parallel, all matrix matrix or matrix vector products
aro#1177: Btw pytorch svd on GPU has bug , let me find it
aro#1177: On average it takes 10 iters
aro#1177: 100 was just a default ooops, it early exits
alstroemeria313#1694: @aro ...So the weight decay is transformed by the preconditioner too?
aro#1177: No, not at all.
aro#1177: It’s added after preconditioning but before momentum
aro#1177: Just like momentum optimizers in Jax
alstroemeria313#1694: I should compare your code to the pytorch-optimizer code then... 🙃
alstroemeria313#1694: Which seems to be doing it before.
aro#1177: Pytorch gpu, one reason maybe shampoo and other implementation fail https://github.com/pytorch/pytorch/issues/28293
alstroemeria313#1694: oh no
nz#9710: another user (chilli) mentioned that pytorch may be part of the reason why second order optimizers are not as popular as they could possibly be.
aro#1177: Inverse pth root is quite stable so if you want to port it
aro#1177: Yes! Parallelism
aro#1177: Also you can derive more faster than 10 iteration variants as well
aro#1177: Yeah!!
aro#1177: I am excited to see your variant now! |
chilli#5665: any justification for why "grafting" works? Seems like something that could break horrendously
chilli#5665: a priori, I also don't see any particular reason why grafting norm onto direction would work better than grafting direction onto norm
aro#1177: It's really empirical. The finding is bizarre, but after you think about it, it makes sense. Learning rate schedules, both implicit and explicit, are why we have 100 first-order methods. So for all first-order methods, once you do grafting, the directions are all in the same space (signs don't change)
StellaAthena#3530: > a priori, I also don't see any particular reason why grafting **norm onto direction** would work better than grafting **direction onto norm**
Do we know this is the case?
chilli#5665: I don't see why that's true
aro#1177: So once we factor out the per-layer step size based on the magnitude optimizer, you find that almost all first-order optimizers behave similarly.
chilli#5665: For example, SGD + momentum can definitely have a different direction than just regular SGD
chilli#5665: oh ok, so that's mostly an empirical finding as well
chilli#5665: :thonk:
aro#1177: Yes! Should have clarified, momentum is applied after grafting
chilli#5665: hmmmmm
aro#1177: Try it! Take two optimizers one that works well, one that is mediocre
chilli#5665: haha I'm not doubting it, just finding it very weird to think about
chilli#5665: but you're saying that you take the pre-momentum gradient
chilli#5665: take that norm
chilli#5665: and then graft it onto the other optimizer's direction?
aro#1177: Make direction optimizer have unit norm
aro#1177: Paste the norm of the magnitude optimizer which is running as a shadow
chilli#5665: before you apply momentum |
aro#1177: Yep!
chilli#5665: so, uhh, if you take it before you apply momentum
chilli#5665: how do the different first-order optimizers even differ :thonk:
chilli#5665: uhhh, hold on
aro#1177: https://github.com/google-research/google-research/blob/f06d25db7de870cad822a46c5ab69705dd384de8/scalable_shampoo/jax/shampoo.py#L771
aro#1177: Preconditioner
chilli#5665: first-order optimizers have preconditioners?
alstroemeria313#1694: The division by the sqrt of the squared gradient accumulator
chilli#5665: oh, like for adam?
alstroemeria313#1694: Is a diagonal preconditioner
alstroemeria313#1694: Adagrad on up
chilli#5665: I see
chilli#5665: I haven't thought about those as preconditioners
chilli#5665: (I haven't thought too much about the details of these optimizers in general...)
chilli#5665: so for something like Adam: https://cdn.discordapp.com/attachments/729741769738158194/825107990796959794/1Qzpf8aKwdBYTgMuL69C5qw.png
aro#1177: https://rosanneliu.com/dlctfs/dlct_210312.pdf
aro#1177: Visually
aro#1177: V_t inverse square root is the diagonal preconditioner
chilli#5665: what are you taking the norm of from adam?
chilli#5665: eta/sqrt(v_t + epsilon) * g? |
aro#1177: Yeah!
chilli#5665: interesting
chilli#5665: btw, I don't know if you saw any of the discussion in #multimodal , but do you have any thoughts on lookahead type optimizers? Does that make sense to combine with something like Shampoo?
aro#1177: Look ahead is interesting! But it just takes too many steps, and I don’t think that compute is worth it.....
alstroemeria313#1694: @aro I'm trying it rn to solve my bad step problem when optimizing GAN latents and it seems to work
aro#1177: If you run look ahead on the same batch things get interesting at large batch sizes
chilli#5665: how do you run lookahead on the same batch?
aro#1177: Take many steps on the same batch
chilli#5665: That kinda makes it sound like you're not having it be the same batch anymore
chilli#5665: lol
alstroemeria313#1694: I mean the sort where you run for five or so batches with the inner optimizer then interpolate the slow weights with the fast weights
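A hand-rolled sketch of that scheme (not the official Lookahead wrapper): k inner steps with the fast optimizer, then pull the slow weights part of the way toward the fast weights and restart from there. `loss_fn(model, batch)` is an assumed callable returning a scalar loss.
```python
import torch

def lookahead_train(model, inner_opt, loss_fn, batches, k=5, alpha=0.5):
    slow = [p.detach().clone() for p in model.parameters()]
    for i, batch in enumerate(batches, start=1):
        inner_opt.zero_grad()
        loss_fn(model, batch).backward()
        inner_opt.step()
        if i % k == 0:
            with torch.no_grad():
                for p, s in zip(model.parameters(), slow):
                    s += alpha * (p - s)   # move slow weights toward fast weights
                    p.copy_(s)             # restart fast weights from the slow ones
```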
aro#1177: Update weights, compute forward again.
chilli#5665: oh, with the entire batch
chilli#5665: why is that interesting?
aro#1177: Think about it this way!
aro#1177: You get better estimate of gradient
aro#1177: Thought question: do you take 1 step on a batch or 10 steps (with 10 times smaller Lr) on the same batch
chilli#5665: why is that especially true of large batches?
chilli#5665: I mean, second one is probably somewhat better
chilli#5665: but I doubt it would be better than taking 10 steps with different batches |
chilli#5665: :thonk:
chilli#5665: and that's what you need to compare to, no? Since you're basically doing 10x the compute
aro#1177: You are right! It’s usually not except in rare circumstances
aro#1177: https://arxiv.org/abs/2010.13639
chilli#5665: ah
aro#1177: When batch sizes are large
chilli#5665: ok, so when your data IO is a limiting factor
aro#1177: Oh no
aro#1177: Sure you can use that
chilli#5665: oh, no?
aro#1177: But convergence point of view
aro#1177: Is it actually useful
chilli#5665: > As a consequence, CPU-bound preprocessing and disk/memory/network operations have emerged as new performance bottlenecks, as opposed to hardware-accelerated gradient computations.
chilli#5665: You're asking whether taking 10 steps on the same batch is better than one big step?
chilli#5665: yeah, I think so
chilli#5665: I can see how lookahead on these 10 steps would be even better
chilli#5665: but I guess I'm not really sure why this is especially true when batch sizes are large
aro#1177: In the convex setting, the convergence rate has two terms: one that has the noise in the gradient, the other the curvature (hardness of optimization)
aro#1177: With small batches the rate is dominated by the noise, so it's actually worse to take 10 steps on the same batch vs 10 fresh samples
chilli#5665: oh, are you claiming that 10 steps on the same batch is actually better than 10 fresh samples (in some settings)? |
aro#1177: In the other case, it doesn’t make thing worse
chilli#5665: I see
aro#1177: By that much!
chilli#5665: well, by an amount equal to how much noise you have, presumably
aro#1177: I guess this is why I am not fully bought into look ahead idea
chilli#5665: people usually aren't doing lookahead on the same batch. though
Louis#0144: Weird q but has anyone looked into doing DL with second order Newton Ralphson
chilli#5665: so isn't this point somewhat moot?
aro#1177: But, doesn’t look ahead average the weights at the end
aro#1177: So it’s rewinding back?
kindiana#1016: I don't believe that should be the case, with sgd or sgdm it should be actually the same, but with adam you just get more accurate variance estimates with smaller batches
chilli#5665: I think lookahead just takes a step in the direction of the final weights
chilli#5665: he's talking about 10 steps on same (small) batch vs 10 steps on different (same-sized small) batches
kindiana#1016: oh my bad
chilli#5665: The idea is very similar to checkpoint averaging but seems different in some ways (not sure if it's significant)
chilli#5665: but I guess, either way, I don't understand this point :thonk:
aro#1177: Maybe a question to clarify is: do you see lookahead perform better for a fixed number of iterations
aro#1177: Or when comparing best possible accuracy/loss
aro#1177: I can see the second happening due to the averaging as you said. But would be totally surprised if it’s better at fixed iteration count(number of gradient evaluation)
chilli#5665: It's identical |
chilli#5665: Ranger is just radam + lookahead
chilli#5665: ?
chilli#5665: We are?
chilli#5665: How so?
chilli#5665: Not totally sure, I think it's plausible that it could improve in terms of number of gradient evaluations
chilli#5665: I mean, checkpoint averaging improves your model if you do it at the end of training
chilli#5665: I think it's natural to think it would help during training as well
chilli#5665: Oh, that was a separate discussion
chilli#5665: From here
chilli#5665: I asked aro about whether he thought lookahead could help shampoo
aro#1177: I see, so yeah with a second order method that uses much larger learning rate, look ahead actually might improve things
aro#1177: There is hessian free methods from James martens
chilli#5665: And he was skeptical, but mentioned that he's found it useful in this setting where you use the same batch
chilli#5665: I'm not even sure if lookahead is what I think makes sense
aro#1177: https://www.cs.toronto.edu/~jmartens/docs/Deep_HessianFree.pdf
chilli#5665: Rather than something like
chilli#5665: "Everytime you plateau, average the last 20 steps"
Louis#0144: No I meant like
chilli#5665: Or something like that
Louis#0144: With hessian |
Louis#0144: I know it’s expensive
chilli#5665: But lookahead optimize is the closest to what I'm imagining
Louis#0144: But there’s some Monte Carlo methods to get the hessian that work well
Louis#0144: I went to a seminar course on this actually
aro#1177: Wouldn’t memory become the primary bottleneck
Louis#0144: MCMC in nonlinear optimization
Louis#0144: Yes
Louis#0144: But like let’s say it wasn’t anymore
Louis#0144: Let’s say in ten years every GPU has a terabyte of VRAM
aro#1177: (But trillion weights...)
Louis#0144: SHHHH
Louis#0144: lmaooo
Louis#0144: Let a man dream
Louis#0144: HAHAHA
Louis#0144: You’ll fit right in
Louis#0144: That sounds like something Connor would say
nz#9710: *one of us one of us*
Louis#0144: Kitty comes to say hello https://cdn.discordapp.com/attachments/729741769738158194/825115539785384016/image0.jpg
Louis#0144: I’m reading a model theory textbook
Louis#0144: I think she wants to see what it is I’m doing |
Louis#0144: She keeps hearing pages flip which aren’t real pages it’s just an iPad sound effect
Louis#0144: Lmao
Louis#0144: From experience actually second order methods help with sparse networks with nontrivial topologies
Louis#0144: So like weird variants of hopfield networks for instance
Louis#0144: Where some edges randomly get dropped
Louis#0144: Or weird higher order sparsity
Louis#0144: This isn’t ANNs, this was from atheoretical neuroscience thing I did
Louis#0144: Oh I mistook dense for sparse
Louis#0144: I feel silly
Louis#0144: My brain literally did the conjugation for me
Isaac McHorse#2007: i'm taking a screenshot so i can remember this moment forever
alstroemeria313#1694: @aro does the Shampoo learning rate have any particular meaning in terms of what units it's in?
Teemochu#8740: There's also fimfarchive if you want to specialize in ponies... haven't extracted text from the epubs yet so I'm not sure how much text it is (Astralite probably knows though) but it's around 200k fics (majority SFW but there's quite a bit of explicit as well), 6GB compressed. https://www.fimfiction.net/user/116950/Fimfarchive
bmk#1476: please do not specialize in ponies
Louis#0144: Specialize in geese
Louis#0144: Did you actually just tell a grown man to specialize in my little pony erotica
Louis#0144: That’s
Louis#0144: Truly something else
aro#1177: Phone has died. Yeah so I think second order methods should improve the convergence of these large models. I have some weak evidence on this.
aro#1177: So I am basing this off the Shampoo Jax implementation: if you graft to SGD, you can use the same scale as your SGD runs; graft to AdaGrad, you need larger levels.
kindiana#1016: where's the code haha
aro#1177: That would be cool to see. I am taking a break now and getting some sun!
triggerhappygandi#0001: Nice pussy
triggerhappygandi#0001: :p
alstroemeria313#1694: I really don't understand the pytorch-optimizer Shampoo implementation
alstroemeria313#1694: It doesn't compute a separate L and R preconditioner?
alstroemeria313#1694: But just one?
alstroemeria313#1694: Ohh
alstroemeria313#1694: L and R are for the 2-dim case
alstroemeria313#1694: And it just does ndim preconditioners
alstroemeria313#1694: I still don't know that the matrix power exponent is right
alstroemeria313#1694: ...Also does it correctly handle dim-0 parameters
alstroemeria313#1694: I changed the matrix power exponent in pytorch-optimizer Shampoo from 1 / order to 1 / order / 2 and the convergence problem went away...
alstroemeria313#1694: Also I confirmed that it does not work with dim-0 parameters
alstroemeria313#1694: I should make it decay the preconditioners Adam-style too
alstroemeria313#1694: To match the Jax code
alstroemeria313#1694: Like... it just works better for me now
alstroemeria313#1694: @aro Do you do Adam-style debiasing of the momentum and preconditioners?
aro#1177: No, I instead make the learning rate warm up quadratically: lr times min(step/warmup, 1)**2
alstroemeria313#1694: Ah |
alstroemeria313#1694: How come?
aro#1177: Adam with bias correction +linear warmup is roughly equivalent to Adam without bias correction with quadratic warmup
alstroemeria313#1694: Oh
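In code, that warmup rule is just the following (names are illustrative):
```python
def quadratic_warmup(base_lr: float, step: int, warmup_steps: int) -> float:
    # lr(step) = base_lr * min(step / warmup_steps, 1) ** 2
    return base_lr * min(step / warmup_steps, 1.0) ** 2
```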
alstroemeria313#1694: Any tips on learning rate tuning?
aro#1177: Do grafting to SGD to remove the scale from Shampoo. Then tuning is similar to other optimizers. Perhaps an important tuning parameter is the epsilon added to the statistics before the inverse. Use power iteration to find the max eigenvalue, times 1e-6
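A sketch of that epsilon heuristic: estimate the top eigenvalue of the (symmetric PSD) statistics matrix with a few power-iteration steps, then use 1e-6 times that as the epsilon. The iteration count is an assumption.
```python
import torch

def max_eigenvalue(mat: torch.Tensor, iters: int = 20) -> float:
    v = torch.randn(mat.shape[0], device=mat.device)
    for _ in range(iters):
        v = mat @ v
        v = v / (v.norm() + 1e-30)
    return float(v @ (mat @ v))  # Rayleigh quotient at the converged vector

# epsilon = 1e-6 * max_eigenvalue(statistics)
```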
alstroemeria313#1694: ...Wait, do you add the gradient times its transpose before momentum to the preconditioner, or after momentum?
chilli#5665: Is grafting how you get best results or is it just a hack to use existing lr schedules
aro#1177: Grafting is the only way, due to several reasons
chilli#5665: Interesting
alstroemeria313#1694: ...
aro#1177: GG.T is on original grad
alstroemeria313#1694: Then that's another bug with the pytorch-optimizer code 🙃
aro#1177: Yeah
chilli#5665: Can you elaborate on the reasons?
chilli#5665: I kinda assumed it was a way to easily integrate distributed shampoo into existing lr schedules
chilli#5665: As opposed to an essential part of the method
aro#1177: 1. Shampoo proves an upper bound. It can be off by an 'r' rank factor which we don't know. But this is convex theory, so it doesn't necessarily apply. We don't know the learning rate scale of each layer; especially things like batchnorm tie up the learning rate (implicit from grad norm) and weight norm.
2. If we are computing the inverse every K steps, then even in the convex setting it's worse.
3. The diagonal approximation from the Kronecker factors is worse than the original diagonal AdaGrad. There is a fix in the last paragraph of Appendix B in https://arxiv.org/pdf/2002.09018.pdf
aro#1177: The fix is expensive, grafting is better in case of 3.
aro#1177: (Empirically)
chilli#5665: Hmm, I think I need to read your paper properly to understand these reasons 😅 |
chilli#5665: But before I do that, do you have an opinion on whether grafting is the "right" thing to do? Or do you think there's probably a better way to do it?
alstroemeria313#1694: Um the pytorch-optimizer Shampoo implementation also copies the weight-decay-altered gradient into the momentum buffer
alstroemeria313#1694: :/
aro#1177: Wait what do you mean?
aro#1177: They compute preconditioner with weight decay part of that gradient. That is insane
aro#1177: ?
alstroemeria313#1694: oh, Adam does that too
aro#1177: So essentially WW.T
alstroemeria313#1694: But AdamW got rid of that
alstroemeria313#1694: But yeah the weight decay altered gradient gets preconditioned
alstroemeria313#1694: https://github.com/jettify/pytorch-optimizer/blob/master/torch_optimizer/shampoo.py#L114
aro#1177: This is really wrong. Good catch!
alstroemeria313#1694: I already changed it in my local copy
aro#1177: In Adam it might be okay, since the sign of the gradient doesn't change. In Shampoo, the preconditioner changes signs. So I wonder if it will increase the weight norm instead of decreasing it.
alstroemeria313#1694: Oh no
alstroemeria313#1694: > In Adam it might be okay, since the signs of gradient doesn’t change.
It was only sort of OK in Adam to begin with, that's why AdamW was created
aro#1177: Jax shampoo includes weight decay by default 🙃
alstroemeria313#1694: By which I mean it didn't break completely but it did considerably decrease the usefulness of weight decay
alstroemeria313#1694: > As you can see the weight decay is normalized by sqrt(v) as well. If the gradient of a certain weight is large (or is changing a lot), the corresponding v is large too and the weight is regularized less than weights with small and slowly changing gradients! This means that L2 regularization does not work as intended and is not as effective as with SGD |
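Side by side, the difference being quoted looks roughly like this (single plain tensor, bias correction omitted for brevity): folding the decay into the gradient sends it through the sqrt(v) preconditioner, while AdamW applies it directly to the weights after the preconditioned step.
```python
import torch

def adam_l2_step(p, grad, m, v, lr, wd, b1=0.9, b2=0.999, eps=1e-8):
    grad = grad + wd * p                     # decay folded into the gradient...
    m.mul_(b1).add_(grad, alpha=1 - b1)
    v.mul_(b2).addcmul_(grad, grad, value=1 - b2)
    p.sub_(lr * m / (v.sqrt() + eps))        # ...so it gets divided by sqrt(v) too

def adamw_step(p, grad, m, v, lr, wd, b1=0.9, b2=0.999, eps=1e-8):
    m.mul_(b1).add_(grad, alpha=1 - b1)
    v.mul_(b2).addcmul_(grad, grad, value=1 - b2)
    p.sub_(lr * m / (v.sqrt() + eps))
    p.sub_(lr * wd * p)                      # decoupled decay, untouched by the preconditioner
```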
aro#1177: Yep! Makes sense. With Shampoo it would be weight increase instead of weight decay
aro#1177: (Sometimes)
alstroemeria313#1694: I tried to make the preconditioner decay EMA-style and managed to break it instead 🙃
aro#1177: Need grafting
alstroemeria313#1694: Ah
aro#1177: update * ||gradient|| / (||update||_2 + 1e-16)
aro#1177: Gradnorm/shampoo update grad norm
alstroemeria313#1694: Ah
Louis#0144: Thank god for the spoiler warning
Louis#0144: Omg I can’t believe he just spoiled every anime ever in two words
alstroemeria313#1694: @aro Oh, I don't have to set lr above 1 if I set epsilon low enough :)
alstroemeria313#1694: PyTorch uses mean instead of sum as the reduction in its loss functions by default
alstroemeria313#1694: So you usually have *really small* gradient elements
Aran Komatsuzaki#5714: did you guys talk about EMA/SWA/lookahead?
aro#1177: One more reason it blows up is the svd: there is a pow(s, inverse root), change that to pow(max(s, some epsilon), inverse root).
aro#1177: I only saw look ahead
aro#1177: s can be very small, 1e-30 after doing ema of GG.T, so pow will blow up
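In the eigendecomposition/SVD formulation, the fix is just clamping before the power, roughly as below. The JAX implementation linked earlier uses a coupled Newton iteration instead; this is the simpler spectral version, done in fp32 or better.
```python
import torch

def inverse_pth_root(mat: torch.Tensor, p: int, eps: float = 1e-6) -> torch.Tensor:
    # mat: symmetric PSD statistics matrix (e.g. accumulated G G^T)
    s, q = torch.linalg.eigh(mat)
    s = torch.clamp(s, min=eps) ** (-1.0 / p)   # clamp so tiny s can't blow up the power
    return (q * s) @ q.t()                      # Q diag(s^{-1/p}) Q^T
```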
alstroemeria313#1694: Oh
alstroemeria313#1694: I was adding the epsilon * identity matrix to the preconditioner outside the EMA
aro#1177: Even so, some small change for indefinite as |
aro#1177: Chance*
aro#1177: Due to numerics
joaogui1#8461: Larger levels?
alstroemeria313#1694: Ah
aro#1177: Larger scale.
aro#1177: Use same epsilon for max()
alstroemeria313#1694: Don't people flush the singular values below epsilon to 0 and then not invert those?
alstroemeria313#1694: Like in the pseudoinverse?
aro#1177: Yeah, but we found this is better. Nocedal has a discussion about this issue in his book without any conclusion on what is correct.
alstroemeria313#1694: Ahh.
joaogui1#8461: Got it
joaogui1#8461: Also, when you say lookahead may work better with 2nd order methods, does that include shampoo?
joaogui1#8461: Any experience on which of these work best?
Aran Komatsuzaki#5714: in fact, we did a long discussion yesterday about it cuz none of us really tried the comparison before lol
Aran Komatsuzaki#5714: one thing i know is that EMA works really well on image generative models like vae variants
Aran Komatsuzaki#5714: i tried it on vdvae, and image quality noticeably improved
alstroemeria313#1694: GAN generators especially
Aran Komatsuzaki#5714: my guess is that it should work on transformer LM just as well.
Aran Komatsuzaki#5714: in fact, checkpoint averaging that used to be applied to transformer LMs is
Aran Komatsuzaki#5714: a cousin of EMA |
Aran Komatsuzaki#5714: i think EMA is better than checkpoint averaging
joaogui1#8461: Hmmm, interesting
alstroemeria313#1694: It smooths over the variations induced by the generator/discriminator dance
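For reference, a weight EMA is only a few lines; `model` here is whatever module is being trained, and decay ≈ 0.999 is a typical (assumed) value. The EMA copy is the one used for evaluation/generation.
```python
import copy
import torch

ema_model = copy.deepcopy(model)  # shadow copy used for eval/generation

@torch.no_grad()
def update_ema(ema_model, model, decay=0.999):
    # called after every optimizer step
    for e, p in zip(ema_model.parameters(), model.parameters()):
        e.mul_(decay).add_(p, alpha=1 - decay)
```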
joaogui1#8461: I believe someone also commented about lookahead helping with GANs, wonder if it helps with LMs
EricHallahan#1051: Oh, is that why GANs need EMA? That makes total sense.
Aran Komatsuzaki#5714: like six of us think EMA/lookahead would help, but none of us has really run any experiment lol
Louis#0144: @joaogui1 hi
Louis#0144: I know u
Louis#0144: You follow me on twitter
alstroemeria313#1694: https://arxiv.org/pdf/1806.04498.pdf
aro#1177: Yeah though I am a skeptic on lookahead
joaogui1#8461: Lol
joaogui1#8461: What's your @?
joaogui1#8461: Fair enough
Louis#0144: Louis Castricato
joaogui1#8461: Oh, hi!
StE_gUy#5856: I know this is a bit off topic for this server, but I don't know where else to ask: Does anyone know of a good discord community/channel for sharing entertaining or interesting GPT3 prompts/responses? Someone suggested #the-faraday-cage-archive here but it's not quite what I was looking for.
bmk#1476: anything in #communities ?
StE_gUy#5856: Checked out a few but haven't found it yet. Maybe I just need to look a bit harder
alstroemeria313#1694: I moved the Shampoo SVD to the GPU and it goes faster for me |
StellaAthena#3530: Yannic’s discord maybe?
StE_gUy#5856: They don't have a channel dedicated to sharing prompts/responses. That's what I'm looking for.
StellaAthena#3530: A dedicated space? Yeah I don’t know anywhere that has that specifically
StellaAthena#3530: We *talk about* prompting a lot
StellaAthena#3530: But don’t have a dedicated space for collecting prompts + responses
StE_gUy#5856: I find transformers fascinating because cleverly-engineered prompts make all the difference. And I'm trying to refine the art of prompt writing to be as straightforward as possible.
Basically I want to share observations I've had about what's useful and what actually hampers the process.
StE_gUy#5856: Plus there are so many responses that are too damn funny not to share.
StE_gUy#5856: Works for me!
StellaAthena#3530: Whose line is it anyways?
StellaAthena#3530: It’s a popular improv show in the US
StE_gUy#5856: I'm not good at naming, but some ideas #prompt-engineering, #prompts,
zphang#7252: prompt-and-circumstance
StE_gUy#5856: Meh. Not really interested in overloading the function of the channel, thinking a bit more about it.
Clay Mullis#7136: Best channel to ask a question about ML deployment?
cfoster0#4356: We don't really do deployment, tbh ...
EricHallahan#1051: Here or #off-topic maybe, but :thisup:
Clay Mullis#7136: off-topic it is. thanks
zphang#7252: There're some slides on model deployment in Chip Huyen's class: |
https://stanford-cs329s.github.io/syllabus.html
chirp#4545: @Deleted User were you the one who asked about how to host models cheaply?
apparently you can now run GPT-2-1.5B on Lambda! https://nostalgebraist.tumblr.com/post/646235079906148352/did-you-know-that-gpt-2-can-run-on-aws-lambda
InquilineKea#6571: have mini-AIs run over the recordings of my entire videostream (nlp from text) and have them figure out patterns in my activity that are fascinating and demand more intermittent reinforcement learning. this is how to best reinforce your memory
InquilineKea#6571: does anyone take screen recordings of their entire screen and put it into input for the next gpt?
InquilineKea#6571: "As an aside a business/foundation I've always wanted to start is a businesss that stores and encrypts peoples data to be released/read by historians X years from now." - a friend
Sid#2121: iirc it's this deoldify model https://github.com/jantic/DeOldify
nz#9710: something like this I think
nz#9710: https://aliaksandrsiarohin.github.io/first-order-model-website/
Kia#2550: So guys... can GPT, and probably future GPT-Neo models, do simple equations like addition, multiplication, etc., and simple conversions?
Kia#2550: Hehe... but not really for academic use... just for computing ingredient prices and overall cost per sale
Kia#2550: Haha Math :wojak_despair:
EricHallahan#1051: I would say to just give BERT a calculator.
Kia#2550: Non the less interesting
Kia#2550: Thanks
EricHallahan#1051: Ideally we would like to significantly outperform GPT-3 in math.
EricHallahan#1051: But it is obviously much easier to just plug it into a calculator `:P`
Sphinx#2092: I mean, people already did that and published a paper on it. It works much better, iirc.
EricHallahan#1051: I really like the concept. |
Louis#0144: gm goosegirls
Kia#2550: Nonetheless true... because GPT-3 just sees numbers as special fonts that it doesn't understand
Kia#2550: All that funding and investment going to an AI that doesn't understand simple mathematics
mkualquiera#3484: well to be fair GPT doesn't really understand anything at all
mkualquiera#3484: it's just a statistics model at its core
Kia#2550: Yeah
Kia#2550: It's just a special English Teacher that can Keep writing over and over again...But doesn't understand Different writing styles Numbers or any Drawing in your paper
Kia#2550: Just oddly Dumb
CRG#8707: Would you say CLIP encodes understanding in the multimodal neutons? <https://openai.com/blog/multimodal-neurons/>
mkualquiera#3484: I would say CLIP is closer to real understanding yes
mkualquiera#3484: I mean understanding _is_ also a statistical model
Kia#2550: Considering they're paid to work there properly...
EricHallahan#1051: I think there is a non-zero chance that some GPT-Neo models will ditch BPE.
EricHallahan#1051: I am very likely going to try to add it to the repo.
StellaAthena#3530: That would be fun
Kia#2550: Owww interesting
EricHallahan#1051: Just for experimentation purposes.
EricHallahan#1051: It has pretty big drawbacks though like context length.
CRG#8707: This is why I think it doesn't make sense to say it understand or doesn't, only how much. (GPT-2 also has "concept" neurons encoding, say, Harry Potter characters: <https://twitter.com/nostalgebraist/status/1345111569730965504?s=19>)
Kia#2550: I'm Interested... Are You Guys planning to make a simple Playground for GPT-neo or let it stay in The Official site? |
Kia#2550: Wait
Kia#2550: They sound desame
Kia#2550: Uhhh...
EricHallahan#1051: The most we can offer is a Colab notebook.
Kia#2550: Hmm, That's fine non the less
Kia#2550: It's a simple method too
Kia#2550: Probably when I Can Learn to make UI's I can help you guys...If I can, But ow well
CRG#8707: (The playground thing should probably be in the FAQ / rules if it isn't there already.)
Kia#2550: Wait they have FAQ
Kia#2550: Ow I Taught in this server
Kia#2550: :wojak_despair:
Kia#2550: But ow well... Thanks for the conversation and time, have a great day too, bye
cat_#4534: Using the huggingface code, I can do inference for the 2.7B model on CPU at a rate of about 2.75 characters per CPU core minute. That's not as slow as I expected
EricHallahan#1051: What Hugging Face code?
cat_#4534: The one from this pull request
https://github.com/huggingface/transformers/pull/10848
cat_#4534: The model name just has to be changed to gpt_neo_2.7B
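For anyone wanting to try the same thing, a hedged sketch of CPU inference through that PR. The model identifier is an assumption here: the in-progress name is gpt_neo_2.7B, and the eventual hub name may be EleutherAI/gpt-neo-2.7B.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/gpt-neo-2.7B"  # assumed final hub identifier; adjust to match the PR branch
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("EleutherAI is", return_tensors="pt")
out = model.generate(**inputs, max_length=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0]))
```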
CRG#8707: (GPT-neo used by @dril_gpt2) https://twitter.com/kingdomakrillic/status/1375801614154485767?s=19
Louis#0144: 2.7 is too thicc for colab?
mkualquiera#3484: yeah |
mkualquiera#3484: even the repo says so iirc
Louis#0144: O damn
Louis#0144: Even with grad checkpoints?
mkualquiera#3484: dunno, maybe no one has tried it and we are all just assuming it wont work because someone decided to say it doesn't work :berk:
mkualquiera#3484: I mean I know I haven't tried it
thepok#1770: i have heard that work on an even bigger model has started or starts soon? where can i get more information about that?
Louis#0144: There isn’t more information
Louis#0144: We have no release date
Louis#0144: We have a policy of not giving a release date
Louis#0144: It’ll be done when it’s ready
Louis#0144: We are actively working on it though
Louis#0144: @thepok
thepok#1770: i dont mean the big 1t one
EricHallahan#1051: In less time than it took the Cassini family to map France.
Louis#0144: We have zero release dates for any of the models
Louis#0144: We aren’t allowed to give you any release dates
Louis#0144: Not only that we don’t have a consistent internalized deadline
Louis#0144: We couldn’t give you a release date even if we wanted
thepok#1770: okey i didnt ask for realeasedate though 😄
thepok#1770: just more info |
Sahl#0630: they’re definitely working on it
EricHallahan#1051: We really don't know.
bmk#1476: * hopefully
thepok#1770: i understand thanks 🙂
EricHallahan#1051: We are letting #multimodal take over the reigns on this one I'm pretty sure.
EricHallahan#1051: They need the infrastructure anyway for DALL-E.
thepok#1770: great ill folow there
EricHallahan#1051: I don't think they have a timeline other than to have it done before we get to DaVinci.
StellaAthena#3530: Nice! If you do any down-stream stuff with it, definitely let us know.
Also, I both love and hate how good y’all are at finding unannounced things. It seems like every week someone asks about a WIP feature they found the PR for lol
EricHallahan#1051: I initially acted dumb on that one just to see if the information would surface.
EricHallahan#1051: And it did.
StellaAthena#3530: Yeah I was a little curious, given how you were literally testing it yesterday
EricHallahan#1051: Yeah. I don't like talking publicly about things that are highly untested and in development. This case especially as I feel that a HF release is effectively saying "this model can be used in production," which is not a message I want to send given our limited testing.
user91010#6777: Yeah, just give it time. The two smaller models were Soon(tm) right up until they were released.
EricHallahan#1051: You mean Soon™️.
bmk#1476: a large part of that was us being lazy and not having the time to release it lol
bmk#1476: those models sat around for, like, weeks or months before we finally got around to it
bmk#1476: we're *really* not in a rush to do anything
StellaAthena#3530: They were finished in.... late january? |
EricHallahan#1051: Yes. I got involved right as we finished 2.7B
bmk#1476: it gets better - the training code was finished basically months prior but we never actually got around to starting the runs
StellaAthena#3530: imagine how productive we would be if we had someone whose job was to write down ideas people have and schedule tests on TPUs
bmk#1476: im all ears, list ideas pls
bmk#1476: im willing to take on this job and run like 10 experiments for different papers
bmk#1476: as long as whoever has the idea is willing to be first author and write up the paper itself
EricHallahan#1051: I've mentioned the SMS transformer thing a few times, but it really isn't something that is that publishable unless there is something novel about it.
bmk#1476: im not interested in the sms transformer personally
EricHallahan#1051: This little guy doesn't have a name sadly, because it would be appropriate. https://cdn.discordapp.com/attachments/729741769738158194/825425750345515008/853.png
EricHallahan#1051: Yeah, I totally understand the sentiment.
bmk#1476: i dont even care that it's unpublishable
EricHallahan#1051: It isn't that useful.
bmk#1476: i think it's not worth doing
bmk#1476: it's not useful for anything, and it doesnt teach us anything new - so it's not useful for practice, and it's not useful for theory
EricHallahan#1051: I can definitely see that, it has probably been done to death.
freddiemitchell6#0094: I'm down to help, but I'm just a hobbyist.
bmk#1476: what background do you have?
freddiemitchell6#0094: I'm an engineer (not software) but have been spending lots of time on NLP for the past year.
freddiemitchell6#0094: I read 10 papers per week probably
Louis#0144: What do u mean SMS |
EricHallahan#1051: Short Message Service?
Louis#0144: But a transformer?
Louis#0144: Why?
EricHallahan#1051: SMS messages and tweets have very short contexts.
bmk#1476: ah, nice - then it should be pretty easy for you to pick up on the software stuff
Louis#0144: Oh
Louis#0144: Yeah that’s dumb
Louis#0144: Sorry
EricHallahan#1051: It is.
Louis#0144: An XS version of GPT neo would be nice though
Louis#0144: 600M params or something
bmk#1476: 600M is too microscopic
Louis#0144: Something comfortably finetunable on colab
bmk#1476: just use gpt2
Louis#0144: The data quality of the pile is way better than GPT2 though
EricHallahan#1051: Why don't we distill one model down?
Louis#0144: Or that yeah
EricHallahan#1051: Good practice.
EricHallahan#1051: Like take 2.7B or 1.6B and bring it to the size of large?
Louis#0144: Ye |
Louis#0144: That’s a great experiment
Louis#0144: Not publishable
Louis#0144: But worth a blog post
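(For context, a minimal sketch of the kind of soft-target distillation objective being floated here, in the Hinton et al. style; the temperature value and reduction are assumptions, not anything the group has settled on.)
```python
# Hypothetical sketch of a distillation loss for shrinking a larger GPT-Neo
# into a smaller student; the temperature is an assumed hyperparameter.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # The temperature**2 factor keeps gradient scale comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```
In practice this is typically mixed with the ordinary next-token cross-entropy loss on the training data.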
EricHallahan#1051: Then technically it could replace GPT-2 Large on Write with Transformers.
Sphinx#2092: Reminds me of: https://cdn.discordapp.com/attachments/729741769738158194/825428647526662164/unknown.png
EricHallahan#1051: Then all the people who want to have their web interface can have it.
EricHallahan#1051: HF gets a superior model in that size regime.
EricHallahan#1051: And we gain ~~notoriety~~ notability and the experience for hopefully doing it at a larger scale later on.
EricHallahan#1051: :gameryes:
guac#4716: notoriety nooooo EAI is good peoples
EricHallahan#1051: Wrong word.
EricHallahan#1051: `:P`
EricHallahan#1051: Too little sleep this week.
guac#4716: rest well eleuther bunny 🦦
EricHallahan#1051: And yes, that is me being passive aggressive.
StellaAthena#3530: @bmk Train transformers at multiple levels of precision (when we talked about this last month I think you said 32 and 64 make the most sense?) from the same initialization. Then replace the $(W_kX)^T (W_v X)$ part with $X^T W X$ and train them again.
At first we don’t need to train them for very long, I’m interested in looking at the distance between the weights of the two transformer structures
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/825433664933593138/193204646687408129.png
StellaAthena#3530: That’s a bit weirdly worded, does it make sense? |
EricHallahan#1051: TPUs hate double-precision floating point though.
bmk#1476: when i said that i was mostly thinking of stuff we can run without many code changes tbh
bmk#1476: that sounds complicated in terms of code changes
bmk#1476: i thought the emphasis was on the *scheduling* part
StellaAthena#3530: Isn’t it just changing like two lines in the attention layer?
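(For what it's worth, here is roughly what that swap looks like in isolation, a toy sketch using the notation above: the factored pair of projections versus a single full-rank W. This is illustrative only and not the actual GPT-Neo attention code.)
```python
# Toy comparison of the two score computations being discussed; shapes and
# initialization are placeholders, not real model hyperparameters.
import torch

d_model, d_head, n_tokens = 64, 64, 10
X = torch.randn(d_model, n_tokens)  # columns are token representations

# Factored form: two learned projections, scores = (W_k X)^T (W_v X)
W_k = torch.randn(d_head, d_model)
W_v = torch.randn(d_head, d_model)
scores_factored = (W_k @ X).T @ (W_v @ X)   # (n_tokens, n_tokens)

# Merged form: a single learned bilinear map, scores = X^T W X
W = torch.randn(d_model, d_model)
scores_merged = X.T @ W @ X                 # (n_tokens, n_tokens)
```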
bmk#1476: tl;dr no
StellaAthena#3530: 😦
bmk#1476: and tbh i'm kinda pessimistic about this experiment in general
StellaAthena#3530: Why?
bmk#1476: just intuition
bmk#1476: no solid reason
bmk#1476: it's a weak prior
StellaAthena#3530: For the record, I’m expecting the XWX layer to do worse
StellaAthena#3530: What I want to find out is why
bmk#1476: tbh the general class of experiments i was thinking of doing was mostly training models on different data or different hparams and testing them on eval harness
bmk#1476: that's the pipeline that would be the easiest to do
StellaAthena#3530: Ah
EricHallahan#1051: Oh, yeah, that makes sense.
bmk#1476: speaking of which
bmk#1476: new eval harness ada results |
bmk#1476: https://gist.github.com/leogao2/d00ee248359e6363be4957ba7d61094e
bmk#1476: which of these results look suspicious
StellaAthena#3530: That’s real bad on lambada
StellaAthena#3530: 50% and a PPL of 10?
bmk#1476: might just be the small sample size
bmk#1476: lemme run a full lambada run
StellaAthena#3530: For context GPT-2 gets 63% and a PPL of 8.6
EricHallahan#1051: Yeah, that sounds wrong.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/825437255404748840/unknown.png
bmk#1476: full lambada results
StellaAthena#3530: That’s disappointing if true
StellaAthena#3530: Try running GPT-2 on it
bmk#1476: gpt2-1.5B?
bmk#1476: im gonna run 117M first because im lazy and dont want to wait for 1.5B
bmk#1476: 117M https://cdn.discordapp.com/attachments/729741769738158194/825437766355124304/unknown.png
bmk#1476: can you check if this is reasonable?
EricHallahan#1051: They list PPL as 35.13 in the paper.
bmk#1476: seems about right
bmk#1476: maybe ada is just worse than the model in gpt3 paper
EricHallahan#1051: ACC is 45.99 |
bmk#1476: we know they're changing models around anyways
Sid#2121: can confirm this to be the case lmao, i tried it once and it OOMed but i couldn't be bothered to fiddle with the hparams
bmk#1476: 1.5B https://cdn.discordapp.com/attachments/729741769738158194/825439458999533639/unknown.png
EricHallahan#1051: ```
| Task |Metric|Value |
|-------|------|-----:|
|lambada|ppl |8.63 |
| |acc |0.6324|```
Sid#2121: wait, this is GPT2, or neo?
bmk#1476: gpt2
EricHallahan#1051: GPT-2
Sid#2121: that seems...
Sid#2121: well, either the code is wrong, or openAI are
bmk#1476: some help hunting down the issues would be nice
Sid#2121: did you ever push the code somewhere?
bmk#1476: yes
Sid#2121: ok
Sid#2121: where
bmk#1476: er.. https://github.com/EleutherAI/lm-evaluation-harness/
Sid#2121: where's the lambda code |
bmk#1476: https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/tasks/lambada.py
Sid#2121: `lambada` :guilty:
Sid#2121: wait what, where's the actual task lol
Sid#2121: where are you getting the accuracy numbers
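(As background, this is a generic sketch of how LAMBADA-style last-word accuracy and perplexity are usually computed; it assumes an HF-style model/tokenizer interface and is not the eval harness's actual implementation.)
```python
# Generic sketch of LAMBADA-style metrics: predict the final word of each
# passage, score greedy accuracy and token-level perplexity on that word.
import math
import torch

def lambada_metrics(model, tokenizer, passages, device="cpu"):
    nll_sum, n_target_tokens, correct = 0.0, 0, 0
    for text in passages:
        context, target = text.rsplit(" ", 1)          # last word is the target
        ctx_ids = tokenizer.encode(context)
        tgt_ids = tokenizer.encode(" " + target)
        ids = torch.tensor([ctx_ids + tgt_ids], device=device)
        with torch.no_grad():
            logits = model(ids).logits                  # assumes HF-style output
        # Logits at position i predict token i + 1, so these predict the target tokens.
        tgt_logits = logits[0, len(ctx_ids) - 1 : -1]
        tgt = torch.tensor(tgt_ids, device=device)
        log_probs = tgt_logits.log_softmax(-1)
        nll_sum += -log_probs.gather(1, tgt[:, None]).sum().item()
        n_target_tokens += len(tgt_ids)
        # Accuracy requires every target token to be the greedy argmax.
        correct += int((tgt_logits.argmax(-1) == tgt).all().item())
    return math.exp(nll_sum / n_target_tokens), correct / len(passages)
```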
jrowe#5371: sounds like a dance style
jrowe#5371: so not like there's any need for even more cool eleuther things, but I have an experiment proposal - a transformer that takes an image of writing and translates to English, using cuneiform, glyphs, etc, and applied to currently untranslated artifacts
jrowe#5371: <https://en.m.wikipedia.org/wiki/Undeciphered_writing_systems>
jrowe#5371: that'd be a hell of a paper, plus a pretty big deal for many fields
Sid#2121: how would you train that
Sid#2121: if they're untranslated you don't have any labelled data
Louis#0144: lol
Louis#0144: I don’t understand where this idea comes from at all
jrowe#5371: train on all known writing systems, set aside a set for validation - shouldn't a transformer be able to generalize?
Sphinx#2092: You mean this? https://arxiv.org/abs/2010.10648
Louis#0144: How
Louis#0144: How could you translate at all
Louis#0144: You have *no* labels
Louis#0144: You can’t do this unsupervised
jrowe#5371: I don't think you're on the same track
jrowe#5371: train system using labeled data - all other known writing systems |
nz#9710: don't you still need data to finetune if you want to approach an unseen language?
jrowe#5371: it should generalize features of writing, mapped to English output
Sphinx#2092: oh you want the unseeen case
Sphinx#2092: People have already done that as well
Sphinx#2092: https://arxiv.org/abs/1910.13998
jrowe#5371: with giant ass transformers™️?
Sphinx#2092: No
Sphinx#2092: Transformer base.
Sphinx#2092: lol
Sphinx#2092: But I think this heavily abuses some oddities of this particular training set.
Sphinx#2092: I explored this type of approach on more traditional MT tasks and it failed.
Sphinx#2092: Thankfully lol
nz#9710: yea I don't think it's gonna work tbh
Sphinx#2092: Your only hope really is to rely on some lexical similarity.
jrowe#5371: I'm not so sure - gpt models already seem to handle invented words and on the fly linguistic rules just from exposure to synthetic languages - extracting relationships between symbols and mapping to English seems plausible
Louis#0144: You can (probably) also provably show that such a model cannot exist lol
StellaAthena#3530: Historical linguists are really good at their job. Most artifacts we can’t translate have unique words that show up nowhere else in the historical record or for which we have no translation of the language at all
jrowe#5371: right, not thinking of the sparse ones necessarily, though, more like voynich manuscript type situations
jrowe#5371: where you've got a nice big chunk of data
jrowe#5371: or thousands of untranslated carvings, etc |
jrowe#5371: I'll do some reading lol
Sphinx#2092: You can probably write a nice paper if you find some natural way of incorporating unseen languages.
Sphinx#2092: Especially for some of these big models. As it is becoming publicly known, a lot of these multilingual datasets are poorly labelled. I wouldn't even be surprised if there are some languages in these datasets which are not accounted for.
Sphinx#2092: The model might already even know how to do it and you don't know.
Winter#7938: Hey all
Winter#7938: Glad you guys are working on an open source GPT-3 model
Winter#7938: Meaningful and valuable work, for sure
EricHallahan#1051: Hey!
Lurker or new?
Winter#7938: Brand spanking new
EricHallahan#1051: Welcome! If you haven't looked in #rules yet, we have a bunch of resources that we are (slowly) updating.
Winter#7938: Ooh goody, that means I have an opportunity to not ask dumb questions
Winter#7938: Well, it looks like offering up my GPUs, folding@home style, is not going to work according to the FAQ
freddiemitchell6#0094: AFAIK, languages are combinations of other languages, via cultural evolution. So interpolating between similar languages should be possible.
StellaAthena#3530: Yes and no. And most of what’s doable by interpolation already has been. The languages we don’t know how to decode *don’t have* similar languages that are known
StellaAthena#3530: tl;dr linguists aren’t completely incompetent guys, I promise
AI_WAIFU#2844: yeah, latency and bandwidth requirements are a bitch.
bmk#1476: STOP DOING LINGUISTICS
grammatical structures were not meant to be given names! |
Sphinx#2092: I'm not saying it's impossible. Quite the opposite, not only is it possible, it would be a nice paper.
AI_WAIFU#2844: 600GB/s NVLink connections and you still can't keep the GPUs fed.
Daj#7482: New sentences weren't meant to be constructed! Wanted to say something new anyways? We had a tool for that, it was called GRUNTING AND POINTING
Sphinx#2092: This also goes beyond just rare languages though. Even for 'common' languages, it would be nice to have a cleaner solution beyond that current approach of appending tokens to guide the model.
StellaAthena#3530: You right now https://cdn.discordapp.com/attachments/729741769738158194/825451510283632680/image0.png
Sphinx#2092: Especially if you want to treat concepts like dialects or registers as 'languages'.
freddiemitchell6#0094: Makes sense. Another interesting angle is that once "we" have access to all archival text around the world, we could see the combinations of languages more clearly across time. Almost like mixup.
freddiemitchell6#0094: Just BSing here 🙂
Louis#0144: Mf wants to make a poset of languages
Louis#0144: @StellaAthena hurry Stella get the ultraproducts!
jrowe#5371: lol
bmk#1476: "poset? is that a free category but nonspicy?"
jrowe#5371: Stella's example unicorns prompt text appeared to make up its own language, so i went on a linguistics hunt through Wikipedia, then saw that the Louvre had released its digitized collection for free and thought a translator might be possible
Daj#7482: Just train a transformer on all of physics, simulate the universe from the big bang, and reconstruct the lost languages
Daj#7482: ez
jrowe#5371: then just transfer Satoshi's bitcoin to my wallet and voila
mkualquiera#3484: I got a notification but I'm too drunk to figure out what it was
mkualquiera#3484: so
mkualquiera#3484: whoever pinged me
mkualquiera#3484: hi |
Winter#7938: I didn't ping you, but hey there
Winter#7938: Discord really likes making people search for their pings
Winter#7938: Someone should train a neural net to solve that problem... except there's no training data 🤷
alstroemeria313#1694: It'll be great when `torch.vmap()` finally fully works
alstroemeria313#1694: I'm using PyTorch nightly in a container rn just to use its current implementation
chilli#5665: Haha what issues do you have with it
alstroemeria313#1694: Some things are slower than Python `map()`
alstroemeria313#1694: But I used it in some other code to get a 1.5x speedup
alstroemeria313#1694: Because it meant I was feeding the GPU better
chilli#5665: What kind of stuff are you vmapping over?
alstroemeria313#1694: A bunch of stuff, including my own good differentiable image downsampling code and CLIP evaluations
chilli#5665: Ah, so it includes a lot of stuff like
chilli#5665: Regular convs and stuff like that
alstroemeria313#1694: Yes
alstroemeria313#1694: And transformers
alstroemeria313#1694: ahaha randomness doesn't work inside vmap yet
chilli#5665: Does it warn you about what ops it's using fallbacks for?
alstroemeria313#1694: > To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True)` before the call to `vmap`
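(A minimal sketch of using that debug flag to see which ops hit the vmap fallback path; the function and input below are placeholders, not the actual code being discussed.)
```python
# Enable per-op fallback warnings before calling vmap, as the quoted message
# suggests; per_sample_fn is just a stand-in that exercises replicate padding.
import torch
import torch.nn.functional as F

torch._C._debug_only_display_vmap_fallback_warnings(True)

def per_sample_fn(x):
    return F.pad(x, (0, 1, 0, 1), 'replicate').mean()

out = torch.vmap(per_sample_fn)(torch.randn(4, 3, 8, 8))  # warnings name each fallback op
```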
chilli#5665: Yeah that's an annoying thing about vmap
chilli#5665: Lol |
chilli#5665: The problem is that the semantics are somewhat unclear
chilli#5665: This is one of the reasons Jax has their whole "rng key" stuff
chilli#5665: If you give me a list of ops I can forward them to the guy working on this stuff :P
alstroemeria313#1694: Oh, also .detach() doesn't work yet
chilli#5665: It doesn't? Like, it errors?
alstroemeria313#1694: Yes, it's a RuntimeError
alstroemeria313#1694: @chilli Here are the warnings https://pastebin.com/EXUYEfvb
alstroemeria313#1694: There are a bunch.
alstroemeria313#1694: It was still 1.5x faster.
chilli#5665: Haha, damn
alstroemeria313#1694: Because I got to use a decent batch size instead of batch size 1.
chilli#5665: Cool, I'll send it to him
alstroemeria313#1694: Thank you :)
chilli#5665: Could you also post the example where it's slower?
alstroemeria313#1694: @chilli Vmapping over an instance of this class ```python
import torch
from torch import nn
from torch.nn import functional as F

class TVLoss(nn.Module):
"""L2 total variation loss, as in Mahendran et al."""
def forward(self, input):
input = F.pad(input, (0, 1, 0, 1), 'replicate') |
x_diff = input[..., :-1, 1:] - input[..., :-1, :-1]
y_diff = input[..., 1:, :-1] - input[..., :-1, :-1]
return (x_diff**2 + y_diff**2).mean()```
alstroemeria313#1694: Specifically replication pad 2D and the mean are the unsupported ops
chilli#5665: How much slower is it?
alstroemeria313#1694: It ran at 1/2 the speed on CPU
alstroemeria313#1694: Than just doing the calls one at a time, batch size of 1, in a `map()`
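(A hypothetical reconstruction of the comparison being described, reusing the TVLoss module quoted above; the batch shape is made up, and it assumes a PyTorch build where torch.vmap is available.)
```python
# Hypothetical timing comparison: vmap over a batch versus one call per
# sample via map(); batch size and image shape are placeholder values.
import torch

tv_loss = TVLoss()                       # the module defined above
batch = torch.randn(8, 3, 64, 64)

# Batched: vmap maps tv_loss over the leading dimension in one call.
losses_vmap = torch.vmap(tv_loss)(batch)

# Unbatched: one call at a time with batch size 1, as described above.
losses_map = torch.stack(list(map(tv_loss, batch.unsqueeze(1))))
```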
chilli#5665: Interesting
chilli#5665: Perhaps the extra stacks cause slowdowns
chilli#5665: In general I wouldn't be shocked if these kinds of pointwise ops aren't faster with vmap on CPU though
cfoster0#4356: Can't tell whether or not this is using a GPT-Neo model under the hood https://pchojecki.medium.com/test-gpt-3-for-free-a3e55b753b51
Sid#2121: try prompting it with code lol. The pile has a lot of github in it, if it's any good it's probably neo
Sid#2121: well i'm assuming it's not actually gpt3
Sid#2121: but i think our model is better at python than gpt3, at least. This is only from eyeballing it, i don't really have a way to measure
jrowe#5371: would almost have to be neo, nobody gonna pay oa for a public server
jrowe#5371: definitely neo, he's a fan
cfoster0#4356: Ah cool
Sid#2121: oh yeah, i missed that
Sid#2121: i guess they finetune it, cool!
cfoster0#4356: It spits out code like Neo :) |
cfoster0#4356: >>> class FeedForward(nn.Module):
def __init__(self, dim, dim_out = None, mult = 4, glu = False, dropout = 0.):
super().__init__()
### COMPLETION ### nn.LeakyReLU.init(inplace=False)
super().__init__()
self.dim = dim
self.dim_out = str(dim_out)
self.glu = glu
self.multi = mult
self.dropout = dropout
self.relu = nn.LeakyReLU(inplace=False) |