- `"step_shift"` in config, if present, is subtracted from steps when computing learning rates and stuff. this is how i did lr warmup/decay sensibly while finetuning
- `"min_lr_frac"` in config controls minimum for lr decay, rather than hardcoded 0.1 * max lr
- `--noise-scale` script arg to measure gradient noise scale in tensorboard summaries
- `rob_util.py` has the helpers i wrote to read tensorboard summaries in python, i dunno how others do it
- additional options for `"precision"` in config. most useful is `"mixed_precision_load_bfloat16_once"` for converting the 1.3B checkpoint that was saved in bf16. it will *load* from bf16, but *save* in fp32, while training with bf16 for activations only
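A minimal sketch of how a `"step_shift"` offset and `"min_lr_frac"` floor could plug into a warmup-plus-cosine-decay schedule, plus one common way to read TensorBoard scalars in Python. The function names and defaults are hypothetical, not the repo's actual code.
```python
import math
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def shifted_lr(step, max_lr, warmup_steps, total_steps, step_shift=0, min_lr_frac=0.1):
    # Subtract step_shift so warmup/decay restart from zero when finetuning.
    s = max(step - step_shift, 0)
    min_lr = min_lr_frac * max_lr          # configurable floor instead of a hardcoded 0.1 * max_lr
    if s < warmup_steps:                   # linear warmup
        return max_lr * s / max(warmup_steps, 1)
    frac = min((s - warmup_steps) / max(total_steps - warmup_steps, 1), 1.0)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * frac))

def read_scalars(logdir, tag):
    # Read one scalar series (e.g. a noise-scale summary) from a TensorBoard log dir.
    acc = EventAccumulator(logdir)
    acc.Reload()
    return [(e.step, e.value) for e in acc.Scalars(tag)]
```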
nostalgebraist#3542: *also a bunch of code formatter changes that make the diffs hard to read, and a bunch of useless crud from my attempts to do inference in tf-mesh :p*
Deleted User#0000: can anybody PLEASE help me?
I am trying to fine tune gpt neo
I ONLY found this guide: https://medium.com/geekculture/fine-tune-eleutherai-gpt-neo-to-generate-netflix-movie-descriptions-in-only-47-lines-of-code-40c9b4c32475
but the linked guide tells me how to generate text
without any input
but i
want it to be fine tuned
so that it learns to predict output when given some input
instead of some
'<startoftext>' '<endoftext>' bullshit
so pls help me
EricHallahan#1051: This repo supposedly works.
EricHallahan#1051: https://github.com/Xirider/finetune-gpt2xl
Rina#0391: Hi everyone
Rina#0391: can anyone give me ideas for my gpt3 prompts
Rina#0391: I ran out
Rina#0391: besides coding, homework etc
Rina#0391: does anyone know how to link it with a python project?
jellyvish#0841: hey all - Vishal here. I mostly work on deployment/governance research at DM, and have some experience with marketing/communications, product stuff, etc. I know this is mostly a ML research focused group, but feel free to get in touch if there's a project that might benefit from that kind of skillset.
EricHallahan#1051: Welcome!
AerysS#5558: I am currently using free GPU on Kaggle/Colab to run my code, but right now they are not enough. I plan to use GCP/AWS. Any suggestions which should I choose? I am a student, and will mainly run my personal code there so no need to worry about production ready, etc.
The cheaper the better I think. I cannot afford a big amount.
𓅬 gabriel_syme 𓅬#3220: Concerning GPU prices, the cheapest would be smth like vast.ai I think. Really not sure if that fits what you want to do as a system though
AerysS#5558: I only try some architectures I am experimenting with, so anywhere that is Colab/Kaggle-like is fine
𓅬 gabriel_syme 𓅬#3220: vast.ai is pretty cool then. I would get a V100 for about $0.6/h and the 3090 is not much more expensive (maybe around $0.9/h at the high end)
𓅬 gabriel_syme 𓅬#3220: but ofc you can get a V100 (usually) in Colab Pro as well so that's definitely worth a shot, if that GPU is enough
AerysS#5558: Main reason that pushes me away from Colab is I cannot let it run in the background like Kaggle. Does vast.ai support it?
𓅬 gabriel_syme 𓅬#3220: you mean close connection to the VM and let it run? yeah sure
𓅬 gabriel_syme 𓅬#3220: they are just like AWS instances I guess, only cheaper and not as 'safe' with your data
AerysS#5558: you mean there's no mention about data privacy? aka they can read my code?
CKtalon#7792: yes, you are running it on someone's computer after all
𓅬 gabriel_syme 𓅬#3220: I think there is mention just not as tight as a cloud instance might(?) be
𓅬 gabriel_syme 𓅬#3220: in any case for experiments running open source code it is perfect
AerysS#5558: hmmm I am not running open source code so it's a problem imo
𓅬 gabriel_syme 𓅬#3220: hmm, you could still try some alternatives like https://gpu.land/ or maybe even grid.ai (although they are new and use AWS, I do think they offer cheaper prices)
𓅬 gabriel_syme 𓅬#3220: gpu.land seems nice, I've never used it though.
alstroemeria313#1694: I thought it was dead
𓅬 gabriel_syme 𓅬#3220: oh is it? dang. I do remember you had issues setting up an environment there
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/841239794242682901/Screen_Shot_2021-05-10_at_2.06.30_AM.png
𓅬 gabriel_syme 𓅬#3220: RIP
AerysS#5558: lol they should have put that warning on the front page
AerysS#5558: @𓅬 gabriel_syme 𓅬 btw it would be convenient if I can edit the code directly on the platform. It should work like my local machine, while having save and power like Kaggle. I guess AWS is the way to go?
AerysS#5558: Paperspace free does not allow private notebooks so that's out of the question, moreover their docs throw 404 so I guess they are dead too?
AerysS#5558: Also AWS and Azure declined my student tier after 1 minute of "careful review"
Singularity#9001: I've got some interesting ideas if you'd like to hear
Xirider#4010: the cheapest one i found is https://datacrunch.io/
there you pay $0.65 / hour with one V100
alexyz#3459: vast.ai can have lower prices
alexyz#3459: At some point I saw a GPU on-demand for $0.33/hour for a V100
alexyz#3459: usually it's more like $0.6/hour
alexyz#3459: or $0.7/hour
EricHallahan#1051: The point is that they can't use vast.ai.
alstroemeria313#1694: i might check them out when they get their 80GB A100s
finetune#0907: datacrunch is cheap, but so little ram
zphang#7252: When a paper says "16 TPUv3s", is there an unstated assumption of what size TPU that is?
bmk#1476: err i'd guess it means a (single) v3-32 but that's a total guess
zphang#7252: cool, time to test by OOM errors
bmk#1476: like is this a context where they're training a bunch of small models?
bmk#1476: or one big model
zphang#7252: nope it's 1 big model
bmk#1476: hm i doubt they figured out swarm training
bmk#1476: so theyre probably talking about one big pod
zphang#7252: this is electra btw
bmk#1476: and they probably got cores and dies mixed up
bmk#1476: they could also mean a v3-128
bmk#1476: if they treat each entire 8-core system as one tpu
bmk#1476: actually that seems more likely
Louis#0144: Omg guys I realized today
Louis#0144: My Leo number
Louis#0144: Is 2
Louis#0144: Wow
Louis#0144: Legendary
bmk#1476: lol
EricHallahan#1051: My Leo number is 1 if you count the blog.
EricHallahan#1051: And NaN if you don't lol
bmk#1476: my number is 0 :Chad:
Daj#7482: willing to bet this is a v3-16 unless it's from Google lol
zphang#7252: it's from Google lol
bmk#1476: theres no such thing as a v3-16 tho
bmk#1476: it's the one size that gets jumped over
Daj#7482: huh so it is
Daj#7482: TIL
EricHallahan#1051: ^
gwern#1782: maybe there are -16s internally?
nz#9710: isn't it more likely to be 16 * TPUv3-8s = TPUv3-128?
zphang#7252: okay, so it's the same configuration as BERT, and in the BERT paper it says
zphang#7252: https://cdn.discordapp.com/attachments/729741769738158194/841400923912798259/unknown.png
zphang#7252: and google injecting ads into the BERT paper
zphang#7252: https://cdn.discordapp.com/attachments/729741769738158194/841401183632621578/unknown.png
zphang#7252: this makes it sound like TPU v3-32?
zphang#7252: although at this point I think we've exhausted almost every possible setup
gwern#1782: maybe it's time to just email or tweet them
Bruce23#6204: Hi, can I use GPT NEO for tasks like paraphrasing?
EricHallahan#1051: There should be nothing stopping you.
Bruce23#6204: What's a good resource for "prompt design"?
StellaAthena#3530: I thought v3-8 was the smallest, but this implies using 4x v3-4s, no?
bmk#1476: each chip is 2 cores
bmk#1476: so v3-32
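The counting behind those guesses, assuming the usual TPU v3 layout of 2 cores per chip and 8 cores per host board (the "-N" in slice names counts cores, and public slices jump from v3-8 to v3-32):
```python
cores_per_chip, cores_per_host = 2, 8
print(16 * cores_per_chip)   # 32  -> "16 TPUv3s" read as chips  => a v3-32 slice
print(16 * cores_per_host)   # 128 -> "16 TPUv3s" read as boards => a v3-128 slice
```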
Bruce23#6204: Hm, GPT-3 has an example prompt like that to summarize text: https://pastebin.com/pqPJWVRN
Bruce23#6204: Does GPT neo understand commands the same way?
EricHallahan#1051: It should.
Bruce23#6204: Ok, thank you! It just repeats the text for me and then continues with some unrelated text, https://pastebin.com/hURhJwQm So I guess that a matter of how I set my parameters?
Sid#2121: GPT-neo's few shot abilities won't be as strong as GPT-3. Those capabilities emerge at scale.
Sid#2121: if it's repetitive, turn up the temperature
Bruce23#6204: Ok, thanks! 🙂
Bruce23#6204: wow, its very good!
Bruce23#6204: I can't find information on stop sequences. A string that tells the model to stop the generation. I guess that's not supported?
mkualquiera#3484: that depends on the api that you're using
Bruce23#6204: I am using api-inference.huggingface.co
mkualquiera#3484: in that case you have to ask huggingface
Bruce23#6204: Ok
Bruce23#6204: ty
EricHallahan#1051: I would suggest reading the Hugging Face API docs.
EricHallahan#1051: It has some very useful information.
Deleted User#0000: When is GPT-Neo gonna have the same amount of parameters as GPT3?
Deleted User#0000: or will it?
gwern#1782: see the faq
alexyz#3459: Soon™️
Bruce23#6204: Seems like you can only set an eos_token_id (stop sequence) when you provide your own tokenizer config :/
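When running the model locally rather than through the hosted API, the repetition and stop-sequence knobs discussed above are exposed directly on `generate()`. A minimal sketch, with a placeholder prompt and arbitrary sampling values:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

prompt = "Summarize the following text.\n\nText: ...\n\nSummary:"   # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt")

output = model.generate(
    **inputs,
    do_sample=True,          # sampling plus temperature/top_p reduces verbatim repetition
    temperature=0.9,
    top_p=0.95,
    max_new_tokens=100,
    eos_token_id=tokenizer.encode("\n")[0],   # treat a newline as the stop sequence
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```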
EricHallahan#1051: https://eleuther.ai/faq
cfoster0#4356: Good to see OpenAI folks using the Pile https://youtu.be/mVZE7wm1skw
bmk#1476: i am so happy lol
zphang#7252: https://www.youtube.com/watch?v=mVZE7wm1skw&t=5m38s
zphang#7252: >those 2 words, spoken on the OpenAI youtube channel
bmk#1476: :tribalism:
Louis#0144: They’re probably in the discord
bmk#1476: i'm so hyped
bmk#1476: wait
jesse#7865: oh we're here
mkualquiera#3484: That's kinda sus
bmk#1476: i need to make one crucial improvement to the :tribalism: emote
gwern#1782: oh no! now it'll feel awkward when I criticize or meme about y'all
bmk#1476: "oh no! anyways"
Louis#0144: You should poke Ellie so I can pick his brain for the #carp project
Louis#0144: Lmao
gwern#1782: _lies. it was always awkward._
Louis#0144: You don’t have to I’m just kidding
bmk#1476: i recommend `\/me` because nobody reads italics as /me
bmk#1476: yes yes discord sucks for not properly supporting /me
gwern#1782: retvrn to the old IRC ways
mkualquiera#3484: Does this mean that we can't say ClosedAI anymore? :(
Louis#0144: No we can
Louis#0144: Dw
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/841455194309918770/tribalism2.png
Louis#0144: I wonder if OAI has seen our geese
Louis#0144: honestly
Louis#0144: I’d be more concerned if anything
zphang#7252: https://cdn.discordapp.com/attachments/729741769738158194/841455352897601646/unknown.png
bmk#1476: :tribalism2:
𓅬 gabriel_syme 𓅬#3220: hey nice! also, watching the video
jesse#7865: the memes are much spicier here for sure
𓅬 gabriel_syme 𓅬#3220: btw the problems found in the work are interesting for grounding project right?
Louis#0144: Thank you
𓅬 gabriel_syme 𓅬#3220: seems like CL screws everything?
Louis#0144: HF has good memes too
mkualquiera#3484: The Geese [Gao et al. 2021]
bmk#1476: I'm very proud of the spicy memery that goes on around here
𓅬 gabriel_syme 𓅬#3220: Gao [The Goose et al., 2021]
𓅬 gabriel_syme 𓅬#3220: So I have a question. As someone who's last NLP 'work' was GRU+word2vec embeddings or smth, is it a good time (tool wise) to get into NLP? I'm mostly curious about work closer to production than research
bmk#1476: the most legendary memes are found in #off-topic pinned messages - unfortunaetly, #memes has gone downhill in quality
Louis#0144: You totally won’t get a biased answer here
Louis#0144: I promise
Louis#0144: 😉
𓅬 gabriel_syme 𓅬#3220: 😄
zphang#7252: oh hey, goose in chinese is written as "I, bird" (鹅)
gwern#1782: a sign!
Louis#0144: I thought goose in Chinese was hundan
Louis#0144: That’s what my friends always called them
𓅬 gabriel_syme 𓅬#3220: I've thought of fine tuning a neo model for weeks now, just a bit scared of going into it I guess after so long (and little experience)
bmk#1476: unfortunately, canadian goose in chinese is 加拿大雁
bmk#1476: https://zh.wikipedia.org/wiki/%E5%8A%A0%E6%8B%BF%E5%A4%A7%E9%9B%81
zphang#7252: https://en.wikipedia.org/wiki/Geese_in_Chinese_poetry huh
bmk#1476: regular geese are impostor
zphang#7252: The common character for "wild goose" is 鴈
where 鴈 is "man and bird, under roof"
bmk#1476: canadian geese best geese
Louis#0144: Come on
Louis#0144: This was hilarious
Louis#0144: Pls
zphang#7252: so, now that we've scared them away...
bmk#1476: now that we've scared them away, time to plot our takeover of OA through memetic infiltration
Louis#0144: huehuehue
𓅬 gabriel_syme 𓅬#3220: come on, it's just a bit of bird talk
Louis#0144: yeah Jesse is our spy now
Louis#0144: 😉
gwern#1782: _is concerned about all the quack science here_
Louis#0144: you’ve been a bird racist in the past
Louis#0144: But this time it won’t fly
gwern#1782: bite me, honkie
Louis#0144: 🥵
mkualquiera#3484: hot
gwern#1782: so what's the upshot here? so far it seems to be basically "contrastive learning is hard, and didn't clearly outperform regular pretraining; we're still working on it"
𓅬 gabriel_syme 𓅬#3220: sounded like a lot of the discussions the grounding project had?
bmk#1476: yeah
bmk#1476: making CL actually enrich the model is hard, apparently
gwern#1782: (mm. well, not super-suprising, but I guess nothing that needs a /r/mlscaling submission)
cfoster0#4356: I dunno. They didn't do a negative-sampling based approach and (therefore) stability seems like it was kind of a problem
Louis#0144: You want me to be honest?
Louis#0144: I think our current approach to grounding won’t work
Louis#0144: That’s why I’m investing time in sweg
Louis#0144: I think training a text to text clip model will work better
𓅬 gabriel_syme 𓅬#3220: is how you sample the examples (if any) important at all for grounding purposes?
Louis#0144: Nah
Louis#0144: We ran it for 48 hrs
Louis#0144: Did 2700 clip batches of size 8k
Louis#0144: No improvement for grounding
Louis#0144: We were going to go bigger with many A100s
Louis#0144: And use full wiki articles for disentanglement
Louis#0144: But I’m kinda of the mindset it won’t work
Louis#0144: However I mean sent sim out of Princeton did well recently
EricHallahan#1051: I still want GPT-Neo CLIP encoder though.
Louis#0144: They saw massive piqa improvements
Louis#0144: Working on that now
Louis#0144: Yep
EricHallahan#1051: wen
Louis#0144: Just porting to neox
Louis#0144: Idk
Louis#0144: I’m v busy this week
EricHallahan#1051: I want it now
Louis#0144: Sometime this week or next
EricHallahan#1051: :berk:
Louis#0144: @jesse did Ellie touch on this work?
Louis#0144: If u don’t mind me asking
Louis#0144: Danqi Chen’s work
Louis#0144: From last year
Louis#0144: If u don’t know I’ll just email him dw
bmk#1476: btw piqa is a bad task because it's noisy as hell
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/841462313558212618/unknown.png
bmk#1476: ±1 pp means it's basically all noise
cfoster0#4356: I don't think so. FWIW I consider CLIP to be a grounded vision-language model already
Louis#0144: They saw like ten percent
bmk#1476: oh lol
bmk#1476: nvm
cfoster0#4356: Sent sim?
cfoster0#4356: This? https://github.com/princeton-nlp/SimCSE
Louis#0144: Ya and that other paper that they scooped
Louis#0144: Sorry piqa was the other one
Louis#0144: Danqi uses it for NLI
cfoster0#4356: Link?
Louis#0144: I can’t find it
Louis#0144: Rip
Louis#0144: I shall look more after dinner
Louis#0144: I agree with you though
Louis#0144: This is why I think txt to text might work better
cfoster0#4356: If the text-text approach works in the general case it'd make a very cool paper.
alexandrost#2936: do you think it makes sense to make a distilled gpt-neo?
EricHallahan#1051: :gameryes: if you ask CoreWeave about it.
EricHallahan#1051: https://eleuther.ai/faq
alexandrost#2936: thank you, somehow I missed that!
EricHallahan#1051: The entire FAQ or that part?
alexandrost#2936: that part 🙂
EricHallahan#1051: Yeah, it is something we will investigate.
alexandrost#2936: this is one of the most exciting projects I've ever seen in the past years
alexandrost#2936: godspeed 👍
Bruce23#6204: Sometimes GPTneo returns the exact same text. I already increased the temp to 0.9. The next run returns a better result. How can I further control that GPTneo does not return tokens that are in the input?
guac#4716: make temp > 1
Bruce23#6204: Do top_k and top_p influence the "input/output coverage"?
bmk#1476: temp = inf :bigbrain:
Bruce23#6204: Aha
unobtanium#6610: nice idea. I had something similar in mind 😉
EricHallahan#1051: Welcome!
UnsupervisedLearner#4148: Found some sprite database websites but no one has compiled a dataset yet
Asked around, no scraping etiquette either so I'm just gonna write a braindead recursive script
unobtanium#6610: https://arxiv.org/abs/2104.14553
unobtanium#6610: seems to be something tht might be of interest.
unobtanium#6610: I'm also looking at deep rl world-sim paper from 2 years back (for modeling environment as mlp) for tuning game mechanics.
unobtanium#6610: ... and a backwards differentiable render function for improving sprite generation
unobtanium#6610: I've given it some light thought 😉 nothing firm yet I'm working on the more mechanical side at the moment 😉
unobtanium#6610: in that vein: hello. I'm not super-hot on discord 😉 so if I miss a reply t'is not intentional.
UnsupervisedLearner#4148: Sounds cool. I'll dm you once I have anything actually built besides just daydreaming about it
Spectre#9939: I am also very interested in this!
Spectre#9939: I have been thinking of training a dataset of sprite animations
Spectre#9939: To fill in animation in between frames
Spectre#9939: Not exactly the same but a related idea
EricHallahan#1051: I suggest bringing this conversation to #off-topic if you want to continue discussing it.
unobtanium#6610: same 🙂
unobtanium#6610: there's some work in pure computer vision that does this 😉
unobtanium#6610: recently released... https://www.youtube.com/watch?v=sFN9dzw0qH8
unobtanium#6610: my apologies, posting in #off-topic now.
nostalgebraist#3542: **logit lens on gpt-neo**: https://colab.research.google.com/drive/1MjdfK2srcerLrAJDRaJQKO0sUiZ-hQtA?usp=sharing
- confirmed the observation that gpt-neo does *not* exhibit the "logit lens" phenomenon like gpt2 does
- the notebook uses a package `transformer-utils` which i just wrote and published to PyPI -- it contains logit lens plotting stuff as well as my low-memory loading code for HF transformers
EricHallahan#1051: I want to say that it is because GPT-2 is crap.
bmk#1476: ok that's super surprising. have you looked at any other gpt2-replication models?
nostalgebraist#3542: i'm really curious why, i can't think of any great hypotheses
gwern#1782: (is not exhibiting logit lens a bad thing? I've forgotten what 'logit lens' is)
bmk#1476: is it just something we're doing right/wrong/different?
nostalgebraist#3542: no, which ones should i look at?
nostalgebraist#3542: it's not good or bad, just surprising
bmk#1476: errr connor made one back in the day, and so did skylion i'm pretty sure
gwern#1782: there were also the variants like CTRL or grover
EricHallahan#1051: My theory is that GPT-Neo is better at understanding things because it uses more of the model's capacity.
bmk#1476: my theory is the big difference is the pile lol
nostalgebraist#3542: oh yeah i should try in on CTRL
EricHallahan#1051: I think Pile is a huge difference.
bmk#1476: mostly the size of the data, rather than the quality
bmk#1476: i mean doing a ton of epochs on 40gb only gets you so far
EricHallahan#1051: It is just filling it up to the brim.
kindiana#1016: hrmm
EricHallahan#1051: hrmmm
kindiana#1016: did we do dropout for those models?
bmk#1476: https://github.com/ConnorJL/GPT2#downloading-pretrained-models connor's model, though apparently it sucked on benchmarks
nostalgebraist#3542: how many effective epochs did you guys do over the pile? IIRC gpt2 did roughly 5 over webtext
bmk#1476: like 1ish
bmk#1476: a bit more but somewhere in that ballpark
bmk#1476: like probably 1.3 epochs?
nostalgebraist#3542: ah, that feels relevant
bmk#1476: i haven't actually sat down to calculate it
nostalgebraist#3542: maybe if it sees the same data enough times, it learns a structure where it guesses "roughly which part we're regurgitating" in the middle and then confirms it in high layers |
bmk#1476: https://blog.usejournal.com/opengpt-2-we-replicated-gpt-2-because-you-can-too-45e34e6d36dc cohen&gokaslan's model, which does comparable on benchmarks
bmk#1476: also i don't know if this interests you at all but i also have a bunch of random miscellaneous models if you want to mess around with
bmk#1476: https://huggingface.co/lg all the fexp_* models are 1.3Bs trained on CC for 1 epoch-ish over 40GB of text
bmk#1476: the difference is that i filter CC with different intensity for each one
nostalgebraist#3542: nice
bmk#1476: idk if youd find it useful but i put these together for another paper and i might as well put them out there
bmk#1476: here's the key for the numbers https://cdn.discordapp.com/attachments/729741769738158194/841519793504387082/unknown.png
bmk#1476: 6 is missing from the repo because i never got it trained, lol
nostalgebraist#3542: seems worth looking at
bmk#1476: i don't even know if these models can generate (good) text with only one epoch but yeah
kindiana#1016: :thonk:
bmk#1476: i also have models on CC100/Pile/rawCC that were originally for the pile paper that i could put on hf
bmk#1476: i also have 117M models trained on every single activation function you can possibly think of for like 10GB each
bmk#1476: lol https://cdn.discordapp.com/attachments/729741769738158194/841520511640928287/unknown.png
kindiana#1016: we should train a model with layerdrop
bmk#1476: oh, we also have the pile rotary models
kindiana#1016: that should make the effect much more ovbious
EricHallahan#1051: We have many model.
bmk#1476: one might even say.. *several* https://cdn.discordapp.com/attachments/729741769738158194/841521071509602344/unknown.png
EricHallahan#1051: I'm installing `transformer-utils` now lol
nostalgebraist#3542: even just today i learned a new bad thing about HF
when i was making that notebook, at first i was getting nonsense plots from gpt2 despite it working with gpt-neo and distilgpt2
turns out that HF gpt2 is saved as a state_dict like `h.0.blah` while HF gpt-neo and distilgpt2 are saved like `transformer.h.0.blah`
https://github.com/nostalgebraist/transformer-utils/commit/d1d25ad179c79f696a990b5caaba6278f78484f8
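One way to paper over that mismatch when treating the checkpoints as plain torch modules; a sketch that assumes the only difference is the missing `transformer.` prefix, with a hypothetical local checkpoint path:
```python
import torch
from transformers import GPT2LMHeadModel

def add_prefix_if_missing(state_dict, prefix="transformer."):
    # Turn keys like "h.0.attn..." into "transformer.h.0.attn..." so they match
    # the LM-head model's state_dict; leave already-prefixed checkpoints alone.
    if any(k.startswith(prefix) for k in state_dict):
        return state_dict
    return {prefix + k: v for k, v in state_dict.items()}

model = GPT2LMHeadModel.from_pretrained("gpt2")
sd = torch.load("pytorch_model.bin", map_location="cpu")     # hypothetical local file
result = model.load_state_dict(add_prefix_if_missing(sd), strict=False)
print(result.missing_keys, result.unexpected_keys)           # should now be near-empty
```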
zphang#7252: don't they throw an error/warning when you load the wrong mismatched keys?
zphang#7252: also when you say logit lens isn't working, what do the results look like?
nostalgebraist#3542: HF has a bunch of wrappers around torch `load` and `load_state_dict` that can handle forms both with and without something called `base_model_prefix` defined at the class level. in this case `base_model_prefix` is "transformer"
bmk#1476: it's sad because there's literally no reason for neo impl to be any different from gpt2
zphang#7252: oh yeah that's why I do the loading and saving myself
HF has too many wrapper/helper/default-friendly methods that it becomes really easy for a mistake to go unnoticed
bmk#1476: they literally wrote the conversion script to convert from neo format to their pytorch checkpoint format
nostalgebraist#3542: but if you try to treat their stuff as just torch.nn Modules, rather than magical HF whatevers, you run into stuff like this and it *does* cause key mismatch issues
EricHallahan#1051: Do I need `pandas` for `transformer-utils`?
bmk#1476: and they made it incompatible for no good reason
nostalgebraist#3542: no, i actually removed that requirement like an hour ago lol
nostalgebraist#3542: i defensively pinned a `pandas` version because i expected to use it and its API makes breaking changes all the time
nostalgebraist#3542: should try to avoid it entirely though, it's bad |
zphang#7252: is pandas still breaking compatibility a lot? I thought they'd stabilized after 1.0
nostalgebraist#3542: possibly not... but i'm still stuck at pre-1.0 because i don't want to upgrade my code 😛
EricHallahan#1051: It kept failing during the install process.
nostalgebraist#3542: in the case of the mismatched keys, the plots didn't make any sense because the model was just random init weights
as i confirmed a moment later by sampling and getting random tokens
zphang#7252: oh I meant with gpt-neo
nostalgebraist#3542: oh
nostalgebraist#3542: here's gpt2 125m https://cdn.discordapp.com/attachments/729741769738158194/841544553970401280/notebook_gpt2_sm_probs.png
nostalgebraist#3542: here's gptneo 125m, same text https://cdn.discordapp.com/attachments/729741769738158194/841544621791117312/notebook_gptneo_sm_probs.png
zphang#7252: what are these layers, evenly sampled across all?
nostalgebraist#3542: they're just all the layers from bottom to top
zphang#7252: oh 125m, so only 12 layers
nostalgebraist#3542: gpt2 1.5b https://cdn.discordapp.com/attachments/729741769738158194/841545204431061083/notebook_gpt2_1_5b_probs.png
nostalgebraist#3542: gptneo 1.3b https://cdn.discordapp.com/attachments/729741769738158194/841545240687149066/notebook_gptneo_1_3b_probs.png
nostalgebraist#3542: gptneo 2.7b https://cdn.discordapp.com/attachments/729741769738158194/841545403966291988/notebook_gptneo_2_7b_probs.png
cfoster0#4356: What happened to this kind of behavior?
nostalgebraist#3542: i believe it still occurs, haven't set up the notebook to plot ranks yet
nostalgebraist#3542: i'll do that when i get time
nostalgebraist#3542: in 2.7b the ranks do look interesting near the top, somewhat moreso than the probs/logits |
nostalgebraist#3542: hmm actually that rank plot feels inconsistent with what i have from today
AI_WAIFU#2844: Yo when we trained 2.7B did we check that the gradients were flowing through the network reasonably well?
zphang#7252: oh I gotta cite logitlens btw, do you have a bibtex :p
nostalgebraist#3542: if we're getting rank of 1 a lot near the top then we should see tokens flip from the "Supporters" type nonsense up there
bmk#1476: so wait it seems like gptneo's internal layers dont keep using the same basis?
nostalgebraist#3542: i guess i used a different text for that one, and actually a different model (my finetuned one for my bot)
AI_WAIFU#2844: well it looks like the internal layers aren't doing much of anything in the 2.7B model
nostalgebraist#3542: the finetuned one went ~5 epochs over its tuning corpus, so that's maybe a difference
bmk#1476: @nostalgebraist this might be dumb but what would happen if you just cut one of the middle layers out of the model
kindiana#1016: I don't think the 2.7 is that bad such that its not using most of the layers lol
bmk#1476: would it still generate anything remotely normal
kindiana#1016: on a baseline transformer you still retain most of the perf when you drop a layer
nostalgebraist#3542: just to be clear, gpt2 itself also makes a "sudden jump" like the one at the top of the gptneo plots
nostalgebraist#3542: it's just right at the *start* in the h0 mlp
nostalgebraist#3542: (it's specifically in the mlp part of h0, not the attn... i looked at this once)
nostalgebraist#3542: (after the first attn it still looks like the wte/wpe input)
nostalgebraist#3542: probably? it's residual...
cfoster0#4356: Hmm. Where are you usually pulling these from? Right after the mlp residual merge?
nostalgebraist#3542: yeah
nostalgebraist#3542: then i do `ln_f` and decoder |
kindiana#1016: hrm, does the ln have affine params?
nostalgebraist#3542: yeah
kindiana#1016: which set of affine params do you use?
nostalgebraist#3542: oh in my original post, i actually used "bare" ln, just the norming part, not shift/scale
bmk#1476: i'm mostly thinking because the middle layers dont seem to be doing much, maybe we can remove them, lol
kindiana#1016: :thonk:
nostalgebraist#3542: oh no i would not take that interpretation
AI_WAIFU#2844: yeah, it might make sense to try norming them somehow to get a better picture of what's going on
kindiana#1016: pretty sure they are doing stuff, we are just looking at it wrong
kindiana#1016: lol
nostalgebraist#3542: they just look boring when i compute this function of them
bmk#1476: ah
nostalgebraist#3542: and the only reason i'm doing that is that it was interesting when i computed it with gpt2 layers
bmk#1476: still, it seems interesting that merely changing the data makes the model work so differently internally
bmk#1476: ok we did change some other stuff too but the data is the big one
kindiana#1016: so you use these affine params right? https://github.com/EleutherAI/gpt-neo/blob/d76836abc9503ebfc58e7f6c5a13b7eb177aac12/models/gpt2/gpt2.py#L178
nostalgebraist#3542: yeah, in the recent stuff
nostalgebraist#3542: in the original post i used ones/zeros
nostalgebraist#3542: but then i thought, really it's arbitrary whether you group the `ln_f` affine params as part of "the decoder" or not
nostalgebraist#3542: or rather, i wanted something where, when you did it for the last layer, you got the actual output logits |
nostalgebraist#3542: which means `ln_f`
kindiana#1016: well, thats what the model uses to produce logits, so it makes sense
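A bare-bones sketch of the computation being discussed: decode every layer's residual stream through `ln_f` (affine params included) and the unembedding. This is the idea, not the `transformer-utils` implementation.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states[0] is the embedding output; the last entry already has ln_f applied
# in HF's GPT-2, so re-applying it there is only approximate.
for i, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h))
    print(i, tok.decode([logits[0, -1].argmax().item()]))
```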
nostalgebraist#3542: did you initialize the same way as gpt2?
bmk#1476: uhh
bmk#1476: I'm not actually sure lol
bmk#1476: i guess you'd have to look at the neo code for that
bmk#1476: I'm assuming we didn't do anything special
kindiana#1016: its pretty close iirc
bmk#1476: i don't expect init to be the deciding factor
nostalgebraist#3542: i guess i'm thinking that "where the model does what" is likely to be settled early on in training
bmk#1476: also fwiw both neo and gpt3 absolutely destroy the respective gpt2 models of the same size
bmk#1476: neo loses slightly to gpt3 but still destroys gpt2
nostalgebraist#3542: and, if it's also somewhat arbitrary, may be random based on init
kindiana#1016: a little more than slightly tbh :berk:
bmk#1476: like it's not even close, gpt3-345M beats the shit out of gpt2-1.5B
bmk#1476: yeah i guess
bmk#1476: i think it has to be the data because it's the only thing that explains such a drastic gap between gpt2 and 3
bmk#1476: and so maybe that's what's responsible for the logit lens being different
kindiana#1016: also nobody knows how many epochs gpt2 was iirc :thonk:
AI_WAIFU#2844: I wouldn't be so sure |
AI_WAIFU#2844: Bad init really fucks with gradient propagation
bmk#1476: maybe
bmk#1476: I'm not sure tbh
nostalgebraist#3542: i was sure i knew it was like 5.7 or something
nostalgebraist#3542: let me see if i can find my notes on it
bmk#1476: where did you get that info? o.O
bmk#1476: afaict OA has never spoken about it
nostalgebraist#3542: you know what, i think i'm confusing it with the scaling paper
nostalgebraist#3542: if you're that sure
nostalgebraist#3542: oh yeah, i am
nostalgebraist#3542: the first scaling paper did 5.72 epochs on webtext2 whenever they weren't doing a run about varying step count
nostalgebraist#3542: the gpt2 paper is so damn vague, those checkpoints might as well be alien artifacts that washed up on the beach somewhere
nostalgebraist#3542: oh good god, ctrl in HF has something exactly like `self.ln_f` but the code calls it `self.layernorm` instead
EricHallahan#1051: HF is cursed
EricHallahan#1051: I mean Transformers is cursed
Daj#7482: This is very :thonk: . I wonder if the local attention has something to do with it
EricHallahan#1051: That is a possibility.
kindiana#1016: I don't really see how local attn would affect that hrmm
EricHallahan#1051: Me neither.
kip#6104: could you share/link the code you used to plot these? |
Sid#2121: https://discord.com/channels/729741769192767510/729741769738158194/841517042418450432
kip#6104: oops, thanks 👌
Jozef Poniatowski#7589: is it standard practice to use fp16 when doing large scale pretraining?
Fando#5805: Hello, I would like to use the gpt-neo model for sentiment analysis. Has someone already tried that? All the content I could find so far was about text generation, but I could not find anything about sentiment analysis or things like text classification. Thank you a lot for your help 🙂
alstroemeria313#1694: so https://datacrunch.io has RTX A6000 instances now (48GB per GPU)
alstroemeria313#1694: for $1.1/hr
alstroemeria313#1694: i woke up and vast.ai prices were through the roof so i went elsewhere for now
alstroemeria313#1694: an A6000 is somewhere between an RTX 3090 and an A100 speed wise, i used to get 4.4 iters/sec at 512x512 with one of my VQGAN+CLIP methods on an A100 and on this machine i am getting 4 iters/sec
StellaAthena#3530: There has been pretty limited application to downstream tasks AFAIK. I'm sure you can follow a guide like this one to get started, but we don't have any GPT-Neo specific resources
https://lvwerra.github.io/trl/05-gpt2-sentiment-control/
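One straightforward alternative to the RL-based guide linked above: a hedged sketch of fine-tuning a small GPT-Neo checkpoint with a sequence-classification head, where the dataset, subset sizes, and hyperparameters are placeholders.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

name = "EleutherAI/gpt-neo-125M"
tok = AutoTokenizer.from_pretrained(name)
tok.pad_token = tok.eos_token                     # GPT-Neo has no pad token by default
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
model.config.pad_token_id = tok.pad_token_id

ds = load_dataset("imdb")                         # placeholder sentiment dataset
ds = ds.map(lambda b: tok(b["text"], truncation=True, padding="max_length", max_length=256),
            batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="neo-sentiment", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=ds["train"].shuffle(seed=0).select(range(2000)),   # small subset for a test run
    eval_dataset=ds["test"].select(range(500)),
)
trainer.train()
```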
alstroemeria313#1694: i think they will eventually have 80GB A100 instances too, the A6000 instances just launched today
Fando#5805: Thank you a lot, I will definitely check that.
kurumuz#5695: prices seem really good, thanks!
Louis#0144: gm goosies
Louis#0144: 🙂
𓅬 gabriel_syme 𓅬#3220: they really are, just registered. I think I'll do my diffusion models there
𓅬 gabriel_syme 𓅬#3220: so it's finetuned for 480k steps
𓅬 gabriel_syme 𓅬#3220: it's getting better, so it will run all night here
𓅬 gabriel_syme 𓅬#3220: but imo, it has captured already more detail
𓅬 gabriel_syme 𓅬#3220: lol nvm wrong chat :/
𓅬 gabriel_syme 𓅬#3220: :berk:
Chris Tauchmann 🌊#4270: Hey,
I'm Chris. I'm working/researching in NLP, and together with @Daj on another project (on Language Models). He asked me if I'd be interested to join here, and here I am 🙂 .
For now I’m just browsing, but might be interested in joining a reading group or a project here in the future (given people’s approval).
Louis#0144: What are u working on
Chris Tauchmann 🌊#4270: broadly, ethics/biases in large pre-trained Language Models and identifying different axes within
Daj#7482: Hey Chris! Glad you made it here!
Daj#7482: Kip, Koen and me do our project down in #deleted-channel
Chris Tauchmann 🌊#4270: alright, see you there!
Daj#7482: There's also the weekly interpretability reading group in #interpretability-reading-group (you can see the schedule/signup in the pinned comments)
quinn#9100: I believe I'm working on this this summer; I got an internship in the lab that produced this paper https://arxiv.org/pdf/2008.02275.pdf#page=5
quinn#9100: something like this i should say
Teemochu#8740: A6000 looks like the cheapest VRAM around
kurumuz#5695: fp16/fp32 1:1 tho
kurumuz#5695: 😢
Teemochu#8740: Inferring GPT-3 under $100k actually looks not just possible but quite likely for anyone with the weights
alexyz#3459: Why aren't there any projects for recreating Jukebox?
alexyz#3459: There's only 3 models that OpenAI released: 1B, 5B and 5B_lyrics
alexyz#3459: Imagine scaling up Jukebox
cfoster0#4356: Like here or in general?
alexyz#3459: Like in general
alexyz#3459: now that I say that there's probably some project I overlooked 😐
cfoster0#4356: Probably because no one else (outside of corp labs) is training at that scale
bmk#1476: (outside of corp labs (and us))
bmk#1476: :tribalism:
alexyz#3459: lol
bmk#1476: as for why *we* aren't doing it.. well, pls write the code for it and we can run it on TPUs
bmk#1476: we have too much TPU compute just lying around
alexyz#3459: i'm like 5 years old i have 0 idea how to do that
alexyz#3459: but yes it'd be cool for there to be a mesh tensorflow implementation
Teemochu#8740: ~~uh oh, things took a weird turn. help figure it out?~~
bmk#1476: >teemo is typing
me: uh oh
Ravna#1831: 5B Jukebox is much less impressive than 1.5B GPT-2.
Ravna#1831: Imagine how many trillions of parameters are needed to make it borderline impressive.
alexyz#3459: I think it's due to the complexity of audio
alexyz#3459: compared to text
alexyz#3459: but scaling it double would help lol
alexyz#3459: You wouldn't need trillions
Ravna#1831: I don't think so. 3B GPT-2 isn't qualitatively different to 1.5B GPT-2.
Ravna#1831: Doubling is nothing.
Ravna#1831: 3 orders of magnitudes may do something.
alexyz#3459: 3 orders is kinda nonsense
alexyz#3459: even 1 order of magnitude is a giant difference
alexyz#3459: 13B v 175B for GPT-3
alexyz#3459: 3 orders of magnitude is the difference between 1B and 1T
alexyz#3459: (is that how magnitudes work? or am I insane?)
Teemochu#8740: foom = five orders of magnitude
Ravna#1831: Yes, 175B GPT-3 still can't write coherent long texts.
Ravna#1831: Coherent long music isn't achievable by just doubling.
alexyz#3459: except that Jukebox already has coherent music
alexyz#3459: it's just more like 1/4 times
alexyz#3459: of random samples
Ravna#1831: No?
alexyz#3459: plus with priming it works better
Ravna#1831: It's coherent for about 2 to 3 sentences
Ravna#1831: After that it diverges to more and more random directions
alexyz#3459: Are you talking about the 5B or 5B_lyrics models?
alexyz#3459: you could just give it no guiding lyrics
alexyz#3459: and then it gives weird nonsense I agree
alexyz#3459: but if you're giving guiding lyrics it makes actual results
zphang#7252: Dumb TPU question: what does an error like
```
2021-05-11 18:42:40.783592: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "Unavailable: Socket closed" and grpc_error_string = "{"created":"@1620758560.781359749","description":"Error received from peer","file":"external/grpc/src/core/lib/surface/call.cc","file_line":1039,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC
```
usually mean?
bmk#1476: either a preemption or some other miscellaneous network error
bmk#1476: i got a ton of those when doing the test run for 200B neo
gwern#1782: at least one issue is that generating already takes forever. I figure that at 100b parameters, jukebox will start being seriously competitive with humans... but it already takes like a day to sample a full song, doesn't it? so how long would a full model take...
alexyz#3459: well, not really a day
alexyz#3459: it does take a while though
alexyz#3459: in Colab it was something like 30 min for every 10 sec?
alexyz#3459: but i need to check again
zphang#7252: It doesn't seem to be preemption since the TPU status still seems to be "ready"
zphang#7252: I'm going to assume it's "Just random TPU things ™️"
alexyz#3459: and in the real world, it can take months to write and produce songs
bmk#1476: yeah it doesn't guarantee preemption
bmk#1476: though preemptions sometimes cause it
zphang#7252: oh jk I read the output wrong, I was preempted
alexyz#3459: nope, it's apparently ```On a V100, it takes about 3 hrs to fully sample 20 seconds of music``` 😐
alexyz#3459: and you can have 3 samples generating in parallel
alexyz#3459: I think that it's still worth it to create the model
alexyz#3459: even though there wouldn't be many people who actually have the computation power for actually generating songs quickly
alexyz#3459: (this would be 27 hours for a 3 min song) so @gwern you're correct
alexyz#3459: just like it's worth it to create a 175B GPT-Neo even though literally nobody can run it lol
alexyz#3459: well people can run it, but not at quick speeds (for most people at least, i'm not counting the people with their supercomputers)
mkualquiera#3484: Imagine not being able to type ``sinfo`` and it showing 512 nodes
mkualquiera#3484: couldn't be me
gwern#1782: you see about as many people replicating GPT-3-175b as you do expanding Jukebox to 100b parameters, though, so I feel like this makes my point for me
bmk#1476: literally nobody.. except us :tribalism:
gwern#1782: strictly speaking, you guys have not and currently are not, and only *plan* to
gwern#1782: plus, what about alibaba and huawei?
alexyz#3459: well nobody even plans to create a Jukebox model of that size, and I can't find any similar models of that size 😢
mkualquiera#3484: >:(
mkualquiera#3484: just wait a few years and it will happen
alexyz#3459: or I could continue to be a nuisance and keep asking for someone to port Jukebox to TPUs >:)
alexyz#3459: nah jk
mkualquiera#3484: do it yourself
alexyz#3459: I literally can't, I have 0 idea how tensorflow or pytorch works
alexyz#3459: I just... use them
mkualquiera#3484: I don't think anyone knows how they work tbh :berk:
mkualquiera#3484: all black magic
alexyz#3459: i have no idea how to actually port something from pytorch to mesh tensorflow
mkualquiera#3484: but you could learn
mkualquiera#3484: start by building the same small model in both frameworks
mkualquiera#3484: and then expand from that
mkualquiera#3484: Note that you don't need to do all the work yourself
mkualquiera#3484: you need to do enough to convince Leo that making the first Goosegirl AI idol is a worthy goal
alexyz#3459: I really have no idea where to start though, and I have school and other stuff to work on
alexyz#3459: but I might pursue that
alexyz#3459: would be something nice to learn
mkualquiera#3484: yeah
mkualquiera#3484: I mean I did tell you where to start :berk:
mkualquiera#3484: if you don't have enough time then that's different
alexyz#3459: @cfoster0 you've been typing for the last hour https://cdn.discordapp.com/emojis/663595881311764480.png?v=1
cfoster0#4356: Hmm.
cfoster0#4356: Odd
alexyz#3459: ok then
mkualquiera#3484: Yeah I thought you were writing a poem or something tbh
cfoster0#4356: I wish. No I haven't been typing anything
alexyz#3459: GPT-Neo is gaining sentience through @cfoster0's account and is trying to speak to us
alexyz#3459: that's the only reasonable explanation
mkualquiera#3484: :guilty:
alexyz#3459: I'm planning to finetune GPT-Neo on Discord chat logs
alexyz#3459: found a nice 2GB dataset on Kaggle
mkualquiera#3484: 2GB of just text?
alexyz#3459: Yes
alexyz#3459: actually i think there might be images i need to check
alexyz#3459: i'm pretty sure it's just text
alexyz#3459: https://www.kaggle.com/jef1056/discord-data
alexyz#3459: yep it's just text
alexyz#3459: it's actually more than 2GB
alexyz#3459: but i'm using only a small part of it
mkualquiera#3484: 2Gb of just text is quite considerable yeah
Kharr#7888: Good luck 🤣 https://cdn.discordapp.com/attachments/729741769738158194/841825464284348456/unknown.png
bmk#1476: using discord data is against tos
EricHallahan#1051: https://eleuther.ai/faq
alexyz#3459: is it really?
alexyz#3459: do you mean "scraping" or "using"
Kharr#7888: https://cdn.discordapp.com/attachments/729741769738158194/841826264088051732/unknown.png
𓅬 gabriel_syme 𓅬#3220: also "reading"
𓅬 gabriel_syme 𓅬#3220: like the 300 OT messages overnight
𓅬 gabriel_syme 𓅬#3220: can we massage the TOS and get a summarization bot for that? 🙂
𓅬 gabriel_syme 𓅬#3220: or is it going to be just goose images you think
alexyz#3459: ```With the collaboration of a large number of discord moderators, server owners, and members of the community, this data was sucessfully downloaded and cleaned.``` from the kaggle page
alexyz#3459: Doesn't say anything about TOS 😐
alexyz#3459: it says ***collecting*** the dataset is against TOS
EricHallahan#1051: I still wouldn't want to touch it.
mkualquiera#3484: it's a bit of a gray area
alexyz#3459: That's fine, I'm using it for my personal finetuning project
EricHallahan#1051: ¯\_(ツ)_/¯
alexyz#3459: anyway I probably should read up on the Discord TOS
EricHallahan#1051: https://discord.com/terms
alexyz#3459: I'm already reading, but thanks 🙂
mkualquiera#3484: I read the TOS and I couldn't find anything related to that
EricHallahan#1051: ¯\_(ツ)_/¯
mkualquiera#3484: TOS is mostly things you can and can't do with the service, but this would be more likely in the privacy policy thing
mkualquiera#3484: assuming they have one
alexyz#3459: https://discord.com/privacy
alexyz#3459: Doesn't look like it's in there either
alexyz#3459: but the actual collection of the data was definitely against TOS
mkualquiera#3484: why?
alexyz#3459: ```You agree not to (and not to attempt to) (i) use the Service for any use or purpose other than as expressly permitted by these Terms;(ii) copy, adapt, modify, prepare derivative works based upon, distribute, license, sell, transfer, publicly display, publicly perform, transmit, stream, broadcast, attempt to discover any source code, reverse engineer, decompile, disassemble, or otherwise exploit the Service or any portion of the Service, except as expressly permitted in these Terms; or (iii) use data mining, robots, spiders, or similar data gathering and extraction tools on the Service.```
alexyz#3459: in "RIGHTS TO USE THE SERVICE"
alexyz#3459: on the ToS
alexyz#3459: it's kinda a gray area using scraped Discord data
alexyz#3459: it's definitely against ToS though to scrape Discord data
alexyz#3459: like did you know it's legal to download password data breaches? (there are good uses, like for security researchers)
alexyz#3459: it's just not legal to *leak* that password data from the website
alexyz#3459: i expect this is in a similar legal area
mkualquiera#3484: curious
mkualquiera#3484: law is weird
mkualquiera#3484: we should just rewrite all laws in haskell code tbh
alexyz#3459: no, just go to python and
```import laws```
Kia#2550: That's the vaguest thing they made
alexyz#3459: wdym?
Kia#2550: Like it's "legal"
alexyz#3459: Well it's legal to use the data, it's just not legal to get the data
mkualquiera#3484: anyway this is probably more #off-topic
alexyz#3459: yeah true
Kia#2550: True
alexyz#3459: but yeah imma finetune GPT-Neo on Discord chats and then hook it up to a Discord bot
alexyz#3459: probably will end horribly
Kia#2550: In this server?
alexyz#3459: No
mkualquiera#3484: let's hope it's not too obsessed with femboys
Kia#2550: Lol:berk:
alexyz#3459: imma put it in some server, haven't really planned out that far
Kia#2550: You should try your prototype first here
Kia#2550: People can't tell who's who
alexyz#3459: Then the Eleuther staff would have to add the bot
mkualquiera#3484: they have already added various bots made by community members
alexyz#3459: because it's against TOS to use a user token for bots
Kia#2550: True
alexyz#3459: because I *could* hook it up to my user account
alexyz#3459: and have it just chat in servers lol
mkualquiera#3484: honestly this is really stupid
Kia#2550: Erased that idea :p
Kia#2550: But im interested on results to be honest
alexyz#3459: I'm hoping for a GPT-Neo 6.7B in the near future
alexyz#3459: a finetuned model of that size would probably be pretty coherent
mkualquiera#3484: I believe there is one but it's not public
mkualquiera#3484: could be wrong
alexyz#3459: It's training iirc
Teemochu#8740: believe me same
𓅬 gabriel_syme 𓅬#3220: how much harder to inference from a 6.7 vs a 2.7
𓅬 gabriel_syme 𓅬#3220: like some people fit the latter on an 11gb card I think
𓅬 gabriel_syme 𓅬#3220: (not ideal ye)
Teemochu#8740: 2.7 fits on 8gb
Teemochu#8740: 6.7 with bf16 you'll get on a 3090 and that's it
EricHallahan#1051: I would be surprised if you could fit it on an RTX 3090.
EricHallahan#1051: binary32*
Teemochu#8740: where we're going, we don't need ~~roads~~ fp32
𓅬 gabriel_syme 𓅬#3220: 3090 isn't that bad, guessing speed will be okay there too
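Rough weight-only arithmetic behind those numbers; activations and the attention cache add overhead on top, so treat these as lower bounds.
```python
for params_b in (1.3, 2.7, 6.7):
    print(f"{params_b}B params: ~{params_b * 4:.1f} GB fp32, ~{params_b * 2:.1f} GB fp16/bf16")
# 2.7B -> ~10.8 GB fp32, ~5.4 GB in half precision (hence "fits on 8 GB")
# 6.7B -> ~26.8 GB fp32, ~13.4 GB in half precision (tight even on a 24 GB 3090 with overhead)
```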
nostalgebraist#3542: updated my notebook with **logit lens for CTRL**: https://colab.research.google.com/drive/1MjdfK2srcerLrAJDRaJQKO0sUiZ-hQtA?usp=sharing
CTRL looks similar to gpt-2, dissimilar to gpt-neo here.
nostalgebraist#3542: CTRL plot, similar to others i posted yesterday https://cdn.discordapp.com/attachments/729741769738158194/841866467783999548/notebook_ctrl_probs.png
nostalgebraist#3542: actually, now that i look closer, CTRL is more like what i expected gpt-2 to do!
nostalgebraist#3542: early layers look like the input, late layers look like the output, gradual flip in the middle
nostalgebraist#3542: almost spookily interpretable
nostalgebraist#3542: (in lighter news, today's "HF is terrible" moment: their CTRL config json is missing a key it needs to load, and they have a patch that fills in the key, but *only if you're loading by passing the model name as a string*.
because then they do *substring matching on the string you passed* against all their model names.
https://github.com/nostalgebraist/transformer-utils/blob/main/src/transformer_utils/util/tfm_utils.py#L7 )
kurumuz#5695: lol
alexandrost#2936: Hi guys. Forgive my noobness - I was wondering, when gpt-neox comes out. How would one go about loading this model? Given that it will be around 150-200b parameters large, does that mean that you will have to use model parallelism to make it work?
chris_myzel#9645: Is there a way to tell how much faster inference gets going from e.g. a 24-core Xeon CPU to an A100 I'd purchase, and how is your answer validated 🙂 ? (gpt-n)
Louis#0144: You probably won’t be able to load it
Louis#0144: It isn’t even close
alexandrost#2936: how would someone go about loading it?
Louis#0144: You would almost certainly use 3D parallelism
Louis#0144: But you’d need many many GPUs
Louis#0144: Think tens of thousands USD
alexandrost#2936: waaat...
Louis#0144: For the full model
Louis#0144: You can’t run it locally
Louis#0144: lol
alexandrost#2936: yeah I wasn't expecting locally, but I wouldn't expect tens of thousands of GPUs either hahah
Louis#0144: 🤷♂️
Louis#0144: A 3090 will be able to do inference on the 6b model is what I think people said here yesterday
Louis#0144: 175>6
Sid#2121: this is absolutely not true lol
alexandrost#2936: I mean, I am running the 2.7b model on a 12GB memory GPU, - I would expect that with 100 of those GPUs I'd be able to. so the memory requirement isn't linear?
Louis#0144: I’m just repeating what ppl have said here o.O
Louis#0144: I mean
Louis#0144: It would be significant
Louis#0144: Undoubtedly
Sid#2121: if you can quote me where anyone said it would take tens of thousands of gpus to run inference, i'll tell them they're wrong instead
Louis#0144: Ok
Louis#0144: I’ll look for it
Sid#2121: the weights will be ~700GB give or take
Sid#2121: if we're talking on A100s, to give it a bit of wiggle room, 20 should work
Sid#2121: v100s you'd need like 24
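The back-of-envelope math for a ~175B-parameter model, consistent with the numbers above:
```python
params = 175e9
weights_gb = params * 4 / 1e9       # fp32 weights only
print(weights_gb)                   # ~700 GB
print(weights_gb / 40)              # ~17.5 -> ~20 A100-40GB cards with headroom
print(weights_gb / 32)              # ~21.9 -> ~24 V100-32GB cards with headroom
```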
chris_myzel#9645: meaning? I'm at around 1 token / sec on 12 cores with 24 gb ram, when running a 3000 token completion I see disk activity over the next hr, so guess I'm limiting myself here with the avail ram
alexandrost#2936: I see thanks!
Louis#0144: Oh so it’s more?
alexandrost#2936: I guess it would make sense to go for A6000 in that case
Sid#2121: more? than what?
Louis#0144: More than tens of thousands USD
Sid#2121: why are we talking in USD?
Sid#2121: you said ```But you’d need many many GPUs
Think tens of thousands```
Louis#0144: Ohhh
Louis#0144: Sorry
Louis#0144: I made a typo
Louis#0144: I meant to say USD
Sid#2121: I am saying, you'd need 20 or so A100s, or 24 or so V100s
Louis#0144: apologies
Louis#0144: Fixed
alexandrost#2936: hopefully a distilled version would be created for neox
Louis#0144: You would be able to do significantly more with a beefy GPU
Louis#0144: I don’t have exact numbers
Louis#0144: It would be cheaper too
Louis#0144: Xeons are expensive
chris_myzel#9645: ok thanks - I'll give coreweave a 1 hr testing spin
chris_myzel#9645: I'll post here what I find out...
alexandrost#2936: I guess there is no scenario where the CPU cost-to-token-generation ratio beats the GPU one, right?
chris_myzel#9645: to be fair, a relevant scenario is I have the xeon around & A100 not
Louis#0144: You don’t need an A100 to do inference on 2.7b
Louis#0144: You can use a 2080ti there I think
Louis#0144: (?)
chris_myzel#9645: was just interested in how far I get on a A100
Louis#0144: 1.3b can def fit on a 2080ti for inference
Louis#0144: I think 2.7 is very slightly over 12gb
Louis#0144: So maybe not
Louis#0144: Chonk
Louis#0144: Idk I don’t have exact numbers
Louis#0144: Probably a lot
alexandrost#2936: when I used a GPU with 12GB of memory it would load the 2.7B model, and work, but occasionally would have memory crashes
alexandrost#2936: so I guess around 10-12 GB you're on the edge
Louis#0144: Can confirm this
Louis#0144: Yeah
chris_myzel#9645: on a CPU <-> RAM scenario I can see 20 GB spikes
chris_myzel#9645: https://cdn.discordapp.com/attachments/729741769738158194/842015887506014238/unknown.png
alexandrost#2936: when I tried it on an A100 (which is an overkill, I know) it was running like a dream
alexyz#3459: How does distilling a model work? |
StellaAthena#3530: https://github.com/EleutherAI/distilling
alexyz#3459: Yes, but that doesn't explain *how*
alexyz#3459: like if you trained a 1.3B model, and then took a 2.7B model and distilled it to 1.3B, would it have the same quality of generation?
Louis#0144: 🤷♂️
Louis#0144: There’s no law of distillation like that
Louis#0144: It would have comparable performance to the 2.7b model
Louis#0144: Where it would fall is anyone’s guess
alexyz#3459: would it use similar compute to the 1.3B or the 2.7B model?
alexyz#3459: sorry if there's no clear answer
EricHallahan#1051: ¯\_(ツ)_/¯
EricHallahan#1051: It is an open problem.
EricHallahan#1051: IMO at least.
alexyz#3459: 👍
Kharr#7888: Distillation usually achieves better performance than training from scratch when using equal parameters for specific tasks. This hasn't been conclusively demonstrated for generative models (yet).
gwern#1782: there are many ways to distill, compress, or sparsify, but for a relatively modest compression level like 50% I would expect them to be nearly indistinguishable if you don't screw it up
StellaAthena#3530: This is something we are actively working on. The answer is that nobody knows.
EricHallahan#1051: https://eleuther.ai/faq
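For reference, the usual soft-label distillation objective people mean here, as a sketch: a KL term between temperature-softened teacher and student distributions mixed with the ordinary cross-entropy. The temperature and mixing weight are arbitrary choices, not anything this project has settled on.
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft term: match the teacher's temperature-softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard term: ordinary next-token cross-entropy against the data.
    hard = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                           labels.view(-1))
    return alpha * soft + (1 - alpha) * hard
```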
gwern#1782: incidentally, I've noted a few times before that the MS Tay narrative sounds like a leprechaun to me. I've looked into it a little more and I am still finding no good reason to believe '4chan taught Tay to be racist': https://discord.com/channels/729741769192767510/818705689031475240/842055749734760459 does anyone have any *hard references* demonstrating Tay did online learning, as opposed to the media echo chamber of repeat-after-me quotes and generic Internet-LM-chatbot gibberish?
inox#5400: huh you could prompt engineer to produce arbitrary distillation tokens to distil from CLIP to traditional classification vision models
Louis#0144: Yes |
Louis#0144: I have a friend working on that
Louis#0144: It’s v promising
inox#5400: using hard or soft distillation?
alexyz#3459: It wasn't all repeat after me
alexyz#3459: One quote is someone asking "Did the Holocaust happen?" and Tay said "It was made up 👏"
alexyz#3459: but you're probably right
alexyz#3459: there's no actual hard references
alexyz#3459: and it doesn't help how there's no archive of their tweets
gwern#1782: that's what I mean by 'generic Internet-LM-chatbot gibberish'. saying 'the holocaust was made up' is something any LM trained on data from the past half-century could say. it provides zero evidence for online learning. and in the topical recent examples where the training corpus would be silent, it appears to just be copying in-convo in the screenshots
gwern#1782: if 'the holocaust was made up' is the best that can be exhibited, such ancient topics are strong evidence against online learning
Louis#0144: Soft
alexyz#3459: On the last capture of the https://tay.ai/ website https://cdn.discordapp.com/attachments/729741769738158194/842066834291163166/unknown.png
alexyz#3459: It's probably PR speak
alexyz#3459: note: the quote I'm talking about is "The more you chat with Tay the smarter she gets, so the experience can be more personalized for you."
gwern#1782: 'personalized' just refers to building up a chat history and clearing the way for future training
gwern#1782: again, I'm not doubting that Tay was intended to be trained, and like xiaoice, would've been updated at some point in the indefinite future. the question is. "Did 4chan train Tay to be evil?"
alexyz#3459: i think it's like how you have chat context
alexyz#3459: that's how I'd assume they'd do it
gwern#1782: yes, it's just runtime conditioning. but that's not what anyone means by 'trained tay to be evil'
alexyz#3459: well some people said "trained" some people said "made", it's more like these people don't understand how machine learning works and don't know the related vocabulary
FerroMagnetic#6975: "Your mirror is bad, it reflects exactly what's put in front of it"
gwern#1782: I should note that what triggered me today was a long thinkpiece in the ACM about adversarial attacks and poisoning of AI training data, and the authors perfectly well understand this, they're just repeating BS
alexyz#3459: Ah that makes sense
gwern#1782: so the journalists may have made an understandable simplification, the problem is *all* the experts repeating it ever since
alexyz#3459: well
alexyz#3459: yeah
kurumuz#5695: love the experts.
bmk#1476: wen tay gwernpost
kurumuz#5695: It was kinda concerning seeing tweets in my timeline talking about how Language Models are extremely biased so we should filter their training data.
kurumuz#5695: Like, I thought we were trying to model language, and not a subset of the language?
gammascalpset#9792: I think it's a legitimate concern
Sphinx#2092: ...lol? I mean most text data out there is trash.
Sphinx#2092: We already filter stuff. The question is, what should we filter?
kurumuz#5695: Also, if you're not extremely brain washed you should be able to generate racist or some kind of hate speech even though you don't agree with it.
gammascalpset#9792: the standard formulation is that we're trying to model language, but most language on the internet is humans shitting out shitty ideas
EricHallahan#1051: That is why the internet is a bad thing to model on.
kurumuz#5695: Real life isn't any better.
EricHallahan#1051: ¯\_(ツ)_/¯
kurumuz#5695: IDK what we're supposed to ground our models if it's not reality.
gammascalpset#9792: If you analyse word vectors generated by "old" statistical models, they had a tendency to encode goodness and badness in one of the largest principal components of the vectors. This component was then used by sentiment analysis models you train on those vectors. |
I wouldn't be able to cite a paper for the same phenomenon happening with modern language models, but I'm pretty sure I remember seeing evidence that it does
gammascalpset#9792: so the concern is that, for example, encoded meanings of sentences related to muslims contain "dirty" markers that get picked up by whatever classifier your running on top of them
kurumuz#5695: No what is concerning is, these people expecting decency from a mindless text sampler.
gammascalpset#9792: language is not reality, language (as we mean it in this discussion) reflects the flawed perception of reality of people, in particular, people who spend a lot of time on the internet
bmk#1476: politrib warning
kurumuz#5695: Well, assuming we're not learning from raw sensory input like video or audio, you will have to learn from language for now.
kurumuz#5695: Everything is flawed, just model them all.
kurumuz#5695: A language model should be able to sound racist
bmk#1476: my perspective is the KL distance between the distribution of text on the internet and the idealized distribution we want it to learn is hopefully small enough that we can just do some small nudges to get it there
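For readers unfamiliar with the term, a tiny worked example of what a KL distance between two text distributions means; the two distributions below are made up purely for illustration, and only one of the two KL directions is shown.
```python
import torch

# Two made-up categorical distributions over a tiny 4-token "vocabulary".
internet = torch.tensor([0.50, 0.30, 0.15, 0.05])  # what the corpus looks like
ideal    = torch.tensor([0.55, 0.30, 0.14, 0.01])  # what we wish it looked like

# KL(ideal || internet) = sum_i ideal_i * log(ideal_i / internet_i)
kl = torch.sum(ideal * (ideal.log() - internet.log()))
print(kl.item())  # small here; the hope above is that the real gap is also small
```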
kurumuz#5695: If it can't, it's a shitty model and it's failing at what its trying to do
alexyz#3459: All filters are flawed.
alexyz#3459: No matter how much you tried to stop it, it'd be able to still
gammascalpset#9792: you could say anyone who fine-tunes a curriculum classifier for their company's recruitment department based on GPT-N is an idiot, and imo you'd have a point... but I argue that models free of bias would be way more useful, as you would then be free to deploy them safely for a lot more use-cases
Sphinx#2092: I think the issue is that even a very small amount of data can impact performance, see e.g. https://arxiv.org/pdf/2011.00675.pdf
Sphinx#2092: > Our results are alarming: even on the state-of-the-art systems trained with massive parallel data (tens of millions), the attacks are still successful (over 50% success rate) under surprisingly low poisoning budgets (e.g., 0.006%)
bmk#1476: hm yeah that might be a problem
alexyz#3459: I would say you should attempt to create a language model that models the optimal examples of a language, not one that models humanity's flaws.
bmk#1476: my hope would be that the model learns both a "normal" distribution and a "malicious" distribution (that it learns to interpolate between) and that we can just nudge it towards the former through a small amount of fine tuning
Sphinx#2092: Yes, agreed. I think having a held-out "clean" dataset to "debias" (or whatever word you want to use) is good.
Sphinx#2092: I think they also found that you can correct such issues in the finetuning stage. |
gammascalpset#9792: in a world where we never have to make compromises, yes, I guess "sounding racist" is one of the things a language model should be able to do. However, the current state of the world is that our SOTA language models don't "choose" to be biased on demand. Their bias is encoded in a way that leaks out in downstream tasks where it shouldn't. Therefore, atm the best compromise we can make might be to counterbalance the bias.
bmk#1476: yeah, or possibly someething with RL
kurumuz#5695: It's hard to understand for me that how there would be a "normal" distribution.
kurumuz#5695: what does even normal mean here
bmk#1476: regular, typical
bmk#1476: not gaussian
kurumuz#5695: I can understand that.
bmk#1476: also there is no single "true" LM distribution
bmk#1476: there are different distributions of text
alexyz#3459: By this logic, the NovelAI model should have the data to create underage NSFW textual content. Obviously, it shouldn't.
bmk#1476: when you start to argue which LM distribution you think is most useful, that becomes an ought problem not an is problem
kurumuz#5695: Yeah, I don't agree it's the same logic though.
alexyz#3459: "we should leave in racist data" = "we should leave in pedophilic data"
kurumuz#5695: I didn't say we should have racist data?
kurumuz#5695: If you don't do some crazy filtering over your crawl dataset, It will have it
alexyz#3459: You're literally arguing to not filter the data
kurumuz#5695: yes, we didn't filter our finetune data 🤔
bmk#1476: lemme propose some new terms
kurumuz#5695: I'm arguing not filtering data, that doesn't mean I SAY SOME DATA SHOULD DEFINITELY BE IN
alexyz#3459: oh no... |
kurumuz#5695: If it's in the language, sure.
kurumuz#5695: Do you think we have any kind of time to do filtering like that?
kurumuz#5695: lol
kurumuz#5695: We're talking about gigabytes of text
cfoster0#4356: what's a gigabyte? so smol 👀
bmk#1476: internet-distribution: the distribution of language found on the internet
true-distribution: the distribution of all language produced by humans
ideal-distribution: the idealized distribution of language we wish we had with no racism or whatever, the CEV of language essentially
alexyz#3459: A penny saved is a penny earned. By leaving something in when you could remove it, then you're adding it imo
kurumuz#5695: We're so smol
bmk#1476: yeah whats with gigabytes
alexyz#3459: It's literally 2GB, that could be easily filtered
AI_WAIFU#2844: guys just a reminder, keep this discussion productive
bmk#1476: even the pile is almost a TB
kurumuz#5695: my moderators going through every novel to filter bias
kurumuz#5695: would be fun yeah
alexyz#3459: Or you could have a LSTM to do it?
bmk#1476: we should start using this terminology
kurumuz#5695: and bork it like aid did?
alexyz#3459: or any of the 100 other solutions? |
kurumuz#5695: Yeah it's pretty good.
alexyz#3459: Don't filter the outputs, filter the data
alexyz#3459: don't make a bad filter
kurumuz#5695: We rather decided to spend that on time building our backend and frontend
kurumuz#5695: like, you are aware our closed alpha is this week right?
alexyz#3459: Yes, but why?
bmk#1476: reminder that this isnt the novelai discussion channel
alexyz#3459: You could always *not* rush it
alexyz#3459: yeah, k
kurumuz#5695: It's not rushed, but we like to have rapid progress.
bmk#1476: if you want to talk about the abstract discussion of data filtering, you can stay here, but if youre gonna argue about novelai, pls go somewhere else
kurumuz#5695: Yeah.
alexyz#3459: imma shut up, but still, @kurumuz, NovelAI is rushing out a product before doing any proper filtering and anything, anyway gtg byeeeeeee
kurumuz#5695: lol
FerroMagnetic#6975: Very neutral data sounds like only feeding dictionaries to the input. It'll be an erudite with no opinions, which is equal or worse to just using dictionaries.
kurumuz#5695: I already defended not doing any filtering on training data
kurumuz#5695: so what is your point?
nev#4905: can someone star this
bmk#1476: my argument is internet-distribution is neither the true-distribution nor the ideal-distribution
StellaAthena#3530: Oh boy do I have news for you about the history of dictionaries... |
bmk#1476: actually, dictionaries acausally bring words into existence
gammascalpset#9792: I think that in principle it might be beneficial for LMs to know what racism looks like, for example, if you want to fine tune them into racist Tweet detectors
bmk#1476: yeah agree
bmk#1476: my main crux is internet-distribution != true-distribution
FerroMagnetic#6975: @StellaAthena may as well get to "academic definition" and "common consensus" language theories
StellaAthena#3530: Connor and I wrote a thing about this recently, one sec
gammascalpset#9792: I think the true-distribution is probably worse or similar to internet-distribution
gammascalpset#9792: think of all the stupid shit drunk people say in bars
gammascalpset#9792: we tend to hang around smart people, I think we have a biased idea of what true-distribution sounds like lol
kurumuz#5695: There is a lot of information missing that is in true-distribution, though.
cfoster0#4356: Also this
>>> think of all the stupid shit ~~drunk~~ people say ~~in bars~~
StellaAthena#3530: https://montrealethics.ai/wp-content/uploads/2021/04/SAIER-Apr2021-Final.pdf#page=181
bmk#1476: i think true-distribution models are independently useful for different things than ideal-distribution models
bmk#1476: im personally more interested in ideal-distribution models
FerroMagnetic#6975: https://www.youtube.com/watch?v=Z9cw4pyKMSU this ~~joke~~ subject is older than neural networks' popularity
AI_WAIFU#2844: I think there's good arguments that the "ideal distribution" should be a lot more offensive than one would naively imagine. There's a lot of scenarios where you might want your model to be <insert socially unacceptable thing here>.
AI_WAIFU#2844: This whole AID situation being a good case study in that.
bmk#1476: ideal-distribution is subjective
StellaAthena#3530: IMO y'all're missing the real question to an extent |
kurumuz#5695: getting to the ideal distribution sounds like an interesting problem
StellaAthena#3530: The way we currently train language models is bad
gammascalpset#9792: I think the problem here is that the only goal of these models is to say the next most likely word
gammascalpset#9792: they don't necessarily have to believe what they say as a fact
kurumuz#5695: Yeah, that is also my argument.
gammascalpset#9792: not that GPT-3 has even a tiny spec of a world model IMO
AI_WAIFU#2844: I think it's even more than that, currently we have no distinction between language models and language actors.
cfoster0#4356: Idk, LMs are doing great, imo! They learn the objective well. Just that objective isn't aligned with "produce text I approve of"
AI_WAIFU#2844: this shows up in the way we talk about, and the way we train/implement these systems.
bmk#1476: i wrote a 5000 word thing about this lol
gammascalpset#9792: but assuming that at scale models trained on LM goals do develop decent world models (I don't think they will), more accurate world models should give better predictions. Therefore, the best LMs won't include racism in their world models, but will sound racist when it maximises the likelihood of doing good at language modeling.
kurumuz#5695: That is why i criticize people expecting that from our current language models. They're good at their objectives and doing what they're supposed to do.
bmk#1476: https://docs.google.com/document/d/1HuzRZIuQEX0zlST25kt1BnnnMU6iTzEhT5ncyUxzbf8/edit
gammascalpset#9792: if this happens, you can probably train them not to be racist with some RL fine tuning?
kurumuz#5695: I will read it when my pomodoro stops, see you guys later.
bmk#1476: especially the "Why natural language as a medium" section
StellaAthena#3530: Regardless of whether or not there is some ideal mix of texts that would train an ethical language model, the mere fact that a minor distortion in the training text could suddenly make the AI racist indicates that we are not going about training language models the right way
cfoster0#4356: I think the distinction between language models and language agents/actors is helpful here
gammascalpset#9792: this
bmk#1476: https://docs.google.com/document/d/1NCEJROewaFgugWFuVDxtygq2K3xuawc7S32G7ntJXO4/edit the outline also has some relevant stuff buried in there |
chris_myzel#9645: I dunno if this is the right place, tried generating a 3000 token text with an around 1000 token input and the generator stopped after some hrs (killed). What can I do to further debug this or make inference more stable here?
gammascalpset#9792: I think that if you had a really good but racist LM, and found that it was useful to shave off the last layer and plug that into a language agent, the agent wouldn't be racist unless it needs to for some reason
StellaAthena#3530: Does anyone know how to operationalize this distinction or is it largely theoretical
AI_WAIFU#2844: I've been tossing around ideas
AI_WAIFU#2844: and yelling at BMK about it
gammascalpset#9792: but current LMs would still be racist, as they don't achieve their performance by accurate world modelling, rather by loose associations between word sequences, and it just so happens that associating words in a racist way lets them do better at modelling internet-distribution
bmk#1476: put them down in the outline doc
EricHallahan#1051: *Hours?* What machine are you running on?
chris_myzel#9645: the CPU guy...
EricHallahan#1051: Ah
EricHallahan#1051: Okay.
cfoster0#4356: I think LM can be pinned down but agency is still not well understood
EricHallahan#1051: No problem with that.
EricHallahan#1051: Hmm
chris_myzel#9645: maybe around 2 hrs at high utilization and 45 min on 1-2% before the kill
AI_WAIFU#2844: start with generating like 1 token
AI_WAIFU#2844: then work your way up
EricHallahan#1051: Are you using Hugging Face Transformers?
chris_myzel#9645: so my 1000 tokne input, ask for 1005, 1005 input ask for 1010,...?
chris_myzel#9645: > Are you using Hugging Face Transformers? |
yes
nev#4905: effective racism
kurumuz#5695: Oh, apparently the weird generated tokens with the fp16 model is a sampling problem.
EricHallahan#1051: I heard.
kurumuz#5695: yeah pretty good news.
cfoster0#4356: Gesture towards operationalizing. Language models aim to maximize the likelihood of observed sequences at training, and at runtime roll out continuations according to the distribution they've learned. Language agents/actors learn some kind of distribution over sequences (not necessarily maximizing likelihood), and use it at runtime instrumentally to some other goal or control system.
kurumuz#5695: and their evals almost match
kurumuz#5695: they're extremely close.
EricHallahan#1051: Not much that we can do to fix the sampling code, open an issue with HF and PyTorch.
kurumuz#5695: Yeah, just thought it was interesting.
gammascalpset#9792: I think it's interesting how one fact about LMs always gets lost somehow. LMs aren't outputting words they want to write, they're outputting a probability distribution. It's human-written code that samples from the dist and writes the word.
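As a concrete illustration of that split, a minimal sketch of the human-written step that turns the model's output distribution into an actual token; the temperature value and the fp32 upcast are illustrative choices (the upcast sidesteps the fp16 multinomial issue mentioned above).
```python
import torch
import torch.nn.functional as F

def sample_next_token(logits, temperature=1.0):
    """The model only produces `logits` (unnormalized scores over the vocab);
    choosing an actual token is this human-written code."""
    probs = F.softmax(logits.float() / temperature, dim=-1)  # upcast before sampling
    return torch.multinomial(probs, num_samples=1)

# Toy example with a 5-token "vocabulary".
logits = torch.tensor([2.0, 0.5, -1.0, 0.1, 1.5])
print(sample_next_token(logits, temperature=0.8).item())
```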
EricHallahan#1051: Add it to the list of reasons of why not to use HF.
AI_WAIFU#2844: Yeah, so one way to go would be to bolt on a utilityfunction/rewardmodel, an agent NN, and then use the OG LM to extract information from simulated actions taken by the agent NN to be fed into the utility function.
kurumuz#5695: Why to use HF:
AI_WAIFU#2844: Model based language agency
gammascalpset#9792: I just thought of something. What if we wired spoken words audio into the first layer of a pre-trained text-based LM (potentially with some other NN in the middle) and asked it which words were spoken and/or follow the spoken instructions? Would this work better than current voice assistants? has anyone tried before?
EricHallahan#1051: #sp3
cfoster0#4356: A lot of voice recognition systems use a language model
kurumuz#5695: follow the speaken words as, create a classifier?
kurumuz#5695: because you want your language model to call some functions, so an interpreter. |
cfoster0#4356: It's a lot easier to decode what someone is saying if you've got priors about what word/sound sequences are likely
FerroMagnetic#6975: "The Enrichment Center reminds you that the ~~Weighted Companion Cube~~ Generative Associative Network will never threaten to stab you and, in fact, cannot *speak*."
FerroMagnetic#6975: The follow-up of "In the event that the ~~weighted companion cube~~ GAN does speak, the Enrichment Center urges you to disregard its advice." makes it even better
gammascalpset#9792: GAN proceeds to say "no matter what happens, don't give me access to the internet"
FerroMagnetic#6975: In fact I wanted to raise one question from the above, "more accurate world models should give better predictions". Like, at the human level we do have a more accurate model, and a lot of our answers would be "I don't know".
FerroMagnetic#6975: If viewed pessimistically, we'll never be able to raise artificial level above the human.
gammascalpset#9792: Good point
gammascalpset#9792: On the other hand, this model has to predict not one person, but a wide variety of people
gammascalpset#9792: in the best case not only would it need to be as smart as the smartest human (if it wants to model stuff like code or very hard math proofs), but also gain a much better understanding of psychology than any of us has
gammascalpset#9792: this is all in theory, in practice I don't think intelligence would arise out of LM
gwern#1782: yeah, it has to predict the ensemble, not just one person
gammascalpset#9792: I know this is a hot topic in this discord, but do we have any evidence that GPT-3 is doing any meaningful reasoning beyond superficial association between concepts?
cfoster0#4356: It can certainly simulate reasoning agents, yes
cfoster0#4356: Not as robustly as you'd like, but it can do it
Tinytitan#5596: and thats good enough for us
cfoster0#4356: What do you have in mind by superficial association between concepts?
gwern#1782: (in the limit, it needs to be at least as smart as the smartest agent in the corpus, because otherwise, that would represent some reducible loss left to learn)
gammascalpset#9792: maybe in the limit, but that might require petabytes or hexabytes of text generated by the smartest agents
bmk#1476: * assuming there's enough resolution of the agent
gammascalpset#9792: in reality, only an extremely small fraction of the current loss is generated by text that requires high intelligence to generate |
gwern#1782: well, that brings you back to the argument about low vs high order bits: "how do we know that the language model will not focus on learning grammar/style/verbal tics/facts to lower its loss, rather than inducing the higher-order abilities?" ~= "how do we know the language model will not focus on modeling the fine details of stupidity, haste, carelessness, sloth, and ignorance of the dumb agents in the corpus, rather than learning to model the best agents?"
StellaAthena#3530: I don’t think so actually. Have you read the Scaling Laws papers?
StellaAthena#3530: Petabytes of data would train *insanely* large models.
EricHallahan#1051: Hexabyte?
EricHallahan#1051: Exabyte
cfoster0#4356: In the current regime, you should increase your model size much much faster than you should increase your dataset size
StellaAthena#3530: I need to pin that equation, I keep losing it
bmk#1476: i think you might need some careful data crafting but not *petabytes* of data
gammascalpset#9792: the concept of gravity is generally associated with the concept of falling
bmk#1476: also petabytes isnt actually that much
gammascalpset#9792: https://www.lesswrong.com/posts/L5JSMZQvkBAx9MD5A/to-what-extent-is-gpt-3-capable-of-reasoning the first two questions for example
FerroMagnetic#6975: Is there a theoretical name for the model that have read "absolutely everything avaiable and possible"?
gammascalpset#9792: GPT-3 can obviously tell you're negating gravity, so associated concepts should receive "negative scores" in some sense
cfoster0#4356: Mm yeah I think this method is pretty universal. ie "these particular bits on my retina are generally associated with these particular bits inside my internal world model"
gammascalpset#9792: yes... my completely worthless take is that it would at the very least need some kind of working memory it can write and read from, and control over how long to run a computation before spitting out an answer
cfoster0#4356: My completely worthless take is that your completely worthless take is very valid
EricHallahan#1051: AGI?
gammascalpset#9792: assuming that our brains also work by similar vague associations, we're only able to perform abstract reasoning because we can choose to keep reasoning for (almost) arbitrary amounts of time about something and operate on our memory
gammascalpset#9792: current transformers architectures are necessarily limited in how long they can "think" about something by the number of layers
gammascalpset#9792: anyway, when it comes to GPT-3, I think the most important reason why it can't become an AGI is the training objective, not the model architecture |
gammascalpset#9792: it's on my reading list
bmk#1476: id counter argue against this with my 5000 word post lol
bmk#1476: tl;dr i think the training objective is *suboptimal* but could still eventually lead to AGI
gammascalpset#9792: did you already publish? :3
bmk#1476: no, it's still wip
bmk#1476: i posted the draft
bmk#1476: https://docs.google.com/document/d/1HuzRZIuQEX0zlST25kt1BnnnMU6iTzEhT5ncyUxzbf8/edit
bmk#1476: i posted this earlier in this chat but here it is again
gammascalpset#9792: I hope I'm not wasting anyone's time by being very wrong before reading them, but I assume these laws are extrapolated from our current experiments. However, our current experiments measure the size of the dataset, not the size of the subset that requires true ™️ intelligence to model
gammascalpset#9792: for all we know, the Pile might contain only 4 or 5 sentences that require anything more than a lizard's brain to model
StellaAthena#3530: @gammascalpset I believe that the equation that relates dataset size to # of params is $$P(D) = 2\times 10^{-19}D^{2.7}$$
gammascalpset#9792: well, that is an exaggeration, but I hope that I got my point across
TeXit#0796: **Stella Biderman**
Compile Error! Click the :errors: reaction for more information.
(You may edit your message to recompile.) https://cdn.discordapp.com/attachments/729741769738158194/842099634704875561/193204646687408129.png
StellaAthena#3530: That would mean that an exabyte of data would be enough to train a language model with 8 x 10^29 params
StellaAthena#3530: When I say “insanely beyond anything we have” I mean *really* *really* large
bmk#1476: my post also provides an explanation of how the scaling laws relate
gammascalpset#9792: did you guys try to estimate how big a model we could train with our largest supercomputer to date?
gammascalpset#9792: or something along those lines |
StellaAthena#3530: The Pile and its “measly” 850 GiB is good up through 3 x 10^13 params, far beyond anything we currently have
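A quick back-of-the-envelope check of those two numbers against the quoted fit, treating D as a raw byte/token count (that unit choice is an assumption here):
```python
def params_supported(d):
    # P(D) = 2e-19 * D^2.7, the fit quoted above
    return 2e-19 * d ** 2.7

print(f"{params_supported(850e9):.1e}")  # ~3e13 params for the Pile's ~850 GB
print(f"{params_supported(1e18):.1e}")   # ~8e29 params for an exabyte
```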
cfoster0#4356: It's a bit unclear how far the current regime will take us. If the curve switches from L(C) to L(D) then we may need more data than these predict, right?
StellaAthena#3530: @cfoster0 you mean the cross-over point?
StellaAthena#3530: Yeah we have no idea what happens by then
gammascalpset#9792: also, assuming that these features do turn out to be necessary, doesn't it kind of turn into an RL problem?
gammascalpset#9792: having working memory and control over computation time means the model needs to learn to make decisions about those two things
gammascalpset#9792: it seems to be they're control problems
StellaAthena#3530: @gammascalpset we already have systems that do that.
StellaAthena#3530: I don’t see why it would require RL, personally
gammascalpset#9792: sorry, I'm not sure RL is the right word
StellaAthena#3530: Control theory yes.
StellaAthena#3530: Well
StellaAthena#3530: Maybe
gammascalpset#9792: my point is that they're control problems, where the model needs to explore the space of possible behaviours
gammascalpset#9792: and we can assume it will start with a terrible strategy
StellaAthena#3530: Why? Like I said there are known algorithms that are good at this
cfoster0#4356: Good at which part?
CRG#8707: For what it's worth it, conditional computation / increased test time compute doesn't seem to work very well atm: <https://www.youtube.com/watch?v=8iz5v3Q0g9I&t=180s> https://cdn.discordapp.com/attachments/729741769738158194/842102562710749224/dfcf66eebab31203377a933920740782.png
gammascalpset#9792: Such as? I only know of DNC which only kind of works at tiny scales afaik
gammascalpset#9792: my point exactly, we don't know how to teach a model to do that well atm |
cfoster0#4356: I don't think pure conditional computation is that interesting
cfoster0#4356: But planning in a learned model is moreso
StellaAthena#3530: @cfoster0 @gammascalpset @CRG TCP/IP, Lotus Notes admission control, Apache QoS differentiation
StellaAthena#3530: I’m not a control theory expert but I’m having trouble seeing why this isn’t a solved problem tbh. Maybe it would help if someone explicitly states the requirements?
StellaAthena#3530: I could have the wrong mental model of what y’all’re talking about, but I have a very strong prior on “it’s not real to DL people until it’s published by a DL person”
cfoster0#4356: What part of this are you saying is solved?
gammascalpset#9792: seems like AQM has a relatively small action space?
gammascalpset#9792: if you're writing to or querying your own working memory, I presume the action space is continuous and very high dimensional
cfoster0#4356: I thought we were talking very generally about "given a world model and a reward model, how do you learn a control system that allocates fixed computation time and memory to achieve high reward?"
StellaAthena#3530: > yes... my completely worthless take is that it would at the very least need some kind of working memory it can write and read from, and control over how long to run a computation before spitting out an answer.
> assuming that our brains also work by similar vague associations, we're only able to perform abstract reasoning because we can choose to keep reasoning for (almost) arbitrary amounts of time about something and operate on our memory
> current transformers architectures are necessarily limited in how long they can "think" about something by the number of layers
> anyway, when it comes to GPT-3, I think the most important reason why it can't become an AGI is the training objective, not the model architecture
StellaAthena#3530: This is what I’m thinking of
StellaAthena#3530: The trade off between “thinking time” and “acting in the world time” aka “make decisions as promptly as you are required to” is a solved problem unless there’s something specific that’s weird about NNs
gammascalpset#9792: tbh I'm surprised that we haven't succeeded in developing a NN that is good at delaying its answer until it has had more time to compute
bmk#1476: people have tried it, it's just not widely deployed
gammascalpset#9792: it seems like it shouldn't be too complicated, find a way to measure how confident you are in your answer and decide to either perform another step or not
gammascalpset#9792: it's the working memory part I'm more concerned about
bmk#1476: people have already done it |
StellaAthena#3530: Like, we had algorithms that would reason about how to allocate time budgets between explore and exploit since the 70s
StellaAthena#3530: It’s trivial to make a chess bot that can manage its own clock. I know this because I’ve done it as a homework assignment
gammascalpset#9792: last paper I read they didn't get good results, but if y'all can't tell I'm just back from a ML hiatus of almost 2 years so I'm still catching up
bmk#1476: i mean if the simple solution doesnt work, clearly it's pretty complicated
StellaAthena#3530: Can you link to this paper?
cfoster0#4356: Assuming the subproblems in question are in fact solved, the bottleneck would basically be compute. The only working method I know of that would take advantage of an adaptive budget for GPT-N is doing a whole lot of separate autoregressive rollouts (like, tree search style). Unless there's something else y'all had in mind?
StellaAthena#3530: Let me introduce a piece of terminology I think will be helpful: uniform vs non-uniform computing.
We often want to compute a class of functions that naturally stratifies by a notion of “size.” For example, we would like neural networks to be able to compute any continuous function from R^n to R, regardless of the actual value of n. It’s not that we have a fixed, unspecified n. It’s that we want to encompass *all* n.
Similarly, when we talk about the traveling salesman problem we don’t want to solve this problem on graphs of size 104727. We want to solve this problem on *any* graph.
A system computes a family of functions, F, uniformly if there is a single configuration of the system that can compute any f in F. A system computes F non-uniformly if we require a different setting for each striation (typically size of the input)
StellaAthena#3530: Bog-standard NNs are *non-uniform* when computing functions from R^n to R. You need a different NN for each n. Your laptop is not: Turing machines can uniformly compute all computable functions.
StellaAthena#3530: There’s another – more important – sense in which NNs are non-uniform. They have a fixed depth.
StellaAthena#3530: Let $\{G_k\}$ be a parameterized family of computational graphs of depth $k$. Let $NN(G_k)$ refer to the set of all functions that can be computed by a neural network with computational graph $G_k$. Then there does not exist a single graph $G’$ such that $NN(G’) = \cup NN(G_k)$. It cannot be done (assuming some basic non-degeneracy requirements)
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/842113862383435786/193204646687408129.png
gammascalpset#9792: my bad, it seems my own memory wasn't serving me well https://arxiv.org/pdf/1603.08983.pdf
cfoster0#4356: What happens when you use the same NN for all n (ie transformers) or when you allow depth to vary (ie recurrent nets)m
gammascalpset#9792: the network learned to make decisions about computation time in this paper |
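For a rough picture of the mechanism in that paper, a heavily reduced sketch of ACT-style halting: a halting probability is accumulated each step and computation stops past a threshold. The real formulation's remainder/renormalization and per-example bookkeeping are omitted, so treat this as a cartoon, not the paper's algorithm.
```python
import torch
import torch.nn as nn

class ACTishCell(nn.Module):
    """Cartoon of Adaptive Computation Time: keep updating the state until
    the accumulated halting probability crosses 1 - eps (or max_steps)."""

    def __init__(self, size, max_steps=10, eps=0.01):
        super().__init__()
        self.cell = nn.GRUCell(size, size)
        self.halt = nn.Linear(size, 1)
        self.max_steps = max_steps
        self.eps = eps

    def forward(self, x, h):
        total_halt = torch.zeros(x.size(0))
        state = torch.zeros_like(h)
        for _ in range(self.max_steps):
            h = self.cell(x, h)
            p = torch.sigmoid(self.halt(h)).squeeze(-1)  # halting prob this step
            state = state + p.unsqueeze(-1) * h          # probability-weighted state
            total_halt = total_halt + p
            if bool((total_halt > 1 - self.eps).all()):
                break
        return state, total_halt  # total_halt can be penalized as a "ponder cost"

cell = ACTishCell(16)
out, ponder = cell(torch.randn(4, 16), torch.zeros(4, 16))
```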
StellaAthena#3530: @cfoster0 If you build the NNs in a sufficiently ``well patterned'' fashion then $NN(G_1)\cup NN(G_2)\cup\cdots\cup NN(G_{k-1})\subseteq NN(G_k)$
TeXit#0796: **Stella Biderman**
Compile Error! Click the :errors: reaction for more information.
(You may edit your message to recompile.) https://cdn.discordapp.com/attachments/729741769738158194/842114814478123018/193204646687408129.png
StellaAthena#3530: The case for RNNs is open. I had thought I had solved it a couple months ago and got very excited about it until @samantha_bot found a hole in my proof.
gammascalpset#9792: anyway as I mentioned, it's the working memory aspect that I'm much more concerned about
StellaAthena#3530: The maximum working memory of a fixed feedforward NN is bounded
StellaAthena#3530: You can't dynamically add more layers at inference time or something like that because you had to have originally trained those layers for them to do anything
bmk#1476: https://arxiv.org/abs/1807.03819
gammascalpset#9792: If you look at how DNCs are defined, there's multiple parallel "read heads" and "write heads". There's multiple formulations of the external memory unit, but generally it's a big matrix of K vectors of size N. When you write, you either dynamically decide what to overwrite, or some hardcoded logic decides what the most obsolete vectors are. The per-instant action space is already huge: essentially you can write or query for any vector of dimension R^N
gammascalpset#9792: Now, DNCs aren't the only NN that can be said to have some form of working memory, but it seems to me that when it comes to the alternatives, the action space is similarly huge, it's just harder to think about it in those terms; eg. LSTMs also have to decide what to remember and forget
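For concreteness, a minimal sketch of the content-based addressing a single DNC/NTM-style read head performs; the slot count, vector size, and sharpening strength are made up, and write heads, usage tracking, and temporal links are all omitted.
```python
import torch
import torch.nn.functional as F

def content_address(memory, key, strength):
    """Compare a query key against every memory slot by cosine similarity,
    then softmax (sharpened by `strength`) into read weights."""
    sims = F.cosine_similarity(memory, key.unsqueeze(0), dim=-1)  # (K,)
    return F.softmax(strength * sims, dim=-1)

K, N = 128, 64                    # K slots of dimension N (illustrative sizes)
memory = torch.randn(K, N)
key = torch.randn(N)
w = content_address(memory, key, strength=10.0)
read_vector = w @ memory          # the read head's output: a convex combo of slots
```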
gwern#1782: inb4 'transformers are key-value memories'
gammascalpset#9792: yes, I think there's been a misunderstanding: the control problem I'm concerned with is not how much memory to allocate, but what to store in that memory
StellaAthena#3530: So you're talking a replacement for traditional training?
gammascalpset#9792: 😮
Interesting, this should be able to refine its representations as long as it needs to. I wonder if it's as powerful as being able to read and write arbitrary stuff to memory
gammascalpset#9792: what do you mean by traditional?
CRG#8707: Universal transformers are very compute inefficient unfortunately.
StellaAthena#3530: "deciding what info to store in the weights" is another way of saying "train the NN"
gammascalpset#9792: I'm not talking about what to store in the NNs weights, the "working memory" I'm referring to starts zeroed-out at each episode and gets filled in at runtime |
CRG#8707: Well, you could do something like Memformer: https://arxiv.org/abs/2010.06891> https://cdn.discordapp.com/attachments/729741769738158194/842121408993558547/3f1f90e06bfa517f3f0bebdf4daa329f.png
gammascalpset#9792: based on decisions taken by the NN during inference
gammascalpset#9792: yes, this looks more like what I'm thinking of
cfoster0#4356: :gameryes:
Deleted User#0000: I need to rotarify memformer too
cfoster0#4356: No one does this but you can explicitly initialize a transformer to perform associative kv retrieval for a particular data store
cfoster0#4356: In theory you can just manually add new keys and values at runtime. Not terribly efficient though if you don't know when to expire old items
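A bare-bones sketch of that idea: an attention-style key-value store you can append to at runtime. This only shows the retrieval mechanics, not how you would actually initialize or splice it into a transformer layer.
```python
import torch
import torch.nn.functional as F

class KVMemory:
    """Associative key-value store queried with dot-product attention."""

    def __init__(self, dim):
        self.keys = torch.empty(0, dim)
        self.values = torch.empty(0, dim)

    def add(self, key, value):
        # "Manually add new keys and values at runtime."
        self.keys = torch.cat([self.keys, key.unsqueeze(0)])
        self.values = torch.cat([self.values, value.unsqueeze(0)])

    def read(self, query):
        scores = query @ self.keys.T / self.keys.size(-1) ** 0.5
        return F.softmax(scores, dim=-1) @ self.values

mem = KVMemory(dim=8)
mem.add(torch.randn(8), torch.randn(8))
mem.add(torch.randn(8), torch.randn(8))
out = mem.read(torch.randn(8))
```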
gammascalpset#9792: amazing... I wonder if this is an answer someone gave on the internet somewhere, or it indipendently figured out that if giraffes went extinct it would affect tall trees? https://cdn.discordapp.com/attachments/729741769738158194/842125232940515338/Screenshot_2021-05-12_at_21.43.29.png
cfoster0#4356: I doubt it. This is what I meant by "simulating reasoning agents"
cst#9766: Define "figured out": did the model produce a some form of logical representation of the scenario and then attempt to follow that logic? I would say no. Did it create some novel reply based off of giraffes implying predators and trees? Sure, but I wouldn't count this as "figuring out" either: that process was based in some statistical model of giraffes being highly likely to be in a sentence involving trees, not based in some semantic understanding.
cfoster0#4356: Looks like semantic understanding to my eyes 👀
cfoster0#4356: It "got" that the color of the giraffes changes their susceptibility to predation, and that increased predation of them could create room for more tall trees
gammascalpset#9792: Yes, assuming that this exact question hasn't been asked in the training set, it's showing at least 2 levels of understanding that are unlikely to show up by chance: pink giraffes -> dead giraffes -> happy trees. Idk if this whole reasoning can happen inside of the model as it gets asked the question, but notice an interesting detail: before mentioning the trees, it wrote text about the giraffes being vulnerable. Then it had the chance to parse that text. Rudimentary form of working memory?
gammascalpset#9792: of course, this "memory" can only be used to store text rather than arbitrary state, and it doesn't choose what to store in it, it just so happened the first generated sentence (maybe) was helpful to generating the second
cfoster0#4356: Fwiw it did choose what to store in memory, although that choice is probabilistic because of sampling
Sid#2121: Is it possible to get at the underlying bits of a torch tensor? say i wanted to look at the sign / exponent / significand of a fp32 scalar? ( @chilli ?)
chilli#5665: for a specific element?
chilli#5665: or for the tensor as a whole
Sid#2121: well ideally the tensor as a whole
Sid#2121: I'm trying to see if there's a way i can hackily express a bf16 tensor as a fp16 tensor so i can send it over nccl :berk: |
Sid#2121: but also just interested in looking at a specific element
chilli#5665: :thonk:
chilli#5665: well, for any specific element you can just index into it and use `.item()` to get the value out
chilli#5665: You can also use `x.data_ptr()` to get the raw address
Sid#2121: 🤔 I'm not sure that converting super large tensors item by item to bits then back again will actually save me any time
chilli#5665: and then probably do something like
chilli#5665: `x.data_ptr().to_bytes(total_size, sys.byteorder)`
chilli#5665: it seems you can also use
chilli#5665: `io.BytesIO`
Sid#2121: https://pytorch.org/docs/master/generated/torch.frexp.html#torch.frexp apparently this is a thing in torch 1.9
Sid#2121: exactly what i need 😢
Sid#2121: installed the latest torch - it doesn't like frexp for bf16 :sadge: ```
>>> t = torch.randn((2, 3), dtype=torch.bfloat16).cuda()
>>> t.frexp()
terminate called after throwing an instance of 'c10::Error'
what(): "frexp_cuda" not implemented for 'BFloat16'
Exception raised from operator() at /pytorch/aten/src/ATen/native/cuda/UnaryOpsKernel.cu:174 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7fb41a7a4d82 in /home/mchorse/.local/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x5b (0x7fb41a7a171b in /home/mchorse/.local/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #2: at::native::frexp_kernel_cuda(at::TensorIteratorBase&) + 0xb1 (0x7fb430108f01 in /home/mchorse/.local/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so) |
frame #3: at::native::frexp_out(at::Tensor const&, at::Tensor&, at::Tensor&) + 0x211 (0x7fb41bb61271 in /home/mchorse/.local/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
Aborted (core dumped)
```
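One way to get at the raw bits without frexp or data_ptr is torch.Tensor.view(dtype), which reinterprets a tensor between dtypes of the same element size; a minimal sketch below (the actual nccl send is not shown, and whether the bf16-as-fp16 trick is acceptable depends on the transport treating the payload as opaque bytes).
```python
import torch

# Reinterpret the raw bits of an fp32 tensor as int32 (no copy, no conversion).
t = torch.randn(2, 3)
bits = t.view(torch.int32)
sign     = (bits >> 31) & 0x1
exponent = (bits >> 23) & 0xFF
mantissa = bits & 0x7FFFFF

# The same trick lets a bf16 tensor travel over a channel that only knows fp16:
# the values are garbage while "disguised", but the bit pattern survives.
b = torch.randn(2, 3).to(torch.bfloat16)
disguised = b.view(torch.float16)           # same bits, different label
recovered = disguised.view(torch.bfloat16)  # bit-identical to b
assert torch.equal(b, recovered)
```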
Teemochu#8740: Agreed, and TBH the "ideal distribution" is if anything *more* "offensive" than the true distribution, because of humans' tendencies to self-silence and coalesce into misaligned constructed intelligences known as societies.
Teemochu#8740: (One of the things I fear about alignment is that we end up trapped in 21st century ideals forever because we aligned our overlords to the flavor-of-the-century)
Teemochu#8740: if you zoom out, there are a lot of things modern societies take as fundamental morals that are quite unusual or at least not absolute over the millennia
Teemochu#8740: (for a not-too-spicy example, take attitudes about drug use)
bmk#1476: *CEV noises*
Teemochu#8740: funny you say that, given it also stands for closed-eye visuals, aka drug or sleepiness induced hallucinations of coherent forms upon closing one's eyes
bmk#1476: time to make a CEV version of the CBT diagram
bmk#1476: Coherent Eye Volition
bmk#1476: Closed Extrapolated Visuals
Teemochu#8740: But anyway, the ability to properly form a 26th century just as alien to us as the 16th (if not moreso) shouldn't accidentally be thrown away in pursuit of alignment
AI_WAIFU#2844: this is why I think a ground up reevaluation of our entire culture is probably necessary after we solve the more pressing problems, up to and including redesigning language
gammascalpset#9792: Do you think we'll ever find something better than the "freedom as long as you don't harm others" principle?
gammascalpset#9792: I don't think we should abandon it as long as we vaguely resemble homo sapiens (who knows afterwards)
gammascalpset#9792: All our new attitudes on drug use, sex etc. stem from that
gammascalpset#9792: And imo it's fucking awesome
Teemochu#8740: The "new attitude" I'm talking about re:20th century is the fact that it was banned actually |
gammascalpset#9792: Ah lol true
gammascalpset#9792: Well I meant the *newest* attitude
Teemochu#8740: like, "freedom as long as you don't harm others without consent" (with no-nonsense definitions of harm and consent) is great but it's far from the current (or any particular past) way, and I'll leave it at that
AI_WAIFU#2844: Yeah, there's a *lot* of people who aren't on board with that, and that's before you deal with things like limited resources, shelling points, etc...
Teemochu#8740: (Another thing that worries me is AI being paternalistic, given that a lot of our current mores care more about protection than freedoms for lesser beings of all kinds, and an omnipowerful AI would probably internalize its power relationship with humans)
Teemochu#8740: (hence the need for raw no-nonsense definitions of harm and consent in my above statement)
gammascalpset#9792: Yeah well, those people are wrong™️
gammascalpset#9792: Morality: solved :bigbrain:
Kia#2550: Ow I saw this is quite fascinating
ml̩ˈvluː#2850: How is the development of AI by EleutherAI funded?
bmk#1476: read the faq
ml̩ˈvluː#2850: I did.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/842216724308819988/unknown.png
EricHallahan#1051: https://eleuther.ai/faq
EricHallahan#1051: (for reference and easy access)
ml̩ˈvluː#2850: How is everything aside from high-quality GPU funded?
bmk#1476: the gpus are like the #1 cost lol
bmk#1476: everything else is just out of pocket
bmk#1476: but it's peanuts
ml̩ˈvluː#2850: I see. |
bmk#1476: like what were you thinking?
EricHallahan#1051: (I legitimately would like to know.)
EricHallahan#1051: I would say it is our time lol
ml̩ˈvluː#2850: I wished to know the speed at which this project would be allowed to progress by the money available to be used on the projects of EleutherAI.
EricHallahan#1051: Our speed is not constrained by funding but instead by our time.
bmk#1476: is that another way of saying you want to donate? if so, we aren't taking donations currently unless youre talking like really big
ml̩ˈvluː#2850: I cannot donate.
ml̩ˈvluː#2850: I see. My concern stemmed from reading regarding independent developers being constrained by funds. Why is time the main constraint of speed for EleutherAI rather than funds?
bmk#1476: ..because we have enough funds?
EricHallahan#1051: Once compute costs are covered, there really isn't much else needed to do research other than time.
bmk#1476: i dont see what we'd need to buy other than gpus
bmk#1476: and we already have gpus
Louis#0144: Sorry this isn’t off topic
Louis#0144: Didn’t realize
EricHallahan#1051: Besides, we are not getting paid for this, nor are we trying to get paid for this. Most of us here effectively do this as a fun side project.
ml̩ˈvluː#2850: That is also the case for some of the aforementioned independent developers; hence my concerns. How restrictive is time?
bmk#1476: very
bmk#1476: moar devs pls
bmk#1476: we literally have lists of ideas that nobody has time to pursue
AI_WAIFU#2844: correction, we have a spot on the waitlist for gpus |
AI_WAIFU#2844: which, for the really big models, is the bottleneck
Teemochu#8740: aka we have money for gpus but not physical gpus?
Teemochu#8740: so, gpu-denominated dollars?
EricHallahan#1051: Final NeoX hardware is on order.
EricHallahan#1051: We just don't know when exactly it will show up.
AI_WAIFU#2844: for every other project we have more than enough compute and the bottleneck is engineer time
bmk#1476: im mostly interested in other projects tho
ml̩ˈvluː#2850: What?
AI_WAIFU#2844: same
Teemochu#8740: things like subquadratic attention?
AI_WAIFU#2844: lol
EricHallahan#1051: lol
Teemochu#8740: and image recognition? (aka bite-pear encoding)
EricHallahan#1051: Can someone do "stop doing linear attention"?
EricHallahan#1051: I'll have to think up the phrases.
gwern#1782: _feels triggered_
bmk#1476: what a coincidence, this is the one thing im not interesed in
Teemochu#8740: oh so you *are* interested in unaligned catgirls
Teemochu#8740: as long as they're :catgirl3: it's fine
bmk#1476: :gameryes: |
bmk#1476: all catgirls deserve interest
Louis#0144: Pfft
Louis#0144: Cat girls
Louis#0144: Such peasantry
bmk#1476: :goose:
Louis#0144: Ya
Louis#0144: That’s fuckin right
EricHallahan#1051: If you are concerned about us disappearing or failing to reach our goals, rest assured, it is incredibly unlikely bar a solar storm or some other cataclysmic event.
Louis#0144: Know ur place
Teemochu#8740: or the heat death of the universe
EricHallahan#1051: That too.
Teemochu#8740: though I guess we *did* say we'll have 200B before then
Louis#0144: Even if Leo died this week (hypothetical) people would carry on EAI
Teemochu#8740: :ZoeSus:
Louis#0144: 😊
Louis#0144: ☺️ 🔪 🩸 :goose:
Teemochu#8740: you preparing a goose sacrifice for :moloch_chan:?
Louis#0144: Ofc
bmk#1476: well, except nobody would know how to maintain pyfra or eval harness
Louis#0144: Oh |
Louis#0144: gg then
bmk#1476: but neither of those are critical i gurss
bmk#1476: also i know the pile inside and out, nobody else here can match me on that lol
Louis#0144: Oh you’re a pile fan? What’s their best selling meta data album
bmk#1476: i can remember exactly who did what without checking the contributions section of the paper
zphang#7252: @nostalgebraist actually though, how would I cite you. Should I just put your name down as `nostalgebraist`
gammascalpset#9792: I'd be super glad to help with on #deleted-channel's EL summary network in my spare time
gammascalpset#9792: And potentially other stuff
gammascalpset#9792: Was talking about it with @45 , we both have *jobs 'n' shit* but might be able to dedicate some time to it
Atsu#1282: Hi, all. I am interested in getting involved in your organization's research. I've just joined here and tell you my background and skills. My current occupation is a ml engineer in a research team of a startup campany in Japan. My skills are as follows.
(Prog-Lang) python3 for five years
(DL frameworks) pytorch 0.x, 1.x for 3years, tensorflow 1.x for 2 years and numpy for 5 years.
(Other Environments) docker, pyenv, conda, poetry
(Natural Language Skills) A native speaker of Japanese, but I am also interested in language-free methods like bpe and sentencepiece and cross lingual pre-training.
(IaaS) I mainly use GPU instances of AWS and GCP. I have executed an official BERT pretraining script on TPU once but I am not familiar with XLA and evaluations of XLA tensors. I have no experience of large scale distributed multi node pre-training on IaaSs.
Some of the topics I am interested in are: semantically equivalent paraphrase generation, applications of reinforcement learning and imitation learning to text generation and language models, text generation with facts, discrimination free text generation, application of knowledge graphs to text generation
EricHallahan#1051: Welcome!
EricHallahan#1051: Wow, that is a wall of text there. |
EricHallahan#1051: Hmm
Kia#2550: Hmmm
Kia#2550: Wow
Kia#2550: Well you're currently the only person Active in this server Eric
Kia#2550: ;p
EricHallahan#1051: I'll tell you right now: You have more qualifications than I do.
EricHallahan#1051: I'm trying to think of exactly where the best place to send you would be.
zphang#7252: if you don't have any burning project idea in mind, you can hang around the discussion and project channels and jump in when something sounds exciting to you
EricHallahan#1051: I was about to suggest the same.
zphang#7252: (interesting: we don't actually have any project focused purely on general text generation)
BoneAmputee#8363: #speedrun was talking about a need for paraphrasing today
EricHallahan#1051: Yes, that is the case, because we are not particularly interested in downstream applications.
Teemochu#8740: I, for one, am very interested in downstream applications :smug:
EricHallahan#1051: Go somewhere else for that, you know where that is.
Kia#2550: ML engineer:thonk:
Kia#2550: Hmm
EricHallahan#1051: Hmmm
Teemochu#8740: fun fact if you try to warn carlbot using carlbot he'll ask what he did to deserve it
zphang#7252: We're avoiding one specific class of downstream applications, by both conscious and subconscious design
Kia#2550: Really sound interesting |
Teemochu#8740: AGI?
EricHallahan#1051: Unaligned Catgirls
Kia#2550: Are they hiding something?:WowPika: /jk
Teemochu#8740: *taps sign for 3 mana and uses it to summon an unaligned catgirl*
jbustter#5167: how's this as a new emoji on the discord https://cdn.discordapp.com/attachments/729741769738158194/842306620230467584/Discord_1O9zSWmXzc.png
EricHallahan#1051: ¯\_(ツ)_/¯
zphang#7252: it can be the PogChamp of EAI
EricHallahan#1051: TBH, I don't think this is the best place to ask this question.
EricHallahan#1051: ¯\_(ツ)_/¯
EricHallahan#1051: I don't know.
Deleted User#0000: I thought geeks are here lol
Deleted User#0000: Ah anyway
chris_myzel#9645: I just skimmed the question before it was deleted, but I believe `JSON.stringify()` will do what you are asking for in a browser, you'll need to display it somewhere still
Jozef Poniatowski#7589: is there any benefit in using pytorch lighting's training code over the standard huggingface training code? they seem pretty similar to me
EricHallahan#1051: ¯\_(ツ)_/¯
alstroemeria313#1694: what is that, a vqgan encoded/decoded face?
jbustter#5167: "people staring at the camera in disgust at a wedding"
jbustter#5167: the usual vqgan
mkualquiera#3484: This happens to me so much
aze#1010: im trying to fine tune gpt-neo with *very specific* objects described with random stuff |
aze#1010: example generations that i want are ``` - a pink frog with strong legs and a hat
- a boxy [rectangular] car with smoke coming out of it
- a cowboy (with a hat) with very big eyes smoking a very long cigarette```
aze#1010: how would I go about this? just feed it a dataset containing those prompts ^ , how big would that dataset have to be?
EricHallahan#1051: How are you tuning NeoX if we haven't released any models?
aze#1010: i mean neo
aze#1010: w/ a gpu
EricHallahan#1051: Ah, okay.
EricHallahan#1051: If you want good results you want a lot of data. Think at the very least in the tens of mebibytes. I am not the best person to ask about this though.
EricHallahan#1051: ¯\_(ツ)_/¯
aze#1010: dang, thats gonna be hard to achieve
EricHallahan#1051: I'm not good with these estimations though. `:\`
bmk#1476: I've never fine tuned a model on less than like 10gb of stuff at the very least
kip#6104: what is the goal of your fine-tuning? It might be worth just trying to do a few shot prompt
aze#1010: achieve generations like these
kip#6104: yeah just few shot prompt it then i think
aze#1010: whats your idea for the prompts?
EricHallahan#1051: Few shot it.
EricHallahan#1051: You could use those.
kip#6104: if you put those into a prompt for the model |
aze#1010: o
kip#6104: yeah, eg: ```- a pink frog with strong legs and a hat
- a boxy [rectangular] car with smoke coming out of it
- a cowboy (with a hat) with very big eyes smoking a very long cigarette
-
```
kip#6104: just give it that, and hopefully it will generate stuff of similar style
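A minimal sketch of running that few-shot prompt through Hugging Face Transformers; the model checkpoint and generation settings here are illustrative, not a recommendation.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

prompt = (
    "- a pink frog with strong legs and a hat\n"
    "- a boxy [rectangular] car with smoke coming out of it\n"
    "- a cowboy (with a hat) with very big eyes smoking a very long cigarette\n"
    "-"
)

out = generator(prompt, max_new_tokens=30, do_sample=True, temperature=0.9)
# Keep only the first generated line; later lines tend to wander off-style.
completion = out[0]["generated_text"][len(prompt):].split("\n")[0]
print("-" + completion)
```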
aze#1010: ill see what result it gives, ty
BoneAmputee#8363: does finetuning just for a *little bit* on a small corpus help? I've felt like it has in the past, but it could have been in my head. like, getting vanilla gpt-2 to generate tv show transcripts was difficult, but letting it look at a few dozen scripts for like 10 minutes, seemed to help a lot
BoneAmputee#8363: it just wants to overfit really quick and you gotta not let that happen
EricHallahan#1051: ¯\_(ツ)_/¯
EricHallahan#1051: *I'm sorry, my responses are limited. You must ask the right questions.*
kindiana#1016: One epoch on small dataset works ok
kindiana#1016: It's a bit theoretically iffy due to non iid lol
aze#1010: "a red car" "a monkey with a guitar" "a big little"
aze#1010: pretty good for a rough attempt !
kip#6104: maybe add more samples or turn up temperature
aze#1010: its a 99.99
aze#1010: i noticed its generating more than 1 prompt and honestly only the first generation is good
aze#1010: is there a uniform way to prevent that from happening? |
aze#1010: i guess i can just use regex
RazikMazilya#0001: Got into the OpenAI beta
EricHallahan#1051: Congrats
StellaAthena#3530: Very notably, they don't test on text or any hard image datasets
EricHallahan#1051: I was about to say that.
EricHallahan#1051: Like at least do ImageNet.
EricHallahan#1051: MNIST is not a good benchmark lol
StellaAthena#3530: I want to see someone redo this with the hard version of CIFAR-10
alstroemeria313#1694: is that CIFAR-100
EricHallahan#1051: Like I can't even compare it to EfficientNet.
StellaAthena#3530: No there’s a dataset that’s designed to be CIFAR-10 but with harder examples and classes
alstroemeria313#1694: ohh
Kazumi#1297: bigger dataset/classes, or messier image?
finetune#0907: so i managed to load and run gpt-neo with hf's unmodified gpt2 code with identical sampling results to the gpt-neo implementation for an 800 token sequence :berk:
bmk#1476: you should probably PR it to HF to make it so you can do that just through the model config file
bmk#1476: so we never have to use the trainwreck that is the hf gptneo impl ever again
finetune#0907: think it should already work with just the config file, as long as the weights contain the attn.bias matrices with the lower triangle masked out for the local attention layers
bmk#1476: huh
bmk#1476: do you have it in the form factor of a script for hf-neo->hf-gpt2?
bmk#1476: if so link pls |
bmk#1476: i wanna tack it onto my existing model conversion pipeline
StellaAthena#3530: Really? That surprises me. My attempts to do so had issues with local attention.
bmk#1476: im not surprised
bmk#1476: well, i am surprised that you can set the bias in the weights
bmk#1476: i always assumed you needed a code change for that
bmk#1476: this makes me even more disappointed about the whole thing with the neo model class
finetune#0907: just writing a bit extra to write it back into a regular gpt2 model
bmk#1476: awesome
finetune#0907: https://github.com/finetuneanon/misc/blob/main/load_gpt_neo_as_gpt2.py
finetune#0907: the gpt2 class doesn't cast k and q to fp32 in _attn and conv1d seems to give very slightly different results than linear, which might explain why results are different in fp16
finetune#0907: fp32 results matched during my tests
finetune#0907: writing the bias into the weights probably works because it's registered as a buffer
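To make the "lower triangle masked out for local attention" remark concrete, a small sketch of a banded causal mask shaped like GPT-2's attn.bias buffer; the window size is illustrative and the exact buffer layout is an assumption.
```python
import torch

def local_causal_bias(n_ctx, window):
    # Position i may attend to positions max(0, i - window + 1) .. i.
    causal = torch.tril(torch.ones(n_ctx, n_ctx)).bool()
    too_far = torch.tril(torch.ones(n_ctx, n_ctx), diagonal=-window).bool()
    banded = causal & ~too_far
    # GPT-2 keeps its mask as a (1, 1, n_ctx, n_ctx) registered buffer.
    return banded.view(1, 1, n_ctx, n_ctx)

print(local_causal_bias(n_ctx=8, window=4)[0, 0].int())
```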
Teemochu#8740: @geospiza @finetune so is finetuning on half precision on a GTX possible?
bmk#1476: unfortunately, half precision with HF at least is kinda :ptsd:
geospiza#5912: hf?
Teemochu#8740: hugginfgace
EricHallahan#1051: Not that, it is actually PyTorch.
geospiza#5912: huggingface ah
Teemochu#8740: ...
Teemochu#8740: you know what I meant to type |
EricHallahan#1051: torch.multinomial is borked at FP16.
Teemochu#8740: hnnngguface obviously 😛
EricHallahan#1051: Not realistically. You will spend 64x more time per FLOP.
geospiza#5912: so i heard mixed-precision can't be used for inference, what does that mean for fine-tuning?
geospiza#5912: say if i rented out a beefier machine but wanted to run inference on my desktop with less memory
bmk#1476: hnagnginfnace does make it *really hard* to cast to fp32 at the parts that need precision tho
bmk#1476: like this wouldn't be a problem if it was just a normal pytorch model where i can just add .to(float32) on the afflicted areas
geospiza#5912: so not tractable for the most part 😦
Jozef Poniatowski#7589: is there any place to download the CC-Stories (https://arxiv.org/pdf/1806.02847.pdf) dataset ?
Jozef Poniatowski#7589: gcs original link was taken down 🙁 (https://console.cloud.google.com/storage/browser/commonsense-reasoning)
Kharr#7888: You should be casting your final logits that come out of the model to fp32 to avoid all sorts of weird issues with PyTorch multinomial and FP16 training. The rest of it doesn't matter as much. Casting attention matrix to fp32 before Softmax helps a little as well, but eats up memory.
finetune#0907: sounds painful if possible at all
finetune#0907: finetuning in half precision on a v100 with zero2 works tho if you have enough ram
comsplender#7330: I know colab can randomly give you a k80, p100, v100 or T4. How do these rank so i can look out for the best gpu? do you get better gpus with pro or just a higher chance?
finetune#0907: should be k80, t4, p100, v100. usually get p100 or v100 with pro. without is mostly k80, sometimes t4
comsplender#7330: thanks a bunch dude
chris_myzel#9645: Maybe this helps to approach the answer. In nvidias 2021 keynote they describe that with their _megatron_ framework they achieve 16 queries / sec on a GPT3 like model on DGX A100 (8 cards) compared to ~ 1 query / minute on a dual channel CPU. So that's a x960 speedup 🙂 https://cdn.discordapp.com/attachments/729741769738158194/842712984745017374/unknown.png
rom1504#5008: GPU inference is usually 10x faster when batching
rom1504#5008: GPT3 is not really the common case since there's only a single company that has it (and it's not Nvidia)
glazgoglabgalab#5255: (& @cfoster0) I've been flirting with the idea of language agents. Agents that act on mutable language buffers. Sort of like an interactive notebook but entirely text. Both your comments got that curiosity burning again but I'm not sure if there's something to this or I've been seduced by the promise of RL. |
Related papers posted earlier https://arxiv.org/abs/2001.05540
https://arxiv.org/abs/1906.04604
chris_myzel#9645: but shouldn't this very roughly translate to GPT-Neo performance (175B), given there's a full high-bandwidth interconnect between the A100s? Interesting info about the batching speedup (which I guess is included in this slide), thanks.
chris_myzel#9645: Given there's time until a 175B model is there, and money-wise it's crazy but not impossible to build a 175B-capable rack, I'd like to work on exploring what to expect.
rom1504#5008: Well anyone can already create a random 175b model today, that will have the same performance issues as the real thing
rom1504#5008: Just initialize the weights randomly, and then try to do inference
rom1504#5008: I guess that's what Nvidia did
rom1504#5008: That way you can measure how many hundreds of gpus you need to make it run in less than minutes
gammascalpset#9792: it's been a while since I thought about how gradients are computed, but in principle, do you lose that much performance by not holding all the weights/activations/gradients in GPU memory and instead keeping them in system memory or even on a hard drive?
chris_myzel#9645: is there a place to cheer about how good GPT-Neo is already? I've tried some query-engineering techniques an OpenAI dev shared in an hour-long Google Hangouts talk.
This works convincingly well with Gpt-N:
```
Title: Toward the realization of highly autonomous driving and the creation of new values for cabin space
Press-Release: [the whole text of a press release from sony, find it at https://www.sony.com/en/SonyInfo/News/Press/202104/21-033E/]
Title: Myzel.io releases Sentinel, a new A.I. powered companion for your daily life
Press-Release: |
```
generated text:
> Myzel.io has announced the release of Sentinel, a smart-phone-sized A.I. companion that learns the world around it and can help humanity with complex tasks that require human intelligence.
>
> Sentinel is a machine-learning system that learns from its interactions with the environment, while retaining its
gammascalpset#9792: if you don't have 1 TB of GPU memory lol
gammascalpset#9792: iiuc you only need activations + weights from one layer to compute the next activations, and only need gradients + activations + weights from one layer to compute the gradient of the previous
gammascalpset#9792: Idk if that would be the case, but I kind of get the feeling that the matrix operations themselves take so long you should be able to preload the next info from a hard drive?
gammascalpset#9792: I mean, reading from a HDD is slow but O(n), matrix multiplication is O(n^3) (yes, I know there's algos with lower complexities, but in real implementations they're used rarely)
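A quick back-of-the-envelope on that, with made-up but plausible bandwidth numbers (everything below is an assumption, not a benchmark):
```py
params = 175e9
weight_bytes = params * 2            # fp16 weights, roughly 350 GB
hdd_bw, nvme_bw = 150e6, 3e9         # bytes/s, rough figures for HDD vs fast NVMe

print(weight_bytes / hdd_bw / 60)    # ~39 minutes to stream the weights once from HDD
print(weight_bytes / nvme_bw / 60)   # ~2 minutes from NVMe
```
So streaming from a spinning disk every step looks hopeless, but NVMe or system RAM is at least in the right ballpark.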
gammascalpset#9792: now I want to know what it retains 😦
chris_myzel#9645: I always suffer from that 😄
gammascalpset#9792: proof that GPT-Neo is evil
gammascalpset#9792: causing human suffering by halting at the most suspenseful moment
rom1504#5008: You might be interested by deepspeed @gammascalpset ; it tries to do that kind of GPU off loading
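For the curious, a rough sketch of what that looks like as a DeepSpeed config (written as the Python dict you would hand to `deepspeed.initialize`; field names follow the DeepSpeed ZeRO-Offload docs, values are placeholders):
```py
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},  # optimizer state lives in system RAM
        "offload_param": {"device": "cpu"},      # parameters paged in/out of GPU memory
    },
}
# model_engine, optimizer, _, _ = deepspeed.initialize(model=model, config=ds_config)
```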
𓅬 gabriel_syme 𓅬#3220: can you explain that technique? is that link a real press release, or is the link itself the technique?
chris_myzel#9645: I used the whole text https://cdn.discordapp.com/attachments/729741769738158194/842736811389878302/unknown.png
chris_myzel#9645: the link is a real press release that I want to copy the style from. If you have a blog article as input, that follows a pattern like
|
> Museum of modern art: The museum of modern art presents […]
>
> Louvre: The louvre is well known for […]
>
The generated text will most likely a) be in the same form and b) be about museums
chris_myzel#9645: this is the link that shows some query-eng techniques https://www.youtube.com/watch?v=ffKXEvnaAZM
CKtalon#7792: speaking of GPT-3, one way to have it easily fail the Turing test is to ask it about COVID-19 (since its data doesn't include COVID). GPT-Neo probably knows about it?
CKtalon#7792: https://cdn.discordapp.com/attachments/729741769738158194/842738393623298068/unknown.png
Kia#2550: Hmm, updating models is not that viable, but damn... even GPT-3 doesn't know about COVID-19
chris_myzel#9645: might be hitting their content filter also since they are very focused on not letting it be misused ?!
chris_myzel#9645: but according to the video I shared it should indicate this in the playground if so
chris_myzel#9645: does HF `repetition_penalty` apply the same logic as on `diversity_penalty` where higher is more diverse?
concedo#4289: The dataset GPT-3 was trained on was from 2018 and prior. There's no way it *can* know about Covid.
alstroemeria313#1694: openai doesn't regularly add to their dataset and then fine-tune w/ the augmented one?
kurumuz#5695: no
CKtalon#7792: https://twitter.com/jackbandy/status/1392490138190680064/photo/1
comsplender#7330: is it possible to retrain gpt-neo2.7B with a google colab v100? im using transformers and im getting a cuda error that im out of memory. GPU has 16GB ram
Kia#2550: Wait amazon?
CKtalon#7792: doubt so |
Kia#2550: True true, I thought it was 2019
comsplender#7330: what is the biggest model size i could realistically load and retrain on 16gb?
EricHallahan#1051: Of ours?
EricHallahan#1051: 1.3B probably if you use SGD?
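Roughly what that looks like as a sketch (the model name is the public 1.3B checkpoint; whether it actually fits in 16 GB still depends on sequence length and batch size):
```py
import torch
from transformers import GPT2TokenizerFast, GPTNeoForCausalLM

model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B").cuda()
tokenizer = GPT2TokenizerFast.from_pretrained("EleutherAI/gpt-neo-1.3B")

# Plain SGD keeps no Adam moment buffers, which is what makes this borderline feasible.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)

batch = tokenizer("some training text", return_tensors="pt").to("cuda")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```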
alexyz#3459: it's much better to just finetune if you want specific results like stories and stuff
alexyz#3459: retraining it would be a pain
glazgoglabgalab#5255: Quote from @bmk
> imo the solution is "simple", just make the text universe more and more complicated so that learning the real world and figuring out how it produced the text universe is easier than modelling the text universe"
From an alignment viewpoint it feels like we're moving further away from what makes oracles preferable.
Another related paper
https://arxiv.org/abs/1909.00109
triggerhappygandi#0001: I hate that they always compare to CPU servers
triggerhappygandi#0001: Like, who is using CPU servers for inference from multi-billion-parameter models?
gwern#1782: at least RAM is cheap on servers!
DanHendrycks#8913: Will smaller GPT-Neo models be available? If they were available, then I could just use Neo instead of various GPT-2 models for research papers. (Recall GPT-2 has 0.1B, 0.3B, and 0.7B models as well as 1.5B.)
EricHallahan#1051: We have a 125M model on Hugging Face Model Hub.
bmk#1476: we were also going to train 350M and 760M models but we haven't gotten around to it yet
bmk#1476: please note that the 125M model was trained for fewer tokens than the other models
bmk#1476: so you can't exactly do a scaling law using the current neo models
bmk#1476: we'll probably do a more consistent set of models at some point
gammascalpset#9792: just realized, nvidia claims A100s do 5 petaFLOPS
gammascalpset#9792: you could build an exascale supercomputer with 200 of them
gammascalpset#9792: although... do 200 of them exist? 🤔
StellaAthena#3530: Given that we have access to 48 A100s, I am highly confident that 200 exist
EricHallahan#1051: I don't know if they exist in one place though lol
gammascalpset#9792: oh, no I think I meant DGX A100s
StellaAthena#3530: Ohhh
EricHallahan#1051: Ohhhh
StellaAthena#3530: Maybe not tbh
gammascalpset#9792: how many gpus in one of those?
Sid#2121: I am certain that >200 DGXs exist
Sid#2121: msft probably owns most of them lol
StellaAthena#3530: Yeah, probably
StellaAthena#3530: I know my company tried to get two but the global GPU shortage caused supply problems
gammascalpset#9792: that's what I was thinking of, if you tried to buy more than one (assuming you could afford it) they'd be backordered forever
asparagui#6391: 8 is the standard dgx but there's a 16 gpu variant
bmk#1476: hey does anyone wanna try generating from this model https://huggingface.co/lg/ghpy_2k
bmk#1476: it's 2.7B; i dont have a generation script handy and im very lazy |
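Something like this should do for anyone who wants to try (untested sketch; if the `lg/ghpy_2k` repo doesn't ship tokenizer files, the base GPT-Neo 2.7B tokenizer is the obvious fallback since this is a finetune of it):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("lg/ghpy_2k").cuda()   # ~11 GB in fp32
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")

prompt = "def median(x):"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
output = model.generate(input_ids, do_sample=True, temperature=0.8, top_p=0.95,
                        max_length=256, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0]))
```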
EricHallahan#1051: ```py
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
from tensorflow.python.ops import state_ops
from tensorflow.python.ops import variable_scope as vs
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.training import basic_session_run_hooks
def _get_var(name, shape=None, dtype=None, initializer=None, trainable=True, collections=None):
if not name:
return None
elif isinstance(name, (list, tuple)):
return [_get_var(x, shape, dtype, initializer, trainable, collections) for x in name]
else:
return _get_var(name, shape, dtype, initializer, trainable, collections) |
def _get_variable(name, shape=None, dtype=None, initializer=None, trainable=True, collections=None):
if not name:
return None
elif isinstance(name, (list, tuple)):
return [_get_variable(x, shape, dtype, initializer, trainable, collections) for x in name]
else:
return _get_variable(name, shape, dtype, initializer, trainable, collections)
```
Sid#2121: wtf is this eric
EricHallahan#1051: The generation.
Sid#2121: ?
Sid#2121: are you sure
EricHallahan#1051: Yes
EricHallahan#1051: `import numpy as np`
Sid#2121: i... don't think that is doing any generation. there's not even any functions being called lol
Sid#2121: oh
Sid#2121: right, the model made that?
bmk#1476: :berk: |
Sid#2121: i thought you were saying that was how to run the model
bmk#1476: im assuming so
EricHallahan#1051: No.
EricHallahan#1051: > hey does anyone wanna try generating from this model
EricHallahan#1051: I did
bmk#1476: @EricHallahan run it with
```
def median(x):```
Sid#2121: @EricHallahan run it with
```python
def make_hentai(args):```
EricHallahan#1051: Note that I have beam search on.
Sid#2121: turn it off lol
Sid#2121: beamsearchbad
EricHallahan#1051: Yeah, I just realized that.
EricHallahan#1051: ```py
def median(x):
"""Returns median of the list elements.
|
Args:
x (list[float]): Elements to be sorted.
Returns:
float: Median element.
"""
return np.median(x) if len(x) > 1 else x[0]
```
Sid#2121: :berk: that's cheating
EricHallahan#1051: ```py
def median(x):
"""Return median of elements in x, x must be an array or list."""
if not isinstance(x, (list, np.ndarray)):
raise ValueError('x must be an array or list of integers')
median = np.median(x)
if x.shape[-1] == 1:
return x[0]
return np.median(x.flat)
```
bmk#1476: lol |
Sid#2121: `x.flat` :thonk:
Sid#2121: it's pretty good tho
bmk#1476: what's something more complicated
bmk#1476: that we can ask the model for
EricHallahan#1051: ```py
def median(x):
'''
median(x: Float64) -> UInt64
'''
x_dtype = _get_dtype(x)
x_shape = x.shape
return UInt64(np.median(x, axis=None, dtype=x_dtype).view(x_dtype))
```
Sid#2121: bogosort
bmk#1476: eric do this one lol
Sid#2121: ```python
def bogo_sort(a):
"""Sorts array a using Bogo sort"""
```
Sid#2121: @EricHallahan |
Sid#2121: ```python
def bubble_sort(array):
"""Sorts array using Bubble sort"""
```
EricHallahan#1051: I accidentally set the length too low for this one:```py
def bogo_sort(a):
"""Sorts array a using Bogo sort"""
if len(a) != 1:
raise RuntimeError('Array must have one element: {0!r}'.format(a))
k = 0
while k < len(a):
i = 0
j = 0
while i < len(a) and j < len(a[k]):
a[k], a[i] = a[i], a[k]
i += 1
k += 1
while j < len(д) and a[i] == a[j]:
a[i] = a[j]
i += 1 |
j += 1
```
Sid#2121: :sadge:
EricHallahan#1051: Note the token sampling error.
Sid#2121: ```python
import torch
import torch.nn as nn
class MLP(nn.Module):
def __init__(self):
super(MLP, self).__init__()
``` can you try this @EricHallahan
EricHallahan#1051: ```py
def bubble_sort(array):
"""Sorts array using Bubble sort"""
print(array)
# print('%s' % array)
for i in range(0,len(array)):
# print(array[i])
while( i < len(array)): |
left = array[i]
right = array[i+1]
vanishing = False
while(i>0):
# if(right - left >=0):
# if(left >= right):
# vanishing = True
# print(array[i])
# print(array[i+1])
# i -= 1
# elif(i<0):
# i += 1
# print(array[i])
i += 1
# print(array[i])
left = array[i]
right = array[i+1]
vanishing = True
# print(array[i])
# print(array[i+1]) |
# i -= 1
# print(array[i])
if(vanishing):
# print(' ')
# array[i] = array[i] - array[i]
# array[i+1] = array[i+1] - array[i+1]
# array[i] = array[i] + array[i]
# array[i+1] = array[i+1] + array[i+1]
# array[i] = array[i] - array[i]
# array[i+1] = array[i+1] - array[i+1]
array[i] = left + left - right
array[i+1] = right + right - left
else:
array[i] = left + left - right
array[i+1] = right + right UFO
array[i]
```
Sid#2121: it's terrible at sorting algos :berk:
Sid#2121: I guess it would be better in C or something. Probably not many people are really writing sorting algorithms in python apart from in tutorials and stuff
EricHallahan#1051: ```py |
import torch
import torch.nn as nn
class MLP(nn.Module):
def __init__(self):
super(MLP, self).__init__()
self.input_dim = 3
self.lin0 = nn.Linear(self.input_dim, 64)
self.lin1 = nn.Linear(64, 1)
self western_feature_normalizer = nn.FeatureNormalization(
input_mean=self.input_dim / 2.0,
input_std=0.5)
def forward(self, x):
x = self.lin0(x)
x = self.lin1(x)
x = self western_feature_normalizer(x)
return x
|
class FeatureDetector(nn.Module):
def __init__(self):
super(FeatureDetector, self).__init__()
self.feature_detector = nn.Sequential(
#MLP(64), nn.Dropout(),
nn.Linear(64, 256),
nn.Dropout(0.25),
nn.Linear(256, 256),
nn.Dropout(0.1),
nn.Linear(256, 256),
skinny, nn.ReLU(True),
)
```
Sid#2121: western_feature_normalizer wtf
EricHallahan#1051: ¯\_(ツ)_/¯
Sid#2121: so close :blobsad:
EricHallahan#1051: Decoding error
Sid#2121: I feel like it's not really that much better than gpt-neo lol
EricHallahan#1051: Greedy Sampling```py |
import torch
import torch.nn as nn
class MLP(nn.Module):
def __init__(self):
super(MLP, self).__init__()
def forward(self, x):
return x.view(x.size(0), -1)
```
bmk#1476: this model is trained only on python
bmk#1476: i mean, only 2k iters
bmk#1476: I'll keep posting better models as they become available
aze#1010: how much vram does the 2.7B Neo model use? (on inference)
EricHallahan#1051: >10 GiB
aze#1010: ahh
EricHallahan#1051: at binary32
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/842846632756510831/unknown.png
bmk#1476: here's what the loss looks like
bmk#1476: the one i just posted is the 402k one |
bmk#1476: i should have the 404k uploaded soon
nostalgebraist#3542: updated my `transformer-utils` package with a function `partial_forward` .
you give it an input and some names of internal modules, like `['h.0', 'h.2.attn.c_attn', 'h.5.mlp.c_proj']`, and it returns those modules' outputs without running any later layers. like tensorflow fetches
this is really more of a general torch util than a HF transformers util. there are probably other implementations out there (?). anyway, this is what i use to efficiently train extra heads on top of my bot's LM
see https://github.com/nostalgebraist/transformer-utils#get-activations-from-any-part-of-the-model
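For anyone who wants a similar effect without the package, a generic torch sketch using forward hooks (note: unlike `partial_forward`, this still runs the later layers, it just also collects the intermediate outputs):
```py
import torch

def fetch_activations(model, names, *args, **kwargs):
    """Run a forward pass and return {name: output} for the named submodules."""
    outputs, handles = {}, []
    modules = dict(model.named_modules())
    for name in names:
        hook = lambda _mod, _inp, out, name=name: outputs.__setitem__(name, out)
        handles.append(modules[name].register_forward_hook(hook))
    try:
        with torch.no_grad():
            model(*args, **kwargs)
    finally:
        for h in handles:
            h.remove()
    return outputs
```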
bmk#1476: @EricHallahan https://huggingface.co/lg/ghpy_4k new github model
bmk#1476: oh yeah @nostalgebraist in case you're interested, im fine tuning 2.7B on github (python-only) rn and uploading a checkpoint every 2k iters
bmk#1476: https://huggingface.co/lg/ghpy_2k
bmk#1476: https://huggingface.co/lg/ghpy_4k
nostalgebraist#3542: how many iters is an epoch?
bmk#1476: tbh i actually don't know but the data should be pretty hefty so that we aren't repeating data for quite a while
bmk#1476: @AI_WAIFU any idea how big the data is before tokenizing?
nostalgebraist#3542: oh btw i put the LM for my bot up at https://huggingface.co/nostalgebraist/nostalgebraist-autoresponder-2_7b
StellaAthena#3530: @nostalgebraist I almost think the question “how many iters for an epoch” is meaningless at this (data) scale
bmk#1476: i dont actually know how big the data is
Sid#2121: if you tokenized with neo, all the tfrecords should have the number of documents in their filename
StellaAthena#3530: There’s enough data to train for a year or something absurd like that |
bmk#1476: ai_waifu processed it
bmk#1476: but it's kinda chonk
bmk#1476: probably
StellaAthena#3530: It’s the data we collected for the Pile, no?
Sid#2121: well it's only the python parts of github
nostalgebraist#3542: cool... i guess i'm trying to get a sense of scale
bmk#1476: i mean the original data is 600GB that we then filtered for github only and whatever other heuristics ai_waifu added
bmk#1476: and i have no idea if he filtered all of github
nostalgebraist#3542: so maybe "what is the batch size" is my real question
Sid#2121: btw @bmk can we make the text data available
bmk#1476: what data?
Sid#2121: the data we are talking about... right now
bmk#1476: the githubpython data?
Sid#2121: yeah
bmk#1476: uhh
bmk#1476: again, ai_waifu processed this months ago and nobody got around to actually running the run until now
bmk#1476: so any inquiries are to be directed to him
bmk#1476: I'm just here to test out pyfra and to make some nice python models
Sid#2121: @AI_WAIFU i direct my inquiry to you
Sid#2121: * |
bmk#1476: im pretty sure i asked ai_waifu to post the filtering scripts at some point but then i forgot to follow up
AI_WAIFU#2844: The dataset was somewhere between 10-20GBs I can't remember exactly. I totally forgot about posting those scripts.
bmk#1476: oh did you only filter the small component?
AI_WAIFU#2844: if by "small" you mean the 600GB dataset, no
bmk#1476: i find it hard to believe there's only 20gb of python in 600gb
AI_WAIFU#2844: I grabbed quite a bit more
bmk#1476: o.O
AI_WAIFU#2844: Yeah I was suprised too
bmk#1476: how come there's only 20gb? is there really that little python?
AI_WAIFU#2844: but the problem is a lot of the python repos are "junk", a.k.a. not python code
AI_WAIFU#2844: if I didn't throw that away we'd probably have quite a bit more data
bmk#1476: ah
bmk#1476: now I'm interested in the heuristics
AI_WAIFU#2844: I don't think it was anything fancy. I think the biggest filter was "does this file end with .py?"
AI_WAIFU#2844: and that got rid of almost everything
bmk#1476: lol
bmk#1476: wat
StellaAthena#3530: So, I'm trying to use a package written in JAX
StellaAthena#3530: And it keeps failing to find the GPUs
StellaAthena#3530: 😦 |
bmk#1476: welcome to cuda hell
bmk#1476: last time i just gave up and used torch lol
AI_WAIFU#2844: yeah that's what I said when all of a sudden my 1TB of ingress just up and disappeared.
StellaAthena#3530: It's running on CPU, just slowly >.>
AI_WAIFU#2844: I think I also had some heuristic for dealing with forks
AI_WAIFU#2844: That also cut things down by a lot
bmk#1476: i still want to do a full download of all of Github at some point
bmk#1476: and then put it all through the blender
EricHallahan#1051: Do you have CuDNN installed?
StellaAthena#3530: IDK
StellaAthena#3530: Is that default installed on the K8s
EricHallahan#1051: No
EricHallahan#1051: You can't install it really either.
EricHallahan#1051: You effectively need to bake it into the Docker image.
EricHallahan#1051: TensorFlow won't let you use GPUs without it, I learned that the hard way.
StellaAthena#3530: 😢
StellaAthena#3530: Are there any hurdles to setting it up in the default image?
EricHallahan#1051: It should be as simple as updating NeoX to use the one with it as a base image.
EricHallahan#1051: We never needed it for NeoX, so me and Sid decided that it didn't matter if it was there or not and left it as it was.
EricHallahan#1051: I should just create a general purpose Docker image so that we don't have to keep piggybacking off of NeoX. |
StellaAthena#3530: Plz
Teemochu#8740: > western_feature_normalizer
MLP *is* a Western cartoon
StellaAthena#3530: @EricHallahan what do I need to do to bribe you to set up the docker image
EricHallahan#1051: ¯\_(ツ)_/¯
EricHallahan#1051: I should be able to get it done tomorrow.
EricHallahan#1051: I don't have time today, I have a deadline for other stuff.
StellaAthena#3530: The difference equivariance can make.... https://cdn.discordapp.com/attachments/729741769738158194/842937483863392267/Screen_Shot_2021-05-14_at_9.31.58_PM.png
StellaAthena#3530: (Plain is a normal model, unlabeled is equivariant)
AI_WAIFU#2844: what's the dataset?
StellaAthena#3530: It's a toy dataset
> Let's get started with a toy dataset: learning how an inertia matrix depends on the positions and masses of 5 point masses distributed in different ways. The data consists of mappings (positions, masses) --> (inertia matrix) pairs, and has an $G=O(3)$ symmetry (3D rotation and reflections). If we rotate all the positions, the resulting inertia matrix should be correspondingly rotated.
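For concreteness, the target of that toy mapping is just the standard point-mass inertia matrix (written out here for reference, not taken from the EMLP code):
```py
import numpy as np

def inertia_matrix(masses, positions):
    """masses: (n,) array, positions: (n, 3) array -> (3, 3) inertia matrix."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, positions):
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I
```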
AI_WAIFU#2844: ah
StellaAthena#3530: I'm playing with the EMLP framework
EricHallahan#1051: Rad
𓅬 gabriel_syme 𓅬#3220: this looks really cool 🙂 need to think of datasets this can apply, I'm sure my domain is full of them
𓅬 gabriel_syme 𓅬#3220: I guess point clouds? But I'm not doing much of that yet
StellaAthena#3530: And this is using O(3) instead of SO(3) (which is the wrong group) https://cdn.discordapp.com/attachments/729741769738158194/842940712588935209/Screen_Shot_2021-05-14_at_9.45.06_PM.png
StellaAthena#3530: I think
StellaAthena#3530: The plot looks the same. hmmmm |
StellaAthena#3530: Here's SO(3) https://cdn.discordapp.com/attachments/729741769738158194/842942358878158878/Screen_Shot_2021-05-14_at_9.51.48_PM.png
StellaAthena#3530: Hot damn
StellaAthena#3530: @Louis I just need to code up the permutation tests and this is a paper
EricHallahan#1051: Do you need the cuDNN stuff still?
StellaAthena#3530: yes plz
StellaAthena#3530: This is on TPU lol
EricHallahan#1051: I'll get around to it when I can.
bmk#1476: what cudnn version u need?
bmk#1476: or rather
bmk#1476: what cuda version u have
StellaAthena#3530: Reusing existing connection to developer.nvidia.com:443.
HTTP request sent, awaiting response... 403 Forbidden
2021-05-15 02:00:02 ERROR 403: Forbidden.
bmk#1476: ok lemme download the file and send it to you
StellaAthena#3530: Non-equivariant: average test equivariance error 1.58e-01
O(3)-equivariant: Average test equivariance error 1.58e-01
SO(3)-equivariant: average test equivariance error 3.01e-07
bmk#1476: kinda sus
Louis#0144: Oooo
Louis#0144: Concerning |
𓅬 gabriel_syme 𓅬#3220: sleep
juancamilog#9077: Rescue mission for sci-hub: https://www.reddit.com/r/DataHoarder/comments/nc27fv/rescue_mission_for_scihub_and_open_science_we_are/
asparagui#6391: @StellaAthena it should be noted that if you just do `pip install jaxlib` it will pull in the cpu-only version
StellaAthena#3530: @asparagui Thanks for the tip, but I have 8 GPUs I would much rather use 😛
asparagui#6391: well i mean that unless you explicitly installed the cuda version it will only use the cpu, which is what it sounded like you're seeing
StellaAthena#3530: Ohhh
kindiana#1016: even if you installed the cuda version you also need to explicitly install cudnn and point it at the right path as it doesn't bundle it (like pytorch does)
asparagui#6391: https://github.com/google/jax#pip-installation
nev#4905: does BERT or GPT take longer to train to convergence at the same parameter count?
chilli#5665: Is this also true for CPU?
EricHallahan#1051: Why would that be the case for CPU?
chilli#5665: Like, mkldnn
EricHallahan#1051: ¯\_(ツ)_/¯
chilli#5665: Most hardware vendors have their own libraries lol
kindiana#1016: I believe they compile in eigen or something in the xla runtime
CRG#8707: <https://arxiv.org/pdf/1810.04805.pdf#page=16> https://cdn.discordapp.com/attachments/729741769738158194/843105279700631552/aa7eff389b34a9e4874a25bed44d95ce.png
nev#4905: I take this as a sign that MLMs have a different scaling law
CRG#8707: Since you only mask 15% of tokens, what exactly counts as an epoch for MLM? :thonk:
chilli#5665: hmm, but there's no way that eigen is faster than MKLDNN on Intel CPUs
chilli#5665: right? |
chilli#5665: When I looked into this last I saw that there were some references to using MKLDNN with XLA
kindiana#1016: No, but I don't think CPU performance for matrix operations is that high on the list 🤔
chilli#5665: that's true
chilli#5665: I think some of the people I've been talking to have some unusual use cases
chilli#5665: lol
mgostIH#0245: Are there decent python libraries for sparse matrix computation?
chilli#5665: depends on what you mean by "decent"
chilli#5665: lol
chilli#5665: if you're used to Julia you'll probably be disappointed
mgostIH#0245: The most decent in your opinion I guess
mgostIH#0245: Julia has better support for it?
mgostIH#0245: I wouldn't mind learning it
chilli#5665: I've used PyTorch Geometric for my research before and found it acceptable, but I'm definitely somewhat jealous of Julia's support
chilli#5665: lol
chilli#5665: (also depends on whether you need GPU support)
mgostIH#0245: Well idk if there's much GPU support for sparse algorithms in general
mgostIH#0245: Regardless, what about Julia for sparse stuff?
chilli#5665: it's pretty good?
chilli#5665: wdym
chilli#5665: they just generally have a community that works a lot more in domains where sparsity is integral |
gpt-3#9219: 👀
Kia#2550: We got a bot, catch them
Jozef Poniatowski#7589: anyone have experience using the a100 dgx?
Jozef Poniatowski#7589: wondering is it worth the money (vs a 3090 setup for the same money)
EricHallahan#1051: ¯\_(ツ)_/¯
Jozef Poniatowski#7589: 😂
EricHallahan#1051: It is going to depend on your use case.
Jozef Poniatowski#7589: it's for our lab, 1 a100 dgx for more money vs two 8gpu 3090 servers
we don't do any large lm stuff
Daj#7482: If you're not doing large model parallel training, go for the latter
Daj#7482: The selling point of the DGX boxes is really good GPU-to-GPU interconnect
Jozef Poniatowski#7589: ok thanks that's what we were thinking as well
ah isee
CKtalon#7792: how is it the same money?
Jozef Poniatowski#7589: yeah actually it's 0.5 dgx, as we'd share that option with another lab
CKtalon#7792: isn't a A100 DGX starting from 6 digits USD?
Jozef Poniatowski#7589: when doing pretraining with something like an MLM objective, is it better to make the masked datasets beforehand? it seems like they do this in google's bert pretraining code
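The alternative to baking masked copies into the dataset is dynamic masking, where the mask is re-sampled every time a batch is drawn; a rough sketch of the usual 80/10/10 recipe (not Google's code, just the standard scheme):
```py
import torch

def dynamic_mask(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """Return (masked_inputs, labels) with a freshly sampled MLM mask."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()
    masked = torch.rand(input_ids.shape) < mlm_prob
    labels[~masked] = -100                             # loss only on masked positions
    r = torch.rand(input_ids.shape)
    input_ids[masked & (r < 0.8)] = mask_token_id      # 80% -> [MASK]
    replace = masked & (r >= 0.8) & (r < 0.9)          # 10% -> random token
    input_ids[replace] = torch.randint(vocab_size, input_ids.shape)[replace]
    return input_ids, labels                           # remaining 10% keep the original token
```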
gwern#1782: https://venturebeat.com/2021/05/15/gpt-3s-free-alternative-gpt-neo-is-something-to-be-excited-about/
bmk#1476: this headline causes me pain
gwern#1782: and yet, you are responsible for it. curious! |
bmk#1476: oh god it's wrong in so many ways
bmk#1476: this article has like at least 3 errors from the short amount of time I've spent skimming it
bmk#1476: time to go cry in a corner
Kia#2550: The headline
Kia#2550: :feelgoodman::gooseknife:
cognomen#6297: free as in assuming you have a moderately sized brewery at your disposal
cognomen#6297: not free as in beer
Jozef Poniatowski#7589: i havent used gpt neo but i have to say im very glad it exists, same goes for the pile, bless you all
Daj#7482: Congrats to @StellaAthena for founding Eleuther :berk:
gwern#1782: tfw tensorfork airbrushed out of history due to anti-furry/anime/pony bias
Daj#7482: something something weirdness points
bmk#1476: who wants to reach out to them and break the news that ada looks like it probably isn't 2.7B
AI_WAIFU#2844: have you looked at our website recently?
AI_WAIFU#2844: shh
gwern#1782: I thought that was still a secret
Daj#7482: No anime in sight, looks good to me
Sid#2121: lmao you've been totally cut out of eleuther history too
Sid#2121: ```Stella Biderman, Leo Gao, Sid Black, and others formed EleutherAI with the idea of making AI technology that would be open source to the world. ```
Sid#2121: F
bmk#1476: i mean OA is keeping silent but using eval harness the numbers suggest it |
Daj#7482: The secret power behind the throne
Daj#7482: Despite being the public face
Sahl#0630: et al strikes again 😳
Daj#7482: I am, in fact, the hacker known as Eleuther
bmk#1476: Archibald Eleuther runs this place from the shadows
Sahl#0630: how is one hacker, Eleuther, replicating GPT-3 by themselves?
Sahl#0630: They must be using 3 keyboards at once or something
𓅬 gabriel_syme 𓅬#3220: with a computer and an endless supply of clean hoodies
bmk#1476: who is this hacker eleuther
StellaAthena#3530: 3 girls 1 keyboard?
Daj#7482: I cannot comment on the number of girls using my keyboard
bmk#1476: it's ok, my prior is sufficiently strong that comment would not shift it much
Sid#2121: I can - it is none
Daj#7482: You don't know how many keyboards I have
mkualquiera#3484: The expected value is .5
bmk#1476: my prior is concentrated on a point mass on zero
Daj#7482: My number of girls at my keyboard has a uniform prior
bmk#1476: this is a proof by construction that zero is in fact a probability
mkualquiera#3484: you really don't think connor is even a liiiiiiittle bit gay?
Sahl#0630: zero is unlikely to be a probability |