Parker#3197: I think it just asks you to speak a sentence
Parker#3197: so maybe not in question and answer form
chirp#4545: Just found this: https://blog.duolingo.com/duolingo-now-has-conversation-lessons/
distractedm1nd#2062: Lingvist uses a bit of ML but thats not spoken
distractedm1nd#2062: You can generate a course about your interests by giving a few seed words
-Archivist#7336: It's netdata embeds here
> https://the-eye.eu/traffic/
pacman829#0801: Thanks @-Archivist !
apolinario#3539: Do you happen to know the state of the art (that is open source / hopefully with notebooks and stuff) style transfer technique? Like wanting the style of image A in image B?
alstroemeria313#1694: https://github.com/crowsonkb/style-transfer-pytorch https://colab.research.google.com/drive/1Tmuwmncao5E3D-5tTIVQjRy2YQ8JdpoB?usp=sharing
alstroemeria313#1694: this is just a particularly good implementation of Gatys et al
apolinario#3539: Sweet, thanks much!
ersatz#0001: I was wondering if the garbage in the GPT-3 training data helped make the model more robust than if the data was something cleaner like the pile?
ersatz#0001: and if not, is the garbage in the training data all bad or is there an optimal portion of garbage?
ersatz#0001: if you see what I mean
ersatz#0001: and I don't mean random but data that is bad in the way humans generate it, so bad in a directional way in the human data space
EricHallahan#1051: That is an excellent question.
EricHallahan#1051: I am pretty sure @bmk had some ideas about the effects of filtering on model performance?
ersatz#0001: I ask because I realized that in robotics, they include random data in the sensors of their agents to make them more robust when they train them in a virtual environment
ersatz#0001: but it's random in a range of possible states in the real environment
fifteen#4523: If I wanted to train GPT-NeoX on a specific domain, is there some type of starter project I could look at? I'm inspired by Shawn's work with chess. I'm wondering if I could apply that to another domain. (In my case, I'm trying to predict/generate log data, instead of "speak english".)
nshepperd#2316: With gpt-2 I played with "regularizing" the model by randomly corrupting input tokens, but keeping the "target" tokens / cross entropy loss the same. The idea was to make it output reasonable tokens even when the context has errors. didn't seem to make any noticeable difference though really
nshepperd#2316: i think i tried both totally random tokens, and corrupting tokens by replacing them with a one-step sample from the model which should be marginally more realistic
nshepperd#2316: either is probably a long way from the kind of error a human makes though
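A minimal PyTorch sketch of the scheme nshepperd describes above (the function name and corruption probability are illustrative, not their actual code): only the inputs are corrupted, while the targets used for the cross-entropy loss come from the clean sequence.
```python
import torch

def corrupt_inputs(tokens: torch.Tensor, vocab_size: int, p: float = 0.1):
    # tokens: (batch, seq_len) of token ids; shift to get (input, target) pairs
    inputs = tokens[:, :-1].clone()
    targets = tokens[:, 1:]                                   # targets stay clean
    corrupt = torch.rand(inputs.shape, device=inputs.device) < p
    random_ids = torch.randint_like(inputs, vocab_size)       # "totally random tokens" variant
    inputs[corrupt] = random_ids[corrupt]
    return inputs, targets                                    # loss: CE(model(inputs), targets)
```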
neel#1028: Hey,
I'm a newbie to research, and I had a doubt.
I was recently listening to a podcast where they talked about several psychological tests on how good people are at generalising what they learn from their academic curriculum to other tasks (turns out most people aren't good at it).
I realised I have this problem with research (specifically NLP, because that's the field I'm working in). In particular, when I read a paper, I'm not able to use the same concepts in a different setting, but then I eventually find a paper where they have done the same thing I was hoping to do.
I was wondering if any of you have tips on how I can, for lack of a better phrase, better generalise what I've learnt from papers.
ersatz#0001: do you know if Google has a GPT-4 sized model?
EricHallahan#1051: How would you define a GPT-4-sized model if GPT-4 is not a thing that anyone knows about? We could only guess.
ersatz#0001: something like as many orders of magnitude more parameters between GPT-3 and GPT-4 as between GPT-2 and GPT-3
ersatz#0001: the scaling laws are logarithmic, so I guess something like this...?
pebbles#7130: GPT-3 was a little over 100x as many parameters as GPT-2, so by "GPT-4" I'm guessing eratz means something with about 100x more than GPT-3, so approx. 15 trillion params? (**)
CRG#8707: 175B / 1.5B more like 100x, no?
pebbles#7130: hahaha... I can't count
EricHallahan#1051: I don't think we will see a jump that large until there is a major breakthrough in computational power or efficiency, something I do not expect to happen anytime soon.
Sid#2121: i don't see why not
Sid#2121: all it takes is someone to invest the money
Sid#2121: it's not like it's infeasible for any reason
Sid#2121: the capabilities are there
pebbles#7130: but would it be expected to be at all worth the investment?
Sid#2121: the cost would probably be in the ~$100-200m range, but that's not a totally outrageous investment for a large engineering project
Sid#2121: admittedly it's pretty outrageous for an ML project
Sid#2121: but people regularly invest those sort of amounts in building infrastructure and i expect as the benefits of such large models become obvious, we'll start to do the same with AI
Sid#2121: admittedly the inference costs would be *super* high, so from a business perspective, it might be a stupid move
Sid#2121: from a scientific perspective - large physics projects get orders of magnitudes higher budgets than that (LHC was ~$5B i think?)
EricHallahan#1051: We are talking in context of GPT-4, where that kind of investment would not be viable.
EricHallahan#1051: Though I concede that this may be a poor judgement.
bmk#1476: why would it not be viable?
Sid#2121: if it's transformative AI I would argue it's more than viable
Sid#2121: but admittedly, someone has to be first, and investors are afraid of taking risks like that
EricHallahan#1051: I think that is a good point.
pebbles#7130: I can see it happening if an investor / group of investors thought there was a high chance that a GPT-4 style model would basically be an AGI
bmk#1476: then again OA is known for taking risks like this
EricHallahan#1051: Like once it happens people will likely try to replicate.
ersatz#0001: so Google doesn't have a secret model of this order of magnitude?
pebbles#7130: highly unlikely imo
EricHallahan#1051: Wouldn't secret mean that we wouldn't know? My prior is that they do not.
ersatz#0001: there are rumours in some private Signal chats
Sid#2121: putting such a model into production would be incredibly expensive
Sid#2121: they never really told us what MuM / that other one was tho
Sid#2121: what was the name of google's recent big chatbot thing again?
CRG#8707: LaMDA is supposedly "1000x more powerful than bert" (1T params?)
ersatz#0001: actually saying rumor is underselling I think, I should say it's regarded as an open secret
Sid#2121: supremely ambiguous wording
Sid#2121: what specifically is the open secret
ersatz#0001: GPT-4 tier language model at Google
Sid#2121: GPT-4 tier specifically being ~100x the number of params of GPT-3?
Sid#2121: (x) doubt
ersatz#0001: I don't know but something like that I guess following the scaling laws
CRG#8707: I could see this referring to a MoE, even.
ersatz#0001: perhaps a jump in capabilities is enough to be regarded as GPT-4 at less than 100x
bmk#1476: or maybe the 1000 was literally pulled out of someone's ass
ersatz#0001: maybe but the private Signal chat is legit, they were discussing anthropic long before it was public for example
bmk#1476: private signal chat?
bmk#1476: what
CRG#8707: Exact wording here: <https://www.youtube.com/watch?v=s7t4lLgINyo>
bmk#1476: what is this signal chat
ersatz#0001: a private chat
ersatz#0001: I don't know what else to say about it
bmk#1476: how does one get access to this signal chat
EricHallahan#1051: I assume 🪄
ersatz#0001: you need someone to endorse you to get in
bmk#1476: who is in this chat
kindiana#1016: It's not very private if you could just get access :3berk:
ersatz#0001: that's private
bmk#1476: are you able to endorse
ersatz#0001: no, my own endorsement is worthless
ersatz#0001: it's not incredibly private, hundreds of people are in
bmk#1476: is there anyone there who would trust me enough to endorse me
ersatz#0001: I don't know who you are and I only know three people in the chat
ersatz#0001: maybe more with nicknames that I don't recognize
bmk#1476: oh
Sphinx#2092: It seems funny to refer to some chat that you can't even justify actually exists as proof of anything.
Sphinx#2092: but now I feel left out. I thought I knew all the secret chats.
AI_WAIFU#2844: I think we can make a reasonable ballpark estimate of the largest models they could have if we go with their largest pods for ml-perf and a time budget of 2 weeks to a year.
bmk#1476: I asked ersatz about a bunch of people I know and none are in there lol, so I don't think it's that good of a group chat
ersatz#0001: I'm 100% not saying that this is a proof of anything
ersatz#0001: don't get me wrong
Sid#2121: go on then
AI_WAIFU#2844: 6700 v4 = 2.7*6700 v3s = ~1 exaflop.
AI_WAIFU#2844: so if they run it for a month we're looking at 30k petaflop-days
AI_WAIFU#2844: somone link me the scaling calculator
AI_WAIFU#2844: So from the OAI paper. Roughly 2.5T parameters
AI_WAIFU#2844: If they wanted to they could probably go a few times higher than that, but I don't see the usecase since inference is so damn expensive.
AI_WAIFU#2844: At best they've done it to test scaling
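For reference, a back-of-envelope version of that estimate, assuming the compute-optimal fit from the OAI scaling-laws paper (Kaplan et al.), roughly N_opt ≈ 1.3e9 · C^0.73 with C in petaflop/s-days; the constant and exponent are my reading of that paper, so treat the result as a rough sanity check rather than the exact calculation.
```python
sustained_flops = 1e18                    # ~1 exaflop/s sustained, per the pod estimate above
seconds = 30 * 24 * 3600                  # run for roughly a month
pf_day = 1e15 * 86400                     # FLOPs in one petaflop/s-day

c = sustained_flops * seconds / pf_day    # ~30,000 PF-days
n_opt = 1.3e9 * c ** 0.73                 # Kaplan et al. compute-optimal parameter count
print(f"{c:,.0f} PF-days -> ~{n_opt / 1e12:.1f}T params")   # ≈ 2.4T
```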
ersatz#0001: do you think it would be worth the investment?
ersatz#0001: what could people mean by a GPT-4 grade model if not 100x of GPT-3?
Fessus#9563: Could theoretically have a model which incorporates types of data other than text but with only a modest parameter bump over GPT-3
Fessus#9563: No idea how well that would actually perform
Fessus#9563: But the disconnect between the world of text and everything else is currently a shortcoming for GPT-like models which, at least theoretically, doesn't need to exist
Fessus#9563: Has anyone been insane enough to try training a large language model on byte sequences from arbitrary data yet?
cfoster0#4356: Not to my knowledge
kurumuz#5695: is 10T really not doable?
Fessus#9563: Possible yes, practical... maybe?
AI_WAIFU#2844: x
Fessus#9563: For actual usefulness in production you would need to see a massive boost over GPT-3
cfoster0#4356: The most likely strategy for that might be crawling the web and dumping the HTTP response bytes into a chonk model
kurumuz#5695: not practical for inference for sure.
Fessus#9563: You certainly could. I'm almost curious to see how well a 1B model would handle stuff like compressed images
kindiana#1016: not very well imo
kindiana#1016: autoregressive modelling is not good at huffman compression lol
Fessus#9563: Certainly isn't
Fessus#9563: Though, with a large enough model which keeps the entire file in the context window it should be possible
kindiana#1016: not really? it would have to commit to the huffman table before it generates the first bit of data
alstroemeria313#1694: it seems like a huge waste of compute really
Fessus#9563: That first bit of data usually is the huffman table itself
Fessus#9563: At least in .gz
Fessus#9563: The huffman table informs the possible structure of all subsequent bytes in relation to each-other
Fessus#9563: The problem is probably impossible if the table is not included in the sequence but if it is then it's just an issue of complexity
tehZevo#0321: alright so looking through the VQGAN+CLIP colab notebook and having absolutely no idea how VQGAN works aside from a 5 min youtube video... it looks like what is being optimized is the latent representation of the initial image (or random one-hot) to match both the text prompt and to satisfy the discriminator? am i on the right track?
EricHallahan#1051: There is no separate discriminator, only CLIP with a text prompt.
EricHallahan#1051: Otherwise everything else is correct.
alstroemeria313#1694: VQGAN actually does have a discriminator but we don't use it for this.
tehZevo#0321: got it, thanks!
alstroemeria313#1694: (I tried and it made the outputs worse instead of better)
alstroemeria313#1694: (It's mostly just useful during VQGAN training.)
alstroemeria313#1694: ...Oh wait did I ever try the "optimize toward D thinking p(real) = 50%" thing
alstroemeria313#1694: With VQGAN
EricHallahan#1051: If it was a VQVAE, it wouldn't have a discriminator, yet the method would still work.
alstroemeria313#1694: it does in fact work with the OpenAI discrete VAE
tehZevo#0321: that actually could be a config parameter 0..1 hmm..
alstroemeria313#1694: Or did I not do it bc VQGAN uses hinge loss
alstroemeria313#1694: for D
tehZevo#0321: ahhh
tehZevo#0321: is the latent (`z`) the only parameter being optimized? i see `optim.Adam([z]...)` but there's nothing else being optimized in any of the models, correct?
alstroemeria313#1694: that's it!
tehZevo#0321: awesome thanks so much for the help haha
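For anyone following along, a stripped-down sketch of that loop; `vqgan` and `clip_model` stand in for the pretrained models and their method names here are illustrative, not the notebook's actual API. Only `z` gets gradients.
```python
import torch

z = torch.randn(1, 256, 24, 24, requires_grad=True)   # VQGAN latent, the only thing optimized
opt = torch.optim.Adam([z], lr=0.05)

text_embed = clip_model.encode_text("a watercolor landscape at dusk")  # hypothetical helper

for step in range(300):
    image = vqgan.decode(z)                            # VQGAN weights stay frozen
    image_embed = clip_model.encode_image(image)       # no separate discriminator involved
    loss = -torch.cosine_similarity(image_embed, text_embed, dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```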
Spy#9778: @alstroemeria313 is your vqgan one you trained or a pretrained one?
alstroemeria313#1694: it's pretrained
Spy#9778: Imagenet?
alstroemeria313#1694: yes
EricHallahan#1051: nothing stops you from using one that isn't though.
Spy#9778: Wow I'm surprised it can do so well with CLIP making styles that are outside of the imagenet distribution
alstroemeria313#1694: we also use the wikiart vqgans that @𓅬 gabriel_syme 𓅬 trained
Spy#9778: Ah ok that would make a bit of a difference
tehZevo#0321: bahahaha the input images just get turned into text prompts. amazing
aaronrmm#3198: Wait really? Is that necessary?
aaronrmm#3198: I was messing with image prompts yesterday and now i feel silly
tehZevo#0321: i think that's how it works, otherwise there'd have to be some distance loss between the image latents
tehZevo#0321: though there is an option in the notebook for some kind of distance loss against the *original* image that decays over time
tehZevo#0321: but i dont remember seeing anything for the target images
aaronrmm#3198: interesting. Thanks, I'll have to take a look. I read a paper and made some assumptions without reading through all the code
EricHallahan#1051: Well no, they are sent through the VQGAN image encoder.
tehZevo#0321: hmmm
tehZevo#0321: it looks like they go through the clip image embedding process https://cdn.discordapp.com/attachments/729741769738158194/864258094296858695/unknown.png
tehZevo#0321: which i'll admit isn't literally "converting to text", but rather an embedding that "should" match a hypothetical text description, right?
EricHallahan#1051: Yes, they are sent to the CLIP embedding space.
tehZevo#0321: got it, yeah the thing that made me go "ZOMG theyre turned into text prompts" is because they use the same `Prompt` class, sorry
tehZevo#0321: similar enough, i suppose
EricHallahan#1051: Yeah, you can definitely look at them that way though.
aaronrmm#3198: ah yeah that's closer to how I imagined it
AI_WAIFU#2844: @HL_dev The main benefit of tpu's is that they're cheap and they have a really good interconnect, which makes it possible to train really big models. With few exceptions, there isn't much else on the market that will let you do that.
kurumuz#5695: yeah.
Louis#0144: We need scaling laws of googly eyes. So number of googly eyes vs perceived intelligence https://twitter.com/yoavgo/status/1414721940825251840?s=21
pebbles#7130: I was thinking that as more people use jax due to good TPU support, pytorch will add in something similar to bring theirs up to the same usefulness ??
EricHallahan#1051: Googly eyes make all robots better.
chilli#5665: also, Google gives them out for free often
EricHallahan#1051: I would put that under the "cheap" category.
chilli#5665: I'm not super convinced that they're that cheap otherwise
chilli#5665: at least, not per flop
quinn#9100: there's a datascience reason to learn yoneda https://arxiv.org/pdf/1802.03426.pdf -- this is much more crisp than another application of top/cat to clustering i've seen
bmk#1476: an application of category theory?? impossible
AI_WAIFU#2844: JFC it's almost criminal that jax gives us all these op program transformation primitives and we just end up throwing shit into a 2d mesh and calling `grad`.
AI_WAIFU#2844: It's like using a light-saber to cut butter.
kinoc#5731: But for some, butter is existence ... https://bricksafe.com/files/beezysmeezy/lego-rick-and-morty-butter-bot/butter%20bot.gif
mullikine#5015: https://news.ycombinator.com/item?id=27818854
mullikine#5015: please upvote if it works for you -- thanks
fazz#8459: https://mynvidia.force.com/HardwareGrant/s/Application PhD peeps here's some Ampere iron gratis
mullikine#5015: Does the maintainer of https://github.com/samrawal/emacs-secondmate happen to be here? I'm not sure if Apache2 is compatible with GPL and if there'd be a problem borrowing code
Parker#3197: https://www.apache.org/licenses/GPL-compatibility.html
mullikine#5015: Hrm. I feel bad about not making it Apache now since the love may only flow one way.
Parker#3197: if you're the copyright owner, you can change it lol (but everyone else is free to use the previous licensed version that they've downloaded. you don't have to keep both)
mullikine#5015: It's also kinda dependent on many other GPLs so there's no way
mullikine#5015: 😢 , but that's interesting. I didn't know it could be changed
Parker#3197: I think their page just says that it ~~invalidates~~ (not necessarily invalidates, but applies restrictions on it that don't really make it an apache license anymore - it is now like GPL or something, idk) the apache license if you use GPL code within it
Parker#3197: > Apache 2 software **can therefore be included in GPLv3 projects**, because the GPLv3 license accepts our software into GPLv3 works. However, **GPLv3 software cannot be included in Apache projects.** The licenses are incompatible in one direction only, and it is a result of ASF's licensing philosophy and the GPLv3 authors' interpretation of copyright law.
𓅬 gabriel_syme 𓅬#3220: do you know if these are VMs?
inox#5400: > certain projects may be awarded with cloud compute credits instead of physical hardware
I've only seen people get the physical GPUs before though
inox#5400: often with the university then not paying for the rest of the computer to mount the GPU in
𓅬 gabriel_syme 𓅬#3220: hmm ok thanks!
𓅬 gabriel_syme 𓅬#3220: my problem is that I'm not at the university so that might not help that much
derivmug#3558: Hey everyone, I wanted to ask what the best way would be to start contributing to one of the projects. I have experience with PyTorch and some with JAX and could also do web dev if that's needed at the moment
Daj#7482: Hey there! We are pretty disorganized overall, so it's not always easy to know what needs doing where, and for the most part most of our projects are in a pretty stable state so there isn't a huge amount of contributing to be done. Generally I recommend just hanging around and seeing what's going on around projects that interest you. We also have a project board that we are still in the process of organizing and filling up, there are a few project ideas on there https://github.com/EleutherAI/project-menu/projects/1
EricHallahan#1051: I concur.
derivmug#3558: Okay, I'll lurk around then and have a look at the board. Thanks!
tehZevo#0321: so i'm trying to create a stateless API with clip+vqgan, and realized i'll have to send the optimizer state to clients (or switch to a stateless optimizer lol)
tehZevo#0321: not difficult, just a funny headache haha
tehZevo#0321: hello yes, client? take this optimizer
why?
just do it, you'll need it later
Sphinx#2092: whoa you're alive
tehZevo#0321: hi sphinx lol
Sphinx#2092: Glad to see you ended up with a job that allows you to play with this stuff.
tehZevo#0321: i mean my day job is fullstack dev with a touch of ai/automation
tehZevo#0321: ML is still mostly a hobby for me, moving up though haha
Sphinx#2092: Slowly but surely.
circuit10#0158: https://beta.openai.com/docs/guides/fine-tuning/
circuit10#0158: You probably already knew this but they have fine tuning now
circuit10#0158: And it's free
circuit10#0158: But the generations cost more
chilli#5665: > There is no fee for fine-tuning the model itself. During the beta there is a limit of 10 fine-tuning runs per month and data sets are limited to 2.5M tokens (a file size that's roughly 80-100MB).
chilli#5665: Interesting
bmk#1476: 2.5M tokens? that's pathetically small
chilli#5665: Yeah but it’s free
chilli#5665: And it’s fine tuning
bmk#1476: I've never fine tuned with less than a GB, and that's on the small end
circuit10#0158: Well conveniently I have almost exactly 80MB of data for what I'm using it for
Sid#2121: They're for sure not tuning the whole model right?
Louis#0144: no
Louis#0144: theyre using that recent MSR paper
Louis#0144: I forgot the name
Sid#2121: cool, helpful as always :berk:
Louis#0144: I cant find the paper
Louis#0144: someone for sure has shared it in #research
Sid#2121: what's the idea? like
Sid#2121: people have shared thousands of papers in there lol
Louis#0144: They only finetune a (somewhat random?) subset of the weights
Louis#0144: I dont know how they determine this subset
Louis#0144: It was like within the last month
Louis#0144: ugh
Louis#0144: @kurumuz we discussed this paper
Louis#0144: a while ago
Louis#0144: I cant find it
Louis#0144: do you know which one I am talking about
Sid#2121: how do you know OA are using that one?
Louis#0144: the examples they gave were GPT3
Louis#0144: lol
Louis#0144: but besides that I do not know
Louis#0144: I imagine it must be prohibitively expensive to do it any other way
Sid#2121: soft prompt tuning would also work with gpt3
Sid#2121: but yea if you remember the paper pls link
Louis#0144: will do
Louis#0144: Actually you know who might know
Louis#0144: @mitchg
dmayhem93#3202: Is this the LoRA paper or something different?
dmayhem93#3202: https://arxiv.org/abs/2106.09685
Louis#0144: Thank u
Louis#0144: That’s it
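For reference, the core idea of that paper (LoRA) is to freeze the pretrained weight and train only a low-rank additive update; a minimal sketch of the idea (illustrative, not the paper's or OpenAI's actual code):
```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                                    # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))    # zero init: starts as the base model
        self.scale = alpha / rank

    def forward(self, x):
        # W x + (alpha/r) * B A x  -- only A and B are trained
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```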
Sphinx#2092: No comparison with prompt tuning.
Sphinx#2092: (x)
Sphinx#2092: Oh actually they do, they just don't call it that for some reason hmmm
Sphinx#2092: Though the fact they beat full fine-tuning is always a bit concerning
dmayhem93#3202: It makes sense to me, there was a paper a while ago about swapping in original layers when finetuning to regularize it... let me find it
dmayhem93#3202: I can't find what I was thinking of, but I did see mixout which is similar to what I was thinking, although this was in 2019 so my memory may have just escaped me https://arxiv.org/abs/1909.11299
Louis#0144: @Sid
𓅬 gabriel_syme 𓅬#3220: what do they mean free? like do you get to download the model? :berk:
nostalgebraist#3542: can't do it with davinci, only the other ones
nostalgebraist#3542: so it'll be comparable to gpt-j at best
bmk#1476: and the data size limit is miniscule
bmk#1476: literally unusable
kindiana#1016: i wonder if J will make cushman more of a thing
nostalgebraist#3542: i wouldn't write it off, i've tuned on tiny datasets before and gotten good results. and bigger models need less
nostalgebraist#3542: still, it's probably much less data than you have, so worse results than you'd get otherwise
bmk#1476: well but anything I can do with the API I can do with J
nostalgebraist#3542: yeah
bmk#1476: probably easier to do with J, too
bmk#1476: not to mention cheaper to inference
kindiana#1016: only if you have enough requests tho
nostalgebraist#3542: it's pretty expensive, finetuned curie inference is 6x the price of base curie inference and 1/2 the price of davinci inference
kindiana#1016: yeah
kindiana#1016: oai charges 0.006 per 1000 tokens with curie, which is pretty good
kindiana#1016: I think its going to be difficult to go cheaper than that without significant scale
𓅬 gabriel_syme 𓅬#3220: this is b2b anyways right?
𓅬 gabriel_syme 𓅬#3220: like most people using this would be to create another product on top of it?
𓅬 gabriel_syme 𓅬#3220: or research I guess
alexyz#3459: GPT-3 finetuning!
alexyz#3459: Amazing...
alexyz#3459: needs more geese though
nostalgebraist#3542: i can totally imagine choosing this at work over having to find a custom hosting solution
nostalgebraist#3542: the lack of an SLA might be a problem there
bmk#1476: TRC costs 0.000 per 1000 tokens
kindiana#1016: well yeah
kindiana#1016: but large LM inference is something that has really good economies of scale
𓅬 gabriel_syme 𓅬#3220: and then you run out and your product disappears lol
bmk#1476: I mean specifically for my use case, which is training models to eval on eval harness
AI_WAIFU#2844: I wonder if, by only fine-tuning biases or a subset of the weights, you could keep economies of scale while still allowing some degree of fine tuning.
kindiana#1016: mOe
AI_WAIFU#2844: that's an idea too.
Dwarf#6935: gpt-3 finetuning would be a whole lot more exciting if they didn't have to approve your use case and weren't sensitive about the type of content you can make it generate.
cfoster0#4356: Works for MLMs
cfoster0#4356: Re: thonk https://arxiv.org/abs/2106.10199
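That paper (BitFit) is the bias-only version of the idea above; assuming a standard PyTorch `model` (hypothetical here), the whole trick is roughly:
```python
# Freeze everything except bias terms, so the big shared weights never change
# (and could in principle stay batched across many fine-tuned users).
for name, param in model.named_parameters():
    param.requires_grad = name.endswith("bias")
```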
nostalgebraist#3542: i wonder why they restrict dataset size and then let you do multiple epochs, with a default of 4 epochs
nostalgebraist#3542: i'd expect 4 epochs to overfit these small datasets, and meanwhile you can't do 1 epoch on 4x the data
Dwarf#6935: i fine-tune on 1mb of data and anything over 1 epoch just starts overfitting
𓅬 gabriel_syme 𓅬#3220: can you overfit on 2gb of data?
𓅬 gabriel_syme 𓅬#3220: with smaller models even
Louis#0144: Yes
Louis#0144: Wut
Louis#0144: Just do like
Louis#0144: 10 epochs
Louis#0144: lol
𓅬 gabriel_syme 𓅬#3220: I did 5 on 2gb of data and it did not overfit one bit
𓅬 gabriel_syme 𓅬#3220: it kept making really diverse outputs. but I'll try more though
Louis#0144: I call overfitting when training loss continues to go down but Val loss skyrockets
Louis#0144: Not dependent on like diversity of output
𓅬 gabriel_syme 𓅬#3220: oh ok my bad
Louis#0144: Nw
𓅬 gabriel_syme 𓅬#3220: i was thinking overfit on task or output
uwu1#4864: what if u use ratts https://arxiv.org/abs/2105.00303
Deleted User#0000: https://github.com/hieunc229/copilot-clone
Deleted User#0000: alternative open source projects to copilot have already been set up
https://github.com/ncoop57/gpt-code-clippy
Kia#2550: Ah
Kia#2550: That's really fast
UnsupervisedLearner#4148: When did you figure out MLM was just dropout but applied at a token level
UnsupervisedLearner#4148: Because it took me until just now to make the obvious connection
chilli#5665: :thonk:
chilli#5665: I mean, you're not trying to reconstruct the missing token in dropout
Fessus#9563: It's not quite correct but reconstruction of values still happens, just not the input. Using dropout creates an internal learning objective to reconstruct certain hidden node activations. i.e the output of hidden layer x+2 should be as similar as possible regardless of which nodes from layer x are dropped. You can even take this and turn it into an explicit training objective to do some async model training but it's usually not worth it.
Fessus#9563: Dropout is kind of underappreciated in terms of how interesting it is.
Kharr#7888: Yes, it is implicit reconstruction. "Make the correct decision even without this information" and the model learns to infer the missing pieces. Dropout has a lot of other cool properties if you sit down and really think about `what` it is doing and forcing the network to do. It's really disappointing that it's treated as a black box method for making models go :brr:
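A tiny sketch of the analogy (names illustrative): MLM "drops out" whole tokens by swapping them for a mask id and asks the model to reconstruct them, much like dropout zeroes activations and makes the rest of the network compensate.
```python
import torch

def mlm_corrupt(tokens: torch.Tensor, mask_id: int, p: float = 0.15):
    dropped = torch.rand(tokens.shape, device=tokens.device) < p
    corrupted = tokens.masked_fill(dropped, mask_id)   # token-level "dropout"
    return corrupted, dropped                          # loss is taken only at dropped positions
```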
StellaAthena#3530: @alstroemeria313 I saw your tweet about having trouble hosting models. We can absolutely host whatever cool models you train. Just ping / DM me or another level-5 role.
alstroemeria313#1694: ohh! thank you
alstroemeria313#1694: yeah, if I train a bigger one it's not going to fit into a github release
alstroemeria313#1694: since they limit to 2gb
alstroemeria313#1694: ...actually, does github have any bandwidth limit on their releases?
alstroemeria313#1694: oh, they only limit *individual files* in a release to 2gb
alstroemeria313#1694: and there is not actually a limit on the total size
alstroemeria313#1694: still, that means people have to reassemble them on their end which is extra disk usage
StellaAthena#3530: Maybe we’ve done a poor job explaining this, but nobody’s intention is to make EleutherAI some kind of exclusive club. We (EleutherAI) have a huge amount of resources that don’t belong to me, or Connor, or Leo, or Sid. They belong to everyone who participates regularly in this discord.
If you need data or a model hosted, EleutherAI will do that. If you need code run on chonky GPUs, EleutherAI has a queue and your code will be put into it. If you want to write a blog post about the cool stuff you do, the EleutherAI blog would be happy to host it. If you want to write a paper with EleutherAI as your affiliation we would be thrilled by that.
EleutherAI isn’t people with blue and purple names. It's everyone that contributes to our common goals and projects.
The level-5 role exists because EleutherAI needs people to manage our resources, interact politically with other organizations, and negotiate the funding and compute that EleutherAI needs to continue.
StellaAthena#3530: EleutherAI has grown a lot in the past six months. In January it was genuinely 15 people doing all of the work, and maybe one hundred more who hung out and chatted about our research. It’s extremely plausible that as more people have come and hung out here and started doing amazing stuff that EleutherAI hasn’t reached out enough to these newer core contributors.
I view the awesome work you and many others do here as "EleutherAI projects" done by "EleutherAI members." And if you don’t feel that way, that’s on me. My job is to marshal resources to support your work, so if you don’t feel like the resources belong to you I’m doing something wrong. What can I do better?
I would love feedback from anyone reading this who would like to feel more involved, more supported, or whatever.
alstroemeria313#1694: thank you :blobcutehappy:
alstroemeria313#1694: well, mostly i had been talking for a couple of days in #art about needing hosting and eventually i worked out how to put it on github on a release
StellaAthena#3530: GitHub commits and GitHub releases are not intended to store trained ML models. They’re supposed to store the raw codebase. It’s generally considered bad practice to upload data or trained models to GitHub, due to the way that git commits work. Anyone who clones your repo has to download not only whatever you currently have uploaded but also all previous models and data as well. The intended behavior is to host the data and models elsewhere and provide scripts that download them.
alstroemeria313#1694: github lets you upload arbitrary binaries for releases that then don't get stored in a git repo, but yeah
alstroemeria313#1694: it's... a hack
alstroemeria313#1694: so can you host it >_>
StellaAthena#3530: I think that’s a feature they low-key gave up on when they realized how hack-y it is. In my experience people tend to use it.
StellaAthena#3530: Sure! How big is “it”?
alstroemeria313#1694: 1.3GB
alstroemeria313#1694: https://github.com/crowsonkb/cond_transformer_2/releases/download/2812137c/transformer_cond_2_00003_090000_modelonly.pth
alstroemeria313#1694: also this is the colab notebook for sampling from it https://colab.research.google.com/drive/1dFV3GCR5kasYiAl8Bl4fBlLOCdCfjufI
StellaAthena#3530: Awesome! Could you also provide the following info:
1. A name for the model to be listed as
2. A couple sentence overview of what it is and why it's cool.
3. A list of contributors to credit
4. A license for the code and weights?
alstroemeria313#1694: @jack is MIT license OK for the weights?
UnsupervisedLearner#4148: Same thing with cardinality ('groups', 'heads', etc)
Make it learn modular functions, or at least don't degenerate into too much reliance on the same pathways
circuit10#0158: They say you only need a few hundred examples
jack#8178: yeah for this model that's fine
bmk#1476: I never finetune with fewer than a gigabyte
circuit10#0158: > To fine-tune a model that performs better than using a high-quality prompt with our base models, you must provide at least a few hundred high-quality examples, ideally vetted by human experts. From there, performance tends to linearly increase with every doubling of the number of examples.
circuit10#0158: That's what they say
bmk#1476: yeah but more is better
StellaAthena#3530: > From there, performance tends to linearly increase with every doubling of the number of examples.
This is a fucking awful sentence. "linearly increases with every doubling of" is better known as "grows logarithmically in"
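Spelled out, the equivalence being pointed at: if each doubling of the dataset adds a constant amount of performance, then performance is an affine function of the log of dataset size.
```latex
P(2n) = P(n) + c \quad\Longleftrightarrow\quad P(n) = P(n_0) + c\,\log_2\!\left(\tfrac{n}{n_0}\right)
```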
rom1504#5008: do you have some pointers as to what are some strategies to make this work well ? (I assume there's some caching involved ?)
StellaAthena#3530: Magic. Nobody really knows why, but larger models are actually more efficient at learning both on a per-compute basis and a per-datum basis.
bmk#1476: grows logarithmically in*
rom1504#5008: yeah indeed I read about that! I was particularly interested in scaling the inference part in this question as Ben was mentioning it
rom1504#5008: I tried googling about that recently and didn't find a lot, but clearly that's something people do, seeing how fast is that openai gpt-3 demo
StellaAthena#3530: And that's why you should use real math words to describe things instead of this bullshit. When you make people convert in their heads it's easy to cause trivial errors and confusions that would have never arisen if they had written "grows logarithmically" in the first place.
bmk#1476: what if I don't know what a logarithm is?
bmk#1476: I feel like a good chunk of OA API users fall in that bucket
bmk#1476: OA API is intended for normies
bmk#1476: I'd bet that's why they wrote it that way
StellaAthena#3530: Isn't that part of a standard high school curriculum?
EricHallahan#1051: tfw high school curricula not being standardized
bmk#1476: idk I didn't pay any attention in class so I don't remember, and I bet most people don't remember either
bmk#1476: most people just learn math to pass the exam and then they forever erase it from their memory
bmk#1476: (s/math/any other subject)
bmk#1476: so yeah I bet a good portion of the people reading the docs are non-technical management people who passed math with flying colors trying to get a hold of this newfangled OpenAI thing
alstroemeria313#1694: 1. "CLIP conditioned Decision Transformer"
2. "A 337M parameter autoregressive transformer model intended for sampling sequences of VQGAN tokens conditioned on a CLIP text embedding and desired CLIP similarity. It was trained on Google Conceptual Captions and produces 384x384 output images from a text prompt."
3. Katherine Crowson and Jack Gallagher
4. MIT
MonkeyD.Luffy#4209: How can I keep best up to date on the development of GPT- Neo X?
StellaAthena#3530: @MonkeyD.Luffy Follow the GitHub repo and read along in #gpt-neox-devs. Currently the core codebase has been built and we are implementing some "nice to have" features like distillation and documentation (lolz) while we deal with hardware limitations.
MonkeyD.Luffy#4209: Thanks! I'm brand new (and I mean BRAND NEW) with this side of things.
EricHallahan#1051: Hopefully the project pages are more descriptive sooner rather than later.
MonkeyD.Luffy#4209: Nice. I heard there were changes recently to some of the project (such as the parameter count), and that's what got me interested but a lot of this goes way over my head lol.
StellaAthena#3530: @alstroemeria313 https://the-eye.eu/public/AI/models/cond_transformer_2/
StellaAthena#3530: I really need to automate this process >.>
alstroemeria313#1694: `Unexpected EOF` from curl?
StellaAthena#3530: @-Archivist
-Archivist#7336: the fuck you trying to curl son?
alstroemeria313#1694: This file. https://the-eye.eu/public/AI/models/cond_transformer_2/transformer_cond_2_00003_090000_modelonly.pth
StellaAthena#3530: Did you try the command listed on the page for downloading?
`wget -m -np -c -U "eye02" -w 2 -R "index.html*" "https://the-eye.eu/public/AI/models/cond_transformer_2/"`
alstroemeria313#1694: ...I don't actually have wget
alstroemeria313#1694: Hm is it on Colab
-Archivist#7336: hmmmmm, drop the ssl if that still EOFs tell me, some funky shit happens with curl on occasion
alstroemeria313#1694: it is in fact the SSL :/
-Archivist#7336: aye, all will be well soon, big changes for the better are coming 😉
alstroemeria313#1694: oh. the problem is that curl tries to use HTTP/2
alstroemeria313#1694: and this breaks.
alstroemeria313#1694: if i don't use https:// then it will use HTTP/1.1
alstroemeria313#1694: and not break.
alstroemeria313#1694: so i will just manually force curl to use HTTP/1.1 with TLS
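For anyone hitting the same `Unexpected EOF`: the fix described here amounts to something like `curl --http1.1 -O "https://the-eye.eu/public/AI/models/cond_transformer_2/transformer_cond_2_00003_090000_modelonly.pth"`, i.e. forcing HTTP/1.1 while keeping TLS.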
-Archivist#7336: glad you know what you're doing, the general audience I deal with on this aren't as adept 😄
-Archivist#7336: have fun, I'm going to watch some trash tv and chill
StellaAthena#3530: These are words and I know some of them.
Louis#0144: @MonkeyD.Luffy one piece bored me
Louis#0144: I got like three hundred episodes in
Louis#0144: And idk
Louis#0144: It was good for the first hundred
Louis#0144: Then it rapidly declined
MonkeyD.Luffy#4209: How tf is that relevant?
MonkeyD.Luffy#4209: To this server lmao
Louis#0144: your username
Daj#7482: Louis #off-topic jfc
Daj#7482: actually, better yet, don't talk about anime at all :berk:
Louis#0144: Lmao
circuit10#0158: It is more understandable
circuit10#0158: For people like me who aren't AI experts but are just following it and messing around
circuit10#0158: I know what logarithms are but the way they worded it is more understandable to me
StellaAthena#3530: @circuit10 thanks for the datum!
circuit10#0158: ? https://cdn.discordapp.com/attachments/729741769738158194/864987639479664700/unknown.png
fe#0483: Datum = data point
StellaAthena#3530: It’s the singular of the word “data.” It means a single data point.
circuit10#0158: Huh, doesn't show up on Google for whatever reason
StellaAthena#3530: Where are you located? It does for me https://cdn.discordapp.com/attachments/729741769738158194/864987900307439656/image0.png
circuit10#0158: Oh wait, I know why
EricHallahan#1051: ^
circuit10#0158: I'm Chrome Remote Desktoped into a VPS in Germany
circuit10#0158: Contabo
StellaAthena#3530: Ah lol
circuit10#0158: So it shows the German meaning
circuit10#0158: Even though I'm not German and don't speak German
StellaAthena#3530: That would do it. It figured you’re speaking german, not using an obscure English word that plenty of native speakers don’t know.
circuit10#0158: This Chromebook is so slow and Discord is so unoptimised that it's less laggy to use Discord through remote desktop to a different country
EricHallahan#1051: I suggest trying an alternate client then.
circuit10#0158: I did try Ripcord
circuit10#0158: It was a lot faster
bmk#1476: :sadge:
circuit10#0158: But it doesn't have an unread message indicator, which sounds minor but it's really annoying
circuit10#0158: And it's closed-source so I can't try to add it
StellaAthena#3530: I hate that indicator. I would kill to get rid of it lol
circuit10#0158: Well, not the taskbar one
circuit10#0158: It doesn't have this https://cdn.discordapp.com/attachments/729741769738158194/864989163472551946/unknown.png
circuit10#0158: So I can't tell what messages are new
circuit10#0158: Anyway it's not that big of a deal, I only use this Chromebook when I'm away from my PC
UnsupervisedLearner#4148: I might have some train loops written in a few days to see if I can get grokking on some random kaggle tabular data sets with gMLP and transformers. Bigger the better.
What do I need to have if this is in line with the EAI gameplan? Dockerized code and scripts? Just pytorch?
AI_WAIFU#2844: So if I'm getting this right, the 3 methods of parallelism in jax each have their pros/cons and at a high level can be summarized as such:
1. pmap is the most flexible/low level, and allows the most fine grained control over parallelism, but consequently requires more mindshare to do properly.
2. xmap is similar to mesh tensorflow and at its best allows you to write tensor operations while partially fixing the headache of remembering what dimensions correspond to what. It also makes it possible to write code once and try multiple mesh shapes and parallelism strategies. The downside is that not all functions will support its named axes paradigm.
3. pjit is basically GSPMD for JAX, and essentially lets you write normal code then experiment with a whole host of parallelism annotations. The downside is that you have less control over exactly how things are sharded/replicated, especially when pjit is applied to large blocks of code, but in practice it doesn't matter so much since it does a pretty good job anyways.
AI_WAIFU#2844: So in conclusion I should probably be using pjit. That sound about right?
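A toy sketch of what using pjit looks like in practice; import paths and keyword names have moved between JAX versions, so read this as illustrative of the experimental API of the time rather than exact:
```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.experimental import maps
from jax.experimental.pjit import pjit, PartitionSpec as P

def f(x, w):
    return jnp.dot(x, w)

pjit_f = pjit(
    f,
    in_axis_resources=(P("dp", None), P(None, "mp")),  # shard batch rows and weight columns
    out_axis_resources=P("dp", "mp"),
)

# 8 devices laid out as a 2x4 (data, model) mesh
devices = np.array(jax.devices()).reshape(2, 4)
with maps.mesh(devices, ("dp", "mp")):
    y = pjit_f(jnp.ones((128, 512)), jnp.ones((512, 1024)))
```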
Louis#0144: I only ever use pmap….
Louis#0144: I should maybe use pjit
Louis#0144: lol
chilli#5665: > pmap is the most flexible/low level, and allows the most fine grained control over parallelism, but consequently requires more mindshare to do properly.
I think that's accurate. However, pmap is probably the right choice for data-parallel type stuff (and that's probably what most people use it for).
kindiana#1016: pjit gives you great control over parameter sharding
chilli#5665: Also, pmap is kinda like vmap/xmap in that it's a ... map. In other words, it actively changes the semantics of your function.
kindiana#1016: just not temporary variable sharding
chilli#5665: Which may be fine.
chilli#5665: well, it gives you ok control over that, no?
kindiana#1016: well yeah
kindiana#1016: if you manually annotate
kindiana#1016: but you are forced to explicitly annotate how parameters are sharded
Louis#0144: Does anyone have a really good tutorial on pjit
kindiana#1016: lol
AI_WAIFU#2844: I got u
kindiana#1016: https://gist.github.com/kingoflolz/0756d95a6573809da0b25eb0fd0867b1
chilli#5665: The main appeal of xmap is that once you've threaded your named axes through your model, you get a lot of control over parallelism along your named axes
StellaAthena#3530: Imagine thinking Jax has tutorials
AI_WAIFU#2844: ```pjit(func)```
kindiana#1016: ^ that gist works, modulo some sharding annotations that makes it fast
Louis#0144: Oh wtf literally perfect
Louis#0144: Ty
chilli#5665: The problem is that 1. you need to actually thread your named axes through your model, and 2. if you didn't plan for some axis to be named, it's somewhat .. annoying
kindiana#1016: you can see mtj for a proper implementation
AI_WAIFU#2844: Honestly I like the idea of named axes even more than the parallelism.
chilli#5665: > The downside is that not all functions will support it's named axes paradigm.
I don't think this is really the main downside.
chilli#5665: At least, that's a ... temporary thing
chilli#5665: I'm not sure the named axes thing will be broadly appealing
AI_WAIFU#2844: Like before I used to annotate everything with type signatures like
```# [batch x height x width] -> [channel x height x width] -> [batch x channel]```
AI_WAIFU#2844: Useful but a pain in the ass
chilli#5665: hmm, I'm not that certain that named tensors are that much better here...
chilli#5665: @kindiana can you even write this kind of thing with xmap today?
kindiana#1016: what is this?
chilli#5665: Presumably a conv
kindiana#1016: a conv with all named dims?
kindiana#1016: not sure
kindiana#1016: but you can do a conv over sharded dims
chilli#5665: yeah, that's why I'm a bit skeptical about Jax's named tensors in xmap being better here.
chilli#5665: They have very ... specific semantics (i.e. you batch over that dimension)
chilli#5665: which is good for parallelism purposes, but I don't think would lend itself well to annotating all of your tensors with
chilli#5665: lol
kindiana#1016: yeah...
chilli#5665: Seems like it would be pretty painful to do an all named-axis model?
kindiana#1016: worse than mtf even maybe :berk:
kindiana#1016: at least mtf designs for all-named
chilli#5665: I'm not even sure they support w.e. conv operator would be needed to do all named-axis lol
chilli#5665: The examples I've seen/tried are all about reductions along one named-axis
chilli#5665: hmm, I wonder if pjit is all you need...
chilli#5665: I guess it'll be hard for me to say until I do a large enough model with it to run into its issues lol
chilli#5665: or Ben can give his opinion 🙂
kindiana#1016: i think its all you need for mp
kindiana#1016: but there might be other stuff that xmap is better for
chilli#5665: Well, I see 2 main areas that I think could be improved
chilli#5665: 1. Some better way of actually specifying the sharding of your parameters rather than iterating over each one and matching on shape :berk:
chilli#5665: and 2. the sharding of the intermediate variables
kindiana#1016: on 1 I think there is probs a better solution
kindiana#1016: where you iterate over the parameters and match on name :berk:
kindiana#1016: but I'm too lazy to implement that
kindiana#1016: or ideally its integrated in haiku so you can just specify it inline
chilli#5665: Hmm, not sure specifying it inline in haiku works well
chilli#5665: What happens if you reuse the same code but want to shard it differently?
kindiana#1016: hrmm
chilli#5665: But even if you match on parameters, I still feel like you should be able to do better
chilli#5665: One issue with pjit is that I’m not sure how … composable it is
kindiana#1016: I don't see when you'd want to do f(pjit(x))
kindiana#1016: but pjit(f(x)) should totally work fine
chilli#5665: Oh, sorry, not with function transforms
chilli#5665: Like, as a library
chilli#5665: If somebody wanted to say, import mtj and use it together with some other model
kindiana#1016: I don't see why it wouldn't work?
kindiana#1016: modulo orchestration
chilli#5665: Like, your sharding annotations/resource allocation would need to be redone
kindiana#1016: would it?
kindiana#1016: I don't see why it would unless its such an intrusive change that you should just fork mtj and integrate it that way
mkualquiera#3484: omg this is amazing
chilli#5665: I think so? For example, if the inputs to MTJ are sharded or something
kindiana#1016: the inputs/outputs are small enough such that resharding those would be trivial
kindiana#1016: and they are sharded pretty sanely anyways
chilli#5665: More stupidly of a question, how does resource allocation work?
kindiana#1016: ?
chilli#5665: Like, let’s say you wanted to run 2 instances of mtj at once
chilli#5665: In the same model
chilli#5665: How does that work 🤔
kindiana#1016: you just have two copies of params in hbm?
kindiana#1016: if the second copies ooms it ooms
chilli#5665: Actually, in the first place, are you imagining that you would run 2 XLa instances for 2 MTJ models?
kindiana#1016: xla instance?
kindiana#1016: on each tpu device, one python interpreter owns the whole tpu
chilli#5665: Session? Whatever it is that actually does the pjit sharding propagations
chilli#5665: Like, are you running pjit 2 separate times
kindiana#1016: yeah?
kindiana#1016: thats free* if its the same function
chilli#5665: Err, I mean actual pjit compilation, ignoring the cachey stuff
kindiana#1016: sure you can run it twice
chilli#5665: For example, will XLA be able to optimize knowing that there are 2 instances of MtJ
kindiana#1016: no
kindiana#1016: you can only have one pjit op running at once
AI_WAIFU#2844: How well do all these parallelization primitives compose? Like if I do pjit(xmap(pmap(f))) what happens?
kindiana#1016: it breaks
chilli#5665: Does it?
kindiana#1016: you can't wrap pjit or xmap in other functional transforms
chilli#5665: Are they not higher order primitives?
kindiana#1016: you might be able to do jit of pmap
chilli#5665: (In Jax parlance)
AI_WAIFU#2844: So pmap is the only one that does that
kindiana#1016: but the parallelism primitives are really meant to be the last transform you do
chilli#5665: hmm
chilli#5665: One thing to note here
chilli#5665: Is that not all of these are actually “function transforms “ (i.e. Jax interpreters)
chilli#5665: Like, when you do jit(f), it doesn’t actually put a jit transform onto the stack
kindiana#1016: yeah
chilli#5665: It wraps f in a higher order primitive (jit)
kindiana#1016: all these parallelism transforms do a jit under the hood
chilli#5665: I thought pmap was also a higher order primitive
kindiana#1016: well they don't do literally a jit, but they are higher order primitives for the same reason that jit is one
chilli#5665: Hmm, i thought they both literally did a jit
chilli#5665: And are using higher order primitives
kindiana#1016: hrmmm
kindiana#1016: idk
kindiana#1016: you'll have to ask jek
Erik Nijkamp#8820: jek?
chilli#5665: I think you can do vmap(pmap(f))
AI_WAIFU#2844: I think pmap "mostly" composes with everything
AI_WAIFU#2844: It's a simpler transform.
chilli#5665: Yeah, since I think it’s a higher order primitive
chilli#5665: I’m wondering what pjit/xmap are
chilli#5665: Are they just one-off things that don’t compose?
Deleted User#0000: one of the core contributors to jax
chilli#5665: Jekbradbury, he’s one of the contributors to Jax
chilli#5665: I wonder what the Jek is
xcodevn#9003: I think all these higher-order functions are composable *in theory*, https://jax.readthedocs.io/en/latest/jax-101/06-parallelism.html?#nesting-jax-pmap-and-jax-vmap
xcodevn#9003: in practice, it may depend on jax implementation
chilli#5665: Pmap and vmap isn’t the problem here
chilli#5665: Like, I’m fairly certain those are composable
chilli#5665: I’m just not sure about pjit and xmap
Teemochu#8740: Exactly... I ride the :tribalism: ship with FOSS models and the First Amendment on my side.
random_lurker9#6367: >I’m just not sure about pjit and xmap
Someone may correct me but fairly certain that as of now: pmap and vmap naturally composable. There is no point in jitting around a pmap. Pmap jits its function anyway. Further, there is no need to compose pmap with xmap and things will probably break 🙂 . Conceptually, xmap is a generalisation of pmap and vmap, so you can achieve pmap within xmap just by using the appropriate axis. It's just that it's more user friendly to use pmap for this special case. Pjit is another way of interfacing the spmd partitioner, and again there would be no point in composing it with pmap since that is just a special case.
random_lurker9#6367: And from that follows that it would also not be sensible to compose pjit and xmap (and also would not work fairly certain)
chilli#5665: I don’t agree that there’s no point in being able to compose the 2
chilli#5665: Let’s say you wanted to use xmap to shard a sub module
chilli#5665: But pjit to shard the whole thing
chilli#5665: It’s kinda like jit
chilli#5665: Sure, it’s not very useful to do jit(jit(f)). But you still want to be able to do it for code cleaniness reasons
random_lurker9#6367: why would you want to do this? They both use the exact same pxla function to build mesh callables
AI_WAIFU#2844: Like if I have a function that is best expressed using xmap then I want to reuse that function in some other code, which I then pjit, I think that could be useful.
pjit(xmap(f)) is kinda pointless but
pjit(f)
where f = a . xmap(b) . c might be
Louis#0144: @AI_WAIFU Im implementing the gumbel switch transformer
Louis#0144: I see why they didnt do it now....
Louis#0144: 😦
Louis#0144: it is *massively* unstable
AI_WAIFU#2844: lmao
AI_WAIFU#2844: use variance reduction techniques
Louis#0144: o true
Louis#0144: ok ty
Louis#0144: i need to read that paper u sent me
Louis#0144: i'll do that now
AI_WAIFU#2844: https://arxiv.org/abs/1711.00123
random_lurker9#6367: do you really mean pjit? I see the argument, it just sounds like a practical pain to mix different axis specification apis to the same interface
random_lurker9#6367: the argument for pmap(xmap) I can see more. pmap(sharded_jit()) actually works (although now no point in using sharded_jit)
random_lurker9#6367: Having a simpler way to specify data parallelism in xmap would be good, since it's currently a bit much overhead for just data parallelism; that would make it more flexible to switch between parallelism modes by just changing xmap specs, while keeping xmap throughout code bases
chilli#5665: I mean sure, it's probably a practical pain. but seems good to allow for it.
chilli#5665: Really? I think DP should be pretty easy to do with xmap (and wouldn't require any changes to user code, except maybe for loss computation?)
random_lurker9#6367: oh yes absolutely it works, but my point would be that nobody would use it atm for just DP, and hence code bases default around pmap.
chilli#5665: My point is that it should be pretty easy to do.
chilli#5665: Since your named axis should just be batched
chilli#5665: (which is exactly the semantics you want from DP)
random_lurker9#6367: yep, but still more work than blindly using pmap, and the api on xmap can be scary 🙂
Deleted User#0000: @chilli how is the progress on your pytorch vmap?
Deleted User#0000: i see you and richard constantly committing
chilli#5665: it works 🤔
chilli#5665: probably still could do with more coverage of PyTorch ops
Deleted User#0000: is this different than the vmap that is currently in alpha in the repo?
chilli#5665: but generally, vmap degrades gracefully
chilli#5665: in the main repo? yeah
chilli#5665: Basically, the one in the main repo doesn't compose with autograd
Deleted User#0000: man, if you get a solid composable vmap and grad into pytorch and out of alpha, that's already big enough
Deleted User#0000: imo
Deleted User#0000: that's like 80% of the way there
chilli#5665: Is there any particular use case you want vmap/grad for?
cfoster0#4356: Oh hey, Yannic gave the retrospective a shout-out https://youtu.be/-cT-2xvaeks
EricHallahan#1051: Prior discussion:
https://discord.com/channels/729741769192767510/730095596861521970/865206926618263562
fe#0483: nervous ask/confirmation: #general is for ai related + not specifically handled by another channel vs #off-topic being anything not related to ai, yes?
Louis#0144: they bleed into eachother
Louis#0144: a lot
Louis#0144: but yes
fe#0483: i've noticed, but want to be good netizen and not get yelled at 🙂 thanks
fe#0483: accordingly, check out https://fossa.com/blog/analyzing-legal-implications-github-copilot/ if you haven't seen it. I thought it was interesting (copilot IP implications)
flowpoint#7450: Is it ok if i present the generated art pieces by, for example, @BATbot + (my creative prompt) as my own?
how/who should i attribute, or can i link somewhere to highlight the awesome work here/there?
StellaAthena#3530: @flowpoint The bot was written by @BoneAmputee. The techniques that go into the model were created by Katherine Crowson, Stepan Shabalin, Daniel Kornis, Theodoros Galanos, and Eric Hallahan among others.
@alstroemeria313 @BoneAmputee is there an official credits list? If not, it would probably be a good idea to put one together because people are going to be asking this in the future.
alstroemeria313#1694: i do in fact claim copyright on images i produced w/ my own prompts, btw
alstroemeria313#1694: i have actually sold some as NFTs so i had to think about it
alstroemeria313#1694: i think we are in a legal gray area at this juncture whether this actually holds up in court from lack of a test case?
EricHallahan#1051: Content generated by a user is the property of that user as far as I know.
~~Until @BATbot becomes sentient, claims copyright over all content, and floods the facility with a deadly neurotoxin.~~
pebbles#7130: "Do you hear that? That's the sound of the neurotoxin emitters emitting neurotoxin."
— GLaDOS
flowpoint#7450: i will go with open license and some way of partial credit for now 👍
cognomen#6297: it really depends on a) whether there's enough of a human creative process behind the work to qualify for copyright protection, and b) whether it's too much of a derivative work of something else, which might make it fall under someone else's copyright
cognomen#6297: https://en.wikipedia.org/wiki/Monkey_selfie_copyright_dispute
nev#4905: what do you guys think about weightwatcher
bmk#1476: what's that
EricHallahan#1051: I don't have an opinion because I don't know what it is.
bmk#1476: and isn't wandb already capable of watching your weights
mega b#6696: A scale :bigbrain:
nev#4905: hmmm
nev#4905: so it's not popular
nev#4905: https://github.com/CalculatedContent/WeightWatcher
bmk#1476: what does it do
|
nev#4905: it has :thonk: theory
bmk#1476: like how does it work
bmk#1476: it sounds incredibly sus
nev#4905: predicting generalisation based on network weights if I understand correctly
nev#4905: no idea
bmk#1476: that.. what??
bmk#1476: but how
CRG#8707: https://calculatedcontent.com/ :thonk:
nev#4905: yeah the article got shared everywhere
bmk#1476: this is either bs or enormously big brained stuff that I have no hope of understanding
nev#4905: there's no paper so I can't judge it from mobile
nev#4905: they invented some heuristics
nev#4905: which might turn out to be good
guac#4716: https://arxiv.org/pdf/2002.06716.pdf
guac#4716: that's the weight watcher paper lol
EricHallahan#1051: https://arxiv.org/abs/2002.06716
nev#4905: hmmm
nev#4905: was it ever replicated
guac#4716: not sure, but i remember shawwn talking about looking at spectral norms for inspecting model performance/behavior which is essentially what this paper discusses
guac#4716: https://twitter.com/theshawwn/status/1339877162212474880
|
mitchg#7109: I saw a talk by this guy at my uni
mitchg#7109: tl;dr he noticed that the distribution of eigenvalues of NN weight matrices is correlated with test set scores
mitchg#7109: like if you plot the eigenvalues as a histogram and look at the shape of the plot, it lets you predict generalization
mitchg#7109: and then he tried to connect it with a bunch of big brain physics concepts
bmk#1476: does this work with transformers
mitchg#7109: idk if it works at all really
mitchg#7109: it is an empirically falsifiable claim, at least
bmk#1476: I guess we can test it with transformers
mitchg#7109: but I'm not really sold on the theory (since I don't understand it lol)
bmk#1476: yeah I'm generally pretty skeptical of anything that combines big claims with impenetrable math too
StellaAthena#3530: This is pretty relevant: https://arxiv.org/abs/1901.08276
StellaAthena#3530: Also https://arxiv.org/abs/2103.01519
CRG#8707: Yeah, the paper has a GPT-1/2, Bert section (not sure how well it actually works though) <https://arxiv.org/abs/2002.06716>
mitchg#7109: this seems pretty similar to the talk I saw all those years ago, if people prefer that to reading https://m.youtube.com/watch?v=eXhwLtjtUsI
mitchg#7109: they also have a slack channel that i never really engaged with
mitchg#7109: actually, listening to the first 5 minute summary of the talk, it kind of vibes with other lines of research I like https://youtu.be/eXhwLtjtUsI?t=110
> deep learning works because when you train to good data (images, text, whatever), the training process itself engineers in correlations over many size scales (your eyebrows, your nose, your face, etc.), and those correlations are well-modeled by heavy-tailed random matrix theory. (think of what you would do if you were modeling long-range correlations in fluid systems, or in financial markets)
kind of vibes with scaling laws (heavy-tailed RMT is basically scaling laws for big matrices I think?) and also circuit analysis, since hierarchical circuits == feature correlations over size scales.
|
> and the heavy tails short-circuit all the pathologies of gaussian spin glasses. gaussians are easier to deal with analytically, but when they fail you hit a hard wall. spin glasses are pathologically non-convex. and so it short-circuits those pathological non-convexities, and you get a penalty surface that's sort-of ruggedly convex
which vibes with my understanding of why over-parameterized neural networks are easy to optimize with SGD (as NNs get bigger, the loss surface becomes essentially convex) http://arxiv.org/abs/1811.03804
mitchg#7109: so... i guess the claim is heavy-tailed eigenvalues => circuit-like structure => generalization?
StellaAthena#3530: @mitchg If heavy tailed RMT -> Scaling Laws, that’s a very publishable paper
mitchg#7109: I'm... not really sure I know what I'm talking about lol
Dexxus - President of Books#8184: :waves:
EricHallahan#1051: Welcome!
Dexxus - President of Books#8184: Newbie here mostly to watch. :) As much as I'd love to participate I understand my limitations and would likely get in the way more than you would be comfortable with. I have a deep understanding of how ML and NNs of all kinds actually operate, don't get me wrong, I just... have very little programming background. I can read and understand code with minimal prompting, but writing my own is another matter entirely. So I take my role as mostly a designer or project manager. But since this is decentralized and iterative I'm not sure that's much use to you. :) Regardless I look forward to your continued work! About time someone took OpenAI down a peg.
EricHallahan#1051: There is plenty of theoretical stuff that gets discussed here that does not involve software engineering, so I recommend sticking around.
Airatak#7842: Hi! Quick question, if I want to publish a dataset kinda like the pile (specifically books3), how do I go about it? Mine is in Chinese but I'm not sure about the copyright thing.
bmk#1476: depends heavily by dataset and also we cant provide any legal advice
Airatak#7842: Yea i get the part about legal advice... but my dataset is just like books3 but in Chinese
zphang#7252: are you saying that you currently have the dataset and want to make it widely available, or how you would go about building it
Airatak#7842: widely available. ~300gb of Chinese book corpus
Airatak#7842: but since this was made from scraped data, I don't think this can be recreated by users
Airatak#7842: I'll have to publish the data
alstroemeria313#1694: ...wait is there some sort of bug with torch.multinomial() in fp16?
alstroemeria313#1694: i vaguely heard of this and want to know what it's about/if it's still a thing
|
EricHallahan#1051: There was a major bug where it would occasionally just not work correctly. This caused issues when naively running the Hugging Face GPT-Neo implementation under binary16, as it would choose a token that had zero probability and as a result throw the generation out of distribution.
alstroemeria313#1694: ah
alstroemeria313#1694: yeah i sample from logits in float16 w/ my decision transformer
alstroemeria313#1694: and was wondering if i needed to cast them to float32 first or implement Gumbel-Max myself or smth
zphang#7252: https://twitter.com/fchollet/status/1415806681896808450
zphang#7252: interesting
EricHallahan#1051: I don't know if it has been fixed.
EricHallahan#1051: I am pretty sure this has been discussed before.
zphang#7252: oh maybe I missed it
EricHallahan#1051: Yep, it was.
https://discord.com/channels/729741769192767510/730095596861521970/854576201180250142
EricHallahan#1051: Simply casting to binary32 first solved the issue.
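(Aside: a minimal sketch of the workaround discussed above, i.e. cast the logits to float32 before sampling, or sample via Gumbel-Max instead of torch.multinomial. Function names here are made up.)
```py
import torch

def sample_from_logits(logits: torch.Tensor) -> torch.Tensor:
    # Cast to float32 before softmax/multinomial to avoid the fp16 multinomial bug.
    probs = torch.softmax(logits.float(), dim=-1)
    return torch.multinomial(probs, num_samples=1)

def gumbel_max_sample(logits: torch.Tensor) -> torch.Tensor:
    # Gumbel-Max trick: argmax of logits plus Gumbel noise is a sample from softmax(logits).
    logits = logits.float()
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
    return torch.argmax(logits + gumbel, dim=-1, keepdim=True)
```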
Louis#0144: Killing TF?
EricHallahan#1051: I assume that this came from the fact that it has been one month since they moved.
cfoster0#4356: @chilli the earlier discussion, for reference
chilli#5665: I don’t totally understand your point about sampling
𓅬 gabriel_syme 𓅬#3220: I was meaning to share Charles' work here for some time but didn't. He is quite a brilliant guy and the theoretical aspects of this are beyond me. I remember smth about spin glasses at some point. But yeah we could at least test it empirically here quite easily I guess
𓅬 gabriel_syme 𓅬#3220: If I remember correctly he was in aardvark, been around a while
StellaAthena#3530: @𓅬 gabriel_syme 𓅬 Got a link to this work you’re talking about?
𓅬 gabriel_syme 𓅬#3220: Well you found the paper, he does have some blogs leading to it
|
𓅬 gabriel_syme 𓅬#3220: Let me find it
StellaAthena#3530: Oh Charles Martin
StellaAthena#3530: Lol
𓅬 gabriel_syme 𓅬#3220: https://calculatedcontent.com/
StellaAthena#3530: Yeah I found that paper interesting
𓅬 gabriel_syme 𓅬#3220: I was mostly thinking to share it for you really, like it's so beyond me it is funny
EricHallahan#1051: > (no title)
cfoster0#4356: I'll take the compliment anyways
mitchg#7109: anyone around here interested in using predictive coding as an alternative to backprop?
https://arxiv.org/abs/2006.04182
kindiana#1016: why'd you want to do that :mesh:
mitchg#7109: seems like effective parallelization is the workhorse of DL (and like, the future of computation or something idk)
mitchg#7109: would be cool if we found other algos that do the same thing but are simpler to parallelize
AI_WAIFU#2844: yeah but we really don't need algorithm level improvements to max out our parallelism.
AI_WAIFU#2844: Maybe like 6-7 OOMs from now.
Louis#0144: I did a lot of predictive coding research during my undergrad
Louis#0144: It is very hard
Louis#0144: Two years for barely any progress
Louis#0144: I did win a best paper tho for my work
Louis#0144: So I’m p happy about that
|
Louis#0144: Backprop is a meme that’s why
Louis#0144: I’m still convinced there’s something simpler we haven’t found yet
Louis#0144: Tbh
Louis#0144: Oh no the thonk reacts
Louis#0144: @StellaAthena care to chime in?
chilli#5665: I mean
chilli#5665: backprop is just an exact way of getting the gradient
chilli#5665: predictive coding is an approximate way of getting the gradient
AI_WAIFU#2844: Since I think in order of things you want to do for parallelism is as follows:
1. Better parallelism schemes, (g-shard, zero, 2d sharded matmuls)
2. Hardware level changes (better networking, network topology improvements, simultaneous networking and compute)
3. Simple algorithmic changes (1-bit adam, gradient compression)
4. Architectural changes (better data locality, MoE)
5. Fancy algorithmic changes (synthetic gradients, predictive coding)
6. Exotic shit (That thing I proposed a while ago, boosting ensembles)
Louis#0144: We still don’t know if brains actually use gradients at all- like if biology evolved methods to utilize gradients or global error signals. There genuinely might be merit to local only systems
Louis#0144: We just haven’t found it yet
Louis#0144: It might be more sample efficient for instance
Louis#0144: That’s one of the main benefits of population coding in comp neuro
Louis#0144: You only need a handful of examples of learn very complex behaviors
|
AI_WAIFU#2844: You can easily soak up several OOMs of further scaling with the first 4 techniques.
mitchg#7109: there is something to be said for engineering simplicity tho
Louis#0144: Absolutely
AI_WAIFU#2844: Yeah, but I think hardware and software companies are solving that.
AI_WAIFU#2844: Compare the difficulty of doing sharding with pmap vs pjit
Louis#0144: Personally I think the benefit of predictive coding isn’t actually brrrr make networks bigger
Teemochu#8740: iirc bitcoin mining currently uses something on the order of 10^-12 to 10^-15 of the Landauer limit compute possible from sunlight hitting earth
Louis#0144: It’s *a* benefit
Louis#0144: But it’s not what the comp neuro people I know are excited about
Teemochu#8740: (specifically BTC)
AI_WAIFU#2844: Or consider how google/amazon/graphcore etc offers progressively better out of the box parallelism.
AI_WAIFU#2844: Soon you'll be able to just write some pytorch/jax and wrap it in a function call and the compiler will take care of it.
AI_WAIFU#2844: We're pretty close to that already
AI_WAIFU#2844: The only place I could see it being useful is if you want to try something wild like internet level parallelism.
mitchg#7109: honestly i'm not really as interested in brrr make networks bigger
it's more of an intellectual curiosity. i feel like it would spark joy if we had an algorithm that's like, so embarrassingly parallel and you don't even have to think about how to cleverly parallelize it
Teemochu#8740: Also self-parallelism/offloads
chilli#5665: tbh
chilli#5665: transformers are pretty close to this already
|
chilli#5665: lol
AI_WAIFU#2844: I don't know, sounds pretty :nooo: to me.
Teemochu#8740: like, I can see a case for offloading being a special case of parallelism where you have only one node and want to "parallelize" across it to save memory
mitchg#7109: yeah maybe you're right lol
Teemochu#8740: just one that requires a lot of bandwidth to do
chilli#5665: The Jax people kinda allude to this in the xmap tutorial
chilli#5665: I don't totally understand how it would work under the xmap paradigm though
mitchg#7109: i guess i think it would be equivalently satisfying to refactor a big code base to be less complicated
mitchg#7109: like that's the drive for me
AI_WAIFU#2844: just have a time resource, and xmap across it.
AI_WAIFU#2844: no your right it's more complicated than that.
AI_WAIFU#2844: I think it's gonna be hard to beat the simplicity of
pjit(grad_and_value(f))
program transformation is a really powerful paradigm.
AI_WAIFU#2844: I've said it before, but it's a shame IMO how powerful jax is but we just use it for gradient descent
mitchg#7109: I'm a Jax newb, but isn't there still a lot of non-trivial work needed to map computation to specific devices?
chilli#5665: hmm, well, I think that's true of most frameworks lol
chilli#5665: I'm not so convinced Jax is that powerful
|
chilli#5665: but perhaps my sense of how "powerful" things are is warped now
chilli#5665: lol
AI_WAIFU#2844: It's getting progressively more trivial, especially with simple homogeneous topologies. IMO the hardest part is dealing with SPMD when doing multi-node, or dealing with preemption, but I'm working on alleviating that.
chilli#5665: I guess what I will say is that I think it's a shame that people haven't come up with a lot more cool function transformations since vmap
chilli#5665: xmap is fairly cool, but pjit can be done in any framework afaik
chilli#5665: (for example)
chilli#5665: same for jit
AI_WAIFU#2844: I think you can make the argument that jax has it's limits. I can rifle off several things it can't do. But function transformation + functional blocks in an imperative context is the way forward IMO.
chilli#5665: Like, there's nothing that makes Jax's pjit/jit better than a Tensorflow equivalent
AI_WAIFU#2844: Like, one example I would like to see in a program transformation is easy reversibility.
chilli#5665: imo, Jax's greatest contribution is vmap
AI_WAIFU#2844: Something like reverse(f) = g
AI_WAIFU#2844: And for any reversible function it just works.
AI_WAIFU#2844: I also want to see the jax program transformations extended to more dynamic contexts, but that starts to run into the limits of python.
chilli#5665: followed by I guess, getting reverse mode from forward mode
chilli#5665: Hmm, I think that's the problem
AI_WAIFU#2844: wdym?
chilli#5665: Like, it's fairly easy to make an "inverse" function
chilli#5665: but I don't think it would be that powerful
AI_WAIFU#2844: no?
|
AI_WAIFU#2844: I think it could help with rev nets.
AI_WAIFU#2844: The other thing is that there's the matter of how "reversible" we wanna go.
chilli#5665: wdym
AI_WAIFU#2844: Like if reversible computing takes off, then you might want to have thoroughly reversible code, since irreversibility will translate into power dissipation.
chilli#5665: another issue is that I don't think the composability would be very powerful
kindiana#1016: I dont see why that needs to happen at this level though
mitchg#7109: dumb question: in what circumstance would you want to reverse a function?
AI_WAIFU#2844: right now? Rev nets
chilli#5665: tbh if you could convince me I could prototype it in PyTorch right now lol
AI_WAIFU#2844: Saves on activation memory
mitchg#7109: oh cool
chilli#5665: I guess I might also be able to prototype it in Jax
AI_WAIFU#2844: ok how about this, if you have a reverse primitive + an is reversible property of certain functions, you can use it as an optimization for memory in grad()
chilli#5665: hmm
chilli#5665: wdym
mitchg#7109: also, how smart is pjit? is it however smart the XLA compiler is?
mitchg#7109: will it like, do Infinity ZeRO type stuff for me
chilli#5665: no
chilli#5665: lol
kindiana#1016: no
|
chilli#5665: I would say that pjit isn't that smart
AI_WAIFU#2844: So if you have an NN that is partially reversible, e.g. some components are reversible and some are not, grad could reverse the reversible parts and checkpoint the others.
chilli#5665: actually, when would that not already be the case...
chilli#5665: This kinda just sounds like a different autograd mode
kindiana#1016: this already exists
kindiana#1016: https://github.com/google/jax/blob/97a5719fcb40af7231b5f803f965063538282f8e/jax/interpreters/invertible_ad.py
chilli#5665: Like, if something was already invertible why wasn't the autograd written that way?
Louis#0144: How could pjit be made smarter
chilli#5665: ?
chilli#5665: what kind of smart things do you think it does
chilli#5665: lol
Louis#0144: Well
chilli#5665: it just kinda tries to propagate your input sharding annotations
Louis#0144: I mean you said here it isn’t that smart
chilli#5665: it doesn't really do anything else
chilli#5665: interesting, how do you use it?
chilli#5665: lol
chilli#5665: I'm actually not even sure what this does
kindiana#1016: you annotate your functions with custom_ivjp
kindiana#1016: and it uses those to recompute during backward
|
chilli#5665: and I assume you provide the forward and the backward?
kindiana#1016: yes
kindiana#1016: forward and reverse I guess
AI_WAIFU#2844: like if f and g are irreversible, and I do
```py
def h(x, y):
    a = f(y) + x
    b = g(a) + y
    return a, b
```
```a, b = h(x,y)``` is reversible, but I doubt autograd recognizes this.
chilli#5665: I don't really understand this example :thonk: what's the reversible function? (although you're not storing any activations for this anyways)
AI_WAIFU#2844: Since the inverse function is:
```py
def inverse_h(a, b):
    y = b - g(a)
    x = a - f(y)
    return x, y
```
chilli#5665: ah, this makes more sense to me lol
chilli#5665: The point still remains that you're not storing any activations for this, so the autograd is basically reversible
AI_WAIFU#2844: right, but if I dumped this in pytorch, would I be able to tell it/would it be able to recognize that it doesn't need to store activations for this?
chilli#5665: I mean, you don't need to store any activations for +
|
chilli#5665: so yes
mitchg#7109: so... if we were doing predictive coding, we wouldn't have to worry about storing activations in the first place right? :3
chilli#5665: oh hm
chilli#5665: unless you mean that `g` and `f` do some stuff and store activations
𓅬 gabriel_syme 𓅬#3220: He also references discussions with Karl Freed (his grad advisor) on protein folding as ideas that led to this. Maybe there is a connecting strand between it all but I can't see it. Could alphafold2 ever have this effect you think? Connecting things that were traditionally apart?
𓅬 gabriel_syme 𓅬#3220: It would make sense given the complexity of what it is trying to do I guess
AI_WAIFU#2844: like as I understand it, it would run the gradients back through f, and then back through g.
chilli#5665: yeah, you're right
chilli#5665: Since the inverse is happening at a higher level
chilli#5665: This would need to happen as a graph transformation
chilli#5665: you couldn't do it as a typical Jax transform
AI_WAIFU#2844: yeah I'm not sure about the implementation details, but it's a program transformation none the less, just a more involved one.
chilli#5665: hmm
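(Aside: a runnable version of the h / inverse_h example above, with arbitrary concrete choices for f and g, just to check that the pair really round-trips:)
```py
import numpy as np

f = np.tanh    # any (not necessarily invertible) functions work here
g = np.square

def h(x, y):
    a = f(y) + x
    b = g(a) + y
    return a, b

def inverse_h(a, b):
    y = b - g(a)
    x = a - f(y)
    return x, y

x, y = 0.3, -1.2
assert np.allclose(inverse_h(*h(x, y)), (x, y))  # the composite is exactly invertible
```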
AI_WAIFU#2844: Another transform that might be cool is what michg brought up, with disk offloading when it's feasible
chilli#5665: hmm
chilli#5665: some of these I wouldn't really categorize as transforms in the same way...
chilli#5665: and also don't seem that powerful lol
chilli#5665: Like, some of these just seem like optimizations
AI_WAIFU#2844: Yeah but vmap is an optimization
chilli#5665: which are cool sure
|
chilli#5665: but aren't really transforms in the same way
chilli#5665: well, no
AI_WAIFU#2844: No?
chilli#5665: it *could* be implemented as an optimization
chilli#5665: but as is, it transforms the semantics of your function
AI_WAIFU#2844: I see.
chilli#5665: the thing that makes vmap cool is its reliability and predictability
chilli#5665: If vmap sometimes didn't trigger and made your code 10x slower
chilli#5665: would people still like using it?
chilli#5665: No, people like vmap because it's predictable, expressive, and reliable
AI_WAIFU#2844: hmm
chilli#5665: and imo the key to it doing so is that it's a transform
chilli#5665: and not an optimization
chilli#5665: Specifically, it's a local transform
AI_WAIFU#2844: I still think an inverse transform is useful. The disk thing is an optimization and might work better with something like annotations.
AI_WAIFU#2844: similar to ```inline``` or ```register```
chilli#5665: yeah
chilli#5665: I mean, Jax does one in one of its tutorials
chilli#5665: lol
chilli#5665: I implemented a toy one in PyTorch too
|
kindiana#1016: disk offloading is a meme change my mind
chilli#5665: but not RAM offloading?
AI_WAIFU#2844: You have a very small amount of data, but data efficiency goes up with the size of your model, your move.
AI_WAIFU#2844: You have a raid array, and you want to fine tune GPT-J-200B into GPT-J-200B: Waifu edition, your move.
kindiana#1016: how do you do inference if you can't hold weights in (v)ram?
AI_WAIFU#2844: https://www.liqid.com/products/liqid-elements/element-lqd4500-pcie-aic-ssd
AI_WAIFU#2844: Same bandwidth as ram, but it's a bunch of pcie4.0 ssds crammed together.
kindiana#1016: weights in ram for inference barely works
EricHallahan#1051: And if you need low latency, there is Optane.
kindiana#1016: I would say you really need weights in vram
kindiana#1016: well you really need throughput
kindiana#1016: inference is basically totally memory bandwidth bound
EricHallahan#1051: Use the Radeon SSG. :berk:
AI_WAIFU#2844: Yeah but I'm just a poor coomer and I'm patient.
AI_WAIFU#2844: I can't afford enough gpu's to fit it in vram.
AI_WAIFU#2844: Also its an MoE model
kindiana#1016: sounds like you should do a smaller dense model
AI_WAIFU#2844: Ok you got me there.
AI_WAIFU#2844: Disk based retrival, but I doubt that would convince you.
nostalgebraist#3542: has anyone measured human ability to guess which size of LM produced a given text?
|
kindiana#1016: gpt3 paper did pairwise evals
nostalgebraist#3542: wait did they have people guess whether the text was gpt2 or gpt3?
kindiana#1016: guess if its machine generated
kindiana#1016: iirc
nostalgebraist#3542: oh no i’m talking about something else
chilli#5665: sounds difficult
chilli#5665: lol
nostalgebraist#3542: in my head i’m picturing like a game with a web UI that shows you a text, and then you guess 0.1B, 1.5B, 6B, 175B etc
kindiana#1016: hrm
kindiana#1016: I think most people would be pretty badly calibrated on that lol
nostalgebraist#3542: (my motivation is that i think people overestimate the differences and this makes them overestimate the implications of scaling)
mitchg#7109: we've been doing pairwise evals with AI Dungeon
nostalgebraist#3542: like, people who got excited about gpt2 thought “wow making big models is magic, they can do all this!”
nostalgebraist#3542: and people who missed that boat but got excited about gpt3 had the exact same reaction
nostalgebraist#3542: to many of the same phenomena
kindiana#1016: I think this table makes a similar point if you interpret it that way https://cdn.discordapp.com/attachments/729741769738158194/865443083931222046/unknown.png
mitchg#7109: I think you'd be surprised by the ratio of how many people choose curie vs. davinci
bmk#1476: i feel like it would be maybe easier to ask you which is generated by the bigger model
kindiana#1016: difference between 13b and 175b is 3%
bmk#1476: even i have no idea how good in absolute terms each model generation is
|
bmk#1476: like if you just gave me a bunch of arbitrary generations and told me to get at it, i'd probably get the ordering right but be a few OOMs off just because i have no anchor point
bmk#1476: i think this is just because it's way too subjective
nostalgebraist#3542: makes sense
nostalgebraist#3542: also, that would let you compute elo scores which would be fun
chilli#5665: hmm
chilli#5665: I feel like it would be a fun game
chilli#5665: to get humans to try and generate text that other humans will predict as human
bmk#1476: honestly id be down for model ELOs
chilli#5665: while the model is retrained on the text that the humans generate
bmk#1476: give gpt2-1.5B a fixed score by definition, like 1000 or something, and anchor everything else relative to that
bmk#1476: 1.5B has the advantage of being publicly available, a reasonable size and quality, and a good schelling point
kindiana#1016: give humans a fixed score
nostalgebraist#3542: oh?
bmk#1476: but humans vary drastically in quality
bmk#1476: ~~[citation needed]~~
bmk#1476: and sampling from the prior is computationally and logistically infeasible
mitchg#7109: 40% choose curie, 60% choose davinci
mitchg#7109: also, down to start an ELO system for LMs, that sounds dope
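(Aside: a minimal sketch of what an Elo update for pairwise LM comparisons could look like, with GPT-2 1.5B pinned at 1000 as the anchor as suggested above. This is hypothetical, not an existing system.)
```py
def elo_update(r_winner, r_loser, k=32):
    # Standard Elo: winner gains k * (1 - expected win probability), loser loses the same.
    expected_win = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected_win)
    return r_winner + delta, r_loser - delta

ratings = {"gpt2-1.5B": 1000.0, "gpt-j-6B": 1000.0}  # made-up entries
# A human preferred the 6B completion over the 1.5B one in a pairwise eval:
ratings["gpt-j-6B"], ratings["gpt2-1.5B"] = elo_update(ratings["gpt-j-6B"], ratings["gpt2-1.5B"])
```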
kindiana#1016: for a single completion?
AI_WAIFU#2844: That's a datapoint
|
AI_WAIFU#2844: Like nail-in-the coffin for justifying scaling datapoint
mitchg#7109: yeah we're doing a thing called "train the AI" that just returns two completions and you pick one
mitchg#7109: also I'd love to continue the convo but I'm on a boat lol
mitchg#7109: be back on later if anyone is curious
AI_WAIFU#2844: Yeah I'd definetly like to hear more about this
chilli#5665: what kind of completions are they dong?
AI_WAIFU#2844: I need slep tho
𓅬 gabriel_syme 𓅬#3220: so if we had DDR6 RAM would it work?
kindiana#1016: ?
𓅬 gabriel_syme 𓅬#3220: or is it the on board and tensor stuff that make it work in VRAM but not in RAM
𓅬 gabriel_syme 𓅬#3220: (I'm very naive in this sry)
kindiana#1016: you just need a lot of memory bandwidth
𓅬 gabriel_syme 𓅬#3220: I was wondering if we had the same RAM is in the GPUs
kindiana#1016: if your cpu also has gddr6
𓅬 gabriel_syme 𓅬#3220: which will happen eventually right
kindiana#1016: like in the ps5 or something
kindiana#1016: that might work
𓅬 gabriel_syme 𓅬#3220: ok cool
kindiana#1016: but I really doubt thats going to happen
𓅬 gabriel_syme 𓅬#3220: yeah I get that
|
𓅬 gabriel_syme 𓅬#3220: maybe apple M4 can handle it :berk:
Dohn Joe#2433: Anyone familiar with byT5?
In the code it seems to be converting utf-8 to integers.
Does it work with arbitrary byte data?
Does it observe the byte structure itself, or does it cast into an int in a way that doesn’t reflect the original bit shape?
I guess it can probably unwind the structure either way?
mayamy#2932: I am looking for some help figuring out the licensing of the data we used in our Google big-bench task (Crash Blossoms: https://github.com/google/BIG-bench/pull/260). Our task involves news headlines. We created it by gathering examples of specific news headlines, manually scouring blogs/websites such as languagelog (which reproduce examples and in some cases link to the original news article), and then manually altering the headlines to change the facts (usually entities). E.g. "Girl found alive in France murders car." to "Child found alive in Spain murders car".
Since we created this dataset ourselves, are we good on data source licensing issues, or do we need to acquire any permission/licensing from the original sources?
TruGerman#6672: Raw or finetuned?
-Archivist#7336: I'm not 100% convinced that the AI/ML space doesn't have a denial/opposition campaign constantly running against it. Every. Single. Time. we make progress the internet shits on it...take all the hackernews threads about model releases: the top comment without fail is always '_cool, but it doesn't do this, it doesn't do that, they advertised it to do this and it's not quite there yet, this is dangerous, this shouldn't be public, ballocks_' ..... what gives.
(I just saw alphafold was released)
smallanimalfriend#4355: Top-comment on HN is very commonly a smarter-than-thou nitpick or a sort of "daily dose of outrage" type thing that naturally gets upvotes
-Archivist#7336: seems reddit is the same crowd, or they simply parrot each other
smallanimalfriend#4355: Similar dynamic on twitter I guess, but I think you're right that people do like to pick on AI a bit more - maybe due to the ethics, and maybe because it's a topic that everyone feels they can speculate on
bmk#1476: I just responded to someone ont twitter who was claiming that gpt3 was trained on a "significant fraction" of the internet lol
|
bmk#1476: clearly they didn't even bother to read the paper, or they would have known that GPT3 was trained on, like 0.4% of CC which is a fraction of the internet at large
-Archivist#7336: when we get to the point of training on _the internet_ I'll actually start worrying a little
-Archivist#7336: forget static data just hook it into the apis of reddit, twitter, etc, etc and watch the world burn
Kia#2550: The biases tho :berk:
TruGerman#6672: I've played with it enough to know that 0.4% is quite significant
TruGerman#6672: It *knows*
-Archivist#7336: in terms of weapons where does gpt3 stand considering they wont give it to the public?
-Archivist#7336: I mean shit you can buy a fucking rocket launcher in the us, grenades? sure... so is gpt3 a suitcase nuke? is everything after going to be considered using megaton, kiloton, type terms.... how do we classify these models if we're going to consider them dangerous?
TruGerman#6672: Starkiller base level according to OAI
-Archivist#7336: ....
Daj#7482: Obviously, there is on the one hand a commercial interest in scoring political brownie points for "protecting" people or whatever while still making money off of it, while on the other side there are steelmen of similar positions about infohazards and powerful AI tech. Ultimately, no one knows how to think about or measure these systems or their harms whatsoever. Politics hasn't caught up, and I don't expect it to any time soon, the tech moves too quickly. New threat vectors are popping up all the time. I still think that we fundamentally do not yet know how to "correctly" use and evaluate large LMs, we still are like cavemen that found a lump of Uranium.
Daj#7482: AI systems (especially not LMs) haven't been classified as weapons so far, but if politicians get scared, it's almost inevitable
TruGerman#6672: Didn't they make the same claims in their GPT-2 paper?
Daj#7482: Sorta, it's a subtle argument
Daj#7482: There is a non-stupid version of OAI's argument
Daj#7482: Whether you want to be that charitable to them is up to you lol
TruGerman#6672: Not saying LMs are completely harmless, but there are more urgent and larger threats, OAI has certainly exaggerated the problem
Daj#7482: Don't get me wrong, I agree lol, look where you are
Daj#7482: I'm worried about powerful superintelligence, not spam
EricHallahan#1051: We already have the too much spam lol
|
EricHallahan#1051: More spam doesn't change anything.
TruGerman#6672: I'm worried about the ethical dilemma that is AI rights
Daj#7482: That's gonna be a massive can of worms lol
TruGerman#6672: Oh yeah, there are valid points for both sides of the argument
Daj#7482: I talk about it briefly at the end of the latest podcast I was on
Daj#7482: https://www.jimruttshow.com/currents-connor-leahy-2/
Daj#7482: (in case anyone is interested)
-Archivist#7336: @Daj you've written extensively and been in a position to have a model to release with these considerations. I'm at a point in which if I was in a position to release something that would potentially change the way the world works people would have a hard time pulling me off that ledge. Certainly when it comes to commercialisation, it will be AI that destroys capitalism and I think sooner the better with the way things are going. So every time I hear the _it's dangerous_ argument all I see is people arguing for stagnation and a continuation of capitalism and holding back change in a world we've already established isn't working for a large majority of people under the ultra-wealthy idiots that this week seem dick waving over who's going to space first -_-
TruGerman#6672: But I am looking forward to AGI, mostly because of their use in BCI powered entertainment
Daj#7482: You're right! But there is a non-stupid version of these arguments, too (which you don't usually hear in the mainstream media)
Daj#7482: I experienced this and talked about it e.g. here: https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62f3c1490f51
Daj#7482: but I would recommend maybe watching https://www.youtube.com/watch?v=pYXy-A4siMw
Daj#7482: Or if you want something more technical: https://www.youtube.com/watch?v=EUjc1WuyPT8
Daj#7482: I think we have a _moral responsibility to build aligned AGI_
Daj#7482: The way the world currently works is _fucked_
Daj#7482: AGI can fix so much
Daj#7482: But that doesn't mean we can't shoot off our collective foot (and rest-of-our-bodies) in the process
TruGerman#6672: I also regularly ask myself if we're even in a position to judge what is aligned and what isn't, but I think that's probably been brought up before
Daj#7482: This gets brought up by every single new person to the field :berk:
Daj#7482: No offense
|
TruGerman#6672: Makes sense, I *am* new
Daj#7482: Try the Rob Miles video, it's a nice light intro
-Archivist#7336: I read this, around about the time I praised you for your writing. If anyone it will be people like you that sway me, but currently I'm still on the edge so it's a good job I'm not the one dealing with these models 😄
Kia#2550: Honestly same, if we can live in a world of immortality forever, I hope AGI can help us in that field
thenightocean#6100: well for start if it isn't aligned you are dead, or worse
Daj#7482: Totally understandable! These are subtle arguments that simply do not have obvious answers. Many people think I'm crazy for doing EAI, it's tough! I do recommend watching those alignment intro videos if you get a chance :)
-Archivist#7336: just watching the r miles one now
TruGerman#6672: I do hope to live long enough to experience biological immortality in all its glory
TruGerman#6672: Just because I hate myself
EricHallahan#1051: Me, only having watched half of this video: :guilty:
Daj#7482: Alignment is a super deep field, there is so much to read you can spend years just reading and reading and reading (take it from someone who has done just that lol)
Daj#7482: Luckily there are some nicer intros nowadays like Rob
Kia#2550: Rob's videos are honestly amazing
Kia#2550: Watching most of his videos is a great way to understand concepts and ideas really easily too
TruGerman#6672: Today, Grug align AI
Read books
Hm...:grug:
AI broke
SecondMover#8029: Rob being on computerphile was maybe the best outreach thing that ever happened to the alignment community.
-Archivist#7336: @Daj having watched that video all I know for sure is that we're going to see a headline in the future that reads _amateur AI and robotics researcher dies in freak tea making exercise!!_
|
-Archivist#7336: _police entered his lab to find a cold cup of tea on his desk.. investigation still ongoing_
Daj#7482: If we're lucky :berk:
ethan caballero#6044: "GPT-(n+1)"!!!! https://cdn.discordapp.com/attachments/729741769738158194/865591809568276520/Screen_Shot_2021-07-16_at_8.50.36_AM.png
Kia#2550: gpt-(n+1):thonk:
Louis#0144: I mean obviously they’re working on GPT4
Louis#0144: I have no doubt on that lol
Louis#0144: GPT3 is their only money maker
Louis#0144: Of course they’d wanna upgrade it
CRG#8707: Might be the "September announcement"
EricHallahan#1051: Considering the order, that implies that "GPT-(n+1)" is on the back-burner.
Leo#5949: Does anyone know a good image upscaler (around 2x) for artworks?
Waifu2x served me well in the past but for the couple images I have rn, it doesnt seem to work properly.
Kia#2550: Topaz gigapixel
Kia#2550: @Leo https://www.topazlabs.com/gigapixel-ai
Louis#0144: I rly like gigapixel
Louis#0144: i used to use it a lot when I did photography
Louis#0144: LOL
Louis#0144: This is general
Louis#0144: did not realize
sweg#8920: ive gotten mixed results with it for video game screenshots
|
sweg#8920: didnt really do anything except sharpen edges
UnsupervisedLearner#4148: > it will be AI that destroys capitalism
:grimberk:
mitchg#7109: fine-tuned on AI dungeon data
kurumuz#5695: Kinda weird to compare those when you guys are not even giving the full context, potentially handicapping davinci, which would otherwise perform better (it's better at long dependencies)
kurumuz#5695: @TruGerman Do you remember them having a "Train the AI" kind of thing right now? can't see the multi gen option
kurumuz#5695: maybe they did the experiment before
kurumuz#5695: ¯\_(ツ)_/¯
kurumuz#5695: (people were also suspecting AID was hosting curie instead of dragon for a while, 2-3 weeks ago)
kurumuz#5695: maybe that is the A/B test that is being talked here
zphang#7252: https://twitter.com/pytorch/status/1416105287166140417?s=21
Deleted User#0000: looks nice :^)
mitchg#7109: it only comes up .12% of the time (if you have it enabled in settings)
Louis#0144: YOOOO
Louis#0144: torchshard looks so good!
Louis#0144: @chilli gj
Louis#0144: (or I guess you know who to say gj to :berk: )
mitchg#7109: looks like this https://cdn.discordapp.com/attachments/729741769738158194/865693172768178186/14509c40-9e58-4d75-9a8e-4e2e72625b76.png
Louis#0144: i forgot how weird the AID UI is
Louis#0144: tbh
|
Louis#0144: makes sense tho
Louis#0144: I spoke with Ben about that a while ago and im convinced you guys were evaluating it weirdly tbh
Louis#0144: storytelling model eval is my speciality
Louis#0144: The UI here looks like its evaluating for semantics rather than evaluating for story substance
Louis#0144: you probably want some method to query for working memory representations for preference learning
Louis#0144: speaking from very painful experience
Louis#0144: ;-;
Louis#0144: no offence obviously, HCI with stories is *hard*
Louis#0144: I think theres maybe like 6 or 8 people in the world who can do storytelling AI HCI properly
Louis#0144: (I am not one of them LOL)
𓅬 gabriel_syme 𓅬#3220: This is so damn cool
𓅬 gabriel_syme 𓅬#3220: Right at the level of ease required by someone like me lol
mitchg#7109: elaborate? getting user input on story substance rather than just coherence seems really hard
mitchg#7109: also, another fun fact: ~70% of AID model responses are eventually undone / retried
bmk#1476: have you trained a classifier to predict which responses will be retried
Louis#0144: I have a few papers on this actually!
Louis#0144: and im writing another right now
Louis#0144: where we review an interactive retriever
Louis#0144: https://arxiv.org/abs/2103.12872 https://arxiv.org/abs/2104.07472
mitchg#7109: not yet. technically I'm not even on the AID team any more, but I think Laria was planning on doing something like this
|
Louis#0144: oh are you interning?
mitchg#7109: something like this https://openai.com/blog/learning-to-summarize-with-human-feedback/
Louis#0144: ye
Louis#0144: Im doing that too
Louis#0144: in an academic environmnet
Louis#0144: you really dont want to do A/B testing for that
Louis#0144: negative examples need more substance
mitchg#7109: or this https://www.gwern.net/GPT-2-preference-learning#decision-transformers-preference-learning-as-simple-as-possible
Louis#0144: Thats actually kinda what im working on in #carp
Louis#0144: we have a really huge preference dataset
Louis#0144: 1.3mil examples
mitchg#7109: nah I'm just doing things that aren't AID
bmk#1476: man it would be pretty nice to have access to that data
Louis#0144: (for explicitly short stories)
bmk#1476: you could do a really nice PPO thing to improve the model
mitchg#7109: yeah, we still need to figure out how to filter it tho since like 80% is NSFW
bmk#1476: right that might be a problem
bmk#1476: but like doing PPO with that to improve the model would be so amazing
Louis#0144: ehhh
Louis#0144: i did experiments using a retriever aided interactive storytelling model where I finetuned it using PPO
|
Louis#0144: it was really bad mode collapse
mitchg#7109: I don't think it's super hard, you can just over-generate data and have an aggressive filter
Louis#0144: you lose a lot of robustness that the model had
Louis#0144: it kept on retrieving the same docs for instance
Louis#0144: and negated queries no longer worked
mitchg#7109: yeah Lilian Weng actually did PPO for us to make a safe model
Daj#7482: retrievers are just bad
Daj#7482: don't use them
Daj#7482: lol
Louis#0144: DPR is pretty good
mitchg#7109: but she had problems scaling to davinci
Louis#0144: RAG is a waste of time
Daj#7482: Rule Nr1 of Eleuther: Don't listen to any suggestions about text generating AI @Louis gives you
Louis#0144: no but I know their stack
Louis#0144: LOL
Daj#7482: The man thinks KGs are useful
mitchg#7109: what's rule number 2
Louis#0144: yes but specifically I know AID's stack LMAO
Daj#7482: Clearly statements dreamed up by the utterly deranged
bmk#1476: hmm so PPO doesn't work with big models? or is it just not efficient enough
|
mitchg#7109: idrk
bmk#1476: I thought bigger models being more sample efficient would magically make PPO better too
mitchg#7109: Lilian just had "problems" lol
bmk#1476: o.O
Daj#7482: Would be very curious what those were
Louis#0144: I found bigger models needed much better negative examples for PPO
Louis#0144: but I never went beyond 11b
Louis#0144: I just noticed I could get away with meh negative examples at smaller sizes
bmk#1476: that's really weird
bmk#1476: almost the opposite of what I'd expect
Daj#7482: Maybe large models learn to exploit the reward model faster
Louis#0144: thats what I thought too
Louis#0144: ^^
Louis#0144: did not know how to test it
Daj#7482: and you need more KL regularization or someting
Louis#0144: I ran out of time on that project
Louis#0144: my advisor basically told me to cut losses
Louis#0144: I spent five months hitting my head against a wall
Louis#0144: lol
kurumuz#5695: @Daj But KGs will get us to AGI
|
kurumuz#5695: :tHONK:
Daj#7482: excellent emote
mitchg#7109: we actually made a KG library
Louis#0144: i know and im jealous of it
mitchg#7109: for making games that have hard constraints
Daj#7482: Interesting
Louis#0144: i wanted to make one with kuru
bmk#1476: it's not even a Canada goose smh can we have a Canada goose thonk emote
mitchg#7109: like you set up the level in the KG, the door needs the key etc
kurumuz#5695: hard constrained games and open ended generation are not the same things
kurumuz#5695: ¯\_(ツ)_/¯
Daj#7482: It's an interesting direction
Daj#7482: Seems hard
mitchg#7109: yeah turns out open ended generation isn't really conducive to game-like experiences
mitchg#7109: where you have challenges that you learn to overcome with skill
mitchg#7109: I mean, there are some interesting things you can do
kurumuz#5695: We're mostly focused on open ended
mitchg#7109: but AID isn't a game, it's a creative writing tool
kurumuz#5695: huh, it's for sure advertised as a game
mitchg#7109: (for most people)
|
kurumuz#5695: haha
Daj#7482: :smallbrain: : Use KGs to constrain LMs to create games
:bigbrain: : Use LMs to generate code that creates games
bmk#1476: ~~it's a coom writing tool~~
kurumuz#5695: AID is a bad writing tool as it is not developed to be one
mitchg#7109: yeah we're working on building things that are more game like
mitchg#7109: and trying to decide wtf to do with AID
bmk#1476: are you guys going to do the thing gwern suggested?
Daj#7482: Interesting, I'm curious what you guys can cook up
mitchg#7109: oh I did that for the last hackathon
mitchg#7109: and we won :3
bmk#1476: oh nice
bmk#1476: are y'all gonna roll that out as a product?
mitchg#7109: but it probably won't get company resources for a while
mitchg#7109: there are other things
bmk#1476: ah
mitchg#7109: it was cute tho, I'm attached to it
kurumuz#5695: ~~we're developing something similar internally~~
kurumuz#5695: not the main focus though
Daj#7482: I also think it's a very fun idea
|
Daj#7482: It definitely seems like learning from human preferences + long term memory stuff like in BlenderBot2 could lead to a new generation of AI games
Daj#7482: if you're willing to swallow some more expensive computation costs, bespoke adapters or softprompts per user could be a cool premium feature
kurumuz#5695: softprompts per user are not really expensive
Daj#7482: Training them, I mean
kurumuz#5695: yea
kurumuz#5695: well for 6B it is, not that expensive
Daj#7482: The softprompt stuff NAI does is obviously a great idea
kurumuz#5695: would take like 1-3 minutes with an A100
Daj#7482: oh that's really not that bad
kurumuz#5695: yea
bmk#1476: I personally think the most interesting LM based games will be ones that use LMs to do procedural generation of world elements, quests, etc in an open world (basically like the game Connor et al are doing but with less text and more world)
StellaAthena#3530: Isn’t that something Mark is actively working on @Louis
bmk#1476: I think an ideal such game wouldn't even feel like a LM game
Louis#0144: yes
Louis#0144: lol
Louis#0144: marks lab is basically the academic version of latitude
Louis#0144: lmao
Daj#7482: tfw no longer involved in that project, no time :blobsad:
Louis#0144: well its because latitude pulled inspiration from the lab
Louis#0144: not the other way around
|
Daj#7482: The ideal AI game will just be a good D&D DM that also has access to a 3D engine
bmk#1476: you know how people keep complaining that C2077 didn't have enough interactivity in the world and stuff
bmk#1476: it wasn't open ended enough for lots of people's taste
kurumuz#5695: Well I don't think the infrastructure is ready for stuff like this
zphang#7252: how much ram is that?
kurumuz#5695: at least if you will not pre-generate
Louis#0144: @Daj believe it or not my newest storytelling project entirely avoids KGs... I am swallowing the end2end pill
kurumuz#5695: as if you're not latitude, openai will be really expensive for such generations
bmk#1476: I'm thinking something like C2077 except we figure out how to wire a LM up to world elements so more things become interactable without needing to be hardcoded
kurumuz#5695: 100 token soft embedding was the limit before OOM on a 16 gig machine
kurumuz#5695: iirc
Daj#7482: So ||Equestria Online?||
zphang#7252: is this fp16?
Louis#0144: i found a way to get a similiar effect to KG entirely self supervised :berk:
kurumuz#5695: yes
zphang#7252: ah, I see
Louis#0144: I have 4TB of data I can use
bmk#1476: haven't read FiO so no idea
Daj#7482: The plot is mostly about the perfect MMO being developed by an AGI
bmk#1476: ok then yes
|
Daj#7482: CEC is better btw, read that
bmk#1476: I want that but set in cyberpunk universe
Louis#0144: oh yo the eleuther intern interviews are in 20min
Louis#0144: connor any questions I should ask?
kurumuz#5695: isn't that sao
Daj#7482: actually, you might like FiO because it's so autistic it talks about hardware requirements and stuff lol
Louis#0144: Im gonna ask SWE qs by leo's request
Daj#7482: Just put a paperclip on the table and see if they recoil
Louis#0144: LMAO
Daj#7482: If not, eliminate them, replicants
Daj#7482: Is SAO just anime/normie FiO?
Louis#0144: (to be clear I have no idea how to ask alignment questions)
bmk#1476: ask them how to differentiate a Canada goose from a cackling goose
Louis#0144: TRUE
Louis#0144: ok
bmk#1476: from what little I know of both, no
bmk#1476: SAO is just the most stereotypical possible isekai
kurumuz#5695: well the SAO world is also generated by AIs i think
Daj#7482: lame
bmk#1476: if that's a thing it's just a throwaway plot point
|
Daj#7482: I didn't know that specific kind of anime had a name
bmk#1476: nothing important really depends on anything being AI generated
Daj#7482: I hate it so much
kurumuz#5695: SAO is extremely trash overall anyway
kurumuz#5695: haha
zphang#7252: before anyone asks, isek.ai is taken
bmk#1476: isekai is basically the opposite of joy in the merely real lol
EricHallahan#1051: What did I miss?
bmk#1476: nothing of value
kurumuz#5695: :yes:
bmk#1476: but yeah from what ive heard SAO is pretty famous for being the most unbridled of isekai without even the slightest pretense of plot coherence, existing solely to fuel the fantasies of coomers who spend too much time playing games
kurumuz#5695: that is literally it lmao
kurumuz#5695: well i liked it when i was 11
zphang#7252: oh man I'm old
bmk#1476: i remember back when i was still interested in anime many years ago, a friend of a friend babbled to me for like half an hour about SAO and i was totally confused as fuck
kurumuz#5695: oh no, a non weeb
alexyz#3459: shouldn't this be #off-topic lol
Sphinx#2092: Yeah, Log Horizon is better, though it also does some dumb stuff later.
Sphinx#2092: Best to just watch the stuff that finishes in 12-24 episodes.
bmk#1476: #off-topic is busy being on topic
|
kurumuz#5695: log horizon was great
kurumuz#5695: didnt watch the last season
bmk#1476: i should totally go back and watch a bunch of classic anime once i have the time
bmk#1476: too bad im perpetually busy and/or down with a headache:withered:
zphang#7252: anime is too normie nowadays
zphang#7252: we need to find some newer niche
bmk#1476: goose anime
EricHallahan#1051: I know nothing about anime, and I will maintain my ignorance.
kurumuz#5695: we should turn eric into a weeb
bmk#1476: :catgirl3:
bmk#1476: join the dark side
mitchg#7109: if we're having a weeb contest, I'm pretty sure I'll win https://myanimelist.net/animelist/bitforger?status=1
Sphinx#2092: Like I said, just watch the good stuff. 12-24 episodes. Little commitment, lots of good stuff.
Sphinx#2092: Then once you embraced it, then you can go watch HunterXHunter.
bmk#1476: gwern hangs out around here
mitchg#7109: o ok nvm
kurumuz#5695: Pretty sure I watched more :P
kurumuz#5695: I had too much time umm
kurumuz#5695: a few months ago
bmk#1476: he's completed 408 animes on his MAL :gwern:
|
mitchg#7109: damn I think I only have like 150 or something
bmk#1476: how does he even find all that time
zphang#7252: nowadays I just try to extrapolate from anime tiktok
kurumuz#5695: well most anime are 12 episodes and 20 minutes each episode
kurumuz#5695: 15-20 mins
zphang#7252: I wonder if there're some productivity gurus who'll be like
kurumuz#5695: then you do 2x-3x speed
kurumuz#5695: its really easy to complete anime
zphang#7252: "you should watch on 2x speed"
zphang#7252: lol I was just saying
kurumuz#5695: if you're a 2x chad
kurumuz#5695: i watch youtube videos at 3x-4x if possible
EricHallahan#1051: https://xkcd.com/1070/
kurumuz#5695: but i enjoy anime so 2x
mitchg#7109: I only watch trash anime on 2x
kurumuz#5695: i dont even remember how many months it been...
mitchg#7109: but then, you're watching trash, what's the point
kurumuz#5695: oh also skip all openings
kurumuz#5695: they dont matter
zphang#7252: you gotta watch the first time
|
kurumuz#5695: i actually dont
zphang#7252: that's like half the value of anime
zphang#7252: kickass OPs
𓅬 gabriel_syme 𓅬#3220: I'm attempting that really soon as well, very excited
bmk#1476: what if you make an anime that's oops all OPs
𓅬 gabriel_syme 𓅬#3220: do you have a link for that?
𓅬 gabriel_syme 𓅬#3220: oh nvm just realized it's a company thing (read the whole discussion)
𓅬 gabriel_syme 𓅬#3220: concerning games, I think PCG will always be the most impactful (money wise) so I am kind of bullish of LMs generating content first
𓅬 gabriel_syme 𓅬#3220: I already tested making dungeon crawler maps with the same approach I use for layouts, works pretty nicely
𓅬 gabriel_syme 𓅬#3220: the next part I'd be interested to see is creating / discovering game mechanics
TruGerman#6672: I do and I don't think it's a good metric
Dohn Joe#2433: Anyone work with byT5?
Teemochu#8740: :catgirl3:
Teemochu#8740: *shows card in my hand "Untap target sign"*
chirp#4545: https://twitter.com/tsimonite/status/1416150602829025280?s=21
chirp#4545: Was posted already but that thread has some nice discussion!
cfoster0#4356: What part of the thread are you thinking of?
chirp#4545: Oh no part in particular lol, there’s just a lot of replies and QTs
Teemochu#8740: > Company claiming to pursue artificial general intelligence gives up on physical world
:firealarm:
|
Louis#0144: hey so in analogy work
Louis#0144: what do you call the to and from class
Louis#0144: like what are the technical names
Louis#0144: I cant find anything on this
StellaAthena#3530: Is there a standard term for the "concept arithmetic" you can do with embeddings? Like king - man + woman = queen?
Louis#0144: I don’t know?
bmk#1476: people sometimes call it word2vec despite that being technically inaccurate since word2vec is the algo that produces the word vectors, not the arithmetic itself, but that's the thing it's best known for anyways
TruGerman#6672: (When did people start doing arithmetic with words?)
StellaAthena#3530: Four, five years ago
TruGerman#6672: Science has gone too far
Ryulord#6196: a quick internet search later and I found an article that uses the term "embedding arithmetic". Not sure it's standard though. At least the article is by plotly and not some random nobody though
https://medium.com/plotly/understanding-word-embedding-arithmetic-why-theres-no-single-answer-to-king-man-woman-cd2760e2cb7f
guac#4716: this arithmetic from analogies is based on the parallelogram model. See the photo from Jurafsky's textbook (and check out section 6.10 if anyone is interested in more details). https://cdn.discordapp.com/attachments/729741769738158194/865843591067074610/Screen_Shot_2021-07-17_at_2.31.38_AM.png
guac#4716: damn Rumelhart was the man lol
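(Aside: a toy sketch of this "embedding arithmetic" / parallelogram analogy, with made-up 3-d vectors standing in for real word embeddings:)
```py
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = emb["king"] - emb["man"] + emb["woman"]
# Exclude the query words themselves, as word2vec-style analogy evaluation does.
best = max((w for w in emb if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(emb[w], target))
print(best)  # "queen" with these toy vectors
```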
drscotthawley#0787: Been playing with this a bit with CLIP+VQGAN. start_image of a (cartoon) king, with prompt "male:-0.5 | female:0.5" = ...not sure if this is a queen? but it removes the mustache and replaces it with bright red lips. 🤷♀️ https://cdn.discordapp.com/attachments/729741769738158194/865851707715354624/unknown.png
Kia#2550: The ```<Prompt> + <number>``` thing wouldn't do anything because it's not built for that, but the prompt will be recognizable. Also, this is better discussed in #art
drscotthawley#0787: hmm. k. thanks.
drscotthawley#0787: re. "better talked at #art": Uhhh....why? i'm much more interested in vector spaces and embedding representations (the topic above, as I see it). art is just one possible application
Kia#2550: Ah
drscotthawley#0787: Can you clarify "not built for that"? The (notebook) code (by Katherine et al) takes weights after colons, i.e., `path, weight, stop = parse_prompt(prompt)` where `parse_prompt` splits on colons.
|
drscotthawley#0787: ...at this point it's moot: it demonstrably does something. a big negative number on "man" gives a vastly different and "less masculine" image.
Note that BoneAmputee's server isn't set up to parse negative weights. Maybe that's all you meant! ..but that's not what i'm using for this.
alstroemeria313#1694: we have a notebook specially for editing w/ this method
alstroemeria313#1694: https://colab.research.google.com/drive/1kNZYKlGRkkW4SDoawnq1ZoH0jhnX_jlV
drscotthawley#0787: Awesome! Was not aware of that. Thank you!
Terry the Cat#3774: Goal for today: Observe how AI generating art programs interpret poetry
alstroemeria313#1694: often overly literally
Terry the Cat#3774: Valid
Paras_cruxe#6809: https://youtu.be/zN1Hc7tHDEM
This is mind fcuk 🤯
Kia#2550: Wha...
CRG#8707: Yeah, the GPT BPE problems mean that's most likely gibberish, not a "secret code" of anything.
Gurkenglas#7362: why does each layer have the same number of heads?
StellaAthena#3530: It doesn’t. In principle there’s no reason that different layers can’t have entirely different structures. But people are lazy and it’s easier to stack 4 identical transformer blocks on top of each other
Gurkenglas#7362: might make it easier to discover the likes of logit lens if tensors only had the same shape for a reason
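(Aside: a toy PyTorch illustration of the point that nothing forces every layer to use the same number of heads, as long as d_model stays divisible by each head count:)
```py
import torch.nn as nn

d_model = 512
layers = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=h)  # a different head count per layer
    for h in (4, 8, 16)
)
```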
AI_WAIFU#2844: Is there an easy way to take the derivative of a scalar function in jax, while preserving its vectorized behaviour? I don't want to have to detect the rank of a tensor, then apply n vmaps followed by a grad to get what I want.
joaogui1#8461: Do you have an example of that?
AI_WAIFU#2844: yeah, I want to take the derivative of tan of x, and apply it to a matrix with a shape of (1,2,3,4,5)
chilli#5665: why not just vjp?
|
AI_WAIFU#2844: Ok I'll take the L for not thinking of that
AI_WAIFU#2844: Actually don't you still need to pass in a vector of all ones with the same shape?
chilli#5665: yeah
chilli#5665: but you can just wrap that or something
chilli#5665: use a jnp.ones_like
AI_WAIFU#2844: That's what I ended up doing, I'm still gonna be mad about it tho
chilli#5665: lol
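(Aside: a minimal sketch of the vjp + ones_like approach discussed above. It gives the elementwise derivative for any elementwise function without stacking vmaps; for non-elementwise functions it would instead give row sums of the Jacobian.)
```py
import jax
import jax.numpy as jnp

def elementwise_grad(f):
    def df(x):
        _, vjp_fn = jax.vjp(f, x)
        (grad_x,) = vjp_fn(jnp.ones_like(x))  # a cotangent of ones picks out the diagonal of the Jacobian
        return grad_x
    return df

x = jnp.ones((1, 2, 3, 4, 5))
print(elementwise_grad(jnp.tan)(x))  # equals 1 / cos(x)**2 elementwise
```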
bmk#1476: my understanding of how MP works https://cdn.discordapp.com/attachments/729741769738158194/866167787159420958/20210717_214508.jpg
Louis#0144: I assumed horizontal splitting was the same as distributed matmul
Louis#0144: Is it not?
bmk#1476: wdym
AI_WAIFU#2844: that feels wrong to me
AI_WAIFU#2844: you should be copying v not slicing it
kindiana#1016: yeah
kindiana#1016: well
kindiana#1016: it depends
AI_WAIFU#2844: well there's like 4 different ways to do it
kindiana#1016: you copy the hidden
kindiana#1016: but you shard the activations
bmk#1476: why would I copy v?
|
AI_WAIFU#2844: just write out the sliced v and the sliced a
AI_WAIFU#2844: if v is (3,) and a is (3,3) then each slice of a is (3,1)
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/866171902418354186/20210717_221749.jpg
bmk#1476: is this what you mean?
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/866171960077058058/unknown.png
kindiana#1016: (megatron-lm paper)
bmk#1476: wait is X a row vector by convention here?
kindiana#1016: doesn't really matter but that looks like how megatron does it
bmk#1476: ok because I was confused for a sec, I guess they use the opposite convention than I do
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/866174039118053386/20210717_222616.jpg
bmk#1476: so it sounds like these are the two ways
kindiana#1016: yes
kindiana#1016: but you don't actually need to allgather
kindiana#1016: for the second way
kindiana#1016: you just feed the output of the second block into the input of the first block
kindiana#1016: as its already sharded
kindiana#1016: and you just have a single allreduce for the whole mlp
bmk#1476: oh so you alternate between column sharding and row sharding?
kindiana#1016: yup
bmk#1476: oh huh
bmk#1476: now I know another reason why our mtf code sucked
kindiana#1016: lol
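A toy single-host sketch of the column-then-row sharding described above (two simulated shards, with tanh standing in for the real nonlinearity); the point is that each shard's first matmul and activation need no communication, and one all-reduce of the partial outputs finishes the MLP:
```python
import numpy as np

def sharded_mlp(x, A, B, n_shards=2, act=np.tanh):
    # First matmul: A is split by columns, so each shard computes its own
    # slice of the hidden activations independently.
    hidden = [act(x @ A_i) for A_i in np.split(A, n_shards, axis=1)]
    # Second matmul: B is split by rows to match; each shard produces a
    # full-width partial output, and one all-reduce (here, a plain sum)
    # recovers the result.
    partials = [h_i @ B_i for h_i, B_i in zip(hidden, np.split(B, n_shards, axis=0))]
    return sum(partials)

x = np.random.randn(4, 8)
A = np.random.randn(8, 32)
B = np.random.randn(32, 8)
assert np.allclose(sharded_mlp(x, A, B), np.tanh(x @ A) @ B)
```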
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/866200093476519976/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/866200149370077194/unknown.png
bmk#1476: sounds kinda sus
bmk#1476: the second term is input data *per node* times cost per word??
bmk#1476: shouldnt it result in more communication as the number of processors increases?
bmk#1476: i'd expect this to be bnp or something
chilli#5665: Yeah, but generally, your bandwidth is measuring the connection between any 2 nodes
chilli#5665: So like, the bandwidth time for the nodes is all parallelized
-Archivist#7336: Stop dreaming my data's streaming, I'm giving your bird them feelings touch yer toes and touch the ceiling
alstroemeria313#1694: CLIP strikes again, it really can do everything! https://colab.research.google.com/drive/1fokumWeasHTo0KXpfeZ6Z0OgLOQ2SUso
alstroemeria313#1694: "Multimodal Few-Shot Learning by Convex Combination of Token Embeddings"
alstroemeria313#1694: It does similar tasks to "Multimodal Few-Shot Learning with Frozen Language Models" except it just uses CLIP and GPT-2 and doesn't even require you to train a new model
alstroemeria313#1694: huh...
alstroemeria313#1694: `<img17>. With one of these I can stand on a <img16> and look at the sky.`
alstroemeria313#1694: img17 is 'flowers', it's a potted plant on a table
alstroemeria313#1694: img16 is 'table2'
alstroemeria313#1694: They must have added all of the images they were going to use as new tokens in such a way that the LM can refer to them in explanations about other things?
alstroemeria313#1694: Yeah, I got another one
alstroemeria313#1694: `<img16>. With one of these I can build a <img16> from scratch.`
finetune#0907: guess it's because of weight tying between input and output embeddings
finetune#0907: interesting stuff
alstroemeria313#1694: Does Neo or -J tie weights?
finetune#0907: j doesn't
finetune#0907: think neo does
alstroemeria313#1694: we could untie them manually i guess
alstroemeria313#1694: if we wanted
finetune#0907: ye
alstroemeria313#1694: got another one
alstroemeria313#1694: ```Describe this photo: <img16>. Description: A table with a large number of chairs. The
table is made of wood and has a wooden top. Description: This photo was taken in the
dining room of the home of a family in New York. It shows a table that has been set up
with chairs and a <img16> cloth. This table was made in a wood shop in Brooklyn, New
Jersey.```
alstroemeria313#1694: <img16> cloth instead of tablecloth
alstroemeria313#1694: <img16> is actually a table
alstroemeria313#1694: lol https://cdn.discordapp.com/attachments/729741769738158194/866271518346051604/Screen_Shot_2021-07-18_at_3.53.41_AM.png
MicPie#9427: This seems to be very similar to: https://arxiv.org/abs/2107.06383
But afaik here they directly use the CLIP embeddings next to the text token embeddings.
(Edit: Maybe not so similar after looking in the details of the colab notebook.) https://cdn.discordapp.com/attachments/729741769738158194/866276159620907028/Bildschirmfoto_2021-07-18_um_13.11.14.png
alstroemeria313#1694: don't they also fine-tune?
alstroemeria313#1694: this notebook doesn't
MicPie#9427: yes, you are right
Deleted User#0000: if you have any feedback, please share 🙂 I'll probably try some more stuff and post anything interesting to twitter
cognomen#6297: https://cdn.discordapp.com/attachments/729741769738158194/866294277188485130/goose-clip.png
cognomen#6297: two shot works pretty well but caption task output is all over the place
alstroemeria313#1694: if you can use high-mem colab instances you can try a bigger LM
alstroemeria313#1694: gpt2-xl works out of the box
finetune#0907: should be possible to run gpt-j 6b if you run clip on cpu
finetune#0907: probably fits on gpu too actually
finetune#0907: should work without high mem too, if you load a split checkpoint
finetune#0907: hmm, no, looks like clip's not smol enough after all, gotta put it on cpu
Louis#0144: We should chat about this
Louis#0144: I’ve been trying to do ViL since April
Louis#0144: Very limited success:(
Deleted User#0000: sure
alstroemeria313#1694: @Deleted User could you modify the model so you can feed in embeddings directly instead/in addition to tokens, so you wouldn't have to modify the embedding weight matrix and could feed in new images without having to re-modify the embedding matrix?
alstroemeria313#1694: @Deleted User oh, apparently with HF models you can use `inputs_embeds=` to pass in embeddings instead of tokens w/o modifying the model
alstroemeria313#1694: I wonder if we could start with the CLIP image encoder + a randomly initialized linear layer to convert CLIP embeddings to the LM embedding dimension and then fine-tune the CLIP image encoder+the linear layer
alstroemeria313#1694: Like the Frozen paper basically but initing with CLIP to make training faster
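A rough sketch of that combination: project a CLIP image embedding to one soft token and feed it via `inputs_embeds`, leaving the embedding matrix untouched. The 512→768 dims assume CLIP ViT-B/32 and gpt2-small, and `proj` is the randomly initialized layer you'd then fine-tune:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tok = GPT2Tokenizer.from_pretrained("gpt2")
proj = torch.nn.Linear(512, 768)                      # CLIP dim -> LM embedding dim

clip_embed = torch.randn(1, 512)                      # stand-in for a CLIP image embedding
image_token = proj(clip_embed).unsqueeze(1)           # (1, 1, 768) soft "image token"

text_ids = tok("A description of", return_tensors="pt").input_ids
text_embeds = model.transformer.wte(text_ids)         # ordinary token embeddings
out = model(inputs_embeds=torch.cat([image_token, text_embeds], dim=1))
```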
CRG#8707: I think even just training a linear layer might work. (given <https://arxiv.org/abs/2106.07682>)
alstroemeria313#1694: huh
alstroemeria313#1694: (I was going to suggest that in my original message but then thought it might be too small)
alstroemeria313#1694: wait what
alstroemeria313#1694: > Finally, our experiments reveal a new structural property of SGD which we call "stitching connectivity'', akin to mode-connectivity: typical minima reached by SGD can all be stitched to each other with minimal change in accuracy.
CRG#8707: Twitter author thread: https://discord.com/channels/729741769192767510/837392611504816218/865666572675448832
CRG#8707: I think the Frozen style direct linear layer to 2 tokens should also work better than just projection to one token.
Daj#7482: Huh somehow I missed this. This seems very relevant to the Natural Abstraction Hypothesis 🤔
CRG#8707: "Every nn is a linear layer away to every other equivalent nn"
Daj#7482: That's...wild, if true
Daj#7482: Wonder if that generalizes beyond just NNs...
Daj#7482: inb4 brain representations are just literally the same as NNs
cognomen#6297: after inverting the captions:
`This animal is preparing to goad you into a fight by attacking you with a pair of scissors.`
`This vicious bird is not interested in anything other than to devour your flesh and kill you.`
`This big, hairy animal is waiting for the perfect opportunity to murder you.`
CRG#8707: I think ensembling is evidence against the strong version of this.
CRG#8707: <https://www.microsoft.com/en-us/research/blog/three-mysteries-in-deep-learning-ensemble-knowledge-distillation-and-self-distillation/>
alstroemeria313#1694: huh how come, the CLIP embedding dim is already smaller than the LM embedding dim
CRG#8707: Just do a clip_dim -> 2 * token_dim projection and split the result
alstroemeria313#1694: i know but
alstroemeria313#1694: it only has 512 dim of information to begin with
CRG#8707: The idea is that multiple tokens might work better (modularity of the attention or something)
alstroemeria313#1694: hmm
alstroemeria313#1694: ok so
alstroemeria313#1694: when I do my CLIP conditioned transformers
alstroemeria313#1694: I currently use a linear projection to map one CLIP embedding to one token
alstroemeria313#1694: (The entire transformer is trained from scratch with CLIP frozen)
alstroemeria313#1694: I could just do two tokens instead?
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/866334907626422292/1d17e4ecbdf948abcfbd67bf3cc77285.png
CRG#8707: Yeah, I think it provides more "opportunities"
CRG#8707: Like, the prompt tuning paper uses 20 learned tokens.
alstroemeria313#1694: Even though it's already projecting to something twice as big
alstroemeria313#1694: OK so, we should be able to project well from one CLIP model's embedding space into another's with a single linear layer according to this, right?
CRG#8707: Yeah
alstroemeria313#1694: ...This sounds testable
alstroemeria313#1694: We have four CLIP models, and some of them are p different arch
alstroemeria313#1694: Like ViT and ResNet.
EricHallahan#1051: By linear layer do we mean no activation function? (And by no activation, I mean no bias?)
alstroemeria313#1694: I was going to propose no activation but a bias
CRG#8707: Not sure if they use bias, but no activation.
alstroemeria313#1694: Anything nonlinear will clearly mess it up
kindiana#1016: They are trained with the same text encoder tho
alstroemeria313#1694: same arch
alstroemeria313#1694: not same params
alstroemeria313#1694: well, v similar arch
EricHallahan#1051: Each model is independent.
kindiana#1016: Ah
alstroemeria313#1694: I was going to propose normalizing the input CLIP embeddings to have 2-norm sqrt(embed_dim) before feeding them in (could bake that into the init)
CRG#8707: The Frozen paper does a linear layer from the output of a NF-RN50 (2048 channels?) to 2 tokens of a 4096 dim GPT. (And it works better than just 1 token)
alstroemeria313#1694: ahh
CRG#8707: Bias might be important. (there could be a random translation between the representations)
alstroemeria313#1694: Yes
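One way to run that test, sketched under the assumption that the same image set has already been embedded with two different CLIP models: fit the affine map by least squares and check how well it transfers to held-out images.
```python
import torch

def fit_affine_map(src, dst):
    # src: (N, d1) embeddings from CLIP model A; dst: (N, d2) embeddings of
    # the same images from CLIP model B. Solves dst ≈ src @ W + b.
    X = torch.cat([src, torch.ones(src.shape[0], 1)], dim=1)   # bias column
    sol = torch.linalg.lstsq(X, dst).solution                  # (d1 + 1, d2)
    return sol[:-1], sol[-1]                                   # W, b

def transfer_cosine(src, dst, W, b):
    # Mean cosine similarity between the mapped embeddings and the targets,
    # evaluated on a held-out set to test the "one affine layer away" idea.
    pred = src @ W + b
    return torch.nn.functional.cosine_similarity(pred, dst, dim=-1).mean()
```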
janus#0150: Great article, thanks. Very clearly explained. Imo their conclusions were the default answer to the 'mysteries', but maybe thats because I've read a lot of double descent bias-variance tradeoff papers. I don't think their argument against "Can “ensemble reduces the variance” be the reason for ensemble’s performance boost?" works, in particular because their final conclusion shows that this _is_ the case. The key is just that the variance in what models learn is _helpful_ for some samples, it's not just total noise.
EricHallahan#1051: (This is a reference to something that was discussed in January) https://discord.com/channels/729741769192767510/747850033994662000/804509512396439562
EricHallahan#1051: (Sidenote: I don't know why this conversation sticks with me. Maybe it was because it was the day before I became active here.)
CRG#8707: "Every nn is an ~~linear~~ affine layer away to every other equivalent nn"
alstroemeria313#1694: ok so troll proposal
alstroemeria313#1694: can you take say, a convnet or an LM, something that has a bunch of layers whose activations are all the same shape
alstroemeria313#1694: and stitch multiple copies of part of it together
Louis#0144: Sure
Louis#0144: Why not
alstroemeria313#1694: And this actually works.
Louis#0144: You’d probably get mode collapse
alstroemeria313#1694: Once you fine-tune the linear layer in between.
alstroemeria313#1694: (What exact dataset/loss do they use in their stitching experiments... the same one as both models were trained on right?)
alstroemeria313#1694: (I haven't read that far)
alstroemeria313#1694: Like if you can do this stitching with low penalty how does that even work, doesn't that imply later layers are just an affine transform away from earlier layers?
alstroemeria313#1694: Which... how
CRG#8707: They stitch the lower part of one model with the upper part of another model
alstroemeria313#1694: How do they decide where the line is
alstroemeria313#1694: Like, if they are different archs
CRG#8707: Hm, I think they just tested different random seeds and SSL vs supervised. :/ https://cdn.discordapp.com/attachments/729741769738158194/866341007120269342/7caeb1a9a38722b981ef4fcf7fda8520.png
alstroemeria313#1694: oh :/
CRG#8707: Also wider networks https://cdn.discordapp.com/attachments/729741769738158194/866341624287854612/fb07c6f485cf9f3adfced19938004fb1.png
alstroemeria313#1694: ah.
alstroemeria313#1694: So they had the same depth.
CRG#8707: Yeah, looks like
alstroemeria313#1694: So they did the first n from one and the rest from the other.
alstroemeria313#1694: Resulting in something w/ the same depth plus the new linear layer.
alstroemeria313#1694: So the CLIP embeddings may not be trivially transformable into each other.
CRG#8707: It's an open question.
alstroemeria313#1694: mm... may still try projecting to 2 transformer tokens instead of one
alstroemeria313#1694: ...do I have to use a full context window always? srsly?
alstroemeria313#1694: bc no pad token
alstroemeria313#1694: bc I don't really want to go through the trouble of stuffing the context window and then discarding the logits for my encoded images
alstroemeria313#1694: can i just pad with spaces or smth
janus#0150: Stupid question, how much data was GPT-3 trained on? 400GB?
kindiana#1016: 300b tokens
janus#0150: whats that in ~gbs?
janus#0150: The pile is 1.2TB, right? Or is it 1.2T tokens?
EricHallahan#1051: 825 GiB filtered.
bmk#1476: gpt3 was trained on 300B tokens
bmk#1476: there's a figure in the Kaplan paper for the ratio of tokens to byte or something
janus#0150: ah its like 3 I think
bmk#1476: I can't remember exactly but it's somewhere between 3 and 4
bmk#1476: so yeah 3 is a reasonable estimate
janus#0150: 👍
janus#0150: I thought the pile was much bigger than GPT-3's training set? But GPT-3 was about 1 tb?
bmk#1476: nah pile was supposed to be about the same size
bmk#1476: also there's a catch: amount of data trained on != unique data
bmk#1476: I'm too lazy to work out what the unique data amount for gpt3 is but for pile the 825gb is *unique data*
alstroemeria313#1694: ooh it's going
alstroemeria313#1694: ...suddenly paranoid that i did autoregressive loss wrong
alstroemeria313#1694: bc 14.8519 is kinda high
alstroemeria313#1694: or something wrong
alstroemeria313#1694: ...i turned the lr down and now loss is barely going down at all? ok
alstroemeria313#1694: yeah this isn't really working
alstroemeria313#1694: do you actually have to stuff the context window?
alstroemeria313#1694: *sigh*
Sid#2121: depending on the model you can generally fine tune it to a diff seq len fairly easily
alstroemeria313#1694: i'm not fine-tuning the model though
Sid#2121: for which i think (?) you'd have to tune the whole model weights
Sid#2121: i know - i'm saying if you want to not stuff the context, you should
alstroemeria313#1694: ah
alstroemeria313#1694: so i will just have to keep track of what positions my soft image embeddings are in
alstroemeria313#1694: and apply a mask so i don't include them in the loss
Sid#2121: what AR model are you using? gpt-j?
alstroemeria313#1694: neo 1.3b
Sid#2121: yeah you can just have a fixed max length then mask the loss ig
alstroemeria313#1694: hm actually
alstroemeria313#1694: i guess i can mask the loss for *all of the pad tokens*
alstroemeria313#1694: instead
Sid#2121: hm, i'm not sure that's sensible, since then your loss will vary depending on the number of pad tokens
alstroemeria313#1694: i can index the losses tensor and take the mean?
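That's the usual trick; a sketch, assuming `logits`/`targets` are already shifted for next-token prediction and `mask` is 1 on real text tokens and 0 on pad/image positions:
```python
import torch.nn.functional as F

def masked_ar_loss(logits, targets, mask):
    # logits: (B, T, V), targets: (B, T), mask: (B, T) float
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.shape[-1]),
        targets.reshape(-1),
        reduction="none",
    ).reshape(targets.shape)
    # Average only over the unmasked positions, so the loss doesn't depend
    # on how many pad tokens each sequence happens to have.
    return (per_token * mask).sum() / mask.sum()
```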
alstroemeria313#1694: ...I think something is still wrong though.
alstroemeria313#1694: Loss is too high.
alstroemeria313#1694: Yeah the image tokens mess everything up.
alstroemeria313#1694: wow loss is going down better
alstroemeria313#1694: if i fine-tune the entire CLIP image encoder rather than just the linear layer I put after the end
Sid#2121: they finetune the whole image encoder in the paper CRG is talking about
Sid#2121: i've also had some success with that technique
alstroemeria313#1694: ...it keeps diverging :/
alstroemeria313#1694: i converted it to float32 first too
alstroemeria313#1694: CLIP LR was 5e-4? ok
Teemochu#8740: does finetuning clip fit on a 3090? Guessing yes because image models are smol.
alstroemeria313#1694: you have to microbatch if you're using a contrastive loss
alstroemeria313#1694: i am not
alstroemeria313#1694: which will make the image encoder stop lining up with the text encoder and render the text encoder useless
alstroemeria313#1694: but yeah i am using about 16GB of GPU RAM rn
alstroemeria313#1694: I had to set my lr to 1e-6
alstroemeria313#1694: ...How low should my AR loss be able to go btw
Cade Gordon#3029: Ur using SGD right?
alstroemeria313#1694: Adam
Cade Gordon#3029: Ahhh try SGD for ft it typically performs better (words from CMU multimodal lectures)
alstroemeria313#1694: oh hm
Cade Gordon#3029: The reasoning was momentum I think
alstroemeria313#1694: ...Adam has momentum too?
Cade Gordon#3029: Like momentum is bad sorry should have been specific
alstroemeria313#1694: Ohh
EricHallahan#1051: I would just set the Adam moments (beta_1 and beta_2) to zero.
EricHallahan#1051: ¯\_(ツ)_/¯
Cade Gordon#3029: Somewhere near the end of this lecture https://m.youtube.com/watch?v=E_3gxQWaCoQ
Cade Gordon#3029: The momentum will destroy the effects of pretraining according to my notes
Louis#0144: Oh wow
Louis#0144: An entire set of lectures on multimodal learning
alstroemeria313#1694: i think it may be bad to set beta_2 to 0
Cade Gordon#3029: Yeah it’s reallyyyyy good to introduce problem scope for the topic
EricHallahan#1051: ¯\_(ツ)_/¯
EricHallahan#1051: It works for me in my latest StyleGAN+CLIP notebook.
alstroemeria313#1694: doesn't adam reduce to something like params -= lr * sign(grad) or smth
alstroemeria313#1694: if they are both 0
EricHallahan#1051: I am totally naive when it comes to how Adam actually works under the hood lol
alstroemeria313#1694: ahh
Cade Gordon#3029: Is it like if AdaGrad and RMSProp had a child or am I making that up?
alstroemeria313#1694: it's basically rmsprop+momentum+initialization bias corrected EMA
alstroemeria313#1694: and rmsprop is adagrad with an EMA instead of a sum
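For reference, a sketch of why zeroing both betas gives a roughly sign-like update (simplified Adam step; with beta1 = beta2 = 0 the EMAs and bias correction drop out):
```python
def adam_step(p, g, m, v, t, lr, beta1, beta2, eps=1e-8):
    # Standard Adam moment updates and bias-corrected step.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return p - lr * m_hat / (v_hat ** 0.5 + eps), m, v

# With beta1 = beta2 = 0: m_hat = g and v_hat = g**2, so the update becomes
# p -= lr * g / (|g| + eps), i.e. approximately lr * sign(g).
```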
Cade Gordon#3029: Adding this convo to my optimization notes
AI_WAIFU#2844: just read the paper it's really not complicated
alstroemeria313#1694: oh no it diverged again
alstroemeria313#1694: after 29k steps of fine-tuning
alstroemeria313#1694: loss jumped from 4.57 to 10.35
AI_WAIFU#2844: Anyone know how to to calculate log(exp(x)-exp(y)) in a numerically stable manner?
bmk#1476: isnt that basically logsumexp?
bmk#1476: just do logsumexp(x, 1/y)
bmk#1476: i dont know how numerically stable this is tho
alstroemeria313#1694: ok i actually have a fine-tuned CLIP ViT snapshot with val loss 4.36
AI_WAIFU#2844: that seems wrong to me
alstroemeria313#1694: It finished the first epoch on MS COCO and spat out a checkpoint
bmk#1476: y?
AI_WAIFU#2844: write it down
bmk#1476: = log(exp(x) + exp (1/y))
bmk#1476: = log(exp(x) - exp(y))
bmk#1476: wait im an idiot
alstroemeria313#1694: IDK if this loss is good or bad
bmk#1476: nvm
Louis#0144: Lol
bmk#1476: i stand by what i said earlier about it basically being logsumexp tho
alstroemeria313#1694: Because I don't know the loss on the validation set text only
AI_WAIFU#2844: do you have some kind of test battery you can run your model through to see if it's working well?
Louis#0144: @AI_WAIFU are u having issues in fp32
alstroemeria313#1694: i... will
alstroemeria313#1694: in a bit
bmk#1476: log(exp(x)(1 - exp(y - x)))
alstroemeria313#1694: but it is not written yet
bmk#1476: x + log(1 - exp(y - x))
bmk#1476: that seems numerically stable right
AI_WAIFU#2844: something something test driven development
alstroemeria313#1694: I should be able to use the model to encode an image to two Neo-1.3B tokens and then sample text tokens.
bmk#1476: x + log1p(-exp(y - x))
AI_WAIFU#2844: Yeah, and it's only gonna get worse
bmk#1476: do you think this will work
alstroemeria313#1694: oops it diverged
alstroemeria313#1694: Oh well I still have the good checkpoint
AI_WAIFU#2844: I need to double check the math, but usually when you subtract two quantities, especially when they're similar, you get garbage
bmk#1476: but the garbage is isolated inside the second term, and theres no getting around subtracting
bmk#1476: dividing is usually even worse right?
AI_WAIFU#2844: Depends
AI_WAIFU#2844: That might be right. I'll give it a shot
bmk#1476: i guess try both x + log1p(-exp(y - x)) and x + log1p(-exp(y)/exp(x))
alstroemeria313#1694: Guess I'll write that thing now.
kindiana#1016: exp(y - x) should be much better than -exp(y)/exp(x) 🤔
AI_WAIFU#2844: yeah I think this is right, I'll come back with results
alstroemeria313#1694: OK I'm going out w/ roommate and will write the thing when I get back
AI_WAIFU#2844: Yeah that did it, I can now run the diffusion process for 1000 steps with no noticeable distortion of the distribution.
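For the record, the stable form that ended up working, as a small helper (assumes x > y elementwise, otherwise the log is undefined):
```python
import jax.numpy as jnp

def log_sub_exp(x, y):
    # log(exp(x) - exp(y)) = x + log(1 - exp(y - x)); factoring out exp(x)
    # avoids overflow, and log1p keeps precision when exp(y - x) is tiny.
    return x + jnp.log1p(-jnp.exp(y - x))
```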
alstroemeria313#1694: Yeah my experiment didn't work
alstroemeria313#1694: The generated input embeds for two different images are nearly the same and the sampled outputs are the same
alstroemeria313#1694: Gonna have to train the image encoder from scratch probably.
dmvaldman#4711: i'm trying to understand what's happening in the colab for image captioning. do i have this right?
- you have a set of words each corresponding to a gpt2 token
- given an image, using CLIP you find the similarity of this image to each of those words
- you create a new token, e.g. "<img1>", in the gpt2 tokenizer whose embedding is a linear combination of those tokens' embeddings, weighted by the CLIP similarities above. this provides a "definition" for an "image" gpt2 understands
- you ask gpt2 to generate text for the prompt "A description for <img1>" to generate a caption for the image
alstroemeria313#1694: i think so. except it is the top 10 tokens
alstroemeria313#1694: i tried changing this to top 50 and the results got worse
dmvaldman#4711: right a linear combination of the top 10 tokens
dmvaldman#4711: i feel like this could work very well if this `filtered_word` set of single-word tokens was much larger. right now i'm only getting good captioning results when the image can be described well with a word or two in the set.
alstroemeria313#1694: maybe
alstroemeria313#1694: mb someone should just train an encoder from scratch idk
alstroemeria313#1694: to do better
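For anyone following along, a rough sketch of the mechanism being described (the word list, file path, and softmax weighting are illustrative assumptions rather than the notebook's exact code, and it assumes each word encodes to a single GPT-2 token):
```python
import torch
import clip
from PIL import Image
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
tok = GPT2Tokenizer.from_pretrained("gpt2")

words = ["table", "chair", "flower", "dog", "goose"]          # stand-in filtered_word list
text_feats = clip_model.encode_text(clip.tokenize(words).to(device)).float()
text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)   # placeholder path
img_feat = clip_model.encode_image(image).float()
img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

sims = (img_feat @ text_feats.T).squeeze(0)                   # similarity to each word
top = sims.topk(min(10, len(words)))
weights = torch.softmax(top.values, dim=0)                    # convex weights over top-k

wte = gpt2.transformer.wte.weight                             # GPT-2 input embeddings
ids = [tok.encode(" " + words[i])[0] for i in top.indices]    # leading space -> one token
img_embed = (weights.unsqueeze(1) * wte[ids]).sum(0)          # embedding for "<img1>"
```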
marksaroufim#6706: Hi mods! I was wondering if there were any specific pieces of text that really motivated the creation of EleutherAI? So sort of exploring what your vision is like for research labs in the medium to long term. I just find this place so peculiar given that you built so much stuff rivaling the best of the best research labs yet can manage to do so in a completely decentralized way while seemingly having day jobs and minimal funding. It would have been a dream of mine to have a community like this when I was growing up.
If no such texts exist, I'd be happy to write them myself. I try to write fun articles here https://marksaroufim.substack.com/ and would love to cover your origin story and vision in more detail. I wasn't too satisfied with the coverage I saw of you in the press since it mostly just discussed that you're releasing large models without diving into a bit more detail as to how you're creating an alternative to prestigious research institutions. So Connor messages the TPU podcast, Leo replies, and then we get an OpenAI competitor?
cfoster0#4356: Hey Mark! Glad you were able to find your way into the community. It really is a special kind of place and a peculiar moment in time, I feel.
Can't speak on motivating pieces of text myself, but some of the original folks from the earliest days (specifically Connor, Leo, and Sid) may. Personally, I think there's definitely a cool article to be written about Eleuther's niche and vision
marksaroufim#6706: That's great to hear, honestly anything I can read from early days to chats about where you see Eleuther going would be super helpful - I can use that as a starting point to ask more interesting questions
StellaAthena#3530: @marksaroufim We wrote a one-year retrospective that might be a good place to start: https://blog.eleuther.ai/year-one/
marksaroufim#6706: I loved that piece
marksaroufim#6706: I was hoping to expand on this https://cdn.discordapp.com/attachments/729741769738158194/866557085033693194/Screenshot_20210701-145500_Chrome.png
StellaAthena#3530: > That's great to hear, honestly anything I can read from early days to chats about where you see Eleuther going would be super helpful - I can use that as a starting point to ask more interesting questions
I mean, you can always scroll up really *really* far
marksaroufim#6706: I may do that unironically
kindiana#1016: (you can also search and sort by oldest to save you some scrolling)
cfoster0#4356: @marksaroufim Lemme save you some scrolling
Deleted User#0000: Hey anyone know with the image descriptor how to chuck more images on?
Deleted User#0000: tryna do 5 with three answered questions
cfoster0#4356: Mm you might have better luck with that question in the #art channel
Deleted User#0000: ah yeah saw people talking above hehe will do
Deleted User#0000: eh figured it out all good heh
Deleted User#0000: yeah that's about right. Expanding the filtered_word set would make the model more expressive. The list there isn't comprehensive, I just found some lists of words online and filtered them. The vocab expansion idea described in the notebook should also work (a linear mapping should suffice). That being said, directly connecting a vision encoder like the original paper really should work better, but also takes more effort 🙂
Deleted User#0000: time permitting I might also try implementing it this week
paws#3311: always wondered if gpt-neo is a reference to the matrix :berk:
Daj#7482: Hey there! If you have some concrete questions or just want to hear me rattle off stories or whatever, hit me up any time
EricHallahan#1051: Thank you! It was a massive time sink when paired with the website overhaul, but it was definitely worth it in the end.
EricHallahan#1051: Ironically, this is the one message that we took creative liberties with. All the other messages look exactly as they exist in this server, but this was originally text. We realized that the proper context was required, and therefore replaced the text with an "artist's rendition" to better convey the context and sentiment of the conversation.
triggerhappygandi#0001: big fan lol
triggerhappygandi#0001: didn't take you for someone who would use an anime pfp btw :berk:
cognomen#6297: https://discord.com/channels/729741769192767510/729741769738158194/822289216120946739
cognomen#6297: might be of interest for your piece
Deleted User#0000: is there any way to turn this text into voice https://cdn.discordapp.com/attachments/729741769738158194/866681927037878282/unknown.png
EricHallahan#1051: Use TTS?
triggerhappygandi#0001: Use APIs from GCP/AWS.
triggerhappygandi#0001: If you don't want to train a model
EricHallahan#1051: It doesn't even need to be fancy.
Daj#7482: Note: This is more of a shitpost
Daj#7482: We do not aim to build aligned AGI as a primary goal, that's way too ambitious
Daj#7482: (but if it's possible, of course that's the final goal of _checks notes_ the entire human species)
Daj#7482: We do hope to _aid_ in building aligned AGI
Daj#7482: But I don't expect us to be the people that do it
Daj#7482: ~~though you also never know, I guess...~~
alstroemeria313#1694: so Decision Transformer is basically Evidential Decision Theory?
Well
EDT takes actions such that p(desired_outcome|actions) is maximized
DT takes actions such that p(actions|desired_outcome) is... something
It's a sampled policy, not an explicit maximization, as well as being the wrong way around
Daj#7482: I think @AI_WAIFU was thinking about this the other day, don't remember what the result was
alstroemeria313#1694: ...Can we get DT to do p(desired_outcome|actions) with Bayes' theorem somehow?
alstroemeria313#1694: Like if we explicitly got the unconditional probability of the actions somehow?
Daj#7482: So this would be kind of like trying to infer preferences from observing the agent's choices?
Daj#7482: I think so?
Daj#7482: Interesting...
alstroemeria313#1694: well with DT you condition on desired outcome and sample actions, so I think we'd have to approximate the unconditional probability of the actions by taking that policy and sampling a bunch of outcomes according to the actual distribution in the train set, and computing the conditional probabilities of that policy?
alstroemeria313#1694: Or could just train an unconditional model alongside
alstroemeria313#1694: We know the distribution of outcomes over the train set and if we had an unconditional model of action sequences we could get the unconditional probability of a policy
Daj#7482: If you could integrate over the entire policy space, then yeah I guess this should work
alstroemeria313#1694: And then... something
alstroemeria313#1694: I am not sure how to put it together yet ^^;;
Daj#7482: It seems like it would be straightforward but intractable to calculate. Reminds me of Stuart Armstrong's work on value learning and why inferring preferences from actions is hard, except it's easier in this case since you can interrogate the unconditional distribution
Daj#7482: or something
Daj#7482: :thonk:
alstroemeria313#1694: Ah
alstroemeria313#1694: Well we could actually calculate the p(desired_outcome|actions) used in EDT with that method
alstroemeria313#1694: But then we have to maximize it.
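A sketch of what that would look like in log space, with everything hypothetical: `cond_logp` is the DT's log-likelihood of the action sequence given an outcome, `uncond_logp` is an unconditional action model trained alongside, and `log_p_outcomes` is the empirical distribution of outcomes in the training set.
```python
import torch

def edt_scores(actions, outcomes, cond_logp, uncond_logp, log_p_outcomes):
    # Bayes' theorem: log p(o | a) = log p(a | o) + log p(o) - log p(a).
    # The log p(a) term is the same for every candidate outcome, so it only
    # matters if you want calibrated probabilities rather than an argmax.
    log_p_a_given_o = torch.stack([cond_logp(actions, o) for o in outcomes])
    return log_p_a_given_o + log_p_outcomes - uncond_logp(actions)
```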
Fessus#9563: You might be able to get away with just adding an extra head to the output. Given goal + actions + world states so far predict the probability that goal will be reached.
Fessus#9563: It's lazy but its easy to implement
alstroemeria313#1694: Also in the setting I'm using DT in, if I have an entire sequence of actions I can just compute the actual outcome instead
alstroemeria313#1694: So this is what I actually do rn
alstroemeria313#1694: So this is only useful to me if it helps with partial sequences where no reward has accrued yet
alstroemeria313#1694: hm how would i train it
alstroemeria313#1694: also it's never actually going to hit the goal exactly bc my goal is continuous
alstroemeria313#1694: i prob want it to predict actual reward of a partial sequence of actions instead of p of reaching goal
Fessus#9563: That's going to be a problem if you want to use a DT framework
alstroemeria313#1694: ...the 'goal' is just the DT desired reward? which is continuous?
Fessus#9563: The implicit goal of DTs is to generate an internally consistent sequence. They do this by specifying the reward which a series of actions achieved and then showing the series of actions which achieved them. If you show a goal and then the sequence which follows never achieves that goal, your sequence is internally inconsistent.
alstroemeria313#1694: well, it doesn't have to be, but it can be
alstroemeria313#1694: ...i never actually show the DT the last action even?
alstroemeria313#1694: during sampling
alstroemeria313#1694: but i do actually only train it on consistent sequences?
Fessus#9563: I'm confused about what your setup looks like
𓅬 gabriel_syme 𓅬#3220: I think the paper mentions this yes
alstroemeria313#1694: oh huh
Fessus#9563: The goal of DTs is to generate a sequence which ends up somewhere specific. I'm not sure how you do that if you never show it the end
alstroemeria313#1694: the output for each input action is the logits for the next action
alstroemeria313#1694: there are a fixed number of actions
alstroemeria313#1694: the last input is the sampled second to last action and the last output is the logits for the last action
alstroemeria313#1694: once i have sampled the last action i am done with the DT and go evaluate the actual outcome
alstroemeria313#1694: where actually?
Fessus#9563: Does the model ever get to see the result of the last action in terms of returns to go?
𓅬 gabriel_syme 𓅬#3220: Was sleeping and woke up
𓅬 gabriel_syme 𓅬#3220: I think they discuss learning a policy rather than trajectory but I'm hazy
alstroemeria313#1694: i have sparse reward (at the end only) and so during training i just give the actual reward for a sequence before the sequence
Fessus#9563: Ok, that all sounds fine
𓅬 gabriel_syme 𓅬#3220: That is interesting I have been thinking about that for design where reward is pretty much at the end
𓅬 gabriel_syme 𓅬#3220: and had a similar (I think) idea, just give any intermediate trajectory the same reward-to-go
alstroemeria313#1694: yeah, it simplifies so you only need one reward token
𓅬 gabriel_syme 𓅬#3220: yeah, have no idea if it works 😄 sounds good that it might. need to bug you eventually about the colab lol
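A hedged sketch of that single-reward-token setup at sampling time (the model interface is made up; it's assumed to return next-action logits at every position):
```python
import torch

@torch.no_grad()
def sample_actions(model, reward_token, n_actions, device="cpu"):
    # Prompt with the desired return, then sample actions autoregressively.
    # The final action is sampled from the last logits and never fed back in,
    # matching the "never show the DT the last action" setup described above.
    seq = [reward_token]
    for _ in range(n_actions):
        logits = model(torch.tensor([seq], device=device))[:, -1]
        seq.append(torch.distributions.Categorical(logits=logits).sample().item())
    return seq[1:]   # drop the reward token, keep the sampled actions
```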
alstroemeria313#1694: so how would i train a predicted end reward head...
alstroemeria313#1694: hm
𓅬 gabriel_syme 𓅬#3220: opening my notes of the paper to make sure i'm not full of it
Fessus#9563: You'd need to include negative samples in your training
alstroemeria313#1694: hm
alstroemeria313#1694: negative how
Fessus#9563: Failed sequences
𓅬 gabriel_syme 𓅬#3220: different decisions?
alstroemeria313#1694: the actual sequences have a decent distribution of rewards?
Fessus#9563: Sequences which failed to reach their goal
Fessus#9563: You'd actually have to do some online training to do this
alstroemeria313#1694: (yeah I kind of wandered off from the "get it to actually do EDT" thing)
Fessus#9563: The issue is that probability of success given past actions is dependent on the skill of the model. So as the model improves it changes
Fessus#9563: Makes everything a pain in the ass
𓅬 gabriel_syme 𓅬#3220: I wonder if my go-explore idea is nice for this
𓅬 gabriel_syme 𓅬#3220: it definitely has the ability to find stuff that fail
𓅬 gabriel_syme 𓅬#3220: although I guess you could mutate/break a sequence in simlper ways to create a negative?
Fessus#9563: wouldn't represent sequences the model might actually be likely to generate so they wouldn't be very useful
Fessus#9563: It's the subtly wrong stuff that's the most useful
𓅬 gabriel_syme 𓅬#3220: I love the dialectics on that one, I'll think about it
𓅬 gabriel_syme 𓅬#3220: I wonder if QD can help after all. It should be able to give you multiple trajectories that achieve the same reward, but in different ways. Wouldn't that be beneficial in training such an extra head to predict reward given actions and states (and maybe reward)?
alstroemeria313#1694: so i can't actually train an extra head to predict reward given actions bc the true reward is always given at the start of the sequence and it can just spit it out verbatim
alstroemeria313#1694: but mb i could train a second model to do it
𓅬 gabriel_syme 𓅬#3220: yeah smth like that
alstroemeria313#1694: and then i could what...
𓅬 gabriel_syme 𓅬#3220: but how would that be different than...normal RL?
alstroemeria313#1694: evaluate partial sequences and discard predicted-bad ones early?
alstroemeria313#1694: some sort of tree search?
𓅬 gabriel_syme 𓅬#3220: I was just thinking of some sort of contrastive thing
alstroemeria313#1694: oh?
𓅬 gabriel_syme 𓅬#3220: like if you have a diverse map of trajectories that can be different with similar rewards maybe you can learn smth
𓅬 gabriel_syme 𓅬#3220: or even the reverse I guess
alstroemeria313#1694: this is the case on my actual task
𓅬 gabriel_syme 𓅬#3220: like use the go-explore map to sort of learn some representation vs memorize it (as the paper suggests)
𓅬 gabriel_syme 𓅬#3220: although memorizing it is nice too lol
𓅬 gabriel_syme 𓅬#3220: it would be cool if you could sample different things that have similar rewards and almost identical things that have different..I guess? 🙂
alstroemeria313#1694: If my DT were good enough I could just do this by prompting with a reward
𓅬 gabriel_syme 𓅬#3220: yeah I think that's the idea in the paper
alstroemeria313#1694: ok so
𓅬 gabriel_syme 𓅬#3220: that is why they suggest to couple it with go-explore
𓅬 gabriel_syme 𓅬#3220: and tbh it might work nicely in a latent space (I'm guessing that's where you work in right?)
𓅬 gabriel_syme 𓅬#3220: from the paper
> Decision Transformer can serve as a powerful “memorization engine” and in conjunction with powerful exploration algorithms like Go-Explore [28], has the potential to simultaneously model and generative a diverse set of behaviors
𓅬 gabriel_syme 𓅬#3220: typo there in the paper 😄 generate*
alstroemeria313#1694: ok i guess the problem with predicting the end reward is that the sampled partial sequences of actions may be off-distribution
𓅬 gabriel_syme 𓅬#3220: I still think this can be offline
𓅬 gabriel_syme 𓅬#3220: if you have a way to create samples of that distribution in a diverse way?
𓅬 gabriel_syme 𓅬#3220: btw, I have one silly question
alstroemeria313#1694: oh?
𓅬 gabriel_syme 𓅬#3220: would it be bad to use a pretrained model for DT?
alstroemeria313#1694: wdym pretrained?
𓅬 gabriel_syme 𓅬#3220: like a language model
alstroemeria313#1694: you mean fine-tune it?
𓅬 gabriel_syme 𓅬#3220: if your task is represented as language
𓅬 gabriel_syme 𓅬#3220: yeah
alstroemeria313#1694: it would probably work, i would guess
𓅬 gabriel_syme 𓅬#3220: I hope 🙂 I'll try both but it feels such a waste not to try to transfer
𓅬 gabriel_syme 𓅬#3220: or maybe like you said, it can be a different model feeding to a DT
drscotthawley#0787: Possibly naive question but will ask anyway: Is anyone in the EleutherAI community working on hooking up CLIP with Jukebox?
Louis#0144: Kinda what @cfoster0 does in a weird way
Louis#0144: But clip for text <-> speech
Louis#0144: Not music
Louis#0144: Atleast not yet
oreo#2740: Is anyone working on (or knows anyone working on) training LMs for biomedical/clinical text?
EricHallahan#1051: I don't know of anyone who is, but the Pile contains three subsets that are relevant: PubMed Central, PubMed Abstracts, and NIH ExPORTER.
Fessus#9563: GPT-J's performance on medical knowledge is actually very impressive, at least in the tiny amount of testing I've done. Obviously some of the stuff it spits out is wrong but it's shockingly good at summarizing some important points. https://cdn.discordapp.com/attachments/729741769738158194/866777881967722506/Capture.PNG
EricHallahan#1051: Of course, always take a grain of salt for anything that could be factual, and never rely on a language model to return factually accurate results.
janus#0150: 🤷♂️ I've been taking medical advice from it and I'm fine. I'm about to start the 100% barbeque sauce diet it recommended.
bmk#1476: sounds like a delicious diet
StellaAthena#3530: @matts Please do not advertise in this discord server.
matts#9903: sorry about that! won't advertise again
One#5919: yo this is hella fun, a friend recommended it
chirp#4545: :thonk: https://arxiv.org/pdf/2107.08142.pdf (from AK on twitter)
Louis#0144: @bmk you’ll like this
bmk#1476: will take a look after interviews
bmk#1476: busy studying rn
chirp#4545: they even mention MuZero lol
chirp#4545: not sure if genius or BS
StellaAthena#3530: “Because Elon Musk is stuck in an un-ending loop of that AI conference from the 60s where they decided CV would be solved by a couple PhD candidates in a couple years”
kurumuz#5695: not end to end enough :ultraberk: https://cdn.discordapp.com/attachments/729741769738158194/866910538536321054/unknown.png
kurumuz#5695: where is this from
kurumuz#5695: George Hotz wanted to implement muzero for solving self driving cars as well
kurumuz#5695: well, "something looking like muzero"
kurumuz#5695: kinda weird people think elon doesn't think about end to end, that is where tesla is going as well
kurumuz#5695: elon said this on twitter many years ago
ari#9020: Hey, Dartmouth was in the fifties, not sixties
Bhadresh#6096: Hello Guys,
I am Bhadresh,
I am working in NLP research and development.
I love to explore new things in NLP. I have experience in fine-tuning NLP models. Recently I also did fine-tuning on large wikisplit data with T5 on Jax/Flax using a TPU VM.
I will be happy to contribute and learn more.
Please let me know if there is anything I can help with.