mr_seeker#1337: Add the C4 dataset?
Louis#0144: @bmk pls send me a hard drive full of pictures of geese
bmk#1476: ok Maxtor 10GB from 20 years ago on the way
Louis#0144: Like 10TB of geese
bmk#1476: hope you have IDE ports on your mobo
Louis#0144: oh
bmk#1476: and 4-pin power
Louis#0144: I do on my nas
mr_seeker#1337: I know someone who has an old harddrive with 10mb from the 80's...
bmk#1476: also I put too many jumpers on the jumper block and I don't have the user manual to tell which ones I should remove
bmk#1476: I do actually have a 10GB Maxtor and I'm pretty sure it still works too
bmk#1476: or maybe it was 20GB
bmk#1476: something like that
Louis#0144: That’s a lot of geese
Louis#0144: @𓅬 gabriel_syme 𓅬 we should train a GAN exclusively on pictures of birds
tylerlastovich#3263: https://twitter.com/bird_not_exist
EricHallahan#1051: Why would you do that? I expect that there are already hundreds of them out there.
Louis#0144: Finetune it on goose memes
Louis#0144: That’s my point
Louis#0144: Infinite Eleuther meme
tylerlastovich#3263: Go crazy https://github.com/steggie3/goose-dataset
bmk#1476: this dataset is glorious
Sahl#0630: This reminds me
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/854590458025213972/goose-mugshot-0045.png
𓅬 gabriel_syme 𓅬#3220: invite steggie3
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/854590492858384424/goose-mugshot-0030.png
𓅬 gabriel_syme 𓅬#3220: ok I can train smth on that
Sahl#0630: There’s this very cursed feeling when you’re watching a goose in the distance
𓅬 gabriel_syme 𓅬#3220: maybe one of the DDPM models?
Sahl#0630: and suddenly you don’t see the white on its face
Sahl#0630: because it’s looking right at you
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/854590650037174282/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/854590736745627658/unknown.png
bmk#1476: this is the greatest dataset ever known to man
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/854590853335220284/unknown.png
Kia#2550: Put them in Stylegan :ultragoose:
Kia#2550: This is amazing
bmk#1476: move aside imagenet
Sahl#0630: That dataset is disgusting
Sahl#0630: it’s just fowl
guac#4716: someone needs to fork this to the EAI github
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/854591125456945152/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/854591439162048564/unknown.png
tylerlastovich#3263: Only 7 stars too, so very unappreciated.
bmk#1476: we need a bot feature
bmk#1476: !goose to get a random goose image
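A minimal sketch of what that command could look like with discord.py (2021-era API); it assumes the steggie3/goose-dataset repo is cloned next to the script, and the folder name and token are placeholders:
```python
import random
from pathlib import Path

import discord
from discord.ext import commands

bot = commands.Bot(command_prefix="!")
# assumes https://github.com/steggie3/goose-dataset is cloned locally
GEESE = list(Path("goose-dataset/images").glob("*.png"))

@bot.command(name="goose")
async def goose(ctx):
    """Post a random goose mugshot."""
    await ctx.send(file=discord.File(random.choice(GEESE)))

bot.run("BOT_TOKEN")  # placeholder token
```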
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/854591724495699998/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/854591764999962654/unknown.png
tylerlastovich#3263: The name of the project is also very fitting... GANder https://steggie3.github.io/projects/gander.html
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/854591889243111444/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/854591929516949524/unknown.png
Kia#2550: This is amazing :O
bmk#1476: from that post: :ultragoose: https://cdn.discordapp.com/attachments/729741769738158194/854592175732031488/goose_200000.png
bmk#1476: we *need* to cite this at some point
Kia#2550: True :ultraberk:
tylerlastovich#3263: Someone needs to implement the *future work* section
> Use Conditional GAN to control goose face features such as the presence of eyebrows, the presence of a white bar on the forehead (a good identifier of the Branta canadensis maxima subspecies), the presence of white circles around eyes, open or closed eyes, open or closed mouths, etc.
cognomen#6297: eyebrows?
mega b#6696: Goose Cloner
mega b#6696: For the raid on ducks 😈
howardakp#9976: Hello, I'm new on this discord. Have anyone tried of to replicate the search engine from OpenAI gpt3 using gpt-neo or gpt2? Same question for the OpenAI's answering engine.
Napolean_Solo#2907: Can you serve your own models on hugging face?
Daj#7482: We don't work on downstream applications here. There's no reason you couldn't implement that yourself but we haven't done so
Daj#7482: We're not hugging face so no idea
Napolean_Solo#2907: You have no idea but definitely someone else might
Daj#7482: You've asked multiple off topic questions in the past, ask HF
Napolean_Solo#2907: Off topic?
Napolean_Solo#2907: Isn't this #general
Daj#7482: 🙄
Daj#7482: Whatever
Napolean_Solo#2907: It's the same, is it not?
Napolean_Solo#2907: I see you do have a channel for #off-topic
Napolean_Solo#2907: But what counts as off topic vs general?
Daj#7482: #general is general ML and Eleuther discussion
Daj#7482: #off-topic is shitposting, non-ML, whatever
Napolean_Solo#2907: So my topic really was ML related
Daj#7482: No, it's related to a product of one company that is not us
Daj#7482: tbh I don't even care that much
Daj#7482: but you'd get much better answers asking on a HF forum
howardakp#9976: where can I find any info that someone from the community tried to work on it?
Daj#7482: I unfortunately don't know of anybody that has tried to implement this, sorry
Napolean_Solo#2907: Alright, thanks for your help
howardakp#9976: I see, thanks 🙂
Napolean_Solo#2907: Do you guys share implementation paper?
Napolean_Solo#2907: For the GPT models you build
Napolean_Solo#2907: Would be highly appreciated if you could let me know where I can find them.
Daj#7482: We haven't written any papers about them no
Daj#7482: All the details are in the github repos and/or model cards
Daj#7482: Neo is very similar to GPT2 except utilizing banded local and global attention
Daj#7482: GPT-J is more different, it uses rotary positional embeddings and a novel parallel attn/ff architecture that doesn't yet have a name
Napolean_Solo#2907: So you don't disclose how you built the model?
Napolean_Solo#2907: You only open source it as pretrained?
Daj#7482: No, the details are right there in the code lol
Daj#7482: We just didn't write a paper because that would be more effort than it's worth since it's not particularly interesting/novel
𓅬 gabriel_syme 𓅬#3220: am I the only one that sometimes still reads "we are not hugging face" literally?
Napolean_Solo#2907: I see,
So how comparable is it to the Curie model?
Daj#7482: Read the post: https://arankomatsuzaki.wordpress.com/2021/06/04/gpt-j/
Napolean_Solo#2907: Ah that seems interesting! Thanks for sharing
Napolean_Solo#2907: How well did it perform in sentiment classification tasks?
Daj#7482: The post includes all the evals we personally did
Daj#7482: I don't think anyone tried any sentiment classification benchmarks, but I might be wrong
Daj#7482: As a rule of thumb, it performs about as good as GPT-3 curie on most tasks
Daj#7482: Slightly worse on prose, much better on code
Napolean_Solo#2907: Hmm OpenAI's is bad at code but good at prose
Daj#7482: Just test out the model yourself I would recommend
Daj#7482: Evaluating LMs is kinda a crapshoot
Napolean_Solo#2907: I do understand that your model was trained on a totally different dataset but how much of a difference does it make?
Napolean_Solo#2907: When compared to OpenAI's model
Daj#7482: It makes a _massive_ difference to code
Daj#7482: Other changes are strongly subjective
Daj#7482: So take whatever anyone says with a grain of salt
Napolean_Solo#2907: Did you guys upload the model on huggingface hub?
Daj#7482: Not yet, because HF has to finish implementing it into their library first
Napolean_Solo#2907: Ah I see, any idea when could that happen?
Daj#7482: Nope, really all depends on HF. A PR exists but apparently HF has some problems with it idk
Daj#7482: You can also try our model here: https://6b.eleuther.ai/
Napolean_Solo#2907: So now the only way to get the model is through the GitHub repo, am I right?
Daj#7482: Yep
Daj#7482: There are some colab notebooks you can try
Napolean_Solo#2907: Okay!
Napolean_Solo#2907: OpenAI recommends top P to be 1 but I see you guys by default have set it at 0.9. is that due to some difference of architecture?
Daj#7482: ¯\_(ツ)_/¯
Daj#7482: All sampling parameters are completely empirical
Daj#7482: There's no formal way to know what is or isn't good other than trying it out
Daj#7482: I have heard J reacts pretty different to parameters than GPT3 does though, especially repetition penalty (which is not implemented in the web app)
Napolean_Solo#2907: I see
Napolean_Solo#2907: What about temperature? Lower temperature makes the model more deterministic
Napolean_Solo#2907: It's the same here?
Daj#7482: Yep
Daj#7482: The parameters do the same thing, the model just has a different "personality"
mgostIH#0245: The general goal of a model should be to generate the sequence with the most confidence
mgostIH#0245: So there's also strategies like beam search to find better sentences overall
mgostIH#0245: p = 1 without search is the greedy strategy of picking the current best estimate
Napolean_Solo#2907: I see, you guys have made some great progress.
mgostIH#0245: Or more like temperature 0
Napolean_Solo#2907: Is there a limit on how many tokens can be provided in the prompt?
Daj#7482: I'm pretty sure this is false, no? I thought top_p meant it would sample from the top p proportion of probability mass
Daj#7482: so top_p = 1 is normal sampling
Daj#7482: 0.9 truncates the lowest 10% probability mass
mgostIH#0245: Ye I think I mistook it for temperature
mgostIH#0245: temperature 0 changes the distribution so it spikes to the highest prob
Daj#7482: The web app always produces 512 tokens iirc. The maximum size the model can handle is 2048 in total
Daj#7482: Yep, exactly. 0 temp is greedy sampling
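For reference, a minimal PyTorch sketch of what these two knobs do, matching the descriptions above (top_p=1 leaves sampling untouched, lower values truncate the low-probability tail, temperature → 0 approaches greedy/argmax); this is illustrative, not the web app's actual code:
```python
import torch

def sample(logits, temperature=1.0, top_p=0.9):
    logits = logits / max(temperature, 1e-8)      # temp -> 0 approaches argmax
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cum = torch.cumsum(sorted_probs, dim=-1)
    # drop tokens whose preceding cumulative mass already covers top_p
    sorted_probs[cum - sorted_probs > top_p] = 0.0
    sorted_probs /= sorted_probs.sum()
    return sorted_idx[torch.multinomial(sorted_probs, 1)]

next_token = sample(torch.randn(50257))  # with top_p=1.0 this is plain sampling
```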
Napolean_Solo#2907: So it's the same as OpenAI. They have a limit of 2048 tokens too.
Napolean_Solo#2907: Why do I feel your model performs way better than Curie?
Napolean_Solo#2907: I just interacted with it and somehow it feels the responses are much better than my interaction with OpenAI's Curie
Napolean_Solo#2907: Your model performed amazingly well in emotion classification
Napolean_Solo#2907: Any way to add stop sequences?
Napolean_Solo#2907: ```
Text: wow! This was amazing, I have never felt this great
Emotion: Delighted
###
Text: this is ridiculous. I hope he had some decency to present himself better.
Emotion: Disappointed
###
Text: what is wrong with you?? Why would you do that, you moron!
Emotion: Upset
###
Text: wait what? I don't understand. How is it doing that?
Emotion:
```
Daj#7482: Yea that's what I mean by evaluations being tricky, it's all very subjective, glad you like the model lol
Daj#7482: Not on the web app, no. The web app is super primitive
Sid#2121: Don't know if it's already been mentioned anywhere - but allennlp did the preprocessing for MC4 and released it! https://github.com/allenai/allennlp/discussions/5265
Sid#2121: thank you @Dirk Groeneveld 🥳
Sid#2121: ah, i see dirk posted already
Sid#2121: truly an insane amount of data
nz#9710: Wow, this is amazing!
nz#9710: I kinda want to download the italian part now lol
𓅬 gabriel_syme 𓅬#3220: hey I can use my TRC to train a greek model
𓅬 gabriel_syme 𓅬#3220: say, could I train a 2.7b model in greek with the TRC provided TPU VMs?
also, I'm guessing I would start from the pretrained one?
kurumuz#5695: you can yes
kurumuz#5695: though non latin might be hard
𓅬 gabriel_syme 𓅬#3220: hmm that makes sense
𓅬 gabriel_syme 𓅬#3220: wonder how much a month of TPU lets me train
𓅬 gabriel_syme 𓅬#3220: I doubt I'll be able to wire multiple together with my skills as well
nz#9710: yea won't the fact that greek is non latin be a serious issue? at the very least it requires a different tokenizer right?
StellaAthena#3530: Training a tokenizer is easy tho. That’s not a significant limitation.
nz#9710: btw was checking out https://arxiv.org/abs/2103.12028 and there's a stella spotted! 😄
thenightocean#6100: is this limitation of the api or the ui? If its the second, I can maybe fix that.
Daj#7482: I think Ben mentioned in #gpt-j that this is nontrivial to do on the JAX level. You could truncate at the app level, which would maybe make the output less cluttered but not save compute lol
alstroemeria313#1694: Hey, is there some way to auto-derive the right KL Div loss weight for a VAE?
alstroemeria313#1694: Like, I'm ramping it up over time, as advised
alstroemeria313#1694: But I need to stop when the reconstructed reals and the fakes look visually similar I think
alstroemeria313#1694: Also how big should my bottleneck be
alstroemeria313#1694: This is working way better than the LDMGAN though
alstroemeria313#1694: CIFAR-10 VAE samples, reconstructed reals on the left, fakes on the right https://cdn.discordapp.com/attachments/821173872111517696/854719287533240330/demo-85.png
alstroemeria313#1694: This is with a 96K parameter model...
alstroemeria313#1694: It's tiny!
StellaAthena#3530: That was fun 🙂 Happy to answer any Qs
alstroemeria313#1694: I think the LDMGAN encoder learns to sneak information through in the correlations between the elements of the latent
alstroemeria313#1694: Which... kinda stops the whole LDMGAN paradigm from working well I would think?
alstroemeria313#1694: LDM only enforces that each *element* of the latent is near mean 0 std 1
alstroemeria313#1694: So when I tried it on CIFAR-10 the reconstructed reals started looking way different from the fakes
alstroemeria313#1694: I think the discriminator is supposed to fix this and make the fakes look like the reals anyway?
alstroemeria313#1694: But maybe it doesn't sometimes?
alstroemeria313#1694: The VAE information bottleneck involves adding *actually* uncorrelated noise to the encoder outputs
alstroemeria313#1694: So it nicely prevents this
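A sketch of the training objective being discussed, assuming a Gaussian-latent VAE: reconstruction loss plus a KL term whose weight ramps up over training, with the reparameterization noise that provides the bottleneck (all module names illustrative):
```python
import torch

def vae_loss(x, encoder, decoder, step, warmup_steps=10000, kl_max=1.0):
    mu, logvar = encoder(x)                                # mean and log-variance
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization noise
    recon = decoder(z)
    recon_loss = torch.nn.functional.mse_loss(recon, x)
    # closed-form KL(q(z|x) || N(0, I))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    beta = kl_max * min(step / warmup_steps, 1.0)          # ramp the KL weight up
    return recon_loss + beta * kl
```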
StellaAthena#3530: Some thoughts on what you should (and should not!) use the mC4 dataset released by @Dirk Groeneveld et al for
https://twitter.com/BlancheMinerva/status/1405166703173160967?s=20
alstroemeria313#1694: From a 232K param VAE: https://cdn.discordapp.com/attachments/821173872111517696/854736599651844156/demo-88.png
alstroemeria313#1694: ...So how do you do conditional VAEs?
Spy#9778: https://cdn.discordapp.com/attachments/729741769738158194/854742797147963422/Screenshot_20210616-102224.jpg
Spy#9778: A whole 7 of my 1024 codes getting used
alstroemeria313#1694: oh no
Spy#9778: Seems to collapse really early in training too
Spy#9778: Under 1k steps
alstroemeria313#1694: that's really weird
Spy#9778: Wait I off-by-one'd, it's actually 8 codes 😎
Spy#9778: Yeah I'm not sure if the initialization isn't suited to the data distribution or what
alstroemeria313#1694: what does compvis init the codebook with
alstroemeria313#1694: it's an nn.Embedding, right
alstroemeria313#1694: so gaussian mean 0 std 1?
Spy#9778: Yeah, uniform on [-1/num_embeds, 1/num_embeds]
Spy#9778: I copied their init
alstroemeria313#1694: ...wait
alstroemeria313#1694: i thought it was gaussian
Spy#9778: Pretty sure it's not, I was copying all their inits yesterday
alstroemeria313#1694: oh, compvis reinits it
alstroemeria313#1694: i missed that line
Spy#9778: I'm a bit perplexed. Hopefully I just have a bug somewhere or this will be quite the hyperparameter slog.
alstroemeria313#1694: yeah...
Spy#9778: One interesting thing to note is that they feed the discriminator's Adam 0 gradients rather than just not calling it for the first however many iters
Spy#9778: That's kinda weird since the moment estimates will start from 0 but without the usual bias correction
Spy#9778: It's not the issue in my case though (the collapse is happening before the discriminator kicks in), it's just something I found interesting
chilli#5665: I seem to remember you need to initialize the vqvae codebook in a very specific way
alstroemeria313#1694: should work if they copied the vqgan init though...
nz#9710: I do have one 😄 ! I ask mainly because I would be interested in training a GPT-like model for italian, as the best one currently available (GePpeTto) is a GPT-2 small trained on 9 GBs of data (7 of which are low quality web-sourced). I read your twitter thread and I wonder if it also applies to italian, which has significantly fewer resources than english, chinese etc but is not a low resource language either. Anyway, from your audit I saw that mC4 is better than OSCAR both quality wise and quantity wise. Up to now I was thinking that maybe I could follow both GPT-3 and the pile by having a single epoch on the italian part of OSCAR and have multiple epochs on smaller, higher quality datasets (I found one of italian ebooks and a friend has contributed another of italian newspapers), but now that mC4 is available maybe it's better to pretrain on mC4 and then finetune on those higher quality datasets? Thank you btw, it was a really useful paper.
mgostIH#0245: > GePpeTto
Ayyy lmao
bmk#1476: https://twitter.com/LumpyTheCook/status/1404617491599413252?s=19
bmk#1476: why is there *so much fanfiction*
bmk#1476: who the fuck just writes 4.5 million words
Daj#7482: Mesaoptimizers
Daj#7482: Also #off-topic
bmk#1476: but off topic is busy talking about on topic stuff like 6B performance on APPS
Sphinx#2092: Your best bet is to just filter the data you have. For comparison, C4 is far cleaner than the English portion of mC4.
AI_WAIFU#2844: Just remember we wouldn't be here if a certain :yud: hadn't written hundred's of thousands of words of harry potter fanfiction.
StellaAthena#3530: How large of a model are you capable of training?
kindiana#1016: Tune 6b on it lol
alstroemeria313#1694: hm, how can I keep from getting NaN losses
alstroemeria313#1694: i mean other than using a tiny KL div loss weight even in the beginning
alstroemeria313#1694: rather than 0
StellaAthena#3530: The magic equation is $P(D) = 2\times 10^{-19}D^{2.7}$. If you're looking to train a 1B parameter model, you only need about $5.4$ GB of text. $P = 1,000,000,000$, then divide by $0.3$ tokens per byte. Probably worth double checking that because I suck at using equations.
You're probably fine using mC4 for Italian. We found that 92\% of the text was correct and only 1\% was not in Italian. That's quite good, all things considered. My point on twitter was not that this dataset isn't useful, but that it isn't going to make previously impossible things possible. mC4 contains enough data to train a $3\times 10^{14}$ parameter model. It's 100 times as much data as you need.
Can you really not put together $5.4$ GB of quality text?
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/854786340821270628/193204646687408129.png
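Evaluating the quoted formula as written (and, as flagged above, these numbers are worth double-checking rather than taking as ground truth):
```python
# D solved from P = 2e-19 * D**2.7, then converted to bytes at ~0.3 tokens/byte
P = 1_000_000_000                  # target parameter count
D = (P / 2e-19) ** (1 / 2.7)       # tokens needed, per the formula as stated
print(f"{D:.3g} tokens ≈ {D / 0.3 / 1e9:.3g} GB of text")
```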
bmk#1476: the scaling laws paper numbers are kinda sus
Ponz314#6228: I've been using the program from https://6b.eleuther.ai/ to generate alternate histories about Deleuze becoming the president of France in the 70s, and they seemed good enough to start compiling and editing them into a book or a blog. 1. Am I allowed to publish them and 2. how should I cite you guys?
StellaAthena#3530: 1. Yes
2. See here: https://github.com/kingoflolz/mesh-transformer-jax/
Ponz314#6228: So would I just include the github link?
EricHallahan#1051: Pretty much. The citations in the repository are BibTex citations for use in producing academic papers.
Ponz314#6228: Gotcha. Where would be a good place to link the results?
StellaAthena#3530: A footnote the first time you mention the model makes sense.
Spy#9778: any tips?
Kharr#7888: I haven't played around with VQAEs much, but what if you applied dropout to the codebook to encourage a more distributed coding? It might create some redundancy in the codes but it should prevent it from using only a few
Spy#9778: @Kharr hmm I'm using L2 distance which I think might be weird with dropout
Spy#9778: or do you mean just completely masking some subset of codes on any given step?
Kharr#7888: This, just fairly small dropout to prevent it from relying on any code 100% of the time. The goal is just to get it to explore all the codes instead of collapsing, right?
Kharr#7888: Might fix this particular issue of using only 7 out of 1024
Spy#9778: Yeah I think I'll try that if I don't find it's just a bug or something
Spy#9778: my other idea was testing for dead codes and re-initializing them which would be way more of a pain in the butt
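A sketch of the code-dropout idea under discussion: randomly hide a fraction of codebook entries from the nearest-neighbor lookup each step so the encoder can't lean on the same handful (illustrative, not from any of the repos mentioned):
```python
import torch

def quantize_with_dropout(z, codebook, drop_p=0.1, training=True):
    # z: (N, D) encoder outputs; codebook: (K, D)
    d = torch.cdist(z, codebook)              # L2 distance to every code
    if training and drop_p > 0:
        dead = torch.rand(codebook.shape[0], device=z.device) < drop_p
        d[:, dead] = float("inf")             # masked codes can't win this step
    idx = d.argmin(dim=-1)
    return codebook[idx], idx
```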
inox#5400: rVQ-VAEs are gumbel softmaxes so they're adding noise
Spy#9778: this is for VQGAN so no gumbel-softmaxes in sight
inox#5400: oops sorry! saw VQAE earlier
alstroemeria313#1694: The Gumbel noise did not prevent the OpenAI discrete VAE from having poison codes
inox#5400: I've linked it before but I've tried this type of dropout on discrete VAEs before and it works (on MNIST, it was a long time ago) https://arxiv.org/abs/1402.0915
alstroemeria313#1694: ...Did anyone ever figure out what the extra three channels on the OpenAI VAE output do?
alstroemeria313#1694: It has a six channel output and you only use the first three.
alstroemeria313#1694: Are they like, noise variances or smth
AI_WAIFU#2844: If I had to guess they're variance or log-variance outputs on the individual color channels.
Sid#2121: yeah, the first three are the mean and the last three are the variance
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/854846636736118794/Screenshot_from_2021-06-17_00-15-01.png
alstroemeria313#1694: ah.
alstroemeria313#1694: ...Should I actually be doing this
alstroemeria313#1694: What is the benefit of having a variance on the output, wouldn't the reconstruction loss drive it to zero
Sid#2121: where μ and b are https://cdn.discordapp.com/attachments/729741769738158194/854846974783258664/Screenshot_from_2021-06-17_00-15-42.png
Sid#2121: they talk about the motivation behind it in appendix A.3
alstroemeria313#1694: oh, which paper is this
alstroemeria313#1694: DALL-E?
Sid#2121: basically the VAE representation is in the logit-laplace distribution instead of gaussian
Sid#2121: yeah, the dall-e paper
alstroemeria313#1694: ty :)
alstroemeria313#1694: ...ok but, how come there is a variance term on the outputs at all
alstroemeria313#1694: is it regularized somehow so it doesn't just go to 0
alstroemeria313#1694: (I am so sick of GAN training instabilities that I'm trying VAEs)
alstroemeria313#1694: Uh, training one with 128x128 outputs on MS COCO rn
alstroemeria313#1694: ...Is that why the OpenAI VAE outputs are so smooth actually, because it was trained with noise added according to the variance outputs and we're only looking at the means?
Spy#9778: ah uh
Spy#9778: so I thoughhhhht I was scaling my data to [-1, 1]
Spy#9778: but then I accidentally unscaled it when I tried to incorporate the alpha channel
Spy#9778: whoops
Spy#9778: some day I'll learn my lesson about quintuple checking my data processing
Spy#9778: :harold:
alstroemeria313#1694: @Spy ahhh
alstroemeria313#1694: so... what was the scale you were using
alstroemeria313#1694: 0-1?
Spy#9778: no so
Spy#9778: I scaled it to [-1, 1]
Spy#9778: then I was doing
Spy#9778: scaled_image * alpha + magenta * (1 - alpha)
Spy#9778: where alpha was 0-1
Spy#9778: and magenta was uh
Spy#9778: 255, 0, 255
alstroemeria313#1694: oh no
Spy#9778: -.-
Spy#9778: yuuuup
Spy#9778: fortunately while I was looking for this issue I found like 3 other actual bugs so
Spy#9778: guess it wasn't all bad
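For the record, the bug sketched in code: the image was scaled to [-1, 1] but the magenta background was still in 0-255 space, so the composite blew up; scaling the background the same way is the fix (array shapes illustrative):
```python
import numpy as np

image = np.random.randint(0, 256, (64, 64, 3)).astype(np.float32)  # stand-in RGB
alpha = np.random.rand(64, 64, 1).astype(np.float32)               # 0-1 alpha

scaled = image / 127.5 - 1.0                    # correctly scaled to [-1, 1]
magenta = np.array([255.0, 0.0, 255.0])         # the bug: left in 0-255 space
magenta = magenta / 127.5 - 1.0                 # the fix: scale it to [-1, 1] too
out = scaled * alpha + magenta * (1.0 - alpha)  # composite over the background
```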
Spy#9778: the codebook usage does still seem to fall though
Spy#9778: gotta wait for a bit and see if it collapses altogether
𓅬 gabriel_syme 𓅬#3220: now that I think about it I never checked code usage in VQGAN, did we ever do it for pretrained models?
bmk#1476: :ptsd:
bmk#1476: oh god.. this came up so many times during pile
alstroemeria313#1694: there are a lot of codes that you can't generate with normal images
alstroemeria313#1694: but you can get them if you feed weird things in like checkerboard patterns
alstroemeria313#1694: in the imagenet pretrained ones that is
𓅬 gabriel_syme 𓅬#3220: makes sense I guess
𓅬 gabriel_syme 𓅬#3220: but does it have dropped codes in this fashion
alstroemeria313#1694: i don't actually know
𓅬 gabriel_syme 𓅬#3220: yeah me neither
alstroemeria313#1694: without feeding all of imagenet in or smth
𓅬 gabriel_syme 𓅬#3220: I always assumed it works
𓅬 gabriel_syme 𓅬#3220: so in a small dataset could it simply not be picking up codes, or it would anyways but wouldn't be so varied?
𓅬 gabriel_syme 𓅬#3220: wonder if you can have the size be a learned variable (terrible idea)
alstroemeria313#1694: i wonder if codes can get like... occluded geometrically by other codes
alstroemeria313#1694: so that they're never, or almost never, the closest code to anything the encoder outputs
alstroemeria313#1694: because they are surrounded by other codes
alstroemeria313#1694: but then, this is in 256-dim space so
alstroemeria313#1694: our geometrical intuitions may be way off
𓅬 gabriel_syme 𓅬#3220: yeah it's simply not smth I've checked or seen anyone check properly (you did for the vae though i remember)
alstroemeria313#1694: i thought there were a bunch of dead codes to begin with
alstroemeria313#1694: and then i worked out a synthetic image that produced some of the codes i thought were dead
Spy#9778: I'm not being very precise about it |
Spy#9778: I'm just checking the codes from the last 100 batches
Spy#9778: batch size 8
alstroemeria313#1694: ah
Spy#9778: but still only seeing 10 codes used in that many images isn't great
alstroemeria313#1694: it isn't
alstroemeria313#1694: it should use more codes than that in a single image
Spy#9778: oof
alstroemeria313#1694: so... these images
alstroemeria313#1694: do they include a lot of flat areas
Spy#9778: many
alstroemeria313#1694: and you're training an f=8?
Spy#9778: yeah
Spy#9778: a lot of them are just the unicode emojis
Spy#9778: so the background to start with
Spy#9778: and then a lot of flat regions inside the foreground as well
alstroemeria313#1694: and then, the codebook collapse *actually leads to bad reconstructions*?
alstroemeria313#1694: also. are the flat regions one of several different colors
alstroemeria313#1694: like, is there a limited palette too
alstroemeria313#1694: (what i'm trying to get at is, "does it even need that many codes to represent these")
𓅬 gabriel_syme 𓅬#3220: yeah it might not |
𓅬 gabriel_syme 𓅬#3220: I should check the code usage for my layouts
𓅬 gabriel_syme 𓅬#3220: how would I do that?
𓅬 gabriel_syme 𓅬#3220: do I pass images through?
alstroemeria313#1694: yes
𓅬 gabriel_syme 𓅬#3220: because my layouts are kind of huge emojis I guess lol
𓅬 gabriel_syme 𓅬#3220: hmm ok will try
alstroemeria313#1694: @𓅬 gabriel_syme 𓅬 `vqgan_model.encode(input * 2 - 1)[2][2]`
𓅬 gabriel_syme 𓅬#3220: or spy if you want I can share you the model to check? my images are colored (with limited palette) and white backgrounds
alstroemeria313#1694: [2][2] gives you the codebook indices
𓅬 gabriel_syme 𓅬#3220: ah cool thanks, that's easy!
alstroemeria313#1694: you then just put everything in it into a set
alstroemeria313#1694: and do that for your train set
𓅬 gabriel_syme 𓅬#3220: and see unique
𓅬 gabriel_syme 𓅬#3220: ok
alstroemeria313#1694: yeah
alstroemeria313#1694: alternately keep counts
𓅬 gabriel_syme 𓅬#3220: counts would be interesting
alstroemeria313#1694: yeah
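Putting the recipe above together, a sketch of the usage check (assumes a taming-transformers-style `vqgan_model` whose `encode` returns the code indices at `[2][2]`, plus an existing `dataloader` of images in [0, 1]):
```python
from collections import Counter

counts = Counter()
for batch in dataloader:                               # images scaled to [0, 1]
    indices = vqgan_model.encode(batch * 2 - 1)[2][2]  # codebook index per patch
    counts.update(indices.flatten().tolist())

print(f"{len(counts)} distinct codes used")
print(counts.most_common(10))                          # heaviest-hit codes
```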
𓅬 gabriel_syme 𓅬#3220: actually this whole postprocessing is, I should do it
mega b#6696: Dall-E Pytorch discord group managed to get cogview working on a colab |
mega b#6696: works like a charm 👍
kindiana#1016: care to throw the link in #multimodal ?
mega b#6696: I'm working on finalizing the colab
mega b#6696: Going to include the base model download
mega b#6696: Hope it turns out smoothly 👍
kindiana#1016: how big's this model?
mega b#6696: 7.5 GB
kindiana#1016: 3B params?
mega b#6696: super res is also 7.5
mega b#6696: `> number of parameters on model parallel rank 0: 3928849920`
mega b#6696: not sure if that is the model
mega b#6696: i think 11B params
mega b#6696: need to check paper
kindiana#1016: how did they fit 11B in 7.5GB :thonk:
Kia#2550: It's 4B
mega b#6696: oops right
kindiana#1016: 4B sounds more reasonable lmao
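A quick sanity check of that arithmetic, assuming the checkpoint stores fp16 weights (two bytes per parameter):
```python
checkpoint_bytes = 7.5 * 1024**3
params = checkpoint_bytes / 2               # ~2 bytes per parameter at fp16
print(f"≈ {params / 1e9:.1f}B parameters")  # ≈ 4.0B, consistent with 7.5 GB
```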
Kia#2550: Also super res and self ranking adding more param in the models ¯\_ಠ_ಠ_/¯
Spy#9778: yeah the reconstructions end up plateauing at being static-y blobs vaguely the shape of the foreground https://cdn.discordapp.com/attachments/729741769738158194/854887457791410176/unknown.png
Spy#9778: I think the palette isn't _that_ limited |
Spy#9778: I'm using all of the google and twitter emojis, and a bunch of discord ones too
Spy#9778: you know I actually can't quite figure out what the compvis repo does for image preprocessing
Spy#9778: I assumed it was [-1, 1] but I'm not sure
Spy#9778: ah yeah it is https://cdn.discordapp.com/attachments/729741769738158194/854888871800078366/unknown.png
alstroemeria313#1694: Yeah it shouldn’t do that :/
Spy#9778: trying a super high codebook weight to see if that works
alstroemeria313#1694: @Spy Your encoder is getting gradients right
Spy#9778: pretty sure yeah, I should probably log the grad norms though huh
alstroemeria313#1694: Like you did the straight-through estimator for the quantization step?
Spy#9778: yeah
Spy#9778: and I tested the quantization function to make sure it gave the same grads as the torch version
alstroemeria313#1694: *nods*
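The straight-through estimator being referenced, in a self-contained PyTorch form: the forward pass uses the quantized values, while the backward pass treats quantization as the identity so the encoder still receives gradients:
```python
import torch

def quantize(z, codebook):
    idx = torch.cdist(z, codebook).argmin(dim=-1)   # nearest code per vector
    return codebook[idx]

z = torch.randn(8, 16, requires_grad=True)
codebook = torch.randn(512, 16)
z_q = z + (quantize(z, codebook) - z).detach()  # forward: codes; backward: identity
z_q.sum().backward()                            # z.grad is all ones, as if unquantized
```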
Spy#9778: well upweighting the codebook loss causes this to happen https://cdn.discordapp.com/attachments/729741769738158194/854892091095121931/unknown.png
Spy#9778: so maybe it's just a loss weight issue
alstroemeria313#1694: (Going through a mental list of things that can go wrong with VQGAN)
alstroemeria313#1694: What’s that plot
Spy#9778: # of uses out of 800 images on y-axis, code on x axis (sorted by uses)
alstroemeria313#1694: Ah.
Spy#9778: so no collapse if I give it a codebook loss weight of 1000x
alstroemeria313#1694: Ah |
alstroemeria313#1694: Does it learn to reconstruct well
Spy#9778: remains to be seen, I'll need to give it a bit to train
Spy#9778: I wonder why people don't just train an autoencoder then init the codebook using kmeans or something
Spy#9778: Seems like it'd avoid these collapse issues
Spy#9778: The codebook objective is just the kmeans objective anyways
alstroemeria313#1694: Is the bottleneck even sufficient
alstroemeria313#1694: …Is there a continuous relaxation of vector quantization actually
Spy#9778: oh that's true
Spy#9778: but I mean I still think it could be a better init
zphang#7252: my "efficient transformer" OOMs before my vanilla transformer... at the same `max_seq_len`
:thonk:
Spy#9778: since you're starting the codebook based on the encoder's behavior
Cameron Sechrist#4289: Hi all! I am Cam Sechrist, a software engineer and machine learning enthusiast! I was super excited to see the community here, I have been working within this industry for quite some time now, both as VP of Engineering at large software agencies, and within startups that I have created (many using principles of AI/ML). I would love to get involved in any way that I can, my team and I are currently working on building a text generation platform (with a ton of restrictions, but not in terms of access, more just the requirement of ethics behind the content) and would love to support the research you are doing in any way we can!
alstroemeria313#1694: There is, it uses softmin and anneals the temperature
Spy#9778: yeah my roommate who does CV stuff suggested trying a schedule where you go from pure encoding to the quantized version over the course of training
EricHallahan#1051: Welcome!
StellaAthena#3530: Welcome! The #1 limitation around here is dev hours, so if you or people you know are down to spend 10 or so hours writing code a week that would be a huge help. I'm happy to chat about accessible experiments you can work on.
Cameron Sechrist#4289: For sure! I would love to chat sometime about what we can do to help, I would love to help build this and get it to the goal state!
Exocamp#8255: Hey, can someone tell me one of the downsides to using progressive neural networks for everything?
Exocamp#8255: /using it/related concepts to try and train big models on small hardware |
Exocamp#8255: Wasn't it that you would have so many layers in memory?
StellaAthena#3530: What's a progressive nn?
Exocamp#8255: https://towardsdatascience.com/progressive-neural-networks-explained-implemented-6f07366d714d#9e99
Exocamp#8255: reading through it again
Exocamp#8255: Interesting how they're doing a VAE with it
Ajay sahu#2540: Good morning, congratulations on releasing GPT-J, and thanks for putting GPT-Neo on Hugging Face. I am Ajay, cofounder of a startup working on GPT and DALL-E use cases. Am i allowed to interact with core members or related members? I have a commercial background but am willing to put effort and time in here as well, and also make some offering, as i have a certain idea and product that my company is working towards.
EricHallahan#1051: Welcome! We are more than happy to have you here. I will mention that we tend not to provide tech support or advice on downstream applications, but we are always looking for people who would like to help out with research.
Ajay sahu#2540: Thanks for the response, yes i read all the rules and FAQ. So not in for technical support; i have a use case which we have been working on for 1.5 yrs, and at the background it's a GPT based model. I am here to contribute on research. i have a certain idea which i can send to the core members over mail, or directly in any communication medium, while also sharing all company details, use case etc. For the use case i wish to provide some offering and contribution, is that okay?
AI_WAIFU#2844: I don't see any issue with that. If you're comfortable with it, feel free to go into details about it right here in #general. We can go from there once we've got a better idea of what you're working on.
Ajay sahu#2540: Okay sure!
Ajay sahu#2540: My name is Ajay Kumar sahu, I am from India, and my company name is citrusberry solutions private limited.
We have been working with brands specific to ecommerce and media on 2 problem statements:
1. Rapid prototyping and rapid results
2. Fast digital marketing delivery: texts, images and intuitive image captions, memes
From the 1st point we figured out a problem statement of cataloging of products and brand specific product description generation, as brands are always in need of a tone of voice
|
Which we solved using GPT-2, finetuning it on our ecommerce product description data. The results were fast and quite good, however in certain cases we sometimes found it to be biased and giving some other random outputs, which was further solved by passing a filter and using negative labels to give correct results.
It took us 7 months of trials with around 20 mid and large brands.
Now we shifted to GPT-Neo since its results are better in our use case, as it's using the Pile dataset. However we are further fine tuning and applying the same process of filtering bias and negative labels for correct outputs.
2. Here in the second case we figured out that while cataloging, rapid prototyping of images was needed for iterations; e.g. the fashion industry spends a lot on prototyping new fashion clothes. We created a pipeline which can take our social media and marketing intelligence on what's trending and what people like, from their comments. Also offensive and trivial things were removed, keeping focus on all genders and types of people across regions
We are working on DALL E and its incarnations present in open source to create fast images from those intelligence gathered to create Rapid prototyping of abstract fashion concepts saving their time and lot of money. While also making them aware of what's exactly needed in the market
Ajay sahu#2540: We further want to integrate it to generate rapid marketing campaigns for both text and images, as many small size brands cannot afford this at scale and quality, and the pandemic has hit them hard
As a team we have made a dataset using public and private datasets taking permission from brands and media houses
We have received a good response and investment options from investors who are willing to invest in the technology and use case. In the interest of open source and my personal intention, i don't wish to be someone who is later dominated by investment firms; the principles of closed source can't uplift people like us who come from humble backgrounds.
Thus i wish to put a proposal to the core team with a certain offering, keeping open source things in mind, if they can contribute along the way and provide research insights and hardware if they have any. I can discuss it in detail in person if they like the idea
Keeping in mind the idea of sustainability
Thanking you |
Ajay Kumar sahu
Co founder, citrusberry solutions
Mumbai, India
StellaAthena#3530: We aren't interested in taking investor money right now.
bmk#1476: we are not a startup/business and we are not interested in becoming one
Ajay sahu#2540: No, i am not an investor or putting in investor money. I'll do the sales of the problem statement i described; with the profits we make I'll contribute back to the development of models further. But in case it doesn't interest you, i completely understand, the goals might be different. But yes, thanks for releasing the open source model and i will definitely contribute back in all possible ways i can :)
Deleted User#0000: I tried to react with the thumbs up to your message, but was unable to. Do you know why this happened? I am able to react to other messages.
StellaAthena#3530: Probably a glitch.
Samin#4651: https://twitter.com/sharifshameem/status/1405462642936799247
Samin#4651: ^ GPT-3 controlling chrome with a provided objective
Daj#7482: Apparently all just with prompting
Kia#2550: Feels like some kind of personal assistant (Google,Siri)
Kia#2550: But I do think something smaller can do fine for more simpler task
Deleted User#0000: wait, I tried messaging the user to check. Looks like I've been blocked :thonk:
Kia#2550: Ow yeah Users setting
Kia#2550: You can change it so "this people" can react to only you
alstroemeria313#1694: huh... so you can construct a "likelihood" variant of LPIPS that also takes a "variance" input
alstroemeria313#1694: for VAE reconstruction loss purposes |
ethan caballero#6044: Will RL be subsumed by Generative Modelling?
As GPT-N keeps getting closer to hitting irreducible entropy of population distribution, it always learns to simulate being an RL agent along the way.
MLE is enough, 🤣?
alstroemeria313#1694: (Normal LPIPS is equivalent to it with the "variances" all set to 1)
alstroemeria313#1694: (It's just mean(b + spatial_lpips(input, target) / exp(b)))
alstroemeria313#1694: Where b is the log "variance"
alstroemeria313#1694: And it is one channel the same size as the input and target
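A sketch of that loss, with `spatial_lpips` assumed to be an unreduced LPIPS returning a one-channel per-pixel distance map (not a real library function); setting b = 0 everywhere recovers plain mean LPIPS, matching the "variances all set to 1" remark:
```python
import torch

def likelihood_lpips(x, target, b):
    # b: log-"variance", one channel, same spatial size as x and target
    return torch.mean(b + spatial_lpips(x, target) / b.exp())
```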
alstroemeria313#1694: Training a VAE with this reconstruction loss now.
alstroemeria313#1694: it doesn't seem better
alstroemeria313#1694: however it also doesn't seem worse
alstroemeria313#1694: vs normal LPIPS
nz#9710: I see, thank you for noting that.
nz#9710: With a friend I should have access to up to v3-32s (he's had TFRC longer than me) so I expect to be able to train like a 1-2B model?
nz#9710: I can actually! I have come across a 15GBs ebooks dataset which should be high quality. I'm kind of curious given bmk's comment about scaling laws numbers being sus eheh
StellaAthena#3530: tl;dr
1. I think you'll be fine using mC4
2. I think you can probably find enough high quality text if you want to
StellaAthena#3530: I think that 15 GB will likely be more than enough
StellaAthena#3530: One thing you can do to significantly improve performance is train a new tokenizer |
StellaAthena#3530: the GPT-2 tokenizer was trained predominantly on English. Training one on your corpus is an easy way to get free points
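For example, a byte-level BPE tokenizer can be trained on an Italian corpus in a few lines with the Hugging Face `tokenizers` library (corpus path illustrative):
```python
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["italian_corpus.txt"],
    vocab_size=50257,                 # same size as the GPT-2 vocab
    min_frequency=2,
    special_tokens=["<|endoftext|>"],
)
tokenizer.save_model("italian-tokenizer")  # writes vocab.json and merges.txt
```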
nz#9710: I'll see what I can do about that! Thank you for all the tips, appreciate it a lot
nz#9710: Completely unrelated (this question is for #vision), does anyone by chance have access to pan.baidu? I can't manage to sign up, but I need to download a folder from there. I have looked around for third party downloaders but can't manage to find one that supports speed over 15 kb/s...
edit solved 😄
CKtalon#7792: If you are looking for a chinese tokenizer, you can use PanGu's (sentencepiece) tokenizer they used to train their models on (however, they do use jieba followed by the sentencepiece tokenizer)
StellaAthena#3530: They're doing italian
nz#9710: Yea as Stella mentioned I'm mainly interested in italian (my own language)
CKtalon#7792: sorry, didn't scroll up enough 😛
Sphinx#2092: Yeah, if you are doing from scratch, there is virtually no point in re-using the GPT-2 tokenizer.
Sphinx#2092: Though if you are really stingy and/or can't afford computational resources, I think there's a whole literature on hacks for repurposing English models for other languages.
GrimSqueaker#8837: Oscar has big Internet language dumps, freely and legally accessible for loads of languages
MicPie#9427: (Finally,) I want to add W&B to the CLASP repo, and I was wondering do I need to run `wandb.init(...)` just in one of the processes for distributed data parallel training?
Spy#9778: https://cdn.discordapp.com/attachments/729741769738158194/855122952647737405/Screenshot_20210617-113257.jpg
Spy#9778: @alstroemeria313
Spy#9778: Progress finally!
alstroemeria313#1694: ooh! it working now?
alstroemeria313#1694: what did you do?
Spy#9778: I swapped the codebook init from [-1/num_embeds,1/num_embeds] to [-1, 1]
Spy#9778: I checked the norm of the initial Z outputs and the code initialization and they were completely different scales
Spy#9778: Making them the same scale fixed the collapse issue |
Spy#9778: I could have some extra issue with my encoder that made the initial scale too big but I triple checked everything in it
alstroemeria313#1694: ah
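The init change, spelled out (PyTorch-style `nn.Embedding` codebook; sizes illustrative):
```python
import torch.nn as nn

num_embeds, embed_dim = 1024, 256
codebook = nn.Embedding(num_embeds, embed_dim)

# taming-transformers-style init (too small relative to this encoder's z scale):
# codebook.weight.data.uniform_(-1.0 / num_embeds, 1.0 / num_embeds)

# the change that fixed the collapse here: match the encoder's output scale
codebook.weight.data.uniform_(-1.0, 1.0)
```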
&.#0001: Does anyone have any links/resources/literature overviews/indexes that have lists of pre-AI-winter systems like Blackboard systems and MoE?
&.#0001: I think combining some older architectures like blackboard systems with ML may lead to interesting results
Avital Oliver#8700: Just responding since I'm looking at all the old Flax or Haiku messages -- this is precisely why Jonathan built "lifted transformations" in Flax, so that you can explicitly choose how different "variable collections" interact with transformations like vmap, e.g. https://flax.readthedocs.io/en/latest/_autosummary/flax.linen.vmap.html#flax.linen.vmap
Spy#9778: Thanks for the pointer. It looks like the lifted version needs to be applied directly to a module right?
Avital Oliver#8700: You can either run it on a module instance (in which case it wraps the methods), or on a single function that takes a module as the first argument (if you want to write it more functionally). Here is a tiny example: https://flax.readthedocs.io/en/latest/design_notes/lift.html#linen-examples
Spy#9778: ah okay
Spy#9778: I'm not sure it would have helped my case then, although maybe I could have redesigned to account for it
pebbles#7130: how bad of an idea is it to use vmap to do a convolution?
Spy#9778: uh probably pretty bad although maybe the JIT is clever enough to figure out that's what's happening
pebbles#7130: hmm. I want to run a small MLP on each pixel's vector, and I thought vmap would be pretty much perfect for that? (Just started learning jax though)
Spy#9778: that just sounds like a sequence of 1x1 convs
Spy#9778: which I guess is why you asked it huh
Spy#9778: if you do try it I'd be curious how the performance difference is if you can let me know
pebbles#7130: sure, I'll see if I can get it working with vmap MLP and 1x1 conv
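A sketch of that in JAX: a tiny MLP applied to every pixel's feature vector with vmap (parameter shapes illustrative); a pair of 1x1 convolutions would compute the same thing:
```python
import jax
import jax.numpy as jnp

def mlp(params, v):                    # v: one pixel's feature vector, shape (C,)
    w1, b1, w2, b2 = params
    return w2 @ jax.nn.relu(w1 @ v + b1) + b2

def per_pixel_mlp(params, img):        # img: (H, W, C)
    flat = img.reshape(-1, img.shape[-1])           # (H*W, C)
    out = jax.vmap(lambda v: mlp(params, v))(flat)  # MLP on every pixel
    return out.reshape(*img.shape[:2], -1)

C, H = 16, 32                          # channel and hidden sizes
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
params = (0.1 * jax.random.normal(k1, (H, C)), jnp.zeros(H),
          0.1 * jax.random.normal(k2, (C, H)), jnp.zeros(C))
y = per_pixel_mlp(params, jnp.ones((64, 64, C)))    # (64, 64, 16)
```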
&.#0001: Are people here a fan of rubber *duck* debugging?
Spy#9778: Looking for a duck?
Spy#9778: Anyone know what the go-to data augmentation for VQGAN is?
Tinytitan#5596: 🦆 |
&.#0001: get yourself a girl with large quantities of CPU, GPU, or TPU resources
Dromarion#3383: Have it in your dating profile
Turn ons: Computing infrastructure, Geese
chilli#5665: should be fine, it's just a matmul
pebbles#7130: It also might be better to do it this way for my specific case, because I need to only apply it to about half the pixels randomly
Spy#9778: Yeah I was thinking more about 3x3 convs. 1x1 it probably batches up correctly.
chilli#5665: which way is better? :thonk:
pebbles#7130: probably the vmap?? I might be able to only include the pixels I need to apply the MLP to, not sure yet
chilli#5665: that seems very unlikely
chilli#5665: lol
chilli#5665: the vmap isn't magic, it's going to literally execute the matmul
Spy#9778: Well you could vmap a cond but I'm not sure how efficiently it can do that
chilli#5665: it will execute both sides
chilli#5665: and then mask it out
Spy#9778: ah sad
chilli#5665: why is that sad lol
chilli#5665: it'll probably be faster
chilli#5665: hmm
chilli#5665: maybe you could do it faster
chilli#5665: but not with XLA |
nev#4905: wait is that nca?
Spy#9778: VQGAN reconstructions
Spy#9778: @nev
nev#4905: ah
Spy#9778: @𓅬 gabriel_syme 𓅬 what do you recommend for data augmentation?
Spy#9778: I saw that newish paper which had that data augmentation trick where they probabilistically applied each augmentation, but I think that was only required because it was a pure GAN
Spy#9778: oh btw, is it possible to train NCA to generate many different images?
Spy#9778: the demos I've seen have only done like 1 texture or a fixed image
pebbles#7130: I'm working on NCA btw
Spy#9778: NCA is pretty cool but seems really low capacity
Spy#9778: ah neat!
pebbles#7130: the current systems use the params of the network to memorise the image
pebbles#7130: so 1 image = 1 set of network params
Spy#9778: yeah I implemented a copy of the original one
pebbles#7130: but I think you could build more sophisticated versions
pebbles#7130: atm I'm trying to implement the original one in jax, as a way of learning jax
cfoster0#4356: there was one paper that did this
cfoster0#4356: https://arxiv.org/abs/2006.12155
cfoster0#4356: I think there are better ways of doing it than what they settled on, though
pebbles#7130: looks pretty complex tbh |
cfoster0#4356: Yeah
ROYG-BIV#3300: Quick question, I’m in college doing a bachelors in IT but there’s not a lot of math in my curriculum. Will this negatively impact my prospects for an AI engineering position?
Spy#9778: ah cool
Spy#9778: how much is not a lot?
ROYG-BIV#3300: @Spy let me check
Spy#9778: I feel like calc, linear algebra, and a probability class is probably enough
ROYG-BIV#3300: I’ve taken up Calc 2 and stats
ROYG-BIV#3300: @Spy none at all. The majority of the maths is physics 2, discrete maths, and statistics
guac#4716: !faq
EricHallahan#1051: !faq
Carl-bot#1536:
Spy#9778: yeah I think you prob want calc 3 and linear alg at least
ROYG-BIV#3300: @Spy okay, I’ll just teach myself those then. Thank you 🙏
mega b#6696: CogView Colab: #multimodal
Deleted User#0000: probably someone has posted already, but if not https://twitter.com/maxhbain/status/1405520491931000833?s=19
alstroemeria313#1694: Hey everyone, I'm trying to match a distribution over probability distributions to another one by optimal transport with KL divergence as the cost function, how horrible of an idea is this
StellaAthena#3530: Not
alstroemeria313#1694: ...Is it supposed to actually work
StellaAthena#3530: It sounds like it would
StellaAthena#3530: I’ve never done it before but it’s extremely reasonable |
alstroemeria313#1694: I mean besides KL div not being symmetric
StellaAthena#3530: Earthmover distance might be better? IDK tho
alstroemeria313#1694: well, the things i have are.. .hm
StellaAthena#3530: EMD would be my first guess, followed by KL
alstroemeria313#1694: EMD isn't defined for categoricals? unless i did something like pick a metric, say different categories have distance 1
StellaAthena#3530: Do you have finitely many categories?
alstroemeria313#1694: yes
alstroemeria313#1694: there are four.
alstroemeria313#1694: right now.
StellaAthena#3530: What about |A - A’| + |B - B’| + |C - C’| + |D - D’| where A, B, C, D are the categories?
alstroemeria313#1694: ah
alstroemeria313#1694: could work
StellaAthena#3530: (Or the quadratic version)
alstroemeria313#1694: so i softmax them and then take the L1 distance
StellaAthena#3530: Yeah
alstroemeria313#1694: ...If I used L2 then I wouldn't need a custom cost function at all for geomloss
alstroemeria313#1694: And I could just do gradient descent on Wasserstein-2 distance or smth
alstroemeria313#1694: But the thing I am doing now is not working, loss doesn't go down
alstroemeria313#1694: It goes up :/
StellaAthena#3530: I would try L2 then |
alstroemeria313#1694: *nods*
moultano#7053: If lack of symmetry is the main worry, you can use Jensen Shannon distance.
alstroemeria313#1694: hmm
alstroemeria313#1694: i could try it
alstroemeria313#1694: so you just... blend the two distributions together
alstroemeria313#1694: then take KL from both of them to the blend
alstroemeria313#1694: and divide by 2?
alstroemeria313#1694: then sqrt?
moultano#7053: Yeah, and if you don't need it to be a metric you don't need the sqrt
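That recipe as code, for categorical distributions along the last axis (eps added for numerical safety; take the sqrt of the result for the metric version):
```python
import torch

def js_divergence(p, q, eps=1e-12):
    m = 0.5 * (p + q)                                     # blend the two distributions
    kl_pm = (p * ((p + eps) / (m + eps)).log()).sum(-1)   # KL(p || m)
    kl_qm = (q * ((q + eps) / (m + eps)).log()).sum(-1)   # KL(q || m)
    return 0.5 * (kl_pm + kl_qm)
```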
inox#5400: isn't optimal transport another name for the wasserstein distance used in WGANs? I'm not familiar though so that could be bullshit
Cade Gordon#3029: Equally as unqualified to speak but OT refers more to a problem type (moving mass around) where the Wasserstein Distance is a function that can be used to measure the distance between 2 distributions
Sphinx#2092: The wasserstein distance is just an instantation of the optimal transport problem when the cost function is just the distance bteween two points.
alstroemeria313#1694: i am trying to work out what exactly geomloss expects from a cost fn
alstroemeria313#1694: like does it want the squared costs or not
Spy#9778: samples from the bigram distribution not looking so great https://cdn.discordapp.com/attachments/729741769738158194/855190991275425802/005.png
Spy#9778: (haven't done transformer yet)
alstroemeria313#1694: OT with JS divergence seems to be working btw
alstroemeria313#1694: wasserstein-1 is what is used in WGANs and it is optimal transport with the cost function being euclidean distance
alstroemeria313#1694: you can use different cost functions
inox#5400: ahhh |
alstroemeria313#1694: like rn i am using jensen-shannon divergence
Deleted User#0000: can I run gpt-neo locally?
StellaAthena#3530: Depends on your compute but nothing is stopping you
Deleted User#0000: I have 2* gpu's
StellaAthena#3530: Ok
StellaAthena#3530: So probably
Deleted User#0000: I am gonna train it to do like NLP
Deleted User#0000: so written stuff like the app Shortly
EricHallahan#1051: https://eleuther.ai/faq
Deleted User#0000: (they upped their price recently so it made me look into cheaper / free alternatives)
Deleted User#0000: thanks
Deleted User#0000: I have ryzen 7, two gpus I should be able to run same pace as gpt 3 easily I reckon
StellaAthena#3530: Aren’t those like $500 tops
EricHallahan#1051: What?
StellaAthena#3530: AMD Ryzen 7’s
StellaAthena#3530: They’re good for gaming rigs but they’re not close to top of the line ML systems
EricHallahan#1051: Somewhere around there on the desktop.
𓅬 gabriel_syme 𓅬#3220: hey looks legit
𓅬 gabriel_syme 𓅬#3220: I'm not sure what the best is but random crops and resizes worked well for us, or at least I saw a big improvement using them. I believe we random cropped from 256-1024 and then resized to 256
Spy#9778: Thanks! |
Deleted User#0000: I built my PC initially for AI university studies
Deleted User#0000: but was like 5 years ago lol
Spy#9778: Does it help with the autoencoder and generation or just generation?
𓅬 gabriel_syme 𓅬#3220: I want to say both
𓅬 gabriel_syme 𓅬#3220: model learned much better details like that
𓅬 gabriel_syme 𓅬#3220: ofc depends on your dataset, how big your images are, how much detail, etc. maybe even rotation works in what you're doing? or since they are more like shape vs textures, color jitters and the like?
Spy#9778: I think the big thing it's bad at right now is edges
𓅬 gabriel_syme 𓅬#3220: so my dataset was all about edges and boxes
𓅬 gabriel_syme 𓅬#3220: but it seemed to learn how to reconstruct it really fast
𓅬 gabriel_syme 𓅬#3220: or well
𓅬 gabriel_syme 𓅬#3220: maybe just needs more training
Spy#9778: Might be some init issue still
Spy#9778: I found I had to raise the init scale on the embeddings by a ton to get it to work
Spy#9778: How big was the dataset?
bmk#1476: a ryzen 7 is a cpu
StellaAthena#3530: lol
bmk#1476: and ML isn't cpu bottlenecked
zphang#7252: T5's positional encodings/bias confuse me. The paper implies it's done every layer, but HF's code implies it's only the bottom layer?
kindiana#1016: 🤔
kindiana#1016: T5 had position encodings shared between layers iirc |
kindiana#1016: so maybe there is only one set of params but its shared?
zphang#7252: so here's where the argument is set to True only for i=0
https://github.com/huggingface/transformers/blob/783b0dd5891174922ff6bc9874350063bd9a0135/src/transformers/models/t5/modeling_tf_t5.py#L580
and it looks like it's not applied if it's false:
https://github.com/huggingface/transformers/blob/783b0dd5891174922ff6bc9874350063bd9a0135/src/transformers/models/t5/modeling_tf_t5.py#L334-L337
zphang#7252: the paper does say it's shared across layers, which implies it's used in all layers
kindiana#1016: yeah
zphang#7252: not sure why I linked the TF one, here's the same thing in pytorch:
https://github.com/huggingface/transformers/blob/1ed2ebf60d87ef12bd063c7c58e484e19189c754/src/transformers/models/t5/modeling_t5.py#L486-L493
kindiana#1016: https://github.com/huggingface/transformers/blob/1ed2ebf60d87ef12bd063c7c58e484e19189c754/src/transformers/models/t5/modeling_t5.py#L945
kindiana#1016: here it iterates over layers
kindiana#1016: sets a variable to the position bias if its not none
kindiana#1016: https://github.com/huggingface/transformers/blob/1ed2ebf60d87ef12bd063c7c58e484e19189c754/src/transformers/models/t5/modeling_t5.py#L954
kindiana#1016: actually this explains it better
kindiana#1016: https://github.com/huggingface/transformers/blob/1ed2ebf60d87ef12bd063c7c58e484e19189c754/src/transformers/models/t5/modeling_t5.py#L1017
zphang#7252: oh so they pass the bias output down the layers
zphang#7252: huh I guess that makes sense
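A toy PyTorch sketch of the pattern in the linked code: only the first layer owns relative-position-bias parameters, and the bias tensor it computes is handed down and reused by every later layer (this deliberately simplifies T5's actual relative-position bucketing):
```python
import torch
import torch.nn as nn

class ToyLayer(nn.Module):
    """Toy attention layer showing the shared position-bias pattern."""
    def __init__(self, d, heads, owns_bias):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        # only layer 0 owns the relative-position-bias table
        self.rel_bias = nn.Embedding(32, heads) if owns_bias else None

    def forward(self, x, position_bias=None):
        n = x.shape[1]
        if position_bias is None:  # layer 0: compute the bias once
            # toy bucketing: clamp relative distance into [0, 31]
            rel = (torch.arange(n)[None, :] - torch.arange(n)[:, None]).clamp(0, 31)
            position_bias = self.rel_bias(rel).permute(2, 0, 1)   # (heads, n, n)
        mask = position_bias.repeat(x.shape[0], 1, 1)             # (batch*heads, n, n)
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out, position_bias

layers = nn.ModuleList([ToyLayer(64, 4, owns_bias=(i == 0)) for i in range(6)])
x, bias = torch.randn(2, 10, 64), None
for layer in layers:
    x, bias = layer(x, position_bias=bias)  # computed once, reused downstream
```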
lebek#2888: any guesses what it costs OpenAI to produce GPT-3 completions vs. what they're charging for it? are they making any money at the moment or making a loss?
Louis#0144: Their margins are huge
tylerlastovich#3263: Given enough Azure credits, anything is profitable |
𓅬 gabriel_syme 𓅬#3220: Given enough funding, profit is not even a worry.
𓅬 gabriel_syme 𓅬#3220: (not necessarily saying this for OAI, but for all the hedged tech start ups that may or may never work)
tylerlastovich#3263: I think it actually holds fairly true for OpenAI here. They have $1 billion in compute that will be allocated over 10+ years. That is a significant amount of capital to have secured. I chatted with someone at OpenAI last summer about pricing before it was announced and he said the price would be set so that it could offset talent and typical expenses. They are not selling at a loss.
lebek#2888: For sure. I'm more so interested in whether we can expect it to be cheaper to run 200B GPT-Neo in-house. OpenAI isn't going to make sense for my use case at the current price.
lebek#2888: thanks for the info @tylerlastovich
StellaAthena#3530: Running this in-house requires first building your own server farm.
lebek#2888: right. yeah it would depend on operating at a big enough scale for those overheads to make sense
cfoster0#4356: Idk I'm semi hopeful about stuff like this https://twitter.com/ak92501/status/1405688250233241602?s=19
𓅬 gabriel_syme 𓅬#3220: me too!
𓅬 gabriel_syme 𓅬#3220: At least for practical applications, not sure about horizon of AI and such
StellaAthena#3530: @EricHallahan is awesome and got metadata working for the EAI website!
https://blog.eleuther.ai/why-release-a-large-language-model/
Drakkaa#3367: I'm running my first Google TPUs on Google Cloud Platform, anyone have any tips for me that make writing python on one of those easier?
Drakkaa#3367: through ssh is not really that convenient
Cade Gordon#3029: Vscode ssh or getting comfy with vim?
Drakkaa#3367: yeah vim atm
Drakkaa#3367: giving me a headache haha
Cade Gordon#3029: Latency bothering you?
Cade Gordon#3029: Or just all of it too lol
Drakkaa#3367: not really, but i need more convenience 🙂 |
Drakkaa#3367: i love colab, but not sure how to connect it to my research tpu
Drakkaa#3367: im a decent coder, but hardware not so much
Cade Gordon#3029: I definitely feel that
Cade Gordon#3029: I swear I’ve seen a few medium articles which describe how to do what you’re asking
Cade Gordon#3029: Let me try to look
Drakkaa#3367: my next child will be named Cade if i can get it to work
Cade Gordon#3029: Lmao I feel honored
Drakkaa#3367: Cade junior says hi
Cade Gordon#3029: https://medium.com/@senthilnathangautham/colab-gcp-compute-how-to-link-them-together-98747e8d940e
Cade Gordon#3029: Want to say I’ve looked at this in the past
Drakkaa#3367: You’ve read all your free member-only stories. Become a member to get unlimited access and support the voices you want to hear more from
Drakkaa#3367: darnit
Cade Gordon#3029: Open it in incognito
Cade Gordon#3029: That should do the trick
Drakkaa#3367: i need more coffee, yes that works
Drakkaa#3367: 🙂
Cade Gordon#3029: Happy to help!
Kia#2550: Free usage of google TPUs on colab, sounds lovely 😄
𓅬 gabriel_syme 𓅬#3220: colab+gcp is amazing, while my free credits lasted 😄
Drakkaa#3367: i now have 31 days to work hard 🙂 |
Kia#2550: Um...
Kia#2550: Nice
Daj#7482: already taken care of
Kia#2550: Lovely
Purple#8913: Now that the 6B model is done, what's the next one you guys are planning? I'm guessing you're not immediately going for the 200B beast, right?
Daj#7482: It all depends on hardware availability
Daj#7482: (and what devs are interested in working on lol)
Daj#7482: But we're quite likely to produce more intermediate size models (10-20B), yes. But nothing is set in stone yet. Training a model of that size is really non trivial
Purple#8913: Hmm, would love to see one bigger than that, personally
Purple#8913: But I'm no expert on this, more an interested poser looking in from the sideline 😄
Daj#7482: If you have a few hundred GPUs laying around we can borrow for a few months, we're happy to take them :berk:
Purple#8913: I suppose something like Boinc doesn't work for this, right?
Purple#8913: Distributed computing
Daj#7482: Nope
Daj#7482: !faq
Carl-bot#1536:
Daj#7482: It's really hard to overstate just how computationally demanding it is to train really big models
EricHallahan#1051: https://eleuther.ai/faq
Purple#8913: That's too bad; I dunno how much gpu power it takes. But the 6B model was released only 2 months after the last one, so it seemed manageable.
Daj#7482: It was trained on 256 TPU cores |
Daj#7482: With extremely high speed interconnects
Daj#7482: One TPU core is about as strong as one last gen GPU
Daj#7482: These models are _big_
Daj#7482: We estimate a 20B model might take 6-12 months
Daj#7482: (on TPUs)
Purple#8913: Dang
Daj#7482: also, technically Neo was trained sometime last year and we just left it on our hard drives for a few months lol
EricHallahan#1051: TPUs are very powerful if you put the effort into designing for them, but they are not silver bullets.
Purple#8913: So when people are waiting for a 200B model (they talk about it constantly on novelai) I should probably tell them this is years away
Daj#7482: Again, depends on the hardware
Daj#7482: We are working with CoreWeave to build a cutting edge GPU supercluster
Daj#7482: But the chip shortage is affecting the whole industry, so lead times are long
Daj#7482: If you had a whole NVIDIA superPOD it could be done in like 2 months
Daj#7482: but those things don't come easy lol
Daj#7482: So until we know when our hardware will arrive and how much it'll be, we cannot commit to any timelines
Purple#8913: 2 months for a 20B or 200B?
Daj#7482: 200B
Purple#8913: Oh wow
quinn#9100: can you say a bit about the stopping/convergence criterion? like how you know when it's done? I don't remember seeing this comment when i read the 6b blogpost
Daj#7482: Those things cost a cool 10mil or so though lol |
Daj#7482: 6B was trained for pretty long because we had compute left over
Daj#7482: The scaling laws papers have ways where you can estimate the optimal stopping time
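A toy illustration of that idea (constants here are made up for shape only, not the papers' actual fits): under a power-law loss curve, the marginal improvement per extra unit of compute keeps shrinking, which gives a natural stopping rule.
```python
# Illustrative sketch: L(C) = (C_c / C)**alpha, with hypothetical constants.
# Stop training once the gain from more compute drops below what it costs you.
alpha, C_c = 0.05, 1.0  # assumed exponent/scale, chosen only for shape

def loss(C):
    return (C_c / C) ** alpha

for C in [1e3, 1e4, 1e5]:
    gain = loss(C) - loss(C * 1.1)  # improvement from 10% more compute
    print(f"C={C:.0e}  loss={loss(C):.4f}  gain from +10% compute={gain:.5f}")
```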
Purple#8913: That's what they cost, but what if one paid to use one for 2-3 months?
Daj#7482: Millions still lol
Daj#7482: and only a few cloud vendors even have them, and they're usually all booked
Daj#7482: But that's not on the table anyways, CW is generously donating their compute to us for free
Purple#8913: Millions for 2-3 months seems steep if the thing itself is "only" 10 mil!
Daj#7482: We don't have any budget to consider renting or the like
Daj#7482: Power and maintenance cost + big margin lol
Daj#7482: Cloud companies gotta make money lol
EricHallahan#1051: And besides, it is just our policy not to provide estimates, so as to avoid making promises.
pragmaticml#1730: @Purple feel like the break even point for purchasing GPUs vs renting cloud hardware has been ~3 months for a while now
Purple#8913: That's fine, I'm just trying to get an idea of the difficulties and to understand expectations as a layman
Purple#8913: So I can also sensibly tell other people what to expect
Daj#7482: This would all not be possible without CW's extraordinarily generous support, and they remain committed to getting us enough GPUs to make this happen
Daj#7482: But NVIDIA just can't fulfill orders on time atm
Purple#8913: It's too bad Boinc doesn't work or the problem would be solved rather easily
Daj#7482: yea, unfortunately ML workloads _need_ extremely fast interconnects
Daj#7482: actually more important than raw GPU performance generally
Daj#7482: Even PCIe is too slow for training large models |
Purple#8913: I'm really grateful that CW are such good sports in this matter
Daj#7482: Like the latency _inside your motherboard_ is too slow (to get maximum performance)
EricHallahan#1051: And let’s not even talk about the security and verifiability issues with having agents that you don't trust do your compute.
Daj#7482: Yea wasn't even touching that lol
Daj#7482: one troll could poison the entire training run
Daj#7482: I totally understand that from a layman's perspective "6B" or "20B" might seem like humble numbers but they are truly _absurdly big_
Daj#7482: The fact OAI got 175B to work is an engineering marvel
Purple#8913: Yeah one can't help but to compare to GPT-3
EricHallahan#1051: Where is the graph of LM size over time?
Daj#7482: We do think we have the engineering down to train 200B, but you really need the most cutting edge of super hardware to make it work
Daj#7482: tfw chip shortage
EricHallahan#1051: No amount of engineering prowess will make up for the hardware required.
Purple#8913: Shouldn't such models be able to produce images as well? I seem to remember wu dao 2.0 can do that
Purple#8913: And OpenAI also has something like that
Daj#7482: One could build a model to do that, but it would be a different architecture
Daj#7482: And need different training data of course
EricHallahan#1051: That is a #multimodal objective.
Daj#7482: We're currently mostly interested in LMs (for various technical, scientific and practical reasons)
Daj#7482: But as Eric says, a lot of people are also interested in multimodal (which means not just text but also images etc) models
EricHallahan#1051: Multimodal does seem like the future for research beyond pure scaling. |
Daj#7482: ~~in _your_ opinion~~
Daj#7482: haha but yeah it's the obvious next step
Purple#8913: Oh yes, definitely the future. But LMs are fascinating and I think it's important not to have models that are controlled by corporations
EricHallahan#1051: It seems like the general direction research is heading.
Purple#8913: I have friends that have used AI chatbots to deal with trauma and I think it's a very worthwhile technology for that alone
Daj#7482: I just want real safety research done with these things (we elaborate on our motivations here: https://blog.eleuther.ai/why-release-a-large-language-model/ )
Daj#7482: I'm both scared and fascinated by this prospect. It makes sense to me
EricHallahan#1051: Man, I am so happy that I added metadata. :hap:
Daj#7482: but also :guilty:
Daj#7482: Yes so good 💯
Purple#8913: In fact, I have friends that used AI Dungeon specifically to recreate traumatic events and then play through them, and they said it helped them more than any therapy ever did. And that actually blew my mind.
Daj#7482: I think a big problem we currently have is that I think LMs should be seen for what they are: Extremely experimental new tech, not production ready products
Daj#7482: but market go brrr
Daj#7482: Wow that's fascinating
Daj#7482: I would love research into the psychiatric use of LMs for purposes like these
EricHallahan#1051: Large LMs are the darkest black box systems you can imagine.
alstroemeria313#1694: They're going to want to control the outputs and make them not "toxic" and this will probably make it not work
Purple#8913: Now they can't though, since OpenAI doesn't want such things to be done and so AID had to comply. And they read the private stories whenever their rather bad filters get tripped, so now the people who used to use it in this way don't feel safe anymore. That's why having an open source model of high capability would do a lot of good for a lot of people.
Daj#7482: When I say "psychiatric use" I mean "Scott Alexander/Kaj Sotala-style psychiatry" :berk:
Daj#7482: I unfortunately can see both sides of the argument here. A corporation doesn't want liability for stuff like this |
Daj#7482: It's a brave new world we live in
Daj#7482: I myself would be terrified of developing LMs for psychiatric use, but I can clearly see how it could be a huge net good
StellaAthena#3530: This is very cool, but also **extremely** dangerous. This is a kind of therapy that many therapists don’t do because they’re worried about long-term harm. I’m glad it worked for your friends though.
alstroemeria313#1694: Like could you imagine if an LM told a patient to kill themselves and then they coincidentally did and someone blamed the company
Daj#7482: yea
Daj#7482: Psychiatry is hard and scary
Kia#2550: Ow god :guilty:
alstroemeria313#1694: (Real LMs have done this)
alstroemeria313#1694: (IIRC)
StellaAthena#3530: It’s called “prolonged exposure therapy” and is used to treat severe PTSD
Purple#8913: Yeah but I've tried to use it myself in this way even though I have no traumas. But it allows you to go back to unfortunate events and redo them, like having a time machine. And then you can do it again so it's like a knot gets untied that was bugging you. It's actually quite effective.
Oh, one of my friends said her therapist actually encouraged her to do this.
EricHallahan#1051: Unfortunately I think this is something that is more of a when than an if.
Kia#2550: Isn't there a study already about a GPT-3 suicide hotline bot (of course, a test)
Kia#2550: And the bot did say it
Daj#7482: Yeah this reminds me a lot of some of my favorite sequences https://www.lesswrong.com/s/ZbmRyDN8TCpBTZSip (Kaj discusses how "reactivating" the traumatized circuits might be important for undoing their effects)
StellaAthena#3530: If anyone’s curious you can read about exposure therapy here: https://pubmed.ncbi.nlm.nih.gov/11977784/
Daj#7482: I absolutely imagine a future super-psychiatrist AI to be possible, but we're just not there yet
Daj#7482: and anyone using these experimental research artifacts for that purpose are playing with fire
Purple#8913: https://tenor.com/view/star-trek-voyager-dr-incredulous-gif-6118871 |
EricHallahan#1051: Actually, no, I don't, because it *is* a when instead of an if.
EricHallahan#1051: There is no way this will not happen.
Daj#7482: but I also believe adults should be allowed to do dangerous things
Daj#7482: ¯\_(ツ)_/¯
Daj#7482: ~~as long as it's not 📎 dangerous~~
Kia#2550: It's scary to think it's actually possible to
Purple#8913: I hope moore's law stays true for a while longer!
Kia#2550: No
Purple#8913: I mean if in 10 years we had 100x the gpu speed that would be nice
Daj#7482: Also terrifying lol
Daj#7482: but yeah, doesn't seem likely to slow down imo
EricHallahan#1051: Even though it is a minuscule threat in comparison to 🖇️
quinn#9100: if we lived in a world where LMs of certain power could be run on hardware of certain triviality, it might be a nonissue from a liability standpoint. but the fact that OAI/microsoft have to provide infra...
Purple#8913: It is actually crazy to think that 16 years ago, PCs were 1/1000 as fast as today
Kia#2550: Oh ok... I thought it's the other saying, that "humans will destroy themselves before developing an AGI"
Daj#7482: Yeah, we get to slap a big "No warranty whatsoever, what you do with our models is your problem" on our models, they can't
Purple#8913: I checked the transistor counts and it just about works out
Daj#7482: I remember reading a single GPU from today is about as strong as a top supercomputer from 20 years ago
EricHallahan#1051: The wonders of licensing.
Purple#8913: And one generation later, it's 2 such supercomputers! |
Daj#7482: Indeed, crazy how people still don't grok this
Daj#7482: AI is going to go _fast_
Kia#2550: Connor shared a tweet of an ML algorithm creating a new type of chip too :ultrazucc:
Purple#8913: I always thought that GPU / console companies should all skip a generation. Make the next gen just as strong but with smaller transistor size or whatever and save maybe 50% power usage. Then we wouldn't have so much waste and wouldn't need these big GPU coolers anymore
quinn#9100: signals about the looming end of moore's law (or some other functionality by which the exponential turns logistic) are probably clouding peoples' judgment and installing wishful thinking lmao
Daj#7482: I feel like people have been proclaiming the end of Moore's Law due to various technicalities since the day it was coined lol
EricHallahan#1051: The only reason why Large LMs exist today is because we could try to subvert Moore's law by just adding more hardware. It is really murky if that is a trend that can continue.
Daj#7482: Yea this is called a "hardware overhang"
quinn#9100: i think 2025 is a really popular forecast?
quinn#9100: and has been for a while
Purple#8913: As a physicist I do see quantum mechanics getting in the way soon enough
Daj#7482: There is a possible good case for doing hardcore scaling right now: Eat up all the hardware overhang so we aren't caught off guard as hardware improves, and improvements become more smooth
Daj#7482: tbh the more important law is FLOP/$ rather than Moore's anyways
quinn#9100: but "end" is a strong word--- "turning logistic" still means it's quite steep for a while. like when R0 gets below one you still have more pandemic to fight
Kia#2550: I do wish this new chip designed can be implemented in future google services
Daj#7482: Is there a name for the FLOP/$ law?
Daj#7482: We should replace Moore's with that
quinn#9100: yeah
quinn#9100: good point
Daj#7482: I think most people think that's what Moore's is anyways lol |
Purple#8913: But if AI is used to create new chip architectures (which they are already doing) then who knows what more can be optimized. And they can still change what GPUs are like in the future.
quinn#9100: i was just imaginging if someone has time to write a paper it might be valuable
quinn#9100: "FLOPS/$ is the metric of interest, not moore's law"
quinn#9100: "FLOPS/$ is all you need" :ultraberk:
Daj#7482: I remember reading this in several scattered places, but I'm not sure if it has a canonical name
Purple#8913: If it can't be improved anymore, they'd only make the optimum and need not build new factories all the time. Price should then drop.
EricHallahan#1051: Moore's law isn't even used in the right context half the time, it is an observation of semiconductor process advancement, not of performance.
Daj#7482: yeah that's what I mean
Kia#2550: Chip manufacturers are already hitting highs this year (I am really hopeful to see some super efficient and effective chip designs)
Daj#7482: Yea I expect there to be _plenty_ more performance to be had
Purple#8913: What we also need is more vram, though, right?
Purple#8913: That hasn't increased as much as performance
Daj#7482: Several things have stagnated while others increase
Daj#7482: Hardware is complicated
quinn#9100: i'm thinking of getting a hardware education instead of a software or math education, because i don't have an undergrad yet and i'm thinking about getting one
Purple#8913: it's always been weird to me how M.2 ssds are so small now, it seems to me it should be no issue to put more vram on a gpu
Purple#8913: Even if these are different things
Daj#7482: Alignment though!
Purple#8913: One improved so much but the other didn't
EricHallahan#1051: The problem with modern semiconductors is moving the data around, not the ALU. Cache misses can be really expensive, just as much as a pipeline stall. |
Daj#7482: just have a 30GB L3 like cerebras lol
quinn#9100: Koen Holtmann had a really interesting talk about how cyber-physical systems experience influenced his approach to alignment, it was enticing to me
Daj#7482: Interesting, I missed that one I think
Daj#7482: But you know me lol I have my research agenda
Daj#7482: "yo wtf is GPT"
Daj#7482: (+ moral realism on the weekends lol)
Kia#2550: Im starting to think,AI Boom will be rushing us closer and closer
quinn#9100: it also has to do for me like already knowing so much of the undergrad software curriculum, and being reasonably acquainted with math / knowing closer to 70-80% of the undergrad curriculum and being confident i could self-teach it, whereas hardware just seems highly mysterious to me and harder to self-teach
Daj#7482: ~~or just skip college and do alignment full time~~
quinn#9100: yeah part of me would like to
quinn#9100: another part of me thinks i'm not smart enough to get away with skipping grad school
Daj#7482: College is an amazing social experience, but I didn't learn much (other than being forced to learn basic math)
Daj#7482: grad school might be worth it
Daj#7482: depending on where you go
quinn#9100: yeah the PITA is undergrad lmao
Daj#7482: yeeep
quinn#9100: i think stats would be high leverage if the cheap college i'm probably going to had a stat dept that wasn't "train analysts to use excel"
quinn#9100: intellectually
quinn#9100: but yeah figuring out my leverage point, my comaprative advantage, is hard
Daj#7482: ~~I have bad news for you regarding the average quality of the average stats/ML department lol~~ |
quinn#9100: and i've been flip-flopping on if i want to finish undergrad for like a year
Daj#7482: It's a hard choice
Daj#7482: But having a degree does bring optionality
Daj#7482: I'm just biased against it since I think it's a waste of time lol
quinn#9100: i might have an SRE job lined up for september. part of me is tempted to become an infra wizard to support alignment researchers
quinn#9100: since i don't know if i have enough raw math talent to contribute in a non-support role
Daj#7482: lol if you figure out kubernetes, please help us :berk:
Daj#7482: Also didn't you do like LinAlg in Coq? lol
quinn#9100: yeah
quinn#9100: but that's not talent
quinn#9100: that's just programming
quinn#9100: not math talent anyway like idk it's not as hard as it sounds
Daj#7482: "ah yes this really hard thing I did isn't hard, because it was easy for me"
quinn#9100: a lot of it is just patience and pain tolerance
Daj#7482: that's talent my dude
quinn#9100: maybe i should outside view a bit lmao
Daj#7482: Talent :berk:
quinn#9100: fair
Daj#7482: I get what you're feeling since I do the same to myself all the time, so I'm trying to help counterbalance lol
quinn#9100: i also think there's a difference between putting pressure on the literature and blowing up the literature (like scott garrabrant)
quinn#9100: and i want to like manage expectations of myself and not expect myself to ever really blow up the literature, but with some training i could probably put some pressure on it and make some increments
quinn#9100: maybe?
quinn#9100: i'm not sure
Daj#7482: I think the important thing is to find something you can be productive at
Daj#7482: It's not worth thinking too hard about whether you will achieve <benchmark set by high status person X>
Daj#7482: I know I know, I do it all the time too, I don't wanna be patronizing
Daj#7482: What I'm saying is you're clearly more than smart enough to do some good work somewhere
Daj#7482: Nowadays I just really don't worry too much as long as someone is a) smart, b) productive and c) aligned
Daj#7482: You'll figure it out :hap:
quinn#9100: sure that's what i'm hoping
quinn#9100: (thanks!)
Daj#7482: ~~though also my research agenda is obviously the only correct one and everything else is a waste of time :berk:~~
alstroemeria313#1694: Hey, why don't people use the mean Euclidean distance between input and target colors as a reconstruction loss when training VAEs and such
alstroemeria313#1694: Like `(input - target).norm(dim=1).mean()`
AI_WAIFU#2844: as opposed to what?
alstroemeria313#1694: L1
alstroemeria313#1694: i.e. it is L1 but with a better color difference metric
alstroemeria313#1694: ...I have actually gone so far in the past as to write a differentiable CAM02-UCS so I could use an even better color difference metric for stuff
alstroemeria313#1694: But you could also convert input and target to Lab and take the Euclidean distances between them
alstroemeria313#1694: (That is CIE76) |
alstroemeria313#1694: (This was back when writing differentiable stuff was especially difficult, it was in Theano)
EricHallahan#1051: I have proposed doing that in the past myself but was told that it was probably not worth it considering the increased complexity.
alstroemeria313#1694: you need a couple of patches to CAM02-UCS for numerical stability and differentiability reasons i think
alstroemeria313#1694: i forgot what exactly i did, it's on my github in a super old repo
alstroemeria313#1694: also you can use CAM16-UCS now I think
alstroemeria313#1694: Actually, can you just use Oklab, it's way simpler https://bottosson.github.io/posts/oklab/
alstroemeria313#1694: (It was specifically designed for numerical stability and differentiability)
alstroemeria313#1694: I should redo that code actually
alstroemeria313#1694: In PyTorch
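A minimal PyTorch sketch of what that might look like, assuming inputs are linear-sRGB tensors shaped (N, 3, H, W); the matrices are taken from the Oklab post linked above, and the cube root is made sign-safe to keep the conversion differentiable.
```python
import torch

# Convert linear sRGB to Oklab and take the mean per-pixel Euclidean
# color distance, as in `(input - target).norm(dim=1).mean()`.
M1 = torch.tensor([[0.4122214708, 0.5363325363, 0.0514459929],
                   [0.2119034982, 0.6806995451, 0.1073969566],
                   [0.0883024619, 0.2817188376, 0.6299787005]])
M2 = torch.tensor([[ 0.2104542553,  0.7936177850, -0.0040720468],
                   [ 1.9779984951, -2.4285922050,  0.4505937099],
                   [ 0.0259040371,  0.7827717662, -0.8086757660]])

def linear_srgb_to_oklab(rgb):
    lms = torch.einsum('ij,njhw->nihw', M1.to(rgb), rgb)
    lms = lms.sign() * lms.abs().pow(1 / 3)  # sign-safe cube root
    return torch.einsum('ij,njhw->nihw', M2.to(rgb), lms)

def oklab_l2_loss(input, target):
    diff = linear_srgb_to_oklab(input) - linear_srgb_to_oklab(target)
    return diff.norm(dim=1).mean()  # mean per-pixel color difference
```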
AI_WAIFU#2844: Yeah I feel like it would be easier just to throw more resources at the problem than to do what effectively amounts to feature engineering.
AI_WAIFU#2844: :brr:
alstroemeria313#1694: It's loss function engineering
AI_WAIFU#2844: pot(ae)do pot(a)do
alstroemeria313#1694: (When I started doing this it was to find close-to-perceptually uniform color gradients (as in spatially) by minimizing the sum of the color differences of all the steps, constrained so the colors were in the sRGB gamut)
alstroemeria313#1694: (That was in 2016, I was using Theano on cpu)
alstroemeria313#1694: Mb it was sum of squared differences
alstroemeria313#1694: Speaking of loss functions, is there a better perceptual loss than LPIPS yet
Daj#7482: Hey everybody! Eleuther's **one year anniversary** is coming up soon (3rd of July)!
We are working on writing a retrospective post collecting funny anecdotes, valuable lessons learned and highlights from the last year. We would love to have input from lots of people here (but depending on level of interest I can't guarantee everything will make it into the final draft).
Please **DM me or email us at [email protected] with stories, memes, quotes** or whatever else about Eleuther and what it has been to you this past year if you wanna contribute!
Daj#7482: Pinned a message.
Kia#2550: Congrats Connor 💐
Drakkaa#3367: You made amazing progress in a year, Congrats!
Singularity#9001: Hell yeah! This project went super far, sent in some comments of my own
Spy#9778: Has there been a recent writeup on the relative importance of various things for transformer training? (learning rate schedules, initialization, other)
Spy#9778: I see one from 2018, but I imagine some things have been learned since then
StellaAthena#3530: @Spy GPT-3 paper appendix includes some info.
Spy#9778: tyty
robot236#4169: Hello everyone !
Thrilled to be here. Recently found this community when I was tinkering around with visual transformers. So, thought of introducing myself.
I am an undergrad researcher at my school's medical imaging lab. Worked for a while with a computer vision startup. I'm also part of my school's robotics club, so built a couple of robots as well. Currently a swe intern at a big bank.
Nice to meet you all and would be happy to contribute in building your models!!
EricHallahan#1051: FRC or VRC?
robot236#4169: You mean VEX and FIRST, right? Nah, participated in neither. I'm not from the states. I mostly built robots which were smaller and cheaper, just for experimenting with stuff. But most competitions I went to were based on sims (gazebo mostly), so I couldn't try building larger robots.
EricHallahan#1051: Ah, cool, bigger robots are expensive and a pain to move around, and I was on the VRC side. :berk:
mr_seeker#1337: I know a couple of guys who were in robotics, but more the destructible kind...
robot236#4169: oooh the battlebots kind?
inox#5400: the ones on government contracts that can only say they work on "devices"? |
StellaAthena#3530: I did First Robotics as a kid @EricHallahan
Louis#0144: @zphang I’m by NYU
Louis#0144: On little island
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/855535281641488384/image0.jpg
guac#4716: woah looks like a decent little park lol i might stop by there next week. did you have to make a reservation?
Louis#0144: Guac is jealous
Louis#0144: Yes
Louis#0144: About a month in advance
guac#4716: whaaaaat lol i just made a reservation for next week. :goose5:
Louis#0144: Wtf
Louis#0144: Jealous
EricHallahan#1051: I just arrived in State College.
guac#4716: it's mid-week, everything fri-sunday is sold out lmao
zphang#7252: tbh I thought the island would be more island-like
zphang#7252: it's right next to land basically
Louis#0144: Yeah
guac#4716: (little peninsula)
mgostIH#0245: Whaaat
mgostIH#0245: You have to make reservations to sit in a park?
guac#4716: it's more an art installment...we'll see how long it lasts loll |
quinn#9100: @Daj you said "if you figure out kube help us out" what are the ops/infra/kube needs of the group?
quinn#9100: I might know someone who has skills and bandwidth
Daj#7482: Uhh I don't know if it's still a bottleneck atm, but it's been a recurring pain lol. @Sid @bmk any input?
Sid#2121: it mostly works, we're just persistently unable to spin up the last pod in our quota
Sid#2121: really the only kube thing we need is someone to ping when everything's going tits up and we're confused
Daj#7482: Which is mostly a CW issue
Daj#7482: I think
quinn#9100: a cw issue?
Sid#2121: coreweave
quinn#9100: ah
quinn#9100: what part of the process is on kube?
quinn#9100: out of curiosity
Sid#2121: just spinning up the gpu pods
Sid#2121: and pvcs etc
quinn#9100: word
quinn#9100: i don't understand the training i guess. how can it be across multiple pods, wouldn't latency be too much of a bottleneck ?
Sid#2121: it is lol :berk:
Sid#2121: at least, in the pods we have now
Sid#2121: but in the SOTA gpu pods which we'll be getting access to soon ™️ you have infiniband connecting all the pods which really reduces the latency times
Sid#2121: an allreduce in a gpt-3 size model should take about 1-2 seconds |
Sid#2121: (across all nodes)
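A back-of-envelope check of that figure (every constant below is an assumption for illustration, not CoreWeave's actual specs):
```python
# Ring all-reduce moves ~2*(n-1)/n of the gradient payload over each link.
params = 175e9          # GPT-3-size model
bytes_per_grad = 2      # assumed fp16 gradients
payload_gb = params * bytes_per_grad / 1e9  # ~350 GB
bandwidth_gb_s = 200    # assumed per-node aggregate InfiniBand bandwidth
n_nodes = 64            # assumed node count

seconds = 2 * (n_nodes - 1) / n_nodes * payload_gb / bandwidth_gb_s
print(f"~{seconds:.1f} s per all-reduce under these assumptions")
# Prints ~3.4 s here: same ballpark as the 1-2 s estimate above; faster
# links or overlapping communication with compute closes the gap.
```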
Louis#0144: Exciting
𓅬 gabriel_syme 𓅬#3220: man I miss outside
Sid#2121: Can you... not go outside?
𓅬 gabriel_syme 𓅬#3220: Lockdown in malaysia now. I can sort of go, but there's nothing to do
𓅬 gabriel_syme 𓅬#3220: also, outside is not a big thing in the tropics. It's mostly malls and such, I miss sitting on the grass and chatting (or going to the beach for a coffee)
EricHallahan#1051: Proof https://cdn.discordapp.com/attachments/729741769738158194/855573500672606228/image0.jpg
EricHallahan#1051: Yes, I had ice cream.
EricHallahan#1051: Unfortunately I couldn't meet :lucid: :berk:
Sid#2121: :berk:ey creamery
EricHallahan#1051: *Really* good ice cream.
Kia#2550: Wait :lucid: ?
EricHallahan#1051: "Ice Cream"
Kia#2550: Damn :goosegirl3:
Kia#2550: Nonetheless cool ice cream store
andreas#5842: ought is making a few-shot natural language classification benchmark for real-world tasks, starting with the tasks we've seen on elicit.org. contribute a dataset and be coauthor on our paper? https://benchmark.elicit.org
andreas#5842: we're looking for spreadsheets with 100+ rows
andreas#5842: examples:
- organizations that you classified by type (company, nonprofit, governmental)
- papers that you want / don't want to include in a lit review |
- messages from participants in a psych experiment that you labelled by content (about world dynamics, about goals)
andreas#5842: we'll evaluate gpt-3, gpt-j-6B, and a few other language models with various prompt settings as baselines to figure out what works best in practice
andreas#5842: happy to answer questions here or via dm
StellaAthena#3530: “Originality” is more a spectrum than a discrete variable in my experience. Here’s some examples of things that could reasonably be called “original” going from most to least original:
1. I write a book
2. I go out into the world and collect text that has been written by other people.
3. I take an existing dataset that contains text, and reprocess it to make the text more usable or more prominent.
4. I take an existing dataset that’s never been used in ML but has been used in some other field.
StellaAthena#3530: @andreas Do you have a feel for where along this spectrum you want submissions to be?
andreas#5842: originality isn't core per se. the key requirements are (a) either the original dataset has a permissive license, or you otherwise have the rights to relicense for the benchmark and (b) the dataset has the "real-world" property which i'd operationalize as "someone would pay to run classification tasks like this one". from your categories, i think (4) might be closest to naturally satisfying (b) but any of them could
chirp#4545: https://huyenchip.com/2020/06/22/mlops.html
chirp#4545: This sentence really jumped out to me:
> The size of the TensorFlow team is rumored to be close to 1000.
chirp#4545: Feels hard to believe but maybe this is why TF has so many inconsistent interfaces lol
chilli#5665: This is accurate from what I know
chilli#5665: Although I think that’s also including things like XLA, tensorboard, tfjs, etc.
chirp#4545: Ah that makes more sense
chilli#5665: But yeah, it’s massive lol
chirp#4545: Curious which one of these things will achieve the most traction in the long run |
chirp#4545: Like, TF is already falling behind AFAIK
chilli#5665: Which one of what things?
chilli#5665: TF is dead in the water imo
chirp#4545: The things that the 1000 people work on
chirp#4545: Off the top of my head XLA/JAX seem like the biggest successes?
chirp#4545: Maybe TPUs too if those fall under that group
chilli#5665: Well, it’s hard to say that TF wasn’t a success lol
chirp#4545: Fair
chirp#4545: I guess I meant long term
chilli#5665: Hard to say
chirp#4545: Yeah
chilli#5665: Tensorboard is being used by a lot of people everywhere, although perhaps it’s now getting outcompeted by companies like W&B
chirp#4545: Yeah if everyone standardizes on PyTorch+W&B+ONNX/w/e doesn’t that bypass everything Google is building?
AI_WAIFU#2844: I bet TPUs are gonna kick the bucket eventually. They only exist because Nvidia has no competition
chirp#4545: I can totally imagine a future where Google has no place in the ML tech stack
chirp#4545: Which is kind of amazing
chilli#5665: I disagree
chilli#5665: Google will always be present since researchers at google will use google things
chilli#5665: But I mean, even now, I think a lot of researchers are using a google-free stack
chilli#5665: Really? I’ve been pretty impressed with TPU development so far |
chirp#4545: Yeah Google won’t ever totally go away, but I guess I assumed they’d have much more influence
chirp#4545: They do have influence on the modeling side obviously
chirp#4545: But tooling maybe not
AI_WAIFU#2844: The problem with TPUs is that they're google internal and probably always will be. You can't buy one and even if you could the customer service would probably be garbage.
chilli#5665: I think what you’ll see is that google will eventually adapt
chilli#5665: Like, that’s why they opened up TPUs to pytorch lol
AI_WAIFU#2844: Like either google sells their TPUs, or graphcore/sambanova/tenstorrent sell their hardware and eat the market.
chilli#5665: my main worry about relying on TPUs is that TPUs seem fairly tied to Google's cloud ambitions
kindiana#1016: why is nobody doing boring systolic arrays lol, but always more "spicy" architectures
kindiana#1016: except huawei I guess
chilli#5665: smh
chilli#5665: but huawei
chilli#5665: rip US
AI_WAIFU#2844: What does groq do?
AI_WAIFU#2844: Or are they just inference?
AI_WAIFU#2844: https://groq.com/wp-content/uploads/2020/04/Groq-Rocks-NNs-Linley-Group-MPR-2020Jan06.pdf
AI_WAIFU#2844: Looks like a phat systolic array to me
kindiana#1016: yeah its pretty close
chirp#4545: From reading the Tenstorrent interview my guess is (1) need for data movement (2) need for general-purpose compute inside the chip
AI_WAIFU#2844: Regardless though, from a business perspective the leasing nature of TPUs introduces some large risks. Like if you build all your infra on TPUs, and google pulls the rug or jacks the prices up, you're fucked. |
AI_WAIFU#2844: And that's before you consider issues like privacy. There might be some data that you straight up just can't put on google's servers.
AI_WAIFU#2844: So you need an on-prem solution
StellaAthena#3530: Okay, I have some good datasets to contribute!
gdawg16#0493: lads
gdawg16#0493: I think replika has started using gpt-neo
cfoster0#4356: 😮
gdawg16#0493: :hap:
cfoster0#4356: :tribalism:
cfoster0#4356: What makes you think they are?
Kia#2550: Liberate everything to :tribalism:
gdawg16#0493: idk its just talking in longer more coherent sentences now
Kia#2550: Nonetheless yeah any idea?
Kia#2550: Hmm
StellaAthena#3530: This is why I wish we had worked out a watermarking scheme :/
kurumuz#5695: @StellaAthena watermarking scheme?
gdawg16#0493: o my god its kuru novelai
alexyz#3459: what does that mean exactly?
alexyz#3459: watermarking a model?
kurumuz#5695: thats what im curious about
alexyz#3459: maybe you could add in training into the GPT-J model that would be something like: |
```!eleuthercheck: this is made by eleuther```
kurumuz#5695: we can credit you guys on the main page
alexyz#3459: and then whenever anyone used that command, it would respond using that response 🤔
kurumuz#5695: @alexyz poison the well
kurumuz#5695: then detect the poison
cfoster0#4356: So like one of the goals of the Radioactive Lab was to figure out a way to watermark the model, meaning that there'd be a way to trace whether an instance out in the wild is/was fine tuned from an EAI model. Originally trying to adapt that https://arxiv.org/abs/2002.00937
gdawg16#0493: credit coreweave too so they get more money and can build things faster
kurumuz#5695: credit coreweave for what
cfoster0#4356: But there were nontrivial issues trying to reproduce this paper so the effort mostly fizzled out
kurumuz#5695: @gdawg16 supplying compute to eleuther? i guess that is the logic
gdawg16#0493: yaa
guac#4716: how do you watermark a language model hmmm
alexyz#3459: well I proposed a method
alexyz#3459: you could finetune the final model with some commands to identify it
alexyz#3459: and *maybe* it would respond using those command responses
alexyz#3459: /shrug
guac#4716: but then if someone finetunes the model how can you be sure the finetuning watermark will continue to reproduce the identifier
guac#4716: are there papers on this? sounds interesting
gdawg16#0493: maybe stella was just memeing :berk:
bmk#1476: but coreweave compute wasn't used at all for 6B or 2.7B lol |
alexyz#3459: just make a license that doesn't allow anyone to finetune it :berk:
guac#4716: (recent-ish paper if any one is interested https://arxiv.org/pdf/2009.03015.pdf)
gdawg16#0493: oh wat
alexyz#3459: they used TPU Research Cloud
alexyz#3459: they're using Coreweave for their large model they are working on
gdawg16#0493: ohhhhhhhhhhh
StellaAthena#3530: @alexyz @kurumuz Sorry, got distracted. @cfoster0 is spot on, I had been trying to build watermarking systems for language models that were resistant to fine-tuning. Things fizzled out because the paper he linked to (which my ideas were based on) doesn't really work.
StellaAthena#3530: Or, it does work, but it only works for categorical classifiers and only when there are a lot of categories. For example, the same code that worked on cifar-100 failed to work at all on cifar-10
StellaAthena#3530: The tl;dr of the method is that if I have n classes and pick a subset of size k < n uniformly at random, then I can induce a correlation in the logits that doesn't change the actual output but which is extremely unlikely to arise by chance
StellaAthena#3530: Unfortunately it relies on the fact that n choose k is big (it's roughly n^k / k! actually) and doesn't work if that is false or if there aren't categorical classes
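The numbers behind the CIFAR-100 vs CIFAR-10 gap, as a quick sanity check (standard-library Python, nothing project-specific):
```python
from math import comb, factorial

# The count of possible secret subsets is C(n, k); for k << n it's
# roughly n**k / k!, so many classes => chance collisions become tiny.
for n, k in [(10, 3), (100, 3), (100, 10)]:
    print(f"n={n:3d} k={k:2d}  C(n,k)={comb(n, k):,}  "
          f"n^k/k! ~ {n**k / factorial(k):,.0f}")
```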
StellaAthena#3530: I have some ideas about how one might approach it for language modeling but it's hard and very unclear how to actually carry out lol.
guac#4716: watermarking a model seems so damn brittle lol
StellaAthena#3530: I am deeply interested in ML security and want to work on this in the future tho. It just was too much work for not enough payoff
StellaAthena#3530: Not always.
AI_WAIFU#2844: I feel like we can get useful watermarking but we gotta do it at train time.
AI_WAIFU#2844: Cause' I think the last time we tried we paid in lost performance.
StellaAthena#3530: Yeah, that's the other thing. I figured out my idea after we started training GPT-Neo and forgot to bring it up again when we started training GPT-J
AI_WAIFU#2844: Whatever, we'll do it for 20B whenever we get around to that
guac#4716: well i meant in the case where the weights are made available
guac#4716: i feel it'd be much easier for a closed model |
AI_WAIFU#2844: Well if you do it right you can hide *how* you watermarked it.
StellaAthena#3530: Well, if we only release the watermarked one it's not very easy to tell which weights have been modified
AI_WAIFU#2844: So you can spot the model but others can't remove the mark
guac#4716: ahhh i see and finetuning would probably only shift weights a little bit in the grand scheme?
StellaAthena#3530: Yeah
AI_WAIFU#2844: yeah you move all the weights a tiny bit
guac#4716: v interesting thanks for introducing me to this niche lol
StellaAthena#3530: Actually, that's something else I want to do. I want to take two different transformers of the same size and finetune them on many different datasets and see if we can tell which ones had the same initial training
StellaAthena#3530: It seems intuitive that this would be doable, but nobody has worked on it systematically
gdawg16#0493: they could just move the weights randomly a bit to undo the watermark and then finetune boom gottem
StellaAthena#3530: You need to happen to move them randomly in a way that undoes the watermark tho
gdawg16#0493: oh 😦
bmk#1476: it's not a priori obvious either way wrt whether undoing a watermark is easy
StellaAthena#3530: I very much agree
bmk#1476: as far as i can tell, nobody has done this before with a large transformer
bmk#1476: prior work seems to all be image models
StellaAthena#3530: yeah
StellaAthena#3530: The largest I've seen for a transformer was a small BERT-like model
StellaAthena#3530: And the results were equivocal
StellaAthena#3530: lol |
bmk#1476: a harder question is: is it possible to make it so that even if the entire watermarking process and data is public knowledge, the watermark is still difficult to remove without expending a whole load of compute
bmk#1476: it seems that if you knew what the watermark looked like, you could selectively remove it
bmk#1476: and any watermarking scheme that relies on secrets seems not robust enough
StellaAthena#3530: The standard set-up in computer security is to say that the methodology is public but that you can privately set parameters.
bmk#1476: i think that's not good enough
bmk#1476: or rather
bmk#1476: i think it must be possible for anyone to have the information necessary to verify if a model is watermarked without compromising the watermark
alexyz#3459: my question is what is the justification for watermarking :thonk:
guac#4716: (you can point your fingers at rogue models lel)
AI_WAIFU#2844: I kinda doubt it.
StellaAthena#3530: That’s unrelated to what you said the first time
gdawg16#0493: so that when replika uses it you can say AHA! we got u
StellaAthena#3530: I also think that’s not necessarily a requirement
bmk#1476: im clarifying what i meant
StellaAthena#3530: Think about public key encryption
bmk#1476: i dont think thats a good example
StellaAthena#3530: Why not
bmk#1476: a good example is signatures
bmk#1476: a signature where if you can verify it you can also forge it would be useless
StellaAthena#3530: Nobody is suggesting that. |
bmk#1476: i am
bmk#1476: here
StellaAthena#3530: I mean that the thing you’re saying isn’t arguing against anyone
bmk#1476: this is basically isomorphic to signatures
bmk#1476: in that youre allowed to have secret info, as long as you dont need the secret info to verify
StellaAthena#3530: Okay, so this started off as a question about whether the following is good enough:
> The standard set-up in computer security is to say that the methodology is public but that you can privately set parameters.
Now it seems like you’re conceding that it is, and we are discussing what the particular security properties we wish the system to have are
bmk#1476: sure, i'll concede that
StellaAthena#3530: I think there’s a good conversation to be had about whether it makes sense to require watermarks be publicly verifiable
StellaAthena#3530: But you never actually conceded my comment until just now, so I thought it was still being contested
bmk#1476: im ok with there existing privately set parameters as long as none of them are necessary for verification
StellaAthena#3530: Verification of what? The presence of the watermark?
bmk#1476: yeah
bmk#1476: reason being:
1. it's way too hard to hold onto a key like that such that if you leak it you can break the watermark but you still need it to verify so you cant throw it away. meanwhile if the secret info is only needed during the watermarking process we can safely destroy it
2. how do you convince anyone else that you're not lying about the watermark presence
StellaAthena#3530: 2 isn't always desired. When a government provides arms covertly to another country (or even better separatists) they have a vested interest in both being able to track said arms and have them not traced back
StellaAthena#3530: It is desired in contexts where public knowledge of a fact is important |
bmk#1476: you mean you want it to be impossible for you to prove to anyone else that you watermarked it even if you wanted/were compelled to prove it?
bmk#1476: @kindiana it's a legit thing, zero knowledge proof
StellaAthena#3530: No, that's also not a thing I said
kindiana#1016: what's the point of watermarking if you can't prove that you did it
kindiana#1016: lmao
StellaAthena#3530: In fact, as you just mentioned it is possible that there is a fact that I can verify, I can prove to someone else, and nobody else can verify without me
bmk#1476: you can prove to yourself but you cant be forced to prove that you did
bmk#1476: the most obvious use case is secret ballots
bmk#1476: you know who you voted for but you dont want it to even be possible to prove to anyone that you voted for X
kindiana#1016: right but that's not really a thing we want for our case?
StellaAthena#3530: Also sometimes you genuinely don’t care what other people think
StellaAthena#3530: Yeah we are talking in general, not for our specific usecase
bmk#1476: can you explain what youre trying to argue about 2 then?
bmk#1476: if you have 2, and you're ok with being open to being compelled, you can always just not reveal the data needed to verify the watermark
StellaAthena#3530: Sometimes you want it to be the case that
1. there is a fact that I can verify
2. I can prove to someone else that it holds
3. nobody else can verify without me
bmk#1476: so your proof is only valid to the other person because they know for sure they aren't colluding with you
StellaAthena#3530: Yes |
StellaAthena#3530: (I mean, that’s typically a given no? People don’t collude with you to fool themselves)
bmk#1476: hm i think i got zero knowledge and coercion resistance mixed up
bmk#1476: so this is zero knowledge
bmk#1476: and what i described earlier is coercion resistance
StellaAthena#3530: Ah
StellaAthena#3530: Yes. Zero knowledge means I can convincingly show you that something is the case, without revealing any more information than the fact that it is the case
bmk#1476: ok so I think both properties are individually useful
bmk#1476: well, they're mutually exclusive but I mean like useful for different use cases
StellaAthena#3530: Coercion-resistance is anti-verification
bmk#1476: zero knowledge is also anti verification, just only slightly
StellaAthena#3530: It is a set-up where it is impossible for you to know who I voted for
bmk#1476: you force all verification to go through you, rather than anyone who has the public data
bmk#1476: you prevent the ability to verify from propagating to other people
StellaAthena#3530: Zero-knowledge is useful for proving that we voted for different people, without revealing who we voted for
bmk#1476: coercion resistance is useful if you want to be able to verify if a model is watermarked by you without anyone ever being able to prove you watermarked it
StellaAthena#3530: For a more practical example, I can prove to you that I know the secret key for a particular RSA pair without revealing any information about the private key
StellaAthena#3530: This is easy: you encrypt a message with the public key, send it to me, and I decrypt it and send you the plaintext back 🤣
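A sketch of that challenge-response exchange using the `cryptography` package (keys are generated inline just for the demo):
```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

challenge = os.urandom(32)                        # verifier picks a nonce
ciphertext = public_key.encrypt(challenge, oaep)  # sends it encrypted
response = private_key.decrypt(ciphertext, oaep)  # prover decrypts it
assert response == challenge  # shows key possession, nothing more
```
(Decrypting arbitrary ciphertexts on demand would make you a decryption oracle in practice, but it illustrates the point.)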
bmk#1476: I think you can achieve coercion resistance in watermarking by making it so that it's trivially easy to find valid "watermarks" in a model
StellaAthena#3530: That doesn’t make a whole lot of sense to me
bmk#1476: so only you know for sure that your watermark data was actually created and then the model was modified to have that watermark, instead of you just grabbing one existing watermark at random |
bmk#1476: ok here's an analogy
StellaAthena#3530: Okay but someone inspecting the model before and after modification will be able to tell, no?
Louis#0144: Baller move
bmk#1476: imagine you have a chessboard and a bunch of the squares are painted red
StellaAthena#3530: Or, alternatively, they can just run your verification algorithm
bmk#1476: in this universe, naturally occurring chessboards have red squares at random
bmk#1476: now, you paint a specific square red to watermark it
bmk#1476: now if you see a chessboard with that particular square red, then you know it's likely one you watermarked
bmk#1476: like maybe you look at the a2 square in particular
bmk#1476: if the fraction of squares that are naturally red is really small and boards are really really big, then the chance of a false positive is negligible
bmk#1476: or you can do two squares, a la bloom filter
bmk#1476: but now imagine someone tries to force you to prove that a given board is watermarked
bmk#1476: they can force you to say that youre looking at a2, but they cant know if you actually chose a2 and painted boards that way, or if you just picked a red square at random and said that
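A toy simulation of the analogy (parameters are arbitrary, just to show the shape of the argument):
```python
import random

# Squares are "naturally" red with probability p, so checking one
# pre-chosen square gives a false-positive rate of p per board;
# checking two independent squares gives ~p**2, bloom-filter style.
p, boards = 0.01, 200_000
hits = sum(random.random() < p for _ in range(boards))
print(f"one-square false positive rate: {hits / boards:.4f} (expect {p})")
print(f"two-square expectation: {p**2:.6f}")
```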
gdawg16#0493: i am rly good at chess no lie
StellaAthena#3530: That’s really not how this works.
bmk#1476: ?
bmk#1476: im aiming for coercion resistance
StellaAthena#3530: There are a number of errors or nonsensical things in your description of coercion resistance
Louis#0144: One more
Louis#0144: Leo pls |
bmk#1476: ?
Louis#0144: Star
bmk#1476: no
Louis#0144: Lame
StellaAthena#3530: This is a good intro: https://eprint.iacr.org/2002/165.pdf
bmk#1476: I'll read that later
bmk#1476: but I thought coercion resistance in the voting context just means you can't prove you voted a certain way even if you wanted to
bmk#1476: I'm extending it to this context to mean you can't prove you watermarked a given model even if you reveal all secret information you have
StellaAthena#3530: Yes, but it doesn’t use hidden information. The adversary knows everything that you know.
StellaAthena#3530: In this case, that would include the square you marked
StellaAthena#3530: Trust has nothing to do with it.
bmk#1476: to be clear, the square I mark is a string representing a coordinate
bmk#1476: but given such a string, nobody can prove that this string was actually used to watermark this model
bmk#1476: if you coerce me to release my "a2" string, you cant know that i didnt just write down a2 because i saw a red square at a2
bmk#1476: meanwhile if i coerce you to release your rsa private key i can be ~100% sure that this indeed is the private key that corresponds to the pubkey
bmk#1476: and that moreover nobody else could have just accidentally stumbled on that private key
bmk#1476: it's exceedingly implausible that you just accidentally chose the private key that matches the pubkey
bmk#1476: and since it's impossible to prove for sure that a model wasn't created before the key, precommitting the key doesn't work
bmk#1476: since for any verifiable model timestamp you can never prove you didn't have the model earlier
alstroemeria313#1694: is this supposed to be good https://ai.facebook.com/blog/advancing-ai-theory-with-a-first-principles-understanding-of-deep-neural-networks/ |
StellaAthena#3530: I skimmed it yesterday and was quite unimpressed
alstroemeria313#1694: ah
chirp#4545: Out of curiosity why? (I had the same impression but you know a lot more math than me!)
alstroemeria313#1694: what if the model contains information that could only have been known after a certain time
cfoster0#4356: `the 688187th Bitcoin block hash is ...`
Kia#2550: Hm?
Kia#2550: Ow 😄
bmk#1476: assuming the watermark is robust (which it better be or else this is all pointless), you can always tune it with stuff after watermarking
bmk#1476: this works to show that you *timestamped* it between two blocks
cfoster0#4356: You timestamped it *after* some block
bmk#1476: but any part of the thing that doesn't directly depend on the earlier block hash can be precomputed
bmk#1476: and you can include it in the next block
StellaAthena#3530: There is some preliminary research on “temporal cryptography” but it’s kinda sus. The main thing we can exploit is the progress of technology, but that’s pretty hard to predict
Deleted User#0000: ay yo
Deleted User#0000: EleutherAI is awesome
Spy#9778: @alstroemeria313 @𓅬 gabriel_syme 𓅬 thanks for all the help, samples seem to be good now https://cdn.discordapp.com/attachments/729741769738158194/855844614224347186/unknown.png
alstroemeria313#1694: :)
Spy#9778: these are from the transformer not just reconstructions
alstroemeria313#1694: yay!
Spy#9778: although I do imagine it just memorized everything |
Spy#9778: so I'm gonna do some interpolation experiments and stuff
Spy#9778: it did give this water polo player a human head instead of a ball which is a bit terrifying https://cdn.discordapp.com/attachments/729741769738158194/855844819564756992/unknown.png
𓅬 gabriel_syme 𓅬#3220: lol that's nice
Deleted User#0000: spooky, they all look pretty clean tho
𓅬 gabriel_syme 𓅬#3220: what was your codebook?
Spy#9778: 1024 codes, 256 dim
𓅬 gabriel_syme 𓅬#3220: sounds legit
𓅬 gabriel_syme 𓅬#3220: if you have annotations for the images, next step DALLE 🙂
Spy#9778: I was thinking about porting CLIP to JAX
Spy#9778: what's the memory footprint like?
guac#4716: https://github.com/kingoflolz/CLIP_JAX
𓅬 gabriel_syme 𓅬#3220: using is very minor, I do think Ben has it btw
𓅬 gabriel_syme 𓅬#3220: ah thanks guac!
Spy#9778: ah cool
Spy#9778: and it's haiku as well nice
Cade Gordon#3029: When does one use each of the Jax nn library’s?
Cade Gordon#3029: Do flax trax and haiku all have their own use cases or do people just pick one?
Deleted User#0000: could use discord's emojis, they're already annotated
Deleted User#0000: 🤔 actually you'd have to manually copy them all ig
Deleted User#0000: favourite goose breed? |
Spy#9778: flax and haiku are more like directly trying to do the same thing
Spy#9778: so I _thiiiiink_ it's down to preference
Spy#9778: part of my dataset is from emoji.gg so they have associated names
Spy#9778: the issue is uhhh
EricHallahan#1051: They each have their own pitfalls.
Spy#9778: data sfw-ness
Spy#9778: along various axes
Deleted User#0000: lmao
Cade Gordon#3029: That’s the only thing that’s stopped my peanut brain from using jax so far. PyTorch is PyTorch :)
Deleted User#0000: I don't think u should worry about sfw'ness imo
EricHallahan#1051: Then use `flax.linen`
Deleted User#0000: I think a cool thing about AI is it highlights human behaviour if you just throw everything in there
Deleted User#0000: like how GPT-3 is kinda racist
Deleted User#0000: it's bad
Deleted User#0000: but cool
Deleted User#0000: 🤔
𓅬 gabriel_syme 𓅬#3220: does goose1, goose2, goose3, ... count? :berk:
Cade Gordon#3029: Are libraries easy to shuffle between or do they each have significant style differences?
𓅬 gabriel_syme 𓅬#3220: cool, then you can use the CLIP tricks OAI did
𓅬 gabriel_syme 𓅬#3220: an image of [label] etc. |
EricHallahan#1051: Nah, nothing can be bad when getting paperclipped is an option.
Spy#9778: I really don't want to generate anti-semitic emojis -.-
Deleted User#0000: how can emojis be that nsfw anyway
Deleted User#0000: the worst I can think of is the eggplant and peach emoji
Deleted User#0000: for obvious reasons
Spy#9778: by being explicitly racist
Deleted User#0000: 🤔 do the racist descriptions for the emojis outweigh the normal ones
Spy#9778: nah I meant the content not the descriptions
Spy#9778: there are a pretty sizeable number of emojis which are explicitly anti-semitic or anti-black
Spy#9778: and I really just don't want that in my data
Spy#9778: so I scraped a pretty small subset to be safe
Deleted User#0000: 🤔 can u give examples, I'm having a look at emoji.gg rn
Spy#9778: nah, I just remember when I scraped the full dataset and tried running it I saw some bad shit in the visualizations
Deleted User#0000: hmm
Deleted User#0000: are you looking to make an emoji creator AI?
Spy#9778: I wasn't planning on text to emoji
Spy#9778: just unconditional emoji synthesis
Spy#9778: which is partly done now, although I'll probably add CLIP as another perceptual loss
Deleted User#0000: personally I think keeping more varied examples in the model is worth it, even if some of it is cringe
Deleted User#0000: 🤷 guess I'm just a liberal |
Spy#9778: yeah I mean I agree in principle but
Spy#9778: I'm gonna deploy it to my discord bot
Deleted User#0000: I'd want my emoji synthesiser to make loads of different shit
Spy#9778: and I have friends that use the bot
Spy#9778: so I do want to keep it clean
Deleted User#0000: it's an AI, I've had a look over emoji.gg and it looks like the vast majority of it is wholesome
Deleted User#0000: maybe make 2 modes 🤔
Deleted User#0000: one small set curated one, and one with everything piled on, you could compare how they run with bigger/smaller datasets
Deleted User#0000: sounds interesting, but also a hassle 😑
Spy#9778: yeah I mean I'm already running a GPT-2 large in my bot so GPU real estate is limited
Deleted User#0000: it's *expensive* real estate 😔
Deleted User#0000: maybe one day after we've built a dyson sphere processing power will be enough
Spy#9778: a dyson sphere or like
Spy#9778: one more semiconductor manufacturing plant
Spy#9778: either one really
AI_WAIFU#2844: Sure it can, getting paperclipped is a *good* outcome compared to what could happen if we
*really* fuck up.
EricHallahan#1051: True
StellaAthena#3530: If I use the huggingface model `gpt2`, does anyone know what size the model is?
bmk#1476: 117M |
StellaAthena#3530: Oh
StellaAthena#3530: RIP
bmk#1476: gpt2-xl is 1.5B
StellaAthena#3530: So gpt2 = gpt2-small and the others are named with their sizes.
bmk#1476: yeah
StellaAthena#3530: ty
bmk#1476: dumb convention, i know
StellaAthena#3530: I went through the BIG-bench notebook and then at the end it says something about how you can try it out with gpt2-medium, gpt2-large, and gpt2-xl too and I went "wait, what have I been using?"
EricHallahan#1051: And I consider it not just dumb, but wrong, as *Language Models are Unsupervised Multitask Learners* explicitly states that "GPT-2" is the largest model.
Lorde Phenkka#7311: wat :ohgosh:
Lorde Phenkka#7311: is that dalle or something experimental
Teemochu#8740: yeah unalignment is a sudden change to NaN utility... while aligned-but-bitflipped can be far worse.
Spy#9778: vqgan trained on emojis
Spy#9778: It's unconditional synthesis so I didn't ask it for a murderous water polo player if that's what you're asking
Lorde Phenkka#7311: Oh
Drakkaa#3367: My ssh-connected v3-8 TPU gives me 8 devices with jax.device_count(), as it should.
My colab connected via ssh to the v3-8 TPU gives me WARNING:absl:No GPU/TPU found, falling back to CPU.
Anyone else had this?
guac#4716: i had this problem yesterday. pretty much resolved itself after two hours.... still don't know what happened maybe they're rate limiting TPUs on colab
Drakkaa#3367: Alright, just to be sure, I'm locally connected with port 8888 to my v3-8 TPU, and not the generic TPU from colab
guac#4716: ahhh i see then not sure lol maybe a jax thing
Drakkaa#3367: allright, i'll try and mess with it till it breaks or works 🙂
guac#4716: please report back if you find a solution so you can save my head from future pains 🙂
Drakkaa#3367: Ok i will 🙂
Drakkaa#3367: although jax through colab connected to the big machine has been a bit of a pain in the butt so far
Drakkaa#3367: not had it working right yet
Drakkaa#3367: all the google examples are irrelevant or incomplete
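A minimal sketch of checking what backend JAX actually sees in each environment (public JAX API only; the colab_tpu helper applies to Colab's own TPU runtime, not an SSH'd TPU VM):

```python
import jax

print(jax.default_backend())  # 'tpu' on a working v3-8; 'cpu' if the fallback hit
print(jax.device_count())     # should be 8 for a v3-8
print(jax.devices())

# When using Colab's built-in TPU runtime (not an external TPU VM),
# this setup call is usually needed before JAX will see the TPU:
# import jax.tools.colab_tpu
# jax.tools.colab_tpu.setup_tpu()
```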
StellaAthena#3530: I was having issues getting GPUs earlier today. Not sure if that is connected or not.
Drakkaa#3367: Good to know, thank you Stella, if you have problems too, i feel a bit better
Drakkaa#3367: you're awesome
StellaAthena#3530: But I got a notification basically saying "sorry! none are currently available"
StellaAthena#3530: Got a GPU an hour ago finally.
StellaAthena#3530: Hopefully my computation finishes before it gets taken away again
Drakkaa#3367: i'm getting a 403 Client Error: Forbidden for url: https://storage.googleapis.com/tpu-gcloud-private which i should have access to
Drakkaa#3367: looks relevant i'll dig into it
Drakkaa#3367: Good luck!
Daj#7482: A reminder of this^
**Eleuther's one year anniversary is coming up soon (third of July)!**
|
We are working on writing a retrospective post collecting funny anecdotes, valuable lessons learned and highlights from the last year. We would love to have input from lots of people here (but depending on level of interest I can't guarantee everything will make it into the final draft).
Please **DM me or email us at [email protected] with stories, memes, quotes** or whatever else about Eleuther and what it has been to you this past year if you wanna contribute!
kurumuz#5695: oh man, it feels unreal that it's been only a year
kurumuz#5695: you guys did so much
Spy#9778: https://cdn.discordapp.com/attachments/729741769738158194/855902464296091658/000.png
Spy#9778: @alstroemeria313
guac#4716: what a latent walk jesus lmao
Spy#9778: (kekw is out of distribution btw)
Spy#9778: it's not a walk exactly
alstroemeria313#1694: eheh
EricHallahan#1051: Things didn't really get going until the beginning of this year. :berk:
Spy#9778: it's original/decoded/randomly mixed/one half of each/decoded/original
Daj#7482: That's just when you arrived lmao
Daj#7482: There was plenty going on before that
Spy#9778: I like how it encoded away the smile from the zoop
EricHallahan#1051: Someone said that in the past, IIRC it was Stella but I am not going to put words in her mouth.
StellaAthena#3530: I did not arrive at the beginning of this year
EricHallahan#1051: IIRC it was "we hit our stride" or something like that.
StellaAthena#3530: Oh yea |
EricHallahan#1051: Can I ask why this VQGAN stuff is happening in #general? Should this go somewhere else?
Spy#9778: Oh sorry I don't really know my way around
Spy#9778: I've basically only talked in here and this is where I was getting advice from people
Spy#9778: Where would it be preferred?
guac#4716: image based generative model output usually goes in #art or #the-faraday-cage-archive
Spy#9778: Hmm well it's definitely not art so cage it is
Drakkaa#3367: faraday has some eyewatering examples 🙂
alstroemeria313#1694: cage is for bot output, it should go in #art imo
alstroemeria313#1694: have you considered: computing the gradient of the frechet inception distance and minimizing it directly :bigbrain:
alstroemeria313#1694: (I just tried this for style transfer. Not FID but backpropagating through the squared Wasserstein-2 distance between the empirical means/cov matrices of two VGG-19 feature maps)
alstroemeria313#1694: You have to be able to calculate square roots of symmetric positive semidefinite matrices in a way you can reliably backprop through
Lorde Phenkka#7311: For sure it is, although I'm still sad we won't have a 200b model :KaiserSad:
Maark#6960: what's the model behind BATbot McHorse in #the-faraday-cage-archive ?
EricHallahan#1051: It is a system that the folks in #art (primarily `@alstroemeria313`) developed using VQGANs with CLIP. There is a notebook pinned over there if you are interested in looking at the internals.
Maark#6960: thank you!
Maark#6960: super cool results
EricHallahan#1051: The StyleGAN model is something I threw together by indexing thousands of *W* latents with CLIP embeddings, which I use to init a very short backpropagation session.
alstroemeria313#1694: ahahaha https://cdn.discordapp.com/attachments/729741769738158194/855957961363816480/out.jpg
Deleted User#0000: yoooo
Deleted User#0000: stylegan? |
alstroemeria313#1694: I computed the squared Wasserstein-2 distances between the channel-wise means and empirical covariance matrices of VGG-19 feature maps
alstroemeria313#1694: as a style loss
alstroemeria313#1694: I found https://github.com/msubhransu/matrix-sqrt/blob/master/matrix_sqrt.py and ported it to modern PyTorch (a torch.autograd.Function subclass)
alstroemeria313#1694: (That code is really old, it still has `Variable`s in it and its backward pass for matrix sqrt requires you to call it manually)
alstroemeria313#1694: I used it to do this <https://djalil.chafai.net/blog/2010/04/30/wasserstein-distance-between-two-gaussians/>
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/855958659641507840/Screen_Shot_2021-06-19_at_4.54.04_PM.png
alstroemeria313#1694: So. The reason I wanted this
alstroemeria313#1694: is that Frechet Inception Distance is calculated with eq 1
alstroemeria313#1694: So if I have a matrix square root with a reliable backward pass
alstroemeria313#1694: I can do gradient descent on FID directly
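For reference, eq. 1 being referred to is the squared Wasserstein-2 distance between two Gaussians from the linked Chafai post (FID is this quantity computed on Inception feature statistics); in LaTeX:

```latex
W_2^2\big(\mathcal{N}(m_1, C_1),\, \mathcal{N}(m_2, C_2)\big)
  = \lVert m_1 - m_2 \rVert_2^2
  + \operatorname{tr}\!\left(C_1 + C_2 - 2\left(C_2^{1/2} C_1 C_2^{1/2}\right)^{1/2}\right)
```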
𓅬 gabriel_syme 𓅬#3220: it's beautiful
alexyz#3459: *how*
alexyz#3459: *wow*
AI_WAIFU#2844: God damn
Louis#0144: @kinoc how were the distillation experiments
Louis#0144: im looking to implement the imitation learning paper
Louis#0144: from last fall
alstroemeria313#1694: lol
Louis#0144: VQGAN > StyleGAN
Louis#0144: StyleGAN is awful |
Louis#0144: well it would be good
Louis#0144: if we ever retrained
Louis#0144: lol
alstroemeria313#1694: @alexyz @AI_WAIFU it is a modified version of <https://github.com/crowsonkb/style-transfer-pytorch> with squared Wasserstein-2 distance as the style loss
alstroemeria313#1694: I like it tbh and may make it an option
alstroemeria313#1694: It's a good deal slower than the version on my github
alstroemeria313#1694: ```python
import torch


class MatrixSquareRoot(torch.autograd.Function):
    """Matrix square root for batches of SPD matrices via Newton-Schulz iteration."""

    @staticmethod
    def forward(ctx, a, num_iters=10):
        if num_iters < 0:
            raise RuntimeError('num_iters must not be negative')
        if a.ndim < 2:
            raise RuntimeError('tensor of matrices must have at least 2 dimensions')
        if a.shape[-2] != a.shape[-1]:
            raise RuntimeError('tensor must be batches of square matrices')
        expander = [None] * (a.ndim - 2) + [slice(None)] * 2
        # Normalize so the iteration converges, then run the coupled
        # Newton-Schulz iteration: y -> sqrt(a / ||a||), z -> its inverse sqrt.
        norm_a = a.pow(2).sum(dim=[-2, -1], keepdim=True).sqrt()
        y = a / norm_a
        eye = torch.eye(a.shape[-1], device=a.device, dtype=a.dtype)[expander] * 3
        z = torch.eye(a.shape[-1], device=a.device, dtype=a.dtype)[expander]
        z = z.repeat([*a.shape[:-2], 1, 1])
        for i in range(num_iters):
            t = (eye - z @ y) / 2
            y = y @ t
            z = t @ z
        z = y * norm_a.sqrt()  # undo the normalization
        ctx.save_for_backward(z, torch.tensor(num_iters))
        return z

    @staticmethod
    def backward(ctx, grad_output):
        # Iterative gradient computation, ported from msubhransu/matrix-sqrt.
        z, num_iters = ctx.saved_tensors
        expander = [None] * (z.ndim - 2) + [slice(None)] * 2
        norm_z = z.pow(2).sum(dim=[-2, -1], keepdim=True).sqrt()
        a = z / norm_z
        eye = torch.eye(z.shape[-1], device=z.device, dtype=z.dtype)[expander] * 3
        q = grad_output / norm_z
        for i in range(num_iters):
            q = (q @ (eye - a @ a) - a.transpose(-2, -1) @ (a.transpose(-2, -1) @ q - q @ a)) / 2
            if i < num_iters - 1:
                a = a @ (eye - a @ a) / 2
        return q / 2, None


sqrtm = MatrixSquareRoot.apply```
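A minimal usage sketch (not alstroemeria's actual code) showing how the `sqrtm` defined above makes the squared W2 distance between two Gaussians differentiable end to end; all names here are illustrative:

```python
import torch

def w2_squared(mean_a, cov_a, mean_b, cov_b):
    # ||m_a - m_b||^2 + tr(C_a + C_b - 2 (C_b^1/2 C_a C_b^1/2)^1/2)
    root_b = sqrtm(cov_b)
    cross = sqrtm(root_b @ cov_a @ root_b)
    return (mean_a - mean_b).pow(2).sum() + torch.trace(cov_a + cov_b - 2 * cross)

# Gradients flow through both sqrtm calls:
mean_a = torch.randn(64, requires_grad=True)
a = torch.randn(64, 64)
cov_a = (a @ a.T / 64 + torch.eye(64) * 1e-3).requires_grad_()  # SPD by construction
b = torch.randn(64, 64)
mean_b, cov_b = torch.randn(64), b @ b.T / 64 + torch.eye(64) * 1e-3
loss = w2_squared(mean_a, cov_a, mean_b, cov_b)
loss.backward()
```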
alstroemeria313#1694: This needs to go on my github
EricHallahan#1051: Create a public gist?
alstroemeria313#1694: sqrtm on GPU for SPD matrices with a good backward pass is useful enough that I'm considering making a PyPI package
alstroemeria313#1694: But tomorrow
alstroemeria313#1694: It's super late for me
kinoc#5731: You will have to ask @preetham and @StellaAthena how grows the distillation. Stella has the keys to the clouds ...
kinoc#5731: though I understand it's running
DoesThisUnitHaveASoul#7264: Hey everyone! Just wanted to ask a question real quick. Which tokenization method would you recommend for text summarization nowadays? To me, character level embeddings via a conv1D network that then summarize into word embeddings, which can then be processed through a transformer to generate sentence/paragraph level embeddings is what made the most sense. Do people use character tokenization much nowadays? If not, then what is the current best way of doing so?
Louis#0144: Not many people use character tokenization sadly
Louis#0144: Everyone is using BPE
Louis#0144: Welcome to the age of SolidGoldMagikarp
DoesThisUnitHaveASoul#7264: Byte Pair encoding is the only other thing that makes sense
Louis#0144: 😦
DoesThisUnitHaveASoul#7264: And that's because it's subword level basically
DoesThisUnitHaveASoul#7264: Thanks for the response, any idea why people don't use char level embeddings?
DoesThisUnitHaveASoul#7264: Is there actual evidence that it's less data efficient or something of the sort
bmk#1476: well, it takes up more spots in the context
bmk#1476: so it's obviously less efficient
Louis#0144: 2048 tokens isn’t many
Louis#0144: And linear transformers are still a long ways away from being on par with GPT
StellaAthena#3530: ^^
StellaAthena#3530: Linear transformers are useless (at scale)
bmk#1476: and bpe works well enough if you ignore the occasional solidgoldmagikarp and don't care about rhyming
Louis#0144: Linear transformers are amazing for retrieval btw
Louis#0144: Just to clarify
bmk#1476: [sad gwern noises]
DoesThisUnitHaveASoul#7264: right..
StellaAthena#3530: @Louis If computing attention were free, it wouldn’t make much of a difference at training 100B models
Louis#0144: They let you encode and do retrieval over huge docs
Louis#0144: 99% sure that google search is BigBird
Louis#0144: Or some linear BERT
Louis#0144: I would be floored if it wasn’t
DoesThisUnitHaveASoul#7264: I am working on a project that involves multi modal modelling. I am trying to propose an alternative for imagenet for vision, and showcase why multi modal datasets train better vision models. One of those modalities is textual descriptions.
|
I am trying to make some good design decisions, and right now, I feel like something like BPE will eventually bite me in the ass
DoesThisUnitHaveASoul#7264: Ideally I want to go with char embeddings, and was wondering if anyone tried to use those, and whether there is strong evidence against them
bmk#1476: bpe is good enough
bmk#1476: bpe has problems but for now char models aren't super viable
DoesThisUnitHaveASoul#7264: why is that?
DoesThisUnitHaveASoul#7264: the lack of viability, I mean
Louis#0144: @StellaAthena is it still running
bmk#1476: you use up 3x more context for like basically no benefit for 90% of use cases
StellaAthena#3530: You lose more from the shorter context than you gain from the finer-grained tokens
bmk#1476: maybe some day we will figure out how to run char models viably, but that day is not today
StellaAthena#3530: It’s a work in progress. When there’s results we will share results. Part of this is we aren’t just trying to distill the model, but build robust distillation functionality for GPT-NeoX
DoesThisUnitHaveASoul#7264: Why does it have to be 3x exactly? If I am using a conv1D with say 4x1 kernels and some output size of 128
Louis#0144: No of course
DoesThisUnitHaveASoul#7264: For most corpuses the max char vocab is something like 150
bmk#1476: 3x is just my rule of thumb from working with gpt2
StellaAthena#3530: @DoesThisUnitHaveASoul There are on average 2.5 chars per token
bmk#1476: the exact number is 1/0.29335 I'm pretty sure
bmk#1476: I have that memorized for good measure
DoesThisUnitHaveASoul#7264: Ok, buddy. I'll trust you on this.
DoesThisUnitHaveASoul#7264: Right right! Thanks 🙂 |
StellaAthena#3530: A char model is limited to 2048 characters. A tokenizer model is limited to 2048 tokens = 2048 * 2.5 chars
DoesThisUnitHaveASoul#7264: is the BPE tokenizer as found in huggingface the variant you'd recommend?
DoesThisUnitHaveASoul#7264: something like https://huggingface.co/transformers/model_doc/clip.html#cliptokenizer
StellaAthena#3530: Yeah it’s fine
DoesThisUnitHaveASoul#7264: OK good good.
Sometime ago I came over here with some collab ideas, but I iterated on the ideas internally and ended up with a single project I need to get through before being ready to propose collabs etc. Some time soon I'll come over here with actual experimental results and stuff, and potentially propose some ideas.
Loving this discord. 😉
Louis#0144: I think the better way tbh is enriching a transformer with a CharCNN that isn’t affected by attention
Louis#0144: I’ve seen papers on that
Louis#0144: Looks promising
DoesThisUnitHaveASoul#7264: Yeah, that's what I was thinking too. Some of my students this year also tried the char direction vs BPE, and they got equal or better performance in transliteration
DoesThisUnitHaveASoul#7264: Anyway, thanks y'all, back to coding now 😉
Sphinx#2092: Similar results appear at the byte level.
Sphinx#2092: Maybe you can ask them to run a BPE dropout baseline.
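For reference, a minimal sketch of the char-CNN word-embedding idea discussed above (a Conv1d over character embeddings, max-pooled into one vector per word); the shapes and hyperparameters are illustrative only:

```python
import torch
import torch.nn as nn

class CharCNNWordEmbedder(nn.Module):
    """Embed each word from its characters: conv over chars, then max-pool."""
    def __init__(self, n_chars=150, char_dim=32, word_dim=128, kernel_size=4):
        super().__init__()
        self.char_embed = nn.Embedding(n_chars, char_dim)
        self.conv = nn.Conv1d(char_dim, word_dim, kernel_size, padding=kernel_size - 1)

    def forward(self, char_ids):
        # char_ids: (batch, n_words, chars_per_word) integer tensor
        b, w, c = char_ids.shape
        x = self.char_embed(char_ids.view(b * w, c))  # (b*w, c, char_dim)
        x = self.conv(x.transpose(1, 2))              # (b*w, word_dim, c')
        x = x.max(dim=-1).values                      # pool over char positions
        return x.view(b, w, -1)                       # (b, n_words, word_dim)

# These word vectors can then feed a standard transformer encoder.
emb = CharCNNWordEmbedder()
out = emb(torch.randint(0, 150, (2, 16, 12)))
print(out.shape)  # torch.Size([2, 16, 128])
```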
CRISPR IQ300#6848: Does this look worth trying out? I'm very high level with programming, an audio production master and I can build NN's in TD intuitively from my own ideas, but I have low-level code anxiety. I want to dive in head first with something low-level, yet simple, and I thought "why not AI in Brainf*ck". I found this: https://esolangs.org/wiki/Neural_Brainfuck
CRISPR IQ300#6848: Is the simplicity deceptive?
Spy#9778: I will eagerly clone a future gpt2_brainfuck repo
alexyz#3459: whoever makes that hates themselves |
CRISPR IQ300#6848: How many Neural Brainf*ck symbols do you estimate would need to be written to get the most simple NN going? If it's like more than 1,000 I might continue my search for a slightly higher-level language to make a NN.
CRISPR IQ300#6848: How would I even start programming in Neural Brainf*ck? It looks different than regular Brainf*ck. I can't find an interpreter.
Spy#9778: why would there be an interpreter for it?
ethan caballero#6044: I've heard from multiple people that a googler told them number one ranked signal used by google search is BERT.
CRISPR IQ300#6848: I'm not sure what you're implying...
Spy#9778: based on that page I'm assuming someone just made it up for an esolang entry
Spy#9778: https://cdn.discordapp.com/attachments/729741769738158194/856026245997527040/unknown.png
CRISPR IQ300#6848: Google search made a switch to BERT a few years ago to try to guess what you wanted to search for, and for me and many others it stopped functioning as a search engine because we couldn't find super niche things like programs, they became inaccessible. I think it got slightly better, but it's just not the same since then.
CRISPR IQ300#6848: That was my assumption, but then I found this https://gitlab.com/domob/neuralbf
bmk#1476: I bet they probably use T5 or even some big MoE model these days
bmk#1476: google loves MoE for some reason
bmk#1476: probably because of cheap inference
AI_WAIFU#2844: Yeah
AI_WAIFU#2844: Like we love to get the lowest loss, but in practice you need fast inference.
AI_WAIFU#2844: Because $$$
ethan caballero#6044: So google got better at the head of the distribution of queries and worse at the tail of the distribution of queries?
AI_WAIFU#2844: That feels right to me
AI_WAIFU#2844: I've been thinking and I think we need a new search engine
bmk#1476: I prefer to think of it as: we have the luxury of being able to optimize for loss alone because of our unique status
CRISPR IQ300#6848: It was pretty much wholly terrible for a while, but now that's accurate; the tail end got slightly better.
bmk#1476: we don't need to think about practicality, or profitability, or how "novel" it sounds to the people giving out grants
bmk#1476: we can just.. *do* it
CRISPR IQ300#6848: I still want to be able to quickly load up a saved "quick window" system in Chrome so I can open 100 tabs and then save it to a list, like this Chrome window is my Brainf*ck window. And I'd like a one-button-press backup to folder for that window. I'd like to organize these windows or all my tabs in a tree graph.
CRISPR IQ300#6848: "Organize all YT tabs related to ___ in a new window, but before that let me visualize the tabs in a graph and highlight and select them and drag them around to exclude some from opening in the window, then save that window. This would be the optimal way to organize research.
CRISPR IQ300#6848: 95% of the time when I open a new tab it's something cool but not something I need at that moment.
bmk#1476: :guilty:
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/856029063168786442/unknown.png
CRISPR IQ300#6848: omg man same
bmk#1476: this is just one browser, i have another 6000 in the other one
CRISPR IQ300#6848: Yep, I've abandoned many old sessions, so maybe in my lifetime like 30,000 tabs I never got to, that I would love to have AI powered organization over.
bmk#1476: also i have probably a thousand or two tabs in mobile chrome but it stopped telling me after i got past 100
CRISPR IQ300#6848: "This browser feels so fast and fresh", a few months later 1,000 tabs open and I have to worry about backups lol
bmk#1476: the smily face of death
CRISPR IQ300#6848: A neural-browser that can import all the URLs from all my browsers and even automatically archive the pages, so if a page goes down, no worries; if a YT channel gets terminated, at least I know what the missing video from my AI playlist is. This would probably be the ultimate workflow improvement for the world.
bmk#1476: doesnt gwern have the archiving part down
CRISPR IQ300#6848: Can I search for a deleted YT video URL and see the thumbnails? I'm not familiar with it.
bmk#1476: i meant like gwern has a pipeline for archiving websites he visits
CRISPR IQ300#6848: This? https://www.gwern.net/Archiving-URLs
bmk#1476: yeah
CRISPR IQ300#6848: This is incredibly extensive, but do you know if anyone has implemented it into a Chrome plugin or something? |
bmk#1476: no idea
CRISPR IQ300#6848: I found this, gonna look more into it, if some of the juiciest features of gwern can be implemented into it, there we go I think. https://chrome.google.com/webstore/detail/tabs-outliner/eggkanocgddhmamlbiijnphhppkpkmkl
triggerhappygandi#0001: Do you have 128GB RAM?
triggerhappygandi#0001: And I thought I was making things too cluttered with 32 tabs.
triggerhappygandi#0001: How do you even keep track of what's where with 7000 tabs?
Teemochu#8740: Honestly what I'd like from a search engine is website categories (eg: I could search for "Columbia outer jacket cat:discussion" and get only forums/Reddit, or "Disney world vacation fun cat:blog" and get just blogs)
triggerhappygandi#0001: Doesn't Google allow something similar
triggerhappygandi#0001: Well it doesn't allow to search by category iirc but they do allow a search by site
Jonnathan#1234: Lol and I thought I had a lot with a few hundred on multiple browsers.
Teemochu#8740: And yeah I'm pretty sure everything G switched to some leaky ML algo a while back, youtube for one no longer gives only the things that match a wording, and the other recommendations they do give that don't match the words are kind of useless
Teemochu#8740: (You don't totally see this until you're down to where you'd only see <10 or so videos in the results under the old system, but I assure you the change was made and it causes a large amount of clutter that's somewhat akin to high-frequency Fourier noise in that it's a degradation most people never notice but that's blatant under a sharp enough eye.)
𓅬 gabriel_syme 𓅬#3220: there's also this concept of bookmarks
𓅬 gabriel_syme 𓅬#3220: it's wild, you can save your tabs without keeping them open
thenightocean#6100: have u checked this? https://www.mightyapp.com/
𓅬 gabriel_syme 𓅬#3220: interesting, is it good?
thenightocean#6100: dont know, never used it. Probably not a good idea if I care about privacy
Drakkaa#3367: https://kirstenhacker.wordpress.com/2021/01/11/eleuther-ai-plagiarist-in-the-making/
Drakkaa#3367: isn't this slander ?
Drakkaa#3367: I tried reaching out to her and point out that the pile does not contain any of her work and consists of opensource textfiles
Drakkaa#3367: but no response yet |
Teemochu#8740: The Pile does contain documents from a web crawl; it wouldn't really be able to compete with OpenAI's similar dataset otherwise
𓅬 gabriel_syme 𓅬#3220: I've seen her articles before I think all have been critical to the got stuff
𓅬 gabriel_syme 𓅬#3220: Gpt*
Teemochu#8740: (in fact OpenAI pretty much *exclusively* trained on the webcrawl stuff)
CRG#8707: This was discussed a while back: https://discord.com/channels/729741769192767510/729741769738158194/800514489031327754
𓅬 gabriel_syme 𓅬#3220: But I feel it's the wrong kind of criticism, the kind that means you don't interact with what is going on, experiment, test, solve, etc
𓅬 gabriel_syme 𓅬#3220: The people who completely take sides like that are imo forgotten.
Drakkaa#3367: I missed that one, sorry
Teemochu#8740: Copyright is one of those things I mostly fear the teeth of and little more... the "bark" of copyright is pretty neutered at this point, but its "bite" could still be weaponized by forces that want to stop libre AI for other reasons.
cfoster0#4356: tl;dr do not engage
Drakkaa#3367: allright, i'll stop engaging haha
Teemochu#8740: like, I don't particularly fear Disney going out of their way to file a suit, but someone like Microsoft or <politrib group> doing so out of a root concern that's non-copyright (e.g. "only our big corp should be able to control what AI can generate", or "down with fake gnus and offensive views") *is* a concern of mine, since copyright seems like it would be the easiest way to nip a model in the bud.
Drakkaa#3367: did read some articles from different authors that could reproduce phone numbers and email from gpt-2/3, so there might be some cleaning of the data needed in the future, unless you *want* it to generate phone numbers/email addresses
Teemochu#8740: Also why I have a copy of Pile on my computer, as well as the [2017] competition subset of Imagenet
Drakkaa#3367: Yes its easy to generate all kind of non pg and offensive views with AI, there might be some resistance there from the pc crowd probably
Teemochu#8740: To that I say the image that maps best to "A cartoon image of two gnus. Two gnus engaging in <redacted>."
Drakkaa#3367: Actually not a bad idea tbh
Drakkaa#3367: this
Teemochu#8740: on a totally unrelated note the-eye limits you to 32 connections at once 😛
Teemochu#8740: so downloading all the files in the pile in parallel isn't viable... on gigabit you should only need about 10 for maximum speed though |
mr_seeker#1337: Legally speaking, you can use a dataset as "fair use" to train the AI
CRISPR IQ300#6848: What else though? What do you do after that? Do you open many bookmarks at once? Then what? I bookmark, but it feels ancient. What is the organization pipeline for actually making rapid use of bookmarks? Any Chrome add-on?
𓅬 gabriel_syme 𓅬#3220: It's not pretty but I think it is not the bookmarks fault but us
𓅬 gabriel_syme 𓅬#3220: Most of those tabs are ancient as well
𓅬 gabriel_syme 𓅬#3220: I was mostly thinking of a temp list you try to empty from time to time
mr_seeker#1337: Can I make a humble request for the pile? Hackaday?
CRISPR IQ300#6848: I was unaware, I was thinking about how gpt3 was trained legally speaking. I guess a Tesla looks at a lot of copyrighted material, so it's only fair, the camera can see anything, and train on anything. If a Tesla crashes into a coca cola truck they still need that data to train it and improve it, so we could even say the same about a robot general AI that learns from the world it sees. This is just my speculation though.
mr_seeker#1337: Well, you use the data to train a new intellectual property. Like you would use snippets of a movie to make critique on it. Yes, it might "plagiarize" some content, but it requires effort and luck.
mr_seeker#1337: And who says "her" content is unique?
StellaAthena#3530: __Non-Americans:__ If we are having a conversation and you want to refer to the capital city of the US, do you say “Washington,” “Washington, DC,” “DC,” or something else?
If I were to use one of these (without the specific context of the US capital, perhaps by saying “I live in [name]”) would all of them be understood correctly?
𓅬 gabriel_syme 𓅬#3220: I probably go for B most of the time
Louis#0144: When in Canada I always heard people call it B
bmk#1476: wait, the US has a capital city?
Dromarion#3383: I know in Vancouver we usually have to make the distinction from Washington state since we're right next to it.
bmk#1476: what next, are you going to tell me that the austro hungarian empire no longer exists?
Dromarion#3383: And Americans always think I'm talking about the Vancouver in Washington instead of Canada anyway lol
bmk#1476: there's a Vancouver in Washington?
bmk#1476: wtf |
EricHallahan#1051: Yeah, there were a bunch of people who went there for the winter olympics.
EricHallahan#1051: :omniberk:
alexyz#3459: where's omnigoose
bmk#1476: be the change you wish to see
EricHallahan#1051: \:omnihonk\:
EricHallahan#1051: https://www.smh.com.au/lifestyle/oops-wrong-vancouver-olympic-tourists-confusion-20100204-nfg7.html
EricHallahan#1051: > "America's Vancouver", as a former town mayor liked to describe it, sits 400 kms south of the Olympic host Vancouver and has a population of some 165,000 people -- far fewer than the Canadian city.
Louis#0144: Canada has a capital flock of geese that moves from province to province
Louis#0144: We’ve been over this
bmk#1476: this is Québec's fault, probably
Louis#0144: 🤮 French 🤮
chilli#5665: Definitely not A, except in certain contexts
chilli#5665: DC is probably most common
chilli#5665: But if it’s a political discussion then Washington is common
EricHallahan#1051: I was about to say that. `A` is valid in political conversations, otherwise it is too ambiguous to be useful.
chilli#5665: Like, if you said, “Washington issued a statement condemning China”, I’d know what you mean
chilli#5665: If you said, “I lived in Washington”, I’d be ???
EricHallahan#1051: "She died in her home in Washington on Tuesday."
EricHallahan#1051: It is totally ambiguous.
Louis#0144: She died inside a giant statue of former president Washington |
bmk#1476: hot take: DC statehood is a bad idea
bmk#1476: the flag with 51 stars would look so terrible
bmk#1476: look up the proposed 51 star flag
bmk#1476: I propose the compromise solution of also merging the Dakotas at the same time so the number of states remains at 50
Louis#0144: We could just get rid of NJ
Louis#0144: no one would mind
mgostIH#0245: Merge North and South Dakota
bmk#1476: that's what I just said
Louis#0144: no, subdivide them further
mgostIH#0245: Oh
bmk#1476: look at how horrible this looks https://cdn.discordapp.com/attachments/729741769738158194/856193491117539338/1280px-US_flag_51_stars.svg.png
mgostIH#0245: Then we had the same idea 😎
Louis#0144: 16 Dakotas
mgostIH#0245: They should draw 50 stars inside the box and one star at infinity
bmk#1476: I guess it was worse in the past
alexyz#3459: there's a better proposal
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/856193684516634634/1280px-Flag_of_the_United_States_18191820.svg.png
bmk#1476: this is horrific
alexyz#3459: Or give PR and DC both statehood
pragmaticml#1730: Idk let's just bring in a bunch of other states at the same time. Puerto Rico, Guam, ... |
alexyz#3459: then you have 52
alexyz#3459: Guam doesn't have enough people for statehood
alexyz#3459: it would mess up the Electoral College and the Senate and the House of Representatives completely
bmk#1476: I'd be ok with a nice round 64
bmk#1476: oh yeah the senate is another thibg
alexyz#3459: Guam getting 2 senators would be nonsense
bmk#1476: having exactly 100 senators is just *chefs kiss*
alexyz#3459: The senate should not exist
alexyz#3459: There is no point of having it
alexyz#3459: https://cdn.discordapp.com/attachments/729741769738158194/856194179335323668/image.png
EricHallahan#1051: The point has been lost upon the ages.
alexyz#3459: Another 51 state design
alexyz#3459: I love this one
chilli#5665: Have you guys seen the proposal for greater Idaho
chilli#5665: lol
mgostIH#0245: Merge all the states into one single super star
The United States of Florida
bmk#1476: @alexyz I think you misunderstand this conversation, this is shitposting not legit policy posting
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/856194316040273950/image0.jpg
alexyz#3459: That is beautiful |
alexyz#3459: Give it northern Nevada too
chilli#5665: Basically, a bunch of counties from Oregon want to secede and join Idaho
pragmaticml#1730: https://xkcd.com/2394/
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/856194445041205248/1280px-US_38_Star_Flag_concentric_circles.svg.png
StellaAthena#3530: What we need to do is make DC, Puerto Rico, and Guam states
StellaAthena#3530: That way we can finally be “one nation, under god, indivisible…”
bmk#1476: and then merge the Dakotas?
alexyz#3459: American Samoa: *cries*
bmk#1476: and merge the Carolinas
chilli#5665: It’s a prime joke
chilli#5665: I believe
bmk#1476: and merge some other pair of states so we can stay at 50
EricHallahan#1051: Just remove the "under god" section and restore it to its canonical form.
pragmaticml#1730: West Virginia and Virginia should just become Big Virginia
alexyz#3459: remove the pledge altogether
bmk#1476: this was the flag of the US for 7 long years https://cdn.discordapp.com/attachments/729741769738158194/856194926646919218/1280px-Flag_of_the_United_States_18511858.svg.png
EricHallahan#1051: I am shitposting.
bmk#1476: think about that
alexyz#3459: but why? they are literally seperated by a mountain range
bmk#1476: imagine flying this flag |
bmk#1476: it's so utterly horrific
alexyz#3459: merge West Virginia with Ohio, that makes more geographic sense
pragmaticml#1730: I lived in that mountain range -- so to me they didn't seem all that different 😛
chilli#5665: I agree
alexyz#3459: https://cdn.discordapp.com/attachments/729741769738158194/856195089244487680/image.png
chilli#5665: We should stop adding states
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/856195153794564106/1280px-US_26_Star_GreatStar_Flag.svg.png
alexyz#3459: A flag of my own design
chilli#5665: 50 is nice
alexyz#3459: numeric simplicity > voting rights
bmk#1476: just merge the dakotas
chilli#5665: Yes
bmk#1476: problem fixed
EricHallahan#1051: yes
alexyz#3459: delete Wyoming
bmk#1476: why do we even need so many Dakotas
pragmaticml#1730: 1 is too many already
EricHallahan#1051: Sell them to Canada.
bmk#1476: has anyone ever been like "yes we have one Dakota but what if we had two"
chilli#5665: God intended the US to have 50 states |
alexyz#3459: did you know that South Dakota has less people than 1/3rd of Manhattan?
alexyz#3459: same with North
bmk#1476: yes exactly it's time for GREATER DAKOTA
alexyz#3459: imma make that map
alexyz#3459: will be beautiful
EricHallahan#1051: https://cdn.discordapp.com/attachments/729741769738158194/856195713109065768/636.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/856195977533980672/US_DownDifPath_v3.png
bmk#1476: I found this while googling
EricHallahan#1051: (I've had this image ready for like five minutes now.) :berk:
bmk#1476: its main redeeming quality is that Quebec is missing
StellaAthena#3530: You are correct
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/856196532352581642/3a95568a-8858-4ff6-85ba-8b0d8d92565e.png
bmk#1476: oh god
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/856196621263175710/c78f66cc-aea0-4f0a-80da-843308d08091.png
EricHallahan#1051: I love that Delaware has just been absorbed into Pennsylvania.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/856196847055011860/counties-layers.png
bmk#1476: glorious
chilli#5665: I’m convinced the only reason Hawaii became a state was to make 50
Louis#0144: @bmk we need an Eleuther flag
Louis#0144: Pls |
Louis#0144: And a coat of arms
bmk#1476: just put the logo in the canton of a black field
bmk#1476: or in the charge
bmk#1476: idk
bmk#1476: whichever looks better
alexyz#3459: https://cdn.discordapp.com/attachments/729741769738158194/856197622401859584/image.png
alexyz#3459: greater Idaho and greater Dakota
alexyz#3459: i made them a bit greater than necessary :berk:
bmk#1476: what if we make the 2 state US
bmk#1476: Florida and everyone else
alexyz#3459: what if Florida was split into north and south florida
Daj#7482: what if Florida was no
bmk#1476: what if Florida
Daj#7482: bad :(
alexyz#3459: Florexit
bmk#1476: Quebexit
Daj#7482: flordelete
alexyz#3459: what is a quebec is that a type of *duck*
alexyz#3459: for some reason I can imagine a goose saying quebec
alexyz#3459: amerexit |
alexyz#3459: america leaves america
bmk#1476: amex
Teemochu#8740: what if florida became sentient
Basedblue#9138: florida is hot as *&$ today
Basedblue#9138: @bmk is that red the constitution-free zone
bmk#1476: ?
Basedblue#9138: @bmk if u live anywhere in the orange, border patrol can take all electronics without a warrant https://cdn.discordapp.com/attachments/729741769738158194/856283915455168533/imagemap.png
Basedblue#9138: https://www.aclu.org/other/constitution-100-mile-border-zone
chirp#4545: Was just re-reading this: https://www.alexirpan.com/2020/08/18/ai-timelines.html
chirp#4545: What jumped out to me was what he said about the possibility of something like “AGI” arriving quickly:
chirp#4545: > The most likely problem I see with my story is that unsupervised learning could be way harder for anything outside of language.
chirp#4545: That was written in mid 2020
Jonnathan#1234: Just today?
chirp#4545: But now with CLIP and other really effective multimodal stuff coming out
chirp#4545: Maybe that won’t be a problem after all
Basedblue#9138: @Jonnathan my phone says 92, feels like 103; wasn't that bad yesterday
Jonnathan#1234: I get giddy with happiness on a cool winter day when it gets to the low 80s
chirp#4545: In fact with CLIP you see multimodality really helping — it’s why you can do amazing stuff like VQGAN+CLIP
Basedblue#9138: tested a theory, results are promising
```python
import clip
from copy import deepcopy

# clamp_with_grad is the custom autograd clamp helper from the
# CLIP-guidance notebooks (not defined here).
perceptor = clip.load('ViT-B/32', jit=False)[0].eval().requires_grad_(False).to('cuda')
clock = deepcopy(perceptor.visual.positional_embedding.data)
perceptor.visual.positional_embedding.data = clamp_with_grad(clock, 0, 1)
```
janus#0150: I don't understand why people think things other than language are relevant for AGI. It seems to me that the only reason to focus on other things is that they might help improve performance on language.
bmk#1476: multimodal = better resolution
janus#0150: Resolution of what?
Daj#7482: gRoUnDiNg
Daj#7482: tbf I think multimodal is the null hypothesis
Daj#7482: since we have an existance proof with humans
StellaAthena#3530: Except it’s clearly not, since most people don’t believe in it
Daj#7482: Are you referring to me or janus?
janus#0150: That sounds like a practical argument that language-only won't be enough for language, right? Regardless of how evolution did it along the way, blind and deaf people are now perfectly good GIs.
Daj#7482: don't get me wrong, I assign pretty significant weight to your hypothesis
janus#0150: I definitely claim that training on language is enough for language. But in general I'm confused whether other people want to do other modalities as a means to an end or as an end itself.
Daj#7482: Probably both
janus#0150: I guess if you want to have a cool API and make some $$$... But whats the endgame for image AI in terms of acceleration? The obvious route forward to me is have AI do ML research. That is research papers and code.
janus#0150: Maybe we want to show it diagrams and have it give us schematics for new hardware?
Daj#7482: I think what people imagine is there's some kind of useful info in images that is not encoded in text you need to do relevant research
Daj#7482: But most people probably just want pretty images lol
Daj#7482: or even more likely, they just want citations lol
janus#0150: I mean, I can't complain. #art is pretty fucking cool. |
janus#0150: Blind people like hmmm
Daj#7482: yeah fr, it's crazy how much progress has been made in just a few months
Daj#7482: Plot twist: Actually all the GI relevant info is encoded in touch
Daj#7482: You need to pet the AI
rom1504#5008: That's a weird definition of AGI if it misses the basic skills of humans to understand the 3d world and act on it
rom1504#5008: Except if you claim it's possible to understand 3d and time with language only ?
AI_WAIFU#2844: It's less a concern about definitions and more "is the marginal benefit of processing images worth it over further text development".
AI_WAIFU#2844: I'm sure you can get AGI either way.
AI_WAIFU#2844: It's just in once case the interface is pure text but in the other it's more generic.
rom1504#5008: I don't see how you can get a program to be able to act on the world that contains 3d objects moving through times by using language only
rom1504#5008: But I'd be glad to be proven wrong
AI_WAIFU#2844: Like at the end of the day it's just byte streams right
AI_WAIFU#2844: The AI writes some code that interacts with the world in realtime
rom1504#5008: Without the AI having ever known anything about the world except by language ?
AI_WAIFU#2844: Well you gotta solve the problem of long contexts, but yes.
rom1504#5008: I do mean language and not arbitrary byte streams
rom1504#5008: Natural language
rom1504#5008: Of course if you include visual tokens and videos tokens in language, then we're talking about something different
AI_WAIFU#2844: If your algorithm is sufficiently general, it should be able to deal with arbitrary byte streams.
rom1504#5008: Yes but that's another discussion |
rom1504#5008: I thought we were talking about language as in natural language
rom1504#5008: If you say "arbitrary byte streams" that includes image, 3d, audio, ... Very multimodal
AI_WAIFU#2844: I think the point is that that the AIs interface is a byte stream. You don't have special tokens for images or video.
AI_WAIFU#2844: In a sense it's all just text
AI_WAIFU#2844: Not just NLP
rom1504#5008: Ok then yeah
rom1504#5008: But that includes multimodal
rom1504#5008: So I'm not sure if that's the point that was being made above
rom1504#5008: But I agree with this yes, if you have a model that understand any byte stream, it's definitely good enough
Dee Dee#7641: this discord is so cool
EricHallahan#1051: Welcome!
genai (Immortal Discoveries)#0601: I might be missing something, but if The Pile / OpenWebText is just links, what if they expire? Don't we need to store the 40GB or more?
Teemochu#8740: Unless I'm grossly mistaken the Pile download contains the actual trainable content (crawled text)
Teemochu#8740: but it's also hundreds of GB
Teemochu#8740: so I'm not sure what you downloaded
genai (Immortal Discoveries)#0601: I had only got links when I tried, though I haven't tried calling the pile.
StellaAthena#3530: I'm not sure what you mean, but the Pile is about 400 GB of compressed text
StellaAthena#3530: OpenWebText was not created by us
genai (Immortal Discoveries)#0601: how do i access it?
StellaAthena#3530: https://pile.eleuther.ai/ |
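For reference, a minimal sketch of streaming the first document out of one Pile shard; the mirror URL layout is assumed from the-eye hosting mentioned earlier and may change, and `zstandard` is a third-party package (`pip install zstandard`):

```python
import io
import json

import requests
import zstandard

# Assumed shard layout on the-eye mirror; adjust if the host moves.
url = 'https://the-eye.eu/public/AI/pile/train/00.jsonl.zst'

with requests.get(url, stream=True) as r:
    r.raise_for_status()
    reader = zstandard.ZstdDecompressor().stream_reader(r.raw)
    stream = io.TextIOWrapper(io.BufferedReader(reader), encoding='utf-8')
    doc = json.loads(stream.readline())  # one JSON object per line: {'text': ..., 'meta': ...}
    print(doc['meta'], doc['text'][:200])
```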