"data": [
{
"document": 0,
"object": "search_result",
"score": 215.412
},
{
"document": 1,
"object": "search_result",
"score": 40.316
},
{
"document": 2,
"object": "search_result",
"score": 55.226
}
],
"object": "list"
}
``` |
I need to get the 'document' with the highest score. Can someone please help me how to do that? It's a json that i converted to dict.
variable name is 'conv_json'
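A minimal sketch of one way to do this, assuming `conv_json` holds the structure shown above:
```python
conv_json = {"data": [
    {"document": 0, "object": "search_result", "score": 215.412},
    {"document": 1, "object": "search_result", "score": 40.316},
    {"document": 2, "object": "search_result", "score": 55.226},
], "object": "list"}

# pick the entry with the highest score, then read its 'document' field
best = max(conv_json["data"], key=lambda r: r["score"])
print(best["document"])  # -> 0 (score 215.412)
```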
Napolean_Solo#2907: nvm got it
UnsupervisedLearner#4148: I guess the lacking inductive bias is the inability to learn positional information at all?
It's just very strange to me that a dynamically computed transformation works better in a lower-compute setting.
𓅬 gabriel_syme 𓅬#3220: well all new mlp architectures have ways to learn positional information, so I guess it's not total lack of inductive bias (mixer does local-global with the channel-token mixing layers for e.g.), but maybe it's...better? I don't know lol, I do think it's in all of their future steps to understand wth is going on
alstroemeria313#1694: ...uh, what's a good 2d positional embedding for transformers
alstroemeria313#1694: i'm just using the 1d one from gpt-2 rn
alstroemeria313#1694: for image generation
alstroemeria313#1694: oh, ViT just uses the usual 1d type
UnsupervisedLearner#4148: 2D RoPE
alstroemeria313#1694: oh, it exists?
alstroemeria313#1694: how do you do it
UnsupervisedLearner#4148: Kinda, you split the embedding for dim 1 and 2
UnsupervisedLearner#4148: So if you have token embedding dim n
0:n/2 would have rope for x axis
n/2:n would have rope for y
Then just concatenate
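A minimal sketch of the split-and-concatenate scheme described above (helper names and shapes are illustrative, not from any particular codebase):
```python
import torch

def rope_1d(x, pos, base=10000):
    # Standard RoPE: rotate consecutive channel pairs by pos * freq.
    # x: (..., n) with n even; pos: (...,) integer positions.
    n = x.shape[-1]
    freqs = base ** (-torch.arange(0, n, 2, dtype=torch.float32) / n)
    angles = pos[..., None].float() * freqs          # (..., n/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def rope_2d(x, pos_x, pos_y):
    # First half of the channels gets RoPE over the x axis,
    # second half over the y axis, then concatenate.
    n = x.shape[-1]
    return torch.cat([rope_1d(x[..., : n // 2], pos_x),
                      rope_1d(x[..., n // 2 :], pos_y)], dim=-1)
```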
UnsupervisedLearner#4148: There's probably a way to use Clifford algebra and do it in a less hacky way but I'm not smart enough to tell you how it would look in practice
alstroemeria313#1694: ah
alstroemeria313#1694: i'm working on two transformer image generation things rn
alstroemeria313#1694: one is autoregressive sampling of VQGAN tokens conditioned on a CLIP embedding
alstroemeria313#1694: another is something i'm calling a "Gumbel transformer", idk the real name for it, it came from nshepperd in #lesswrong
UnsupervisedLearner#4148: Link please?
alstroemeria313#1694: i have no link
UnsupervisedLearner#4148: "I have no link and I must click"
alstroemeria313#1694: we just talked about it in irc ^^;;
UnsupervisedLearner#4148: Academics and their walled gardens
alstroemeria313#1694: but the idea is you input gumbel noise and it outputs logits
alstroemeria313#1694: then you sample from the logits w/ gumbel-softmax using the same gumbel noise you input
UnsupervisedLearner#4148: And you use this as a latent for a GAN?
alstroemeria313#1694: i use it to sample VQGAN tokens
Airatak#7842: Hi guys! I've been super inactive for a while.. where did the pile channel go?
alstroemeria313#1694: vqgan tokens are meant to be sampled autoregressively
alstroemeria313#1694: well, that was how they did it in the original paper
Daj#7482: Project was completed so we archived the channel
Airatak#7842: what about v2?
Airatak#7842: like a multi language one
Daj#7482: Everyone was too burned out from v1 to want to work on v2 lol
Daj#7482: Hasn't been much interest to kickstart the v2 project since
Airatak#7842: ohok
Airatak#7842: well I just got 300 GB of Chinese text + a ton of Korean and a bit of Japanese also
Airatak#7842: just cleaning it up a bit now
Daj#7482: Neat! Yea multilingual has gotten little love around here lately, multimodal is the new cool thing lol
UnsupervisedLearner#4148: @Daj What is the catastrophic scenario where recommendation engines eat all of human willpower and cognition and turn the world into some weird hyperoptimized pseudoreality called?
Daj#7482: "the default outcome"
Daj#7482: :berk:
Daj#7482: I guess that would be a special (weak) case of wireheading
UnsupervisedLearner#4148: Just read that paper on FB DLRM and it's insane the scale
Daj#7482: Christiano also has a similar scenario laid out: https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like
Kia#2550: Ow Wow There's other Alignment Sources
Kia#2550: Lovely
AI_WAIFU#2844: 2020?
quinn#9100: Also check out kokotajlo on persuasion tools
triggerhappygandi#0001: :nooo:
https://twitter.com/IMordatch/status/1400113795196809227?s=19
triggerhappygandi#0001: Not my environmenterino and agenterino!
cfoster0#4356: :brr:
Daj#7482: The most interesting part of this paper is how well it handles sparse rewards and how simple it is. They don't evaluate against SOTA, but still :brr:
finetune#0907: the gpt-neo repo gives 59.40% acc winogrande for gpt2-xl, but my own run with eval harness gives:
```
| Task | Metric |Value |
|----------|----------|-----:|
|winogrande|acc |0.5793|
| |acc_stderr|0.0139|
```
also ran gpt-neo-2.7B in fp32 now, much higher than the given 56.50%:
```
| Task | Metric |Value |
|----------|----------|-----:|
|winogrande|acc |0.5959|
| |acc_stderr|0.0138|
```
some kind of copy&paste error? :thonk:
alexyz#3459: probably something to do with that one varying a lot or something
finetune#0907: but should be deterministic on the same model, no?
finetune#0907: could've been the case when running fp16 instead of fp32, but this is fp32, so it should match i think
alexyz#3459: quote
alexyz#3459: but someone here definitely knows more lol 🙂
kurumuz#5695: its deterministic still
n.kh.l#5814: im trying to finetune the bigger neo models so im tokenizing my dataset using the create_tfrecords.py file... im using zstd-compressed jsonl where each of the lines is a json dict that looks like this `{"text": "whatever my text is"}` but once it gets to about line 25k, it says it cant parse the json there. i checked it out and it seems fine and i even got the checksum and compared them to make sure it wasnt a network error when transferring to colab. does anyone have any idea whats going on?
Daj#7482: cc @bmk
triggerhappygandi#0001: At least they only compete with offline RL so there is still hope
bmk#1476: uhh I'll double check it in a bit
bmk#1476: I didn't actually put the tables together, I only dumped the results in discord lol
bmk#1476: the zero shot winogrande row 2.7B column in this table should be the same number https://blog.eleuther.ai/tuning-on-eval-harness/
bmk#1476: and it's 0.575 ± 0.014
bmk#1476: I can confirm that I've always posted 0.575 ish
bmk#1476: I have no idea where 0.565 came from
finetune#0907: 0.575's still lower than 0.5959 🤔
n.kh.l#5814: just checked your math and i think you're right https://cdn.discordapp.com/attachments/729741769738158194/850069888017760266/unknown.png
n.kh.l#5814: although im not sure due to floating point precision error... i can check the IEEE 754 to make sure its all good
bmk#1476: I'll rerun it without cache this afternoon
bmk#1476: the task definition might have changed slightly since I first ran it
finetune#0907: o yeah, a change in the task def would explain it
bmk#1476: wait
bmk#1476: by "might" I mean like "I don't think I changed it but I can't say for sure the task definition didn't change because I'm on mobile rn" |
bmk#1476: not "I think I changed the task def"
bmk#1476: so don't just run off and take that as the explanation lol
bmk#1476: I'll rerun it this afternoon to be sure
finetune#0907: i won't dw :berk:
finetune#0907: just to make sure it's not some obvious issue on my side, master branch and running through main.py should work?
bmk#1476: yeah it should
bmk#1476: can you run it with --no_cache just to double check that it's stable between runs
finetune#0907: cleared out lm_cache, but can do that to make sure
finetune#0907: same results. 0.5793 for gpt2-xl, 0.5959 for gpt-neo-2.7B in winogrande
n.kh.l#5814: what does the `files_per` flag do in `create_tfrecords.py` if i only have 1 input file?
Sid#2121: !faq
Carl-bot#1536:
Sid#2121: (we're not tech help)
n.kh.l#5814: fair enough, sorry
AI_WAIFU#2844: just read the code
n.kh.l#5814: ohh ok so its not the number of files, its the number of chunks (in my case 2048 characters)
Sid#2121: that's correct iirc
n.kh.l#5814: ok i know you probably have better things to do so im kinda just thinking out loud but my data is `469520509` bytes... with 2048 length contexts, that means `229258` contexts. i set `files_per=1000` so the number of tfrecords should be `229258/1000=229` but i already have 998 files in the tfrecord directory
alstroemeria313#1694: GANs, how do you train them stably
alstroemeria313#1694: I wrote an experimental text GAN and D keeps winning
EricHallahan#1051: LS-GAN or traditional?
alstroemeria313#1694: traditional
EricHallahan#1051: LS-GANs are significantly more stable IIRC.
alstroemeria313#1694: ah
alstroemeria313#1694: ...There is very little work on text GANs
AI_WAIFU#2844: Yeah I think text GANs are a crap shoot
alstroemeria313#1694: Can you like... distill an autoregressive language model into a generator like the one I have somehow
bmk#1476: I worked on text GANs for a while
bmk#1476: would not recommend
alstroemeria313#1694: ahah
bmk#1476: it's so fiddly
alstroemeria313#1694: like even more than GANs are to begin with...?
bmk#1476: way more fiddly
bmk#1476: I spent like a year working with various image GANs so I know
alstroemeria313#1694: *nods*
bmk#1476: I was trying to do the policy gradient approach for tuning the generator
bmk#1476: which.. I could never get the policy gradient to not completely destroy the generator
alstroemeria313#1694: Yeah I'm using Gumbel-Softmax
bmk#1476: I tried REINFORCE like twice with totally different from scratch implementations, as well as someone else's PPO implementation, none of them worked
bmk#1476: huh I never tried Gumbel softmax, tell me if it works lol
alstroemeria313#1694: G takes Gumbel noise the shape of its output, outputs logits, then you Gumbel-Softmax the logits with the same Gumbel noise
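A rough sketch of that setup as described (the `generator` interface and shapes are assumptions for illustration; a straight-through estimator gives hard one-hots):
```python
import torch
import torch.nn.functional as F

def sample_with_shared_gumbel(generator, num_tokens, vocab_size, tau=1.0):
    # Gumbel noise the shape of the generator's output logits.
    u = torch.rand(num_tokens, vocab_size)
    g = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    logits = generator(g)                              # (num_tokens, vocab_size)
    # Reuse the *same* noise for the Gumbel-softmax, so sampling is
    # deterministic given g and differentiable w.r.t. the logits.
    y_soft = F.softmax((logits + g) / tau, dim=-1)
    # Straight-through: hard one-hots forward, soft gradients backward.
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    return y_hard + y_soft - y_soft.detach()
```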
bmk#1476: I really want to see text GANs work but I unfortunately don't think they ever will lol
alstroemeria313#1694: I can also do Gumbel-Rao to reduce the variance of the gradients
alstroemeria313#1694: I was trying it as a generator for CLIP to begin with
bmk#1476: I'd probably need to spend more brainpower than I have atm to understand Gumbel softmax
alstroemeria313#1694: Like, sampling sequences of VQGAN tokens.
alstroemeria313#1694: Because I could generate a whole image in one step, apply a standard CLIP loss, and backprop
alstroemeria313#1694: Gumbel-Softmax is *way, way* lower variance than REINFORCE stuff
alstroemeria313#1694: As in "I used it with VQGAN and CLIP with a batch size of 1 and it worked" low variance
bmk#1476: huh
alstroemeria313#1694: It is not unbiased though.
bmk#1476: I know other people have tried Gumbel softmax with LMs
bmk#1476: but idk how well it worked
alstroemeria313#1694: yes, my CLIP prompt finder works that way
bmk#1476: I guess if you make text GANs work pls let me know lol
alstroemeria313#1694: eheh~
bmk#1476: I wonder if you can use Wasserstein with your setup
bmk#1476: I couldn't do it with policy gradient but maybe it's compatible with Gumbel?
alstroemeria313#1694: I'd have to think about how to do the gradient penalty
bmk#1476: wasserstein was a huge quality boost for image GANs in my experience
alstroemeria313#1694: The Lipschitz constraint is on D right?
bmk#1476: idk if it's still sota though
bmk#1476: yeah
alstroemeria313#1694: Yeahhh IDK how to do the weight clipping right for a transformer D
alstroemeria313#1694: Would have to do GP
bmk#1476: yeah clipping is a bad idea anyways
bmk#1476: gp can't be that hard right
alstroemeria313#1694: I... did it once in Keras a looooong time ago
bmk#1476: lol
bmk#1476: uh
bmk#1476: I guess just grab some random wgan gp implementation to see how they did it
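For reference, the usual WGAN-GP penalty is only a few lines in modern PyTorch — a minimal sketch, assuming continuous inputs such as images:
```python
import torch

def gradient_penalty(D, reals, fakes):
    # Penalize D's gradient norm deviating from 1 at random
    # interpolates between real and fake samples (WGAN-GP).
    eps = torch.rand(reals.size(0), *[1] * (reals.dim() - 1), device=reals.device)
    interp = (eps * reals + (1 - eps) * fakes).requires_grad_(True)
    scores = D(interp)
    grads, = torch.autograd.grad(scores.sum(), interp, create_graph=True)
    return ((grads.flatten(1).norm(dim=1) - 1) ** 2).mean()
```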
alstroemeria313#1694: I found one, it's so old they use Variables
bmk#1476: lol
bmk#1476: fun fact, wgan gp is the first nontrivial thing I ever implemented in pytorch
alstroemeria313#1694: :blobcutehappy:
EricHallahan#1051: I never figured out WGAN at all.
EricHallahan#1051: :berk:
alstroemeria313#1694: In any case I did the LSGAN loss function and I'll let it run for a while
alstroemeria313#1694: G's outputs look incoherent still but they're not collapsing to all "the" or something
bmk#1476: my memory is a bit fuzzy but I think I tried LSGAN at some point and it wasn't super effective
bmk#1476: hinge loss kinda helped quite a bit
alstroemeria313#1694: Oh, did it have bad quality outputs or was it not stable
alstroemeria313#1694: I've tried hinge loss and it was awful x_x
alstroemeria313#1694: For an image GAN
bmk#1476: I guess GAN stuff is just super untransferrable from one domain to another
alstroemeria313#1694: *nods*
alstroemeria313#1694: So like
alstroemeria313#1694: What if I took my generator and just did lots of generations from it and...
alstroemeria313#1694: Used a loss derived from an autoregressive language model
bmk#1476: uhh can you elaborate a bit
alstroemeria313#1694: Like I computed the likelihood of its outputs
alstroemeria313#1694: And set a target for this value
alstroemeria313#1694: hm
bmk#1476: high likelihood doesn't mean high quality
alstroemeria313#1694: yes, all spaces is probably highest likelihood
bmk#1476: yeah
bmk#1476: or as I like to say, the "aaaaaaa" string
alstroemeria313#1694: thus a target that is not too high and more like what normal text is
bmk#1476: I have a relevant meme
alstroemeria313#1694: ehehe~
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/850125497517736007/20210603_153509.jpg
alstroemeria313#1694: Like I have a thing that can generate a whole sequence in one forward pass
alstroemeria313#1694: How can I use transfer learning from an autoregressive model
mkualquiera#3484: aaaaaaaaaa
bmk#1476: why transfer instead of just training it to generate real strings
alstroemeria313#1694: oh...
alstroemeria313#1694: hm
alstroemeria313#1694: I am though with the adversarial training
alstroemeria313#1694: Or at least to fake it well enough.
Spy#9778: Assuming your model can assign probabilities to full sequences, you could get the logprob of a sentence from the autoregressive model then use the gap in the logprobs as the loss
alstroemeria313#1694: It can't
Spy#9778: sadge
EricHallahan#1051: You can't do that directly.
Spy#9778: do what directly?
EricHallahan#1051: Getting the logprobs
Spy#9778: for a given sentence in a corpus you can
EricHallahan#1051: ¯\_(ツ)_/¯
Spy#9778: chain rule and sum logprobs across time no?
alstroemeria313#1694: you can get the logprob of a sequence using an autoregressive model but my generator cannot do the same
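The autoregressive side is straightforward — a sketch assuming a Hugging Face-style causal LM that returns `.logits`:
```python
import torch
import torch.nn.functional as F

def sequence_logprob(model, token_ids):
    # token_ids: (1, T). Chain rule: sum log p(t_i | t_<i) over time.
    with torch.no_grad():
        logits = model(token_ids).logits               # (1, T, vocab)
    logprobs = F.log_softmax(logits[:, :-1], dim=-1)   # predict tokens 1..T-1
    targets = token_ids[:, 1:].unsqueeze(-1)
    return logprobs.gather(-1, targets).sum().item()
```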
Spy#9778: yeah I was just replying to the other comment
Spy#9778: I mean if you're willing to do something NCE like you could try taking batches of sentences and ranking them under the autoregressive model then using a ranking loss on your model
Spy#9778: if it outputs an energy or something
Spy#9778: but that's dropping a ton of the information from the teacher model and might be a complete waste of time
alstroemeria313#1694: it... outputs logits
alstroemeria313#1694: that you sample from
alstroemeria313#1694: it takes noise as input
Spy#9778: ohhh I was thinking something more like a non-autoregressive MT decoder
Spy#9778: not like a GAN trained type thing
bmk#1476: what kind of model are you using to generate all at once?
bmk#1476: are you feeding noise into a transformer encoder?
alstroemeria313#1694: yes
bmk#1476: ah
bmk#1476: my experiments were all with normal autoregressive models
bmk#1476: I think autoregressive models are most useful personally
alstroemeria313#1694: It's literally like a text transformer GAN
alstroemeria313#1694: The original idea was to train one that took a CLIP embedding as a condition in addition to the Gumbel noise 'latent' and output logits for VQGAN tokens.
alstroemeria313#1694: The trick is that I use the same Gumbel noise I input to sample from the logits
alstroemeria313#1694: So it's deterministic given a certain input.
alstroemeria313#1694: And I can use Gumbel-Softmax to backprop through the 'sampling'.
alstroemeria313#1694: Some other people got a text GAN to work using Gaussian 'latent' inputs and taking the argmax of the output logits. They did backprop by substituting the softmax for the argmax in the backward pass.
alstroemeria313#1694: This would be easy enough for me to try if the Gumbel idea doesn't work out.
alstroemeria313#1694: They seem pretty similar.
cfoster0#4356: @ym #gpt-neox-devs is the dev channel for the project. If you've got other questions, this is the channel for 'em
ym#0104: gotcha, thanks!
aze#1010: 👀 6B run complete? https://cdn.discordapp.com/attachments/729741769738158194/850137319486652457/unknown.png
aze#1010: orr. crashed ?
bmk#1476: please stop randomly speculating lol
bmk#1476: just like go do something else and when it's done you'll know
EricHallahan#1051: Speculating does nothing good.
bmk#1476: something something a watched kettle
aze#1010: im just little hyped
kurumuz#5695: hype is not good
Kia#2550: Wait
Kia#2550: Um :mittwoch:
UnsupervisedLearner#4148: Requesting favorite MoE papers, I just read a recent one from Alibaba comparing k Top 1 vs Top k routing
Teemochu#8740: I am hyped for the far future of 69.420B running on a single local GPU :smug:
bmk#1476: gptneo-2.7B
reran and got different results somehow https://cdn.discordapp.com/attachments/729741769738158194/850165616597663745/unknown.png
bmk#1476: what about 69.420M?
bmk#1476: meant to ping @finetune
mkualquiera#3484: better or worse?
UnsupervisedLearner#4148: My American is showing
Do you mean ~69trillion or ~69billion
bmk#1476: americans only use short scale tho?
UnsupervisedLearner#4148: I hope they aren't GPUs for too much longer. It's crazy we don't have real accelerators yet
UnsupervisedLearner#4148: Americans use commas to separate in large numbers
12,345,678 is twelve million etc
bmk#1476: oh
UnsupervisedLearner#4148: I've seen Europeans use periods
bmk#1476: i thought you were confused what the B meant
Louis#0144: who maintains isaac
bmk#1476: isaac does
Louis#0144: thanks
bmk#1476: self maintenance
Louis#0144: lmao
Louis#0144: I really want to try this decoding method on neo 6b
Teemochu#8740: 69.420T is a pipe dream on one GPU
Louis#0144: it looks *super* promising
Teemochu#8740: ohhh
Teemochu#8740: yeah I mean dot as in decimal
bmk#1476: @finetune ok i ran neo 2.7B again this time with batch size 2 and im getting the 0.575 result
mkualquiera#3484: oof imagine using dot as in thousands
Louis#0144: (Although it wont work for code decoding I think, pretty sure it requires full sentences)
bmk#1476: so part of the story is batch size dependence
Teemochu#8740: this is why medical software *never* uses three digits after a decimal point
bmk#1476: laughs in chinese commas
bmk#1476: 4 digits per comma
UnsupervisedLearner#4148: You said far future. That's not too many doubles, who knows what architecture improvements we get
Teemochu#8740: valid, also VRAM and dedicated acceleration will probably improve if more games start using DL techniques
UnsupervisedLearner#4148: End 2 end MLP video game when
Teemochu#8740: MLP MLP video game
Teemochu#8740: Friendship Is Optimal
bmk#1476: gpt2-xl https://cdn.discordapp.com/attachments/729741769738158194/850175559375650837/unknown.png
bmk#1476: ok just got the same output on 2 machines
bmk#1476: thats promising
bmk#1476: But then what else could be causing the difference?
bmk#1476: @finetune what torch version, cuda version, and transformers version are you using?
bmk#1476: im using torch 1.8.1, transformers 4.6.1, and cuda 11.2
alstroemeria313#1694: i implemented wgan-gp in pytorch just now
alstroemeria313#1694: trying a wgan-gp text GAN
bmk#1476: I'm excited to hear how it goes
alstroemeria313#1694: wgan-gp for text is weird
alstroemeria313#1694: since the reals and fakes are one-hots
alstroemeria313#1694: and the required random blending of reals and fakes is... not
bmk#1476: right
bmk#1476: is there any way of getting around that?
alstroemeria313#1694: uhh, hm
bmk#1476: or do you think blending two sentences 50/50 will just work
bmk#1476: by 50/50 i mean like just have the two distributions averaged together so you have 0.5 on each token
alstroemeria313#1694: well, we're forcing D to have gradient norms... hm
alstroemeria313#1694: the only time we evaluate D at these weird places is to force it to have gradient norm near 1 there
alstroemeria313#1694: well
alstroemeria313#1694: i mean, the reason D takes one-hots in the first place
alstroemeria313#1694: uhh
alstroemeria313#1694: ohhh
bmk#1476: well the reason youre forcing 1-norm is you want the discrim score to be pretty smooth wrt input space right
alstroemeria313#1694: what if i evaluated both the reals and fakes for the gp, but used the blended versions in the backward pass
bmk#1476: so maybe you want to apply the condition to the embeds instead of the one hots
bmk#1476: just fix the embedding to something like the gpt2 embedding layer or something
alstroemeria313#1694: since i do, in fact, use a gradient estimator + one-hots
alstroemeria313#1694: to get the loss for G
alstroemeria313#1694: so, hm
bmk#1476: I think taking the halfway point in embed space makes much more sense
bmk#1476: in fact I think you could experiment with which layer you take it out of
alstroemeria313#1694: oh, so just doing what i'm doing now and feeding in blended one-hots?
bmk#1476: no like do the embed first and blend the embeddings
alstroemeria313#1694: that's the same
alstroemeria313#1694: isn't it?
bmk#1476: oh right I'm an idiot nvm
bmk#1476: the second half of my suggestion was to try that but at different layers in the model
bmk#1476: idk if anyone's even done that with image gans tho
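A sketch of that suggestion — blend in embedding space and take the penalty there (helper names are hypothetical; `D_tail` stands for the part of D after its embedding layer):
```python
import torch

def embedding_space_gp(D_tail, embed, reals_onehot, fakes_onehot):
    # Embed the one-hots first, interpolate the *embeddings*, and
    # compute the gradient penalty w.r.t. the blended embeddings.
    e_real = reals_onehot @ embed.weight               # (B, T, dim)
    e_fake = fakes_onehot @ embed.weight
    eps = torch.rand(e_real.size(0), 1, 1, device=e_real.device)
    interp = (eps * e_real + (1 - eps) * e_fake).requires_grad_(True)
    scores = D_tail(interp)
    grads, = torch.autograd.grad(scores.sum(), interp, create_graph=True)
    return ((grads.flatten(1).norm(dim=1) - 1) ** 2).mean()
```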
James#6892: https://www.google.ca/amp/s/www.engadget.com/amp/chinas-gigantic-multi-modal-ai-is-no-one-trick-pony-211414388.html
alstroemeria313#1694: layers how, wgan-gp is with respect to the input?
James#6892: 1.75T multimodal model is announced
James#6892: Can generate images, text, poetry, audio lol
bmk#1476: yeah im saying pretend that layer x is actually the input
bmk#1476: my rationale is that sometimes smoothness in input space doesnt make a ton of sense
bmk#1476: you want smoothness in a more semantic space probably
alstroemeria313#1694: ahh
alstroemeria313#1694: mb i'll try a wgan-gp xmc-gan-ish thing too
bmk#1476: and hopefully middle-of-the-model stuff is more concept-space-ish? idk, I don't think anyone's even done this with images
alstroemeria313#1694: ahh
alstroemeria313#1694: tomorrow, i need to get to bed, i'll run the textgan overnight
AI_WAIFU#2844: Wait how does that even work?
bmk#1476: just blend the text together, gradient penalty go brrr
Exocamp#8255: Well, it's me again thinking about how to implement/use an idea of "continuous low-cost training" in stuff like GPT or GANs
Exocamp#8255: What I mean by that set of buzzwords
Exocamp#8255: Is what I asked before, an extension of "does continuously fine-tuning a model makes it possible to essentially 'continue training' of it?"
Exocamp#8255: Furthermore, would it be possible using this fine tuning data to be split in such a way where training larger-scale models could still be done with relatively low VRAM? (Ignoring constraints such as actual RAM or time needed for now)
Exocamp#8255: When I last asked, someone said to look into StyleGAN and their "progressive growing" idea, I'm looking rn into progressive GANs (from what I understand of the papers 😅) and well
Exocamp#8255: I think I kinda *get* what the general concept is but I'm not sure how it would apply to the actual thing I had in mind
Exocamp#8255: ~~and also the 'progressive growing' thing seems to have been phased out in stylegan2 entirely~~
Louis#0144: We are considering experimenting with text gans in #carp
Kia#2550: Wait
Kia#2550: Wow
Louis#0144: Tire
Louis#0144: Tmrw
Kia#2550: Have a great day!
Kia#2550: And rest
Kia#2550: :goose6:
n.kh.l#5814: i finetuned the gpt neo 1.3B model and its still going but its at 37k steps (started at 32k iirc) but when i generate it just gives the default output. do i just need to train more?
Hatchling#4049: Hey, AK sent us here in a tweet saying we could play with the AI that can make stuff like this: https://pbs.twimg.com/media/E3A_qOTXoAER5Fx?format=jpg&name=small
veydpz#2681: Hello! Just reached here via AK's tweet.
cfoster0#4356: Hey there! Before you go on exploring, take a second to familiarize yourselves with the stuff in #rules. Once you've done that, the channel for bot generations is #the-faraday-cage-archive.
finetune#0907: weird. i ran it on colab, torch 1.8.1+cu101, transformers 4.5.1
bmk#1476: oh it might be the CUDA version
bmk#1476: is there any way you could test on CUDA 11.2
finetune#0907: i didn't set a batch size, so i think that defaulted to 1
bmk#1476: yeah batch size 1 is default
bmk#1476: I bet they probably changed the matrix multiply optimizations in the new CUDA
finetune#0907: could be. i'll see if i can find somewhere to run with a newer cuda
finetune#0907: don't have enough vram to run locally
finetune#0907: colab only goes up to 11.0
karlo#4645: Hello, is gpt neo still on path towards gpt 3? What does the last message in #announcements mean. What has changed?
bmk#1476: !faq
Carl-bot#1536:
Daj#7482: The last message was an April Fool's joke, maybe we should remove that at some point |
karlo#4645: 😅
finetune#0907: installed transformers 4.6.1, torch 1.8.1+cu111 on colab, which i believe includes necessary cuda libs, ran with batch_size 2 and still got 0.5793 for gpt2-xl. guess if there was a change in cuda, it was after 11.1, so will have to look for a way to try with 11.2
alstroemeria313#1694: So WGAN-GP doesn't have the best output quality?
alstroemeria313#1694: How do you make it better
alstroemeria313#1694: Oh, do I need multiple critic steps per G step?
alstroemeria313#1694: didn't help
alstroemeria313#1694: WGAN-GP is too stable, the critic doesn't provide feedback good enough to make G good
alstroemeria313#1694: Is this why people don't really use it
joaogui1#8461: In 26 minutes we'll have a talk about the paper "Explaining Neural Scaling Laws", please come watch and ask questions! https://www.youtube.com/watch?v=A8F4Qga3NaM
alstroemeria313#1694: ...so uh is Wasserstein-1 distance even *defined* for comparing two categoricals
alstroemeria313#1694: ...We could just arbitrarily pick a metric actually.
alstroemeria313#1694: Like say distance=1 between two different categories and 0 if they are the same.
alstroemeria313#1694: Then Wasserstein-1 distance is just the 1-norm of the difference of the two probability distributions, right?
alstroemeria313#1694: Well not quite
AI_WAIFU#2844: It's earth mover distance right?
alstroemeria313#1694: yes
AI_WAIFU#2844: It's hard for that to make any sense between categoricals
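For the record, under the metric proposed above (distance 1 between different categories, 0 otherwise) there is a standard closed form — Wasserstein-1 reduces to total variation, i.e. half the 1-norm of the difference, not the max norm:
```latex
W_1(p, q) = \min_{\gamma \in \Pi(p, q)} \sum_{i,j} \gamma_{ij}\, \mathbf{1}[i \neq j]
          = \tfrac{1}{2} \lVert p - q \rVert_1 = \mathrm{TV}(p, q)
```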
kurumuz#5695: why did i read "categoricals" as catloligirls
AI_WAIFU#2844: you know why
kurumuz#5695: no idea |
kurumuz#5695: :berk:
AI_WAIFU#2844: hey there's something I want you to try
kurumuz#5695: yeah?
triggerhappygandi#0001: Feel shame
Daj#7482: snap, this goes in my cringe collection https://cdn.discordapp.com/attachments/729741769738158194/850357175113351228/Screenshot_from_2021-06-04_14-55-14.png
kurumuz#5695: i am past that point
kurumuz#5695: lmao
kurumuz#5695: btw nice stealth release
Daj#7482: It's not released yet reeeee
alstroemeria313#1694: wait is it the maximum absolute value of the difference actually
kurumuz#5695: >pushes it accidently
AI_WAIFU#2844: How long in tokens are your average training examples?
Daj#7482: Wasn't me lol
alstroemeria313#1694: using the metric i said
Daj#7482: all the individual parts are out there if you're really determined to test it
Daj#7482: or you can just wait a bit longer for the official release
alstroemeria313#1694: distance is 1 between different categories and 0 otherwise
kurumuz#5695: ofc im not going to wait lol
kurumuz#5695: umm, average document token length?
AI_WAIFU#2844: Yeah |
kurumuz#5695: i think we calculated that but i dont remember
AI_WAIFU#2844: are most of them significantly >2048
Daj#7482: fair, I can respect your determination lol
kurumuz#5695: yeah
kurumuz#5695: they are
kurumuz#5695: im pretty sure
alstroemeria313#1694: then you get wasserstein-1 between categoricals by taking the max norm of the difference of their distributions right :/
alstroemeria313#1694: sorry, it's early in the morning here
AI_WAIFU#2844: Ok, what I want to know is, if you retokenize your dataset so that your examples have length of 4096 or 8192. Can you fine tune GPT-Neo to work at those context lengths?
Daj#7482: Neo has learned position embeddings
alstroemeria313#1694: (I am trying to work out what a language WGAN even does)
Daj#7482: So probably won't work as well
Daj#7482: Rotary would have been much better
Daj#7482: or PIA
kurumuz#5695: could maybe, not sure why we would do that though
kurumuz#5695: 2048 tokens is already kinda expensive and good enough for short term memory
kurumuz#5695: rest should be knowledge graphs, ideally
CRG#8707: You could interpolate the position embeddings (like ViT)
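A minimal sketch of that trick for 1-D learned positions (ViT does the 2-D analogue when changing resolution):
```python
import torch
import torch.nn.functional as F

def interpolate_pos_emb(pos_emb, new_len):
    # pos_emb: (old_len, dim) learned position table; resample it to
    # new_len positions with linear interpolation.
    p = pos_emb.t().unsqueeze(0)                       # (1, dim, old_len)
    p = F.interpolate(p, size=new_len, mode="linear", align_corners=False)
    return p.squeeze(0).t()                            # (new_len, dim)
```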
AI_WAIFU#2844: Yeah but knowledge graphs are hard, moar attention is easy.
kurumuz#5695: if the compute wasnt a problem yes |
kurumuz#5695: doing knowledge graphs should be much much cheaper computationally
kurumuz#5695: so kinda on the KG train for now
kurumuz#5695: a 8192 context length model would be cool asf though
AI_WAIFU#2844: Are you guys caching your activations when you sample, or are they recomputed from scratch for every new token?
kurumuz#5695: cached
kurumuz#5695: 4096 should be doable for the small model
kurumuz#5695: ```
seq_len max_len runtime
128 168 1.2413259412000002s
256 296 1.3484386238999833s
384 424 1.5182151628999805s
512 552 1.6499565551000046s
640 680 1.7703169692000074s
768 808 1.892524761200002s
896 936 2.0653174241999865s
1024 1064 2.19975038069997s
1152 1192 2.3780867653000426s
1280 1320 2.53249043699999s
1408 1448 2.6793070617000128s
1536 1576 2.856790712399993s |
1664 1704 3.0497268097999837s
1792 1832 3.2173556434000035s
1920 1960 3.4154131358000086s
```
on a tesla T4
alstroemeria313#1694: oh, there are wasserstein-2 GANs now too?
AI_WAIFU#2844: huh, are you also batching your sampling?
kurumuz#5695: they werent batched no
kurumuz#5695: wdym
kurumuz#5695: oh also should say that this is fp16 inference
kurumuz#5695: which is T4 is pretty good at
AI_WAIFU#2844: You should try and batch sampling. So that you're sampling from several sequences in parallel at the same time. The reason why is that GPU memory bandwidth << GPU processing power. If you don't batch you're gonna be limited by memory bandwidth. i.e. shuffling the weights back and forth between the processor and the memory. If you batch, those weights get reused for computations a few times before they need to be evicted from cache and a new set is dropped in. So your overall throughput should go up.
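A rough sketch of batched sampling with Hugging Face `transformers` (model choice and prompts are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
tok.pad_token = tok.eos_token
tok.padding_side = "left"   # left-pad so generation continues from real tokens
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B").half().cuda()

prompts = ["The goose", "The kettle", "The catgirl", "The GPU"]
batch = tok(prompts, return_tensors="pt", padding=True).to("cuda")
out = model.generate(**batch, max_new_tokens=40, do_sample=True, use_cache=True)
print(tok.batch_decode(out, skip_special_tokens=True))
```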
kurumuz#5695: yeah
kurumuz#5695: v100 is totally memory bottlenecked
kurumuz#5695: you need good batching and can do some optimizations to help with memory bottlenecks
kurumuz#5695: v100 fp16
```
seq_len max_len runtime
128 168 1.1074562303999982s
256 296 1.0874227701999986s |
384 424 1.114802437600008s
512 552 1.1220599703999938s
640 680 1.1442525836999948s
768 808 1.1609761245000072s
896 936 1.18747184959999s
1024 1064 1.193353302300011s
1152 1192 1.2385049492000006s
1280 1320 1.274270927100008s
1408 1448 1.2970904247999897s
1536 1576 1.3185601567000163s
1664 1704 1.3666890028000125s
1792 1832 1.3869299345999822s
1920 1960 1.3995377770000004s
```
Kharr#7888: T4s max out at about 4-5 parallel sequences at fp16
Kharr#7888: Is this runtime in seconds?
EricHallahan#1051: Yes
EricHallahan#1051: They end with an `s`
EricHallahan#1051: lol
kurumuz#5695: yea |
kurumuz#5695: v100 is a monster
Kharr#7888: For how many tokens? max_len-seq_len or seq_len?
kurumuz#5695: max_len-seq_len
kurumuz#5695: they're all 40 tokens
Sid#2121: for 2.7B?
kurumuz#5695: yes
alstroemeria313#1694: ...can you just use the squared wasserstein-1 distance as the objective for G
alstroemeria313#1694: (yes, apparently, but unsure if this is actually any better, probably not)
alstroemeria313#1694: ...wait, in a WGAN, can you literally just use the gradients that got into G from the backward pass of D's loss function to train G?
alstroemeria313#1694: Like just negate their sign and train w/o re-evaluating the loss fn
alstroemeria313#1694: This works!
alstroemeria313#1694: I'm gonna write a gradient negater function so I can use this with other losses on G too
alstroemeria313#1694: Yeah, I was looking at the exact form of the loss functions while reading the RaGAN paper
alstroemeria313#1694: I also worked out how to use squared Wasserstein-1 distance to train G
alstroemeria313#1694: But like I said I doubt this is actually better
alstroemeria313#1694: negate_grad works
alstroemeria313#1694: So now I can train a WGAN-GP with additional losses on G
alstroemeria313#1694: With a single optimizer
alstroemeria313#1694: You just do: ```python
import torch

class NegateGrad(torch.autograd.Function):
    # Identity in the forward pass; flips the sign of the incoming
    # gradient in the backward pass.
    @staticmethod
    def forward(ctx, i):
        return i

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

negate_grad = NegateGrad.apply
```
alstroemeria313#1694: then negate_grad() the outputs of G when you feed them to D's loss
alstroemeria313#1694: All the other components of the D loss function, including gradient penalty if you did it right, only affect D's gradients
myuntha9#3097: https://www.engadget.com/chinas-gigantic-multi-modal-ai-is-no-one-trick-pony-211414388.html
alexyz#3459: yes
Kharr#7888: https://twitter.com/huggingface/status/1400566583890644992 -- the clickbait is real.
Kharr#7888: https://cdn.discordapp.com/attachments/729741769738158194/850405219727835146/unknown.png
Louis#0144: Someone really loves poop I guess
Kharr#7888: Too much internet content in the Pile?
Kharr#7888: Or Neo is just not great at this task: https://cdn.discordapp.com/attachments/729741769738158194/850409532484354067/unknown.png |
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/850410242777022464/Screenshot_20210604-102621_Chrome.jpg
Jonnathan#1234: It thinks insulting humanity is positive? RIP us
bmk#1476: for the record I had no idea HF was going to post this article until it actually went up on twitter lol
bmk#1476: :3berk: https://cdn.discordapp.com/attachments/729741769738158194/850410847893192715/unknown.png
bmk#1476: o.O switched to computer and it started working? https://cdn.discordapp.com/attachments/729741769738158194/850411181290160128/unknown.png
bmk#1476: uhhh
bmk#1476: ohh theyre not greedy sampling
finetune#0907: can't set temp to 0 either :sadge:
Louis#0144: Huggingface cringe
finetune#0907: added rope to my transformers, now i just need some weights to test :thonk:
Louis#0144: wheres the Jax -> HF converter
n.kh.l#5814: i tried it out and its pretty disappointing... whats up with that?
triggerhappygandi#0001: Scat
kurumuz#5695: a goose took it
n.kh.l#5814: im trying to finetune gpt neo 1.3B to generate questions from askreddit and i trained and it has ~0.008 loss but when i generate it generates (what seems to be) the default output
Louis#0144: sounds like ur doing something wrong
Louis#0144: we arent tech support though
Louis#0144: sorry
n.kh.l#5814: hmm ok np
kurumuz#5695: default output? |
kurumuz#5695: why your loss is that low
n.kh.l#5814: 🤷♂️
n.kh.l#5814: thats at 40k steps
n.kh.l#5814: by default output i mean that it doesnt seem to be finetuned at all, it looks like random conversations and stories
n.kh.l#5814: should i pastebin you one of the generated?
Kharr#7888: This might be a silly question but.. are you sure you saved and then loaded your checkpoints correctly? If the output looks like the default output.. maybe it is :thonk:
n.kh.l#5814: i thought so too but it says `loaded from checkpoint 400000`
n.kh.l#5814: one thing it could be is that my dataset is <=100 chars per sample but its generating 2048 chars
n.kh.l#5814: but im not sure something like that would make the training data completely useless
kurumuz#5695: its saying 40k?
kurumuz#5695: are you sure its not saying 400k?
n.kh.l#5814: oh sorry you're right 400k
kurumuz#5695: you're loading the default model
n.kh.l#5814: really?
kurumuz#5695: yes
n.kh.l#5814: so it was just not saving?
Kharr#7888: 🤣
n.kh.l#5814: bruh it literally says `Saving checkpoints for 391000 into gs://dataset/GPT3_XL/model.ckpt.`
kurumuz#5695: idk what you're doing
kurumuz#5695: neo 2.7b was trained until 400k steps |
kurumuz#5695: idk about 1.3b
n.kh.l#5814: ok im not sure but when i started training the first checkpoint it saved was `362000`
n.kh.l#5814: `This model was trained on the Pile for 380 billion tokens over 362,000 steps.`
n.kh.l#5814: :\ im fine retraining i dont really care i just want to know why its not working
Kharr#7888: If you train with an optimizer like Adam and lr 1e-4 you should start seeing the model adapt to your data within the first 1k steps. Best check at that point to make sure that the output is changing.
n.kh.l#5814: with the colab, can i tell it to generate samples every so often?
n.kh.l#5814: also, do you think its a problem that none of my data samples ever exceed ~100 characters?
n.kh.l#5814: should i just change the context size in that case
Kharr#7888: No, the model will just learn to write in sequences of 100 characters. I trained Neo to do auto content tagging and those are only a word or two. Works fine.
n.kh.l#5814: the default learning rate is 2e-4 and its already adam
n.kh.l#5814: what if there was like a `#tech-support` channel? im probably out of line for asking this because i ask a lot of questions and wouldnt really be able to help but i think that could be useful at least to get the occasional question out of general
bmk#1476: this server is not for tech support
n.kh.l#5814: fair enough
bmk#1476: if someone wants to make an unofficial tech support server they can go ahead and do that
n.kh.l#5814: yeah i really cant complain i wouldnt be able to help answer questions so its fine
bmk#1476: but please don't use this server for tech support
n.kh.l#5814: 👍
Zygma#6000: Was wondering where all of the bot commands were located
Zygma#6000: Wait i got it
Deleted User#0000: was just curious. Are people working on open eleuther projects (clap, carp, vision, multimodal, etc) working on them exclusively as side projects, or do some of u work on them as part of ur actualy job (i guess this is more directed to people in academia/doing phds/postdocs/etc) ? |
Deleted User#0000: and if the latter, are u colleagues ok with showing all the research publicly pre-publication stage?
Louis#0144: I volunteer here fulltime
Louis#0144: yes
Deleted User#0000: how?
Louis#0144: during prior terms when I worked at GT they were ok with it
Louis#0144: I do research though
Deleted User#0000: seems uncommon in academia to me
Louis#0144: my prof is chill
Louis#0144: I shared my code here
Louis#0144: and unfunished papers
Louis#0144: unfinished*
Deleted User#0000: nice
Deleted User#0000: my prof seems quite chill too. But I like to collaborate, and the more collaborators the more likely that maybe some are not ok
Deleted User#0000: which is such a conundrum
Louis#0144: where do u go
Deleted User#0000: where am i?
Louis#0144: ye
Deleted User#0000: im in Inria in Bordeaux, France
Deleted User#0000: just joined as a postdoc
Louis#0144: oh yeah |
Louis#0144: *French*
Deleted User#0000: im not french
Deleted User#0000: but my advisor is chill about it i think, but some of my collaborators from sweeden i think are not as much
Deleted User#0000: so hmm
Deleted User#0000: maybe i'll speak with them to be more sure
Louis#0144: sweeds not chill?
Deleted User#0000: seems so
Deleted User#0000: well one of them seems more chill than the other
Deleted User#0000: i donno i dont wanna make sterotypes lol
Deleted User#0000: but in general many people in academia are not into sharing everything
Kharr#7888: Flag planting is real in academia.
Louis#0144: i share everything
Louis#0144: idgaf
Louis#0144: no one is gonna steal my ideas anyway
Louis#0144: lmao
Louis#0144: and the more public I am about it
Louis#0144: the easier it is to point them out as flag planting
Deleted User#0000: im just thinking about whether to position myself as sharing everything, even if that may cut some potentially quite useful collaborations?
Deleted User#0000: or maybe i can be more subtle / less extreme. And be open to all types of collaborations, but just "happen" to spend more time / be more interested on the open ones
Deleted User#0000: but yeah i know open research is what I want, just need to figure out how to interact with others |
Deleted User#0000: interacting with others is always the hard part for me in general lol
StellaAthena#3530: @Deleted User This is going to sound dumb, but just ask them.
Deleted User#0000: Yeah I should just do that~
Deleted User#0000: thanks for the advice
StellaAthena#3530: 🙂
StellaAthena#3530: Usage stats from HuggingFace. This counts the number of times people downloaded the model using the `transformers` library https://cdn.discordapp.com/attachments/729741769738158194/850463956622770176/Screen_Shot_2021-06-04_at_3.55.19_PM.png
gwern#1782: 100k? I wonder what they do with it
Deleted User#0000: woah nice
StellaAthena#3530: We are the fourth most used causal LM 😮 https://cdn.discordapp.com/attachments/729741769738158194/850464517774770206/Screen_Shot_2021-06-04_at_4.01.59_PM.png
Louis#0144: Curious why someone would still pick GPT2 over neo tbh
Louis#0144: The small GPT2s are basically unusable
EricHallahan#1051: Because there are two model sizes that GPT-Neo does not offer?
Louis#0144: Yeah but what would they be using them for?
EricHallahan#1051: ¯\_(ツ)_/¯
Louis#0144: Embeddings? Prob not
Louis#0144: Finetuning? Good luck
Louis#0144: etc
EricHallahan#1051: Though also if you are using GPT-2 XL you are pretty dumb.
Louis#0144: Yeah true
bmk#1476: maybe because "EleutherAI/gpt-neo-1.3B" is multiple bytes longer than "gpt2-xl" and they dont have enough disk space to store this new, larger string |
Louis#0144: LOL
Sphinx#2092: @StellaAthena You might find this humorous: https://scontent-frx5-1.xx.fbcdn.net/v/t39.8562-6/196203317_1861942553982349_5142503689226033347_n.pdf?_nc_cat=110&ccb=1-3&_nc_sid=ae5e01&_nc_ohc=ibkQ1m-Hhn4AX8IcowK&_nc_ht=scontent-frx5-1.xx&oh=eeaff44906f4a49bd4e73ddf47c516f9&oe=60DE1F0D .
Sphinx#2092: The same people from CC-Aligned released Flores 101, dev sets for 101 languages.
Sphinx#2092: One of the sections in the paper is literally called "Flores-101 at a glance" lol
bmk#1476: is it actually good this time around?
bmk#1476: the abstract sounds promising
Sphinx#2092: Flores has always been good.
Sphinx#2092: Like they did nepali and sinhala with humans, same with khmer and pashto.
Sphinx#2092: Though it's a big jump from 4 to 101.
bmk#1476: i meant like in comparison to cc-aligned
bmk#1476: since you mentioned it's the same people
Sphinx#2092: Well same people as in Facebook lol. But the difference is that Flores is just dev/test sets.
bmk#1476: oh
bmk#1476: i thought you meant literally the same authors
Sphinx#2092: So its much more tractable versus making training sets such as CC-Aligned.
bmk#1476: right makes sense
Sphinx#2092: There is some intersection.
bmk#1476: is flores-101 big enough to train langid?
Sphinx#2092: No clue.
bmk#1476: ah k |
bmk#1476: seems like it could be a really good training source for a classifier
Louis#0144: Looks like solely a dev set
bmk#1476: since i assume you need less data totrain a classifier than a translation model
Sphinx#2092: Either way, it's really good that we have these dev sets though. Even if the training data is shit, at least we can still make progress.
StellaAthena#3530: I would be more amused if they cited us. The fact that they don't cite any examples of good validation work not done by the authors is rather fishy IMO.
Sphinx#2092: Huh that is odd.
StellaAthena#3530: I wonder if they have beef with Isaac. Somehow they managed to cite 0 papers he was an author on.
AI_WAIFU#2844: 100k downloads is pretty wild.
alstroemeria313#1694: Wow I've never seen 10 not be strong enough for the WGAN-GP gradient penalty weight
alstroemeria313#1694: D loss was going down faster than 10 times the gradient penalty was going up.
alstroemeria313#1694: It diverges when this happens.
Teemochu#8740: There are a number of AI Dungeon clone scripts that download 2.7B as a default option
Teemochu#8740: curious how many of those 100k come from KoboldAI
bmk#1476: are there *that many* AID users? o.O
EricHallahan#1051: You seem to highly underestimate that number constantly.
Teemochu#8740: there are 8000 in a server largely consisting of people fed up with recent events, I'd be mildly surprised if there haven't been at least that many model downloads associated with it
EricHallahan#1051: Especially if they are all in Colab.
Teemochu#8740: @finetune might have more stats since he made the third-most-well-known of the scripts (and the one that runs best in colab)
finetune#0907: not like there's analytics in there :berk:
finetune#0907: eyeballing the numbers tho, i'm not sure aid clones make up that much. 125M was never announced and still has 21k, probably mostly grabbed by people who want to test stuff more quickly while working with a bigger neo. probably at least as many people are doing stuff with it without having heard of 125M
gwern#1782: I thought they all used the finetuned version which wouldn't count here? but I suppose if they're lazy and just use the original that could account for a huge number of downloads easily, sure. esp if they have to redownload the whole model regularly...
Teemochu#8740: Kobold offers standard Neo by default
Teemochu#8740: but I think it recommends a 3090 for that (iirc Kobold says 16gb even though I think it uses half behind the scenes now)
kurumuz#5695: a 2070 should be good enough 🤔
kurumuz#5695: or 3080
kurumuz#5695: fp16 master race
finetune#0907: 1070 ti
finetune#0907: works
aero#1357: 2.7B only uses 7gb vram in my experience (the HF version)
aero#1357: im really curious about 6B though.. been playing around with mesh-transformer-jax with the 6b config and it really doesnt like fitting on my gpu
EricHallahan#1051: https://discord.com/channels/729741769192767510/730095596861521970/850485484165791765
Teemochu#8740: bf16 masterer racer
kurumuz#5695: bf16>fp16>fp32
kurumuz#5695: change my mind
Teemochu#8740: what's your mind's learning rate
kurumuz#5695: dynamic
kurumuz#5695: probably
Teemochu#8740: catgirl bf16>fp32>fp16
Teemochu#8740: catgirl bf16>fp32>fp16
Teemochu#8740: catgirl bf16>fp32>fp16 |
finetune#0907: bf16 sure would be nice if it worked for me in pytorch
Teemochu#8740: there that should change it a bit
bmk#1476: bf16>fp32>fp16
kurumuz#5695: idk why it doesnt work
bmk#1476: fuck fp16
Teemochu#8740: sorry for the batch size of 1 in my messages
kurumuz#5695: fp16 just works :ultraberk:
kurumuz#5695: well
kurumuz#5695: if it loves you
Teemochu#8740: it just works until the singularity
bmk#1476: new format: cg16 (CatGirl16)
kurumuz#5695: :TODD:
kurumuz#5695: it just works
ChaosAlpha#5829: Not sure if this is the right place to ask, but what would be the recommended minimum VRAM to fine-tune the different size variants of GPT-Neo on a GPU?
alexyz#3459: rule of thumb: take the amount of parameters and x16
alexyz#3459: plus a bit extra
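As a worked example of that rule of thumb, assuming Adam in fp32 (4 bytes for weights, 4 for gradients, 8 for the two optimizer moments = 16 bytes/param, before activations):
```latex
1.3 \times 10^{9} \ \text{params} \times 16 \ \text{bytes/param} \approx 20.8 \ \text{GB}
```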
ChaosAlpha#5829: Hmm, that's what I feared. Thank you for the tip!
EricHallahan#1051: This is the best place to ask, despite it not looking like it.
EricHallahan#1051: Yes, you will need quite a bit of memory to tune.
ChaosAlpha#5829: I shall temper my initial expectations of being able to run the billion-parameter variants on my current setup 😅
ChaosAlpha#5829: Out of curiosity, have "intermediate size" variants (below ~1B but above ~100M) been considered?
aero#1357: theres a project for finetuning using deepspeed but it offloads to system memory, 2.7B requires 80gb+ system ram 😅 but only like 16gb vram
ChaosAlpha#5829: 🤞 it works with the 125M on one of the 1080Ti on the server I'm using
Sid#2121: we have some trained with neox that we'll release at some point 🙂
ChaosAlpha#5829: Interesting. I will follow the progress in that project then. Also yay, the 125M runs \o/
Daj#7482: I'm curious what your usecase for 125M is?
Daj#7482: I often just label them in my mind as useless lol
alexyz#3459: didn't some people do evals on it and saw it was better than the smallest GPT2?
alexyz#3459: then again, the smallest GPT2 is also pretty useless
StellaAthena#3530: Yeah I’m impressed when the smallest GPT-2 can write a sensical and grammatical sentence.
Daj#7482: Yea I'm curious what people use small model like these for
ChaosAlpha#5829: I'm just testing different architectures at this point, the tasks aren't that demanding, though a GPT-like model is probably not super well adapted for it. But hey, I'm basically throwing spaghetti against the wall and seeing what sticks 😅
alexyz#3459: well spaghetti can be quite sticky
ChaosAlpha#5829: Loss is moving down at least, though so far 0% accuracy still, but that's pretty normal for the way it's setup. Fortunately the dataset I'm testing on isn't huge so I'll let it run during the night and see how it did in the morning. Though yeah, 125M is a bit on the lower end of what I tried so far.
ChaosAlpha#5829: So far the best success I've had was using BART-large (406M)
Sid#2121: what's the task?
ChaosAlpha#5829: But they're wildly different architectures so can't really transpose anything.
ChaosAlpha#5829: Explanation Generation
Louis#0144: https://www.reddit.com/r/MachineLearning/comments/nshlhw/p_dynamic_image_editing_using_clip_and_lgrammars/ eleuther official project just dropped 😳 (the code, the paper soon)
Louis#0144: (before u say anything sid i got perm from connor to say its eleuther official) |
gwern#1782: 'compliment'. you should remove the double newlines. and this should've been an image upload to demo, or at least link the examples of what it can do first
gwern#1782: what is https://twitter.com/lcastricato/status/1394436280239501316?s=20 even doing? 'transfer' from what to what? is that supposed to be... a lion, or something? was there a lion picture involved somehow?
Louis#0144: True
Louis#0144: I can’t edit it at this point I guess
Louis#0144: Lmao
Louis#0144: It’s a text post first tbh
Louis#0144: It’s not an image of a lion, it’s style transfer from a text description to a segmented part of an image
𓅬 gabriel_syme 𓅬#3220: the practical tips I'm here for
𓅬 gabriel_syme 𓅬#3220: Kharr was saying 125M is pretty amazing for its size which can be really enticing imo given you can run it on a CPU comfortably
alstroemeria313#1694: ahah, I can train a WGAN-GP on a single batch of MNIST reals and it doesn't collapse
alstroemeria313#1694: and if I use DiffAugment I can train it on a *single* MNIST real
alstroemeria313#1694: And it will literally start to produce outputs of just that real.
UnsupervisedLearner#4148: Right? I share my ideas in hopes someone less lazy takes them and actually works them out
Kia#2550: I- :blobsad:
Kia#2550: True
UnsupervisedLearner#4148: Unbaked idea
Transformer/gMLP alternates computation on token and global context
Is there a meaningful third computation possible? |
UnsupervisedLearner#4148: Mixing global contexts?
UnsupervisedLearner#4148: Secondary
The FFN has no dynamic component. From MLP attention paper, a lightweight dynamic component on token mixing has disproportionately high effect on capacity.
Add dynamic component to FFN?
UnsupervisedLearner#4148: Third and last one before I get *too* annoying.
SSL in vision and language modeling is very different. Vision working best with very strange BYOL kinda setups while language *seems* to do well with simple next element prediction or MLM. Has anyone attempted to apply SSL concepts that make vision work on language models?
gwern#1782: images work great with next-element prediction or mlm. it's just too *expensive* because they're damn long sequences
UnsupervisedLearner#4148: 16x16 tokens for early ViT, right? That's not huge, but I don't see BERT style training for them. Maybe I just missed the research when it came out, I'm definitely still catching up with vision transformers
alstroemeria313#1694: You can just apply an autoregressive loss to sequences of VQGAN tokens or smth
alstroemeria313#1694: Indeed this is what VQGAN is for
UnsupervisedLearner#4148: In that case why would FB go through the trouble with DINO and related for their announced giant vision model?
Deleted User#0000: does EleutherAI have a twitter account?
Kia#2550: I-
Kia#2550: Hmm
Kia#2550: We can probably ask @AI_WAIFU
Deleted User#0000: ok thanks
Kia#2550: No problem :o |
EricHallahan#1051: No.
EricHallahan#1051: Nvm
Kia#2550: Wait we have?
EricHallahan#1051: No, I was late to answer.
Kia#2550: Ow
Kia#2550: It would be lovely to follow it :blobsad:
𓅬 gabriel_syme 𓅬#3220: I'm guessing he was referring to the flow-type models? they had great quality but not efficient at all
UnsupervisedLearner#4148: I ackshually tink he was referring to schemes like PixelCNN
𓅬 gabriel_syme 𓅬#3220: but yeah in ViT type models (and even the new MLPs) it's a sequence of patches, but I'm guessing those patches might not be fine grained enough for autoregressive contexts?
UnsupervisedLearner#4148: Which are incredibly expensive, and were SotA for a while before they figured out to make VAEs deeper
UnsupervisedLearner#4148: Yeah I'm really unsure here. I just think it's strange that we just use MLM and autoregressive stuff and call it a day on language modeling
𓅬 gabriel_syme 𓅬#3220: it might be that the opposite direction, vision-> language is more interesting?
𓅬 gabriel_syme 𓅬#3220: (I wouldn't know how to start)
UnsupervisedLearner#4148: Wdym? Like multimodal?
𓅬 gabriel_syme 𓅬#3220: ehm, idk patches? 😄
𓅬 gabriel_syme 𓅬#3220: like literal images
UnsupervisedLearner#4148: I don't understand what you're pointing to here
gwern#1782: I don't think anyone's done iGPT with a MLM loss but I vaguely recall all of the PixelCNN and pixelRNN and pixelSnail and what have yous experimenting with various orders and deletion spans and hierarchies and they all work pretty well so I assume they are good examples along with iGPT
gwern#1782: (but those always have the problems of like iGPT being extremely expensive and a lot of the fiddling with them is just trying to bring the cost down)
UnsupervisedLearner#4148: Guessing we'll see what's up when video GPT arrives:brr: |
UnsupervisedLearner#4148: I don't know why, in retrospect, video GPT would tell us much about this. I'm just excited for it
alstroemeria313#1694: So can you do automatic lr tuning by like… at each step, sampling a step size in a distribution around the current mean value
𓅬 gabriel_syme 𓅬#3220: are we that certain that's what is coming? a video GPT
alstroemeria313#1694: And making the mean drift up or down depending on if the shorter steps or longer steps were doing better on average
alstroemeria313#1694: Mb I should look into stochastic line search actually
UnsupervisedLearner#4148: https://youtu.be/429QC4Yl-mA?t=1157
Ethan linked this in the scaling room yesterday
Kia#2550: I mean... It's probably next to GPT>Audio GPT>Image GPT>Video GPT
UnsupervisedLearner#4148: @alstroemeria313 a lot of meaning is put into sharp vs flat minima, anything related to that?
alstroemeria313#1694: Idk
Kia#2550: Hilarious idea getting all the digital mediums 😄
CRG#8707: Appendix of the original ViT paper: <https://arxiv.org/abs/2010.11929> https://cdn.discordapp.com/attachments/729741769738158194/850631721056469022/Screenshot_20210605-090530.png
Deleted User#0000: Hi
chirp#4545: Is there a smaller version of GPT-Neo?
chirp#4545: Smaller than 1.3B
chirp#4545: I wanna try out the Key-Value Memories thing (https://arxiv.org/abs/2012.14913) but I want to start small
CRG#8707: <https://huggingface.co/EleutherAI/gpt-neo-125M>
chirp#4545: ^ also if any of y'all know any gotchas from that paper, now would be a good time to let me know 🙂
chirp#4545: also, is there a representative subset of the pile that's easy to download? |
chirp#4545: i'd rather not download 800GB 😛
chirp#4545: ah figured it out
marmiteCloud#5923: Apologies for a little ignorance here - I have used this and the other models and they are fantastic. It's excellent they are trained on Wikipedia unlike the GPT-2 ones... I noticed aitextgen mentions a 350m GPT-Neo model, is that a thing?? It does not appear on huggingface like 125M... (it is a typo for GPT-3-350 maybe..)
CRG#8707: https://discord.com/channels/729741769192767510/729741769738158194/845083472191029268
Daj#7482: There will be official intermediate sized models eventually, the ones currently floating around were kinda released by accident lol, but some people say they're good
kurumuz#5695: distilneo when
alstroemeria313#1694: hm apparently you can train an MNIST classifier with SGD + Armijo backtracking line search
marmiteCloud#5923: Ah, thank you. Yes, fast for establishing domain specific language.
alstroemeria313#1694: but it doesn't do as well on the validation set as one trained with Adam?
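the Armijo step is roughly this (untested sketch, constants are just typical defaults):
```python
import torch

def armijo_sgd_step(model, closure, lr0=1.0, c=1e-4, beta=0.5, max_backtracks=20):
    # closure() recomputes the minibatch loss (forward only, no backward)
    loss = closure()
    model.zero_grad()
    loss.backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.detach().clone() for p in params]
    gnorm2 = sum((g * g).sum() for g in grads)
    lr = lr0
    with torch.no_grad():
        for _ in range(max_backtracks):
            for p, g in zip(params, grads):
                p -= lr * g
            if closure() <= loss - c * lr * gnorm2:  # sufficient decrease
                return lr
            for p, g in zip(params, grads):          # undo step, shrink lr
                p += lr * g
            lr *= beta
    return 0.0
```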
StellaAthena#3530: Code is in progress, haven’t finished integrating it into the main codebase
Napolean_Solo#2907: Hi guys anybody here good at DBs?
Napolean_Solo#2907: Needed a bit of help
Napolean_Solo#2907: How can I add user-submitted files to a database such that only the user who uploaded them can access them?

Napolean_Solo#2907: What would the database schema look like?
Napolean_Solo#2907: Am using Flask if that helps
Daj#7482: Please read our #rules, we are not tech support
alstroemeria313#1694: hm https://cdn.discordapp.com/attachments/729741769738158194/850731117102497805/demo_w2.png
Kia#2550: Ow
Kia#2550: 👀 I think I saw this somewhere
Kia#2550: Really lovely introduction on neural networks |
UnsupervisedLearner#4148: Thank you for the reference!
Now the mystery deepens. Because if patch prediction works, why use very engineered strategies like DINO?
And if very engineered strategies *work better* than masked prediction, why haven't they migrated to language? Is it a difference in richness of the signal?
𓅬 gabriel_syme 𓅬#3220: what aspects of DINO do you think are very engineered?
UnsupervisedLearner#4148: The whole thing. It's a weird way of learning a representation vs just a joint probability model over the tokens https://cdn.discordapp.com/attachments/729741769738158194/850749348030447616/IMG_20210605_095300.jpg
𓅬 gabriel_syme 𓅬#3220: interesting, I feel the things they reference above might be more engineered smh
𓅬 gabriel_syme 𓅬#3220: like imagine classic contrastive learning for e.g. with all the sampling and losses going on. But I'm not the person to confidently say which one is easier to implement / train
chinesesoup#6725: You guys think it would be useful to use gptneo to try and suggest messages for support tickets? I'm thinking about taking a pretrained model, and then finetune it on a dataset with support conversations.
The idea would be that the ones who provide the support check the autogenerated answer and modify it if needed. After the ticket is closed it could be added to the dataset and retrained once the dataset contains x amount of support tickets or after x amount of time to increase the accuracy over time. Does this seem like a viable approach?
bmk#1476: !faq
Carl-bot#1536:
alstroemeria313#1694: eheh. https://cdn.discordapp.com/attachments/729741769738158194/850770838868590612/demo-73.png
bmk#1476: this is WGAN with 2-wasserstein?
alstroemeria313#1694: it is ordinary WGAN-GP
alstroemeria313#1694: I still do not understand wasserstein-2
alstroemeria313#1694: However. I actually got a conditional GAN to work.
alstroemeria313#1694: Doing feature extraction with a convolutional part and feeding in the features and the condition to an MLP wasn't working for me at all. |
alstroemeria313#1694: So instead I put a modulation layer after the first conv layer in the discriminator.
alstroemeria313#1694: i.e. two linear projections from the condition vector generate channel-wise shifts and scales for the first conv layer's output.
alstroemeria313#1694: Sticking the condition information into the discriminator as early as possible seemed to make it work.
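the kind of layer I mean, roughly (untested sketch):
```python
import torch
from torch import nn

class Modulation2d(nn.Module):
    """Channel-wise scale/shift of conv features from a condition vector."""
    def __init__(self, cond_dim, channels):
        super().__init__()
        self.to_scale = nn.Linear(cond_dim, channels)
        self.to_shift = nn.Linear(cond_dim, channels)

    def forward(self, x, cond):
        # x: [n, c, h, w], cond: [n, cond_dim]
        scale = self.to_scale(cond)[:, :, None, None]
        shift = self.to_shift(cond)[:, :, None, None]
        return x * (scale + 1) + shift  # +1 so it starts near identity
```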
UnsupervisedLearner#4148: Do you think a room like #technical-help would make it easier to handle these requests? It would perhaps funnel newcomers at least, and people could choose to answer or not without clogging discussion here
EricHallahan#1051: That idea is proposed literally every two weeks.
UnsupervisedLearner#4148: Well sorry for spam then
bmk#1476: someone can make an unofficial support discord
bmk#1476: that someone just wont be me, is all
alstroemeria313#1694: I wonder if I could make a CLIP embedding conditional WGAN-GP using *this technique alone*
alstroemeria313#1694: Or if it still needs the contrastive loss on G.
Louis#0144: should just instamute people who mention HF
Louis#0144: 😉
Louis#0144: jkjk
Sid#2121: the point is we don't want to encourage those sorts of questions. We are mostly just here to do research. We're not a for profit org with any debt to people who use the results of our work, or any responsibility to help them. We just do research and put it into the world, and prefer to focus on that.
alstroemeria313#1694: Like the idea is from BigGAN's class conditional batchnorm / StyleGAN's modulated convolutions
alstroemeria313#1694: Except I use them in D too.
alstroemeria313#1694: Like even putting one at the start of D was a huge improvement.
UnsupervisedLearner#4148: Honestly I was thinking it would decrease the issue in the main chats, I didn't realize it was an already decided-on suggestion
alstroemeria313#1694: Actually those MNIST fakes were made using a regular G and a single modulation layer in D.
Sid#2121: having a channel dedicated to it is only going to increase the amounts of tech support questions we get full stop. It's like how building an extra lane in a road never decreases traffic. |
aquajet#7800: Where did the 16x n_param rule of thumb come from?
aquajet#7800: Is it cause of fp16
kindiana#1016: each param is 4 bytes, and you need about 4 buffers the same size as your parameters (params, grad, adam params x2)
EricHallahan#1051: and by 4 bytes we assume binary32
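so e.g. for a 1.3B model, back of the envelope:
```python
n_params = 1.3e9                     # e.g. GPT-Neo 1.3B
bytes_needed = n_params * 4 * 4      # 4 buffers x 4 bytes (binary32)
print(f"{bytes_needed / 2**30:.1f} GiB")  # ~19.4 GiB
```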
inox#5400: I really want to make a starboard-like bot that lets you star replies to questions and stores the replies so you get a micro stackoverflow built into discord
inox#5400: even better if it searches the question archive when people post and suggests answers that already exist
inox#5400: but that requires smarts
Sid#2121: just tune a bert model on the discord logs :ultrazucc:
alstroemeria313#1694: Hey what's the thing called where you train a generative model by minimizing the squared Wasserstein-2 distance between a batch of fakes and a batch of reals?
alstroemeria313#1694: I mean, without a discriminator.
alstroemeria313#1694: It seems related to IMLE?
alstroemeria313#1694: from it https://cdn.discordapp.com/attachments/729741769738158194/850788967212253214/demo-76.png
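one way to compute it exactly on a batch (untested sketch, exact assignment via the Hungarian algorithm):
```python
import torch
from scipy.optimize import linear_sum_assignment

def batch_w2_squared(fakes, reals):
    # fakes, reals: [n, d] flattened images, same batch size
    cost = torch.cdist(fakes, reals) ** 2   # pairwise squared distances
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    matched = cost[torch.as_tensor(rows), torch.as_tensor(cols)]
    return matched.mean()                   # differentiable w.r.t. fakes

# loss = batch_w2_squared(G(z).flatten(1), reals.flatten(1)); loss.backward()
```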
Zygma#6000: I suppose someone here has proposed a question to the imagine bot. Do you notice any sort of trend when a prompt is presented as a question?
Sahl#0630: have a channel dedicated for it but it’s invisible if you have a role
alstroemeria313#1694: aaaaa
alstroemeria313#1694: The Wasserstein-2 GAN computes an optimal transport mapping *in pixel space* each iteration and uses that to train the discriminator?
alstroemeria313#1694: Between the current fakes and reals?
alstroemeria313#1694: IDGI
alstroemeria313#1694: We can just backprop through good enough approximations of Wasserstein-2 in pixel space what do you even need D for
AI_WAIFU#2844: At this point I feel like we need an FAQ entry for this point. People have asked and we have answered so many times. |
Daj#7482: I second this, would appreciate someone writing something up
EricHallahan#1051: I can do it in a couple minutes.
chinesesoup#6725: Would it be of any use for you guys if I scrape extra data so it later could maybe be added to thepilev2?
I was mainly thinking about free books. Or just books in general although there would probably be books in there that the author originally wanted to get paid for if I just scrape random books.
chinesesoup#6725: I'm a coder but I'm not really knowledgeable about stuff like language transformers, but scraping high quality data is probably something I could achieve relatively easily
nev#4905: nope
nev#4905: but you can experiment
Daj#7482: Hey there! Thanks for the offer, but I think the pile v2 is kinda on hold indefinitely, since no one really seems interested in putting in the (massive) amount of work to put it together. I think there has been quite a lot of interest in #multimodal for building massive text/image pair datasets, though I'm not familiar with the current status there
EricHallahan#1051: Actually, I'll have to do it later. Someone please remind me to do it if I haven't in a few hours.
moopaloo#7562: Has anyone tried distilling the smaller sized models to see if distillation works at smaller scales?
chinesesoup#6725: I would be willing to try, I could also look into getting text/image datasets but that would be a bit harder.
I could probably spin up a raspberry and just let it scrape with a few tb storage or more attached
AI_WAIFU#2844: I guess if you want to take charge of thepilev2 we would welcome that.
AI_WAIFU#2844: But it's a tremendous amount of work
Daj#7482: That's a bit of an overextension to place on someone new to the group lol
chinesesoup#6725: Yea I mean I don't mind trying, but the thing is the data has to be correct
Daj#7482: Unfortunately I'm not exactly sure who is the right person to talk to about #multimodal . @Aran Komatsuzaki ? @Louis ?
chinesesoup#6725: So I definitely gotta look a bit more into it if its text/image pairs |
Daj#7482: I know @spirit-from-germany has been working on multimodal datasets, maybe he can help
Louis#0144: hi so #carp is currently doing controllable NLG with the eventual goal to use it for grounding
Louis#0144: we have a visual grounding project that is on hold rn
Daj#7482: @chinesesoup was interested in doing scraping potentially, wasn't sure who to ping
AI_WAIFU#2844: What if we made some kind of submission pipeline for pilev2 data. Then when people want to contribute we can just say put it in this format and get this information and then just let it accumulate.
Daj#7482: ask @bmk lol
Daj#7482: You'll probably trigger his PTSD :ptsd:
AI_WAIFU#2844: And by "we make" I mean "find a volunteer"
chinesesoup#6725: Yea I don't got any fiber connection or anything so it might take a while tho, but I don't mind letting it run for a few months or so if that would be needed
chinesesoup#6725: 🤣🤣
Zygma#6000: Fs, i think i might pose prompts and then pose those same prompts as questions and see if theres a difference
Daj#7482: Unfortunately I'm not involved in any data collection efforts atm, so I'm not super helpful, I apologize ¯\_(ツ)_/¯
AI_WAIFU#2844: Like I don't think it hurts to gather some data into an easily usable format. But it won't be usable for quite a while.
Daj#7482: If you wanna do SWE stuff, looking into better ways of _cleaning_ text data might be much higher bang for your buck
Daj#7482: But that's really tricky
Louis#0144: @chinesesoup def talk to @spirit-from-germany
Louis#0144: I do not do image scraping
Louis#0144: im doing text scraping of stories though
Louis#0144: idk if thats of interest
Daj#7482: Both HTML->Text and PDF->Text are pretty terrible even with the SOTA software, especially with non-english
Daj#7482: It would be _massively_ useful to improve either
nev#4905: yeah
AI_WAIFU#2844: Yeah if you can figure out better ways of extracting text from PDFs/HTML that would be super useful.
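even a naive baseline shows the problem, something like this (untested sketch using pdfminer.six; the cleanup after this step is the actual hard part):
```python
from pathlib import Path
from pdfminer.high_level import extract_text

def pdf_dir_to_text(src, dst):
    for pdf in Path(src).glob("*.pdf"):
        try:
            text = extract_text(str(pdf))
        except Exception:
            continue                  # skip unparseable files
        (Path(dst) / (pdf.stem + ".txt")).write_text(text, errors="ignore")
```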
chinesesoup#6725: Yea I'd prefer text scraping since that is probably easier to check
chinesesoup#6725: And yes I was thinking about html and pdf
chinesesoup#6725: There is a huge amount of pdfs available
quinn#9100: @bmk @Daj https://trello.com/b/LZlz29Yr/server-projects-menu what do you think. If adopted by the server could solve the problem of new people coming in and not knowing where to plug in. I was envisioning you'd use the `seed` column to publicly port the private list of ideas that you said 2 people have seen.
i made an eleuther org on trello too https://trello.com/eleutherai
Daj#7482: Yea and unfortunately they're almost totally unusable from our experience
Daj#7482: Because PDF->TXT is just so bad
AI_WAIFU#2844: You gotta write the code to make it work
chinesesoup#6725: I could try to look into that, maybe something that discards pdfs that contain images
chinesesoup#6725: Or refer to images
Daj#7482: I don't have a lot of experience with project management, but I like this idea
EricHallahan#1051: We are in the process of working toward that goal.
AI_WAIFU#2844: Try it and see what happens.
StellaAthena#3530: @quinn can you elaborate on how this is significant different from GitHub’s project boards?
quinn#9100: i don't think it is different. just columns and tickets.
quinn#9100: accessibility might be better on trello tho |
quinn#9100: i.e. easier to wrangle permissions for
quinn#9100: a github version of the same idea is fine
chinesesoup#6725: Usually pdfs are high quality so it should work theoretically, the size of the pdf files will probably be a big bottleneck tho lol
quinn#9100: the point i'm making is a meta-project board that tracks the status of projects (and then each individual project can use whatever it wants)
StellaAthena#3530: Our problem isn’t project management software. It’s project managers
quinn#9100: we were talking in Int reading group today about the problem of going from conceptualization to shovel-ready for an individual project
Daj#7482: You weren't in the interpretability call
AI_WAIFU#2844: That's less of an issue. If we need resources we can get them. Quality is the bottleneck. The software needs to be able to consistently produce usable text from PDFs.
Daj#7482: We talked about this mostly for that group specifically which has a number of projects they wanna try to make shovel ready
quinn#9100: yeah the `Server Projects Menu` is intended to be like "hi i'm new to the server i want to help out, i see this project is in need of a PM, i can dive in"
Daj#7482: This is similar to what we were discussing in L5, Stella
Daj#7482: But not the same discussion
StellaAthena#3530: @Daj Ah, I didn’t know about that context.
EricHallahan#1051: Same
chinesesoup#6725: I'm thinking, wouldn't it also be possible to use pdfs to create image/text pairs? They would have a pretty big description then tho
chinesesoup#6725: I'm gonna try to code something in .net core or python
AI_WAIFU#2844: Awesome, I recommend python, it's the lingua franca of ML.
bmk#1476: it's not really a private list, I've posted it 4 times, every time in a public channel
quinn#9100: ah word
AI_WAIFU#2844: Yeah but discord is shit for that. You gotta put it somewhere that's not hidden or buried under thousands of messages.
bmk#1476: and it literally started 2 months ago so I've been averaging one time per 2 weeks
chinesesoup#6725: Cool, you guys planning to make something that can read text and images at the same time? Because I could probably create the scraper to get pdfs, then filter the pdfs without images and take the text, then take the pdfs with images and make the text reference the images separately. So the model could probably get useful info from the images and text combined in a more usable format. However I know little about data science so I'm not sure if it's practical to implement something like this in a model, for me it would seem close to impossible lol
AI_WAIFU#2844: I would start with just text. There's definitely interest in multi-modal stuff, but from an engineering/legal perspective dealing with images is a bigger pain in the ass.
chinesesoup#6725: Does it matter a lot if there is some duplicate data? Or should I check this and discard anything with an over x% match?
AI_WAIFU#2844: Yeah check for duplicates
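exact dupes are cheap to kill with hashing; for the x% fuzzy matching you'd want something like MinHash instead. untested sketch:
```python
import hashlib

def dedup(docs):
    seen, out = set(), []
    for doc in docs:
        # normalize whitespace/case so trivially-reformatted copies collide
        key = hashlib.sha1(" ".join(doc.split()).lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            out.append(doc)
    return out
```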
Exocamp#8255: *Continuing* continuing my ramble on "make one device train huge ai, somehow", I noticed just now the existence of Mesh TF
Exocamp#8255: Would that be able to assist with the idea of "use small pieces of data to consistently train up"?
nickt#8694: I agree with this - random suggestion: add it to the rules post and call that the info channel or something? (whatever board/site/mechanism people decide)
StellaAthena#3530: I’m taking a knowledge-based AI course and for my term project I need to train an AI to solve raven’s progressive matrices problems. I’m thinking of fine-tuning a transformer for this… has anyone done something similar before?
chirp#4545: @StellaAthena i think openai almost got dall-e to do it
chirp#4545: check out their blog post
Louis#0144: Ya
Louis#0144: I have
bmk#1476: is the course intended for GOFAI?
Louis#0144: I have lots of experience with symbolic AI and NLP
bmk#1476: because solving Raven using ML sounds really hard
Louis#0144: Transformer will totally work
Louis#0144: Although a GCN over the decision space would work way better
Louis#0144: Lmao
Louis#0144: GCN + tree search is 😘 👌 |
StellaAthena#3530: Yeah
bmk#1476: are you expected to parse the images or is that part already done for you
bmk#1476: for the former, iGPT had a ton of trouble solving it
StellaAthena#3530: That part is pretty easy
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/850836121692143676/image0.png
StellaAthena#3530: The main challenge is to figure out which number goes next in the sequence
bmk#1476: oh so its not the full raven
StellaAthena#3530: No
StellaAthena#3530: Maybe?
StellaAthena#3530: What’s missing for “the full Raven”?
bmk#1476: is everything made up of 45 degree rotated squares in the task you are assigned to solve?
StellaAthena#3530: > Your ultimate goal is to submit a final project that attempts all 192 problems
StellaAthena#3530: No
StellaAthena#3530: That was an example problem
bmk#1476: so then parsing sounds nontrivial
StellaAthena#3530: Others look totally different
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/850836582801997864/image0.png
Louis#0144: I took this course
Louis#0144: How easily do u wanna do this
Louis#0144: Lmao |
StellaAthena#3530: IDK
bmk#1476: so im saying parsing the images sounds nontrivial
StellaAthena#3530: It could be interesting? But it’s probably less interesting than EAI things I could be doing with that time
Louis#0144: GCN for decision tree + uninitialized CNN for the square recognition (you don’t even need to train it, all you want is the embedding layer).
Run and train the GCN such that pooling of all the vertices is fed to an MLP that ends in a softmax which decides which square goes there
Louis#0144: The tough thing is the heuristic to build the tree
Louis#0144: But that’s only a few min of work
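the untrained-CNN part is literally just frozen random features, e.g. (untested sketch):
```python
import torch
from torch import nn

class FrozenRandomEmbed(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        for p in self.parameters():
            p.requires_grad_(False)   # random features, never trained

    def forward(self, x):             # x: [n, 1, h, w] grayscale panels
        return self.net(x)            # -> [n, dim] embedding per panel
```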
StellaAthena#3530: GCN?
StellaAthena#3530: Graph convolutional?
Louis#0144: Ye
Louis#0144: You need a way to represent your decision tree smoothly
StellaAthena#3530: We aren’t promised anything about the shapes tho
Louis#0144: Ya
Louis#0144: That’s why you use a CNN
Louis#0144: lol
StellaAthena#3530: There isn’t “square problems”
StellaAthena#3530: Or do you mean the literal squares on the page
Louis#0144: Yes the literal squares sorry
StellaAthena#3530: Oh lol |
StellaAthena#3530: Yeah doing something like that was my first thought, followed by something transformer based
Louis#0144: You don’t need a transformer
cfoster0#4356: https://arxiv.org/abs/2012.14601
Louis#0144: A gated GCN requires very little data
Louis#0144: They converge crazy fast
cfoster0#4356: They tackle Raven's progressive matrices with a sorta transformer like architecture. This is the ESBN thing Phil and I have been talking about
Louis#0144: I really wanna combine ESBN with RL/planning
Louis#0144: At some point
Deleted User#0000: https://github.com/lucidrains/scattering-compositional-learner
Deleted User#0000: i'll get around to ESBN
Deleted User#0000: i have an idea to make it work as a transformer
Deleted User#0000: i also asked the authors of SCL whether they tried transformer, and they never did
Louis#0144: I’m not convinced it’ll work for NLG tbh
Louis#0144: Unless it’s like some weird memformer hybrid
Louis#0144: Idk
Deleted User#0000: yea, my idea would be less explicit than that
Deleted User#0000: it'll be like going from hard attention -> anarchist attention in transformer
nev#4905: why are MLPs so based
AI_WAIFU#2844: I wonder if we're actually fairly close to the limits of what's practical with traditional scaling. You can only cram so many gpu's together before it really stops being worth it. Both Switch transformer and WuDao 2.0 use MoE, and despite coming a year later, HyperCLOVA is only 200 billion parameters. At the same time, the very largest supercomputers only have ~30,000 gpu's. Which isn't much more than the 10,000 OAI trained on.
Daj#7482: Hot take: The scaling race is good for safety because it ate up any hardware overhang there was and forces cutting edge progress to move ahead more predictably |
Daj#7482: :thonk:
AI_WAIFU#2844: Hot take? I was counting on that happening.
AI_WAIFU#2844: The more suits we can convince to scale meme architectures that can't do AGI, the better.
kindiana#1016: > implying transformers are not agi
fazz#8459: By 2035 all worlds silicon mining Doge transactions powered by Elons desert solar farms
bmk#1476: but what if scaling drives even more hardware development
gwern#1782: what makes you think gpt-3 was trained on "10,000" GPUs?
gwern#1782: just because they later had an azure cluster which happens to have 10,000 GPUs? but you know perfectly well from hyperclova and others that you certainly do not need 10k GPUs to train just 175b and it'd be doubtful how efficient that even would be
gwern#1782: and those are old busted v100s
fazz#8459: And even WuDao is now claiming their 2.6bn param = GPT3 on NLP, no?
gwern#1782: it was presumably more like 500-ish v100s for a few weeks. then an a100 is like what, 3x better than a v100? so 3 months on a 30k supercomputer is (30,000 / 500) * 3 * 3 = 540x
gwern#1782: gpt-3 is still nowhere remotely near an appreciable fraction of available flops
Deleted User#0000: Can't be a bad thing, we could do with more weird chip architectures
sheggle#6841: Wasn't GPT-3 gonna take two entire days on the Swiss supercomputer?
bmk#1476: but capabilities advancement bad
sheggle#6841: With 20 something exaflops
gwern#1782: sure, nominally
gwern#1782: they aren't going to, ofc
Deleted User#0000: E.g. I think the graphcore chips for instance interleave memory and compute, which was considered too expensive just a couple years back
gwern#1782: supercomputers only go to P R E S T I G E (and nukes) |
sheggle#6841: No I meant as an indication that it takes quite a bit to train these models
kindiana#1016: gpt3 took approximately 100k v100 days I believe
sheggle#6841: Seriously?!
sheggle#6841: That's nothing
fazz#8459: Sunway TaihuLight going exascale next and its not the only one. Although I don't know how adaptable these are to tasks benefiting from low precision
gwern#1782: hm. maybe they scaled it up to more like 1k v100s then... I'm a little puzzled there because hyperclova was 1k gpus but only took, I think he said, 2-4 weeks?
sheggle#6841: Smaller dataset though right?
gwern#1782: alternately, they just got very low efficiency. wasn't someone here estimating that OA only got like 20% efficiency? everyone now is talking about 50%+
sheggle#6841: 1/10th or so
AI_WAIFU#2844: Yeah but I think hyperclova had a better interconnect + better gpus
kindiana#1016: yeah, gpt3 had ~25% efficiency
gwern#1782: so if you imagine 1k gpus for 100k gpu-days that's about 3 months, and if you get 2-3x efficiency gains over that by going from 20% to 50%+, that'd bring you down to roughly month-long runs like hyperclova... that seems like it makes sense overall
gwern#1782: which implies that if oa had used all 10k gpus (somehow), it would've done gpt-3 in 10 days but from the descriptions, no one seems to think the run was *that* quick, which is consistent with more like 1k
kindiana#1016: I'd guess 1.5k
kindiana#1016: cuz batch size
kindiana#1016: lol
AI_WAIFU#2844: I'm not so sure, because you can't fit GPT-3 in a V100
kindiana#1016: yeah ofc
bmk#1476: do we think openai is going to put all other projects on hold and use all 10k GPUs for like a few months to train one final chonker to rule all chonkers?
kindiana#1016: but given that we know they did pp, and it had 96 layers |
kindiana#1016: its likely 1.5x power of two
AI_WAIFU#2844: Either way though, my point is that real difficulties with traditional scaling start to show up somewhere in the 1-10k range.
gwern#1782: at this point with OA API I wonder if they even *could* retrain gpt-3
bmk#1476: I don't think API is on their cluster
AI_WAIFU#2844: Sure they could. Just slap "v2" on it.
bmk#1476: it would be a huge waste of expensive interconnects
gwern#1782: you can just imagine sam altman rolling his eyes and saying "but I could support another 15% users with those GPUs"
gwern#1782: "we already have people beating down the doors, why do we need to blow all of this momentum on training some better model which even fewer people will afford"
sheggle#6841: Another injection from Microsoft would fix that up in a jiff
bmk#1476: of all founders, I'd think Sam Altman probably understands the importance of growth over immediate profit
gwern#1782: "but muh interconnects -" "users! users! users! growth!"
bmk#1476: and gpt3 is no longer unchallenged
gwern#1782: gpt-3 was never going after the chinese or south korean markets so...
sheggle#6841: I like to think the researchers they hired wouldn't take that either, as they hopefully signed up for AGI
AI_WAIFU#2844: If they really cared about users they wouldn't have torpedoed their largest customer, hobbled their API, and gated access.
gwern#1782: which researchers? the ones who all quit half a year ago to form a new startup?
gwern#1782: _wonders if AID was even in the top 5 at this point_
bmk#1476: probably is
gwern#1782: i mean, even before
bmk#1476: coomers are a big userbase |
AI_WAIFU#2844: I think it was one of if not the biggest application.
gwern#1782: once a corporate customer or a legal firm finds a use for GPT-3, they can lean hard into it
gwern#1782: remember, business is fractal. there's countless $10b corps you've never heard of doing immensely complicated high-volume large-scale things which you've also never heard of
chilli#5665: From what I hear, they plan on continuing to exponentially grow their compute capability
gwern#1782: every researcher gpu is in the final analysis a theft from growing the gpu-bottlenecked API further 🙂
sheggle#6841: It would be open if money was all they wanted
sheggle#6841: Likely close enough, why?
gwern#1782: may 28th or so was the paper release. so more like 370
gwern#1782: and yes, people have been circulating rumors about an imminent OA release. however, if you extrapolate out the famous compute curve, we should be getting like, a 10 or 20t parameter model next lol. seems safe to say that will not be the case
sheggle#6841: Anyone test if SAM is less sensitive to learning rate as it walks a flatter surface?
Teemochu#8740: My estimate is that well above 1% of content running through the API was AID stories that AID would now believe should have been filtered. I've lost the various numbers I used to come up with this though.
Teemochu#8740: most notably I forget what I used for "percent of API calls that are AID" or where I sourced the estimate, and seem to recall it being well into the double digits
gwern#1782: I guess I would start with the PR piece about 5b words per day, then try to guesstimate how many words AID was shoveling through GPT-3 specifically after all its economizing and features based on the data leaks
gwern#1782: that would at least give you the %, and then you could try to guess what sort of distribution of users APIs like this tend to have to get an idea of how plausible it'd be to have 5 more users with >1% each or whatever
chilli#5665: Out of curiosity, why do you think this won’t be the case?
chilli#5665: 175B also seemed pretty insane before it happened
chilli#5665: What was the biggest model before that?
gwern#1782: because the doubling period in the extrapolation was like... 6 months? and not a single project came even close to gpt-3 until almost a full year. everyone knew that extrapolation couldn't go on
gwern#1782: turing-nlg at 17b
gwern#1782: or maybe that was after? which is even more telling |
chilli#5665: If you knew that oai was dropping a new model, how many parameters would you guess?
gwern#1782: so you need to stack up another 2-3 doublings of gpt-3's compute in an org which has publicly shown every sign of not wanting to spend even 1x gpt-3 again for a while, combined with global reluctance to even go past turing-nlg for a year *after* gpt-3 showed why you want to go past turing-nlg
chilli#5665: (A new model as in, gpt4)
gwern#1782: my prediction is that it's not going to be just a 10t-parameter text-only gpt-4 like gpt-3 but another 2 ooms like gpt-3 was 2 ooms past gpt-2. it'll be something smaller and multimodal, or possibly video-only, and I will be surprised if it's much past 1t
zphang#7252: I am a little surprised there hasn't been a GPT-3b at least
gwern#1782: I also do not expect whatever it is to rock me like gpt-3 did. I am about 50-50 about whether I will find it as important as CLIP/DALL-E
gwern#1782: my body is ready to be wrong, but I fear this will be more like _Rebuild 4.0_ than _End of Evangelion_
chilli#5665: Of the various AI advances, how many have rocked you significantly?
chilli#5665: Gpt3, alpha go, Alex net?
gwern#1782: alexnet didn't but in hindsight should've
gwern#1782: at the time, my reaction was mostly "huh, someone actually got a neural net to work for more than zipcodes, how about that"
gwern#1782: that's just how we were back in 2011! we just didn't know better
chilli#5665: So just alphago and gpt3?
gwern#1782: no, there was a lot of other stuff along the way. DQN was a big one
chilli#5665: For me, I think alphago was 1, and gpt3 was 2
chilli#5665: I think, even at the time, it was quite shocking in the CV community
chilli#5665: I saw a pic of their presentation after they won, and it was pretty massively packed
sheggle#6841: DL used in a setting where it wasn't used before and achieving incredible results is always exciting
chilli#5665: Yeah, but that was the *first* one
kurumuz#5695: i think gpt-3 was my first one haha |
kurumuz#5695: im pretty recent i guess
bmk#1476: *ahem* :schmid:
chilli#5665: To be honest
sheggle#6841: So sorry Mr schmidhuber, Alexnet was of course but a special case
chilli#5665: I think alpha zero has had less impact than I anticipated at the time
sheggle#6841: Would be cool to see what such a learning rule could do in the real world, but compute probably isn't there yet
Teemochu#8740: https://xkcd.com/1425/
Teemochu#8740: 2014
Teemochu#8740: nowadays you have one that will tell you it's a bird as long as you don't write dog on it
bmk#1476: it literally was https://people.idsia.ch/~juergen/DanNet-triggers-deep-CNN-revolution-2011.html
bmk#1476: dannet *was* the first one
UnsupervisedLearner#4148: Thingken of compiling a giant amount of tabular datasets into one giant mess and training a GPT/aMLP on the entire thing with MLM
UnsupervisedLearner#4148: Just because I've seen too many people talk about random forests on r/ml
zphang#7252: I think that could have value
Louis#0144: seq2seq did it for me
Louis#0144: lol
Louis#0144: Although I’ve been around since the CNN days
Louis#0144: I still remember CV pre Alexnet
Louis#0144: All the kalman filters
Louis#0144: The wavelets |
Louis#0144: The high low frequency filter stuff
gwern#1782: the databanks of textures!
Exocamp#8255: I obviously wasn't around from those times but while I go thumbing through ML things ~~desperately~~ trying to learn whatever the hell people talk about I come across these CNNs and older AI
Exocamp#8255: It's kinda funny to see beginners directed to CNNs and meanwhile we're creating monstrous GAN and Transformer models
Exocamp#8255: ~~please I just wanted to learn how to make haha funny AI that do things what even is a nested transformer or MLP-~~
UnsupervisedLearner#4148: I used to direct people to Karpathy cs231n class, it's starting to show its age now but a lot of the material is still fairly beginner relevant
Exocamp#8255: oh?
Exocamp#8255: I know who Karpathy is
Exocamp#8255: But didn't know he had course pages
bmk#1476: as far as I'm concerned everyone in ML should understand what a CNN is anyways
Exocamp#8255: of course I ~~don't~~ do-ish but
bmk#1476: if someone didn't and still claimed to know ML I'd be deeply skeptical
gwern#1782: "Oh, you know ML? Then name the 3 most powerful kinds of computer vision models." "CNNs." "...That's on me. I made that too easy."
bmk#1476: so it's not really wasted effort to learn about it
UnsupervisedLearner#4148: It's a computer vision with deep learning class, but he takes you from linear->logistic regression->kNN->simple MLP->CNN in a fairly Intuitive way with lots of wisdom from actually working with them for ages
Exocamp#8255: My knowledge of ML is a full field with infinitesimally large numbers of holes to the center of the Earth in it
bmk#1476: quite the contrary
EricHallahan#1051: CNNs seem deeply intuitive to me.
EricHallahan#1051: ¯\_(ツ)_/¯
bmk#1476: it's a small field surrounded by an enormous pile of useless shit |
Exocamp#8255: ~~OK but what's a convolution in the first place aaaaaa~~
UnsupervisedLearner#4148: Way more intuitive than transformers
Is it a soft convolution or a dynamic MLP?
Exocamp#8255: You have a link? This seems very interesting regardless of jokes
bmk#1476: Google "karpathy 231n"
Exocamp#8255: I'm doing (slowly) the Kaggle ML/deep learning stuff but
Exocamp#8255: Oh right
Exocamp#8255: Google exists
EricHallahan#1051: It is a hypernetwork. :bigbrain:
Exocamp#8255: Am specifically interested in quantum neural networks/AI. Anybody ever look into that?
UnsupervisedLearner#4148: https://cs231n.github.io/
Looks like it's been updated
UnsupervisedLearner#4148: I'm still highly perplexed that MLPs achieve better scaling. You would think a dynamic transformation would scale better
Exocamp#8255: @UnsupervisedLearner thank
bmk#1476: I still don't totally get MLPs but I think it's extraordinarily shitty naming
gwern#1782: https://cdn.discordapp.com/attachments/729741769738158194/850919993931464754/xwd-16229455031420040.png
EricHallahan#1051: That is because it is.
UnsupervisedLearner#4148: Lmao like "transformer" or "attention" is any better. Perceptron at least sounds cool and has no previous cognitive meaning to confuse you with |
bmk#1476: "no previous cognitive meaning"
yeah except for literally the fact that MLP already has a totally different meaning in ML
gwern#1782: I kinda like 'transformer' because it makes you think about ripping apart the inputs and recombining them, in a globalish sort of way. and it's not like it has any previous cognitiv meaning either, unless you spend an unhealthy amount of time thinking about toys
UnsupervisedLearner#4148: Global Context Network/Layer
Yeah GCN is used for graphs now but wasn't before
bmk#1476: like even just calling the original MLPs MLP was a bad name imo because MLPs are only really superficially similar to perceptrons, but calling this new architecture MLP literally just because it doesn't use attention is utterly confusing
EricHallahan#1051: Attention is actually pretty good because it actually describes the act of attenuating pretty intuitively.
UnsupervisedLearner#4148: Okay that's fair. I think they were trying to reinforce the concept of very simple old school architectures and perhaps went too far
zphang#7252: "Oh you work in deep learning? Name 10 deep learning libraries"
"tf.contrib"
"That's on me I set the bar too low"
StellaAthena#3530: Wait what
StellaAthena#3530: What are people calling MLPs that are not MLPs?
UnsupervisedLearner#4148: gMLP
StellaAthena#3530: I thought gMLP took a transformer and replaaced the attention with a gated MLP. Is that incorrect?
UnsupervisedLearner#4148: Depends on what you want to call an MLP https://cdn.discordapp.com/attachments/729741769738158194/850924034325741618/IMG_20210602_215542.jpg
nev#4905: hmmmmm
nev#4905: do you need residual connections for bilinear MLPs? |
sweg#8920: has anyone tested language models' zero shot capabilities on learning decision based games in text?
sweg#8920: i dont mean like decision transformer
sweg#8920: but something like you explaining a new game to it and trying to let it play
sweg#8920: ~~im currently being amazed by the fact that the !complete bot is able to play chess and am wondering how far this rabbit hole goes~~ nvm its making invalid moves lol
Sid#2121: wait the bot can play chess?
sweg#8920: https://discord.com/channels/729741769192767510/730510538060071043/851040587535745084
sweg#8920: you could discard invalid moves and force it to keep trying until it makes a valid move
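with python-chess that's a tiny rejection-sampling loop (untested sketch; `sample_move` is a stand-in for the LM call):
```python
import chess

def next_valid_move(board, prompt, sample_move, max_tries=20):
    for _ in range(max_tries):
        san = sample_move(prompt).strip()
        try:
            return board.parse_san(san)  # raises on illegal/unparseable SAN
        except ValueError:
            continue                     # discard and resample
    return None
```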
Sid#2121: damn that's quite impressive
Sid#2121: the first three moves aren't even in the master's database on lichess, so it seems like it's even generalizing a little
Sid#2121: I wonder where it picked that up from
nev#4905: does gMLP work with SGD
nev#4905: CNNs are just sparse MLPs with shared weights
pebbles#7130: +max pool
nev#4905: not anymore
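concretely (untested sketch): a Conv1d is a dense layer whose weight matrix is mostly zeros, with the kernel shared along the diagonals
```python
import torch
from torch import nn

n, k = 8, 3
conv = nn.Conv1d(1, 1, k, bias=False)
x = torch.randn(1, 1, n)

# build the equivalent (n-k+1) x n dense matrix from the shared kernel
dense = torch.zeros(n - k + 1, n)
for i in range(n - k + 1):
    dense[i, i:i + k] = conv.weight.detach().flatten()

print(torch.allclose(conv(x).flatten(), dense @ x.flatten()))  # True
```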
sweg#8920: https://www.nature.com/articles/s41599-020-0494-4?utm_source=sfmc&utm_medium=email&utm_campaign=2724718_Agenda_weekly-3July2020&utm_term&emailType=Newsletter dont ask why i just found this but holy shit reading philosophers talk about intelligence is equivalent to driving nails into your eyes
sweg#8920: :yep: pain
Daj#7482: That is a great emote lmao
Daj#7482: Nails the vibe of this article lol
sweg#8920: i wish they would at least talk to a computer scientist or AI researcher once in their lives
sweg#8920: before trying to figure out the entire field through 'deductive reasoning' |
Daj#7482: Very fun essay: https://web.archive.org/web/20111114041242/http://school.maths.uwa.edu.au/~mike/Newtons%20Flaming%20Laser%20Sword.pdf
Daj#7482: related to this topic
sweg#8920: i love this
sweg#8920: thanks for sharing
sweg#8920: philosophy as a subject is useful for everyone and something everyone should put *some* time into
sweg#8920: but in isolation its just annoying
sweg#8920: especially when people who are *just* philosophers think they know everything
Daj#7482: I like MIRI's framing: https://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/
Daj#7482: Philosophy is the first step in figuring out a new area
Daj#7482: But eventually it has to cache out into math and engineering
pebbles#7130: there will be people claiming AGI is impossible right up until they're molecularly disassembled
sweg#8920: wait you just gave me a really good meme idea
sweg#8920: @Daj can you make me regular so i can post it
Daj#7482: We usually nominate new Regulars in batches every month or two, just post the meme here and I'll cross post and credit you
mgostIH#0245: 🤢
sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/851101639636156447/unknown.png
mgostIH#0245: The criticism in this article of AlphaGo lmao
mgostIH#0245: "Yes it may have solved a problem decades before we thought we could do that, but it didn't solve general intelligence aswell, so stupid"
Daj#7482: :yes:
Daj#7482: Human intelligence is still just an unproven hypothesis |
Daj#7482: lol
sweg#8920: ive had an idea on what intelligence is that ive been thinking about recently
sweg#8920: i feel like its obvious so im wondering if other people also think this
sweg#8920: i think its just a sufficiently good world model
sweg#8920: everything else is emergent
sweg#8920: and a good enough world model would be capable of doing everything a person could do
mgostIH#0245: Intelligence is when you invest into my startup
The more you invest, the more intelligent you are
Daj#7482: Yep, this is Jeff Hawkins take iirc
Daj#7482: or at least, he's who I first heard it from
Daj#7482: I think it's definitely a valid definition of intelligence
Daj#7482: It also needs some kind of goal structure probably, just to break the "tie" of what to do, but I agree that I think all the "interesting stuff" happens in the world model and the RL part is probably small and relatively dumb
sweg#8920: this part makes sense if you view it from an evolutionary perspective
sweg#8920: evolution didn't start with a world model
sweg#8920: but after brains were evolved
Daj#7482: Have you read Steven Byrnes' posts on neuroscience before?
sweg#8920: it realized that they were useful
Daj#7482: Really good stuff
sweg#8920: i have not
Daj#7482: https://www.lesswrong.com/users/steve2152 |
Daj#7482: https://www.lesswrong.com/posts/zzXawbXDwCZobwF9D/my-agi-threat-model-misaligned-model-based-rl-agent
Daj#7482: Is a good overview of some of his ideas (with lots of links)
Daj#7482: Or maybe https://www.lesswrong.com/posts/diruo47z32eprenTg/my-computational-framework-for-the-brain is more direct
RageRagaki#8799: Who are the mods in this server? Or where can I go to report a user? I got a spam bot messaging me from this server.
Daj#7482: Send me their ID
queef#0339: https://cdn.discordapp.com/attachments/729741769738158194/851108784319496222/unknown.png
queef#0339: @Daj
C𝙧𝙤𝙞𝙨𝙨𝙖𝙣𝙩#7814: Acclaimed Crash. is also doing the same thing
C𝙧𝙤𝙞𝙨𝙨𝙖𝙣𝙩#7814: https://cdn.discordapp.com/attachments/729741769738158194/851109047062364201/unknown.png
Daj#7482: alrighty
C𝙧𝙤𝙞𝙨𝙨𝙖𝙣𝙩#7814: @Deleted User
ersatz#0001: this account @Deleted User is spamming crypto scams from this server user list
C𝙧𝙤𝙞𝙨𝙨𝙖𝙣𝙩#7814: i guess its one of those mass bot joins from the server or something
queef#0339: @Deleted User looks like a bot too
queef#0339: and @Deleted User
queef#0339: and @Deleted User
Sphinx#2092: Or maybe we are all lucky winners
pebbles#7130: ^^^
queef#0339: and @Deleted User
queef#0339: they are all new accounts |
StellaAthena#3530: @Deleted User
nz#9710: same thing happened in Yannic's server
queef#0339: @Dodgechalenger
ersatz#0001: can a moderator activate phone verification?
queef#0339: @Deleted User
ChaosAlpha#5829: Just got one from @Deleted User
Bedebao#4842: If you need spambot protection, consider adding a bot like AltDentifier.
queef#0339: theres a bunch
queef#0339: @Deleted User
queef#0339: @Deleted User
Bedebao#4842: I see these bots are all new accounts, which is especially what AltDentifier would catch.
EricHallahan#1051: We are working towards a resolution.
StellaAthena#3530: I just bumped up the verification level to require bots be members of the server for 10 minutes. We’ll see if that deals with things, and if not consider going to 2FA.
GHC#5769: yo
Bedebao#4842: Wouldn't that just stop them from DMing for 10 minutes?
GHC#5769: just got a message from a spam bot as well
GHC#5769: Rude
#4606
ersatz#0001: phone verification is also very useful to deal with ban evasion, but that's another matter
ersatz#0001: the entire user list of the server is getting spammed right now |
Daj#7482: Alright we should have banned all the bot accounts now
Daj#7482: Please tell me if you receive anymore spam _from this moment on_
Bedebao#4842: Although effective, not everyone might want to give discord their phone.
Daj#7482: (Their names had a completely predictable pattern lol)
StellaAthena#3530: Also they joined almost entirely at the same minute
Daj#7482: Some top tier names https://cdn.discordapp.com/attachments/729741769738158194/851111085065895956/Screenshot_from_2021-06-06_16-48-35.png
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/851111088833036358/Screenshot_from_2021-06-06_16-47-50.png
Bedebao#4842: Which I why I suggest that bot which lets suspecious (read: new) users verify with accounts on other sites, something a bot can't do.
Daj#7482: This sounds neat
StellaAthena#3530: We do not want to force 2FA for this exact reason. We have a number of members who have repeatedly spoken about valuing their privacy and some well-known pseudonymous individuals who don’t wish to be linkable to the account.
StellaAthena#3530: @Bedebao Does it give you an external URL to solve a capatcha or something
Bedebao#4842: I should probably drop a link: https://altdentifier.com
ersatz#0001: free with ads, $5/m without
Bedebao#4842: I don't recall it having ads? Was this added recently?
Daj#7482: It seems this wouldn't solve the problem since the bots can still get the user list and DM people
Daj#7482: It would only help with bots spamming channels
Bedebao#4842: And yes, captcha is an option.
Louis#0144: Oh hey this is what my ex used to call me
Bedebao#4842: But can they fetch the user list if new arrivals are constrained to an empty channel where all other members are excluded?
Daj#7482: I don't know tbh |
ersatz#0001: I dunno but that's what's in the pricing https://altdentifier.com/premium
Teemochu#8740: Same bots joined NovelAI and Yannic too
alexyz#3459: I think so? It'd be against ToS, but it's already against ToS to automate user accounts
Teemochu#8740: It's also against ToS to spam advertising DMs
EstebanSir#2189: i heard you guys were working on the 6b model? is that going to be released on HF? i heard there were problems with using HF
bmk#1476: !faq
Carl-bot#1536:
Sid#2121: is that in the faq? :thonk:
EricHallahan#1051: It is in the FAQ.
EricHallahan#1051: Or, I should say, the topic is covered in the FAQ.
Sid#2121: I don't see anything relating to the 6B model in there
EricHallahan#1051: That is superfluous to the topic.
> **Q: *When do you plan to have more models available?***
> A: As a collective of volunteer researchers and engineers who contribute in our free time, we are unable to commit to either a timeline or a roadmap for future models.
Sid#2121: that's not really what he asked though
Sid#2121: yes we are working on it, and no it's not going to be released on HF - by us at least. You'd have to ask the HF team.
EricHallahan#1051: Then question the blanket policy of responding with `!faq`. ¯\_(ツ)_/¯
Sid#2121: i mean, i don't think we should do that if the answer isn't in the faq lol
EstebanSir#2189: Alright, good luck to you all, you are doing great work.
EstebanSir#2189: and i've read the faq, maybe i forgot about that question |
EricHallahan#1051: Well I am not the one to complain to about that, that was my objection from the beginning.
EricHallahan#1051: lol
gammascalpset#9792: if you guys had to make a guess, is it likely that one or more future breakthroughs in AI will come from neuroscience? it's easy to name past instances of this, eg. the shift towards neural networks, and CNNs being inspired by the mammalian visual cortex
StellaAthena#3530: I think biologically inspired ML is a lie, more or less. If you read the early NN papers for example, it’s clearly a post-hoc justification. And the reason CNNs work has nothing to do with the visual cortex, so if that’s the inspiration it’s an accident rather than anything meaningful.
gammascalpset#9792: the reason CNNs work has nothing to do with the visual cortex? I think most reasonable people wouldn't claim that CNNs work *because* they resemble the visual cortex
gammascalpset#9792: but it's a good way of processing visual - or in general, spatially organized - input, and evolution just happened to find it
gammascalpset#9792: in general, evolution's had a couple hundred million years to optimize the mammalian cortex, we might save some time looking at it
gammascalpset#9792: could you refer me to some of these early NN papers?
gwern#1782: _is mildly surprised to look up the inventor of cnns and see he's still alive https://en.wikipedia.org/wiki/Kunihiko_Fukushima_
chilli#5665: lol, the Wikipedia article on cnns reads very funnily
chilli#5665: It reads like a lot of people who are just trying to plug their own work
nev#4905: wikipedia on ai is generally funny
chilli#5665: > Compared to the training of CNNs using GPUs, not much attention was given to the Intel Xeon Phi coprocessor.[58] A notable development is a parallelization method for training convolutional neural networks on the Intel Xeon Phi, named Controlled Hogwild with Arbitrary Order of Synchronization (CHAOS).[59] CHAOS exploits both the thread- and SIMD-level parallelism that is available on the Intel Xeon Phi.
bmk#1476: gell mann
nev#4905: https://tenor.com/view/frische-milch-milch-lol-frische-gif-18096085
nev#4905: https://en.wikipedia.org/wiki/Gene_expression_programming
StellaAthena#3530: > but it's a good way of processing visual - or in general, spatially organized - input, and evolution just happened to find it
@gammascalpset I miscommunicated. I meant that reason why CNNs work has absolutely nothing to do with the reason why the visual cortex works. It is false to claim that the way CNNs process visual input is the same way that human vision works.
StellaAthena#3530: This paper experimentally demonstrates that transformers are much more similar to human vision processes than CNNs. One of the big clues is that CNNs care a lot about texture and minimally about shape.
|
https://arxiv.org/abs/2105.07197
StellaAthena#3530: > in general, evolution's had a couple hundred million years to optimize the mammalian cortex, we might save some time looking at it
I’m not saying that this is false. I’m just saying that nobody has made a major breakthrough in DL by doing this.
Daj#7482: _adds this to the "Transformers are AGI" pile_
nev#4905: now do this for MLPs
nev#4905: (:berk:)
StellaAthena#3530: The methodologies in this paper are very interesting. There’s a couple related papers, though this one is the most self-contained IMO. It’s important to note that “more” and “very” are not the same thing. You can be more similar to human vision than CNNs without being very similar to human vision.
Daj#7482: Thanks for the paper, I'll check it out
Daj#7482: tfw I was almost at the point of working off my reading list
pebbles#7130: perhaps meat and silicon are different enough substrates to have relatively little overlap in how efficient algorithms are. I don't think it's too controversial that we don't have a very good description of what the brain is doing. Bits and pieces, yes, but not a full one. And even if we had one, it might not actually be that helpful. (I think it would be helpful, just not *that* helpful)
Daj#7482: My current working hypothesis is that it's all error based learning/gradients/variational bayes all the way down
Daj#7482: Everything else is implementation details
gammascalpset#9792: My first thought: human vision doesn't force translational invariance, but does not have access to global info like a transformer either. afaik it *tends* to have neurons that recognize edges, followed by neurons that recognize angles, curves etc.
Daj#7482: I remember that one paper showing that firing patterns similar to what you see in biological neural nets emerge if you force your neuron outputs to be positive only
Daj#7482: Inhibitory and excitatory neurons form naturally
gammascalpset#9792: so a CNN resembles human vision (especially the first layers) in some ways, but NNs could get more (or less) analogous to it
Daj#7482: Just add more compute lmao
pebbles#7130: I think that with current techniques, enough compute could take us to AGI, but I think that smarter methods will get there faster / with less compute
gammascalpset#9792: as you scale to the size of a modern mammal's visual cortex, you might save computation by having the first layers restrict attention to spatially close inputs (or doing CNN)
like, of course if you give your neurons more inputs they become more powerful in *theory*, but just like nature we don't have infinite resources |
gammascalpset#9792: implementations details that might save (or cost?) us some dozens orders of magnitude of compute, though
Daj#7482: This is a tautology
Daj#7482: Sure, you can always find arbitrarily bad designs
Daj#7482: you _could_ find a PTIME algorithm with O(n^100000000) complexity
Daj#7482: But in practice, those aren't the ones we find or use
Daj#7482: True scaling has never been tried :berk:
Daj#7482: (btw I'm just having a friendly shitpost here, of course our methods will not end up being the best ones)
pebbles#7130: true. I guess I was just thinking about whether transformers + RL (or similar) will get to AGI before we develop something quite different and more efficient
StellaAthena#3530: How big are you thinking of scaling though? Brains are much *much* larger than CNNs
Daj#7482: Scaling is somewhat predictable, new discoveries tend to be somewhat harder to predict, so ¯\_(ツ)_/¯
pebbles#7130: yeah, exactly my thought. Scaling seems to put a weak lower bound on how long we can expect, but a sudden advance is a real possibility
gammascalpset#9792: as big as it takes to get the same accuracy as a mammal's
Daj#7482: Fun post btw:
https://www.lesswrong.com/posts/yW3Tct2iyBMzYhTw7/how-does-bee-learning-compare-with-machine-learning
gammascalpset#9792: which, to be clear, might be less than a mammal's visual cortex if we can find better stuff
pebbles#7130: I'm not sure it's clear that biological neurons use gradient descent
StellaAthena#3530: You said
> **as you scale to the size of a modern mammal's visual cortex**, you might save computation by having the first layers restrict attention to spatially close inputs (or doing CNN)
> like, of course if you give your neurons more inputs they become more powerful in *theory*, but just like nature we don't have infinite resources
StellaAthena#3530: This implies that brains are too small to resemble CNNs |
Daj#7482: https://arxiv.org/abs/2103.04689 :berk:
StellaAthena#3530: But brains are much much larger
Daj#7482: first order error-based optimization go brrr
gammascalpset#9792: I think it implies that brains are larger?
StellaAthena#3530: Oh I can’t read, sorry
gammascalpset#9792: super interesting
gammascalpset#9792: stop adding stuff to my reading list :berk:
Daj#7482: https://www.lesswrong.com/posts/diruo47z32eprenTg/my-computational-framework-for-the-brain
no :)
StellaAthena#3530: I see no reason to think that that’s the case. If you make brains bigger, you think that the thing that works at small scales (CNNs) but not medium scales (brains) will start working again at huge scales? That seems extremely suspicious
StellaAthena#3530: What evidence do you have?
gammascalpset#9792: not sure what you mean, what do you mean by "that's the case"?
StellaAthena#3530: In response to my skepticism to biological ML being worthwhile, you said
> as you scale to the size of a modern mammal's visual cortex, you might save computation by having the first layers restrict attention to spatially close inputs (or doing CNN)
> like, of course if you give your neurons more inputs they become more powerful in *theory*, but just like nature we don't have infinite resources
Do you have any evidence that supports the belief that this is the case? It seems extremely unlikely to me.
gwern#1782: in the learning curve research I've seen, there's typically only one crossover. beyond that, they may converge in the limit at the ceiling (assuming both are complete/consistent), but I don't see any good examples of more than one crossover
gammascalpset#9792: it seems like a couple of purely logical claims, so either I'm making a hidden assumption I'm not aware of or I haven't explained myself correctly
I make two claims
|
1. as you scale an NN that processes visual input to the performance of a mammal's visual cortex + some circuitry required for spatial understanding/recognizing an object you know/etc. you'll save computation by restricting the input of a neuron to the output of other neurons that correspond to close-by regions of the input. so far no evidence needed IMO. While there's no guarantee you can get mammal-level performance if you make your whole model a CNN, it seems likely you can get away with doing it in the first layers (evidence: that's what mammal visual processing does before getting messier)
2. if you have a CNN "A" and you relax restrictions to what your neurons can get input from, thus getting a model "B", the model *must* get more powerful in theory cause A is kind of a subset of B. Of course it'll be harder to train, but that's what I mean by "not having infinite resources". That's why you have to find models that are sample-efficient, after all
nickt#8694: I still disagree with this. Spatially localized receptive fields are definitely a thing, and Fukushima absolutely tried to reproduce that in the neocognitron. There's also clearly a degree to which no one could have known the details of how that's implemented, but to say that it's not inspired by the brain doesn't make sense to me.
UnsupervisedLearner#4148: Deep double descent
UnsupervisedLearner#4148: (I don't know the argument but it's just evidence there can exist a parameter set between 'small' and 'large' that does not work)
StellaAthena#3530: I made two claims, which I think you’re conflating.
1. Many early NN papers seem like they’re using biology as a post hoc explanation rather than a real motivation
2. For CNNs specifically, the thing about them that is essential to getting them to work (equivariance) is not something we see in biological vision. If you take biological vision and try to explain CNN performance as a consequence of some similarity between vision and CNNs, you’re in for a bad time.
nickt#8694: No. I'm taking issue with #1 instead of #2.
StellaAthena#3530: Ah
nickt#8694: For example, 'S' and 'C' cells in the neocognitron are clearly references to simple and complex cells from Hubel & Wiesel
spirit-from-germany#1488: https://cdn.discordapp.com/attachments/729741769738158194/851172210961940480/Screenshot_2021-06-06-20-44-05-620_com.android.chrome.jpg
nickt#8694: I think I might agree with a version of #1 for more contemporary bio-inspired work if anything, but the neocognitron is pretty clear in my mind
StellaAthena#3530: I’m much less confident in #1 than #2, and it’s very possible I don’t have an early enough notion of “early”
gammascalpset#9792: I'm not sure which equivariances you refer to in 2., but I'll assume one of them is rotational for the sake of argument
gammascalpset#9792: I think saying biological vision systems don't have rotational invariance is misleading
gammascalpset#9792: animals can choose to rotate their heads. you could argue that it'd be inefficient to grow a visual cortex that has rotational equivariance when you can do that
Kharr#7888: There's a decent bit of research showing that humans process upside down information much slower than right-side up. Also, our anatomy compensates for a lot of things mechanically. e.g. Vestibulo–ocular reflex
StellaAthena#3530: @gammascalpset I’m not sure why you’ve decided to go with rotation. It’s very hard to make a CNN equivariant to the natural action of SO(2) on R^2. “Vanilla” CNNs are *translation* equivariant(-ish). Specifically, to the action of Z^2 on itself.
gammascalpset#9792: biological vision is not translationally equivariant? |
gammascalpset#9792: not sure if they are, but if they aren't, I could say they can choose to look where they want?
Teemochu#8740: oh yeah the brain does a *lot* of vision correction that falls apart for common illusions
Teemochu#8740: the most obvious and easy-to-demonstrate thing is filling in your optic nerve blind spot... a small dot painted on a wall just disappears in that area if you close one eye and look at the right angle
StellaAthena#3530: This paper seeks to quantify invariance in human vision experimentally: https://www.nature.com/articles/s41598-019-57261-6
StellaAthena#3530: > The range of translation-invariance is limited, depending on the size and position of presented objects.
> Our psychophysical experiments and related simulations strongly suggest that the human visual system uses a computational strategy that differs in some key aspects from current deep learning architectures, being more data efficient and relying more critically on eye-movements.
gammascalpset#9792: tbh this is kind of surprising to me, but I don't think it clashes with the notion of the retina/visual cortex resembling CNNs *in the first layers*
gammascalpset#9792: I'd say it's a hint that the resemblance to CNNs stops way before enough processing is done to recognize objects
gammascalpset#9792: which makes intuitive sense, given that object recognition based on past experiences is thought to happen in the temporal lobe (iiuc), so way after any crude visual processing
thepok#1770: any news on the 6b net?
Sid#2121: patience
Sid#2121: when there's news, we'll make it clear lol
thepok#1770: its context is of unlimited length?
chilli#5665: :thonk:
kindiana#1016: trained with 2048 context, you could theoretically use longer at inference but I have no idea if it would work
AI_WAIFU#2844: Every time someone asks we kick back the release date.
thepok#1770: thats only fair ;D
chilli#5665: It was actually originally going to release before GPT3. Sadly...
cfoster0#4356: It uses RoPE on the qk, right? So theoretically no problem running it for longer sequence lengths |
kindiana#1016: yup
Daj#7482: Has anyone actually tried that?
kindiana#1016: but its out of training distribution
Daj#7482: as in, checked how performance degrades outside of its window
kindiana#1016: hrmm
kindiana#1016: not sure if we have long enough documents to evaluate that
kindiana#1016: or you mean generate like 4096 tokens?
Daj#7482: isn't there that long range arena thing?
Daj#7482: Yeah
kindiana#1016: LRA is not text tasks
Daj#7482: and see if it shits itself at 2049 :berk:
AI_WAIFU#2844: We should fine tune on 8192 if it fits in vram.
Sid#2121: you can just use sliding window generation - I think for it to actually attend to > 2048 tokens you'll need to finetune it slightly
Sid#2121: but it should be more adaptive than a learned positional embedding
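A minimal sketch of sliding-window generation as described here: the model only ever sees the most recent `max_ctx` tokens, so generation can run past the trained context length. `logits_fn` and `sample` are hypothetical stand-ins, not the actual GPT-J API:

```
def generate(logits_fn, sample, prompt_tokens, n_new, max_ctx=2048):
    tokens = list(prompt_tokens)
    for _ in range(n_new):
        window = tokens[-max_ctx:]      # truncate to the training context
        logits = logits_fn(window)[-1]  # next-token logits at the last position
        tokens.append(sample(logits))
    return tokens
```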
EricHallahan#1051: *Trains on TPUs* :berk:
Daj#7482: I'm curious how good it would work out of the box for >2048
Daj#7482: I have no prior on how good or bad it would be lol
Sid#2121: with neox I've already done staged training and it adapts to the longer context lengths really quickly
AI_WAIFU#2844: I consider tpu ram vram.
bmk#1476: rotary imposes a hard limit on how long you can extend it out based on your theta |
kindiana#1016: 8192 would not fit naively I think
EricHallahan#1051: I assume all models in the future will use that?
Daj#7482: Think you can just train on 128 tokens and then finetune on 2048? lol
EricHallahan#1051: That is really long though.
Daj#7482: I guess that's kinda what shortformer does
bmk#1476: idk what we set our theta to
kindiana#1016: theta is 10k I think
cfoster0#4356: In theory, yes. Unclear if this is the case in practice
EricHallahan#1051: It follows the convention set in *Attention is All You Need*.
AI_WAIFU#2844: Don't some causal models still work without embeddings because of the masks?
kindiana#1016: I'm going to try to generate 8192 on colab
kindiana#1016: wonder if its going to fit xP
EricHallahan#1051: ¯\_(ツ)_/¯
EricHallahan#1051: Give it a shot.
Daj#7482: using no position embeddings in causal models is about as good or slightly better than learned, yea
Daj#7482: But rotary is still much better
aero#1357: isnt rotary part of how it builds the context? so the actual context size should matter less
Daj#7482: rotary is an alternative position encoding
EricHallahan#1051: (I still believe the *Attention is All You Need* init is suboptimal.)
Daj#7482: ~~each input to the model is (token_encoding + position_encoding)~~ |
Daj#7482: In GPT3 models, the position encoding vector has a hard limited size of 2048
kindiana#1016: well we don't actually know what position embedding gpt3 used right?
Daj#7482: Rotary is not learned, but generated on the fly, so even though the model is trained at length 2048 you _could_ just make it go further
Sid#2121: rotary gets applied to qk
Daj#7482: oh right
EricHallahan#1051: We don't add.
Daj#7482: I was thinking of sinusoidal
Daj#7482: my bad
kindiana#1016: but yeah same principle of extending as sinusoidal
EricHallahan#1051: We would be replicating sinusoidal if we did. 👍
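For reference, a minimal numpy sketch of rotary applied to q/k with theta = 10000 (the RoFormer formulation; shapes and names are illustrative):

```
import numpy as np

def rotary(x, theta=10000.0):
    # x: [seq, dim] with dim even. Channel pairs (2i, 2i+1) are rotated by
    # angle pos * theta**(-2i/dim), so position enters q and k
    # multiplicatively instead of being added to the token embedding.
    seq, dim = x.shape
    inv_freq = theta ** (-np.arange(0, dim, 2) / dim)   # [dim/2]
    ang = np.arange(seq)[:, None] * inv_freq[None, :]   # [seq, dim/2]
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Both q and k get rotated; attention scores then depend only on relative
# position, which is why extrapolating past the training length is at
# least conceivable.
```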
Sid#2121: i'm fairly certain it's just a learned embedding? They say in the paper that there's no major architectural changes from gpt2/1
AI_WAIFU#2844: right, so rotary + causal masking should be able to go beyond theta because it breaks the symmetry.
Daj#7482: maybe?
Daj#7482: I guess no one has ever tried
Daj#7482: at least not to my knowledge
kindiana#1016: idk if I would consider say, txl rpe as a major architectural change
Sid#2121: i don't think that was the exact wording they used
Sid#2121: I would bet they didn't change the pos emb tho. I really think they would've mentioned it
kindiana#1016: seems kinda unwise to use a position encoding that doesn't support sliding window decoding/caching for a thing they are doing an api for lol
gwern#1782: heck, OA doesn't even cache GPT-3 as far as anyone can tell |
Sid#2121: ```We use the same model and architecture as GPT-2 [RWC+19], including the modified initialization, pre-normalization, and reversible tokenization described therein, with the exception that we use alternating dense and locally banded sparse attention patterns in the layers of the transformer, similar to the Sparse Transformer```
Sid#2121: "the same"
kindiana#1016: fair
gwern#1782: I suggested this almost as soon as I got access. "hey, gpt-3 is referentially transparent. why don't you cache everything?" "lol dunno"
UnsupervisedLearner#4148: I'm really suspicious of the idea that rotary embeddings work better just because they encode positional information
I think they move the vectors around in an optimal way as they pass through the attention matrix, just like fourier features allow learning periodicity of textures in an image there might be frequency statistics in the token embeddings that are hard to learn
Sid#2121: how would we even know if they do / don't
gwern#1782: well, they *could* be lying about it, not reflecting it in their billing despite big possible savings for OA if users actively designed for caching, they could also be carefully hiding the different latencies...
gwern#1782: but all of it would be pretty adverse to their interests
kindiana#1016: what's the latency of gpt3 like?
cfoster0#4356: Yeah, there's definitely some larger strategy related to Fourier features that they're connected to
UnsupervisedLearner#4148: https://en.wikipedia.org/wiki/Convolution_theorem
> under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the pointwise product of their Fourier transforms
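The convolution theorem is easy to sanity-check numerically (circular convolution here, since the DFT is periodic):

```
import numpy as np

rng = np.random.default_rng(0)
f, g = rng.standard_normal(64), rng.standard_normal(64)

# Circular convolution computed directly...
direct = np.array([sum(f[m] * g[(n - m) % 64] for m in range(64))
                   for n in range(64)])
# ...equals the inverse FFT of the pointwise product of the FFTs.
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

assert np.allclose(direct, via_fft)
```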
CRG#8707: There's nothing special about reaching theta, the rotations don't really repeat at that point.
UnsupervisedLearner#4148: https://arxiv.org/abs/2003.12193
> In this work, we study wide over-parameterized hypernetworks. We show that unlike typical architectures, **infinitely wide hypernetworks do not guarantee convergence** to a global minima under gradient descent.
EricHallahan#1051: The rotations don't repeat in any period that would be a problem as it is. |
gwern#1782: 'infinitely wide hypernetworks do not guarantee convergence' <-- they have played us for fools
gwern#1782: 'yes please, I would like a network, no, wait, make it *hyper*. how wide? oh, *infinitely*'
kurumuz#5695: yea, they definitely don't charge you like they're doing any caching...
kurumuz#5695: though it would be crazy in tasks like AID
kurumuz#5695: so I will assume they do cache
UnsupervisedLearner#4148: It might help me puzzle out why a dynamic mixing function is outperformed at scale by a 'static' parameter one
gwern#1782: you'd want to cache by user to avoid privacy issues. be too easy to timing-attack token by token to extract prompts. some details like that. but they would expose it because it's the end-users who can redesign to minimize novel prompt generation, and you want to pass the savings on to them. suppose a cached call was *free*? how much do you think API users could revise their usage to eliminate unnecessary variation? probably quite a bit!
kurumuz#5695: oh also, I have some batching results from my experiments with gpt-neo 2.7b fp16 on a v100
kurumuz#5695: ```
neo 2.7b batching experiments:
-----
batch_size=1, input_size=950, generated_tokens=40: 1.47s -> 1.47*1 = 1.47
batch_size=2, input_size=950, generated_tokens=40: 1.83s -> 1.47*2 = 2.94 -> 1.6x
batch_size=3, input_size=950, generated_tokens=40: 2.33s -> 1.47*3 = 4.41 -> 1.89x
batch_size=4, input_size=950, generated_tokens=40: 2.77s -> 1.47*4 = 5.88 -> 2.12x
batch_size=5, input_size=950, generated_tokens=40: 3.28s -> 1.47*5 = 7.35 -> 2.24x
batch_size=10, input_size=950, generated_tokens=40: 5.93s -> 1.47*10 = 14.7 -> 2.47x
-----
batch_size=1, input_size=1977, generated_tokens=40: 1.65s -> 1.65*1 = 1.65
batch_size=2, input_size=1977, generated_tokens=40: 2.87s -> 1.65*2 = 3.3 -> 1.14x |
batch_size=3, input_size=1977, generated_tokens=40: 3.89s -> 1.65*3 = 4.95 -> 1.27x
batch_size=4, input_size=1977, generated_tokens=40: 5.16s -> 1.65*4 = 6.6 -> 1.27x
batch_size=5, input_size=1977, generated_tokens=40: 5.96s -> 1.65*5 = 8.25 -> 1.38x
batch_size=10, input_size=1977, generated_tokens=40: NOT ENOUGH MEMORY
```
kurumuz#5695: will test a100 today
EricHallahan#1051: I had to toggle the member list so I could see this without it breaking the line early lol
kurumuz#5695: oh lol
kurumuz#5695: yea should be quite weird on phones i guess
kurumuz#5695: this is huggingface transformers btw.
kurumuz#5695: would try on deepspeed inference but its still kinda borked
kindiana#1016: sounds kinda slow :thonk:
kurumuz#5695: well you feed in 950 or 1977 tokens and generate 40 tokens
kurumuz#5695: pretty typical numbers for a v100
AI_WAIFU#2844: How many Tflops does that work out to?
kindiana#1016: a tpu v2-8 does 2048 tokens in and generates 512 tokens in 10 seconds
kurumuz#5695: deepspeed inference improves upon this
kindiana#1016: with 6b
kurumuz#5695: oh wow, that is some fast inference
kindiana#1016: (and that is still kinda slow imo) |
AI_WAIFU#2844: I'm getting like 1-2
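A hedged back-of-envelope for those numbers, using the usual ~2 FLOPs per parameter per token estimate for a forward pass; the answer moves by an order of magnitude depending on whether you count prompt processing, so treat it as rough:

```
# Order-of-magnitude estimate only: exact accounting (prefill vs. decode,
# attention terms, sampling overhead) shifts the result considerably.
params = 2.7e9
tokens = 950 + 40              # prompt processed once + tokens generated
flops = 2 * params * tokens    # ~5.3e12
print(flops / 1.47 / 1e12, "TFLOPS")  # ~3.6 counting prefill, far less without
```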
EricHallahan#1051: TPUs: :brr:
gwern#1782: (imagine how cheap a CYOA AI Dungeon could be with appropriate caching?)
kurumuz#5695: though tpuv2-8 is 180 TFLOPS
kurumuz#5695: while v100 is 30 fp16 tflops?
kurumuz#5695: something like that
kindiana#1016: v100 is ~100T
kurumuz#5695: ah you're talking about us
kurumuz#5695: though we dont do caching for different generation requests
kurumuz#5695: doable but not sure how much it will help
kurumuz#5695: https://www.techpowerup.com/gpu-specs/tesla-v100-pcie-32-gb.c3184
kurumuz#5695: how did you get that number?
Louis#0144: Crazy
Louis#0144: TPU go brrrrr
kurumuz#5695: i was planning to use tpus for inference btw
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/851192918945562654/unknown.png
kurumuz#5695: "tensor performance"
EricHallahan#1051: "Tensor Performance" :grimberk:
kurumuz#5695: what does that mean?
kurumuz#5695: the tensor cores? |
kindiana#1016: tensor cores
kindiana#1016: pretty much any fp16 matmul uses them
kurumuz#5695: well, maybe huggingface does something wrong.
kurumuz#5695: though deepspeed inference improves like 30% on these numbers and i will assume it's because of their memory optimizations
kurumuz#5695: so they might be doing something wrong too.
EricHallahan#1051: (Knowing HF, they probably do.)
kindiana#1016: well you don't expect super high utilization with generation, but that does sound suspiciously low
kurumuz#5695: lol
kurumuz#5695: according to these numbers, might make a lot of sense to use TPUs as generation nodes.
kurumuz#5695: ```
Pytorch is using tensor cores on volta chip as long as your inputs are in fp16 and the dimensions of your gemms/convolutions satisfy conditions for using tensor cores (basically, gemm dimensions are multiple of 8, or, for convolutions, batch size and input and output number of channels is multiple of 8).
```
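A tiny illustration of the multiple-of-8 condition quoted above — rounding a dimension up so fp16 matmuls stay tensor-core eligible (purely illustrative):

```
def pad_to_multiple(n, m=8):
    # Round n up to a multiple of m, per the fp16 tensor core condition above.
    return ((n + m - 1) // m) * m

print(pad_to_multiple(50257))  # e.g. GPT-2's vocab size -> 50264
```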
kindiana#1016: 4096 token completion from unicorn prompt https://cdn.discordapp.com/attachments/729741769738158194/851194212213194772/message.txt
kindiana#1016: seems a bit sus
EricHallahan#1051: > Click play above to watch and see for yourself.
gwern#1782: you don't save anything if everyone starts at a different prefix, of course. the idea would be to build a few very large shared CYOAs. most of the choices & outcomes would be cached, with the best choices upranked into visibility; a player might be able to request a new choice to be generated, but most of the time they wouldn't bother, preferring to explore the curated game tree
kurumuz#5695: Interesting, I see what you mean now.
gwern#1782: the more people that play, the cheaper it gets
kurumuz#5695: yeah
EricHallahan#1051: Can you get another that doesn't hit **`EOT`**? |
gwern#1782: (ie because it gets harder and harder to hit a deep enough node that the choices haven't been generated yet, and because the more the community optimizes choice ordering, the less any player will *want* to waste time rolling a brand new choice or outcome)
kindiana#1016: here's one for "EleutherAI is" https://cdn.discordapp.com/attachments/729741769738158194/851194871154475078/message.txt
StellaAthena#3530: I tested $\theta=1,000, 10,000, 100,000$ and saw no difference in performance: https://wandb.ai/eleutherai/rope?workspace=user-stellaathena
kindiana#1016: I think long seq is too ood to work without tuning
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/851194971951333426/193204646687408129.png
EricHallahan#1051: Is this from our ablations?
EricHallahan#1051: I still think you can get away with a minimal number of thetas.
kurumuz#5695: This might be interesting to implement, our platform is kinda focused on free form writing though
gwern#1782: I think it's a great idea because it solves the 3 great problems of AID: (1) cost!, (2) very uneven quality, user forced to do all curation, (3) newbies having no idea what to do and giving up immediately
StellaAthena#3530: Yeah I'm going to push theta smaller and see if anything changes
kindiana#1016: theta = 1 :berk:
EricHallahan#1051: I think you need to remove them instead of changing the base.
gwern#1782: and it fosters all sorts of fun possible community dynamics - injokes and editing of branches to be self-referential, building up shared mythologies like forum 'quests'
kurumuz#5695: Well, I think you will need a system to rank the choices available
gwern#1782: yes, you can rank by popularity + explicit voting
gwern#1782: and then at some point you can train a model directly to rank
kurumuz#5695: yep
gwern#1782: (this is actually the context in which I first suggested CYOA to nick - just to get pairwise/ranking data to train a ranker)
kurumuz#5695: we can get a pretty good ranker
EricHallahan#1051: *Looks at CLIP text encoder.* |
gwern#1782: (so you could automatically throw away the worst possible completions)
EricHallahan#1051: (doubt it will work)
bmk#1476: @kurumuz semi related but I want to plug something I did that worked: you can simulate higher quality with more compute by training a model on sentence quality and using it to cherry pick from multiple generations
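A minimal sketch of that best-of-n idea — `generate` and `quality_score` are hypothetical stand-ins for a sampler and a trained quality model:

```
def best_of_n(generate, quality_score, prompt, n=10):
    # Trade extra compute for quality: sample several completions and keep
    # the one the quality model scores highest.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=quality_score)
```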
gwern#1782: when did you do that?
kurumuz#5695: yea, our generations are kinda expensive rn with GPUs
kurumuz#5695: which is the problem
kurumuz#5695: hmmm
bmk#1476: oh ok so this solution probably wouldn't be useful for you
kurumuz#5695: this is considering 6B btw
kurumuz#5695: for 2.7B it's not bad and we can probably do something like that
Teemochu#8740: >mfw 4chan manipulates the rankings
kurumuz#5695: we are working on retrieval and KGs and a discriminator would be good, though its better if you can force the LM without that :P
Sid#2121: what do you use for the 'sentence quality' metric?
gwern#1782: a nice thing about this also is that it ought to be extremely easy to code up. you just store all of the completions with their popularity metadata, and maintain a list of 'possible next completion', and the UI just follows the pointer until it hits a dead end and actually needs to run gpt
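A minimal sketch of that data model — completions stored as a tree with popularity metadata, and the model only invoked at a dead end; `run_gpt` is a hypothetical stand-in for an actual generation call:

```
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    votes: int = 0
    children: list = field(default_factory=list)  # ranked continuations

def step(node, run_gpt, choice=None):
    if choice is not None and choice < len(node.children):
        return node.children[choice]      # cached: free to serve
    child = Node(run_gpt(node.text))      # dead end: actually run the model
    node.children.append(child)
    return child
```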
bmk#1476: human annotations
kurumuz#5695: at one point we probably need to go heavy in distillation
Sid#2121: when did you try this lol? I imagine you would need a lot of annotated data for it to work across different genres of text
kindiana#1016: (x)
kurumuz#5695: @Chris I doubt this would be easy to implement with our editor? :P
gwern#1782: well, I don't know what data model you've locked yourself into. but if you were doing it from scratch, it'd be easy |
bmk#1476: why would it have to work across too many different genres
bmk#1476: you're just trying to do storytelling
Sid#2121: am i?
gwern#1782: it's just a tree, after all. literally CS 101
bmk#1476: it doesn't matter if it doesn't work for news articles
bmk#1476: idk I was talking to kuru
Sid#2121: no you weren't lol
bmk#1476: I brought this up initially just to suggest the idea to kuru
Sid#2121: ok, but you were responding to me
bmk#1476: I never intended to suggest that it works for all genres
bmk#1476: I have no idea if it works for that
bmk#1476: I was suggesting a way to improve storytelling to kuru, when you entered the conversation I assumed we were still talking about it in the context of what kuru would use it for
kurumuz#5695: what did you exactly use for reranking the generations?
bmk#1476: another model
bmk#1476: trained on human annotations
kurumuz#5695: yea another model but i was curious about the specifics
kurumuz#5695: BERT?
bmk#1476: well I used a tiny GPT model because I was lazy but BERT would probably have been better
kurumuz#5695: okay the thing is
kurumuz#5695: you dont even need to manually label this |
kurumuz#5695: when you hit retry
kurumuz#5695: it means you didnt like the generation
kurumuz#5695: autolabeling, literally
bmk#1476: sure you can use that data too if you're saving it
AI_WAIFU#2844: upvotes
kurumuz#5695: we're not saving it
kurumuz#5695: :P
kurumuz#5695: we're not saving anything
bmk#1476: if you start saving it
bmk#1476: idk
bmk#1476: I don't really know the specifics of what you guys are doing
kurumuz#5695: we need a policy on this
kurumuz#5695: stories are encrypted and only accessed on the client, generations are never logged
kurumuz#5695: is what we're doing
AI_WAIFU#2844: I think this is in a sense a different product, and you guys would mostly need to redo a lot from the ground up. Including its marketing.
AI_WAIFU#2844: Focus on the novel writer, then branch out once you have a solid product/userbase
kurumuz#5695: well, reranking with bert is the same product. but what gwern suggested seems kinda different yea
kurumuz#5695: yea need to focus on getting the beta out for now
Sphinx#2092: This actually works out pretty well for MT. In fact, you can actually train a model by using the other model as a reward signal and end up performing better (according to human evals) than reranking.
Sphinx#2092: https://arxiv.org/abs/2104.07541 |
kurumuz#5695: huh, maybe grounding the LM on human preferences?
kurumuz#5695: I think openai had a paper similar to that.
AI_WAIFU#2844: Another thing to think about for a product like this is B2B applications. With different fine-tuned models, you could apply it to lots of things. ~~Especially automating journalists~~
kurumuz#5695: lol
bmk#1476: ok, I should have said "I'm talking *about* kuru, not "to kuru", but I don't think it's worth nitpicking over
kurumuz#5695: https://arxiv.org/abs/2009.01325
Sid#2121: i'd forgotten about it 5 minutes ago lol
Daj#7482: Yea me and colleagues have been working a lot on that
Daj#7482: Lots of fun ideas have been bubbling up
Daj#7482: It's almost a shame your stories are encrypted, so much rich human feedback data :berk:
kurumuz#5695: well we don't collect anything without their permission
kurumuz#5695: we didnt exactly decide on a policy for this
kurumuz#5695: we should do that soon
bmk#1476: you can't just barge in, challenge a claim, and then suddenly leave without any explanation whatsoever lol
Sid#2121: yes i can
Sid#2121: also i didn't "challenge" anything? I was just speculating about the technique
Daj#7482: Well if you ever decide to collect e.g. which outputs get rejected, do tell me, that's interesting data for human preference tuning hah
kurumuz#5695: I think our idea was, to have an experiments tab
kurumuz#5695: where you can enable certain collection of metrics or data
kurumuz#5695: but you always know what is collected |
kurumuz#5695: and you can just disable it
kurumuz#5695: so I will tell you if we do that :P
bmk#1476: you brought up a question, I tried to answer by saying it doesn't matter outside the scope I'm thinking of, I didn't make that clear enough and you picked on the technicality that I was responding to you and not kuru, and I clarified what I meant, and then you just left me hanging there
Sid#2121: well you said it doesn't matter outside the scope *I* was thinking of - but i think we've already spent too much time nitpicking over this little social confusion so i'm just gonna shut up now
bmk#1476: going forward can we please not leave people hanging in a conversation? like even just "I don't think this discussion is productive, let's cut it short" is better than just not replying at all
Sid#2121: dude it was less than 5 mins and the conversation wasn't even going anywhere. Not everyone checks discord all the time.
Teemochu#8740: There is one :firealarm: for using this kind of thing for alignment, and that's that this kind of training will naturally end up excluding the preferences of people who want to remain private, and those people probably have a fairly distinct set of preferences. That said, I don't see an easy way of fixing that issue because logging inputs without permission is far worse.
kurumuz#5695: yea
AI_WAIFU#2844: Just shove their brains in an fMRI and extract their deepest darkest desires.
bmk#1476: i feel like it happens modestly often
kurumuz#5695: also thought about that but what you can do
kurumuz#5695: ¯\_(ツ)_/¯
kurumuz#5695: privacy is important
Teemochu#8740: (my worry about outer-aligned systems, in general, is that they will be too-narrowly-aligned, whether that means to a single userbase, a single nation, or even a single century of humanity)
Teemochu#8740: and tbh 2221:2021::2021:1821, if not even more alien (and asserting that the beliefs of 2021 have moral supremacy is at the very least headstrong if not downright evil)
AI_WAIFU#2844: Hot take, privacy is only important because people have so much leverage over each other. If you give everyone a sufficiently good BATNA then people will reveal their true preferences.
Teemochu#8740: I mostly agree, privacy is the second-best option and the best one that actually works when people's spheres of influence intersect
bmk#1476: but when someone pops in and say something provocative and then silently abandons the conversation, it keeps bothering me for the next hour or so
Sid#2121: if you take innocuous speculations about a model's ability to generalise as provocative then that's your problem more than anyone else's
Teemochu#8740: like, the problem with a lack of privacy is that an employer can always choose to be private (about why they didn't hire you) |
Teemochu#8740: so you can't just legislate away the problems
Teemochu#8740: or you can try but you mostly end up throwing out the baby with the bathwater (e.g. assumption of parity between classes)
Teemochu#8740: Perhaps this is one reason for prudishness in church communities -- if talking about sex is verboten then people with preferences that don't fit the flavor of the times don't get singled out and potentially removed in what was historically the bedrock community structure.
Emad#9608: Why not do one of two things: i) have a "private by default" toggle and/or ii) a "make this story private" button on new story generation (can even give a day/week/whatever before it gets added to the story pile as you don't care about freshness).
bmk#1476: ok I must have misinterpreted your tone over text, because it was the tone that felt provocative and not the content
chilli#5665: :thonk: disagree
chilli#5665: If you’re into ... furry porn, I think most people wouldn’t want others to know that regardless of what leverage they had.
Teemochu#8740: I think a world in which privacy is not important is one in which humans are extremely different than they are now (e.g. imagine a world where "bring your whole self, and we won't judge you" actually *is* the standard everyone follows)
Teemochu#8740: privacy norms are just a patch to the fact that this isn't the world we live in
sheggle#6841: It's sorta inevitable though isn't it? With stronger models, predictive power should also grow.
sheggle#6841: That and the ever increasing online presence of people
gwern#1782: https://www.reddit.com/r/GPT3/comments/ntvqw6/cyoa_aid_proposal_collaborative_storytelling_on/ wrote it up since the advantages of caching/CYOA don't seem to be obvious
kurumuz#5695: o nice
cognomen#6297: I think it's been recently demonstrated why we might not actually want to see all the choices other people make
Teemochu#8740: yeah especially if you aren't using a webhost that's idgaf
Teemochu#8740: and I don't think Trabia (Tor-exit-node-friendly host in Moldova) provides GPU servers
kurumuz#5695: I will just share this on our dev server lol
kurumuz#5695: @gwern If I could do a choice based game, I could really gamify it and make something really cool but that is not what we're doing
kurumuz#5695: open ended generations are hard
Jonnathan#1234: I don't like this kind of argument because it starts from the position of doing something that can be construed as wrong. What people rarely seem to point out is that society is not static. What is acceptable today may be unacceptable in twenty years. Imagine facing repercussions in 20 years for something you said in your kitchen which got recorded by some smart device. Now sure that sounds dystopian as hell, but no one can predict the future. Privacy gives us freedom of expression. People should care about privacy even if they are doing nothing wrong. |
bmk#1476: one counterargument would be that you could extend this argument to basically anything - imagine in 20 years anyone who didn't publicly profess to be x today faces repercussions, and so therefore anyone who opts for privacy will be punished
bmk#1476: to give a concrete but contrived scenario, imagine vegans take over the world and punish anyone who wasn't openly vegan or something
gwern#1782: good to have the idea out there, and maybe get people thinking about going beyond the AID model and also about how to use caching to bring costs down. the costs are proving to be the achilles heel of these things
Jonnathan#1234: But has something like that ever actually happened? At the end of the day in your scenario there's still some degree of plausible deniability. "I didn't know about X I was too busy with school/work to know anything about it at the time." Might be a shitty example, but in that scenario there's more plausible deniability at play. That being said I think this is getting off topic.
kurumuz#5695: yeah, indeed.
kurumuz#5695: with what you proposed, we can run davinci sized models and pricing would be fine
bmk#1476: what I'm saying is if they assume the worst whenever there's plausible deniability
gwern#1782: 'unraveling'
marmiteCloud#5923: have you considered placeholder-filling cached-generations? i.e. distil a current context into a hashed set of placeholders, and look up cached prior prompts that may match.
chilli#5665: I think it needs some way of integrating custom prompts, and adding that into the tree.
gwern#1782: like private sub-trees? yeah, that's possible of course. but the user loses most of the cost savings if other people aren't going to reuse that tree
bmk#1476: I wonder if there's an economic niche for building the best possible AID-like service with total disregard to cost
chilli#5665: No, just for choosing the prompts that are provided. Like, if you choose a custom prompt then it gets added to the tree
bmk#1476: like would anyone pay say $100/mo for an ultra premium AID with quality that absolutely destroys everything else out there?
Teemochu#8740: This is where privacy norms help tbh
Teemochu#8740: the idea being that whoever interlocutes first bears responsibility for both sides' reactions
Teemochu#8740: whether it's someone speaking or someone asking a question
Teemochu#8740: (of course, privacy norms aren't *always* best... e.g. someone speaking to an audience doesn't really bear responsibility for an attendee who shows up to cause trouble)
Teemochu#8740: but in more one-on-one environments, e.g. if you ask Bob who he voted for it's your responsibility to control your reactions to whatever he says (as well as your responsibility to back off if he declines to answer or bear responsibility for however he reacts if you don't)
Teemochu#8740: (and more broadly the idea that not sharing info is the default, and whatever tries to breach this default is treated as an Action) |
kurumuz#5695: We're open to trying this but the biggest model we have is 6B
kurumuz#5695: ¯\_(ツ)_/¯
gwern#1782: they absolutely would. this is a basic fact of game economics: whales. this is almost the only reason to kickstart games or ttrpgs: so you can soak the whales
kurumuz#5695: yea
kurumuz#5695: there is a lot of whales
kurumuz#5695: even just in our discord server
EricHallahan#1051: Look at *Star Citizen*.
kurumuz#5695: Someone pledged 100$ on patreon even though we literally offer nothing
gwern#1782: hoo boy
bmk#1476: so you could add a super ultra premium tier that costs a shitload but is hugely better
kurumuz#5695: so
gwern#1782: *don't* look at star citizen if you want to maintain your faith in human reason
bmk#1476: and helps subsidize the lower tiers too
kurumuz#5695: yep
kurumuz#5695: hmm
kurumuz#5695: 4 32 gig GPUs, how big of a model can you fit?
kurumuz#5695: 52B fp16?
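The weights-only arithmetic behind that guess (ignoring activations, KV cache, and framework overhead):

```
gpus, mem_per_gpu = 4, 32e9   # bytes
bytes_per_param = 2           # fp16
print(gpus * mem_per_gpu / bytes_per_param / 1e9)  # 64.0B params ceiling,
# so ~50B once activations and other overhead are budgeted for
```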
gwern#1782: the problem, of course, is that there's not really any way at all to segment users. you can't offer something 10x better at $100/month because there is no such thing
kurumuz#5695: yea there isnt haha
Teemochu#8740: that's very close to the answer I believe |
kurumuz#5695: well there is some things you can bruteforce to make things a loooot better
kurumuz#5695: if you have the money
kurumuz#5695: well theoretically, lets say we distill a 100b model to that size
kurumuz#5695: and you have KGs and build vertices for the whole scene
kurumuz#5695: and you enforce consistency with a discriminator
kurumuz#5695: you can do that because 100$
kurumuz#5695: just throwing ideas out
bmk#1476: again it's not just model size
bmk#1476: you can do the thing I mentioned where you run the model 10 times and use a BERT model to pick the best one or something
kurumuz#5695: yea
kurumuz#5695: not just the model size ofc, just wanted to see how big of a model i could realistically fit
bmk#1476: ah right
kurumuz#5695: just run grounded RL agents each with seperate nets
kurumuz#5695: would be pretty expensive but can work
kurumuz#5695: this is a lot of dev time though
kurumuz#5695: just to create a 100$ tier
kurumuz#5695: depends on how popular it can be i guess
kurumuz#5695: competely changing the route we're taking with research though
bmk#1476: KGs sounds a lot like a thing that goose would be working on anyways lol
gwern#1782: like what? the rankers currently don't improve *that* much, and then you're SOL. tree search reliably turns completions into garbage at present, and everything beyond that is basically theory or wanking. a RL finetuned GPT-3 isn't even possible unless you are named 'OpenAI', it's not merely a matter of enabling an option somewhere
gwern#1782: there is no straightforward way to turn compute/$ into much greater quality at present save for a very few actors who don't choose to
bmk#1476: goose really likes symbolic stuff for some reason
kurumuz#5695: well, he's "advising" us :P
bmk#1476: oh
kurumuz#5695: or something like that
bmk#1476: lol that explains why youre interested in KGs
kurumuz#5695: i was interested in KGs before he came around
kurumuz#5695: I Just didn't know i was interested in KGs
gwern#1782: (and the economics for those few is to try to get maximal usage rather than segment... if OA spends another $10-100m to make a GPT-4 which is qualitatively better by 10x, they'll want to sell it as much as possible down to the marginal cost of the GPUs, and where's your moat or segmentation then?)
kindiana#1016: why would they want to sell down to marginal cost of gpus?
kurumuz#5695: i mean yeah its theory
gwern#1782: because otherwise they could buy more GPUs and sell more API calls, presumably
AI_WAIFU#2844: Isn't segmenting pretty straightforward? just have different models with different context lengths/sizes
kurumuz#5695: yea but what does that exactly improve
kurumuz#5695: some of the problems these models have will stay around
kurumuz#5695: 6B vs 175B doesn't seem like that much of a gain for me
kurumuz#5695: and your model gets some concepts completely wrong even at 175B
kurumuz#5695: you can improve the context length yea
kurumuz#5695: but it will still forget some things because the model just doesn't think they're important, even if they're in the context
EricHallahan#1051: The thing is that we really don't know what happens between 13B and 175B. |
EricHallahan#1051: For all we know we could see 175B performance at half the size.
kurumuz#5695: yeah, true
kurumuz#5695: I just don't think scaling the context length over 2048 and providing a much bigger model exactly justifies the 10x pricing
gwern#1782: I would expect different contexts to show the wrong kind of curve: essentially flat, and then going off a cliff. that's not a knob you can easily tweak to make something '10x better', that's the knob you tweak to make it '10x worse'. there's a difference and users will know
kindiana#1016: scaling laws :thonk:
gwern#1782: and you don't have 10x to throw away in the first place
EricHallahan#1051: Yeah lol
AI_WAIFU#2844: We know the largest models haven't converged.
gwern#1782: it's worth contrasting the situation with reinforcement learning, like selling a go/chess bot service. you totally *can* spend money to make it 10x better! (suitably phrased in terms of ELO/win odds)
AI_WAIFU#2844: Scaling laws for prod != scaling laws for papers
Teemochu#8740: I'm curious if the idea of training multi-token embeddings and using those (probably with learned positionals) on further-back tokens could be useful
Teemochu#8740: say, 1x128 4x32 16x32 64x32 256x32
Teemochu#8740: ends up with 11008 context length but 256 attention
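The arithmetic on that layout checks out (a quick verification, not a claim about whether it would train well):

```
spans = [(1, 128), (4, 32), (16, 32), (64, 32), (256, 32)]  # (tokens/slot, slots)
print(sum(t * n for t, n in spans))  # 11008 tokens of coverage
print(sum(n for _, n in spans))      # 256 positions actually attended
```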
kindiana#1016: :thonk:
kindiana#1016: like compressive transformer?
Teemochu#8740: reading
AI_WAIFU#2844: This is a fair point, I think we might need new ideas to go beyond current models for practical applications. The same way current vision literature is all about efficiency.
CRG#8707: Related: <https://arxiv.org/abs/2105.14039>
kurumuz#5695: well business needs to be sustainable ~~if you're not funded by VCs and got really good deals from openai~~
kurumuz#5695: big part of our work is pretty much on optimizing this stuff so it doesnt cost us a fortune |
kurumuz#5695: if you can run a model on frontend, run it on the frontend
kurumuz#5695: etc
kurumuz#5695: if you can go with less parameters, go with less parameters
Teemochu#8740: see this post, the amount of information you need at least for smallish token distances is *sub*inverse to the distance, so something that manages to put equal weight on "the last letter, the last word, the last sentence, the last paragraph, and the last chapter". Now it could be fully possible that these MLP kernels Kharr used didn't properly learn to attend further back, but my intuition is that this would be a straight zero-slope line to begin with https://discord.com/channels/729741769192767510/747850033994662000/850171368519499806 https://cdn.discordapp.com/attachments/729741769738158194/851227980851576883/unknown.png
Teemochu#8740: this is average attention *multiplied by [negative of] position* btw
Emad#9608: This is an interesting paper coming up at ACL 2021, 72.7% F1 score on SQuAD with just 128 examples: https://arxiv.org/abs/2101.00438 https://github.com/oriram/splinter
Kharr#7888: This pattern changes depending on the size of the model. Smaller models use more of the context and attend more tokens from what I've seen. Might have something to do with how much the model can memorize. Or simply put.. "big models need less context" which is already implied by GPT-3s experiments with bigger models doing better in 0-shot and few-shot settings.
Jonnathan#1234: Is there some quintessential knowledge distillation paper I should read? Maybe a top 3?
Jonnathan#1234: Guess I found this one: https://arxiv.org/abs/1912.13179
𓅬 gabriel_syme 𓅬#3220: will that be qualitatively the same end result you think?
Bruce23#6204: Wow, you guys already have trained a 6TB model? Amazing to hear that!
pebbles#7130: hmm, that's a very interesting question. Maybe, maybe not. I tend to think that once a certain threshold is reached at the task of self-improvement, and an intelligence explosion "goes off", then the AI will probably converge on an optimal design, more-or-less regardless of the original implementation details
𓅬 gabriel_syme 𓅬#3220: what are the % of steps per stage generally? like if I would finetune on a larger window, do you have a rough number expected to train on?
𓅬 gabriel_syme 𓅬#3220: having users do it gives you the advantage of learning their preferences I guess. Wonder how it works really early on though
𓅬 gabriel_syme 𓅬#3220: that makes sense yeah, will be interesting to see that unfold
𓅬 gabriel_syme 𓅬#3220: and then we all die
𓅬 gabriel_syme 𓅬#3220: this was a really cool discussion btw above, thanks everyone 🙂 I did my typical thing replying to last night's messages, sorry
pebbles#7130: it's night for me, timezones be crazy like that
pebbles#7130: yeah, hopefully we get to live long enough to see that happen
𓅬 gabriel_syme 𓅬#3220: I'll follow it closely, I think this text generation for specific purposes has potential in industry as well |
𓅬 gabriel_syme 𓅬#3220: yeah take a picture of the sunset
pebbles#7130: the future is going to be so awesome, if only we can live to see it through
𓅬 gabriel_syme 𓅬#3220: well, we'll see. a lot of issues coming up but also many cool things
𓅬 gabriel_syme 𓅬#3220: where do I read up on KGs btw, anyone knows? and what goose was on it? !goose?
Teemochu#8740: Read that as hopefully we will live long enough to die at first lol
gwern#1782: I was just thinking that
gwern#1782: "I hope we live long enough to see the Singularity. Both sides of it, specifically."
mkualquiera#3484: just make sure you ask for catgirls
mkualquiera#3484: it's a strictly dominant strategy
Imperishable_NEET#1969: ~~And ponies~~ :celestia:
gwern#1782: catgirl-ponies? ...not sure how much people would like that. https://youtu.be/2_ryNJVreiY?t=80
chirp#4545: is there a good way to do similarity search (~1M vectors, 3k vector dimension) in Colab?
kindiana#1016: scann/faiss?
chirp#4545: might work
chirp#4545: can't find docs though
chirp#4545: will look closer
chirp#4545: fwiw, what i'm trying to do is retrieve dataset examples
chirp#4545: idea is to take the activations at one layer of GPT-Neo (dim 3072) and find what examples from the dataset give the most similar activations
chirp#4545: hope to produce explanations like this:
|
> at layer 4, GPT-Neo upped the likelihood of the word "while" because your input looks like these other ones from the dataset
chirp#4545: ^ curious if someone has tried this before
chirp#4545: i'm basically trying to extend Key-Value Memories (https://arxiv.org/abs/2012.14913) to be useful for explaining how individual example inputs are processed
kindiana#1016: scann has decent examples iirc https://github.com/google-research/google-research/blob/master/scann/docs/example.ipynb
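A minimal faiss sketch for the use case above. Note that 1M float32 vectors at dim 3072 is ~12 GB, right at Colab's RAM limit, so the exact flat index may need to be swapped for a compressed one (IVF/PQ); treat this as a starting point, not a tuned setup:

```
import numpy as np
import faiss  # pip install faiss-cpu

d = 3072
xb = np.random.rand(100_000, d).astype("float32")  # stand-in for activations
faiss.normalize_L2(xb)        # cosine similarity via inner product

index = faiss.IndexFlatIP(d)  # exact search; swap for IVF/PQ to save RAM
index.add(xb)

xq = xb[:5].copy()
scores, ids = index.search(xq, 10)  # top-10 most similar stored vectors
```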
bmk#1476: this sounds like what we were talking about wrt the multimodal neuron thing in #alignment-general
chirp#4545: ooh link?
bmk#1476: and extending it to non-multimodal models
chirp#4545: if i get this working do you want to try it out?
chirp#4545: (if you can help, even better!)
bmk#1476: here's the link to the start of the convo, pls lmk if it works or not https://discord.com/channels/729741769192767510/730451873613611079/849531701117059072
𓅬 gabriel_syme 𓅬#3220: will u be open sourcing your implementation @chirp ?
chirp#4545: @𓅬 gabriel_syme 𓅬 yes!
chirp#4545: if i get it working lol
chirp#4545: i've gotten myself in pretty deep
bmk#1476: but tldr i was absolutely blown away by multimodal neurons and now i want to see if it's doable with non-multimodal
𓅬 gabriel_syme 𓅬#3220: nice thank you 🙂 well, maybe people in here could help. not me though lol
𓅬 gabriel_syme 𓅬#3220: wonder why the paper was never implemented btw
chirp#4545: @bmk if you want a tldr this is basically what i'm going for, except interactive with any prompt you enter https://cdn.discordapp.com/attachments/729741769738158194/851318198997221406/unknown.png
bmk#1476: oh is this doing some kind of logit lens thing?
bmk#1476: i was just thinking looking at units directly but i guess this probably makes sense too |