sschwarz25#1749: Well, we want to train our own is my point.
bmk#1476: In that case we probably won't be able to help you
Louis#0144: its very server infrastructure specific
Louis#0144: you'd have a hard time
bmk#1476: You can use our models after we end up training them, ofc, since they'll be open and anyone can use them, but no promises on when that'll happen
bmk#1476: If you have engineers you can put to work helping us write open source code, no strings attached, that would bring our gpt3 replication closer, but otherwise we're not really interested
sschwarz25#1749: When you say infrastructure specific, do you mean you have had to customize a lot of the dependencies?
StellaAthena#3530: @sschwarz25 The way that your GPUs are wired together matters
EricHallahan#1051: Not just that, but topology.
EricHallahan#1051: (Sorry Mathematicians.)
sschwarz25#1749: I see, I was looking at some of the ways you are working to parallelize and speed it all up. This must be how its done.
Louis#0144: super computing is a different scale entirely
Louis#0144: parallelizing at this scale requires a very special expertise
Louis#0144: and its highly dependent on the setup
Louis#0144: what works for us to speed up the model would not work for you on a different setup
sschwarz25#1749: I wish I was here earlier, following along. So much amazing work going on.
sschwarz25#1749: Thanks for the info. I am going to keep digging and see what I can do.
StellaAthena#3530: PSA I updated `:citationneeded:` to be :citationneeded:
EricHallahan#1051: :citationneeded:
EricHallahan#1051: Much better.
zphang#7252: that's perfect
zphang#7252: it looks slightly off-center
jrowe#5371: it's top aligned
jrowe#5371: lowering the ? might make the balance worse
EricHallahan#1051: Overall it's off-center.
jrowe#5371: https://cdn.discordapp.com/attachments/729741769738158194/819681050464092180/citation.png
jrowe#5371: blech
EricHallahan#1051: $\mathrm{[?]}$
TeXit#0796: **Eric Hallahan** https://cdn.discordapp.com/attachments/729741769738158194/819681197470253156/304058360893014018.png
spirit-from-germany#1488: Just out of insane curiosity... How much would that cost as a dedicated instance? 😂
jrowe#5371: https://cdn.discordapp.com/attachments/729741769738158194/819681652686716945/citation.png
bmk#1476: A lot, not sure exactly how much
StellaAthena#3530: Google's website says to call their sales rep to work out a custom deal
spirit-from-germany#1488: Lol
jrowe#5371: last try https://cdn.discordapp.com/attachments/729741769738158194/819683014702923826/citation.png
mkualquiera#3484: it's like 2 pixels off
jrowe#5371: nope
spirit-from-germany#1488: A naive calculation... Probably one would get a generous discount :joy: https://cdn.discordapp.com/attachments/729741769738158194/819683178146299945/IMG_20210311_222717.jpg
jrowe#5371: its exactly one pixel off
mkualquiera#3484: dang
jrowe#5371: 7 from the bottom, 6 from the top
mkualquiera#3484: I wanted to say 1 but I didn't want to seem like I could measure that small
jrowe#5371: but it looks wonkier without the extra pixel on the bottom, 🤷♂️
mkualquiera#3484: Research be like
StellaAthena#3530: Good work. Updating
StellaAthena#3530: :citationneeded:
StellaAthena#3530: weird
StellaAthena#3530: Does it look more off to y'all as an emoji than it did as an image
StellaAthena#3530: JK I'm silly
StellaAthena#3530: :citationneeded:
jrowe#5371: cool 😄
Louis#0144: On this episode of weird shit my AI has written
Louis#0144: "They welcomed me, fed me forcefully, and took me sightseeing. "
Louis#0144: dorks
Louis#0144: how does gradient checkpointing work
Louis#0144: does it work well
jrowe#5371: sounds like a family reunion
cfoster0#4356: Re: how https://medium.com/tensorflow/fitting-larger-networks-into-memory-583e3c758ff9
cfoster0#4356: does it work well -> it depends
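For a concrete picture of the mechanism, here is a minimal PyTorch sketch (the toy block and shapes are made up): activations inside each checkpointed block are dropped on the forward pass and recomputed during backward, trading roughly one extra forward pass for a large memory saving.

```python
import torch
from torch.utils.checkpoint import checkpoint

class Block(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.ff = torch.nn.Sequential(
            torch.nn.Linear(dim, 4 * dim), torch.nn.GELU(), torch.nn.Linear(4 * dim, dim)
        )

    def forward(self, x):
        return x + self.ff(x)

blocks = torch.nn.ModuleList(Block(512) for _ in range(12))
x = torch.randn(8, 128, 512, requires_grad=True)

for blk in blocks:
    # Intermediate activations inside blk are not kept; they are recomputed
    # from the block's input during backward, so peak memory grows much more
    # slowly with depth.
    x = checkpoint(blk, x)

x.sum().backward()
```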
Louis#0144: I wanna split a softmax between GPUs
Louis#0144: For some RL thing
kindiana#1016: Works well for transformers
kindiana#1016: Less well for convnet resblocks
Louis#0144: So I split the weights
Louis#0144: Or I split the softmax
Louis#0144: Also is it ez w transformers?
Louis#0144: The library
Deleted User#0000: If I understand correctly, a BERT-style objective for images struggles with multimodality, whereas a GPT-style objective struggles with image size
Has anyone tried combining them? I.e. mask out a small part of an image, use the rest of the image to predict some latent code, which is then used to condition an autoregressive model to generate the small masked-out part of the image?
(The "struggles with multimodality" vs "struggles with size" thing wasn't really how I came up with the idea. I more came up with it by asking what the *correct* way of modelling images would be, such that one could avoid all trouble like adversarial examples, and pick up all the relevant image features but none of the irrelevant ones, and such. Though also it's a bit more complicated than I phrase it, I guess, because if someone did this modelling, their instinct would probably be to use the BERT-style encoder for downstream tasks. Whereas the reasoning I got from my question of "what would be the correct way to do it?" implies that one should use the GPT-style part of the trained model for other stuff. Despite the fact that the BERT-style encoder has been trained to take in the whole image and the GPT-style thing only takes in a fraction of it. The GPT-style thing would also need to be inverted, in a sense; rather than taking its resulting vector after feeding it with the entire image, you'd want to do gradient descent to find a latent code that maximizes the probability it assigns to the image. I assume nobody has actually done that tho? 🤷 But if someone has combined the BERT/GPT-style objectives and released a trained model from them, then it should be straightforward to use this trained model to test my idea about where the most solid representations lie.)
CRG#8707: > combined the BERT/GPT-style objectives
MPNet or UniLM come to mind <https://arxiv.org/abs/1905.03197> https://cdn.discordapp.com/attachments/729741769738158194/819918651020017724/470c3bf5bd06db6d7aaf38fc40cc95c7.png
Deleted User#0000: Hmm, I don't think it'd work for language, at least the theory I used implies that it wouldn't
Deleted User#0000: Like
Let x be an image, M(x) be a masked version of the image where a small region (say 32 x 32) is masked away, C(x) be the image clipped to this same small region, F be some neural network taking the masked image, and G be some neural network that can conditionally generate a distribution of images
And I guess let P(y|G(f)) be the probability of y according to G conditioned on f
Then if you train G and F to optimize P(C(x)|G(F(M(x)))) over your training set x
Roughly speaking the intuition is
Any clipping C(x) of your image is going to have some features z that are important to the overall content of the image, and some features eps that are just irrelevant noise. The reason masked or autoregressive models work so well is because the z's are correlated across the image, and the reason they are correlated across the image is because images are highly redundant; they contain many more features than are actually needed for classification, with the actual signal being spread out thin across the image due to objects being extended in space. (And due to objects being correlated with each other in reality, which is due to more complex real processes.)
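Putting the notation above into a rough training-step sketch (every interface here is an assumption, not an implementation): `mask_and_clip(x)` returns the masked image `M_x` and the clipped-out patch `C_x`, `F_net` plays the role of F, and `G` is an autoregressive decoder conditioned on the latent code.

```python
import torch
import torch.nn.functional as nnf

def train_step(F_net, G, x, mask_and_clip, optimizer):
    """One step of the proposed objective P(C(x) | G(F(M(x)))).
    mask_and_clip, F_net and G are hypothetical modules:
    mask_and_clip(x) -> (M_x, C_x); F_net maps the masked image to a latent z;
    G(C_x, z) returns per-pixel logits from an autoregressive decoder
    (PixelCNN / ImageGPT-style) conditioned on z."""
    M_x, C_x = mask_and_clip(x)          # masked image, clipped-out patch
    z = F_net(M_x)                       # signal inferred from everything *except* the patch
    logits = G(C_x, z)                   # assumed shape (B, H, W, 256)
    # maximizing log P(C(x) | G(F(M(x)))) == minimizing pixel-wise cross-entropy
    loss = nnf.cross_entropy(logits.flatten(0, -2), C_x.flatten().long())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```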
Deleted User#0000: So basically, what we want to do is split the features of the image into those that are correlated with the rest of the image (signal), and those that aren't (noise).
If we predict some part of the image from other parts of the image (BERT-style), then that sorta achieves the split; except we run into two problems. First, you can't actually do this (multimodality), and secondly, this sort of inverts the features; rather than giving you the features of an image patch from the image patch itself, it gives you the features of the image patch from everything except that image patch. When I developed the idea, I mostly focused on the second part, but the former part is relevant too.
What we basically want to solve the latter of these is some sort of neural network that can give us the z from the image patch itself. And to solve the former, what we want is some way to represent an entire distribution of images, rather than representing a single possible image, since while you can't predict a specific image (the position of each blade of grass, for instance), you can make strong predictions about the distribution of the image ("there's grass there"). This distribution problem can be handled via a GPT-style method. And that method also handles the question of getting the z from the patch itself; namely, you can "invert" the generator by using gradient descent to optimize the z so that it assigns a high likelihood to the image patch.
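The "invert the generator" step at use time could look roughly like this (the `G.log_prob(patch, z)` interface is an assumption, not a real library call):

```python
import torch

def invert_generator(G, patch, z_dim, steps=200, lr=0.05):
    """Recover a latent code for `patch` by maximizing the decoder's
    log-likelihood, instead of running a feed-forward encoder."""
    z = torch.zeros(z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = -G.log_prob(patch, z)   # negative log-likelihood of the patch given z
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```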
Deleted User#0000: Doing this also """should""" solve issues like adversarial examples, if my assumption about how they work is right. Specifically, my assumption is that they work because the high redundancy of the images allows the models to ignore the overwhelming majority of the possible features, and so you can just identify the features that the networks use, flip those to the target category, and leave all other features alone. But a generative model like this would end up using a much greater fraction of the features, because... well, it generates the images. It has to account for everything about them.
Deleted User#0000: Idk if my explanation makes sense. It's sort of the distilled version of several different approaches that I considered (my questions in #math were about some alternate approach to the autoregressive objective, but it seemed impractical)
Deleted User#0000: I guess PixelCNN would be better than GPT for this, probably
Deleted User#0000: Also I guess I should add, obviously 32x32 is p small but my thought was that then after one has an encoding for the smaller stuff, one could stack it to get bigger
AI_WAIFU#2844: > So basically, what we want to do is split the features of the image into those that are correlated with the rest of the image (signal), and those that aren't (noise).
Isn't that just a VAE?
spirit-from-germany#1488: https://www.nytimes.com/2021/03/06/business/the-robots-are-coming-for-phil-in-accounting.html
Deleted User#0000: As I understand it, VAE does something that can be interpreted as vaguely the same goal, but in a completely different way from what I suggest
Deleted User#0000: But also I think usually the VAE goal is different? Like it can be interpreted as the same, but usually the goal is to have a generative model?
Deleted User#0000: Like I guess I should list some differences from the VAE:
* VAEs are encouraged to capture as many details as possible in their latent code. Mine is only encouraged to capture details about the image that can be inferred from elsewhere in the image.
* VAEs train an encoder, mine does not. (F isn't an encoder because the data it gets as input has no pixels of overlap with the data it's trying to predict. And because F is thrown away after the training.)
* VAEs do not have an autoregressive element
* The latent codes of a VAE code for single images (you need to add noise to get a distribution), whereas the latent codes of my approach code for distributions of images
sunny#5382: @Deleted User That's a fascinating idea. I think of VAEs primarily as the ELBO equations that define a "best tractable relationship" between a single data space and a single latent space. Your idea does seem closely related, with a change to the encoder. Some comments:
- I think you can reuse the VAE ELBO equations to figure out the appropriate loss function for your solution. The KL term in the VAE's ELBO equation would regularize the latent codes, which is where you'd slot in your thoughts from #math.
- With a normal VAE, the encoder function is given by a separate neural network. With your idea, the encoder function is given by inverting the decoder.
- You need to use graddesc to heuristically invert the decoder since the decoder may not be invertible. Normalizing Flows solve the same problem by structuring the decoder so it's always invertible.
- Normalizing Flows have two disadvantages relative to VAEs: (1) their latent space needs to be the same size as the data space, and (2) the restrictions required to make the network invertible end up requiring the network to be deeper than a comparable VAE. Your solution would address both problems.
- Your solution comes with the cost of a potentially more expensive encoding step. If the goal is to create only a decoder, then this would slow down training, but it would result in faster inference. Your network would be half the size of a comparable VAE, and you need to compute that backwards pass anyway for training, so maybe there are tricks here to have your solution train faster than a comparable VAE.
- You'd need to calculate/approximate a Hessian matrix to train the decoder. That sounds expensive.
- You might be better off using RL to find the latent code for a given input rather than graddesc.
notooth#4850: Hello everyone,
What is the release date of the pre-trained Pile dataset?
EricHallahan#1051: You mean the model trained on The Pile?
Louis#0144: No date
Louis#0144: Anyone who says otherwise is a liar
EricHallahan#1051: soon™️
Louis#0144: LIAR
Louis#0144: LIAR LIAR PANTS ON FIRE
EricHallahan#1051: soon™️ is not a date, it is a time.
Louis#0144: We just pushed back the release date another week Bc of u
Louis#0144: I hope ur happy
EricHallahan#1051: Because of *me*? `:o`
notooth#4850: How soon? Is there estimated time?
zphang#7252: There is currently no estimated time
Deleted User#0000: Note that under my proposed training method, you wouldn't need the encoder to train it, and therefore the expensiveness of gradient descent for the encoder isn't a problem
sunny#5382: I know. The expensiveness comes from having to graddesc on the decoder to generate latent points. The decoder needs to be trained somehow to have a good latent space. There's an open question of how you train the decoder to have a good latent space. If you're training it via graddesc, then you'd have to graddesc through the latent-point-generating-graddescs, which would require a Hessian.
sunny#5382: You can get around that by using RL to generate latent points, rather than using graddesc to generate latent points.
sunny#5382: There's some other work that needed to graddesc through a graddesc here: https://arxiv.org/abs/1703.04730. It might help with intuition on how that works.
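To make the Hessian point concrete, here is a rough sketch of an inner latent-finding loop kept differentiable so the decoder can be trained through it (again, `G.log_prob` is an assumed interface); the `create_graph=True` is what drags in the second-order terms.

```python
import torch

def encode_differentiably(G, patch, z_init, inner_steps=5, lr=0.1):
    """Inner loop that produces a latent code by gradient ascent on the
    decoder's log-likelihood. z_init is assumed to require grad (e.g. the
    output of F(M(x))). With create_graph=True each update stays on the
    autograd graph, so a loss computed on the returned z can be
    backpropagated into G's parameters, at the cost of second-order /
    Hessian-vector products."""
    z = z_init
    for _ in range(inner_steps):
        log_p = G.log_prob(patch, z)                      # assumed interface
        (g,) = torch.autograd.grad(log_p, z, create_graph=True)
        z = z + lr * g                                    # differentiable ascent step
    return z
```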
Deleted User#0000: By good latent space, you mean one that permits linear interpolation? 🤔 Because that's not my goal
sunny#5382: It doesn't need to permit linear interpolation, as long as you have a way to search the latent space for points that decode well. I guess I'm unclear on how the decoder is trained.
Deleted User#0000: The idea is that the latent code should represent non-noise features, with non-noise features being defined as those that can be inferred from elsewhere in the image
So there's a throwaway network, F, which is given everything except the part that the decoder has to train
And then F outputs the features used to condition the decoder
3dprint_the_world#6486: tbf some cultures actually do this
sunny#5382: I confused the role of F and graddesc. Revising my understanding:
- The latent code can be thought of as the sum of two vectors: one signal vector corresponding to M(x), and one noise vector corresponding to the information in C(x) not in M(x). In both cases, the latent code should only include information relevant to regenerating C(x).
- F can be used to generate the part of the latent code corresponding to M(x).
- Graddesc can be used to generate the part of latent code corresponding to information in C(x) not in M(x). When using F(M(x)) as a starting point, the end of this graddesc operation should give a latent code that includes all information corresponding to C(x), both signal and noise.
Does that sound right? I haven't figured out how to incorporate this part into my understanding:
> So there's a throwaway network, F, which is given everything except the part that the decoder has to train
(It looks like G is trained solely on the output of F, and the two networks are trained jointly. So wouldn't F have all the same information as G during training? Or do you not propagate gradients through G to F & use a different loss function to train F? Also, why throw away F after training since F seems to be the network that generates latent codes that represent non-noise features, and since F seems important for initializing the graddesc?)
I also didn't get how the noise vector is used during training, or if it's used during training. If it's not used during training, I didn't get how it affects adversarial samples.
Deleted User#0000: > - The latent code can be thought of as the sum of two vectors: one signal vector corresponding M(x), and one noise vector corresponding to the information in C(x) not in M(x). In both cases, the latent code should only include information relevant to regenerating C(x).
No, the latent code only contains the signal. The noise comes in via the autoregressive element in the generator.
> - F can be used to generate the part of the latent code corresponding to M(x).
Yes, with asterisks. It's not that F is meant to be used for that once the algorithm actually gets used. F is just a tool used for generating the latent code in training. The issue/assumption/theory is that F would be vulnerable to problems like adversarial examples, which is one of the things that this method might fix. In order to generate the latent code after the system is trained, one optimizes the input to the generator so that it assigns as high a probability to C(x) as possible. This will separate the signal from the noise, because the generator has only ever been trained to usefully use the signal, and therefore can't extract the noise into the latent code. But at the same time it will use "all" (or rather, as many as possible) of the features of C(x), because it's a generative model and so the only way to generate the image patch with high likelihood is by finding a latent code that explains as many features as possible. Which I think might make it resistant to adversarial attack (or possibly not ofc - one element of adversarial attacks is "non-robust features" being common. My intuition is something like, yes there are plenty of non-robust features, but if you add all of them together they probably become more robust? The trouble is that usual neural networks have no incentive to use multiple highly correlated versions of the non-robust features, preventing them from adding up. I should probably write a more in-depth explanation of what I have in mind wrt this)
Deleted User#0000: > - Graddesc can be used to generate the part of latent code corresponding to information in C(x) not in M(x). When using F(M(x)) as a starting point, the end of this graddesc operation should give a latent code that includes all information corresponding to C(x), both signal and noise.
No. G is an autoregressive model, e.g. PixelCNN or ImageGPT. It takes the latent code as input (presumably with some sort of FiLM stuff) together with the first part of the image, and then predicts the next pixel. In order to compute the probability of the overall image, for each pixel you feed all of the image up to that pixel into the generator, and then you see how much the generator predicts the pixel as the next. And then you multiply this probability over all the pixels. (Or more realistically, add the log probs. But y'know, details)
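In symbols, the quantity the inversion maximizes is just the conditional autoregressive likelihood of the clipped patch described above, with $c_i$ the pixels of $C(x)$ in raster order:
$$\log P\big(C(x)\mid z\big) \;=\; \sum_{i} \log p_G\big(c_i \mid c_{<i},\, z\big)$$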
Deleted User#0000: > (It looks like G is trained solely on the output of F, and the two networks are trained jointly. So wouldn't F have all the same information as G during training? Or do you not propagate gradients through G to F & use a different loss function to train F?
What I meant is, F gets the masked version of the image, G generates the clipped version. So if you use gradient descent on G to generate the encoding, you get it based on the clipped image patch. Which is exactly the part that is masked out of F's input.
> Also, why throw away F after training since F seems to be the network that generates latent codes that represent non-noise features, and since F seems important for initializing the graddesc?)
It might be relevant to have a special network for initializing the gradient descent to get faster decoding. (Or to get decoding at all - it might not be super viable to graddesc on it from no information? Idk) However, you can't use F for this, because F gets the masked input image, whereas once you start encoding, you want to encode on the basis of the image patch. But you could totally just train an encoder on the image patches and use that for initialization, yeah. That encoder would presumably be less robust tho
Deleted User#0000: Gonna write down my intuition about how this all affects adversarial examples:
Deleted User#0000: One reason that standard models are vulnerable to adversarial examples is multicollinearity, I think.
Let's consider a toy problem. You've got a labelled set of uniformly colored images, and you want to classify them into categories depending on their color. Since you know that Bigger Is Better, you train a ginormous neural network to classify them. The network very quickly achieves 100% accuracy on both the training set and the test set, and you're happy. Then you test it against adversarial examples, and it fails badly. What went wrong?
Every single pixel of the uniformly colored images represents a feature that can be used to solve the problem. The implementation that would be least sensitive to adversarial examples would be to average all the colors and then classify on the basis of that average, but your model has no inductive bias towards that solution. How can we introduce this bias?
One way is with a generative model. With a generative model, the latent ends up affecting all of the features, and conversely this means that all of the features have a chance at affecting the latent when you pick a latent that best explains the features. In cases where the features are in conflict, rather than going with whichever tiny fraction of features it has learned, it needs to balance all of the features as otherwise it can't account for the image.
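The toy problem is tiny to set up, for anyone who wants to poke at it (a sketch; the sizes and colours are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
colours = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.float32)

def make_dataset(n=1000):
    """Uniformly coloured 32x32 images; the label is just the colour index.
    Every single pixel is a perfect feature, which is exactly why an
    unconstrained classifier can latch onto a handful of pixels and stay
    vulnerable to adversarial perturbations."""
    labels = rng.integers(0, 3, size=n)
    images = np.broadcast_to(colours[labels][:, None, None, :], (n, 32, 32, 3)).copy()
    return images, labels

images, labels = make_dataset()
# The robust solution averages over all pixels; a lazy one can read images[:, 0, 0].
mean_colour = images.mean(axis=(1, 2))
```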
kindiana#1016: what about data augmentations like cutout? Seems like a nice idea but I'm not sure if this would be able to solve adversarial examples
Deleted User#0000: (... Has anyone tried to use standard robust training techniques on a toy dataset like I described above? As far as I've heard, standard robustness techniques don't work against all forms of adversarial attacks. The ones I've heard about probably wouldn't work on the uniformly colored images, but maybe there are others that would. Has anyone looked into this? It's no wonder they wouldn't work on real data if they wouldn't work on uniformly colored images.)
Deleted User#0000: You might use cutout to augment the data, if that's what you mean. Why?
kindiana#1016: I feel like cutout augmentation would have a similar effect to adversarial examples
sunny#5382: People have tried adversarial training on mnist. It does okay against blackbox attacks, and it doesn't work against whitebox attacks. There have been some new defenses proposed in the last ~3-4 years that supposedly work well, but I haven't kept up with them.
(Note: I'm still reading what you wrote)
Deleted User#0000: Cutout wouldn't solve the uniformly colored images task, since that just reduces the image size without doing anything else
kindiana#1016: cutout does this https://cdn.discordapp.com/attachments/729741769738158194/820218977933459456/0LheHpgaVwsVw2p7L.png
kindiana#1016: it would mean you can't rely on a single pixel
Deleted User#0000: Ah oops, mixed up with random clipping
Deleted User#0000: Cutout would make it not rely on single pixels, but there's still all sorts of other aspects of the data manifold that the uniform images would consistently have. I don't really think augmentations can create full robustness, because the data manifold has too many dimensions to cover them all?
kindiana#1016: yeah I totally believe that, I'm just saying I'm not sure your proposal would be significantly better, i.e. whether the data manifold created by G would actually be better than the one learnt by the discriminator with augmentations 🤷
Deleted User#0000: Black box = you see the outputs
White box = you get the gradients too
Right?
From what I understand, people can fix any specific adversarial attack, but when they fix those, people can just come up with new ones that break the network in new and exciting ways
sunny#5382: White box = you see the parameters
Deleted User#0000: Black box is still outputs only, right?
sunny#5382: Correct. Usually the attacker gets oracle access to the model for blackbox, so they can give the model whatever inputs they want, and they get to see just the outputs.
Deleted User#0000: I think one thing that mine would be able to deal with better than cutout is the gradient of the image
Like mine would learn "huh the images are uniformly colored", which means that it would also be insensitive to high-frequency noise
Deleted User#0000: It's definitely possible that my proposal wouldn't be robust enough tho 🤷
kindiana#1016: try it out haha, also has some similarities to electra
sunny#5382: This guy probably references the latest attacks and defenses in his papers, if your want to check:
https://nicholas.carlini.com/papers
Deleted User#0000: Hmm
This would possibly be trouble for my method depending on the data distribution and image size
Like I think strawberries tend to be close to other strawberries, which would make it tend to consider containing a strawberry to be non-noise
But idk, it's going to depend on the data, and also on the model
I did have some thoughts about using representations at all levels for classification but that seems hacky and fiddly https://cdn.discordapp.com/attachments/729741769738158194/820223892206518292/Screenshot_20210313-101418.jpg
Deleted User#0000: 🤔 robustness to that sort of attack can probably most solidly be achieved by taking the intent of the image-taker into account
Deleted User#0000: Which a purely image-based generative approach could not do
Deleted User#0000: Like one could hack it in via all sorts of methods. Maybe my proposed method would achieve it by accident. One could maybe also let F condition on the text used to describe the image in some way. But really it's hacky to not model the intent, and part of the thought with my method was how to do this in a non-hacky way.
sunny#5382: Re-revising my understanding:
- F generates a latent code from M(x).
- G is an autoregressive model trained to predict a single pixel in C(x) given: (1) a latent code, and (2) all previous pixels in C(x).
- Generating a latent code without F is done by argmax G(image | -), calculated through graddesc or other.
- The hope is that F(M(x)) = argmax G(C(x) | -) = latent code encoding the signal of C(x).
Is that correct? I think the last point is the one that makes this robust to adversarial attacks, but I don't see how it's accomplished.
(Catching up on your newer messages now.)
Deleted User#0000: Yes
There's multiple points to the adversarial resistance. I think generative models like this are naturally more robust against adversarial attacks, because in the case of highly collinear features, they're incentivized to combine all of these collinear features. Whereas regression-based models are only incentivized to use some subset of it. Most adversarial robustness then ends up being "how do we ban the network from relying overly much on X non-robust feature?". The trouble is that there are too many features for this to work because there will always be some alternative feature to use, and also banning features degrades performance because features are important. What you really want your network to notice is something like "aha, these features go together, I better make sure to combine them". Which generative approaches do naturally, because otherwise they couldn't accurately generate stuff.
Deleted User#0000: Just doing stuff generatively is apparently not enough; one would also need to model things like intent. Consider the second row in the image I posted from one of the papers you linked. If you don't consider intent, then the objective answer is something like "a person in a blue and white shirt holding something" (this captures most of the stuff in the image), which I think my approach would do fine at capturing. But given that we know the photos are taken with some intent, and that intent is likely to show us something, it's the thing that is held which matters, not the person holding it. So my method fails in cases that depend on intent. 🤔
Deleted User#0000: I think one could probably do better if one trained it with text too. Like if you see an image with a masked-out section, and the image is titled "Here's our strawberry" or something like that, but you can't see a strawberry anywhere in the non-masked area, then the strawberry probably is under the mask. So training F with text too might work for making it understand intent. But it also might not, and at the very least understanding intent opens up a whole new huge attack surface that I'm not so sure about the consequences of.
sunny#5382: Yeah. I think to account for that, you'd need something like GLOM's notion of levels. The top level would encode something like "person in blue/white shirt holding a hermit crab". When you zoom in on just the thing in the person's hand, it would be "hermit crab". That way when someone makes a modification to the latent code, they might get away with making a small change to the top-level latent code, but the latent code for whatever's in the person's hand would have to be changed drastically.
Deleted User#0000: I think this method can naturally be done hierarchically, because once you have the small-scale encodings, you can use those as new "pixels" to train larger-scale encodings
sunny#5382: Like a convolution getting increasingly-large receptive fields? That might end up getting hit with the levels-vs-layers confusion.
sunny#5382: (Note: I'm not sure if the levels-vs-layers confusion matters when trying to defend against adversarial attacks. That might be an open question in how humans process images.)
Deleted User#0000: Remind me, what's the levels vs layers confusion?
sunny#5382: Higher levels = higher levels of abstraction.
Higher layers = later layers in a feed-forward neural network.
There was an assumption for a long time that higher layers in a neural network encode information at higher levels of abstraction, though that seems not to be the case.
EDIT from the future: (Correction: it's levels of representation, not levels of abstraction.)
Deleted User#0000: I realize this doesn't apply in the general case, but I think my model would tend to encourage it by the way it is trained
sunny#5382: That alone would be a really interesting finding. I suspect there's a lot people can do if they could line up levels and layers. That plus some sort of attention tracking (i.e., which neurons contribute highly to which subsequent neurons) might let you create something like a parse tree for a scene.
Deleted User#0000: The abstraction level is determined by the features that F can predict. However, if F has to go further away to get its information necessary for prediction, and especially if F ends up having to rely on text for prediction because the receptive field is so broad that all the concrete features have been used up, then F can't use the concrete features.
Like if F is trying to predict one eye, then it can take concrete features from the other eye. But if it's trying to predict a head, then there's no other head to take concrete features from, so it will have to take features from the body, which would need to be more abstract because the concrete ones aren't there
Deleted User#0000: And of course if one extends it so F also takes text labels, then at some point you would reach some ultimate level of abstraction
Deleted User#0000: Wait
sunny#5382: (Correction: it's levels of representation, not levels of abstraction.)
Deleted User#0000: I guess text wouldn't fully solve intent because it wouldn't be good at going "well it's p misleading for you to include this small but distracting thing if that wasn't what you wanted to show me"
Soooo it's probably going to stay super vulnerable to anything that depends upon intent
sunny#5382: Yeah. Intent is pretty context-dependent too, so the exact same image can be used to refer to two different things.
sunny#5382: (Same for text)
Deleted User#0000: I think it's the same thing? At least when I google "levels of representation", it defines it in terms of abstraction
sunny#5382: I think "higher levels of representation" means "more holistic", whereas I think of "higher abstraction" as "fewer defined properties". The levels-vs-layers distinction comes from Hinton's GLOM paper, which refers to levels of representation.
Deleted User#0000: I think the holistic part comes from it being generative and the abstraction part comes from the elaborate noise-vs-signal separation
Deleted User#0000: So it would increase in both with levels
Deleted User#0000: Ofc then there's the question of whether one *wants* to increase in abstraction. That has downsides too.
sunny#5382: Aahh, that's pretty interesting. I think you're right that increasing the receptive field for generative models should correspond to increasing holisticness, as long as there's no overlap between receptive fields. The connection between abstraction and noise-vs-signal has similar corner cases.
I can explain tomorrow. I need to sleep, otherwise I'm going to die on a hike tomorrow.
sunny#5382: Good night!
Deleted User#0000: Bye! 👋 🌃
mgostIH#0245: What's up with mathematicians and hiking
mgostIH#0245: I see it as a quite common pattern
triggerhappygandi#0001: Yeah actually. What is with that
Deleted User#0000: Also I think probably the best way to understand the concept behind my proposal is to consider the DAG
pixels in patch 1 <- reality -> pixels in patch 2
Usually when you train generative models, you try to train it to encode the image into some latents that can reconstruct the image. But the trouble is, that just forces it to encode the pixels as well as possible. My proposal instead makes the latents the parts that are predicted by other parts of the image - which would be much more likely to correlate through the shared "reality" cause than if you tried to predict the pixels from themselves
(Or well, it would probably also go through shared "camera properties" causes and such. But conceptually speaking I think it makes sense! :pleading_face: )
So basically the hope would be that the resulting latent space would need to align better with reality.
EricHallahan#1051: Good for nerd sniping.
StellaAthena#3530: This is amazing https://twitter.com/mervenoyann/status/1370650861622398976?s=20
Sparkette#4342: I bet one of these days we're going to see machine learning cards with really big models (like GPT-3 scale or bigger) in mask ROM
Sparkette#4342: That is, actual ROM, not Flash memory
Sparkette#4342: Can't be updated without physically swapping a chip, but AFAIK it's a lot cheaper in bulk than RAM or Flash
Sparkette#4342: And I figure it would be just like having it loaded into GPU RAM all the time, except it doesn't take up precious VRAM, and it couldn't be unloaded or modified, which isn't necessary for inference anyway
Sparkette#4342: Reminds me of how font data used to be stored, in the CGA era
gwern#1782: how does it go from the ROM to the actual circuitry doing matmuls?
EricHallahan#1051: At that point you would develop an ASIC first.
guac#4716: it'll be more like apple neural engines type components for consumer level
EricHallahan#1051: ROM is a forgotten technology in a way. The only computer I can think of that seriously used custom ROM is the TI 99/4. TI had a competitive advantage when it came to ROM fabrication, and so the primary medium for programs was ROM, else you had to shell out for a PEB.
EricHallahan#1051: The ROMs were weird too: they used a custom auto-incrementing address pointer and were read in a serial fashion.
EricHallahan#1051: But they were cheaper to manufacture and I guess it was better than tape.
guac#4716: Maybe they mean a piece of Nn dedicated hardware with essentially frozen weights
EricHallahan#1051: At that point you are building an ASIC.
guac#4716: Pretty much lol or as the cool kids say “inference-on-the-edge device”
Sparkette#4342: Good point
EricHallahan#1051: WORM is heavily undervalued though nowadays.
bmk#1476: My storage array is WORN
dopa#3178: it was/is expensive when it has to be in compliance, I needed it (as in worm storage)
mgostIH#0245: Noooo, NNs are just matrix multiply, they are totally not like the brain :noo: https://cdn.discordapp.com/attachments/729741769738158194/821131907721854997/unknown.png
mkualquiera#3484: so _how_ does the brain do backprop
cfoster0#4356: A good starting point https://brainscan.uwo.ca/research/cores/computational_core/uploads/11May2020-Lillicrap_NatNeuroRev_2020.pdf
mgostIH#0245: If my brain does backprop then why can't I understand it 😎
mgostIH#0245: Checkmate AI people
StellaAthena#3530: What’s the largest model where I can download the weights off the internet? GPT-2?
jrowe#5371: gpt-2 xl is 6.1 GB
EricHallahan#1051: T5?
cfoster0#4356: A few examples back in January
kindiana#1016: yeah here is fairseq's 11b https://github.com/pytorch/fairseq/blob/master/examples/megatron_11b/README.md
EricHallahan#1051: `mt5-large` is 4.6GB so no.
jrowe#5371: nice
Louis#0144: That’s small
Louis#0144: Lul nice one
vv#9042: mT5-xxl is 48GB https://huggingface.co/google/mt5-xxl/tree/main
gwern#1782: there's the 11b-param T5 google released: https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints
gwern#1782: hm.... so 13b-parameters? https://arxiv.org/pdf/2010.11934.pdf#page=5
gwern#1782: turing-nlg has still never been released that I know of, so I think the T5s are the upper limit right now
EricHallahan#1051: So I was right **hah!**
kindiana#1016: mt5xxl is 13B params
kindiana#1016: that is the biggest model I believe, but not autoregressive?
Louis#0144: PHAT
Louis#0144: god what a thicc gurl
bmk#1476: How many params?
vv#9042: it is https://cdn.discordapp.com/attachments/729741769738158194/821158096624943144/unknown.png
bmk#1476: Oh wait nvm
vv#9042: https://cdn.discordapp.com/attachments/729741769738158194/821158121832448050/unknown.png
bmk#1476: How can it be only 13B when the checkpoint is so much bigger than the megatron checkpoint? O.o
kindiana#1016: megatron is fp16, this is fp32 I assume
kindiana#1016: oh I didn't realize that
kindiana#1016: does that mean when they use it for LM objective it only uses half the params?
Louis#0144: Why even bother with fp32
CRG#8707: This just means they tried both ways of training in the ablations.
Louis#0144: I thought the better way is if you can do fp32, do fp16 but bigger
kindiana#1016: I don't think you can train with just fp16 without fp32 master weights
CRG#8707: The final T5 objective is a custom span prediction. https://cdn.discordapp.com/attachments/729741769738158194/821158975512772658/19yFICqDlfprn-I_VZ5RHgw.png
kindiana#1016: fairseq must have quantized to fp16 before exporting the checkpoint
kindiana#1016: but t5 didn't
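(Back-of-the-envelope check, for what it's worth: at 2 bytes per parameter, an fp16 checkpoint of the 11B Megatron model is roughly 22 GB, while at 4 bytes per parameter an fp32 checkpoint of a 13B model is roughly 52 GB, which is the right ballpark for the ~48 GB mT5-xxl files.)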
gwern#1782: might as well make a /r/mlscaling post about this, it's come up before
gwern#1782: or I would if reddit wasn't still erroring out on submissions...
jrowe#5371: funday Monday, with extra special dst glitches for extra excitement
jrowe#5371: all sorts of shit broke today
gwern#1782: https://www.reddit.com/r/mlscaling/comments/m5vtoq/largest_publiclyavailable_trained_model_checkpoint/?
stellie#3553: Is there anything that didn't make the pile but is still pseudo-English? I'm looking for a dataset of bad chinese-to-english translations on places like aliexpress to try to fine tune GPT-2
stellie#3553: or something similar
stellie#3553: I imagine that was filtered out already
maximillian#1592: Hello! is the Dall-e usable even though it is not pretrained?
EricHallahan#1051: I have heard reports that it indeed works as intended, but I am not an authoritative source on this.
stellie#3553: does it have a dataset associated with it?
EricHallahan#1051: Not yet.
maximillian#1592: who could I talk to about it?
EricHallahan#1051: That is an in-progress milestone.
EricHallahan#1051: Let me see who I could call in...
bmk#1476: our dalle code is not finished and it's not a top priority atm
bmk#1476: there are some issues that we just havent had time to iron out
bmk#1476: as for dataset, aran is working on a thing, and i'm pushing for danbooru, although that's kinda moot before our training code works
EricHallahan#1051: Oh, I got :lucid:'s code mixed up with our's :berk:
bmk#1476: oh, idk anything about lucid's
EricHallahan#1051: Apparently it works but there is no suitable dataset yet.
maximillian#1592: after gpt neo?
maximillian#1592: who do you know who has it working?
bmk#1476: i heard that lucid's code also doesnt work
bmk#1476: but im not super involved in that
bmk#1476: lucid doesnt seem to be prioritizing it either
maximillian#1592: who is lucid?
bmk#1476: lucidrains
EricHallahan#1051: https://discord.com/channels/729741769192767510/730484623028519072/820423790202322974
EricHallahan#1051: I heard that it does.
bmk#1476: Huh, i have #art muted
bmk#1476: Wow, #art is basically a community of its own at this point
EricHallahan#1051: I'm almost always over there. I have all of alignment muted.
maximillian#1592: so basically it is working to an extent?
bmk#1476: I don't know half the people in there o.O
EricHallahan#1051: Yeah, to the extent that it might work at large scale, but who knows.
EricHallahan#1051: I actually *might* suggest spinning off #art into its own server, but that doesn't feel right.
bmk#1476: Mutiny!
AI_WAIFU#2844: This place is too big, there are entire channels I haven't visited in months.
bmk#1476: I never visit art or alphafold
bmk#1476: But i keep up with everything else
AI_WAIFU#2844: I don't think I've even been in equivalence
bmk#1476: That was made like a day ago lol
AI_WAIFU#2844: oh, well everything between gpt-neo and interactive agi is unread so I didn't notice
bmk#1476: Lol
bmk#1476: I'm going to involve myself a bit more than before in art from now on, i thought it was just the clip+StyleGAN image dumping place originally
AI_WAIFU#2844: No you need to do more A L I G N M E N T. Resist the temptation to generate :catgirl3: .
AI_WAIFU#2844: Oh I just got a great meme idea
EricHallahan#1051: I'm like the only one who uses StyleGAN, everyone else uses BigGAN. :thonk:
bmk#1476: lol
bmk#1476: i *am* doing more alignment tho
bmk#1476: my goal is just to memetically infiltrate #art ~~and crush its separatist tendencies~~
EricHallahan#1051: Though my 100000 vector codebook works pretty well I have to say.
bmk#1476: We need to make a unified eleuther to pool our resources and work on alignment
𓅬 gabriel_syme 𓅬#3220: I'm using stylegan too! But my applications wish to be more practical than art
cfoster0#4356: Are any of our alignment ideas resource bottlenecked?
𓅬 gabriel_syme 𓅬#3220: also I have a 6 epoch dalle, only took a whole night 😮
cfoster0#4356: To be clear, the one that seems to currently be working is dalle-pytorch
𓅬 gabriel_syme 𓅬#3220: yep that's the one I'm training too, it works so far although I ran into some trouble loading the saved model
𓅬 gabriel_syme 𓅬#3220: but training from vqvae to dalle works out of the box, haven't produced results with dalle yet though
maximillian#1592: how much training do you plan to do?
𓅬 gabriel_syme 𓅬#3220: well I'm testing small (kind of naive) models that are domain specific
𓅬 gabriel_syme 𓅬#3220: so I only have 5k pairs, it took around one night for 5.2 epochs
𓅬 gabriel_syme 𓅬#3220: no idea if that setup works btw, but I want to put it to the test because if I can't use it for specific applications I might as well go with CLIP. I do think it will work though
maximillian#1592: what domains are you testing?
bmk#1476: not *yet*
AI_WAIFU#2844: I could totally see the thing I just proposed being extremely computationally intensive. Because it's really hard.
bmk#1476: my general vague eleuther-specific alignment plan:
1. have a couple of us come up with one or two major research directions
2. try to get some promising results in those directions first, and figure out what we'd need to continue making progress
3. using that, make a big project capable of having tons of contributors and try to rope everyone in
𓅬 gabriel_syme 𓅬#3220: architecture now, these are 5k layouts with my own semantic annotations
Deleted User#0000: it does work https://github.com/lucidrains/DALLE-pytorch/issues/86
Deleted User#0000: spoke prematurely
Deleted User#0000: i'll come back and fix Eleuther's at some point
StellaAthena#3530: “When I get around to deigning to grace you with my appearance”
Deleted User#0000: i also got Taming Transformer's VQGAN VAE working
Deleted User#0000: with my DALL-E
Deleted User#0000: so people can train on codebook of 1024 (perhaps it'll converge faster)
Deleted User#0000: sometimes i think bmk likes to sit around and lament that nothing works
gwern#1782: (why is lucidrains a profile pic of a dog when he's obviously a cat)
bmk#1476: What
bmk#1476: That's not true at all
maximillian#1592: what is the plan with DALL-E looking like?
EricHallahan#1051: He left. |
bmk#1476: My assumption that lucids dalle wasn't working was specifically based on a post i saw over in tpu podcast by someone who tried it and said it wasn't working >.>
cfoster0#4356: Tbf the updates about it working have all been in #art which you have muted lol
StellaAthena#3530: My list of GPT-2 or larger autoregressive non-MoE models is: https://cdn.discordapp.com/attachments/729741769738158194/821252668160737320/Screen_Shot_2021-03-16_at_1.24.43_AM.png
Louis#0144: I’m surprised there hasn’t been anything to topple GPT3
Louis#0144: Even over a year later
StellaAthena#3530: @Louis I’m more surprised by the gap tbh
StellaAthena#3530: Nobody’s made a 50B model?
chilli#5665: out of curiosity, any thoughts on how many MoE parameters is worth one regular dense parameter?
chilli#5665: Wasn't somebody doing scaling laws for MoE?
Deleted User#0000: > My assumption that lucids dalle wasn't working was specifically based on a post i saw over in tpu podcast by someone who tried it and said it wasn't working >.>
@bmk ahh I apologize, somehow I'm in an irritable mood all day and my mind was seeking out ill intent
Deleted User#0000: > (why is lucidrains a profile pic of a dog when he's obviously a cat)
@gwern indeed, some days I'm more cat than dog
StellaAthena#3530: Someone here, or someone in general? We could fire that off over the weekend tbh.
chilli#5665: I thought aran was doing it or something
StellaAthena#3530: IDK what Aran’s been up to tbh
zphang#7252: Grover? (1.5B)
jekbradbury#2280: CTRL is also 1.5B, and Meena (https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html) is 2.6B
Aran Komatsuzaki#5714: Sid and I were working on moe's scaling and found that adding moe roughly translates to a model proportionately grown 3x in param count in terms of LM loss. But since Shazeer et al. released the Switch Transformer paper with similar results, we haven't moved on since then lol
Aran Komatsuzaki#5714: that's not what bmk does. that's what I do.
chilli#5665: so basically, 3 MoE parameters == one dense parameter?
chilli#5665: did switch transformer have these results too?
Aran Komatsuzaki#5714: this result holds if every other ffn is replaced with moe with sufficiently many experts
Aran Komatsuzaki#5714: so doesn't really mean that adding more moe params = adding one more dense
Aran Komatsuzaki#5714: but in terms of computes it's 3x more efficient
Aran Komatsuzaki#5714: (there's an overhead from using moe, which is like 40% if you use GShard)
Aran Komatsuzaki#5714: switch transformer's result wasn't really about scaling, but i thought their implication wasn't that different.
KattyMan#9818: So how do i use gpt-neo for question and answering?
Louis#0144: u dont
StellaAthena#3530: I would not recommend that.
KattyMan#9818: wait then what can I do with gpt-neo sorry
EricHallahan#1051: Is the recommendation to use Hugging Face valid?
StellaAthena#3530: "Question answering" is a very general assertion. Can you be more specific? Do you have a text you want to get answers about? Do you want to ask random questions and get back answers?
KattyMan#9818: I saw a guy who would type like "Red button that says "Something"" and then GPT-3 would return the code needed to do that
KattyMan#9818: Is something similar possible with gpt-neo?
EricHallahan#1051: Not right now IIRC?
StellaAthena#3530: We have not trained models as large as GPT-3 yet. We expect that to be possible, but it is not currently possible
KattyMan#9818: oh okay
KattyMan#9818: so could someone please tell me some gpt-neo use cases
Louis#0144: at its current size?
EricHallahan#1051: None right now.
EricHallahan#1051: When we have a model we will.
StellaAthena#3530: We don't have a public model right now. We have trained 1.3B and 2.7B models, but that's about the size of GPT-2 and significantly smaller than other public models such as T5
KattyMan#9818: I see thanks
Louis#0144: If you want to use something right now I recommend BART though
Louis#0144: T5 gets finicky sometimes
Louis#0144: finetuning T5 has never played out well for me
EricHallahan#1051: (I know we are being a downer right now, sorry.)
StellaAthena#3530: The HuggingFace transformers library is the most comprehensive library of public models. I recommend checking it out: https://huggingface.co/
EricHallahan#1051: :yes:
Louis#0144: @Teven we're shilling for u
Louis#0144: nah jk
Louis#0144: huggingface is great
Louis#0144: i recommend their API too
StellaAthena#3530: I don't recommend their API
StellaAthena#3530: I do recommend using them, even if their framework is garbage 💩
Louis#0144: LMAO
StellaAthena#3530: Seriously, we'd have a 1.3B public model if anyone knew how to work it
Louis#0144: yeah tbf their decode function sucks
Louis#0144: writing new logit processors simply doesnt work
Louis#0144: but besides that i have not had any issues
Louis#0144: i have implemented my own model in HF too
StellaAthena#3530: Can you implement one for me?
StellaAthena#3530: pl0x
Louis#0144: next week i will have time
Louis#0144: 3 of my papers are ending
Louis#0144: yo question
Louis#0144: given a sentence S that is of the form A => B, how do I negate S while still keeping the overall "style" of the text that S came from
Louis#0144: feels like this is some weird NLI paraphrasing thing
EricHallahan#1051: What do you mean by `A => B`?
StellaAthena#3530: A implies B
StellaAthena#3530: @Louis It depends on how the sentence is phrased. Does it say "If A then B" or does it say something else?
Louis#0144: let me get the exact instructions
Louis#0144: its instructions from wikihow
Louis#0144: we are trying to generate contradictory instructions
Louis#0144: as adversarial examples
Louis#0144: ’For her birthday or your anniversary, do something nice for her in public. Get or bake a cake for her birthday, or give her a card for your anniversary.’
Louis#0144: If birthday => get a cake or a card
Louis#0144: its stuff like this
Louis#0144: What I want to say is if birthday => do not buy a cake or card
Louis#0144: but in the format of the dataset
EricHallahan#1051: Are you looking for an automated way of doing this?
Louis#0144: yeah
Teven#6831: Y tho :'(
jrowe#5371: he's trying to automate mother in law relationships. killer app
Sahl#0630: Isn’t it if birthday => do something nice for her in public
Sahl#0630: and that’s an example
jrowe#5371: there's probably a simple pos parse to do the negation
jrowe#5371: find interrogative to declarative pattern, negate declarative
Louis#0144: ok but *how*
jrowe#5371: nltk or simplegrammar could probably do it
StellaAthena#3530: @Louis can you identify the main verb of the sentence
Louis#0144: usually yeah
StellaAthena#3530: SVO sentences just become S don't VO
StellaAthena#3530: That should handle a lot
Louis#0144: ooo
Louis#0144: true ok
Sahl#0630: maybe as a last resort you can use prompt engineered gpt3 to negate
Louis#0144: LMAO
Louis#0144: nah we need 600k examples
Louis#0144: no can do
Sahl#0630: or another lm
StellaAthena#3530: > Get or bake a cake for her birthday, or give her a card for your anniversary.
becomes
> Don't get or bake a cake for her birthday, or give her a card for your anniversary.
StellaAthena#3530: reads a little weirdly
StellaAthena#3530: but not bad
Sahl#0630: it’d be better if it were transitive
jrowe#5371: would gpt-2 work? seems small enough that it might
StellaAthena#3530: if you can detect sentences with multiple operative verbs you can use De Morgan's laws
Sahl#0630: don’t get or bake a cake for her birthday, and don’t give her a card for your anniversary
StellaAthena#3530: Don't get and don't bake a cake for her birthday, and don't give her a card for your anniversary.
Sahl#0630: get or bake probably matches the original style better
StellaAthena#3530: Yeah, I'm going for an 80% quality 80% of the time rule
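A rough sketch of that heuristic, in case it helps; this assumes spaCy with the `en_core_web_sm` model and only handles the simplest case (one operative verb, no De Morgan over "or" yet), so treat it as a starting point rather than a working negator:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def negate(sentence: str) -> str:
    """Naive "S V O -> S don't V O" negation: prepend "don't" to the ROOT verb."""
    doc = nlp(sentence)
    out = []
    for tok in doc:
        if tok.dep_ == "ROOT" and tok.pos_ == "VERB":
            out.append("don't " + tok.lemma_)   # "Get" -> "don't get"
        else:
            out.append(tok.text)
    return " ".join(out)

print(negate("Get or bake a cake for her birthday."))
# roughly: "don't get or bake a cake for her birthday ."
# (detokenization and casing still need cleanup)
```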
Louis#0144: we're trying to see what happens when you train DPR on adversarial examples
Louis#0144: so you train it on the negation of its normal context doc
Louis#0144: it should make it more robust
Teven#6831: (still genuinely interested in what you don't like in HF code @StellaAthena )
Louis#0144: well my issue is that logit processors are half baked
Louis#0144: like they are implemented
Louis#0144: but at the end of the day, the equivalent functions are all hardcoded into the decode function
Louis#0144: its impossible to add new logit processors
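For reference, a minimal custom logits processor in more recent transformers versions (4.x) looks roughly like this; whether this path worked cleanly in the 3.x decode function being discussed here is exactly the complaint above, so take it as a sketch:

```python
import torch
from transformers import (
    AutoModelForCausalLM, AutoTokenizer, LogitsProcessor, LogitsProcessorList
)

class BanTokenProcessor(LogitsProcessor):
    """Example processor: forbid a single token id at every decoding step."""
    def __init__(self, token_id: int):
        self.token_id = token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        scores[:, self.token_id] = float("-inf")  # never sample this token
        return scores

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("The quick brown", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=20,
    logits_processor=LogitsProcessorList([BanTokenProcessor(tok.eos_token_id)]),
)
print(tok.decode(out[0]))
```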
Teven#6831: Ah yeah I see - have you opened an issue ? That doesn't sound too hard, so if you can get other people who want it to chime in you can probably get it done (or do it yourself 😛 )
Louis#0144: i did do it myself
Louis#0144: for HF 3.5.1
Louis#0144: but then the decode function changed
Louis#0144: 😦
Louis#0144: i didnt merge in time
Louis#0144: i will though sometime this week
Louis#0144: good point
Teven#6831: arf sorry to hear that, yeah it went through quite a few changes in 3.x
StellaAthena#3530: @Teven I haven't spent a huge amount of time with it, but modifying models has felt weirdly complex so far.
StellaAthena#3530: For example, I would like to make a custom model which is GPT-2 except it uses local attention every other layer
StellaAthena#3530: You have local attention implemented in other models, so this seems like it should be straightforward. But it's much more involved than I expected. It's far from the copy/paste I expected.
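(For what it's worth, the GPT-Neo port that later landed in transformers exposes exactly this alternating local/global pattern through its config; a sketch, with the parameter values picked arbitrarily:)

```python
from transformers import GPTNeoConfig, GPTNeoForCausalLM

# Alternate global and local attention layer by layer; the pattern
# ["global", "local"] repeated 12 times covers all 24 layers.
config = GPTNeoConfig(
    num_layers=24,
    attention_types=[[["global", "local"], 12]],
    window_size=256,  # local attention window
)
model = GPTNeoForCausalLM(config)
```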
Louis#0144: yeah the learning curve on writing your own models is weird
Louis#0144: and the documentation on it is sparse
Louis#0144: it took me a solid 2 weeks to get to the point where I was comfortable enough to write my own model
Louis#0144: (I intend to PR it at some point, since its just an improved RAG)
zphang#7252: If the only difference is swapping local/global attention, that should be quite straightforward to implement. Were there other issues you saw?
Teven#6831: yeah GPT2 doesn't have the greatest codebase tbh, the first few models we added tend to be a bit legacy (Transformer-XL flavours being the main culprits)
Louis#0144: What’s the policy on supporting legacy models btw
Louis#0144: Are u guys gonna support Transformer XL in ten years lol
Teven#6831: If we're still around, then I'd say probably yes
EricHallahan#1051: I assume you'll just recommend whatever comes around that surpasses it.
Louis#0144: Like at this point all of the main benefits of transformer XL have been superseded no?
Teven#6831: yeah but people may still need to run experiments on it right
Louis#0144: True
Teven#6831: that's definitely the takeaway I get from all those is-progress-from-models-or-hardware papers
Louis#0144: Have you guys removed a model yet
Teven#6831: no ! and don't really hope to have to at any point
Teven#6831: idk any time anyone suggests something that could fuck over some user everyone's like "pls don't turn us into TF" in the Slack so the culture is definitely very BC-prone here
Louis#0144: So you guys won’t pull an ai2 and make all your models unable to run
Louis#0144: You can’t even run ELMO anymore
Teven#6831: oof yeah no that's really not something we want to happen
Louis#0144: Surprise BERT removal tmrw
Teven#6831: I'll make a PR that does that on April 1st and tell you if I get fired
Louis#0144: Nah HF people have a p good sense of humor
Louis#0144: Usually
Louis#0144: Except the French ones
Louis#0144: 😉 |
zphang#7252: wait what did they do to ELMo
Louis#0144: They broke their API
Louis#0144: And now haven’t updated it in a year
Louis#0144: lol
Louis#0144: They broke it so badly that even their download servers no longer work
Louis#0144: So if you don’t have a copy of Elmo already you can’t get it
zphang#7252: how does one break it so bad
Louis#0144: Idk
zphang#7252: I thought at worst it'd be "git checkout this version and run on pytorch 0.4"
Louis#0144: No
Louis#0144: Their download servers use the newest version of the API
Louis#0144: lol
Louis#0144: Which happens to be a version with no Elmo support
zphang#7252: sigh, ELMo's biggest lasting contribution is its naming influence on BERT
bmk#1476: Semi related but has anyone made an extension to docker that captures network dependencies too so that rebuilding the container in the future is guaranteed to download the same stuff?
gwern#1782: "nix. you've invented nix."
Minimax#0217: where do I get the GT role
EricHallahan#1051: ?
EricHallahan#1051: Georgia Tech?
Minimax#0217: yes |
EricHallahan#1051: I don't know.
EricHallahan#1051: It is mod assigned.
Minimax#0217: ah ok
Daj#7482: yea if you're from GT you can have it
Daj#7482: It's just a bit of an in joke
Minimax#0217: gt numba one
Daj#7482: Are you a PhD student? You know any of the other GT people around here?
Minimax#0217: i'm an undergrad and I don't think so?
Daj#7482: ah, fair
Minimax#0217: I just experiment with like GPT-2 and currently trying to figure out CLIP
Daj#7482: neat, best of luck
Daj#7482: the madlads in #art do some mad cool stuff with CLIP
Daj#7482: Just scroll up for some mindblowing AI art
Minimax#0217: yeah I've been paying a lot of attention to them
Minimax#0217: great colabs
Aran Komatsuzaki#5714: you can find artists in #art
you can find autists in #multimodal
Daj#7482: you can find autistics in ~~#multimodal~~ eleuther.ai
Louis#0144: MINIMAX
Louis#0144: LMAO |
Louis#0144: he’s in the GT server too
Louis#0144: @ephemeral is here as well
Louis#0144: I’m sleepy
Louis#0144: Holy moly!
Louis#0144: Can this week just end
Louis#0144: So I can have my papers submitted
Louis#0144: I wanna go on vacation. Already
Louis#0144: 😦
Louis#0144: It took a year but I’m sick of covid now LMAO
AI_WAIFU#2844: May or may not be useful https://en.wikipedia.org/wiki/Collective_operation
AI_WAIFU#2844: The article has theoretical time complexities for common collective operations.
Louis#0144: “This article”
Louis#0144: Fuckin links Wikipedia
Louis#0144: LMAO
Louis#0144: I saw this article and assumed it was a published paper
AI_WAIFU#2844: what did you want me to call it, "this shitpost"?
AI_WAIFU#2844: also it's your own fault for not reading the link
jrowe#5371: the article is basically the paper, reformatted for Wikipedia
jrowe#5371: lol
kinoc#5731: reformat paper to Wikipedia format ... sounds like a task for a transformer ... |
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/821592907470995496/unknown.png
chilli#5665: not sure the asymptotics are that interesting here lol
chilli#5665: but perhaps as a single page refresher its not bad
kindiana#1016: nobody actually does the log stuff afaik
chilli#5665: I mean, I don't think latency is that important for our contexts
chilli#5665: but perhaps it's more important in other contexts
kindiana#1016: its not the latency, its how the communication is distributed
chilli#5665: ?
chilli#5665: wdym
chilli#5665: that term is about the latency cost, no?
chilli#5665: alpha is the latency
kindiana#1016: oh i misread it
AI_WAIFU#2844: I think in MP the latency term can get pretty serious. Say you have 64 nodes, 100 layers and 0.5ms comms latency, then the forward + backward pass is going to be 6*2*100*0.5 = 600ms. Which is non-trivial. And this assumes an optimal topology.
AI_WAIFU#2844: Plus all reduce is usually implemented in 2 steps so it's more like 1200ms
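(A back-of-envelope helper for that estimate, assuming log2(nodes) hops per layer boundary and a fixed per-hop latency; the numbers are just the ones above, nothing measured.)
```python
import math

def mp_latency_ms(nodes, layers, hop_ms, allreduce_steps=1, fwd_bwd=2):
    hops = math.log2(nodes)                           # optimal-topology assumption
    return fwd_bwd * layers * hops * hop_ms * allreduce_steps

print(mp_latency_ms(64, 100, 0.5))                    # 600.0 ms, the figure above
print(mp_latency_ms(64, 100, 0.5, allreduce_steps=2)) # 1200.0 ms with a 2-step all-reduce
```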
kindiana#1016: if you are doing 64 way mp your step time is going to be pretty long lol
AI_WAIFU#2844: Question for the Neo Devs, which network call is the bottleneck? Is it the gradient reduction or the pp/mp layer-to-layer communication?
EricHallahan#1051: Ask @Sid.
EricHallahan#1051: I would like to know too.
Sid#2121: it's mostly the dp gradient reduction at larger scales. the mp allreduces are also quite expensive, pp is the least expensive communication.
AI_WAIFU#2844: And you're using very little MP right? Like only 2? |
Sid#2121: well only because our topology limits us to doing it that way.
EricHallahan#1051: We only got NVLink between 2 GPUs each.
AI_WAIFU#2844: Ah, and you guys haven't started microbatching right?
Sid#2121: it might be that one allreduce in a large mp group is less expensive than lots of smaller pp layer-to-layer communications
Sid#2121: yes we have, it helps offset the pp communication costs nicely. I tested on a DGX box this morning and we're outperforming megatron's benchmarks.
AI_WAIFU#2844: 😎
AI_WAIFU#2844: wait pp communiciation costs, don't you mean dp?
AI_WAIFU#2844: or do you need multiple microbatches to saturate the pipeline?
Sid#2121: oop yeah sorry i do mean that
Louis#0144: Guess whose Stella number is going to be one really soon B)
Louis#0144: Means my @bmk number is 2
bmk#1476: hey if you help with eval harness you can make that 1
Louis#0144: I’m helping on equivariance
bmk#1476: well you should still help on eval harness because eval harness is cool and stuff
guac#4716: Do you have a list of high priority tasks for the harness?
bmk#1476: There are a few gpt2-paper-ones that still aren't implemented, those are kinda high priority
bmk#1476: in no particular order: squad, gpt2, quac, storycloze, natural q's, wikitext
bmk#1476: ptb is also kinda important to get working, but it's cursed somehow and we cant figure out what we're doing wrong so no hurry on that
bmk#1476: lots of these are already assigned to someone, so it would make sense to ping whoever is put down as assigned on the task board and see what progress they have
bmk#1476: it's currently 5 actually https://cdn.discordapp.com/attachments/729741769738158194/821945313848983552/unknown.png |
Louis#0144: Lmao
Aran Komatsuzaki#5714: My bmk number is infinity, since I've never coauthored lol
bmk#1476: y u no coauthor
bmk#1476: single author paper bad
bmk#1476: 20 author paper good
kindiana#1016: imagine working with other people :berk:
Louis#0144: Imagine being in a PhD program without having ever coauthored
Louis#0144: Honestly standards for PhD admits used to be so low just a few years ago
bmk#1476: handing out free* coauthorships on eval harness paper
* you must contribute one or two tasks to the harness, depending on task complexity
Louis#0144: I have a task of my own
Louis#0144: If that works
bmk#1476: is it already described in a paper
bmk#1476: that we can cite
Louis#0144: It’s in review
Louis#0144: O
bmk#1476: arxiv?
Louis#0144: Wait no we haven’t submitted that one yet
bmk#1476: y u no arxiv tho |
Louis#0144: Idk
Louis#0144: Advisors decision
bmk#1476: lolk
bmk#1476: or you could just implement one of the tasks on our todo list
Louis#0144: Ill let u know when ig. I sent the task to Stella for your guys GPT3 paper
Louis#0144: Since u wanted a zero shot narrative comprehension task
Louis#0144: Which I have
Louis#0144: Good joke pretending I have time to program
bmk#1476: not even for moar authorship?
bmk#1476: huh, i thought there would be more appeal
bmk#1476: i thought yall academics were nuts for authorship
Louis#0144: I prefer authorships where I do theory work
Louis#0144: Not referring to Aran, saying in general
Aran Komatsuzaki#5714: I got into GT's ML PhD program 2 years ago without any prior research output lol
All I needed was to talk to a ML professor
EricHallahan#1051: I've never coauthored with anyone lol
Louis#0144: Literally how
Louis#0144: I’m floored at how easily people got in
EricHallahan#1051: Not that it really matters `:P`
Louis#0144: Riedl has been trying to secure PhD funding for me for two years now |
Louis#0144: No luck
Louis#0144: It’s kinda annoying but whatever
Aran Komatsuzaki#5714: I knew it was easy to transfer from other affiliated PhD programs at GT to the ML PhD, so I intentionally got into the Math PhD, which was easier to get into, especially for me given my math background.
bmk#1476: I mean surely a few publications will help you bolster up your resume, no?
Aran Komatsuzaki#5714: No RA funding tho, but it's really easy to get TA funding from Math dept for only five hours/week of TA work.
Louis#0144: Oh wow
Louis#0144: I should have done that tbh
Louis#0144: I come from an alg top background
Louis#0144: So I could have applied for that
Aran Komatsuzaki#5714: yeah suitable for you
Aran Komatsuzaki#5714: and anyone from math background like Stella
Louis#0144: Stella would never do a PhD
Louis#0144: It’s not like her
bmk#1476: im still convinced that getting a phd is a scam, just like the rest of academia
Louis#0144: I am doing it Bc I want it
Aran Komatsuzaki#5714: i mean being a phd student helps in various ways for foreign students like me
Louis#0144: Not for the job prospects
Louis#0144: lol
bmk#1476: what is a phd good for anyways
Louis#0144: Skill sets |
bmk#1476: the only thing i can think of is you can be a RS instead of RE, hooray, but actually those are basically the same job same pay
Louis#0144: Learning how to chunk research up into sizable pieces
Louis#0144: Learning what makes a good research question
Louis#0144: That’s a super hard skill
bmk#1476: ~~learning how to break down research into pieces that are the perfect size to maximize resume value~~
Louis#0144: Very few people can do this
Louis#0144: Even among professors
bmk#1476: ~~learning what reviewers want~~
Louis#0144: lol
Louis#0144: I mean
Louis#0144: At the end of the day sure
Louis#0144: But it’s important to ask the right questions
Louis#0144: This is a skill almost everyone here would benefit from
EricHallahan#1051: This is not prompt engineering!
Louis#0144: You as well
EricHallahan#1051: Or is it? :ultrazucc:
Louis#0144: I can’t tell u how many times research questions are DOA
Louis#0144: Bc they’re the wrong question
Louis#0144: This is a weekly occurrence for me
bmk#1476: I still think academia is borked, industry is better, roving group of hackers is best |
Louis#0144: How do the latter ones get paid
Louis#0144: lol
EricHallahan#1051: They don't need to.
Louis#0144: Ppl go into academia for the freedom
bmk#1476: Best combined with industry for the money
guac#4716: freedom to do what your PI wants you to do
Louis#0144: Honestly a lot of PIs are just like do whatever tf u want
Louis#0144: lol
Louis#0144: I have complete freedom in my lab
bmk#1476: Roving group of hackers has the greatest freedom
Aran Komatsuzaki#5714: yeah i've never talked with my advisor for the last 8 months, but he's cool with it.
Louis#0144: That’s concerning
guac#4716: yeah they mostly don't care from what i hear. what do i know, i don't even have an associates deg. lel
Louis#0144: You should talk to him
Louis#0144: Wtf
bmk#1476: Also industry pays so much more than academia that doing industry for a few years and then coasting off that and doing roving hacker work is probably actually better than academia
Louis#0144: Why wouldn’t you
Aran Komatsuzaki#5714: my progress is fine, so no need for report. i asked him to be an advisor, so that my advisor won't annoy me.
Louis#0144: My advisor wants weekly check ins
Aran Komatsuzaki#5714: that's a lot of time committment |
Louis#0144: Eh
Louis#0144: It’s not much
bmk#1476: In industry as an RE you can make more in like 2-3 years than a decades worth of phd student stipend
bmk#1476: So just go do industry for 3 years and then fuck around for an entire decade with complete freedom
bmk#1476: (this happens to be my plan: use eleuther to build resume, go work at OA/DM to make $$$, return to eleuther and fuck around)
guac#4716: wow dipping on the squad for quick cash. like my old lady
bmk#1476: Hey I'll still do eleuther in my free time, like i do now
bmk#1476: But then after that I'll be able to do eleuther full time
bmk#1476: Full time eleuther alignment research is the dream
𓅬 gabriel_syme 𓅬#3220: this is quite specific to CS though, and even that only the last couple of decades I would guess
𓅬 gabriel_syme 𓅬#3220: try getting paid in sustainability for e.g.
𓅬 gabriel_syme 𓅬#3220: well I guess not just CS, a lot of fields have the same (anything finance comes into mind)
bmk#1476: Well i mean i assumed we were talking specifically about ml
bmk#1476: Obv this is highly industry and time dependent
jrowe#5371: sell your soul to a hedge fund
bmk#1476: Too far
bmk#1476: My soul is only for sale for OA, dm, gb, et al
jrowe#5371: i was friends with someone that wrote a neural net library in c, was widely used in open source
jrowe#5371: maintained it for years, then got an offer from a Swiss bank, sold out, and never heard from him again
bmk#1476: Ouch |
bmk#1476: See but I'm motivated by something that he isn't
bmk#1476: Alignment
jrowe#5371: wasn't close or anything, but it kills me to think how much progress gets lost by chasing greed
bmk#1476: Also i strongly believe that money is not the bottleneck
jrowe#5371: how much brilliance is locked behind impenetrable ndas
jrowe#5371: for all we know, banks have had transformers for a decade lol
bmk#1476: So after having enough money to bankroll my own alignment work, I'll be able to work on alignment full time
bmk#1476: And I'm used to living pretty bare bones so staying on a budget won't be hard
chilli#5665: I kinda wonder what the future of OAI looks
chilli#5665: a lot of people I've talked to seem to be quite pessimistic
chilli#5665: about their talent drain
bmk#1476: Yeah I'm also kinda concerned
bmk#1476: They still seems to be mostly aligned atm (compared to DM, FAIR, GB, MSFT, etc) but that's subject to change at any moment
jrowe#5371: aligned ai blows away any notion of wealth - everyone gets to be trillionaires
jrowe#5371: or terminators lmao
chilli#5665: lol my gf is deciding what grad program to go to now
chilli#5665: and watching terminator 2 influenced her interest in working on AI alignment
chilli#5665: by a non-negligible amount
jrowe#5371: whatever it takes lol
bmk#1476: I mostly believe that more money beyond funding a comfortable lifestyle and possibly hiring a few REs to help write code is almost completely useless for Alignment outcomes |
bmk#1476: And diminishing returns is probably way before then
bmk#1476: She's a bit confused but she's got the spirit
jrowe#5371: that, or she wants to nurture a secret Arnold fetish :p
jrowe#5371: no pressure @chilli
jrowe#5371: catgirls and Ahnoldbois
chilli#5665: tbh rewatching terminator 2 I don't think the scenario is that far off lol
chilli#5665: The thing that triggers the apocalypse is the government attempting to shut off the AI, which the AI views as preventing it from accomplishing its objectives
Teemochu#8740: Instructions unclear, AI turns world into cartoons
jrowe#5371: are ya ready, kids? I can't hear you!
Teemochu#8740: O wo captain!
Teemochu#8740: Anyway, regarding industry... a CS PhD might have a stipend in the $30k range. After a few years, a FAANG SWE job can also be that. Difference is one is per year and one is per month.
𓅬 gabriel_syme 𓅬#3220: I did NOT have the same reaction to that movie lol. What a difference a decade or so does
𓅬 gabriel_syme 𓅬#3220: interesting thing about money, it never seems to be enough
𓅬 gabriel_syme 𓅬#3220: where I live and work, a lot of expats with big salaries. Good life ofc but sort of the same they did before, just more expensive stuff.
zphang#7252: I think the point of a phd is the training/advising/colabs with other students
zphang#7252: You can get something similar in an industry research lab, but it's not quite as targeted imo
kindiana#1016: you can also get it from a discord server
kindiana#1016: :guilty:
𓅬 gabriel_syme 𓅬#3220: that's true, I've gotten more from here in a couple of months than years of around the web
triggerhappygandi#0001: Definitely. Deepmind didn't have Alex Graves or David Silver leave. |
triggerhappygandi#0001: Wonder why Dario left.
triggerhappygandi#0001: Makes one think... If paying for education is even worthwhile...
nz#9710: the best educational sources are free
triggerhappygandi#0001: And only the hollow ones cost money (talking about online courses like Udacity)
Daj#7482: The main good you buy from universities is not education but social status and social capital
Daj#7482: It used to be that to learn some math proof, you had to actually track down that math professor and ask him
Daj#7482: Before the internet, universities sold education
Daj#7482: In many fields (especially CS), that's just not needed anymore
triggerhappygandi#0001: Where do I get _that_ for free
nz#9710: thanks mr internet very cool
Daj#7482: Be handsome and born rich? lol
triggerhappygandi#0001: Lmao lost on both fronts
Daj#7482: Universities still are important for certain kinds of education, especially capital intensive lab sciences and the like
Daj#7482: and some people really just can't handle autodidactism
Daj#7482: It takes a certain kind of obsessive nature to teach yourself cutting edge ML for shits and giggles
Daj#7482: But if it's like a job and you have a boss and career goals, a lot more people respond to those incentives
nz#9710: tbh I unironically learned far more from this server and a couple others (e.g. yannic's) than in two and a half years of university
nz#9710: kinda insane when I think about it
Daj#7482: Not at all when you consider the incentives
Daj#7482: We select for getting shit done ~~and having fun~~ |
Daj#7482: Universities select for prestige
nz#9710: I feel like it's also about passion
Daj#7482: Yep
Daj#7482: something something
nz#9710: Like most of my professors really don't seem to give a fuck about what they're teaching, they just want to be done with it
Daj#7482: https://equilibriabook.com/
nz#9710: here most people are genuinely passionate about AI/ML
Daj#7482: The world is extraordinarily inefficient (in the EMH sense) in an extraordinarily large number of areas
Daj#7482: We select for passionate autism
Sahl#0630: it’s also hard to be passionate in uni
triggerhappygandi#0001: I haven't even revealed my final form
triggerhappygandi#0001: @Sahl yeah definitely. Idk why tho
Daj#7482: I'll die on the hill that LW is better than 80% of academia put together
triggerhappygandi#0001: Maybe it's because here are like minded people.
Daj#7482: LW is hyper selective for passionate smart autism
Sahl#0630: I feel like it’s because you’re incentivized to learn things that don’t necessarily interest you
triggerhappygandi#0001: Even in my uni I had a circle of like 10 people at best.
Sahl#0630: because of prereqs and degree requirements
triggerhappygandi#0001: And everyone was going after shit goals
triggerhappygandi#0001: Like a job at a good company |
triggerhappygandi#0001: Or god forbid
triggerhappygandi#0001: A fucking government job
Daj#7482: People are different
triggerhappygandi#0001: But come on
Daj#7482: Yea no I agree lol
triggerhappygandi#0001: Is this your life goal?
triggerhappygandi#0001: Kinda pathetic
Daj#7482: There's a saying in the rationalist community that every rationalist has this moment where they realize "Holy shit everyone is fucking insane"
Daj#7482: And it's true
triggerhappygandi#0001: I know hundreds of people who had zero passion about engineering; just entered to get a good job or a secure job in government.
triggerhappygandi#0001: Literal language models.
Daj#7482: yep, those people are insane lol
Daj#7482: Everyone's fucking insane
Daj#7482: The entire world is insane
Daj#7482: inb4 postrats "ummmm akshully"
triggerhappygandi#0001: Putting yourself through all of that for what? A stable job. You only get one life and you spend it doing what other people think is prestigious.
Daj#7482: The fact you think that way is why you're here
Daj#7482: There is a huge market inefficiency in high agentic-ness
Daj#7482: You can get so much leverage by just...doing shit
Daj#7482: (and being moderately intelligent) |
Sahl#0630: yeah! i’ve noticed this
Sahl#0630: it’s also what support systems for mental illnesses don’t consider
Sahl#0630: that the people involved have low agency
Sahl#0630: that’s a really interesting problem
Daj#7482: It is
Daj#7482: And I find it really tricky to figure out how much of it is learned and how much is innate, but there is a massive difference between individual agenticness
Daj#7482: But I do think it's something you can just, you know, practice
Daj#7482: probably
Daj#7482: maybe I'm projecting
Daj#7482: The ability to try and fail many many times is big here
Daj#7482: tbf those with mental illnesses are kinda a different case, agenticness falls into learned helplessness really quick when you're ill
StellaAthena#3530: I'm interested in a package for doing tensor regression, but it looks like the main one (TensorLy) doesn't support multidimensional outputs. Does anyone know a package that does?
I am specifically interested in replicating the analysis found in this paper. https://cdn.discordapp.com/attachments/729741769738158194/822142021548507167/a-network-approach-to-measuring-state-preferences.pdf
AI_WAIFU#2844: https://forum.effectivealtruism.org/posts/fX8JsabQyRSd7zWiD/introducing-the-nonlinear-fund-ai-safety-research-incubation
dmvaldman#4711: "Dirt-cheap Cloud GPUs for Deep Learning" https://gpu.land/
3dprint_the_world#6486: generally the better someone is, the less they want to teach
3dprint_the_world#6486: when lecturers are enthusiastic about teaching, it's often because they have nothing else going on
3dprint_the_world#6486: such is the sad reality of academic life
3dprint_the_world#6486: depends which era of LW you mean. The current incarnation of LW is very echo chamber-ish. |
3dprint_the_world#6486: sorry if this offends anyone
3dprint_the_world#6486: but yeah, the LW of years ago was for sure way better than academia
Daj#7482: tbf I include AF in LW
Daj#7482: and that's almost all I read
Daj#7482: I don't know what "LW proper" is like
Spy#9778: Any new word from openAI on what they'll be doing with DALL-E?
Spy#9778: selling API access etc
EricHallahan#1051: Apparently they plan on releasing the code, but it is messy research-grade TensorFlow. I am almost certain that they will not sell access or release the model, as that is too dangerous from a legal perspective.
EricHallahan#1051: CLIP knows copyrighted characters, so it's very likely that DALL-E knows them too.
alstroemeria313#1694: But why did they release CLIP then
EricHallahan#1051: Because CLIP by itself is not generative.
EricHallahan#1051: So no problem really.
alstroemeria313#1694: Ah, they can just say people are misusing it?
EricHallahan#1051: It isn't their responsibility.
EricHallahan#1051: From a legal perspective.
zphang#7252: is it legally different for GPT-3?
3dprint_the_world#6486: just want to give the daily reminder here
3dprint_the_world#6486: none of us are lawyers
3dprint_the_world#6486: including you @EricHallahan
alstroemeria313#1694: CLIP seems to contain a lot of weird artistic styles and themes that I have no idea where they came from |
zphang#7252: we should scare lawyers into thinking AIs are taking away their jobs so they join the winning side
zphang#7252: like we did for radiologists
3dprint_the_world#6486: I know a few lawyers in the AI space and trust me, they're in on this
3dprint_the_world#6486: one lawyer I know is absolutely super enthusiastic about AI. She can't stop talking about it.
zphang#7252: send her an invite
3dprint_the_world#6486: she's probably too mature for this audience
3dprint_the_world#6486: she has kids
3dprint_the_world#6486: etc.
cognomen#6297: my money is on 15.ai causing the first major AI content generation supreme court judgement
cognomen#6297: likely results are up in the air though
zphang#7252: unrelated: we should have a betting channel
EricHallahan#1051: related: we should have a precommitment channel
Daj#7482: This is actually an interesting idea
Daj#7482: Maybe a bot that uses fake points or something
Daj#7482: Not as good to force good thinking as real money, but hmm
Daj#7482: https://www.reddit.com/r/discordapp/comments/9vd98b/are_there_any_discord_bots_that_simulate_betting/
Seems we're not the only ones that came up with this idea
Daj#7482: Well this is more prediction markets
Daj#7482: User to User betting would also be really cool
Daj#7482: If anyone ever wants to do a little bot project hmu hah |
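(A very rough sketch of what such a fake-points betting command could look like with discord.py; the command names, the in-memory bank, and the token are all placeholders, and a real bot would need persistence and a way to resolve bets.)
```python
from collections import defaultdict
from discord.ext import commands

bot = commands.Bot(command_prefix="!")
points = defaultdict(lambda: 100)   # everyone starts with 100 fake points
open_bets = []                      # (user_id, amount, claim)

@bot.command()
async def bet(ctx, amount: int, *, claim: str):
    if amount <= 0 or amount > points[ctx.author.id]:
        await ctx.send("Invalid amount.")
        return
    points[ctx.author.id] -= amount
    open_bets.append((ctx.author.id, amount, claim))
    await ctx.send(f"{ctx.author.display_name} bets {amount} points that: {claim}")

@bot.command()
async def balance(ctx):
    await ctx.send(f"{ctx.author.display_name} has {points[ctx.author.id]} points.")

bot.run("YOUR_TOKEN_HERE")  # placeholder token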
gwern#1782: 15.ai is really asking for it. I'm mildly surprised he hasn't had any legal problems yet
StellaAthena#3530: What is 15.ai?
gwern#1782: crazy mit grad
zphang#7252: Looks like it's a voice synthesizer - any particular reason why it would be especially susceptible to legal problems?
nz#9710: I found his twitter but don't get the issue
Daj#7482: lots of copyrighted characters
Daj#7482: makes for some good memes each time he opens up access
EricHallahan#1051: Well that is kinda what I am interested in on the other side of the equation.
EricHallahan#1051: But I have zero interest in these "characters".
Daj#7482: https://www.youtube.com/watch?v=dKqXjyChziM
Daj#7482: :ultrazucc:
EricHallahan#1051: That's a feature of pretty much every neural vocoder I've interacted with.
cognomen#6297: the quality is miles ahead of anything else I've heard and this guy has basically decided he's not going to settle for anything short of perfection
StellaAthena#3530: Yeah I don't get why this is raising noise
EricHallahan#1051: Because it is an end user doing something and getting an unexpected result?
cognomen#6297: it's not replacing actors, but an argument could be made that it makes it more difficult to maintain a side gig like doing requests on cameo
gwern#1782: because it's not *a* voice synthesizer, but *the* voice synthesizer of many very copyrighted, recognizable, actively-exploited-by-multi-billionaire-media-juggernauts for which almost the only use is fan-made media frequently commercialized to some degree (as well as outright displacing the side work of the voice actors on Cameo now and in many different venues before)
gwern#1782: and he's advertising them very specifically as the specific characters they so obviously are, no deniability
EricHallahan#1051: That's just asking for it.
gwern#1782: yes, he literally does ask for it on the FAQ, daring them to sue him, iirc |
gwern#1782: possibly he's toned down the language since I last read it...
EricHallahan#1051: Oh, he literally did ask for it.
gwern#1782: he's set himself up in pretty much the worst possible place for a fair use defense
mgostIH#0245: The fact that you can copyright voice is silly
gwern#1782: and given them substantial motivation to sue him. but... crickets
zphang#7252: hmm yea how does it differ from e.g. impressions
EricHallahan#1051: The fact that copyright is how it is is silly
gwern#1782: well, we don't know if you can copyright voice. it's an area in flux. but he does himself no favors by associating it with the specific trademarked characters and concepts
mgostIH#0245: Also expecially when we are living in an era where digital information can be made out of thin air
gwern#1782: it's sorta like, can you copyright Basil Rathbone's voice? eh... no one knows. can you copyright and trademark sherlock holmes in a deerstalker smoking a pipe? sure as shootin'.
nz#9710: Does he release the models?
gwern#1782: no. 15.ai does not play well with others. never has.
mgostIH#0245: There'll be more and more projects that can replicate voice even better in the future and a lot more content
EricHallahan#1051: Sherlock Holmes is public domain IIRC.
gwern#1782: nope!
gwern#1782: that's why I used him as an example
EricHallahan#1051: Is this a certain incarnation you are referring to? ...
Oh, It's the on screen version
gwern#1782: (as for why he refuses to play with others and release anything, I chalk it up to his perfectionism. what if his released code isn't perfect? what if people used *outdated models*? how could he live with himself)
gwern#1782: well, it's not just the on screen versions |
gwern#1782: it's also some but not all of the stories
gwern#1782: horrid situation, but that's the world we live in
EricHallahan#1051: The deerstalker is not an original element
EricHallahan#1051: duh
EricHallahan#1051: I should have picked up on that.
gwern#1782: anyway, so a similar problem applies. can you copyright twilight sparkle's voice? well... no one knows. can you trademark "Twilight Sparkle" and can you copyright the twilight sparkle character and are there publicity and personality rights which may come with the package of "Twilight Sparkle saying XYZ"? sure looks like it. and guess which one 15.ai very deliberately sets out to enable and foster
EricHallahan#1051: Well it makes a lot of notoriety.
EricHallahan#1051: Pretty stupid.
EricHallahan#1051: Disney has a lot more to dump on legal defense.
gwern#1782: my best guess is that the IP holders are all aware of 15.ai but just don't want to spend the money and risk a bad precedent for something which is still so niche and minor and really isn't all *that* different from the status quo where you could hire random women online to voice your videos
gwern#1782: still, that's the sort of reasoning which is usually right but when it's wrong it turns you into aaron swartz
mgostIH#0245: You mean like risking a streisand effect?
gwern#1782: yes, bad PR too
mgostIH#0245: tbh I do wonder what copyright will be like in the next decade
EricHallahan#1051: That seems reasonable.
mgostIH#0245: After all it's just a matter of time videos get generated too
mgostIH#0245: What if you feed a super transformer all the MLP episodes and it keeps producing more?
mgostIH#0245: Or what about resurrecting bands that departed, like Daft Punk
gwern#1782: video is exponentially harder, imo, but yes, it'll start being possible soon too
mgostIH#0245: Well maybe sooner than video there'll be music and voice |
EricHallahan#1051: In pieces. I expect copyright overhaul in some degree to happen within that time period.
mgostIH#0245: I honestly don't expect much to change but just society to adapt on the technologies while the laws will be mostly unapplied
gwern#1782: nah. I would bet a ton against any copyright reform. congress is too static, there are too many interests on both sides of the puzzle. we're lucky that disney et al have abandoned attempts at strengthening IP further
gwern#1782: like, what was it, last year that was the first year in the lifetime of pretty much everyone in this channel that stuff actually entered the public domain?
mgostIH#0245: > Disney et al
Lobbying is All You Need
Dromarion#3383: Wait suppose you can copyright voices. Who holds it, the company or the voice actor? If I voice a character and Disney copyrights it, would I not be allowed to speak in that speech pattern anymore?
EricHallahan#1051: I mean to at least passing a law to make it more powerful.
gwern#1782: it'd almost certainly be a work for hire that the company owns, and since they oversaw the voice's creation (voice actors don't act in their normal voice), why would you be allowed to copy it further?
mgostIH#0245: What about other countries
gwern#1782: both sides, I said. look at SOPA. big tech doesn't benefit from copyright being strengthened
EricHallahan#1051: Yep, I was excited to see *The Great Gatsby* enter public domain.
mgostIH#0245: Like would China give a flying fuck about US copyright laws
gwern#1782: oh, who knows. but also who cares what they do? all of this is happening in the US/UK pretty much
mgostIH#0245: Well but even if China is a bit delayed in their models they can still produce all the creative output from them
mgostIH#0245: Like at most it's delaying the actual problem by a couple of years
gwern#1782: (we'll see if chinese dall-e/clip are any more than the usual catchup imitation... I've been hearing for a long time about china overtaking the US in genetics or AI or whathaveyou and it's never happened)
mgostIH#0245: Not that I think it's a problem, imo copyright laws should be entirely rethought from the ground in this digital era
Daj#7482: Why worry about copyright when AGI will replace humans in a decade or two anyways :thonk:
Daj#7482: inb4 the disney maximizer dukes it out with the paperclip maximizer |
mgostIH#0245: The impact AI will soon have on society will probably also affect alignment funding
kindiana#1016: https://www.youtube.com/watch?v=-JlxuQ7tPgQ :guilty:
Daj#7482: There will be no need for funding if alignment doesn't succeed soon lol
Daj#7482: short timelines gang
Daj#7482: Great video, I love Tom so much
mgostIH#0245: I think we might still have a few decades ahead of us
Daj#7482: ah, one of the "ultra long timelines" people, I see
Daj#7482: lol
mgostIH#0245: Not that I mean "We should just worry later", I mean "We should still worry about current AI development because society will see this first"
mgostIH#0245: So it's worth it to put effort on understanding how "creativity" will take place, since soon we might have models that can produce music very well
Daj#7482: I'm just teasing
Daj#7482: To be clear, I literally think AGI is so soon that none of this matters at all
Daj#7482: And AGI will pick whatever copyright policy it wants
Daj#7482: But it's fun to talk about ¯\_(ツ)_/¯
mgostIH#0245: I don't think the people working on alignment will care as deeply as lawyers into copyright
bmk#1476: I've updated away from short timelines tbh
mgostIH#0245: In a post AGI world I don't think copyright even makes sense
bmk#1476: I mean my idea of long timelines is not that long but still
mgostIH#0245: My timelines are "We'll surely get AGI in 40 years"
mgostIH#0245: And they are subject to change |
bmk#1476: I've updated significantly away from "agi in 5 years"
mgostIH#0245: But I still want to enjoy a world pre AGI
Daj#7482: I mean, if anything is sub 100 years, I wouldn't bet on any kind of large scale societal shifts
bmk#1476: I think sub 100 years is almost guaranteed
Daj#7482: Things will happen but like whatever
Daj#7482: The long term future of the lightcone is a lot bigger than any of this
mgostIH#0245: I still keep myself healthy in case I have to live until I'm 80 :viriglasses:
Daj#7482: Yea, being healthy is nice in general lol
mgostIH#0245: I'd really like seeing the perspective shift once we are close to AGI
mgostIH#0245: Like do you think lay people would understand it?
Daj#7482: You're in it lol
Daj#7482: Do they?
Daj#7482: The future is here, it's just unevenly distributed
mgostIH#0245: Well hm, it might sound silly but I still don't see being in touch with it as in "Let's just replace this job today with our new larger transformer!"
mgostIH#0245: I mean something a bit more drastic, when people will understand just the sheer efficiency at pretty much any task given
Dromarion#3383: If AI starts custom making media for users, it would be really hard for firms to do anything about it. Can't target each individual instance, a lot of which are private and no one knows about.
mgostIH#0245: @Dromarion I believe that too but in the beginning the models may be very privatized
mgostIH#0245: Like GPT-3
mgostIH#0245: Only groups of hackers would start their own version of it
mgostIH#0245: *cough* |
Daj#7482: AI is an exponential curve, and exponentials always look like a little curiosity until they very suddenly don't
Daj#7482: "I wonder who that's for"-meme
Daj#7482: people still don't get that COVID is serious, you expect everyone to grok AI?
mgostIH#0245: Oh well, that's right
mgostIH#0245: And covid was far more sudden than current AI imo
mgostIH#0245: Ye what I mean is something like "look at what happens there" to "my life has now changed" in a span of a month
mgostIH#0245: That kind of impact of AI
Daj#7482: ~~I even had a talk on this!~~
Daj#7482: give it a few years
mgostIH#0245: What didn't you have a talk on is the real question
Daj#7482: I've had a good number of absurd rants in #off-topic that would make !great! talks
mgostIH#0245: If you believe AGI is near then you should really watch HxH before you possibly get turned into goo for paperclips
mgostIH#0245: *There's still a chance*
mgostIH#0245: What if the secret to alignment was into that anime somehow
Daj#7482: bruh
AI_WAIFU#2844: If we're lucky
AI_WAIFU#2844: #IWillBeConfusedIfWe'reNotDeadIn5yearsGang
mgostIH#0245: 5 years is way too early for me
Daj#7482: I wish we had that bet bot now
mgostIH#0245: We are still a bit far from achieving good results in RL |
Daj#7482: I'm more the 15 year gang, but 5 is totally possible
mgostIH#0245: And I don't mean solving it completely
Daj#7482: Well we're learning quickly that RL was kinda the wrong approach
Daj#7482: or at least naive model less RL
Daj#7482: Look at e.g. Dreamer
AI_WAIFU#2844: That's because RL researchers are retarded
zphang#7252: the time travelers from the future will handle it
AI_WAIFU#2844: I mean have you *looked* at RL?
Dromarion#3383: I was a layman just a year ago and it took me several revelations starting with AI Dungeon to realize the gravity of the situation. Unless they have an experience like that, most people's exposure to AI are tech articles with an Elon Musk picture on it.
AI_WAIFU#2844: Hurr Durr Policy Gradient
mgostIH#0245: What do you think of Dreamer v2
Daj#7482: How tf RL has gone this long with model-free RL is a fucking mystery to me
Daj#7482: It's like DM is the only good RL shop in the entire world
AI_WAIFU#2844: Much more in the right direction.
mgostIH#0245: I was very surprised at the results of bayesian optimality of RNNs and transformers
mgostIH#0245: I think those will come very helpful at RL too, being bayesian optimal with data seems pretty damn useful for it
mgostIH#0245: Maybe model free would work if we just gave it like 10000 game variations and a million times more compute
Daj#7482: I almost fell out of my chair when I talked to an OA engineer about their LM RL stuff and he said basically "yea just supervised training the policy on n rollouts weighted by the reward model works about as good as RL"
Daj#7482: lol
Daj#7482: The future is now |
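(A toy illustration of the idea in that quote: sample rollouts from the policy, score them with a stand-in reward, and fit the policy to its own samples weighted by reward. The 4-token vocab and the reward function here are made up; this is not OpenAI's actual pipeline.)
```python
import torch

vocab, seq_len, n = 4, 5, 64
logits = torch.zeros(vocab, requires_grad=True)   # "policy": one categorical per step
opt = torch.optim.Adam([logits], lr=0.1)

def reward(seq):                                  # stand-in reward model: count occurrences of token 3
    return (seq == 3).float().sum()

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    rollouts = dist.sample((n, seq_len))          # n rollouts of length seq_len
    rewards = torch.stack([reward(r) for r in rollouts])
    weights = torch.softmax(rewards, dim=0)       # upweight high-reward rollouts
    logp = dist.log_prob(rollouts).sum(dim=1)     # log-likelihood of each rollout
    loss = -(weights * logp).sum()                # reward-weighted supervised objective
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.softmax(logits, dim=0))               # probability mass concentrates on token 3
```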
AI_WAIFU#2844: To be clear, I do think that policy gradient and Q-learing are valuable and have their plance. But the idea that you should just use that alone is dumb af, and if it works at all it's a miracle.
mgostIH#0245: You mean it's kind of like using fully connected layers for everything
mgostIH#0245: In theory they could work but in practice it's a stupid idea
cfoster0#4356: *NeRF*: :guilty:
bmk#1476: Honestly, the complicated math around proving (vanilla) policy gradient is a smokescreen for just how utterly simple it is
bmk#1476: I don't find this surprising at all because RL is *horrible*
bmk#1476: I haven't learned how the complicated stuff like PPO/TRPO work yet but my feeling from learning how vanilla PG, TD/Q, etc work both in theory and then in practice in actual code is "wait this is actually really simple and i bet someone could probably stumble on this before ever thinking of the theory"
Daj#7482: PPO is policy gradient except you just don't update if the KL between the policy pre and post update is too high
Daj#7482: that's literally it lmao
bmk#1476: lol huh
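(For reference, the standard clipped-surrogate version of PPO clips the probability ratio rather than skipping updates on high KL, though KL-based early stopping is a common implementation detail too. A minimal sketch of the clipped loss, with placeholder tensors:)
```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """logp_new / logp_old: log-probs of the taken actions under the new / old policy."""
    ratio = torch.exp(logp_new - logp_old)                     # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()               # negative because we maximize the surrogate

logp_old = torch.log(torch.tensor([0.3, 0.5, 0.2]))
logp_new = torch.log(torch.tensor([0.4, 0.4, 0.2]))
adv = torch.tensor([1.0, -0.5, 0.2])
print(ppo_clip_loss(logp_new, logp_old, adv))
```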
𓅬 gabriel_syme 𓅬#3220: I have some hopes for the neural episodic stuff
𓅬 gabriel_syme 𓅬#3220: or maybe the upside-down ideas by :schmid:
𓅬 gabriel_syme 𓅬#3220: I think you've won when you can write letters upside down in a paper https://cdn.discordapp.com/attachments/729741769738158194/822247442350997514/unknown.png
3dprint_the_world#6486: people still don't get that climate change is serious. Including some people right here in this discord. :sadge:
3dprint_the_world#6486: 40 billion tons of CO2/year is not exactly something we can handwave away with cute fixes like giving everyone a Tesla or planting more trees
𓅬 gabriel_syme 𓅬#3220: CC is almost an alignment issue isn't it? People really totally not aligned with the fact that it's an existential threat, much like AGI. Only difference is it's already started.
nay#9954: Just make AGI and have its utility function to keep global temps within 1.5C current temps
𓅬 gabriel_syme 𓅬#3220: I almost feel people actually believe that and it's not a joke
3dprint_the_world#6486: it's *the* coordination problem.
𓅬 gabriel_syme 𓅬#3220: yep, global reaching problem with no real organizational structure to even attack it |
bmk#1476: Climate change is not serious
We're all gonna die of paperclips first
𓅬 gabriel_syme 𓅬#3220: not really though
𓅬 gabriel_syme 𓅬#3220: I feel that's just another way to say "let's not do anything about it" but I'm biased due to my work
bmk#1476: Well, i have like 20-year agi timelines
EricHallahan#1051: What is your profession exactly?
bmk#1476: CC is not going to destroy the world in 20 years
𓅬 gabriel_syme 𓅬#3220: environmental design & engineering (in the built environment), although I do more wanky stuff of late
bmk#1476: Yes, i don't personally want to do anything for CC because that's time that could be spent on Alignment
EricHallahan#1051: No, it is going to torture us for 20 years.
3dprint_the_world#6486: maybe we'll get paperclipped before CC has any major effects BUT I feel like if someone claims they're serious about alignment but says CC isn't a big deal, then they're not actually serious about alignment
𓅬 gabriel_syme 𓅬#3220: it literally is hurting billions as we speak, but alright
AI_WAIFU#2844: @bmk secretly doesn't want to do anything about climate change because he lives in a frozen shithole |
3dprint_the_world#6486: and yes, as @𓅬 gabriel_syme 𓅬 said, CC is *already* causing suffering, it's not a future thing
StellaAthena#3530: Silly @𓅬 gabriel_syme 𓅬! We are utilitarians. That means that we don’t care about current suffering.
(Sarcastic, but far less than I wish it was)
𓅬 gabriel_syme 𓅬#3220: my bad :wojak_despair:
StellaAthena#3530: My hottest philosophy take is that utilitarianism is probably incoherent if you do it right due to the probability distribution of future utility not having an EV.
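(One way to make that concrete, as a gloss rather than Stella's own argument: a St. Petersburg-style utility distribution has heavy enough tails that the expectation diverges.)
```latex
% St. Petersburg-style payoff: the expectation does not exist as a finite number
\Pr[U = 2^n] = 2^{-n}, \quad n = 1, 2, \dots
\qquad\Rightarrow\qquad
\mathbb{E}[U] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty
```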
𓅬 gabriel_syme 𓅬#3220: very hegelian, the incoherency is part of the thing itself
𓅬 gabriel_syme 𓅬#3220: he would say
Daj#7482: I mean, if you believe AGI is the most tractable solution to CC, then working on AGI is the most direct way to address CC
Daj#7482: Dunno what more you want lol
bmk#1476: This is honestly probably not even wrong
bmk#1476: Some kind of subconscious thing
Daj#7482: Thankfully the universe is finite in size. Ultrafinitism gang
3dprint_the_world#6486: very *sniff* hegelian, the *nosewipe* incoherency is part of the *sniff* thing itself
𓅬 gabriel_syme 𓅬#3220: haha
𓅬 gabriel_syme 𓅬#3220: I wonder if people recognize that act?
nz#9710: alright now you've reminded me of my HS philosophy prof
AI_WAIFU#2844: Pascal's wager gang
EricHallahan#1051: Luckily, I have no knowledge of philosophy.
𓅬 gabriel_syme 𓅬#3220: it's not bad |
StellaAthena#3530: This unironically
Daj#7482: Utilitarianism is the :yes: of philosophy
bmk#1476: And, it directly follows that failure of creating agi alignment is strictly a bigger deal than failing to solve CC
StellaAthena#3530: It doesn’t actually matter if the universe is finite so long as the space of possible actions is not.
Daj#7482: It does though since phase space is also finite
Daj#7482: In this silly strawman version of the argument
bmk#1476: (i apologize for initially phrasing this in the spiciest way possible)
Daj#7482: Nah 3d just having a boomer moment again
bmk#1476: Lol
Daj#7482: He knows all our views but needs to bring up CC again and again lol
StellaAthena#3530: A more generous way to phrase that is that he’s advocating for an important minority philosophical position
3dprint_the_world#6486: I know your view, I don't know everyone's
Daj#7482: Sure but I don't appreciate the incendiary phrasing
nay#9954: we just need a CC channel for *screaming*
Daj#7482: So I call it as I see it
3dprint_the_world#6486: you can interpret it as you wish
Daj#7482: I am :chad:
bmk#1476: ~~We already have #alignment-general for that~~
3dprint_the_world#6486: It's not incendiary, I'm genuinely sad about it, but w/e
Daj#7482: Incendiary is orthogonal to your internal state |
Daj#7482: This is fighting words and I think you can see why
Daj#7482: to be clear I'm not actually mad
AI_WAIFU#2844: I also want to say that I'm up for a good fight.
Daj#7482: Yep fine with some fight
AI_WAIFU#2844: As long as the arguments arent dumb
Daj#7482: But I don't endorse it as a healthy regular debate norm
𓅬 gabriel_syme 𓅬#3220: good fights are important
𓅬 gabriel_syme 𓅬#3220: by that I mean fleshing out thoughts
Daj#7482: Since 3d has made this exact point many times
Daj#7482: And could have phrased it far more conducive to calm rational discourse
nay#9954: wait is it :smallbrain: to be concerned about both
Daj#7482: Not at all
AI_WAIFU#2844: Depends on your timelines and how you expect AGI to go down.
Daj#7482: It's just a question of prioritization
Daj#7482: And your skills
Daj#7482: If you're good at solar tech and not ML, work on that
bmk#1476: Anyways, even if y'all disagree with my timelines, surely you can agree that conditional on 20 year timelines my position on CC makes sense right
nay#9954: it seems to me like CC is a 20 years ago problem and AGI is a 20 years from now problem
AI_WAIFU#2844: What's the upper bound on the number of people CC can kill in 20 years?
EricHallahan#1051: All? |
bmk#1476: Unlikely
AI_WAIFU#2844: This would be news to me
EricHallahan#1051: You never asked for a reasonable upper bound.
bmk#1476: What is the 99th percentile
𓅬 gabriel_syme 𓅬#3220: Depends how you account for it. imo, billions are affected now but it's really hard to disentangle
bmk#1476: Just round up i guess
𓅬 gabriel_syme 𓅬#3220: I would guess, 20%? Again, really hard to disentangle from the other stuff it creates
3dprint_the_world#6486: seems like we're having calm rational discourse just fine? but sorry if I offended you my dude
𓅬 gabriel_syme 𓅬#3220: I do disagree btw that ML can not help with CC, in fact it could be incredible help
3dprint_the_world#6486: or if I offended anyone else
𓅬 gabriel_syme 𓅬#3220: there's just literally 0 interest not because of alignment but because there's 0 money in it
𓅬 gabriel_syme 𓅬#3220: CC startups fight for $1000 checks
𓅬 gabriel_syme 𓅬#3220: this is what I meant that I understand alignment in these terms. If you orient AI research towards actual problems that the world is facing, imo alignment becomes easier. That might be a naive position, but it is mine 🙂
bmk#1476: I genuinely believe that the probability of 100% of the worlds population being obliterated by agi in 20 years is over 50%
AI_WAIFU#2844: TBH the ML for climate change stuff I've seen gives off an enormous air of self-congratulatory ineffectiveness
EricHallahan#1051: You're not disagreeing.
𓅬 gabriel_syme 𓅬#3220: it's a loaded field obviously, a lot of ideology, but potential is still immense (imo)
𓅬 gabriel_syme 𓅬#3220: we just have literally no talent caring about it
EricHallahan#1051: The potential exists, it just tends to have terrible research.
nay#9954: because ML people are busy chasing benchmarks |
AI_WAIFU#2844: Yeah, to actually do something about CC you need actuators that affect the world. You can't just ingest data
nay#9954: the bottleneck is frequently policy
nay#9954: and getting people to care
nay#9954: No one wants to pay the real internalized cost of energy or food
AI_WAIFU#2844: It's also coordination, and the enormous economic displacement that comes with retooling our energy industry.
3dprint_the_world#6486: I'm liking the direction of this discussion.
bmk#1476: And anyways solving the coordination problems might be harder than alignment
nay#9954: we need to retool almost every industry which is hard to comprehend unless you're AGI
AI_WAIFU#2844: Yeah much easier to build AGI then act unilaterally
𓅬 gabriel_syme 𓅬#3220: I wonder if anyone sees the irony of this though
bmk#1476: Well, the biggest unknown here is how hard alignment is
bmk#1476: I don't, could you elaborate on that
AI_WAIFU#2844: ~~But who gives a shit about CC, we're gonna disassemble the planet to build a Dyson sphere. ~~
𓅬 gabriel_syme 𓅬#3220: wait my bad, it says unilaterally
𓅬 gabriel_syme 𓅬#3220: I thought it was the opposite
𓅬 gabriel_syme 𓅬#3220: I guess one of my hopes with involvement in ML is to show brilliant people outside my industry that exciting opportunities exist within CC. Like actually cool stuff, interesting, valuable, with real substance. It's not all policy and regulations, it's also technologies and innovation
nay#9954: someone needs to use equivariant transformers to design better faster cheaper stronger electrocatalysts for hydrogen fuel cells
bmk#1476: I unfortunately don't thing ml for cc is compatible with short timelines personally (i don't think it will help Alignment)
𓅬 gabriel_syme 𓅬#3220: fuels cells are actually a pretty decent option I'd say
nay#9954: there's even a big ol' dataset for it https://github.com/Open-Catalyst-Project/ocp/blob/master/DATASET.md |
𓅬 gabriel_syme 𓅬#3220: even simpler stuff. Robotic fabrication for example, for offsite construction to help build the 2 billion houses people need right now (but in an efficient way, in many ways)
𓅬 gabriel_syme 𓅬#3220: damn 1TB
Teemochu#8740: I think "media singularity in 10 years" (generating human-indistinguishable media of whatever your mind imagines) is below 50/50 but not unrealistic (photorealistic video with object permanence will probably be the hardest part). Not quite sure I would call what I have in mind general though, in that it probably won't have tool use functionality (both in the sense of physical world manipulation and, more to the point, in the sense of discovering and being able to use features of the broader Internet for self-improvement)
Teemochu#8740: "The coffee maker on this network is a tool that can get the programmer to improve me [so I can have a better answer etc]" is a thing I highly doubt will be in an existent system 10 years from now
Teemochu#8740: re:climate change, training ML to solve it is a good way to get AI winter tbh
bmk#1476: I don't think media singularity is on the same tech tree as reaching agi
bmk#1476: It's not necessary or sufficient for agi
3dprint_the_world#6486: I think AGI is plausibly 10 years away, and has been 10 years away for the past ~20 years
3dprint_the_world#6486: and I mean that in the sense that: We could have had AGI in 2010 if we actually had a coordinated push for it
alstroemeria313#1694: ...
alstroemeria313#1694: With what method
alstroemeria313#1694: ReLU wasn't even invented yet
3dprint_the_world#6486: what I mean is I don't think there's any special trick; it's all about putting in the right set of capabilities running on the right computing hardware. You're making the assumption that somehow ReLU is necessary for AGI. I disagree.
3dprint_the_world#6486: There's many roads to AGI.
3dprint_the_world#6486: right now there's no coordinated effort for AGI; people are too scared to do it, but ironically they don't steer their fear into a productive direction like working on alignment
3dprint_the_world#6486: instead they just fear-monger and say we should stop working on it
bmk#1476: Speak for yourself, i am steering my immense, paralyzing fear into alignment
3dprint_the_world#6486: yeah well this discord is a minority
3dprint_the_world#6486: lol
3dprint_the_world#6486: I mean an actual large-scale effort like the manhattan project etc. |
bmk#1476: Well, the goal of this discord is to get there
3dprint_the_world#6486: basically there's two big camps - those who think AGI isn't possible (used to be a majority, now a small minority) and those who are terrified of it. Everyone else is in the < 1%
3dprint_the_world#6486: or maybe less than 10%
bmk#1476: I dream of the day i have enough Alignment knowledge to make a research direction with a reasonable chance of success and harness all the free labor around here to work on it
3dprint_the_world#6486: it's the same with CC -- there are deniers, and people who are too paralyzed by fear to do anything.
3dprint_the_world#6486: people actually doing something useful are like < 0.1%
bmk#1476: come join the 0.1%
Teemochu#8740: Like Elon tbh
Teemochu#8740: Kinda hard to beat proliferation of renewable energy generation and a viable and palatable alternative to the hundreds of millions of portable dinosaur burners in America
Teemochu#8740: (as for what to do about China, China copies everyone, eventually they'll copy him)
AI_WAIFU#2844: Aren't they already doing that
EricHallahan#1051: I unfortunately don't see the appeal of Tesla today.
They have been trying to reinvent the wheel, literally. They literally are replacing the steering wheel with a yoke-thing.
Why?
Because they are a tech company, and they need something to change for the sake of change.
3dprint_the_world#6486: I guess Tesla is making some difference but it's a tiny difference
3dprint_the_world#6486: again
3dprint_the_world#6486: can't handwave away 40B tons/year of co2 just by making some battery powered cars
𓅬 gabriel_syme 𓅬#3220: it is a tiny difference because self-driving, electric cars are still cars
𓅬 gabriel_syme 𓅬#3220: not disruptive at all |
𓅬 gabriel_syme 𓅬#3220: we had fully electric, self-driving transportation since the 70s
EricHallahan#1051: I haven't heard a peep about low-cobalt batteries from them either.
EricHallahan#1051: PRTs?
3dprint_the_world#6486: anyone who really wants to make a difference in CC has to figure out how to solve the massive coordination problems
3dprint_the_world#6486: and incidentally, any strategy to do that would have *massive* positive implications for alignment too
bmk#1476: i guess, but it's kinda moot if you think the coordination problems are harder than just solving alignment
fazz#8459: Anyone here whos uses a python multithreading library that frees memory properly? I've been banging my head up against Ray (is this rubbish?)/ It maxes out to near 100% memory on like 20 cores running landmark recognition tasks
gwern#1782: a manhattan project would not have been remotely enough in 2010. remember, the manhattan project was just a couple billion dollars. that doesn't even buy you an existing chip fab
nay#9954: that's not with inflation, right?
gwern#1782: that's the real estimate, yes
gwern#1782: while a chip fab for merely 2021 levels of chips costs like $20b+ and that's with the, what, $100b/annual R&D of the semiconductor industry and all the learning-by-doing?
nay#9954: skip the fab and just use i7's then
gwern#1782: good luck with that
Teemochu#8740: At *best* you get GPT-3 for a couple B.
Teemochu#8740: *At best.*
Teemochu#8740: (And I'm magicking away quite a bit of differences in feasibility, not to mention knowledge)
3dprint_the_world#6486: manhattan project cost upwards of $20bn in inflation-adjusted dollars
3dprint_the_world#6486: actually sorry, upwards of $30bn
3dprint_the_world#6486: and almost ~$15bn of that was just on a single ball of plutonium
AI_WAIFU#2844: Can you elaborate on the problems you've run into with Ray? I was thinking of using it but if it has issues I'll use something else. |
3dprint_the_world#6486: kind of amazing to think about
EricHallahan#1051: Well they had to develop enrichment from scratch.
fazz#8459: It's not freeing memory properly somehow. I'm running maybe 40 workers / 20 cores. The workers doing the sharding are consuming nothing but the worker "Raylets" are up at 98%-100% and grinding. Tried making them stateless, forced gc collection etc.
3dprint_the_world#6486: they had to learn how to make plutonium from scratch, from scratch.
EricHallahan#1051: True, but enrichment also applies to uranium, so it isn't like all their development went into just plutonium enrichment research, even though it was the obvious way forward by the end of the program.
3dprint_the_world#6486: well most of the funds went into projects that were basically one-offs, e.g. calutrons and gaseous diffusion
EricHallahan#1051: Yeah, you realize they pretty much developed an entire subfield of engineering.
3dprint_the_world#6486: yes
3dprint_the_world#6486: this is kind of what I mean
3dprint_the_world#6486: not necessarily talking about funds per se
3dprint_the_world#6486: but just a lot of really smart and motivated people gathered together for a singular goal
3dprint_the_world#6486: with a clear objective, deadline, and access to whatever resources they need
bmk#1476: let's make eleuther that
bmk#1476: we have a clear objective: make aligned agi
we have an objective albeit uncertain deadline: before everyone else makes unaligned agi
we have a shitload of resources and if we can put together a strong case we can ask for more
AI_WAIFU#2844: NGL, we're in a pretty good position to do that, I think we need to cultivate quite a bit more expertise, but that's coming along pretty well.
StellaAthena#3530: Agreed
gwern#1782: (one does hope it does a little bit better than the manhattan project in terms of 'not causing imminent human extinction')
3dprint_the_world#6486: yeah, in retrospect using the manhattan project as an example is a bad choice |
3dprint_the_world#6486: sorry everyone
AI_WAIFU#2844: details
gwern#1782: 'details details details. the point is the bomb went off the first time they tried it!'
3dprint_the_world#6486: one good thing about nuclear weapons: they gave us the word bikini
AI_WAIFU#2844: https://cdn.discordapp.com/attachments/729741769738158194/822296330701373460/los_alamos.png
3dprint_the_world#6486: tbf they over-exaggerated how much atmospheric ignition was thought to be a risk
3dprint_the_world#6486: most of the los alamos people thought it was extremely unlikely
AI_WAIFU#2844: So what you're saying is that we're like them except *wayyyyy* worse.
gwern#1782: I don't envy the historians who will need to wade through piles of catgirl memes and worse to contextualise coments
Ward#1738: We will have AGI to do all the wading
AI_WAIFU#2844: You mean the superinteligent catgirls?
zphang#7252: take that, future AI overlord historians
gwern#1782: "Professor, this comment about whether they should do the final 100-trillion run after the final bugfix to recurrency mentions waffles and something called 'smegma'. I'm familiar with 'magmas' from category theory, is that related? Professor?"
bmk#1476: "it's like the manhattan project, except the funding is 5 orders of magnitude less, the people working on it are random nobodies from the internet, atmospheric ignition is the default rather than the unlikely outcome, and there are 5 soviet unions"
3dprint_the_world#6486: "It's speculated that 'uwu' was an acronym for 'upstarts wholly united', which was a code-word describing their oath of secrecy to one another"
Teemochu#8740: Hello, I am worse. :cutealoo:
gwern#1782: nice to meet you, I'm dad
3dprint_the_world#6486: tbf future historians will probably think we're all weird and not funny
bmk#1476: this is probably what we look like to historians https://cdn.discordapp.com/attachments/729741769738158194/822299481072467968/inside_joke.png
zphang#7252: you sure people won't go "man, remember how classy people were in the 20s? sigh modern memes" |
Teemochu#8740: We are the cult that created Cathuwu
bmk#1476: Pinned a message.
3dprint_the_world#6486: @bmk the very first cave art was very likely a meme
3dprint_the_world#6486: and don't get me started on all the penises, vaginas, and breasts humans have been drawing on things...
notooth#4850: Hello everyone,
What do you think about Pattern-Exploiting Training (PET) when it is compared to GPT-Neo?
EricHallahan#1051: TBH I am not familiar with all the different objectives out there.
Louis#0144: https://twitter.com/emilygorcenski/status/1372885343209467906?s=21
Louis#0144: Fun fact when someone posts German memes to this discord both Connor and bmk start foaming at the mouth and joyfully squealing
Louis#0144: Totally true fact
Louis#0144: It triggers their eidechsenhirn (lizard brain), which is a special lobe in the brain just for Germans consuming memes
Daj#7482: bmk is way more into german memes than me, weirdly
Louis#0144: That is weird
bmk#1476: stell dir vor du schützt deine Daten ("imagine you protect your data")
Louis#0144: are u foaming yet
bmk#1476: war's immer ("always has been")
Louis#0144: hot
Louis#0144: 🥵
jin kazama#3736: How can auto clipping of gradients be used in transformers? I think it will be helpful, as it was for NFNets: it boosted training speed, and certainly inference time as well.
Deleted User#0000: @jin kazama gradient clipping is already standard practice in transformer training |
Deleted User#0000: just not the particular kind NFNet used, which is also a function of the parameter weights
Deleted User#0000: i think its worth a try
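For reference, a rough per-tensor sketch of the NFNet-style adaptive clipping mentioned above (the paper applies it unit-wise, so this is only an approximation; the `clip` and `eps` values are illustrative):
```python
import torch

def adaptive_grad_clip_(parameters, clip=0.01, eps=1e-3):
    # Scale each gradient so that ||g|| / max(||w||, eps) never exceeds `clip`.
    for p in parameters:
        if p.grad is None:
            continue
        w_norm = p.detach().norm().clamp(min=eps)
        g_norm = p.grad.detach().norm()
        max_norm = clip * w_norm
        if g_norm > max_norm:
            p.grad.mul_(max_norm / (g_norm + 1e-6))
```
It would be called between `loss.backward()` and `optimizer.step()`, in place of (or alongside) the usual global-norm clip.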
jin kazama#3736: I could not find any paper that suggested clipping in transformers. And if it is standard practice then can I find a transformer model trained in this way? (any link please).
Daj#7482: afaik every transformer I've ever trained just clips gradients
Daj#7482: Not sure if it's in any paper in particular
Daj#7482: Or maybe it's a GPT thing
jin kazama#3736: So what about layer normalization? If we clip gradients, then why do we need to normalize?
Daj#7482: maybe I'm mixing things up, I don't know much about the theoretical justification for normalization tbh
Daj#7482: all I know is any GPT I ever trained, you clipped the gradient global norm to 1.0 or similar
Daj#7482: Not sure if it's cargo culting or not
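For concreteness, the "clip the global grad norm to 1.0" recipe described above is a one-liner in PyTorch (`model` and `optimizer` are placeholder names):
```python
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # clip global grad norm to 1.0
optimizer.step()
```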
jin kazama#3736: I wish I could find more about it
Sphinx#2092: A lot of them have it on sorta by default, they just don't mention.
Sphinx#2092: You can see it in the public implementations.
Sphinx#2092: layer norm is a different beast entirely. It's more about getting the signal to safely propagate through the model rather than getting rekt.
jin kazama#3736: I thought that layerNorm was used because BatchNorm was not suitable for transformers (for NLP tasks)
Sphinx#2092: BatchNorm is a shit show.
Sphinx#2092: Ignoring that, it still doesn't answer why it's necessary in the first place. If you want a more theoretical take on it, you can try https://openreview.net/pdf?id=B1x8anVFPr
Sphinx#2092: It's sorta like if the authors felt like writing a long short paper.
StellaAthena#3530: Paper on LayerNorm vs BatchNorm: https://arxiv.org/abs/2003.07845
jin kazama#3736: Thank you guys |
chirp#4545: https://twitter.com/fchollet/status/1373114872674717697
chirp#4545: I wonder if this will actually turn out to be true… or if it will be easier than expected to take these sorts of demos and scale them up
bmk#1476: https://twitter.com/fchollet/status/1373116543626735624?s=20 :mesh:
jrowe#5371: it's almost gofai in that respect, but vastly bigger
jrowe#5371: if you can trust it to recognize surprises, then it still serves as a huge productivity booster
spirit-from-germany#1488: Is there any Colab or pip library to easily extract human poses from images into an array or similar? All things I find on github appear to need lots of fixing before they run on Colab and give me access to a function I can use...
glazgoglabgalab#5255: https://teachablemachine.withgoogle.com
glazgoglabgalab#5255: no idea if there are better options, but it's the most convenient one I know of
glazgoglabgalab#5255: ah nevermind, didn't notice that you asked for colab/pip
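One pip-installable option that may fit, sketched from memory and worth verifying against the MediaPipe docs: the `mediapipe` Pose solution returns landmark coordinates directly, so the result can be dumped straight into an array (the input filename here is a placeholder).
```python
import cv2
import mediapipe as mp

image = cv2.imread("person.jpg")  # hypothetical input image
with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.pose_landmarks:
    # one (x, y, z, visibility) row per detected body landmark
    keypoints = [(lm.x, lm.y, lm.z, lm.visibility)
                 for lm in results.pose_landmarks.landmark]
```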
Basedblue#9138: there is a colab trick to get p100 and hi ram automatically
`"colab": {
"name": "P100.ipynb",
"provenance": [],
"collapsed_sections": [],
"machine_shape": "hm"
}`
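For context, that snippet appears to be notebook-level metadata rather than a cell: the usual approach is to download the .ipynb, add `"machine_shape": "hm"` under the `metadata.colab` section (which, on Colab Pro, requests the high-RAM runtime), and re-upload it. Whether the notebook name can really pin the GPU type is unverified.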
EricHallahan#1051: I think this was already shared.
EricHallahan#1051: Somewhere?
EricHallahan#1051: I don't remember. |
Sid#2121: ok eleuther hivemind
Sid#2121: what's the best way to search for an integer subsequence within a torch.LongTensor
Sid#2121: say i want to find the location of [1,2,3,4] in torch.arange(10)
bmk#1476: subsequence not subarray right
Sid#2121: subsequence yeah
Sid#2121: and by best i mean fastest
Daj#7482: guess :chad:
bmk#1476: so 1 2 3 4 is a subsequence of 9 1 9 2 9 3 9 4 9
Sid#2121: i mean, it has to exactly appear in the tensor, i guess i mean subarray
Sid#2121: thank you top contribution as always
bmk#1476: https://en.wikipedia.org/wiki/Subsequence
EricHallahan#1051: Index all 1s first, then work from there?
bmk#1476: so uh
Daj#7482: np
bmk#1476: why not just do it the dumb way of comparing it at every position
Sid#2121: because it's slow af
bmk#1476: did you already try that and ifnd it to be too slow
bmk#1476: how big are the tensors we're talking
Sid#2121: i did ```
def subsequence(tensor, subseq): |
"""
Finds and returns the indices of `subseq` in `tensor` (if present)
:param tensor:
:param subseq:
:return:
"""
matches = []
assert len(tensor.size()) <= 2
if len(tensor.size()) == 2:
for b in range(tensor.size()[0]):
for i in range(tensor.size()[1]):
if tensor[b, i] == subseq[0] and len(tensor[b, i:i + len(subseq)]) == len(subseq) and \
all(tensor[b, i:i + len(subseq)] == subseq):
matches.append([b, i])
elif len(tensor.size()) == 1:
for i in range(len(tensor)):
            if tensor[i] == subseq[0] and len(tensor[i:i + len(subseq)]) == len(subseq) and \
                    all(tensor[i:i + len(subseq)] == subseq):
matches.append([i])
return matches |
``` and it sucks
EricHallahan#1051: Because that would be PREMATURE OPTIMIZATION
bmk#1476: what
EricHallahan#1051: For slowness
bmk#1476: it's literally the opposite of premature optimization
bmk#1476: trying things the dumb way first is literally the opposite of premature optimization
bmk#1476: @Sid https://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm this is what you need
AI_WAIFU#2844: How big is tensor and subseq, and do you have a GPU?
EricHallahan#1051: Can you do it in parallel?
bmk#1476: unless tensor is truly enormous, you dont need to do it in parallel
AI_WAIFU#2844: Yeah, I would do it as a sequence of map/reduce operations
bmk#1476: im assuming tensor is like 2048
Sid#2121: Around 2048 and the subseq will be like max 10 tokens
Sid#2121: yes
bmk#1476: oh then KMP is probably overkill, hmm
bmk#1476: i think the problem then is that you're crossing CPU/GPU bounds a lot
bmk#1476: so the solution is to do everything on-gpu
bmk#1476: make 10 2048-len tensors indicating which positions in subseq are equal to the corresponding position in tensor
bmk#1476: shift each tensor over the corresponding number of times, and multiply them all together
bmk#1476: move that tensor to cpu and the indices inside that tensor are what you want |
bmk#1476: @Sid this should work for 1d tensors, you can modify it to work with 2d
```def find(tensor, subseq):
"""
Finds and returns the indices of `subseq` in `tensor` (if present)
:param tensor:
:param subseq:
:return:
"""
    tens = []
    for i, v in enumerate(subseq):
        # mask of positions where tensor matches subseq[i], shifted left by i and zero-padded
        tens.append(torch.cat([(tensor == v)[i:].float(),
                               torch.zeros(i, device=tensor.device)]))
    for t in tens[1:]:
        tens[0] *= t
    # position j is nonzero iff the full subsequence starts at position j
    return [j for j, v in enumerate(tens[0].cpu()) if v]```
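A quick sanity check of the sketch above (assuming `torch` is imported and everything lives on the same device):
```
>>> find(torch.arange(10), torch.tensor([1, 2, 3, 4]))
[1]
```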
spirit-from-germany#1488: I just had an interesting idea on how to babysit colab Notebooks...
one could easily download checkpoints and datasets from an FTP server without the tedious Google Drive authentication process... and save checkpoints on the FTP server from time to time...
Reactivating the notebook would only take one click, with Colab Pro once every twenty-four hours :)
And even that could be automated 😎
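A minimal sketch of that loop using the standard-library `ftplib` (host, credentials, and paths are placeholders; anything beyond plain FTP, e.g. SFTP, would need a different library):
```python
from ftplib import FTP

def download_checkpoint(host, user, password, remote_path, local_path):
    # Pull the latest checkpoint from a plain FTP server, no Drive auth needed.
    with FTP(host) as ftp, open(local_path, "wb") as f:
        ftp.login(user, password)
        ftp.retrbinary(f"RETR {remote_path}", f.write)

def upload_checkpoint(host, user, password, local_path, remote_path):
    # Push a checkpoint back to the server from time to time.
    with FTP(host) as ftp, open(local_path, "rb") as f:
        ftp.login(user, password)
        ftp.storbinary(f"STOR {remote_path}", f)
```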
jrowe#5371: whenever they review usage logs, they're gonna be heavy handed with banning people that even appear to be gaming their system. if you have a legit use case that consumes a gpu for the full allowed time, I would cover your ass and email them
jrowe#5371: even if you don't get a response, it gives you a path to being unbanned when you can say that you asked
bmk#1476: just use one of the several dozen throwaway google accounts you undoubtedly have
jrowe#5371: it'll be automated banning, is my only concern
EricHallahan#1051: You will 100% be temp-banned if you try this IMO.
bmk#1476: (i know google accounts are expensive to make compared to other types of accounts, like reddit accounts, which is why i said dozens and not hundreds)
EricHallahan#1051: They even throw you recaptchas to prevent this.
EricHallahan#1051: And they will serve you a full verification if they feel it necessary.
bmk#1476: recaptchas are surprisingly cheap to get around considering the, er, *economic value* of owning a few dozen google accounts
EricHallahan#1051: Yeah, it's just a nuisance to deal with.
EricHallahan#1051: But if you leave it idle, they will temp ban you.
bmk#1476: meh just farm it out to a captcha farm, pay pennies per captcha
jrowe#5371: seen any clip based anti captcha in the wild?
bmk#1476: sim cards are cheap too
EricHallahan#1051: It's after midnight so I'm off to bed. Goodnight!
triggerhappygandi#0001: Few dozen accounts? Damn. I only have 10 accounts in total. Time to make 10 more.
genai (Immortal Discoveries)#0601: Anyone have 1MB of pure plain openwebtext [already scraped]?
cfoster0#4356: https://openwebtext2.readthedocs.io/en/latest/
genai (Immortal Discoveries)#0601: it's big, can you just give me about 1-100MB of text from it?
cfoster0#4356: No, your best bet is to download a bit of it yourself
genai (Immortal Discoveries)#0601: If I download that 25GB is it going to be urls or text already ready to use?
kindiana#1016: text
genai (Immortal Discoveries)#0601: if anyone here has a 1GB file you can open it fast in notepad++ and send me 10MB... I wish you could
genai (Immortal Discoveries)#0601: i could download it though, #1 it failed early and #2 i'm paranoid about throttling my hard disk with GBs of storage lol
genai (Immortal Discoveries)#0601: it's downloading now
genai (Immortal Discoveries)#0601: would be desired tho if you can send me quickly 10MB
genai (Immortal Discoveries)#0601: it's not going to make me need to stitch together documents is it?
genai (Immortal Discoveries)#0601: ex. it gives me 10,000 0.5MB files lol
andyljones#7746: fwiw it's a tar file, so you should be able to write something to stream it down
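A rough sketch of streaming just the first few members out of a remote tar without downloading all 25GB (the URL is a placeholder, the real link is on the docs page above; note the members themselves may still be compressed, e.g. .jsonl.zst, so further decoding is likely needed):
```python
import tarfile
import requests

URL = "https://example.com/openwebtext2.tar"  # placeholder download link
budget = 10 * 1024 * 1024                     # stop after roughly 10 MB of member data

resp = requests.get(URL, stream=True)
resp.raw.decode_content = True
with tarfile.open(fileobj=resp.raw, mode="r|*") as tar:  # streaming mode, no seeking required
    for member in tar:
        f = tar.extractfile(member)
        if f is None:
            continue
        data = f.read()  # may still be compressed depending on the archive's member format
        budget -= len(data)
        if budget <= 0:
            break
resp.close()
```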
genai (Immortal Discoveries)#0601: isnt the code given to us to do so?
genai (Immortal Discoveries)#0601: it shows code
genai (Immortal Discoveries)#0601: i'm not going to get this am I?:
https://drive.google.com/drive/folders/12GsNbTie11IPHbjfioNfja8FlbMfPCVd
genai (Immortal Discoveries)#0601: i don't want urls or chopped up segments needing stitching
genai (Immortal Discoveries)#0601: most people working on AGI only need 1MB to 100MB for most rapid testing; you should upload such a thing to the Pile page, it takes time to learn to use the stitcher/scraper thing
triggerhappygandi#0001: 1mb of OWT? What's so special about it that you wouldn't rather use enwik8?
EricHallahan#1051: At 1 MiB it really isn't worth your time to get a slice of OWT2 when there are far better sanity-check benchmarks out there. |
triggerhappygandi#0001: Enwik8 is literally a small part of Wikipedia. Much better.
triggerhappygandi#0001: Than a 1MB text dump from owt
CKtalon#7792: anyone know how many dimensions/what size the feed forward and heads are for GPT-2/3/BERT-large? https://cdn.discordapp.com/attachments/729741769738158194/823247760986996746/unknown.png
bmk#1476: gpt3 d_model is 12288
CKtalon#7792: oh, i probably typoed. thanks!
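For reference, the GPT-3 paper's 175B config uses d_model = 12288 with 96 heads, so d_head = 12288 / 96 = 128, and the feed-forward inner dimension is the usual 4 × d_model = 49152.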
Sewing#2678: is someone here familier with the DALLE paper/appraoch?
StellaAthena#3530: @Sewing See #multimodal, where people are working on similar things
Sewing#2678: thx
LaPapaya#4347: Guys, then gpt neo is really out?
StellaAthena#3530: @rom1504 @bmk would know if we have a table of results for our models, but we have an evaluation repo that you can use to compare it directly to GPT-2 if you want
Daj#7482: The original TPU codebase is stable, and there are now two smallish models
Daj#7482: The GPT3 sized model will still take an unknown amount of time
StellaAthena#3530: @sl24 Yes we plan to release weights for a 200B model (assuming we get it trained)
Sid#2121: @rom1504 I did run some metrics on the models but no idea where i put them lmao. I'll try to rerun them and post the results up to the gpt-neo repo soon, but as said in the announcement, we're moving away from that codebase to focus on the GPU codebase, so this won't be my top priority and might take a while
JC#3653: how soon do you hope to reach that state?
Sid#2121: codebase is done, we're just waiting on hardware, really
Sid#2121: and then training time ofc
Sid#2121: original estimate of August seems... possible? but we're not guaranteeing anything
EricHallahan#1051: Ideally less than a year, so soon™️.
Sahl#0630: any time till heat death |
Sahl#0630: and maybe beyond
Sahl#0630: ™
JC#3653: that is surprisingly good imo, good luck.
EricHallahan#1051: One of my favorite estimates is "less time than it took for the Cassini family to map France."
sl24#8080: thanks @StellaAthena
rom1504#5008: You really will need to be checkpointing and computing a bunch of intermediate metrics for gpt-3
rom1504#5008: Multi month model training seems to have quite interesting problems :)
rom1504#5008: Having a second estimation of the hardware cost of training gpt-3 will be quite valuable in itself
sl24#8080: How’s this getting funded lol
sl24#8080: ah ok thanks
jrowe#5371: when a mommy dollar and a daddy dollar love each other very much...
sl24#8080: infinite money glitch
Zealousmagician#9587: Is that how they make pennies?
Daj#7482: We're getting the hardware from Core Weave for free
Daj#7482: To do this project
EricHallahan#1051: "They inflate so fast!"
freddiemitchell6#0094: Thanks for the share! Any idea when the weights will be PyTorch compatible?
EricHallahan#1051: We had it kind of working with Hugging Face Transformers, but ran into an issue.
Daj#7482: There is a conversion script somewhere afaik
Daj#7482: These two models should be fine since they don't have local attention |
Daj#7482: I think
freddiemitchell6#0094: ok, thanks. I'll try looking for a conversion script.
aero#1357: ask the 2.7B model to output the final trained weights for full gpt3 what could go wrong
aolko#7301: re: https://discord.com/channels/729741769192767510/730090096287547444/823296653481738290
EricHallahan#1051: I understand what you are saying.
jrowe#5371: just as an explicit reference, gpt-2-medium in that notebook is 375m parameters, gpt-2-large is 1.5b
aolko#7301: not quite the point
jrowe#5371: gpt-neo releases are 1.3b and 2.7b params
jrowe#5371: not responding to you exactly, making explicit what's happening
jrowe#5371: as of today, gpt-neo-2.7b is the largest publicly available gpt based model
jrowe#5371: they're a bunch of goddamn rock stars
jrowe#5371: anyway, a kitchen sink style notebook sounds like a good exercise, it'll probably take a couple days unless someone in EleutherAI has already started
aolko#7301: 👌
aolko#7301: it'd be great if it were inline like that one 🙂
jrowe#5371: that could be tricky, too many unknown unknowns to say for sure
aolko#7301: and focused on the usage of a pretrained model w/o training or finetuning
EricHallahan#1051: I think it isn't as easy as you would like it to be. I prefer notebooks written that way as well, but there is a lot of work involved in putting that together. There really isn't an interface AFAIK that abstracts that away as a nice package in the codebase right now. We can try to put one together, but it really isn't our priority at the moment.
jrowe#5371: ill start in about an hour, I'm an amateur but I'll throw some time into it
StellaAthena#3530: What is a “kitchen sink” notebook?
aolko#7301: ideally one that showcases features of the library |
EricHallahan#1051: Having the notebook have the code inline to help with understanding.
aolko#7301: and yes, all inline
StellaAthena#3530: Have what inline
StellaAthena#3530: The entire GPT-Neo repository?
aolko#7301: no, the general importing, configuration, usage as notebook snippets
EricHallahan#1051: It provides context on how it works, not just running an abstract function in each code cell.
jrowe#5371: how big is the 2.7b model?
EricHallahan#1051: 2.7B*2 Bytes?
jrowe#5371: cool, ty
jrowe#5371: that comes in just a smidge smaller than gpt-2-xl
jrowe#5371: nice
EricHallahan#1051: It is effectively identical.
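Back-of-envelope: 2.7B parameters at 2 bytes each (fp16) is roughly 5.4 GB, versus GPT-2 XL's 1.5B at 4 bytes (fp32), roughly 6 GB, assuming those are the storage formats in question.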
aolko#7301: also it would be a good time to bump https://github.com/huggingface/transformers/issues/4658
EricHallahan#1051: This is not GPT-3, so no, we will not be bumping that.
EricHallahan#1051: We are hoping to release the weights there within a month, but to make it compatible we need to fix a few things.
EricHallahan#1051: Of course, all dates are subject to change.
jrowe#5371: is there a press release coming, or is the announcement just for discord for now?
StellaAthena#3530: @aolko Can you explain how our colab file isn't a "kitchen sink" walkthrough: https://colab.research.google.com/github/EleutherAI/GPTNeo/blob/master/GPTNeo_example_notebook.ipynb#scrollTo=dH0x3dI9j85P
StellaAthena#3530: TBH I still don't understand.
aolko#7301: 1. focused on training and finetuning, not usage |
2. depends on python script calls with buckets of args
EricHallahan#1051: Well, this is research code, not production code.
EricHallahan#1051: The fact that it exists and works is already good enough for most use cases.
Aran Komatsuzaki#5714: https://twitter.com/alrhemist/status/1373743383735328768
StellaAthena#3530: @aolko How does this not explain how to sample from the model? https://cdn.discordapp.com/attachments/729741769738158194/823307123302596640/Screen_Shot_2021-03-21_at_5.28.19_PM.png
aolko#7301: people who want to get the output don't really need that
EricHallahan#1051: This isn't really meant for them IMO.
aolko#7301: i'd probably say it's too advanced
StellaAthena#3530: This is literally "type in your prompt and run the model to get output"
Louis#0144: We know
Louis#0144: lol
Louis#0144: I’m gonna get working next week
aolko#7301: nope, mine is
Louis#0144: I already promised stella
Louis#0144: LMAO
EricHallahan#1051: I hate how that sounds, but I'm leaving it.
StellaAthena#3530: Your what is
aolko#7301: literally a text field with a text output
EricHallahan#1051: That notebook is abstracting away everything to HF.
guac#4716: i think they want a very high-level surfaced API that's all cell interaction, less file system %magics
EricHallahan#1051: That is not our problem really.
StellaAthena#3530: @aolko So, you want no code?
aolko#7301: but really that should be the thing to strive towards https://github.com/minimaxir/gpt-2-simple
EricHallahan#1051: If we get it to Hugging Face then that will do it.
aolko#7301: i want "turnkey generation"
EricHallahan#1051: We will not be serving that API.
EricHallahan#1051: ^
EricHallahan#1051: It should be effectively turnkey at that point.
StellaAthena#3530: @aolko I still don't understand how "type prompt into this cell, hit go in that cell" isn't good enough
aolko#7301: __which__ cell
aolko#7301: that's the point...kinda
EricHallahan#1051: To put this in context, I am tagged as a developer, and I have never generated a single token with any model.
jrowe#5371: then modify the notebook for yourself, it's ready to be abstracted away so you can just run a two cell simplified version
bmk#1476: https://github.com/huggingface/transformers/issues/10834 if anyone wants our models on HF then this feature needs to be implemented
StellaAthena#3530: You type the prompt into the cell labeled as the prompt and then you run the next line https://cdn.discordapp.com/attachments/729741769738158194/823308828741337119/Screen_Shot_2021-03-21_at_5.34.54_PM.png
jrowe#5371: the hard bits are done
StellaAthena#3530: > Once training is finished, you can run the same command with the --predict flag to sample from your model.
> To pass in a prompt, save it to a .txt file, and pass in the name of the file with the --prompt flag.
> use the cell below to enter your prompt, and run it to save it to example_prompt.txt.
aolko#7301: in order to do that i have to untangle these calls |
```
!python3 main.py --model $pretrained_model --steps_per_checkpoint 500 --tpu colab --predict --prompt example_prompt.txt
```
aolko#7301: and inline them
bmk#1476: @aolko it's not our responsibility to make it exactly to the specifications you in particular need. we did the hard parts, you can figure out the rest
EricHallahan#1051: That isn't our problem.
EricHallahan#1051: We will not be providing that API.
aolko#7301: great, but you are not alone in here
StellaAthena#3530: @aolko I don't think you have the right picture of our target audience. We are not a start up. We are not producing a product. We have no interest in building a userbase.
StellaAthena#3530: We are researchers who are doing research and publishing models for people to use. People who do not know how to run python code are not our target audiance.
EricHallahan#1051: If you would like that you will need to wait for a Hugging Face release.
aolko#7301: then this is an issue for the library devs
EricHallahan#1051: What library devs?
StellaAthena#3530: What library?
EricHallahan#1051: What boat?
sl24#8080: What?
aolko#7301: similar librar**ies**
StellaAthena#3530: Again, I don't think you understand what we are going for. We are not publishing polished libraries or creating products
EricHallahan#1051: That isn't our vision for our product.
aolko#7301: re: |
EricHallahan#1051: You seem to not understand the complexity that is whisked away by that single call.
bmk#1476: @aolko we do what we want to and we don't get paid a dime for any of it. if you don't like what we're doing you don't have to use it. if you think it could be better, by all means, go ahead, make your own library that uses our model, but don't act like you expect us to deliver everything on a silver platter
aolko#7301: > if you don't like what we're doing you don't have to use it.
the massive oversight of such statements is the lack of thought about alternatives.
StellaAthena#3530: > great, but you are not alone in here
What does this mean? We are literally the people who produce this stuff.
EricHallahan#1051: He is talking to me.
guac#4716: i feel like i've just been hit over the head
EricHallahan#1051: I am not alone.
bmk#1476: we are not a company, we are not making a product
aolko#7301: you are not the only ones in this server, there are users as well, probably library and app devs too
triggerhappygandi#0001: Because I did
EricHallahan#1051: But my opinion is shared.
StellaAthena#3530: You are aware you are currently lecturing two of the four server mods on how to conduct their own server, right?
asara#0001: I assume there is enough interest in the models that upon release, some people will make UIs/inference websites/APIs/side projects, whatever you want to call it, that allow for inference in the way you may want? That doesn't mean Eleuther will do it officially though, so I don't see why it matters
triggerhappygandi#0001: None of the regular people (that I know of) are. This message will just disappear among a sea of others. So what's the point?
aolko#7301: not really lecturing but typing a preferable route
EricHallahan#1051: We have expressed that we are not interested in front end development.
Sahl#0630: the hard part is the weights, the easy part is the interface
Sahl#0630: don’t worry about it |
triggerhappygandi#0001: Basically this is how I assume we will make the models compatible with HF. Someone else will lmao
aolko#7301: doesn't have to be a front-end, look at the library linked above
StellaAthena#3530: You are lecturing us, and condescendingly so.
This is a community driven research collective. Things happen because people want them to happen. If you would like to wrap something pretty around the model, you are welcome to.
triggerhappygandi#0001: Why is the colab not enough?
aolko#7301: answered above
triggerhappygandi#0001: That's not convincing though
EricHallahan#1051: If you would like that without your personal work, wait for the inevitable Hugging Face release.
aolko#7301: not a goal
EricHallahan#1051: End of story.
jrowe#5371: this is so disrespectful it's bonkers
triggerhappygandi#0001: Wrap it up into a command line if you want to, but why do you expect us to do that?
EricHallahan#1051: Our goals are clearly not shared.
aolko#7301: as well as contexts
StellaAthena#3530: What is your "context"
bmk#1476: @aolko if you want gpt2-simple like interface you can do it yourself and i very much look forward to hearing about it. otherwise I would suggest cutting this conversation off here
EricHallahan#1051: As the saying goes, assumptions make an `ass` out of `u` and `me`.
EstebanSir#2189: well this is certainly very poggers
StellaAthena#3530: @EstebanSir Welcome! |
EstebanSir#2189: oh hello, i've been here for a bit now- just don't talk much
EstebanSir#2189: i'm glad to hear good news from gpt-neo!!
EricHallahan#1051: Thank you.
StellaAthena#3530: Thanks 🙂 we are quite pleased. We have a 10B model that's going to start training soon and hopefully a 200B model one day
triggerhappygandi#0001: You see when you say _one day_
triggerhappygandi#0001: I go :guilty:
StellaAthena#3530: Let me be more precise.
StellaAthena#3530: Tuesday. It'll be a tuesday
jrowe#5371: 200B tomorrow! bahaha.(/s btw)
StellaAthena#3530: Is that concrete enough for you?
EstebanSir#2189: oh that would be awesome, but would a colab notebook be able to use that?
EricHallahan#1051: That is illegal.
triggerhappygandi#0001: This is 7x more precise. Thank you
Sahl#0630: aren't we getting the 900T model next Saturday
EstebanSir#2189: gpt singularity when
triggerhappygandi#0001: ***9 thousand T***
bmk#1476: yes
triggerhappygandi#0001: 9000T model will be AGI mark my words
aolko#7301: sure, but in order to do that someone has to push an issue, and judging how nobody is really eager to do it from team's side i guess it's up to the users
StellaAthena#3530: No, unfortunately not. We are going to try to distil the 10B model so that it fits in colab for inference, but it's unclear if colab would be able to handle that |
triggerhappygandi#0001: Actually 9000T+1
EstebanSir#2189: :( oh well
EstebanSir#2189: hey, just for reference, how many parameters did gpt2 have?
bmk#1476: 1.5B
triggerhappygandi#0001: 1.5B
triggerhappygandi#0001: Damn beat me to it
EstebanSir#2189: oh so this is certainly better
StellaAthena#3530: So one of our models is slightly smaller and the other is nearly twice the size
Sahl#0630: we will get AGI when we figure out 1.6 param models
StellaAthena#3530: Also, our data is a *lot* better
jrowe#5371: gpt-neo has effectively twice the parameters
triggerhappygandi#0001: Not to mention trained on much more data
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/823313748806074408/unknown.png
jrowe#5371: bpes or character level?
EricHallahan#1051: BPE
triggerhappygandi#0001: to be fair Pile >> OWT
triggerhappygandi#0001: We all know moar is better
EricHallahan#1051: Char-level wen
jbesomi#6954: Hey guys! jbesomi here, love the group and the idea! How can I help you?
jrowe#5371: wooo! 🏄♂️ |
StellaAthena#3530: @jbesomi Welcome! What's your background and skillset?
EricHallahan#1051: Welcome! Have a look at the resources in #rules as well. There is a lot of useful stuff linked there.
triggerhappygandi#0001: @jbesomi Here is a list of things you can do
https://github.com/EleutherAI/info/blob/main/jobs_board.md
jbesomi#6954: I'm a computer scientist/data scientist, author of Texthero (https://github.com/jbesomi/texthero). I guess I can support you on both data cleaning/scraping for the Pile or similar, or help out with the training of the model. I'm very interested in producing good representations, so I would love to work on this section too
jbesomi#6954: Amazing! Thanks!
triggerhappygandi#0001: @StellaAthena he do have 400 like tho
StellaAthena#3530: RIP those screenshots posted in the wrong order 😢
triggerhappygandi#0001: Lol
jbesomi#6954: Sounds good! Will read everything
chilli#5665: :thonk:
chilli#5665: some of these twitter responses are very wrong
chilli#5665: :thonk:
chilli#5665: perhaps we shouldn't have mentioned GPT-3 in the tweet
triggerhappygandi#0001: Man Aran has way too much twitter influence
triggerhappygandi#0001: @Aran Komatsuzaki How get thousands of followers?
triggerhappygandi#0001: Need to feel this dopamine
ethan caballero#6044: @Carl-bot
Aran Komatsuzaki#5714: tweeting arxiv papers everyday for two years would do it lol
chilli#5665: Also |
chilli#5665: I'm surprised that people are so impressed by a 2.7B parameter model
chilli#5665: :thonk:
Deleted User#0000: soon conferences will pay Aran to show up, as influencer
triggerhappygandi#0001: 2 years? Lmao thats a grind. :brr:
EstebanSir#2189: oh boy... this is going to take a while to download
EricHallahan#1051: @triggerhappygandi Are you in support of the repo message now?
guac#4716: Imagine Aran is Siraj's alter ego
triggerhappygandi#0001: What message?
bmk#1476: @Aran Komatsuzaki pls dont oversell results ;-;
voxs#0001: let’s go woooooooo
Aran Komatsuzaki#5714: Siraj Raval with no financial drive
EricHallahan#1051: The "If you don't know what you are talking about" message.
triggerhappygandi#0001: Ohh the thing we talked a while back?
chilli#5665: how many followers do I need to be considered an "influencer"?
chilli#5665: 🤔
triggerhappygandi#0001: Enough that you get 400 likes in an hour
EricHallahan#1051: Yeah like the other day.
triggerhappygandi#0001: Lucid gets that many stars on his repo in an hour. How?
triggerhappygandi#0001: Literally gaming the github
bmk#1476: @Aran Komatsuzaki you should probably add a tweet to the thread that clears up that the big model isnt done |
Aran Komatsuzaki#5714: ok i will 🙂
Aran Komatsuzaki#5714: let me know if you have anything else to add
EricHallahan#1051: Didn't I call that this was going to happen?
jrowe#5371: lol, Twitter horde go :brr:
triggerhappygandi#0001: Otherwise you will invite all the AIDungeon populace lmao@Aran Komatsuzaki
triggerhappygandi#0001: With that tweet
EricHallahan#1051: I should have precommited.
bmk#1476: one of us! one of us!
EricHallahan#1051: Unfortunately I didn't create a repository.
triggerhappygandi#0001: I do not see anything in the readme
triggerhappygandi#0001: Ohh right I get what you mean
triggerhappygandi#0001: Lol
EricHallahan#1051: It was removed by Sid.
triggerhappygandi#0001: Yeah now I can see the point. But I am optimistic
triggerhappygandi#0001: That this is a one off
triggerhappygandi#0001: Next time it pops up maybe it won't get so annoying lol
bmk#1476: haha nope
triggerhappygandi#0001: Damn you are very nihilist
EricHallahan#1051: Looking at the social media reaction is not going to be.
voxs#0001: is gpt2neo benchmarked |
bmk#1476: not yet
Louis#0144: We should have a page on the repo for pictures of cats
triggerhappygandi#0001: Soon
triggerhappygandi#0001: why
Louis#0144: Bc cats
Louis#0144: Stella has many cats
triggerhappygandi#0001: Why not duck
Louis#0144: She can start it
triggerhappygandi#0001: Many?
Louis#0144: Many
triggerhappygandi#0001: @StellaAthena many? I only saw 1
StellaAthena#3530: I have two
jrowe#5371: it's hard to train ducks to use a litter box
StellaAthena#3530: Also a girlfriend who meows?
Louis#0144: That counts
triggerhappygandi#0001: Because they are independent thinkers
Louis#0144: I’d count that
triggerhappygandi#0001: They poo where they please
Louis#0144: There’s a really bad joke there
Louis#0144: That I won’t say |
jrowe#5371: ducks = Indian cats?
Louis#0144: Since it’s politirib
triggerhappygandi#0001: So, in other words, _technically_... a _cat_ girl
triggerhappygandi#0001: YES
triggerhappygandi#0001: We need to wirehead society into quacking, if they want to try to be cute animals. Rather than meow.
EricHallahan#1051: You are such a quack.
triggerhappygandi#0001: @EricHallahan btw I don't think the message would've prevented this thing
triggerhappygandi#0001: fwiw
EricHallahan#1051: I am not so sure. They had already seen our Colab notebook in the repo.
bmk#1476: can we pls add the message now tho
triggerhappygandi#0001: Vote.
triggerhappygandi#0001: https://tenor.com/view/impeachment-love-democracy-i-love-democracy-gif-15723806
guac#4716: is that hinton?
triggerhappygandi#0001: Lol
triggerhappygandi#0001: Sith Hinton = pro misalignment?
StellaAthena#3530: No, that would be my best friend who wears cat ears and (sometimes) a tail
triggerhappygandi#0001: :guilty:
triggerhappygandi#0001: _what_?
triggerhappygandi#0001: I sincerely hope the tail part is joke
bmk#1476: :wat: |
bmk#1476: why would it be a joke any more than the ears?
zphang#7252: fangs would be where we draw the line
bmk#1476: there is a strong correlation between earwearingness and tailwearingness
triggerhappygandi#0001: Because people sell headphones with cat ears. So I draw a line on tail
StellaAthena#3530: I'm not joking at all
triggerhappygandi#0001: :wojak_despair:
StellaAthena#3530: People sell plugs with tails, does that make it acceptable?
ersatz#0001: Is there a % of progress until GPT-Neo somewhere? 🤔
triggerhappygandi#0001: uhhh
StellaAthena#3530: We just released a 2.7B model
StellaAthena#3530: Like an hour ago
Sahl#0630: rats > cats
Sahl#0630: np
Sahl#0630: it's easy being right
bmk#1476: i assumed you were talking about tails using .. a somewhat different affixing method
triggerhappygandi#0001: Literally all I can say is :wojak_despair:
bmk#1476: :berk:%
nz#9710: :wojak_despair:
StellaAthena#3530: I believe you have the right picture of how Kate wears her tail
triggerhappygandi#0001: no :omniberk: % |
Eleiber#8347: What do you think OpenAI will do if you make the GPT-3 like model? Decrease their prices? 🤔
ersatz#0001: Is this the GPT-3 sized model or smaller?
StellaAthena#3530: @ersatz Between GPT-2 and GPT-3
triggerhappygandi#0001: Probably not
bmk#1476: way smaller
bmk#1476: microscopic
triggerhappygandi#0001: Since they have a whole API, and it is especially made to be used way too easily
EricHallahan#1051: Well, it depends on what you define as GPT-3.
ersatz#0001: I guess the bottleneck at this point is just money so the project is dead
bmk#1476: no, we have enough compute
bmk#1476: the project is not dead
triggerhappygandi#0001: These models take long time to train
triggerhappygandi#0001: The big boy will run for months
triggerhappygandi#0001: No, it is not a _chungus_
jrowe#5371: bottleneck is time, limited by the bleeding edge of dl training tech
triggerhappygandi#0001: As it is for life itself :wojak_despair:
bmk#1476: in the meantime, it would be great to absorb all the extra energy into alignment
triggerhappygandi#0001: To escape time bottleneck for life
triggerhappygandi#0001: I just want to grill outside the entire observable universe
ersatz#0001: Longevity research finds a way |
Louis#0144: Kinky
triggerhappygandi#0001: honk
bmk#1476: alignment is by far the most important thing we could be working on and after we finish gpt3 replication i'm going to be pushing to divert all engineering effort to alignment
Louis#0144: I’m not putting on the goose beak for u
triggerhappygandi#0001: Reading up a lot of LW lately. What more can I do to accelerate?
Louis#0144: Ehhhh
Louis#0144: I don’t agree
ersatz#0001: Isn’t alignment like almost pure math?
bmk#1476: mostly just read more for now
bmk#1476: there's a lot to read
Louis#0144: I think multimodal and equivariance is the most important
bmk#1476: not necessarily
bmk#1476: some of it is really heavy on math, other parts aren't
Louis#0144: A lot of the stuff discussed here is though
triggerhappygandi#0001: I think RL is OP but you are welcome to be wrong 😎
jrowe#5371: philosophy, game theory, number theory, psychology
bmk#1476: what we need to do the most rn for alignment is for a bunch of us to learn a lot about alignment so that when gpt3 replication is over we can immediately start doing stuff
triggerhappygandi#0001: After all no _true_ AGI will _not_ be an agent
jrowe#5371: early childhood development
Louis#0144: I simply don’t agree alignment should be our *focus* |
Louis#0144: Why should we compete with MIRI
Louis#0144: that seems silly
EricHallahan#1051: At that point I am going of to head the eventual audio/speech project.
Louis#0144: We can create our own niche
triggerhappygandi#0001: More like collab?
Louis#0144: I guess
Louis#0144: But i really think we should make our own niche
asara#0001: replicating 15.ai-quality TTS would be a pretty nice project for this community I think. Most of the tools and knowledge are readily available as well
Louis#0144: I think it would be beneficial to us in the long run
jrowe#5371: alignment work is de facto collaborative, unless you're doing it wrong
Louis#0144: +1 for audio/speech
Louis#0144: That would be kickass
bmk#1476: alignment > audio
triggerhappygandi#0001: Bruh
bmk#1476: alignment is far more important
triggerhappygandi#0001: Even deep learning has no textbooks. Other than the goodfellow one but besides the point
asara#0001: Well given this view, alignment is more important than almost anything though, but that doesn't mean there can't exist many other projects
Louis#0144: If we do audio well then we establish ourselves to the ML community at large in different fields. If we do alignment well then all that does is give us kudos with MIRI and philosophers
bmk#1476: it isn't in any textbooks. join us, become a pioneer, read up on alignmentforum stuff
Louis#0144: I really think we need a cost benefit analysis of doing alignment before we make that decision |
Louis#0144: Like we seriously need to do a proper analysis
triggerhappygandi#0001: I do agree that alignment shouldn't be the sole focus after we are done with neo
bmk#1476: :gameryes: and i plan on doing everything I can to steer eleuther towards alignment and away from almost anything else. there are places to do other things
bmk#1476: maybe not the sole focus but our biggest project
triggerhappygandi#0001: Future of Eleuther after Neo is very much a discussion we need to have someday.
Louis#0144: Yeah
triggerhappygandi#0001: Entire species' biggest project
Louis#0144: I don’t think we can decide yet
jrowe#5371: *Oregon trail intensifies*
bmk#1476: do you seriously think i want to do alignment because i want to get kudos from miri
Louis#0144: We don’t even know what’s going to happen between now and when we finish neo
Louis#0144: No of course not
Louis#0144: I’m thinking about the benefits for the group at large
triggerhappygandi#0001: But I kinda want to go the OpenAI way: you can't do alignment without being on the bleeding edge yourself.
Louis#0144: I know you want to do it because alignment is deeply important to you
zphang#7252: bmk loses control, starts the GPT rumbling
Louis#0144: I understand that
Sahl#0630: I think pursuing a different approach to alignment could be a good focus but I think people in this server have a strong background in capabilities
Louis#0144: I respect it
triggerhappygandi#0001: I do not want us to be MIRI |
EricHallahan#1051: The original or the Apple II version?
nz#9710: >barges into the discussion
alignment is completely useless
>doesn't elaborate further
>leaves
Louis#0144: I’m not doing that
ersatz#0001: I agree that alignment is the most important thing we could work on but I personally doubt I could contribute anything of value
Louis#0144: I am saying it would have benefits, I understand why Leo wants to do it
triggerhappygandi#0001: https://tenor.com/view/opinion-gif-20305030
Louis#0144: I am also saying we need a discussion before we make a decision like that
triggerhappygandi#0001: @nz your opinion = sus
Louis#0144: Trust me I really love the debate architecture
Louis#0144: I think IRL is kick ass
asara#0001: I agree alignment is important, but what would Eleuther as a community look like if everyone did nothing *but* alignment? Is there even a way for that to work and be feasible?
bmk#1476: trust me, it's not as bad as it sounds
how much do you know about alignment so far? I can get you some good resources depending on where you're at
Louis#0144: I really like those approaches
bmk#1476: I don't see any reason why it wouldn't be feasible
Sahl#0630: I think interpretability counts |
triggerhappygandi#0001: Give me something. I have read like 2 blogposts from Christiano and halfway through the R:AZ
Sahl#0630: considering focusing on alignment instead of capabilities is a good idea imo
ersatz#0001: Pretty much expert level like reading the papers directly following debates and listening to podcasts
bmk#1476: i recommend the yud talk to everyone https://www.youtube.com/watch?v=EUjc1WuyPT8
bmk#1476: it's highly worth watching it
asara#0001: I was thinking of it as, surely out of the things that need to be done, only a select few are capable+interested in them, and already even having a whole group of channels for alignment is a great direction. We can keep steering people towards it, but what would differentiate Eleuther from other alignment communities if it was all there was? More 'exciting' projects (yes, I know, alignment *should* be exciting!) can keep many more people interested
bmk#1476: https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg also rob miles' channel
triggerhappygandi#0001: Fwiw, OpenAI charter is a decent thing to follow
Louis#0144: Strong agree
triggerhappygandi#0001: You can't do alignment if you are not on the cutting edge
bmk#1476: > what would differentiate Eleuther from other alignment communities if it was all there was?
well, for one, we'd structure everything around being radically open
bmk#1476: which is basically polar opposite of miri
bmk#1476: ah, nice
triggerhappygandi#0001: But more importantly, exciting projects like gpt-neo
triggerhappygandi#0001: Can't do alignment if no AGI
bmk#1476: @ersatz what's your opinion on doing more christiano style prosaic stuff rather than miri style deconfusion stuff
ersatz#0001: Funny charter: “we get to make infinite money for ourselves but if we create God we won’t enslave humanity lol”
EricHallahan#1051: @AGI , you exist, right? |
EricHallahan#1051: No?!?
triggerhappygandi#0001: Bot dead?
triggerhappygandi#0001: Shit
triggerhappygandi#0001: F
triggerhappygandi#0001: !battle @bmk
Carl-bot#1536:
triggerhappygandi#0001: Alive and well
ersatz#0001: That’s not something I believe I can answer by anything but intuition and that’s not good enough
bmk#1476: whats your intuition?
ersatz#0001: That we don't need to solve all of philosophy and game theory to align an arbitrarily powerful agent to a set of constraints that are consistent with human values
StellaAthena#3530: Is OpenAI's Ada 2.7B params?
bmk#1476: yeah
bmk#1476: well, i dont think miri claims we need to solve *all* of philosophy or game theory
bmk#1476: they just think we arent using the right concepts to think about this stuff
ersatz#0001: I don't want to straw man them but I think they are essentially claiming that we need to solve all the parts of philosophy that can be solved at all
voxs#0001: are weights fp16 or fp32
bmk#1476: fp32
voxs#0001: ah
voxs#0001: that's why the model seems too big
voxs#0001: compared to openai gpt2 |
bmk#1476: actually, that's because the optimizer states are in there too
bmk#1476: I guess if you narrow that down to a perfect understanding of what CEV actually is then i guess so
triggerhappygandi#0001: Weights are always fp32 right? fp16 training just does its magic while training. the saved checkpoints are still 32 bit right?
bmk#1476: but you do need a perfect spec for CEV (at least according to miri people, probably) because goodharting
Sid#2121: wait, @bmk / @voxs weights are fp16, not fp32
bmk#1476: er, master copy of weights should be fp32?
bmk#1476: only activations are bf16
Sid#2121: oh yeah :thonk: guess you're right
voxs#0001: can the optimizer states be removed?
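Rough numbers, hedged: the fp16 weights alone are about 2.7B × 2 B ≈ 5.4 GB, while Adam's two moment tensors add roughly 2 × 4 B × 2.7B ≈ 21.6 GB on top if stored in fp32, so stripping the optimizer slots should shrink a checkpoint by several times.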
Aran Komatsuzaki#5714: yeah i didn't expect that this would blow up like this.
EricHallahan#1051: I did. I so should have precommited.
marcus.llewellyn#2923: People have been sitting on their hands waiting for a model release. But don't worry, I'm not about to demand a custom inference implementation. 😉
cfoster0#4356: Ngl I think a #precommits channel would be cool
EricHallahan#1051: I also want #entirely-off-topic
chilli#5665: one thing that this makes clear to me (judging from the quote tweets) is just how many people there are in this space just floating around
chilli#5665: who have no idea what's really going on in this space 🤔
chilli#5665: btw, I sometimes wonder
chilli#5665: How many people in this world
chilli#5665: do you think *really* understand something like, say, the Zero optimizer?
Aran Komatsuzaki#5714: i'd say zero |