Deleted User#0000: Hi! Anyone have a colab link to a pixelGAN thingy
Deleted User#0000: I lost mine, or it doesn't work, can't remember:thiccbrungus:
elderfalcon#4450: I feel like W.R.T. what you're saying here that you could do a similar thing with a clip model.
Esp if the pathway through space is highly curved, the halving procedure slowly semi-linearizes the process, I think.
alstroemeria313#1694: oh. does it backprop through an entire DDIM sampling process
alstroemeria313#1694: they use a shorter sampling process to speed it up
alstroemeria313#1694: they must
alstroemeria313#1694: it's going to be bad for quality though
timudk#8246: You mean because of the approximate posterior, right? I mean you could also use some continuous posterior but I guess you would want to use a codebook style posterior.
alstroemeria313#1694: usually for diffusion autoencoders i have just let it learn whatever latent space without regularizing it
alstroemeria313#1694: and found a way to sample from it to do unconditional generation
alstroemeria313#1694: like by using a second diffusion model *in the latent space*
alstroemeria313#1694: no i mean just do a normal autoencoder, not VAE
alstroemeria313#1694: the diffusion decoder has its own ELBO so i didn't know which you referred to.
timudk#8246: oh gotcha
timudk#8246: you are basically doing standard diffusion training but condition the score model on f(z|x_0) or f(z|x_t)
timudk#8246: I wonder if the signal is sufficient to make the latent space meaningful
CarsonPoole#0640: is there a reason there's not much going on in the OSS community with Pytorch + TPUs/XLA?
CarsonPoole#0640: seems like even for the NeoX project it could be useful
CarsonPoole#0640: like you can easily go to ~12B parameters on just a v3-8 (bfloat16)
CarsonPoole#0640: and that's without any offloading or any zero-like optimizations
CarsonPoole#0640: all that's really required at this point is a good MP implementation
CarsonPoole#0640: you can already do a DP Neo 1.3B and 2.7 on a v3-8
EricHallahan#1051: PyTorch XLA has not fully matured.
CarsonPoole#0640: what's undermatured at the moment
EricHallahan#1051: It also places constraints on how functions can be written.
CarsonPoole#0640: can you elaborate on that?
CarsonPoole#0640: they claim on the introductory docs that everything works mostly the same as normal pytorch
alstroemeria313#1694: random footguns
EricHallahan#1051: PyTorch XLA is still bound by the constraints of XLA.
alstroemeria313#1694: ops that cannot be run on tpu and thus involve a transfer to cpu and back
EricHallahan#1051: https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md
alstroemeria313#1694: like F.interpolate()
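One way to catch those silent CPU fallbacks, per the troubleshooting doc linked above, is the XLA metrics report; any `aten::*` counters are ops that fell back to the CPU (a minimal sketch, assuming a working `torch_xla` install):
```python
import torch_xla.debug.metrics as met

# Run a training or inference step first, then inspect the report.
# Counters named aten::<op> are ops that could not be lowered to XLA and
# ran on the CPU instead, forcing device<->host transfers.
print(met.metrics_report())
```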
EricHallahan#1051: TPUs are still TPUs too.
elderfalcon#4450: Oh, haha, got scooped, was gonna link the same troubleshooting link EH did.
It's the silent quirks that kill (a model), methinks. :'( D:
EricHallahan#1051: Good luck trying to get GPT-NeoX working on TPUs.
EricHallahan#1051: It would likely require an overhaul of the entire codebase.
EricHallahan#1051: MTJ might have a reason to exist. :think:
Some Point Process#3793: Do you happen to know how the gradients of the data distribution get learned (in spite of the gradient noise due to small batch sizes), and in turn, how some *global* distribution here is used so that its reverse diffusion steps improve? (Instead of just memorizing the training images for example)
Some Point Process#3793: also: If the model happens to reach a stable fixed point, you can use a fixed point solver as advocated by Kolter et al. (applies to DDPMs/neural-ODE models in general). This assumes some sort of convex/Lipschitz-continuous objective function during inference time
Some Point Process#3793: ref: http://implicit-layers-tutorial.org/implicit_functions/
alstroemeria313#1694: it doesn't though? what happens is very timestep dependent
alstroemeria313#1694: and you stop at t=0
alstroemeria313#1694: instead of continuing arbitrarily?
alstroemeria313#1694: wdym how they get learned?
Some Point Process#3793: I see. I keep forgetting it's that sensitive to timestep, almost as if a different model is being assigned to a particular point in time entirely
alstroemeria313#1694: yes
alstroemeria313#1694: in fact the original paper uses this formulation to introduce the idea
alstroemeria313#1694: then implements it w/ a single timestep conditioned model
alstroemeria313#1694: since what happens on close timesteps is pretty similar
alstroemeria313#1694: it was later generalized to continuous timesteps
alstroemeria313#1694: like. i can calculate the actual normalized log density of the perturbed data distribution given a point to evaluate it at and a timestep
alstroemeria313#1694: it just requires looking at everything in the training set
Some Point Process#3793: Well, I think, in some cases the fixed point solver is supposed to run on top of the existing "diffusion process". That way it can use the past trajectory anyway to "extrapolate" and hence accelerate the process
alstroemeria313#1694: but they're just uniform mixture distributions of multivariate Gaussians
alstroemeria313#1694: which all have the same timestep dependent diagonal covariance
alstroemeria313#1694: you can evaluate their log densities and logmeanexp them
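A minimal sketch of that computation, assuming the perturbed marginal at timestep t is a uniform mixture of Gaussians centered on the (appropriately scaled) training examples with shared diagonal covariance sigma_t^2 I (names are illustrative, not from any particular codebase):
```python
import math
import torch

def perturbed_log_density(x, train_data, sigma_t):
    # x: (d,) point to evaluate; train_data: (n, d) training set; sigma_t: scalar noise std.
    n, d = train_data.shape
    sq_dists = ((x[None, :] - train_data) ** 2).sum(dim=1)  # (n,)
    log_probs = -sq_dists / (2 * sigma_t ** 2) - 0.5 * d * math.log(2 * math.pi * sigma_t ** 2)
    # logmeanexp over the n mixture components
    return torch.logsumexp(log_probs, dim=0) - math.log(n)
```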
CarsonPoole#0640: I'm certainly not referring to the optimizer changes and things like that, but doing a relatively simple tensor parallelism for 8 cores on a TPU v3 with static tensor shapes doesn't seem enormously challenging
CarsonPoole#0640: like just taking the HuggingFace GPT-J for example and adding some tensor parallelism
CarsonPoole#0640: the whole model is already written in simple pytorch code; how much more is there that I'm missing than just some control flow `.to(xladevice)`
CarsonPoole#0640: and changing the shapes of the tensors to be able to effectively split them between the cores
CarsonPoole#0640: these GPT models are just many `Linear` layers and `torch.matmul`
alstroemeria313#1694: we can throw ODE solvers at it
alstroemeria313#1694: and see what happens
EricHallahan#1051: You also cannot use in-place operations.
Grey M.#7151: I read all the rules on lurking/dumb questions so ill keep this to a minimum
As an artist who decided to get a computer science degree, My specific interests are in neural networks for artwork generation and style transfer
Im able to use the Colab notebooks that get released rather well, but I want to take the next steps and learn how to train off of a dataset, and learn more intimately how these neural networks work on a technical level
TLDR, which communities on #communities or otherwise would be best for me to use as a beginner looking to learn this specific field?
most of the communities there have no summaries of what they are about
CarsonPoole#0640: Am I mistaken that those don't seem to be used anywhere in the GPT-J or GPT-Neo HF code?
bmk#1476: tpu podcast used to be good for this but activity seems to have died down there recently
bmk#1476: you might also want to take a look at #art
CarsonPoole#0640: for learning about the technicals of ML I'd recommend the Andrew Ng Cousera Machine Learning course
Kia#2550: #art
CarsonPoole#0640: it's all super simple ML but it is a great foundation for understanding the actual math behind what's going on when you call these high level functions in Tensorflow or Pytorch
Kia#2550: There's The DALL-E server but people are pretty busy at the moment
Grey M.#7151: Im fairly active in #art but mostly just posting art ive made using the tools given to me
id prod more into training my own stuff there but I figured from the rules that me stumbling around as a beginner/novice to neural networks wouldnt be welcome and I understand why
Some Point Process#3793: Here's a paper on a continuous flow modeling that uses only black box solvers: https://openreview.net/pdf?id=8PS8m9oYtNy
(also since continuous flow models were associated to ddpms by Song et al) https://cdn.discordapp.com/attachments/729741769738158194/912096723021738034/unknown.png
gabriel_syme#3220: you're the lucky one!
gabriel_syme#3220: this is cool, who's up for some geospatial stuff
https://github.com/microsoft/torchgeo
CarsonPoole#0640: geospatial is one of the most straightforward fields to get a paper published in an ML conference
CarsonPoole#0640: most of the recent CV papers have never been applied to geospatial stuff
CarsonPoole#0640: like not sure if vision transformers have even been applied yet
CarsonPoole#0640: they hadn't been as of a few months ago
CarsonPoole#0640: and vision transformers were several months old at that point
CarsonPoole#0640: there are SotA papers using 5-10 layer conv nets
CarsonPoole#0640: also datasets are so easy to get and they're all high quality
CarsonPoole#0640: and training models doesn't take months like NLP
CarsonPoole#0640: you can come up with a methodology and train a model in an afternoon that can feasibly beat the SotA
CarsonPoole#0640: that's not an exaggeration bc i've done precisely that
StellaAthena#3530: lol
StellaAthena#3530: @CarsonPoole What are some examples of current SotA papers in geospacial
cuda_oom#8209: Does anyone have any experience with building a model to fix a geospatial dataset? I have a dataset that has many shifted misaligned labels (all labels in an image are all shifted by a constant x,y). I'm looking to train a model on a different, perfect dataset I have and jittering those labels for training and saving the x,y shift I do with software. Does anyone know what to do model-wise to get a model to predict an x,y shift given a mask and RGB image?
inox#5400: automate that and publish the automation
bmk#1476: why do that when you could also automate publishing and publish a zillion papers in an afternoon
bmk#1476: use gpt3 to generate slightly different but actually identical versions of the paper, submit each one to a different journal that nobody has heard of
inox#5400: sometimes you want the whole salami
ofirpress#6591: https://github.com/ofirpress/attention_with_linear_biases/issues/5
alstroemeria313#1694: ohhh
cfoster0#4356: Whelp. Right after I replaced ALiBi with ~~kernel bias~~ neural relative position bias fields :goose10:
gabriel_syme#3220: and yet the torchgeo paper in ICLR was rejected on the premise of 'what the hell is this paper doing here?'
StellaAthena#3530: Got a link to the ICLR submission?
gabriel_syme#3220: sure let me get it
gabriel_syme#3220: https://openreview.net/forum?id=ZgV2C9NKk6Q
gabriel_syme#3220: in any case I'm pretty excited to try this library one day soon, like it was mentioned above a lot of low hanging fruits (both for publishing but also practically)
gabriel_syme#3220: maybe I can go through with an old piracy idea
AndroYD#4442: I have confirmed multiple times that training a model with either TensorFlow or PyTorch, under the same circumstances and datasets, can have small enough differences that one can be a total success while the other is a total failure; in my specific case PyTorch isn't understanding a rule the same way TensorFlow does. This is not easy to debug at all; the sure-shot solution would be switching to an older and slower TensorFlow 1.x that at least I KNOW WILL work. Unfortunately there's no TensorFlow 1.x checkpoint for the 125M model. It would be nice to include older models as well; newer solutions don't always work better, just my 2 cents.
_nah_cool#7283: Hello everyone!! I'm looking forward to implement switch transformer architecture using jax (Flax/Haiku) but the device placement of 'experts' seems quite non-trivial to me.. can someone point me to the resources that might help or perhaps suggest some method? Thank you.
elderfalcon#4450: Just a few quick questions:
1. What worker init fn are you using for PyTorch in your dataloader (s), if any?
2. Which norm you using?
3.a. Are you running anything on Ampere silicon?
3.b. Are you running anything remotely mixed precision in training?
4. Adam optimizer, with or without debias?
Those are the 'easy' quick ones I can think of off the top of my head D': Sorry, don't know how much you've debugged this, if at all. #1 is the single greatest performance destroyer of all time in the current PyTorch, it really needs to be fixed posthaste, I think. :'(
alstroemeria313#1694: pytorch adam always debiases
alstroemeria313#1694: tbh #1 is just people not understanding how fork() works
alstroemeria313#1694: and the fact that pytorch takes care of reseeding *its own* RNGs when it forks to produce dataloader worker processes but does not mess with other frameworks' RNGs.
alstroemeria313#1694: so people get used to not having to reseed manually and forget there is a potential problem
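For reference, the standard reseeding pattern being described looks roughly like this (PyTorch already reseeds its own per-worker RNG; this only covers NumPy and the stdlib `random`, which it does not touch):
```python
import random
import numpy as np
import torch
from torch.utils.data import DataLoader

def worker_init_fn(worker_id):
    # torch.initial_seed() is already unique per forked worker; reuse it to seed
    # the RNGs that PyTorch does not reseed for you, so workers don't produce
    # identical "random" augmentations.
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

# loader = DataLoader(dataset, num_workers=4, worker_init_fn=worker_init_fn)
```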
StellaAthena#3530: If I want to download a specific subset of the Pile, what's the best way to do that? Especially if I would like it to be preprocessed and pretokenized?
@bmk @zphang @researcher2 @cfoster0
Louis#0144: are u assuming the pile download script works still
Louis#0144: lol
researcher2#9294: If the subset is broken down by Dataset, then importing the dataset, instantiating it, and calling documents() will give you a document stream, which would then need to be randomized, flattened to chars, and tokenized. @bmk do we have randomized subsets floating around?
Using opensubs as an example on a machine with a lot of memory (with low memory you could read the full thing first just to get a document count, write the whole thing uncompressed to disk and random seek). Doesn't include a dedupe step and untested:
```python
import random

from the_pile.datasets import OpensubtitlesDataset  # assumes the_pile package layout

open_subs = OpensubtitlesDataset()
documents = list(open_subs.documents())

# random.shuffle shuffles in place and returns None, so build the index list first
document_order = list(range(len(documents)))
random.shuffle(document_order)

with open("flat_file.txt", "w") as flat_file:
    for doc_index in document_order:
        flat_file.write(documents[doc_index][0])

# Tokenize using whatever your model requires
# Load into model
```
StellaAthena#3530: Yea, I'm talking about subdatasets specifically not some arbitrary subset of the data
researcher2#9294: I'll wait to see if bmk has this already, otherwise the above should work. If you want a dedupe that will be a little more painful.
researcher2#9294: Though some datasets are already deduped internally (OWT2)
Pierre Peigné#5169: Hi everyone!
My name is Pierre, I am doing an ML Engineer apprenticeship at Engie as the final year of my Computer Programming school (42 Paris).
Before that I did a Philosophy of Science M.A. and I would like to be involved in AI research!
I already have some theoretical knowledge about DL (I know the classic architectures and how they work under the hood) and I am currently learning PyTorch.
I will be glad to help on one of your current projects!
EricHallahan#1051: Welcome!
ersatz#0001: Hello and welcome! Do you know about Alignment?
Pierre Peigné#5169: I am not a specialist (for now :D) but I am very interested in that. I think it is one of the most important topics in AI.
ersatz#0001: Have you watched Rob Miles' videos on the topic?
Pierre Peigné#5169: Nope! I will check that right now.
I was thinking about following the AGI Safety Fundamentals curriculum in the next batch (January): https://forum.effectivealtruism.org/posts/BpAKCeGMtQqqty9ZJ/agi-safety-fundamentals-curriculum-and-application. Do you have any feedback about it?
ersatz#0001: That's a great idea!
ersatz#0001: I would recommend starting by watching Rob's videos
ersatz#0001: https://www.youtube.com/c/RobertMilesAI/videos
Pierre Peigné#5169: Ok thanks, I will watch that. Are you working on an alignment research project right now?
ersatz#0001: Unfortunately no but you have many researchers here including French, @adamShimi for example
Pierre Peigné#5169: I also found the AI Safety Camp but it is starting at the same time as the AGI Safety Fundamentals curriculum. Any idea which is best?
ersatz#0001: Maybe the FAQ could help you? Or someone else here knows? But don't be shy to email the organizers
bmk#1476: yeah this is probably the easiest way
elderfalcon#4450: Right, it's so bad I think tbh, it seems like an antipattern. I wish that defaulted on rather than off.
But the PTL function thankfully is p g for that if'n u copypasta.
Spy#9778: I just had the funniest meeting with my advisor
Spy#9778: We had a like, 15 minute debate over whether algorithm A or algorithm B was better
Spy#9778: basically he thought algorithm A would be more aligned with what we were doing, while I thought algorithm B was more principled
Spy#9778: but some terms cancelled out and they turned out to be the same
alstroemeria313#1694: ehehe~
Spy#9778: first time it has happened outside of a math homework
adamShimi#8350: They're pretty different.
Spy#9778: I thought it was just a myth
adamShimi#8350: Basically the seminar is about reading and discussing alignment posts/papers
adamShimi#8350: The safety camp is to try doing research with mentors.
adamShimi#8350: Note that you can actually do both.
adamShimi#8350: The seminar is 3 hours per week and the camp around 7 hours per week.
Spy#9778: I now seem to be able to generate outputs of a desired length from left-to-right LMs :o
Spy#9778: (i.e. actually planning ahead so that the sequence ends "naturally" at the right length)
glazgoglabgalab#5255: what's the difference between autoregressively generating image + text tokens vs conditioning on image tokens and generating text tokens?
Sid#2121: if you understand what all those words mean individually, you should already know the difference?
Sid#2121: in 1. you are generating image tokens
Sid#2121: in 2. you aren't
elderfalcon#4450: Haha! That's great! Eehee! Hope y'all had a good laugh over that one! X'D
glazgoglabgalab#5255: Sorry, I meant to ask, what's the expected difference in performance?
glazgoglabgalab#5255: maybe I should just try it and find out
StellaAthena#3530: They do different things, so asking about that doesn't make a whole lot of sense.
StellaAthena#3530: Yes, if you have to generate the text from scratch also you'll do worse on an apples-to-apples comparison.
glazgoglabgalab#5255: Yeah I might just be confused.
I was thinking of a scenario where e.g. You're trying to caption some images. Does accuracy improve more if you also try to generate the image tokens or if you just generate the text tokens and use the spare compute on more examples
glazgoglabgalab#5255: Like despite the task being "given image tokens predict text tokens" do we benefit from autoregressively predicting the image tokens anyways?
glazgoglabgalab#5255: idk if I'm explaining myself correctly or I've got something confused
StellaAthena#3530: Conditioning on the correct image will always be better than jointly determining the image and the text in a remotely fair comparison
StellaAthena#3530: You're effectively getting a portion of the answer for free in one and not in the other
glazgoglabgalab#5255: concrete example:
clevr images -> image tokens
scene description -> text tokens
scenario 1
start from `<s>` generate image tokens + text tokens
scenario 2
start from `<s>, i_1, i_2, ..., i_n` generate the corresponding text tokens
glazgoglabgalab#5255: it's not quite a fair comparison because scenario 2 gets trained for less steps so maybe give it more examples? idk
glazgoglabgalab#5255: if we're only allowed to autoregressively generate M tokens during training should we spend those tokens on more examples or on generating the image tokens as well
glazgoglabgalab#5255: I'm just going to run the experiment lol
nostalgebraist#3542: You can put an autoregressive loss on the image tokens during training "for free"
nostalgebraist#3542: You need the image tokens in your context window either way, to condition on them
kindiana#1016: the cost is that you need to have a causal mask
kindiana#1016: so I think it depends on how much data you have
nostalgebraist#3542: Yeah, not causally masking the image might perform better (similar to MT results about whether to causal mask the source text)
nostalgebraist#3542: But it's the same amount of compute per example
glazgoglabgalab#5255: Thanks, I think I was just asking whether causally masking the image tokens improves performance
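A rough sketch of the choice being discussed, assuming a decoder that already has the image tokens in its context; the only difference is whether the next-token loss also covers the positions whose targets are image tokens (names here are illustrative):
```python
import torch.nn.functional as F

def ar_loss(logits, tokens, n_image_tokens, loss_on_image_tokens=True):
    # logits: (seq, vocab); logits[i] predicts tokens[i + 1].
    preds, targets = logits[:-1], tokens[1:]
    loss = F.cross_entropy(preds, targets, reduction="none")
    if not loss_on_image_tokens:
        # Keep only the positions whose target is a text token.
        loss = loss[n_image_tokens - 1:]
    return loss.mean()
```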
researcher2#9294: @StellaAthena Ok, let me know if you need some data wrangling.
StellaAthena#3530: If you could download, tokenize, and process all sub-dataset of the Pile for GPT-NeoX independently, and then expose them for download publicly that would be phenomenal.
AI_WAIFU#2844: I would just try playing around with pmap/xmap on like 2 devices to try and get a POC working for switching, then try to scale up to a switch transformer.
How many and what kind of devices are you working with? Are you going to need to do SPMD or do you just have 1 box?
PoeticWitch98#2561: Of the several servers listed in the #communities channel, which one do y'all suggest I go to in order to ask the general beginner questions? I know I've asked a couple of them on here but I'd like to not test my luck with the server rules
I'm mainly wanting to know what I'd need to do in order to poke around with the image generation things on a computer instead of my phone. Idk if there's a specific software needed to put the coding things in, these notebooks and what they are, any of that. Which server is the most "for dummies" friendly?
Kia#2550: Um
AI_WAIFU#2844: The general AI server or the fast.ai server are probably good places to start
PoeticWitch98#2561: Thank you so much!
_nah_cool#7283: Thanks for the suggestion, I would definitely experiment with these..
I will be using a Machine with 5 TPU devices (through Google TRC).. ig that I'd be needing SPMD too..
_nah_cool#7283: Ohh... I didn't know about that.. much thanks!!
_nah_cool#7283: Well.. plan to start from 1, if everything goes well, will try to scale up to 5.. just want to experiment with a few things :harold:
alstroemeria313#1694: Hey how do you get a file upload box on Colab
_nah_cool#7283: Yes ig.. each device with 8 tpu cores..
EricHallahan#1051: Check the snippets.
> **Open files from your local file system**
> `files.upload` returns a dictionary of the files which were uploaded. The dictionary is keyed by the file name, the value is the data which was uploaded.
> ```python
> from google.colab import files
>
> uploaded = files.upload()
>
> for fn in uploaded.keys():
> print('User uploaded file "{name}" with length {length} bytes'.format(
> name=fn, length=len(uploaded[fn])))```
alstroemeria313#1694: mm
alstroemeria313#1694: ty
alstroemeria313#1694: and it's just a bytes object?
EricHallahan#1051: Seems like it from the example?
alstroemeria313#1694: yeah
alstroemeria313#1694: i got it working
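For reference, a small usage sketch showing the values really are raw bytes (decode for text, or write back to the Colab filesystem for binary files):
```python
from google.colab import files

uploaded = files.upload()
for name, data in uploaded.items():
    text = data.decode("utf-8", errors="replace")  # if it's a text file
    with open(name, "wb") as f:                    # or persist it to disk
        f.write(data)
```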
gabriel_syme#3220: Say which would be the best model to use for summarization? And is any pretrained model good on academic writing?
EstebanSir#2189: i'm curious on what you are planning to do with that, gabriel :p
EstebanSir#2189: also, I was wondering, i see a couple of question generator transformers out there, but they always ask questions about information within the context, is it possible to train one of them to generate questions about information never mentioned in the text? it probably doesnt have much use for many people though.
EstebanSir#2189: (would this be fitting for #research instead of here?)
elderfalcon#4450: For those in the US or who celebrate Thanksgiving --
Happy Thanksgiving, y'all! Whether you have a good or a bad year with this, just know that someone is thinking of you, all y'all, and that you're worth it to me! Hope you have a lovely day, however it goes. And if it isn't a lovely day at all -- the future is less limited than y'all may think. There's always room for good to come, you may be quite surprised. Love y'all and truly hope you guys have a great one! :)
cfoster0#4356: This looks like a very nice intro resource on AR transformers to add to the list https://e2eml.school/transformers.html
jacquesthibs#6131: Is there anyone here with the necessary hardware for fine-tuning GPT-J that would like to do some pair-programming to do that this weekend?
jacquesthibs#6131: If not, I'd be willing to split the cost for using cloud computing to fine-tune a GPT model.
jacquesthibs#6131: I just want the experience to fine-tune it.
gabriel_syme#3220: I want to build a neural search engine for all my DL literature of the last year
gabriel_syme#3220: Happy Thanksgiving yes!
elderfalcon#4450: Make sure it doesn't rely on cron jobs and manual sh scripts if you ever publish it on GitHub pls ty
gabriel_syme#3220: haha, I think I don't know enough to mess it up that way
gabriel_syme#3220: my idea was to use smth like Jina I guess, does that fall into that category?
gabriel_syme#3220: at first though, maybe a pretrained model and a faiss db is a good start
elderfalcon#4450: I have no idea, I was just roasting arxiv sanity
elderfalcon#4450: A wonderful tool but definitely a little aged beyond its years haha
elderfalcon#4450: By the by, if anyone does want to work a replacement to ye ol' classic ASP feel free to pop a channel open and I can devote some hours to it. Backend and serverless-type tasks are def my weakness but happy to help where I could (and I think I could pull in some willing volunteers). :D
rom1504#5008: Jina is pretty clean
rom1504#5008: Especially for something fairly low scale like what you're thinking about building
gabriel_syme#3220: yeah I think so too, looks nice!
ilovescience#3282: I think karpathy is working on a new version of ASP now
ilovescience#3282: https://github.com/karpathy/arxiv-sanity-lite
jacquesthibs#6131: I'm trying to fine-tune GPT-2 on Colab right now, but keep running out of memory.
jacquesthibs#6131: I'm using this: <https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling>
jacquesthibs#6131: Is it maybe because I need to prepare the data in a special way?
jacquesthibs#6131: I've seen someone use "<endoftext>" tags. Do I need to add that to my custom dataset otherwise huggingface will not seperate things out properly, and that would lead to memory issues?
jacquesthibs#6131: Or is it supposed to be impossible to fine-tune GPT-2 on a P-100?
nostalgebraist#3542: the linked example says it works on a k80 so it should be fine on a p100
jacquesthibs#6131: Hmm
jacquesthibs#6131: Why do you think I'm getting:
> RuntimeError: CUDA out of memory. Tried to allocate 786.00 MiB (GPU 0; 15.90 GiB total capacity; 14.13 GiB already allocated; 593.75 MiB free; 14.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
> 0% 0/54 [00:00<?, ?it/s]
jacquesthibs#6131: I tried factory reset, but no change.
EricHallahan#1051: As mentioned in our FAQ, I recommend posing the question to the Hugging Face Community. We aren't really the ones to ask about the intricacies of Transformers unfortunately.
https://www.eleuther.ai/faq
cfoster0#4356: They've also got a Discord if you're interested
cfoster0#4356: Link in #communities
jacquesthibs#6131: Thanks for pointing me to the FAQ
jacquesthibs#6131: I'll go ask them.
jacquesthibs#6131: I think I was able to make it work. I had to reduce the batch_size down to 2.
elderfalcon#4450: So he said! I've been watching his git history and I feel it might be on ice from the general dearth of commits in his activity board after the actual flurry (some around when he made the announcement).
I'm guessing something a little more organized might be helpful -- he's got Tesla juggling on his hands iirc, and paper trawling might be a tall order, I guess if we all had our top 3 Christmas list stuff. I should probably just start getting down a paper arch though before putting any more feet in my mouth.....
But I guess we can all hope that he drops something and it's cool! Super hope for that one! :D
ilovescience#3282: he just made 4 commits today
ilovescience#3282: (lol thanksgiving holiday was probably the only time he got to work on this side project)
ilovescience#3282: anyway hopefully something interesting will come out of it soon
elderfalcon#4450: Niiiiiiiiiiiiice. Haven't checked in a while, that's p cool! Also, he should be spending time w family, but I roll w that!
chirp#4545: i literally just used this advice and it fixed my model https://twitter.com/karpathy/status/801621764144971776
chirp#4545: (i know it's a joke but it seriously worked)
chirp#4545: https://cdn.discordapp.com/attachments/729741769738158194/913671855590740008/unknown.png
Daj#7482: Does someone have that paper about how ML doesn't use error bars and therefore tons of results are probably just noise? I think @StellaAthena posted it before?
chirp#4545: I donโt remember the paper but there was an interesting counterpart recently, namely that for some kinds of experiments the error bars are too small to matter https://twitter.com/giffmana/status/1463581378814038016?s=12
CRG#8707: Another datapoint for error bars being small: <https://arxiv.org/abs/2102.11972> https://cdn.discordapp.com/attachments/729741769738158194/913721202093207592/0ae4a60f5c932fcaf20137e987181c57.png
fenton#9978: Quick question about The Pile -- does anyone know how many tokens it contains?
Daj#7482: Depends on your tokenizer. If you mean UTF characters, the whole dataset is 825GiB, so divide that by something between 1 and 4 bytes. Dunno if we have more precise numbers, cc @bmk @StellaAthena
fenton#9978: Thanks Connor, I am looking to compare the sizes of The Pile with the GPT-3 training dataset (in terms of tokens) -- so The Pile is ~206B tokens? (if you were to use the GPT tokenizer)
Daj#7482: We have a version tokenized with the GPT2 tokenizer somewhere that we should know the size of, I just don't know it off hand and I'm pretty sure bmk is asleep
Daj#7482: @Sid might also know if he's awake
naclbbr#9203: IIRC The Pile is at the least 300B tokens but I'm not sure if this number is raw size or effective size after running multi epochs for some of the components
pragmaticml#1730: Check out haystack from the German company DeepSet as well. Personally prefer their API and the founders are more than willing to help out if you hit sharp edges -- super friendly team.
gabriel_syme#3220: thanks I'll check it out!
fenton#9978: Perfect, that's good enough for me. Thank you!
Sid#2121: i think around 280B-300B tokens iirc
gabriel_syme#3220: deduplication of the training data completely erased the training instabilities I saw before, although it reduced the dataset by 2 OOMs lol (it's a really constrained domain heh). My guess is the model will be much better this way though
cgarciae#9238: In this context tokens != word tokens i.e. unique pieces?
Kia#2550: @Daj Ah Self promoting?
Daj#7482: Yes, @Igrushkina no self promotion without previous approval
Kia#2550: Thanks:thinkies:
ethan caballero#6044: How is perplexity on Lambada calculated in GPT-3 paper? Is it just e^cross_entropy?
Sid#2121: sub word tokens (each token might be a whole word or a part of a word)
cgarciae#9238: Oh wow, 200B tokens sounds crazy. Can you fit the Embedding layer on a single machine for this?
Character level models start sounding more reasonable at this point.
Sid#2121: oh, maybe you misunderstood - the vocabulary size (total number of tokens in the tokenizer) is only 50k or so. But the total number of tokens in the pretraining dataset is around 300B. This latter number has no effect on the size of the embedding.
cgarciae#9238: Ah thanks for the clarification! Yeah, I just skimmed really quickly over the conversation sorry
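(For a rough sense of the scale Sid is describing, with purely illustrative numbers: a ~50k-entry vocabulary and a hidden size of 4096 give an embedding matrix of about 50,257 × 4,096 ≈ 206M parameters, and that cost depends only on the vocabulary and hidden size, not on the ~300B tokens in the training data.)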
naclbbr#9203: An interesting experiment would be training a model w/ The Pile using a different tokenizer (ex. a very large vocab size ala AI21) :thinkies:
naclbbr#9203: I'm not sure how long AI21 trained their model, but outputs felt slightly stiffer
StellaAthena#3530: @naclbbr Jurassic was trained on the Pile. So, at least to an extent, that has already been done.
kurumuz#5695: huh, somehow i missed that detail
StellaAthena#3530: They didn't actually come out and say it
StellaAthena#3530: But I have confirmed it with the authors
kurumuz#5695: already done without many details ig? need to keep all the variables same
kurumuz#5695: :harold:
StellaAthena#3530: Oh boy you have no idea
StellaAthena#3530: I'm writing a paper rn
StellaAthena#3530: And going off on people not documenting shit in the footnotes
StellaAthena#3530: It's a blast
kurumuz#5695: o nice
kurumuz#5695: i still think ai21 did an extremely poor job about justifying that tokenizer
StellaAthena#3530: > Our model architecture and hyper-parameters largely follow \citet{brown2020language}, with a few notable deviations. The GPT-3 paper describes using sparse and dense layers in alternation, but what this actually constitutes is never discussed in detail. Indeed, our attention was recently drawn to the fact that GPT-3 did not use true dense attention at all in its \`\`dense'' layers, but rather used attention ``sparsely factorized across the heads resulting in a rank reduction of about 8x \citep{wtf-gpt3}.'' By contrast, we use dense attention throughout our model. We also use rotary embeddings \citep{su2021roformer} as our embedding type of choice, do not use weight decay, and use the parallelized attention and model initialization methods used in \citet{gpt-j}. For a full list of hyper-parameters, see \Cref{sec:hparam}.
kurumuz#5695: oh they used parallel attn +ff too?
kurumuz#5695: :tribalism:
kurumuz#5695: truly
kurumuz#5695: ~~seems like a replication of gptj with more compute~~
StellaAthena#3530: Less compute actually lol. This is for some interpretability experiments.
naclbbr#9203: IIRC AI21 never confirmed that they used The Pile (to some extent) but they cited The Pile in the paper
naclbbr#9203: So suspicious, yes
kurumuz#5695: stella confirmed it with authors. so take that as you will
naclbbr#9203: Ooh
naclbbr#9203: ๐ค
StellaAthena#3530: WuDao was also trained on the Pile, but you need to read the chinese language version of the conference slides for a conference that was in english to find this out
naclbbr#9203: Oh my gosh, Wu Dao as well?
kurumuz#5695: pile is already defacto training set for huge LMs
kurumuz#5695: pretty much
StellaAthena#3530: This is confusing, so let me clarify. There was a conference that was in English. The guy who spoke there uploaded english language and chinese language slides to his website. Only the (unpresented) chinese language slides mention what the English training data was
kurumuz#5695: interesting
kurumuz#5695: and begs the question of why
janus#0150: ? Oh your experiments, not the full model
StellaAthena#3530: This is the "scaling suite" for #interp-archive
StellaAthena#3530: I've been wanting to rant about reproducibility in NLP and this gives me the perfect opportunity to do so lol
StellaAthena#3530: Oh, I didn't quote the spiciest paragraph (from the intro)
> In this paper, we are concerned with questions of reproducibility in natural language processing and in particular in large transformer models. In addition to making our model suite fully open source and -- we hope -- fully reproducible, we follow the lead of [CITE MISTRAL] and make an active effort to document aspects of training not commonly discussed about training large language models. Some of these, such as the fact that GPT-3 does not actually contain any dense layers or that WuDao was trained on the Pile, are documented for the first time in the published literature.
Kia#2550: ~~More likely is From Testing out the model~~
Kia#2550: Wanted to point out something fishy,But I just kinda hold it for the moment
StellaAthena#3530: I'm training models which are close to (though not quite the same as) the GPT-3 models, starting at 125M and going as high as I can train in a reasonable amount of time, with a fully public training pipeline and releasing ~100 partially trained checkpoints for each model. The goal is to use these models to investigate how knowledge representations develop and evolve over time, and how those patterns change as the models scale, and to allow others to do the same in an actually comparable fashion.
StellaAthena#3530: apropos of NAI https://cdn.discordapp.com/attachments/729741769738158194/914157169526571048/Screen_Shot_2021-11-27_at_9.14.02_AM.png
kurumuz#5695: lol
Kharr#7888: Are you using any regularization methods? WD? Dropout?
StellaAthena#3530: No
Kharr#7888: Any particular reason? WD specifically has been gaining traction in LM
StellaAthena#3530: I haven't noticed that fact, can you point to some cases?
CRG#8707: Yeah, Alec Radford was insistent on "the benefit of wd"
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/914161587949035592/ee99a12e66d7ceecf9c9cffef67a5cca.png
Kharr#7888: Well, GPT3 used WD and you said you were training models close to them...GPT2 used dropout.
StellaAthena#3530: Yeah, it's generally close to them but uses some settings and arch modifications that we have found to work better. I said that not meaning that I was trying to replicate GPT-3 or something like that
Kharr#7888: As far as I know, only GPT-Neo had no dropout/wd from the public GPT style models
StellaAthena#3530: Jurassic didn't
Kharr#7888: I noticed that Jurassic compares to Neo in the paper, is it the same codebase?
StellaAthena#3530: I don't know. I assume not, but I don't know.
StellaAthena#3530: The 7B parameter model used in this paper also didn't (info provided privately by the author and is not documented in the paper) https://arxiv.org/abs/2111.00607
CRG#8707: Do we know that anyone other than openai have tried to see if wd is better/ ablate it?
Kharr#7888: Would be nice of authors had proper method sections like they do in other fields describing the setup.
CRG#8707: I'd trust the openai conclusion (WD is good) otherwise
StellaAthena#3530: ๐คทโโ๏ธ
Kharr#7888: I doubt it given the time + cost to train these models.
EricHallahan#1051: Same.
Kharr#7888: I'm personally sold on _some form_ of regularization from what I've seen
StellaAthena#3530: Can you give a good justification for using regularization in a one-epoch training regime?
StellaAthena#3530: Eh that's unfair. We don't have a "good justification" for most things
CRG#8707: They upsampled data, so it's not really one epoch
inox#5400: what do one-epoch style models do when the training data comes from an infinite stream?
StellaAthena#3530: Sure, but that's not generalizable across training datasets. That's a statement about their particular set up (to be fair, my data is also upsampled but it's the principle of the thing)
Kharr#7888: We have decent justification for L2 (which is similar to WD) from traditional statistics. Also, there is a great paper on how WD changes the loss landscape (trying to find it)
CRG#8707: I'd be more comfortable with no regularization if the data was deduplicated with the method from that google paper or something similar.
StellaAthena#3530: Yeah, I've been meaning to dedupe the Pile...
CRG#8707: Also something something WD increases effective LR
Kharr#7888: https://arxiv.org/abs/1712.09913
CRG#8707: AdamP showed that if the parameter norm grows (because of the momentum term), the effective LR becomes much smaller. https://arxiv.org/abs/2006.08217
CRG#8707: And WD / the AdamP method help counteract that.
StellaAthena#3530: Wow this is facinating
alstroemeria313#1694: this looks like a shitpost but it is real https://cdn.discordapp.com/attachments/729741769738158194/914169305439031366/Screen_Shot_2021-11-27_at_7.02.19_AM.png
Kharr#7888: Like this, right? https://cdn.discordapp.com/attachments/729741769738158194/914170451348041748/unknown.png
CRG#8707: Yeah exactly
CRG#8707: https://arxiv.org/abs/2010.09697
Kharr#7888: I think the interesting part about this is that there is still a decent minima without skip connections, but finding it might be extremely difficult. In theory, skip connections wouldn't be as useful in the later stages of training
alstroemeria313#1694: ah
alstroemeria313#1694: How do repvgg type skip connections do with this, I wonder
alstroemeria313#1694: Bc those are meant to be removed for inference
Kharr#7888: I would expect it to work pretty well. I've mentioned this on and off a few times, but if you implement skip connections as a matmul over depth the model will converge onto an MLP-like structure and prune them over time. There are some really interesting phases it goes through.
alstroemeria313#1694: Ahh
Sphinx#2092: There are also some papers which gently remove skip connections.
Sphinx#2092: Though they were focused on the CV domain.
Kharr#7888: That's not surprising, CV is a few years ahead of NLP since it had big $ invested into it first. A lot of good technology was developed there. It's nice that we're seeing a convergence with multimodal models now.
Sphinx#2092: sure but this paper is like 3 years old
Sphinx#2092: maybe time for some people to do it in nlp
Sphinx#2092: I think FAIR tried a baby version of this by removing one skip connection in the encoder
Sphinx#2092: but the community has had mixed feelings about it.
Kharr#7888: The NLP community likes to "invent" ideas which have been discovered years before. Might need another 2-3 years for this one to catch on. Then it can have a silly name like "Pruneformer" or something. And, of course, someone will have to make grand claims about it being effective because the brain undergoes a process during early childhood where once skills/language/etc are acquired there is synaptic pruning.
Sphinx#2092: "Skip Less Often with Pruneformer"
Kharr#7888: I am going to quote this statement in a year or two. This seems inevitable :berk:
CRG#8707: > Residual Is Not All You Need: Skip Less Often with Pruneformer
FTFY
StellaAthena#3530: Easy 200 citations
tpapp157#3643: One view of why skip connections work so well is because they change the nature of the function that the model needs to learn. Instead of learning the absolute function, the model instead learns the delta (loosely derivative) of the function. If you're familiar with ODEs or you've done a bunch of time series modeling in the past, you'll know that the delta is often a much simpler and much more straightforward function to learn.
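In code, that view is just the usual residual form; the block only has to model the delta while the identity path carries x through unchanged (a generic sketch, not any particular architecture):
```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # f only needs to learn the correction on top of the identity path.
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.f(x)
```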
louis030195#2462: I came across this quote from Dawkins, 2006
> The workings of the sensory systems are particularly baffling, because they can achieve far more sophisticated feats of pattern-recognition than the best and most expensive man-made machines; if this were not so, all typists would be redundant, superseded by speech-recognizing machines, or machines for reading handwriting. Human typists will be needed for many decades yet.
2021 models laugh at this
alstroemeria313#1694: it's from 1976 right?
alstroemeria313#1694: the 2006 version was a 30th anniversary edition?
Kharr#7888: Based on this, would you hypothesize that a network which keeps norms under control during training will have greater capacity and thus capability?
CRG#8707: Exactly, see for example: <https://arxiv.org/abs/2006.08217> https://cdn.discordapp.com/attachments/729741769738158194/914212449153081395/0921faaf1b0de9e9d4a0efe431afb35a.png
CRG#8707: I think prenorm / final layernorm creates norm invariance without the need for weight standardization.
Kharr#7888: This is all very interesting. I wonder if this would explain some of the capacity differences I've seen between architectures :thinkies:
nostalgebraist#3542: "you vs. the guy she told you not to worry about"
Sid#2121: We should be using wd since both gptj and the 20B model are using it - can you send me the configs before you start anything?
StellaAthena#3530: DW, I'm not crazy enough to start a training run without having you check everything twice. I have no idea where I got that from, because I thought the answer was yes and then figured I'd double check before spouting nonsense... But apparently I just failed to read the config file
chilli#5665: https://twitter.com/cHHillee/status/1464678200898904079?t=0bF7xcyEH3PlsqsOKULc3Q&s=19
Deleted User#0000: what are your thoughts on Julia?
chilli#5665: Pretty cool :)
chilli#5665: Has a lot of appealing properties
chilli#5665: Namely the combination of flexibility and performance
chilli#5665: + the way the ecosystem is set up to be composable is interesting
chilli#5665: I do wonder to what extent the same is possible in python though
chilli#5665: For ex, Jax/pytorch/tf all get perf from python through various graph extraction methods
tpapp157#3643: Haha Julia. We'll see where that goes. Maybe I'll consider learning it if it's still around 5 years from now. But considering the life expectancy of new languages I won't be holding my breath.
ColdCall#4288: Julia is awesome, but I only have one friend studying discrete math that uses it semi-often.
ColdCall#4288: Never really caught on *mainstream*
StellaAthena#3530: People talk about Julia the same way they talked about Perl tbh
AI_WAIFU#2844: daily reminder that julia will never go anywhere until they ditch 1-based indexing
ColdCall#4288: I have a soft spot for Perl and Ruby as they were the first languages I learned.
ColdCall#4288: Why do you think that?
AI_WAIFU#2844: because manipulating arrays with 1 based indexing is terrible.
AI_WAIFU#2844: Especially when you need to treat a 1 dimensional array as an N dimensional array or vice versa
AI_WAIFU#2844: which is a *really* common operation
EricHallahan#1051: MATLAB is terrible because it indexes from 1.
tpapp157#3643: Indexing from 0 or 1 doesn't matter at all. It's just an arbitrary convention that people seem to get strongly attached to for some reason (probably because it is arbitrary).
EricHallahan#1051: Indexing from 0 isn't arbitrary though; it directly reflects how the element is accessed. Indexing from 1 is done when the idea "humans count from one, so our notation should too" takes hold.
tpapp157#3643: Ok, that argument makes sense if you're coding in a very low level language where you're directly addressing hardware memory. The entire purpose of a high level language like Python is to not worry about that. Therefore, how you index is entirely arbitrary.
tpapp157#3643: You could index from any arbitrary number and it wouldn't functionally change anything.
guac#4716: yeah it makes no sense in a language where you don't even think of pointer arithmetic/offsets lol
janus#0150: I've long argued for indexing from 3 because 1 is the loneliest number and probably shouldn't be in an array and 2 can be as bad as 1.
Kharr#7888: This appears to be a deep rabbit hole. I spent some time testing a few optimizers with a few different architectures and some toy data and the behavior is pretty odd. The one thing that is pretty clear is that WD + anything = good (in terms of validation accuracy on unseen data). So it looks like it helps generalization (at least on the toy data). The odd part is the weights can take on a very different distribution depending on the optimizer used and the end result can still perform nearly identical.
Kharr#7888: Without WD Adam will also overfit noisy samples at the drop of a hat :berk: The background is colored wrt the decision boundary of the NN.. it made these hilarious tiny pockets for individual data points which are mixed in with the opposite class (without WD vs with WD below) https://cdn.discordapp.com/attachments/729741769738158194/914260856081088532/unknown.png,https://cdn.discordapp.com/attachments/729741769738158194/914260856290807838/unknown.png
tpapp157#3643: github seems to be down :blobsad:
alstroemeria313#1694: this is adamw, right, not regular adam + l2 loss on the params added to the calculated gradient?
Kharr#7888: Yes, AdamW
alstroemeria313#1694: :)
nostalgebraist#3542: i remember reading a bunch of papers saying small-L2-norm solutions generalize better, among those achieving identical training loss, which seems like a straightforward argument for WD
CRG#8707: There was "stable WD" a while back that proposed param - (LR * WD * param) / mean(adam_denominator).
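A rough sketch of that update rule as I read it (illustrative only, not the paper's reference implementation; `exp_avg_sq` stands in for Adam's second-moment buffer):
```python
import torch

def stable_weight_decay_step(param, exp_avg_sq, lr, wd, eps=1e-8):
    # Decoupled decay lr*wd*p, rescaled by the mean Adam denominator so the
    # effective decay tracks the effective step size as gradients shrink.
    denom_mean = exp_avg_sq.sqrt().add(eps).mean()
    param.data.sub_(lr * wd * param.data / denom_mean)
```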
tpapp157#3643: Has anyone played with AdamP? I meant to try it out a while ago and forgot. https://clovaai.github.io/AdamP/
CRG#8707: Needs much smaller WD (something like 1e-4) https://cdn.discordapp.com/attachments/729741769738158194/914267936678608896/Screenshot_20211127-223414.png
Kharr#7888: I've thought about that point as well... "WD calibrated for the first iterations of training will be too large for later iterations" :think:
alstroemeria313#1694: well, yeah
alstroemeria313#1694: but it should be calculated for later iterations though right >_>
nostalgebraist#3542: i'm kinda worried about what that would do when you get near the end of training, when the gradients are all small and mostly noise
nostalgebraist#3542: adam has a tiny denominator but also makes small updates b/c the numerator is also tiny and cancels itself out over a few iterations
nostalgebraist#3542: but the WD step points in the same direction every step, and gets divided by the small denominator
nostalgebraist#3542: but i guess if this hurts the model, grads will appear and adam will push back
alstroemeria313#1694: yeah
nostalgebraist#3542: intuitively it seems like this would still have a different equilibrium than regular WD, which i guess is the point... i should just read the paper
alstroemeria313#1694: what is their "norm loss"
CRG#8707: https://arxiv.org/abs/2106.13731
alstroemeria313#1694: oh
alstroemeria313#1694: trying to make the weights norm 1
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/914271041709297694/Screenshot_20211127-224634.png
gabriel_syme#3220: Could this be at all related to J and the 2nd epoch collapse you saw?
nostalgebraist#3542: iirc i saw the same phenomenon with and without WD (not sure).
nostalgebraist#3542: although an interesting thing with J finetuning and WD is that i originally figured it didn't matter, since turning it on/off had no discernible effect on train or val loss. so i kept it at 0.1 for most of my runs, and then i decided to turn it off on a whim, and then became convinced the samples were subjectively better despite no loss difference
nostalgebraist#3542: take this with a big grain of salt, i can't remember what convinced me and it's not like i did a blinded test or anything
nostalgebraist#3542: i should have written down the set of observations that actually convinced me, oh well
gabriel_syme#3220: thank you! that is interesting, I can go back now and train another J without WD on my latest (deduped) data and see
alstroemeria313#1694: @nostalgebraist did you try feeding timestep into the text encoder yet, how'd it go
nostalgebraist#3542: I did, made some other changes too. Still no noticeable improvement in generated images
nostalgebraist#3542: Tried a bunch of other stuff too
nostalgebraist#3542: Possibly I just need to make a bigger synthetic dataset. My current one is 275k
naclbbr#9203: I am guessing that GPT-J's default WD of 0.1 is designed after the fact that GPT-3 was also WD = 0.1
naclbbr#9203: we may need weight decay decay for smaller models
ethan caballero#6044: If evolution strategies is used instead of SGD, does training GPT-4 via Folding@Home make sense (i.e. latency/communication_bandwidth wouldn't be an issue)?
alstroemeria313#1694: i can't even optimize a stylegan W+ latent (9216 dim) using evolution strategies
alstroemeria313#1694: good luck
ethan caballero#6044: I know, but assume you were using evolution strategies.
EricHallahan#1051: No, because that doesn't circumvent the communication bottleneck and resource constraints of personal computers. You still need to manipulate the entire model somehow.
alstroemeria313#1694: i would expect it to not go anywhere at all.
alstroemeria313#1694: dimensionality too high.
ethan caballero#6044: You're basically saying model parallelism still would be an unavoidable requirement that requires high bandwidth communication, even if evolution strategies is used?
EricHallahan#1051: (I make the assumption that "GPT-4" == "bigger GPT-3", as "GPT-4" is undefined.)
alstroemeria313#1694: even if you used ES you would still have to run forward passes and that involves the whole model
EricHallahan#1051: Even if you could massively reduce the internode communication and the influence of latency, you would still be massively bottlenecked by resource constraints.
StellaAthena#3530: I mean, where do you intend to fit the model
alstroemeria313#1694: Not using backprop just saves you the backward pass
StellaAthena#3530: The core conceit of F@H is to use consumer GPUs
StellaAthena#3530: You're going to have to send models across the internet
StellaAthena#3530: Regardless of how you train
StellaAthena#3530: Like, even if God comes down and tells one of the contributors the next step of the training
tpapp157#3643: If by evolution strategies you mean something like genetic algorithms, PPO, etc, those aren't very good at optimizing large NNs. The loss landscape is too complex.
EricHallahan#1051: > The core conceit of F@H is to use consumer ~~GPUs~~ hardware.
ftfy, F@H supports CPU-only computation.
pebbles#7130: without analytical gradients, I don't see how LLMs could be realistically trained
CRG#8707: Feedback alignment?
pebbles#7130: not sure if you mean something like DFA, which iirc needs some form of gradient for the last layer
tpapp157#3643: I mean it's technically doable, you just wouldn't get anywhere close to the training efficiency or final model quality of gradient descent.
pebbles#7130: right, exactly. Afaik on supervised tasks where analytical gradients are accurate, SGD blows ES out of the water
ethan caballero#6044: So GPT-4 inference would be super slow using Folding@Home?
alstroemeria313#1694: you would have to ship activations from one box to another over the internet
alstroemeria313#1694: this would be slow
alstroemeria313#1694: it would have to go through all the layers
alstroemeria313#1694: in order
alstroemeria313#1694: this gets you one next token
EricHallahan#1051: :gameryes:
tpapp157#3643: If you could parallelize across enough computers then overall samples/s could catch up but you'd need to be processing a lot of samples consistently to see those efficiency gains.
alstroemeria313#1694: latency would be bad
bmk#1476: how much value has f@h actually produced
EricHallahan#1051: Latency would reverse the effect of scaling further beyond a point.
tpapp157#3643: Yeah latency for an individual sample would be bad, but for 1000 samples overall processing speed could be good. There's only so far you can push a single machine before it caps out but you can scale parallel computation far beyond that.
EricHallahan#1051: Well you can't start predicting the next token until the first has completed.
tpapp157#3643: No but you can be working on another input. It's the assembly line approach.
EricHallahan#1051: Pipelining works for increasing throughput, not latency.
tpapp157#3643: Right. That's my point
bmk#1476: something something 9 women
EricHallahan#1051: Your latency will still suck if you have to send the activations halfway around the world, no matter how many computers you have available. And we haven't even touched the heterogeneous compute aspect yet.
tpapp157#3643: If you're taking a distributed computing approach then low latency isn't even an option so no point in worrying about it. Focus on the strength of the system which is massive parallel computation.
bmk#1476: latency typically matters more than throughput for the vast majority of applications
cfoster0#4356: Thought Ethan asked about training...
tpapp157#3643: With an evolutionary approach (no gradient calculation) training and inference are basically the same. Just lots of forward passes.
cfoster0#4356: Fair
ethan caballero#6044: here's the citation:
https://openai.com/blog/evolution-strategies/
bmk#1476: gradient descent is safer than evolution
EricHallahan#1051: The F@H backend is a massive job scheduler. There is no communication of any intermediate results. The job is scheduled, processed by the worker, and sent back. Every job is entirely asynchronous.
alstroemeria313#1694: doubt
alstroemeria313#1694: How big were these nets
ethan caballero#6044: small
alstroemeria313#1694: ah.
alstroemeria313#1694: > Mathematically, you'll notice that this is also equivalent to estimating the gradient of the expected reward in the parameter space using finite differences, except we only do it along 100 random directions.
alstroemeria313#1694: this looks really bad
tpapp157#3643: In the RL context, techniques like PPO often produce competitive results to gradient based techniques for a few reasons. The gradient estimation is often very poor and the loss landscape is very poorly behaved meaning there's substantial noise in the calculated gradient and it may even be pointing in unhelpful directions relative to the global minima. Techniques like PPO take a much broader view of the loss landscape which allows a more global gradient estimation that can avoid the messy local minima that SGD falls into.
alstroemeria313#1694: like in a "can't possibly scale to large nets" way
Deleted User#0000: PPO is a gradient based technique trained with SGD though :knoht:
tpapp157#3643: There's a small but important difference. PPO estimates the global gradient by running a bunch of simulations with perturbed parameters. In a sense, estimating the gradient across many policies (hence the name). Traditional DL RL techniques calculate the gradients directly for many runs of a single policy. So PPO requires no backward passes, while the latter does. Hopefully that made sense.
parzival#5010: PPO definitely requires backward passes, you are talking about this right? https://arxiv.org/abs/1707.06347
tpapp157#3643: I don't have time to refresh my memory right now but it's entirely possible I have PPO mixed up with another algorithm.
Dashiell#8739: I'm trying to do a thing where I need to backpropagate CLIP, in addition to my own model and @alstroemeria313 's (small) diffusion model, but it's blowing up my GPU's memory. Do I:
1. Try and figure out how to use haiku's "experimental" mixed precision w/ CLIP and/or diffusion?
2. Activate my TRC membership / whatever it's called and try and use a TPU
3. See if I can get this done without backpropagating through CLIP
???
Dashiell#8739: any and all advice is appreciated
alstroemeria313#1694: which model is it? we backprop through CLIP all the time for stuff, it generally works well
alstroemeria313#1694: wait how big is the GPU
Dashiell#8739: 3090, 24gb GPU RAM
Dashiell#8739: it's the small model
alstroemeria313#1694: which small one?
Dashiell#8739: I can backpropagate through CLIP on its own
alstroemeria313#1694: ah
Dashiell#8739: but I think CLIP + your small model + my similarly sized model is breaking it
alstroemeria313#1694: oh
Dashiell#8739: doing precise jax memory profiling seems to be dark magic, so I am just sorta guessing based on back of the envelope calculations
alstroemeria313#1694: well TPUs have less memory per core than a 3090
alstroemeria313#1694: and weird padding stuff that increases memory requirements for the same thing vs on a gpu
Dashiell#8739: so I'd have to split it up?
alstroemeria313#1694: yeah
Dashiell#8739: hmmmmm
alstroemeria313#1694: but like. how big are these models
Dashiell#8739: perhaps bigger than they need to be?
Dashiell#8739: this is what I was talking to you about earlier this week, I've just been slow to implement it
alstroemeria313#1694: Is it actually OOMing
alstroemeria313#1694: Or just allocating a lot of memory?
Dashiell#8739: I am not sure
Dashiell#8739: this seems to be the most informative part of the error
```
jax._src.traceback_util.UnfilteredStackTrace: RuntimeError: RESOURCE_EXHAUSTED: Failed to allocate request for 2.31MiB (2420736B) on device ordinal 0
BufferAssignment OOM Debugging.
BufferAssignment stats:
parameter allocation: 2.31MiB
constant allocation: 1B
maybe_live_out allocation: 2.31MiB
preallocated temp allocation: 0B
total allocation: 4.62MiB
total fragmentation: 1B (0.00%)
```
alstroemeria313#1694: Looks like OOM
alstroemeria313#1694: :/
Dashiell#8739: how many parameters is the 128x128 diffusion model?
alstroemeria313#1694: Are you trying to backprop through an entire sampling process
Dashiell#8739: yes....
alstroemeria313#1694: Oh no :/
alstroemeria313#1694: You need to use gradient checkpointing for that
Dashiell#8739: hmmmmm
Dashiell#8739: ok
alstroemeria313#1694: IDK how to do it in JAX well
alstroemeria313#1694: @nshepperd ?
Dashiell#8739: it should have been immediately obvious to me that backpropagating through the entire sampling process would have caused this....
Dashiell#8739: hmmmm
alstroemeria313#1694: nshep fixed the openai pytorch models' gradient checkpointing so she could backprop through a bunch of DDIM steps
alstroemeria313#1694: and generally knows JAX better than me
Dashiell#8739: though technically if I make the model more complicated and do something like Q learning I wouldn't necessarily need to backpropagate through the entire sampling process....
Dashiell#8739: @nshepperd any advice you have or directions you can point me in for implementing gradient checkpointing in jax would be greatly appreciated
alstroemeria313#1694: wikiart_128 is 243M
alstroemeria313#1694: wikiart_256 is 291M
alstroemeria313#1694: imagenet_128 is 290M
alstroemeria313#1694: danbooru_128 is 564M
Dashiell#8739: hmmm my model probably is too big then, I'm getting 358M
Dashiell#8739: I basically just copied the code from wikiart_128 and then added stuff to encode the timestep and where the diffusion is relative to the CLIP target
Dashiell#8739: but I probably don't need a whole extra 100M
Dashiell#8739: I clearly put too much thought into what was the minimally sophisticated training strategy and not enough into the actual implementation details
Dashiell#8739: why is danbooru_128 so much bigger?
alstroemeria313#1694: the dataset is huge in comparison
Dashiell#8739: gotcha, interesting
Dashiell#8739: I mean, I can also try and just do this without backpropagating through CLIP as _part of the model_ and instead just treat it like a more traditional (err modern) RL problem: just observations produced in a black box, no attempt to model the state
Dashiell#8739: but I liked the idea of treating this as a physics-ish backpropagate-through-the-model-dynamics sorta thing
alstroemeria313#1694: yeah
alstroemeria313#1694: people who do neural ODE stuff must use smaller models i guess
Dashiell#8739: ¯\_(ツ)_/¯
alstroemeria313#1694: the fact that we can train the diffusion model timesteps independently spoils us and lets us use super chonk models
alstroemeria313#1694: oh wait. you know what it is they actually do.
alstroemeria313#1694: There are methods that let you not store activations when you backprop through the whole process
alstroemeria313#1694: but they require the model to have smooth second derivatives i think? not entirely sure
alstroemeria313#1694: anyway checkpointing ought to reduce the memory use, not by this much but it might be enough
Dashiell#8739: I should look into that
alstroemeria313#1694: the "adjoint method" https://github.com/rtqichen/torchdiffeq
alstroemeria313#1694: You can't use relu i think.
alstroemeria313#1694: They say use softplus or smth instead.
Dashiell#8739: interesting
alstroemeria313#1694: Actually they just say "avoid" relu and leaky relu, I am not 100% sure they 100% break it
CRG#8707: You can also use revnets
alstroemeria313#1694: also this repo's adaptive step size methods try to evaluate the diffusion model at negative timesteps
alstroemeria313#1694: so you will have to define those (I think you can define them in terms of an existing model)
Dashiell#8739: ok, I'm definitely gonna look into this
alstroemeria313#1694: or else find a way to tell it not to do that
Dashiell#8739: ty ty
alstroemeria313#1694: but yeah with a v objective diffusion model you can integrate it with an ODE solver from your starting timestep to your ending timestep and it should work
alstroemeria313#1694: i think one hiccup is that you want to output pred at the end, not the partially noised image
alstroemeria313#1694: but it should work otherwise
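A hedged sketch of what that might look like with torchdiffeq's adjoint-mode solver. The timestep convention (t in [0, 1] with alpha = cos(t·pi/2), sigma = sin(t·pi/2), so the probability-flow ODE is dx/dt = (pi/2)·v(x, t)), the `model(x, t)` signature, and the tolerances are all assumptions; a given checkpoint may use a different parameterization.

```python
import math
import torch
from torchdiffeq import odeint_adjoint as odeint

class VField(torch.nn.Module):
    """Wraps a v-objective diffusion model as an ODE right-hand side for torchdiffeq."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, t, x):
        # torchdiffeq passes a scalar t; broadcast it into a batch of timesteps.
        # Note: adaptive solvers may probe t slightly outside [t_end, t_start],
        # so the model needs to tolerate that (as mentioned above).
        ts = t * torch.ones(x.shape[0], device=x.device, dtype=x.dtype)
        return math.pi / 2 * self.model(x, ts)

def sample_ode(model, x_start, t_start=1.0, t_end=1e-3, rtol=1e-4, atol=1e-4):
    t = torch.tensor([t_start, t_end], device=x_start.device)
    xs = odeint(VField(model), x_start, t, rtol=rtol, atol=atol)
    x = xs[-1]
    # Output pred at the end rather than the (slightly) noised image.
    ts = torch.full((x.shape[0],), t_end, device=x.device)
    v = model(x, ts)
    alpha, sigma = math.cos(t_end * math.pi / 2), math.sin(t_end * math.pi / 2)
    return alpha * x - sigma * v
```

Because the adjoint method recomputes the trajectory during the backward pass instead of storing activations, memory use stays roughly constant in the number of solver steps, which is the appeal here.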
nshepperd#2316: couldn't you approximate it by doing the reverse diffusion process
nshepperd#2316: like with ddim
nshepperd#2316: that's basically doing the ode thing manually
nshepperd#2316: i guess
Dashiell#8739: I don't understand, approximate the gradients of my model?
nshepperd#2316: like you do your n sampling steps to generate your image. then do sampling in reverse, step by step, backproping through one step at a time to get the grad for the previous noisy image
Dashiell#8739: ohhhhhhh
nshepperd#2316: while accumulating grads for the model itself i guess
alstroemeria313#1694: i think they're trying to learn a thing to control DDIM step size though
alstroemeria313#1694: which complicates things
nshepperd#2316: ohhhh
nshepperd#2316: hm yeah i have no idea how that would work
alstroemeria313#1694: i suggested checkpointing but idk how to do that in JAX super well
nshepperd#2316: like to calculate the derivative wrt the step size
nshepperd#2316: well, if you have enough ram to store the gradients for every noisy image, you can just checkpoint the sample_step?
nshepperd#2316: like in your sample loop, x = jax.checkpoint(sample_step)(params, x, t1, t2)
nshepperd#2316: and then differentiate the whole sample loop
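A minimal sketch of that checkpointed sample loop. `sample_step(params, x, t1, t2)` stands in for one DDIM-style update and `clip_loss` for whatever differentiable objective is applied to the final image; both are placeholders, not real functions from any particular repo.

```python
import jax

def sample(params, x, ts):
    # Wrapping each step in jax.checkpoint means only the step inputs are saved;
    # intermediate activations are recomputed during the backward pass.
    step = jax.checkpoint(sample_step)  # sample_step is a placeholder DDIM-style update
    for t1, t2 in zip(ts[:-1], ts[1:]):
        x = step(params, x, t1, t2)
    return x

def loss_fn(params, x_init, ts):
    image = sample(params, x_init, ts)
    return clip_loss(image)  # placeholder: some CLIP-based objective on the output

# Differentiating the whole sample loop, as described above:
# grads = jax.grad(loss_fn)(params, x_init, ts)
```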
nshepperd#2316: only problem is those gradients for the noisy images might be padded to quite a large size bc TPUs :/
Dashiell#8739: right now I'm using a GPU
nshepperd#2316: like because they are [n, 3, h, w]
nshepperd#2316: ah, gpu should be fine
cfoster0#4356: Are they rebranding to "platform models"? 🤔
Edit: maybe this is just the MSFT branding of it
https://youtu.be/a7G0no5KjfU
tpapp157#3643: Why can't we just stick to feature extractors? At least that's what I've always called large pretrained models designed to support many downstream use cases.
cfoster0#4356: I feel like feature extractor, as a term, is kind of content-free, which makes it harder to pitch as a researchable artifact (which is presumably what the CRFM is after)
cfoster0#4356: Compare:
🥼: "Hi yes we'd like $100M please"
🦆: "For what?"
🥼: "To study... *functions*."
🦆: "GTFO my office..."
tpapp157#3643: True. "Platform Model" may not actually mean anything but it does sound fancy. It's very buzzwordy.
gabriel_syme#3220: oh no another one :berk:
bmk#1476: i should write a blog post "introducing" the term Big Models just for the memes
chirp#4545: Large Scale Transformer Models
ilovescience#3282: Goose Models
StellaAthena#3530: Massive
Overparameterized
Differentiable
Engines for
Language
StellaAthena#3530: Note that it's MODELs, not MODELS. That's very important
gabriel_syme#3220: but can it be a paper and not a blog? I can use the citations
gabriel_syme#3220: also, can I be the 46th author?
bmk#1476: if you can find 45 other people who will allow their name to be written on the paper, then yes
gabriel_syme#3220: cool, maybe this can be the April Fools' one
bmk#1476: though.. why stop at 45
gabriel_syme#3220: we can crowdsource in here, easy. Come on lurkers
ilovescience#3282: i'll be an author :berk:
ersatz#0001: I wonder if authors are already using the OpenAI API to write
ersatz#0001: or even GPT-J
ersatz#0001: (I mean professional authors)
Balaji#3611: Hi all.. One question on the GPT-2 tokenizer used in GPT-J. I am adding some custom tokens to the pretrained tokenizer and resizing the model embedding.
During fine-tuning, do these newly initialized weights get updated in the embedding?
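For reference, a minimal sketch of that workflow with Hugging Face transformers; the added token strings are made up and the checkpoint name is just illustrative. Once the embedding matrix is resized, the new rows are ordinary trainable parameters, so they do get updated during fine-tuning unless the embedding layer is explicitly frozen.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# Add custom tokens (illustrative names) and grow the embedding matrix to match.
num_added = tokenizer.add_tokens(["<my-token-1>", "<my-token-2>"])
model.resize_token_embeddings(len(tokenizer))

# The new embedding rows are randomly initialized parameters like any others;
# as long as requires_grad is True they receive gradients during fine-tuning.
emb = model.get_input_embeddings()
print(num_added, emb.weight.shape, emb.weight.requires_grad)
```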
gabriel_syme#3220: Now those are some cool research topics
https://twitter.com/mmbronstein/status/1465286698884022275
alstroemeria313#1694: @nshepperd do you think it is, by any chance, focusing only on low timesteps bc the scores there are so huge.
nshepperd#2316: oh
nshepperd#2316: huh i guess they would be yeah
nshepperd#2316: should maybe reweight
alstroemeria313#1694: also um i forgot to actually change it to softplus
nshepperd#2316: eheh
alstroemeria313#1694: so it was relu ofc
nshepperd#2316: is there some sort of analogy of v objective for this. like if we reweight it so it's scaled like v instead. and then simplify the formulas
alstroemeria313#1694: lol softplus is *way* better! https://cdn.discordapp.com/attachments/729741769738158194/914881231915716679/demo_00001-14.png
nshepperd#2316: ooh
nshepperd#2316: wow
alstroemeria313#1694: we can reweight scores so they are scaled like eps
alstroemeria313#1694: right?
nshepperd#2316: yeah
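For reference, the identity behind that reweighting, assuming the usual Gaussian perturbation being discussed:

```latex
% For x_t = \alpha_t x_0 + \sigma_t \varepsilon with \varepsilon \sim \mathcal{N}(0, I):
\nabla_{x_t} \log p(x_t \mid x_0)
  = -\frac{x_t - \alpha_t x_0}{\sigma_t^2}
  = -\frac{\varepsilon}{\sigma_t}
\quad\Longrightarrow\quad
\varepsilon = -\sigma_t \, \nabla_{x_t} \log p(x_t \mid x_0)
```

so multiplying the score by -sigma_t puts it on the same scale as eps.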
alstroemeria313#1694: however pred is still going to be ill-conditioned early on
alstroemeria313#1694: even if we weight the output like eps, unweight it to compute the traces and stuff, and weight those loss values back
alstroemeria313#1694: this loss variance is ludicrously high though
alstroemeria313#1694: like it is absurd even by diffusion model standards
nshepperd#2316: heh
nshepperd#2316: or rewrite scores in terms of v and noised image, and try to simplify the formulas to get something well conditioned
alstroemeria313#1694: *nods*
alstroemeria313#1694: oh huh https://cdn.discordapp.com/attachments/729741769738158194/914886938266325022/demo_00005-13.png
nshepperd#2316: working?
alstroemeria313#1694: yep~
alstroemeria313#1694: it's eps objective rn
nshepperd#2316: ahh
alstroemeria313#1694: bc i need to figure out the right weighting for v
alstroemeria313#1694: Amazing, this weird Jacobian trace method actually works
nshepperd#2316: ehehe~
alstroemeria313#1694: 14 epochs https://cdn.discordapp.com/attachments/729741769738158194/914893041368698880/demo_00014.png
alstroemeria313#1694: Now can you train an EBM with it.
nshepperd#2316: ooh
alstroemeria313#1694: like a timestep conditioned EBM
alstroemeria313#1694: that is going to be weird bc the log densities at different timesteps are going to be scaled very differently
nshepperd#2316: ...like with a third order gradient?
nshepperd#2316: bc the ebm outputs unnormalized log density?
alstroemeria313#1694: yeah you have to add an estimator of the Hessian trace to the loss
alstroemeria313#1694: so triple backprop
alstroemeria313#1694: so you have sigma squared on the denominator of the gaussian log density
alstroemeria313#1694: so we should multiply the output log density by sigma^2 maybe
alstroemeria313#1694: before backprop to get the score
alstroemeria313#1694: or after, i guess
alstroemeria313#1694: for numerical reasons
nshepperd#2316: either should be okay i think? for multiplying
nshepperd#2316: upscaler is improving i think... seems to need a lot of training https://cdn.discordapp.com/attachments/729741769738158194/914897834422198372/upscaler_new_450k.png
nshepperd#2316: like probably a few more days
nshepperd#2316: that's 450k steps
nshepperd#2316: like details at the scale of 1 pixel are taking a long time to appear
nshepperd#2316: idk why
alstroemeria313#1694: 32 epochs https://cdn.discordapp.com/attachments/729741769738158194/914899307444965406/demo_00032-4.png
kurumuz#5695: woah, based
kurumuz#5695: @aero would most likely love this
kurumuz#5695: @nshepperd does it work with diffusion
alstroemeria313#1694: it is diffusion, isn't it
nshepperd#2316: yeah it is diffusion based upscaling
kurumuz#5695: how slow is the inference
alstroemeria313#1694: the EBM is bad :/
kurumuz#5695: how many steps do you need to do
alstroemeria313#1694: 10 epochs https://cdn.discordapp.com/attachments/729741769738158194/914901554342035526/demo_00010-10.png
kurumuz#5695: maybe progressive distillation works with it
kurumuz#5695: that would be cool
nshepperd#2316: emm, maybe 2 minutes to upscale something from 128x128 to 512x512 with 250 steps
nshepperd#2316: on my 3090
nshepperd#2316: multiply the time for more pixels
alstroemeria313#1694: But like I still cannot believe that Jacobian trace estimator thing works.
nshepperd#2316: crunchy
alstroemeria313#1694: eheh
nshepperd#2316: ehehe yeah
nshepperd#2316: however every upscaler i have tried is pretty finicky with diffusion outputs because it likes to upscale artifacts too
nshepperd#2316: usually i have to downscale 2x, then upscale 4x, for a net of 2x, to get a decent result
nshepperd#2316: like mask diffusion artifacts by downscaling first
nshepperd#2316: ok bedtime now~
alstroemeria313#1694: nightnight~
nshepperd#2316: night~ 🌸
alstroemeria313#1694: so we don't even need to compute model targets for this Jacobian trace method... we only need to be able to sample from the perturbed data distributions (sample model inputs)
alstroemeria313#1694: Can we easily use different perturbations than Gaussian then?
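The "Jacobian trace method" isn't spelled out in the log; assuming it refers to a sliced / Hutchinson-style score matching objective (which has exactly the property described: it needs only samples of the model inputs, no explicit targets), a minimal PyTorch sketch might look like the following. The `score_model(x, t)` signature is an assumption.

```python
import torch

def sliced_score_matching_loss(score_model, x, t):
    """One Hutchinson sample of a sliced score matching objective.

    loss = 1/2 * ||s(x, t)||^2 + v^T (ds/dx) v  with v ~ N(0, I).
    Only samples x from the perturbed data distribution are needed;
    no model targets are computed anywhere.
    """
    x = x.requires_grad_(True)
    s = score_model(x, t)
    v = torch.randn_like(x)
    # v^T (ds/dx) v via a single vector-Jacobian product.
    (grad_sv,) = torch.autograd.grad((s * v).sum(), x, create_graph=True)
    trace_term = (grad_sv * v).flatten(1).sum(dim=1)
    norm_term = 0.5 * s.pow(2).flatten(1).sum(dim=1)
    return (norm_term + trace_term).mean()
```

For the EBM variant discussed above, `s` would itself be the gradient of a learned log density, which is where the extra order of differentiation (the "triple backprop") comes from.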
alstroemeria313#1694: this is what happens when you change the 1/2 squared L2 term to 1/4 squared L2 https://cdn.discordapp.com/attachments/729741769738158194/914928733675192360/demo_00028-2.png
alstroemeria313#1694: idek lol
alstroemeria313#1694: like. if you don't have to compute model targets you can't get them wrong accidentally. ^^;;
Quill#9732: I'm training an upscaling diffusion model and also seeing very slow progress - what model size are you using?
janus#0150: It's beautiful ๐ฅฒ
alstroemeria313#1694: it's very smooth
alstroemeria313#1694: i am not sure why.
janus#0150: What's the L2 loss on? During diffusion or during training? If during diffusion, is it because it was trained with 1/2 squared? I would have guessed decreasing the metric weight would make them less smooth in either case....
spirit-from-germany#1488: We hit 2B samples 🥳 https://cdn.discordapp.com/attachments/729741769738158194/914943166937980958/IMG_0397.png
alstroemeria313#1694: during training.
alstroemeria313#1694: It is on the gradient of the log density that the model learns. (It outputs the gradient directly and doesn't calculate the density.)
alstroemeria313#1694: Pushing it toward 0.
nostalgebraist#3542: why does pytorch have such bad built-ins for multihead attn...
alstroemeria313#1694: mm?
nostalgebraist#3542: like, this isn't even my main gripe, but. what the hell is this https://github.com/pytorch/pytorch/blob/3bd7dbf1196fcdd327ec09993444d5c1f5b8757f/torch/nn/functional.py#L5256-L5259
```
if need_weights:
# average attention weights over heads
attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
return attn_output, attn_output_weights.sum(dim=1) / num_heads
```
alstroemeria313#1694: What
nostalgebraist#3542: "average attention weights over heads"? is that a thing anyone wants?
alstroemeria313#1694: Is that for some sort of attention map visualization thing
nostalgebraist#3542: and of course it's undocumented, outside this comment
alstroemeria313#1694: Or is it actually used in training/inference
nostalgebraist#3542: similar to HF, they let you optionally get attention weights as a second return value of the forward pass
nostalgebraist#3542: but they don't have a head dimension, because they average over the heads
nostalgebraist#3542: (in a normal transformer this would not be used in training/inference)
nostalgebraist#3542: my bigger gripe is that they take inputs called "query, key, value" which are actually the *sequences used to form* the q/k/v
nostalgebraist#3542: and then they bundle the "in projection" into the forward pass
nostalgebraist#3542: and have shape checking that enforces that the input sequences all have the same dim, before projecting them. so if you have e.g. a src sequence of dim 512 and a tgt sequence of dim 256, you can't do separate (512 -> 256) projections for k and v
alstroemeria313#1694: ugh
alstroemeria313#1694: sometimes you just have to write it yourself
alstroemeria313#1694: i would recommend mingpt as a reference
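In the "write it yourself" spirit, a minimal sketch of a multi-head attention module that addresses both gripes above: the query and key/value sequences may have different feature dims, and the returned attention weights keep their head dimension instead of being averaged over heads. All names and shapes are illustrative, not any particular library's API.

```python
import math
import torch
import torch.nn as nn

class SimpleMultiheadAttention(nn.Module):
    def __init__(self, q_dim, kv_dim, d_model, n_heads):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(q_dim, d_model)
        self.k_proj = nn.Linear(kv_dim, d_model)
        self.v_proj = nn.Linear(kv_dim, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, q_seq, kv_seq):
        b, tq, _ = q_seq.shape
        tk = kv_seq.shape[1]
        split = lambda x, t: x.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        q = split(self.q_proj(q_seq), tq)   # [b, heads, tq, d_head]
        k = split(self.k_proj(kv_seq), tk)  # [b, heads, tk, d_head]
        v = split(self.v_proj(kv_seq), tk)
        attn = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(self.d_head), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, tq, -1)
        # attn is returned per head: [b, heads, tq, tk], no averaging.
        return self.out_proj(out), attn
```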
Grey M.#7151: Can someone point me to the specifics of the Discord ToS regarding data collection?
Wondering if theres any ethical and legal means to get discord messages to use for chatbot training
Grey M.#7151: Like would creating a seperate channel for data collection with some sort of stated agreement/consent form be good enough
tpapp157#3643: Normally if it's a public forum (twitter, twitch, etc) then it's perfectly legal to scrape data but because discord communities require an invite link to gain access I think they would legally count as a private forum and therefore scraping would be illegal. I'm not a lawyer though.
Grey M.#7151: well really an invite is private in the same way that a link is private
discord invites can still appear in search results if it's a public server
EricHallahan#1051: Users own the content of their messages. My understanding is that you would need to have the consent of every user you scrape messages from, and it is unclear if scraping is even allowed under the ToS.
> __**Rights to use the Service**__
> You agree not to (and not to attempt to) (i) use the Service for any use or purpose other than as expressly permitted by these Terms; (ii) copy, adapt, modify, prepare derivative works based upon, distribute, license, sell, transfer, publicly display, publicly perform, transmit, stream, broadcast, attempt to discover any source code, reverse engineer, decompile, disassemble, or otherwise exploit the Service or any portion of the Service, except as expressly permitted in these Terms; or (iii) use data mining, robots, spiders, or similar data gathering and extraction tools on the Service.
>
> __**Intellectual Property Rights**__
> All rights, title and interest in and to all materials that are part of the Service (including, but not limited to, designs, text, graphics, pictures, video, information, applications, software, music, sound and other files, and their selection and arrangement), except for Your Content, collectively referred to as the "Service Materials," are, as between the Company and you, owned by the Company and/or its third party licensors.
>
> __**Your Content**__
> Any data, text, graphics, photographs and their selection and arrangement, and any other materials uploaded to the Service by you is "Your Content." You represent and warrant that Your Content is original to you and that you exclusively own the rights to such content, including the right to grant all of the rights and licenses in these Terms without the Company incurring any third party obligations or liability arising out of its exercise of such rights and licenses. All of Your Content is your sole responsibility and the Company is not responsible for any material that you upload, post, or otherwise make available.
- https://discord.com/terms
EricHallahan#1051: In general we have found the issue too much a hassle to be worthwhile.
tpapp157#3643: I mean there are tons of discord bots that monitor and record data from channels for various purposes. Discord provides APIs, etc for doing that.
tpapp157#3643: I'd guess if you create a community and make it clear to users up front (plenty of discord communities also make users agree to various terms of conduct before getting access) and you build and register your bot through discord then you're probably ok.
Grey M.#7151: > (iii) use data mining, robots, spiders, or similar data gathering and extraction tools on the Service.
I think that's just a cut-and-dried "do not collect data" but this might be referring to methods outside the confines of the API
tpapp157#3643: Yeah that would be my interpretation as well. Otherwise many common discord bots would violate the ToS.
Grey M.#7151: I think the most legally safe option would be asking for people to send us messages to include in the dataset, informing them of the usage of said data
Grey M.#7151: is the format of the datasets for things like GPT-J individual phrases or is it conversations
EricHallahan#1051: Depends upon the objective.
Quill#9732: "documents" - which can be whatever, and most of them are not conversational
EricHallahan#1051: Ubuntu IRC: :guilty:
Quill#9732: but since you pretty fundamentally *want* to capture long-range dependencies in this context, I don't see why you'd do it on the phrase level
Grey M.#7151: Right now the general idea is to either use an existing dataset that we can self-host
or preferably build our own dataset for a discord chatbot
said chatbot would have the objective of randomly replying to user messages
reason for using a specific dataset is to try and capture the casual/shitposty nature of discord chats
Grey M.#7151: I suppose that larger chatlogs would be necessary data for such a use?
tpapp157#3643: long range context is always the challenge with chatbots.
Grey M.#7151: that makes it rather difficult to get consent then :pain:
rules out the idea of people sending us their own quotes
EricHallahan#1051: ```<greym> Can someone point me to the specifics of the Discord ToS regarding data collection?
Wondering if theres any ethical and legal means to get discord messages to use for chatbot training
<greym> Like would creating a seperate channel for data collection with some sort of stated agreement/consent form be good enough
<tpapp157> Normally if it's a public forum (twitter, twitch, etc) then it's perfectly legal to scrape data but because discord communities require an invite link to gain access I think they would legally count as a private forum and therefore scraping would be illegal. I'm not a lawyer though.
<greym> well really an invite is private in the same way that a link is private
discord invites can still appear in search results if its a public server
<Eric Hallahan> Users own the content of their messages. My understanding is that you would need to have the consent of every user you scrape messages from, and it is unclear if scraping is even allowed under the ToS.
<Eric Hallahan> In general we have found the issue too much a hassle to be worthwhile.
<tpapp157> I mean there are tons of discord bots that monitor and record data from channels for various purposes. Discord provides APIs, etc for doing that.
<tpapp157> I'd guess if you create a community and make it clear to users up front (plenty of discord communities also make users agree to various terms of conduct before getting access) and you build and register your bot through discord then you're probably ok.```
```<greym> I think the only thing to worry about is if the bot is public, I suppose
<greym> the bot could be public and you could scrape it and gather data
<greym> so if you had a public discord bot that screpped and gathered data, I think thats ok
<greym> But if the bot was private, I would not allow it to scrape or collect data
<Eric Hallahan> Exactly.
<Eric Hallahan> I don't think there are any rules about scraping.
<Eric Hallahan> It is up to the discretion of the community.```
EricHallahan#1051: I just reformatted that conversation into the Ubuntu IRC format and I think the results aren't too bad with GPT-J.
Grey M.#7151: is that second part GPT-J?
EricHallahan#1051: Did you write any of those messages? :berk:
Grey M.#7151: Damn thats amazing
tpapp157#3643: 'screpped' should be a real word.
Parker#3197: if you just made a bot that collected messages, (on your own server) they probably wouldn't care. (as long as your users are aware, etc)
Parker#3197: there are actually repositories on github to do this afaik
Parker#3197: but, I think people in this server just prefer to not have their chat messages in a dataset somewhere for other reasons
Parker#3197: it could also be considered against the tos like they said too
EricHallahan#1051: (I assume this is because greym used "seperate")
Parker#3197: it could even be considered illegal and open a person up to being sued depending on what someone is doing to collect data
Parker#3197: there was a company that was sued for scraping linkedin several years ago
Parker#3197: and idk if they ever got through all of their appeals, etc
Parker#3197: and I think bypassing like a login when you've been banned/not allowed accounts can also be considered "computer hacking" afaik (also I am not a lawyer)
Cheese is good#5316: @Grey M. Heyo
Grey M.#7151: o/
someKindaBean#8471: speaking of that, this paper is pretty cool
someKindaBean#8471: https://arxiv.org/abs/2107.07567
someKindaBean#8471: summarizing old conversation content in an efficient way to maintain long range context for future chatbot usage
EricHallahan#1051: ~~SMS transformer wen~~
random person#5234: Would be kinda illegal I think since you might scrape HIPAA data by accident
nshepperd#2316: this one is 88M parameters
nshepperd#2316: idk maybe i should put lots of 1x1 res blocks in or something for more params
m_wAL99#1923: is there any manual gui/web aligner tool for NMT?
I have a lot of unaligned language text data but np++/vsc is too inefficient
m_wAL99#1923: found :grimberk:
https://wanthalf.saga.cz/intertext
louis030195#2462: Can I add a funny discord bot that I made with GPT3?
StellaAthena#3530: No
bmk#1476: no
gabriel_syme#3220: question: is it okay for me to be slightly upset every time I have discussions with people about DL models and they play the black box card? Isn't there quite a bit of work already that shows our capacity to interpret DL models?
Sid#2121: not enough lol
Sid#2121: i think it's a pretty fair card to play
Sid#2121: even the authors of the transformer paper can only guess why it works
Sid#2121: or, why it works *so well*
gabriel_syme#3220: makes sense, there's definitely not enough and we still don't know things
StellaAthena#3530: I think a lot depends on the details of what theyโre appealing to that for
gabriel_syme#3220: it's just that the people I discuss this with absolutely believe it's totally opaque
gabriel_syme#3220: understanding models I guess, why they make 'decisions' or predictions, etc
gabriel_syme#3220: (I should note these are not AI researchers or ppl in the domain)
Sid#2121: "understanding" has lots of different levels
gabriel_syme#3220: yeah sry didn't like the term after I posted it
Sid#2121: we can understand the models in that we can see in some given prompt, the model is attending to this part, and this part, and this neuron and this other neuron light up
Sid#2121: but if it then makes some decision, or outputs some completion in response - we don't have a clear 'understanding' of why it would choose that output/decision over any other.
Sid#2121: which is what we'd really need in order to totally entrust important processes/decisions to ML models
gabriel_syme#3220: that makes sense thanks
gabriel_syme#3220: I was just watching the video you shared and thought things are really much better than a few years ago
gabriel_syme#3220: but maybe CV is a domain that already had tooling and prior work
Sid#2121: i believe we'll have similar capabilities in LMs soon - but both controllability (is that a word?) and being able to identify roles of individual neurons is a very different thing to understanding
Spy#9778: Anyone know of any good datasets to use for sentence level language modeling experiments?
Spy#9778: I could just use 1bwords or something, but I'm wondering if there's anything more recent
Spy#9778: Could also sentence-tokenize some of the Pile, but something with other prior work on it would be nice.
StellaAthena#3530: What are you trying to experiment with, exactly?
Spy#9778: I just need a language model that will emit short-ish (100 or less GPT2 tokens) things with well defined boundaries
Spy#9778: currently I'm training one on 1bwords, but it's gonna take a few days
Sphinx#2092: Can't you just finetune gpt2 on sentences and use some EOS tokens to denote the end?
tpapp157#3643: The people I find to be the most uneasy with NNs are old school statisticians. They often don't trust anything they can't calculate p-values and confidence intervals and derive performance guarantees for. How much you care about those sort of things depends on your domain and project requirements.
tpapp157#3643: But the criticism is warranted, not only can you not do the above things which you can for a broad range of other models, but it's often extremely difficult to properly evaluate model results due to the complexity of the data being worked with. More than a few companies have gotten themselves into serious public and non-public trouble when their fancy NN model did something bad they didn't expect.
tpapp157#3643: It also doesn't help that a lot of folks training NNs tend to not have a strong understanding of what they're doing and throw them around rather cavalierly at problems.
ilovescience#3282: there's someone like this at my univ...
my advisor originally encouraged me to discuss with him but the prof was a little too negative about neural networks imo and was pushing his own stuff (polynomial regression and the like) and i didn't really get much out of our convos
Spy#9778: Yep I was asking what datasets people use for that
Spy#9778: I am currently doing that with 1bwords but it's so big
Spy#9778: And ptb is so shitty
nshepperd#2316: not being able to calculate p values for neural networks is a feature
chilli#5665: https://news.ycombinator.com/item?id=29401071
chilli#5665: this comment makes Zygote sound quite hacky
chilli#5665: lol
rwamit#5964: Hey guys, this question might sound silly but
I am working on speech-to-text translation to be used for commercial purposes, meaning it needs to be integrated into the website of the company I am working for.
For this, I was testing the `wav2vec2` model using HuggingFace on my colab notebook. What I want to know is: can I use it for commercial purposes? HF says it is licensed under Apache 2.0.
Again, I am not using their website for any of this. But I am confused whether there is a difference between using their website for this task and taking the library and using it in your own workspace.
Also, if there are any better solutions other than this that I can look out for, that would be really helpful.
gabriel_syme#3220: I would recommend you ask that over in the HF discord server, link is in the #communities channel. This isn't the type of question this server deals with I think
gabriel_syme#3220: smth like a hetzner server maybe?
gabriel_syme#3220: or maybe, vCPU instances might be really cheap? haven't looked in a while
gabriel_syme#3220: Do we avoid the iid problem when using DTs?
alstroemeria313#1694: iid problem?
alstroemeria313#1694: it's a stationary objective like normal sequence modeling
alstroemeria313#1694: so long as you aren't doing things like sampling policies, computing their actual rewards, and feeding them back into the training set
gabriel_syme#3220: I was thinking in the UDRL setting. But maybe that is no longer the same place where the problem comes up in the first place
Sid#2121: what does everyone use to make plots here? I fucking hate matplotlib so much, need alternatives lol
dmayhem93#3202: I've always liked bokeh
alstroemeria313#1694: just matplotlib
Sid#2121: noooooo ๐ฆ
Sid#2121: how is there nothing good
Daj#7482: ms paint
Sid#2121: i hate you all
Sid#2121: well, mostly @Daj
EricHallahan#1051: matplotlib sucks
flowpoint#7450: i hate to love matplotlib and love to hate d3js
kurumuz#5695: >just report to wandb
Sid#2121: someone please train a codex model on matplotlib so i can just ask it to plot for me
kurumuz#5695: I think copilot is already good at that, right?
kurumuz#5695: that is what i heard
kurumuz#5695: but yeah matplotlib is disgusting
Sid#2121: it's okay, but it can't seem to figure out how to put the legend outside my plot either :berk:
flowpoint#7450: gpt-j can do some svg (plotting) drawing
StellaAthena#3530: Seaborn can be better than matplotlib sometimes, but has its own awfulness as well
Sid#2121: seaborn is basically just a wrapper on top of matplotlib though
Sid#2121: i use it for simple stuff
Sid#2121: but i find that if i want to do anything more complex then i have to get into the matplotlib stuff anyways
StellaAthena#3530: yeah
zphang#7252: mucking around in matplotlib is my greatest value-add to papers
Arvi#0474: Hey, I'm looking for an open source language model that would fine tune on 11gb of GPU ram (super small dataset so even batch size 1 should be ok?). What would be the best model that fits?
rwamit#5964: thank you for the suggestion!
BoneAmputee#8363: I have finetuned gpt-2-345M with 11gb, but there might be tricks these days to go bigger? :cat_thonk:
Arvi#0474: Should be a good start, thanks
Arvi#0474: Personal ML projects are such a pain
MaxHager#6351: What is the advantage of building a CNN in JavaScript like on this site https://cs.stanford.edu/people/karpathy/convnetjs/index.html ? Is there a specific reason not to just use Python libraries for creating the CNN and displaying the results on the web with Python?
cuuupid#1372: Probably to run it client side and skip the server costs
StellaAthena#3530: This is a toy demo and not a serious project used for research on CNNs
alstroemeria313#1694: i think the main advantage is you can make client-side interactive demos easier
alstroemeria313#1694: like it is pedagogical
alstroemeria313#1694: you can do basic interpretability demos on tiny nets and stuff
Deleted User#0000: Hi, I came here from the-eye and have been lurking for some time now. Since a recent experience I once again found the drive to build an AGI. The first time around I found OpenCog, which overwhelmed me with the complexity of the ideas it is based on. Now I want to start by reading about Artificial General Intelligence and how we might go about designing it. Could you recommend books or online resources where I can start learning about existing concepts in that area? Can AGI be based upon GPT-J/3 or is that a bad idea?
tpapp157#3643: Nobody has any idea how to even begin developing super-human AGI outside of vague generalizations and guesses. Anyone saying otherwise likely doesn't know what they're talking about.
bmk#1476: and/or are a crank or grifter
cfoster0#4356: Ben Goertzel being a grifter is probably a decent contributor to why OpenCog is complex/overwhelming
Deleted User#0000: Hm ok, but there must be books that explain how an artificial intelligence could work or even how our human thinking works... I'm going to search for books myself, but just thought you could give me some recommendations...
rom1504#5008: Successful people try to slowly extend the functionality of neural networks that already work
StellaAthena#3530: Iโm not sure why you think there โmustโ be books detailing this.
rom1504#5008: You will find tons of book on how to build AGI and also how to find God but it might not be that useful
AI_WAIFU#2844: matplotlib + seaborn. Still kinda shit tho
bmk#1476: ask copilot to write matplotlib code for you
bmk#1476: literally life changing
chilli#5665: actually though
chilli#5665: I think copilot is great for this kind of stuff
zphang#7252: I feel like copilot is good for boilerplate / expanding simple things?
gabriel_syme#3220: plotnine is incredible imo
gabriel_syme#3220: plotting like a human being
zphang#7252: but matplotlib mucking is usually tweaking random little options to get the plots to look right
gabriel_syme#3220: behold
```python
# (imports added for completeness)
import numpy as np
from plotnine import aes, geom_boxplot, ggplot, ggtitle, labs, scale_y_continuous

boxplot_p = (
    ggplot(gpt2df, aes(x='factor(top_p)', y='accuracy', fill='factor(context)\n'))
    + geom_boxplot(outlier_shape=".")
    + labs(x="Top P", y="Accuracy", fill='Context length')
    + scale_y_continuous(breaks=np.arange(0, 1.1, 0.1), limits=[0, 1])
    + ggtitle("Semantic accuracy by top p value and context length")
)
boxplot_p.save(box_folder + '/GPT2_p&ctx.png', format='png', dpi=200)
```
zphang#7252: I remember the "Use R! It has ggplot2!" days of the R vs Python wars
gabriel_syme#3220: everything is understandable
gabriel_syme#3220: and it's python
gabriel_syme#3220: the line that blew my mind is the +labs one
gabriel_syme#3220: want to change the title of the legend bar? sure, just use the same variable and set it to the title you want
gabriel_syme#3220: exactly like a sane person would do it I think
gabriel_syme#3220: this is what that snippet makes. I think it looks great with minimal effort https://cdn.discordapp.com/attachments/729741769738158194/915748880585465866/GPT2_pctx.png
zphang#7252: this is too reasonable, where's the part where I google how to pull out the legends object so I can change its title and use a totally different set of kwargs to format it
gabriel_syme#3220: ye lol
gabriel_syme#3220: and you can change that boxplot to a number of charts by changing +geom, and it just....works
nostalgebraist#3542: i'm perversely fond of matplotlib's jankiness because it provides me with a convenient, productivity-adjacent way to waste time when i don't feel like *actually* doing the next step of the task
glazgoglabgalab#5255: Predictive Processing books are probably the closest useful thing you'll find
glazgoglabgalab#5255: But mainly do this. NNs are the most versatile approach we have so far.
wabi-sabi#5811: The classic recommendation is Hofstadter. Most books on the topic are from the era of symbolic computing, which has little to do with how successful AI works today. I haven't had the time to look into it yet, but Tai-Danae Bradley's work (with John Baez's involvement, IIRC?) on getting algebra and statistics to work nicely with each other feels like it's in the right area for big future progress to me. Currently, the algebraic symmetry exploiting parts of ML are mostly imposed by experts through architecture design, and a good AGI would probably need to be able to handle that itself.
Minkyu#4165: Where can I make a donation? I messaged Aran but he is away now.(Got in touch with Aran, problem solved!)
Daj#7482: I mean happy for them that they matched NVIDIA and stuff but honestly this is a yikes because if they're not much better and cheaper no one will bother switching to a much less well supported software and hardware ecosystem
Daj#7482: And the leaks for the upcoming NVIDIA chips look pretty nuts
kurumuz#5695: also, they matched a nvidia gpu that is almost 2 years old
kurumuz#5695: lol
Daj#7482: yeah oof
EricHallahan#1051: Yeah it's pretty mediocre.
kurumuz#5695: also i would be interested to see bigger models trained
kurumuz#5695: its a fucking resnet
kurumuz#5695: and bert
kurumuz#5695: hmm, i didnt see any details if training was fp16 or not
gabriel_syme#3220: is this a gpu or a pod though?
kurumuz#5695: pod
gabriel_syme#3220: image suggests pod vs pod
gabriel_syme#3220: ok
CRG#8707: They should have tried sparsity during training like RigL or something like that.
gabriel_syme#3220: how expensive is the Graphcore stuff, do we know?
gabriel_syme#3220: A DGX was 200k iirc when it came out, I'd imagine it is more now?
kurumuz#5695: brutal
EricHallahan#1051: This is why I'm bullish on Intel: They seem to understand that oneAPI will need to compete with CUDA to gain market share.
Deleted User#0000: Because I can't be the first person to think about how to create an AGI
Daj#7482: Many, many, many, many, many people have thought about it
Daj#7482: And, as you can tell by AGI not existing, no one has yet succeeded
Daj#7482: Many people have very different, mutually incompatible (and often very stupid) ideas of how AGI "should" be constructed
Daj#7482: They often write books about it
Daj#7482: So if you read a book about "AGI", it is usually very bad
Daj#7482: Not that nothing interesting has been written on the topic, the median is just very bad
Deleted User#0000: Ok thank you, I can understand that. I just want go get a basic idea on how people have tried. I will keep that in mind
Daj#7482: "AGI" is kind of a dirty word in modern ML, because early people were hilariously overconfident and made a bunch of bad predictions about how hard AGI will be (spoiler: Not easy)
Daj#7482: So usually when you directly search for "AGI", you'll get out of date and crankish stuff. If you want what most "serious" people are thinking atm, you would be best served studying modern deep learning and RL
Daj#7482: or Hutter's AIXI formalism if you want super abstract formal theories of intelligence
Daj#7482: that have no application to reality lol
65536william#9999: for related _light_ reading, I recommend the AI alignment forum
Daj#7482: """light"""
Daj#7482: lmao
65536william#9999: ahahaha
Deleted User#0000: Thanks, I will look into that
Deleted User#0000: From my basic understanding, I think that new hardware is needed that isn't binary based. I can see quantum computing be a good fit, but we are faaaar away from computing anything more complex than a few basic calculations
65536william#9999: lighter than Hutter
Daj#7482: I would not recommend the AF for people not already at least passingly familiar with alignment stuff (read e.g. The Alignment Problem by Brian Christian for that)
Daj#7482: Yeah everything you just said is wrong but explaining to you why will take a long time, I suggest you just start by studying basic ML and CS
Daj#7482: Quantum is not Magicโข๏ธ, but explaining that requires reading Computational Complexity Theory
Daj#7482: Which is dope as hell but not super beginner friendly
Daj#7482: if you really wanna learn about quantum, I recommend "Quantum Computing Since Democritus" by Scott Aaronson
Daj#7482: But that book probably won't help much if you don't at least have some undergrad linalg and computer science background (which is not that hard to acquire)
cfoster0#4356: I think that starting off with an AI textbook that includes a bit of history will be most useful
Daj#7482: I hear the Russell book is good
Daj#7482: Maybe a bit too much history for my liking, but I hear it's good
StellaAthena#3530: I was about to recommend it
EricHallahan#1051: Math good too.
Daj#7482: Linear Algebra is a must
Daj#7482: and various other bits and pieces of math
Daj#7482: never can have too much math
EricHallahan#1051: I need more math experience. :sadge:
cfoster0#4356: This is the Russell textbook https://en.m.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach
cfoster0#4356: Shouldn't be too hard to find a copy somewhere on the internet or IRL
EricHallahan#1051: Especially because I haven't taken a statistics course or a linear algebra course lol
Daj#7482: you should
Daj#7482: Only thing that was worth anything in my college years was that I was forced to take math classes lol
65536william#9999: I dropped maths age 16 and did an English literature degree and now I'm here hehe
65536william#9999: like you Eric I need to pick up that linear algebra
Daj#7482: My number one piece of advice for learning math is "do the homework exercises"
Daj#7482: _actually do them_
kurumuz#5695: hmm, I have no problem with math when I actually get into it but I feel extremely rusty if I stop doing math for a few months
Daj#7482: then it's ez
Sphinx#2092: https://cdn.discordapp.com/attachments/729741769738158194/915973676724269076/saint_curious_george.png
EricHallahan#1051: Surprisingly I don't have to take either linear algebra or statistics to graduate lol
Daj#7482: cringe
kurumuz#5695: I dont think I read any ML books, but I feel like implementing an MLP/backprop/SGD was extremely valuable and a good start
Daj#7482: why even go to college if not as a masochistic self torture device to force you to actually pass linalg and statistics
kurumuz#5695: linear algebra was one of the first CS classes we took
Deleted User#0000: Hm, ok I tried CS at Uni but wasn't that into it, never touched ML. I think I have a great general understanding of CS but I am a complete beginner when it comes to NNs, ML and Quantum. Currently I am working with Servers and Networking. But recently I decided that I want to get back into coding. Could you explain what exactly was wrong with my statement? Our Brains are based on quantum physics, so building an "artificial brain" would involve quantum computing, no? I will read about everything you have suggested and get back to you with more precise questions in 10 years or so
Daj#7482: No, I seriously can't, it would be a book length treatment
Daj#7482: You have to study it yourself, sorry
Daj#7482: I would be happy to see you again in a few years :)
Deleted User#0000: Never had the motivation in Uni to do them, always just copied from friends. That's probably why I always just barely got passing grades
Daj#7482: Yep, that'll do it
Daj#7482: If you just wanna code NNs, you can also just jump into fast.ai or something
kurumuz#5695: am i the only one who doesnt like fast.ai
alstroemeria313#1694: i don't know what it is
kurumuz#5695: like isnt the whole argument for it is being easy
EricHallahan#1051: Like the thing is without grounding the problems and having another motivation to solve them I find it hard to motivate myself to do practice problems.
Daj#7482: I have advice for this too: ||git gud||
kurumuz#5695: wait, practice problems get fun though
kurumuz#5695: damn fucking fun
Daj#7482: seriously, learning to force yourself to bash your head against math problems for hours until you almost fall over from tiredness is an excellent skill
kurumuz#5695: its just really important to follow the steps in math
kurumuz#5695: much more than CS
cfoster0#4356: Linear algebra is also a requirement for understanding quantum stuff, so all the more reason to study it if that interests you independently
kurumuz#5695: Actually the way I learn stuff still can apply to math, I just feel like the suffering is at least 10x of CS
Daj#7482: quantum mechanics is just linear algebra in a funny suit
Daj#7482: As Aaronson says, "quantum physics is really simple once you remove all the physics"
EricHallahan#1051: I feel like the thing which ironically prevents me from going out of my way to teach myself this stuff is school lol
kurumuz#5695: it generally is not
kurumuz#5695: tbh
kurumuz#5695: at least was not for me :berk:
EricHallahan#1051: I mean I would have likely taken the time to go through it over the summer if I wasn't wrangling with the disaster which was my spring semester.
Daj#7482: speaking of math, time to bash my head some more against infrabayesianism
Daj#7482: ono https://cdn.discordapp.com/attachments/729741769738158194/915976224977850368/unknown.png
kurumuz#5695: what went wrong with humanity to create this abomination
Daj#7482: It's such a cool theory and has such cool results
Daj#7482: but yeah by god
Sphinx#2092: https://cdn.discordapp.com/attachments/729741769738158194/915977164590043156/unknown.png
kurumuz#5695: omg its real
Daj#7482: turns out the true theory for naturalized induction was sus all along
tpapp157#3643: Understanding the math behind algorithms is important for a good data scientist. At least to the level that by looking at the math you can understand the sort of assumptions the math is making about the data and in turn what sort of biases that imparts on your model results. This combined with a bunch of experience helps you build an intuition for which types of models are best suited for which types of data/problems. I always tell early career DS people that training models is literally the easiest part of DS, anyone with basic coding skills can brute force train a thousand models on a dataset and pick the best. The value that a DS brings is the experience to know in advance which models are likely to be best for a particular problem and why and the ability to properly evaluate the true performance of a model beyond trivial aggregate metrics.
doink#3458: I see the word decentralized used on the website but haven't seen any blockchain here so curious to know how is EleutherAI decentralized?
EricHallahan#1051: We reside all around the world?
StellaAthena#3530: "decentralized" doesn't mean "on blockchain." It means "lacking a central locus of organization"
doink#3458: Oh okay yes got it ๐
tpapp157#3643: Obviously Discord is too centralized and corporate for us to be associated with. Real 1337 underground scene data modelers communicate through a newsgroup message board hosted via a custom blockchain stack inside a Tor-style onion vpn.
Quill#9732: I hear Matrix has "spaces" for server-equivalents now (whereas previously it had channel-equivalents but no higher structure) :p
themoxon#0461: It's pretty good (speaking to you here from my discord matrix bridge!)
themoxon#0461: The only thing that is more of a hassle is voice chats aren't quite as smooth
gollark#3909: I found Matrix homeservers to be horribly resource-intensive or broken. Did they fix that at all?
gabriel_syme#3220: last time I used it there were quite a few issues, but maybe it was us getting to know it idk.
salmon_seasons#2097: Does anyone have experience with deep learning for object detection on hyperspectral imagery?
We're being gifted an aerial dataset with hundreds of spectral bands to detect a certain beach weed; it includes some ground truth and labeling. Although we haven't seen the data yet so we're not entirely sure on the details.
It appears from the literature autoencoders or CNN's are used, if anyone has any advice it would be greatly appreciated
(We're aware traditional ML might be easier, the org we're getting the data from already does clustering / regression)
alstroemeria313#1694: you can feed in any number of channels to a CNN
alstroemeria313#1694: the architecture is general
EricHallahan#1051: I would suggest a CNN
alstroemeria313#1694: like you can just take an existing object detection architecture type and modify it to take more input channels and train on your dataset
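As a concrete illustration of the "more input channels" change, here is a minimal sketch using a torchvision ResNet backbone; the band count and the choice of ResNet-50 are made up for the example, and a real object-detection setup would wrap a backbone like this rather than use it directly.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

n_bands = 224  # illustrative: number of spectral channels in the imagery

backbone = resnet50()  # standard 3-channel ImageNet-style backbone
old = backbone.conv1   # Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
backbone.conv1 = nn.Conv2d(
    n_bands, old.out_channels,
    kernel_size=old.kernel_size, stride=old.stride,
    padding=old.padding, bias=False,
)

x = torch.randn(2, n_bands, 256, 256)  # [batch, bands, height, width]
out = backbone(x)                      # everything past the first conv is unchanged
```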
salmon_seasons#2097: Thanks, appreciate the response
Do you think there would be concern about overfitting having so many weights?
EricHallahan#1051: You may only need a couple layers.
EricHallahan#1051: I find it unlikely, but a train/test/val split should catch it if it does.
alstroemeria313#1694: it depends on how much data you have and also you can catch it with a validation set
EricHallahan#1051: The channel-wise aspect is the vast majority of the useful data as what you are mostly doing is looking for a particular spectra.
salmon_seasons#2097: Not sure on the details of the dataset, just that it's big with a lot of channels. Tried asking but it's not easy to get info from a government department on a Friday
We're a bit concerned that the spectral information will vary spatially, the beach is north-south aligned, so I'm guessing we need a random split for train / test / val?
alstroemeria313#1694: always do random splits yeah
salmon_seasons#2097: Thanks for your help friends, much appreciated
alstroemeria313#1694: CNNs scale to thousands of channels
alstroemeria313#1694: like, internally CNNs use lots of channels where each channel corresponds to the intensity of a particular *pattern* in the input
alstroemeria313#1694: and the more layers you go into the CNN the more complex these patterns become
salmon_seasons#2097: For sure. I'm not used to the idea of having so many channels in the input layer
ilovescience#3282: why don't you like fast.ai?
ilovescience#3282: actually i feel school is great for learning these things if you can take the relevant classes... cuz then you're forced to learn the topic :berk:
zphang#7252: Microsoft's Turing NLR v5 takes top spot on GLUE and SuperGLUE
zphang#7252: https://www.microsoft.com/en-us/research/blog/efficiently-and-effectively-scaling-up-language-model-pretraining-for-best-language-representation-model-on-glue-and-superglue/
zphang#7252: > We will make T-NLRv5 and its capabilities available in the same way as with other Microsoft Turing models.
which is ??
cfoster0#4356: Not
ari#9020: Noobish question: Is simply training a critic on top of a language model a thing that has been tried? That is, take GPT-J, have it continue text from real documents (maybe a few dozen/up to a few hundred tokens at a time?), train a model to distinguish GPT-J's completion from the real completion, and then use that model to rerank completions. The theory is, while beam search doesn't work because of :aaaa: , language models seem to often fail by getting bumped off track by just one unluckily sampled token, and maybe even a weaker model could detect that when it gets to see full completions?
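For concreteness, a sketch of the rerank-by-critic idea being proposed: sample k completions, score each with a separately trained real-vs-generated classifier, keep the highest-scoring one. The critic checkpoint name, the label convention (logit 0 = "looks like a real continuation"), and the hyperparameters are all assumptions for illustration; nothing here is an established recipe.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoModelForSequenceClassification,
                          AutoTokenizer)

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
generator = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
# Hypothetical critic fine-tuned to distinguish real from model-written continuations.
critic_tok = AutoTokenizer.from_pretrained("my-completion-critic")
critic = AutoModelForSequenceClassification.from_pretrained("my-completion-critic")

def rerank(prompt, k=8, max_new_tokens=100):
    inputs = tok(prompt, return_tensors="pt")
    outs = generator.generate(**inputs, do_sample=True, num_return_sequences=k,
                              max_new_tokens=max_new_tokens)
    texts = [tok.decode(o, skip_special_tokens=True) for o in outs]
    with torch.no_grad():
        # Assumed label convention: logit 0 = "real continuation".
        scores = [critic(**critic_tok(t, return_tensors="pt", truncation=True)).logits[0, 0]
                  for t in texts]
    return texts[int(torch.stack(scores).argmax())]
```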
guac#4716: something like verifiers? https://arxiv.org/pdf/2110.14168.pdf
bmk#1476: this is just text gans with extra steps
bmk#1476: and we all know how that turned out
bmk#1476: (spoiler alert: text gans suck)
cfoster0#4356: Wait what?
ari#9020: I don't think you freeze the generator before training the discriminator when training a GAN?
ari#9020: Verifiers look a lot more similar (I've read this paper, I guess I got the idea from there :berk: ), just identifying correct solutions instead of real(istic) completions
bmk#1476: ok, fine, it's text gans with fewer steps
dmayhem93#3202: well, there's electra that kinda fits here
bmk#1476: it's half of a text gan training loop
|
ilovescience#3282: next you'll say it's not a text gan :berk:
gabriel_syme#3220: has anyone trained a model with XMC-GAN? Tempted to try since it's in Jax
naclbbr#9203: another idea (non-GAN) that has been floating around is ranking outputs by sum of logprobs, which is basically what OAI did, and it's kind of meh
gabriel_syme#3220: Huh
https://twitter.com/theshawwn/status/1466725607887282184?t=_c2FIm7hnEIO9fft1GpEww&s=09
ZodiacFenrir 2k18#6228: Hallo friends I have 10 days use of this thing
ZodiacFenrir 2k18#6228: https://cdn.discordapp.com/attachments/729741769738158194/916343078506340372/unknown.png
ZodiacFenrir 2k18#6228: aside from taking the NVIDIA Deep Learning classes what other hijinx can I get up to?
StellaAthena#3530: What is it
ZodiacFenrir 2k18#6228: https://developer.nvidia.com/embedded/jetson-agx-xavier-developer-kit
ZodiacFenrir 2k18#6228: the only "free" classes on their DLI page involve getting one of these things for like a grand
BoneAmputee#8363: as far as I know, jetson products aren't really meant for training. just inference. but since that one has 32gb, you could do gpt-j inference. maybe some diffusion+clip? though it might be slow
BoneAmputee#8363: idk maybe that one's capable of training things. my Jetson Nano sure isn't
StellaAthena#3530: It looks like it's a V100
BoneAmputee#8363: dat PCI slot :eyes_zoom:
EricHallahan#1051: Uhhโฆ no, it's nothing like a V100.
EricHallahan#1051: Except the architecture.
StellaAthena#3530: Oh
EricHallahan#1051: It's like 11.3 TFLOP/s FP16 on the Tensor cores.
EricHallahan#1051: It is very much an inference product.
EricHallahan#1051: Not to say you can't train things, it just isn't close to that level of performance.
StellaAthena#3530: That seems unlikely? It advertises 512 tensor cores and a V100 has 640
EricHallahan#1051: Sorry, wrong number.
StellaAthena#3530: Or does compute grow non-linearly with number of cores
EricHallahan#1051: Power
kurumuz#5695: so tensor cores by themselves don't mean anything really.
kurumuz#5695: you can limit them to 75W or something and now they are power limited.
kurumuz#5695: memory speed etc all matters, you need to keep those tensor cores fed.
EricHallahan#1051: It is a complex SoC, so it is kinda hard to compare it directly. They have another dedicated unit for DL that does another 5.7 TFLOP/s FP16 for instance.
kurumuz#5695: interesting
tpapp157#3643: I haven't worked with the Jetson line, but my impression is that they're intended for edge inference.
BoneAmputee#8363: and teaching :thinkies:
BoneAmputee#8363: my Nano has been functioning as a local nginx/rtmp/hls server lately. not really usin it for AI :guilty:
AI_WAIFU#2844: do what I did an throw it in front of a multi-billion dollar cyclotron
tpapp157#3643: A fun little project is to use a pretrained yolo model and a camera to do some simple real time object detection.
flowpoint#7450: maybe an automatic captioning robot?
put an image captioning model on it,
plug in webcam and speakers,
point it at things and watch it babble
BoneAmputee#8363: yeah it could probably announce what it's looking at like once every second or two :thinkies:
<https://colab.research.google.com/drive/1FwGEVKXvmpeMvAYqGr4z7Nt3llaZz-F8>
<https://colab.research.google.com/drive/171GirNbCVc-ScyBynI3Uy2fgYcmW3BB9>
flowpoint#7450: jetsons are aimed at efficient inference
i found a few benchmarks, it should be around this ballpark:
https://www.naut.ca/blog/2021/03/16/rtx-3060-vs-jetson-agx-for-bert-large/
flowpoint#7450: > not really usin it for AI :guilty:
same :guilty: , who does?
jloganolson#4579: i've been prototyping a centaur screenwriting tool (uses gpt3 davinci but would be great to move it to an opensource model) https://www.youtube.com/watch?v=EuO589ReJe0
jloganolson#4579: would anybody be interested in trying it out? or is anyone interested on collaborating on other centaur creativity/story tools?
Sid#2121: There's a few devs from NovelAI (https://novelai.net/) hanging around here
kurumuz#5695: @jloganolson quite nice, are those spaces single space tokens
chilli#5665: I'll be giving a talk at TVMCon about a new API that makes it really easy for people to do stuff to the backwards graph during PyTorch training ๐
chilli#5665: https://twitter.com/cHHillee/status/1466841004498059264
chilli#5665: I've mentioned this stuff before on this discord, but I think it's pretty neat ๐
faraday#0862: has anyone tried to use LMs to output BNFs for new programming languages? I thought someone might already be working on describing scenarios (as program excerpts) and output BNF for a hypothetical programming language fitting these scenarios
jloganolson#4579: Oh, nice! I hadn't heard of novelai before - this looks pretty feature rich!
jloganolson#4579: The line breaks/formatting in the script? All of that is actually ml-free heuristics based on the generated text's formatting (the formatting is pre-disposed to the heuristics using some prompt engineering hacks)
kurumuz#5695: the problem is, its too many tokens being wasted for a consistent formatting.
kurumuz#5695: if you are doing finetuning I would advise you to turn those wild spaces into a character that is not widely used in your usecase, something like > or whatever.
kurumuz#5695: you can display those as spaces in the frontend again.
kurumuz#5695: (this all assumes you can do finetuning)
Louis#0144: https://twitter.com/taliaringer/status/1466866257618161664?s=21
Louis#0144: RT pls
Daj#7482: What kind of funding are you looking for?
Daj#7482: Like order of magnitude?
Louis#0144: 10k ish
Louis#0144: Not much
Daj#7482: Hmm might have some friends that can help
Daj#7482: but would like a decent writeup
Louis#0144: Yeah ofc
Louis#0144: Ryan talia and I are working on that rn
Daj#7482: cool when you have something concrete hmu and I'll see if I can help
jloganolson#4579: oh i think i understand what you're asking - no theres no extra spaces there - those are actually just margins/padding based on the line type (e.g. action, character, etc)
kurumuz#5695: oh ic, nice
kurumuz#5695: would be pretty inefficient otherwise :berk:
kurumuz#5695: @jloganolson I can give you a novelAI key if you want to check it out btw
jloganolson#4579: that'd be awesome, thanks!
DanGrahn#1112: Hey all! Joined from a Twitter tag.
I'm a PhD student at Wright State. My dissertation research is on ML-assisted software vulnerability detection. Looks like we have some overlap.
Louis#0144: @RyanT
EricHallahan#1051: Welcome!
Louis#0144: I wonder if talia is in the discord
StellaAthena#3530: I feel like they would have identified themselves if they were
SeishinSennin#9919: Sorry if this isn't where this belongs, but I think this post is relevant to the limits of prediction (agent foundations, oracles etc.) https://eighteenthelephant.com/2021/11/29/pushed-around-by-stars/
Ajay sahu#2540: Hello,does anyone haven a code/Colab notebook to convert image annotations (json -Coco format) into image captions ?
Ariel Ekgren#6449: Gm everyone! Over at the AI Nordics discord we are trying to start a Nordic Language Pile (Swedish, Danish, Norwegian and Finnish) completely inspired by you here at eleuther. I know this might be a big question but how did you all do it? We are new to distributed collaborative work and there are so many questions. Where did you do the intermediate hosting? How did you come up with a collaborative format? Did you have a main project leader or was it a truly distributed effort?
Sid#2121: Hey, welcome ๐
wrt the first question, we rented a hetzner server where we did most of the processing, but also had a few other compute donations. (if you're in need of compute it's possible we can help you out, but we'd have to enquire with our provider first).
I'm not sure what you mean by a collaborative format? As in how did we organize the work? We've found it helpful to have one person act as a team leader for every project we do, they generally take care of allocating tasks / making sure people get their tasks done. But in terms of completing the tasks themselves - gathering multiple datasets is massively parallelizable, so generally a single person, or a couple of people, tackled each dataset individually.
Sid#2121: @bmk took on the lead role for the pile
StellaAthena#3530: We organized it via GitHub issues, which you can still see here: https://github.com/EleutherAI/The-Pile
We are doing something similar for the LM Eval Harness (also lead by @bmk): https://github.com/EleutherAI/lm-evaluation-harness/tree/master/lm_eval
Ariel Ekgren#6449: Thank you so much for the input. It's really helpful to be able to take so much inspiration from the work you have done
Ariel Ekgren#6449: Yes I was wondering how you did the organization and thank you for the helpful answer. Did people in general do their stuff locally and then upload to your hetzner server where you did some organizing?
Ariel Ekgren#6449: Thank you Stella. That's good we'll look through the issues for inspiration. Started to read through the chat in the pile channel but eh there was quite a lot of chat ๐
Sid#2121: Can't speak for everyone else but I usually work directly on the server using vscode over ssh
dk#4416: does anybody know if there are plans to port jaxlib to windows
bmk#1476: the solution is simple, just dont use windows
ilovescience#3282: or use WSL instead
gabriel_syme#3220: ngl, it's quite annoying not being able to compile jax
kendrick#9537: You can try this. But I haven't tested it yet. https://github.com/cloudhan/jax-windows-builder
kswanjitsu#7221: Hey guys, have been thinking of starting to develop a model for biomedical/clinical text disambiguation, with focus on text via hypernym substitution. Currently, have a pipeline that uses a few out of the box models (CWI, SciBERT) to determine words that would need substituting, then performs basic TFIDF with some rules for substitution. This worked well for the point we were trying to make scientifically but eventually I would like to have everything handled with a GPT model. My background is in genomics (grad school MSc), then medicine (internal medicine). I am a resident physician just rounding out the application cycle for clinical informatics (match day for us is the 15th, very excited!). I have a background in python/basic programming since grad school, continued some projects throughout medical school and residency, very noobish ML experience, but dabble with tensorflow, used a fair amount of SpaCy/NLTK/etc for basic NLP stuff. With the current pipeline, my PI (out of Harvard MGH CS lab/his own private R&D company), and my small team of junior level programmers (mostly CS students applying to medicine, degree graduated or in process) are going to submit to Nature (NPJ) Digital Medicine soon. Once that is done, I am going to switch focus to that model. This lab is composed of mostly volunteer researchers such as yourselves, and we occasionally will use resources from Harvard (the project won a Harvard innovation lab grant, and also collab w/ some researchers from clinical/CS/DS space) as well as some resources from the company. I suppose similar to this group, none of this is for profit, mostly academic in goal and kind of just something we do for the sake of it. There should be some papers to come out of it if interested. We have discussed potentially applying for NIH grants etc after the first round of papers we plan to submit. Anywho, would love any ideas, collaboration, etc.
dk#4416: https://tenor.com/view/drake-laptop-drake-gif-21716481
Deorder#9592: Hello everyone. I am working on a Discord chatbot that I want to use on my Discord servers. What is the cheapest way to get a network running. Any hosting provider that you can recommend?
bmk#1476: wrong server
faraday#0862: hey guys, does anyone have any info on M1 Max Quadra performance and potential implications on ML research? I've seen comparisons to RTX 4090.
gabriel_syme#3220: yeah that's pretty wild if it happens
Louis#0144: 4090?
chilli#5665: Are the comparisons that the M1 Max is wayyyyy slower?
chilli#5665: Lol
tpapp157#3643: Big yawn. People way overhype future hardware performance. I don't know why this happens every single generation. The current Mac chips were "leaked" to basically be the most powerful piece of silicon ever by a wide margin, but benchmarks now that they're out show them to be fairly average. Similarly, "leaks" about the 4090 are saying it'll have over twice the performance of a 3090, because, you know, the industry has never even gotten anywhere close to 100% gen on gen increase but somehow this'll be the one. It's all just garbage random people make up to farm internet clicks and you shouldn't pay attention to any of it until maybe a month before release at most.
kurumuz#5695: I would say the first m1 launch was underestimated
chilli#5665: V100 => A100 went from 125 => 312 TFLOPs, no?
Quill#9732: when GPUs move to a tile-based architecture, we might get extreme gen-on-gen gain... accompanied by extreme gen-on-gen increase in the top-of-the-line product price :p
since tile-based lets you just Use More Silicon at only mildly superlinear cost scaling
Quill#9732: you *could* make a 4x stronger card that costs 6x as much :p
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/917087052569391134/unknown.png
chilli#5665: tbh, I am quite skeptical that so many people understand einsum notation
chilli#5665: lol
chilli#5665: I guess it must be selection bias
kurumuz#5695: yeah, at fp16 its definitely more than 2x
kurumuz#5695: this is real throughput i observe.
chilli#5665: Like, according to this poll (so far: https://twitter.com/cHHillee/status/1467515728681000961), it's 75/80 => 94% of folk
AI_WAIFU#2844: you don't have to totally understand einsum to use it
AI_WAIFU#2844: but for basic matmul like things it's *super* useful, and self documenting in a way that most ops aren't
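A quick illustration of that point in PyTorch, with the plain-op equivalent next to each einsum (shapes chosen arbitrarily):
```python
import torch

a = torch.randn(4, 5)
b = torch.randn(5, 6)
x = torch.randn(2, 3, 5)   # (batch, seq, dim)
w = torch.randn(5, 7)

# Plain matrix multiply: out[i, k] = sum_j a[i, j] * b[j, k]
assert torch.allclose(torch.einsum("ij,jk->ik", a, b), a @ b)

# Batched projection: contract the last dim of x against w.
assert torch.allclose(torch.einsum("bsd,de->bse", x, w), x @ w)

# Attention-style scores: out[b, i, j] = sum_d q[b, i, d] * k[b, j, d]
q = torch.randn(2, 3, 8)
k = torch.randn(2, 4, 8)
assert torch.allclose(torch.einsum("bid,bjd->bij", q, k),
                      q @ k.transpose(-1, -2))
```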
jesse#7865: personally i just sample einsum ops from copilot until the code stops raising exceptions
StellaAthena#3530: I don't understand it...
AI_WAIFU#2844: figuring out the input and output dimensions of each op is so annoying
chilli#5665: I mean, I think that's quite common, which is why I'm surprised about this poll's results so far
chilli#5665: lol
AI_WAIFU#2844: but with einsum you just look at the code
chilli#5665: maybe more people would choose this result: https://twitter.com/Ar_Douillard/status/1467524913078718467
chilli#5665: lol
alstroemeria313#1694: eheh
alstroemeria313#1694: I mean I do understand it but
alstroemeria313#1694: I often have to check it in a REPL
chilli#5665: I wonder if it'd be helpful to just easily translate from einsum to regular notation
chilli#5665: and back
chilli#5665: doing einsum to regular notation is trivial
chilli#5665: hmm
chilli#5665: but I guess it might actually be difficult for end-users...
MicPie#9427: This is a super nice intro to the einsum notation: https://rockt.github.io/2018/04/30/einsum
Sphinx#2092: Throws me back to the the nightmare of Christoffel symbols.
alstroemeria313#1694: hey so how do you do model/pipeline parallel in pytorch anyway
alstroemeria313#1694: Like on a cluster
alstroemeria313#1694: i think i can do one more doubling in my diffusion model size on 40GB A100s
alstroemeria313#1694: but they need more params
Gurkenglas#7362: How does OpenAI get embeddings from a GPT model such that similar texts become similar vectors? Wouldn't moving a text forward by one token throw everything off?
cfoster0#4356: The docs don't say anything about it being a GPT model fwiw
kurumuz#5695: umm not really?
bmk#1476: c o n t r a s t i v e
kurumuz#5695: but yeah its highly likely those models are constrastive
kurumuz#5695: I agree with bmk
bmk#1476: the thing that really screams contrastive is the one where they have different encoders for the summary and the text or whatever
bmk#1476: i wonder what the regular embedding model is trained on
bmk#1476: maybe declutr like objective?
cfoster0#4356: You don't need to wonder, right? Like... can't you ask directly
bmk#1476: I could but once I do I'll no longer be able to speculate about it in public
bmk#1476: and speculating is fun
CRG#8707: I mean, there's the 12288 dim comment for the biggest model
cfoster0#4356: Those confirm the mapping between API monickers and `d_model` but not much else
bmk#1476: it would be like peeking at spoilers
bmk#1476: (also it would increase the risk of leaking stuff)
bmk#1476: I mean.. it is pretty sus
Gurkenglas#7362: oh https://beta.openai.com/docs/guides/embeddings/what-are-embeddings mentioned the same 4 names but maybe they were just reusing the alphabet. which seems rife for confusion.
bmk#1476: they should rename to smol, chonk, bigchonk, hugechonk
cfoster0#4356: I have no idea how many different model types OAI trains up to chonk sizes, but I wouldn't be surprised if they try out a lot
cfoster0#4356: (since apparently there's enough different work streams that Leo doesn't know what methods are behind this particular model)
kurumuz#5695: smolgoose, goose, biggoose
StellaAthena#3530: So, for an EduTech course I decided to look at how good GPT-J is at solving people's homework assignments
StellaAthena#3530: It's *really good*
alstroemeria313#1694: Like essay writing?
StellaAthena#3530: Coding assignments, actually
alstroemeria313#1694: Oh.
StellaAthena#3530: Not only does it get them right, it doesn't trigger the anti-plagiarism checker my uni uses
alstroemeria313#1694: Wow...
alstroemeria313#1694: How complicated are the assignments?
StellaAthena#3530: Not very, I'm using assignments from a plagiarism dataset that contains examples of actually plagiarized assignments and non-plagiarized assignments. Here's an example assignment
StellaAthena#3530: > Write a Java program that accepts the weight (as a real number representing pound) and height (as two real numbers representing feet and inches respectively) of a person. Upon accepting input, the program will show that person's BMI (real number) and a piece of information stating whether the BMI is categorised as underweight, normal, overweight, or obese.\\A person is underweight if $BMI < 18.5$; normal if $18.5 \leq BMI < 25$; overweight if $25 \leq BMI < 35$; or obese if $BMI \geq 35$.\\$Height = feet * 12 + inches$\\$BMI = weight * 0.45359237 / (height * 0.0254)^2$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/917167676055502858/193204646687408129.png
StellaAthena#3530: The hardest one is probably
```latex
Write a program that accepts a $4\times 4$ matrix of real numbers and prints the total of all numbers placed on the leading diagonal of the matrix. You should create and use a method \textit{double sumMajorDiagonal(double[][] m)} which will return the total of all numbers placed on the leading diagonal of the parameterised matrix.```
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/917168009217450064/193204646687408129.png
StellaAthena#3530: That one it can't do, but only because the prompt is worded too confusingly
StellaAthena#3530: A more straightforward description of the problem works
StellaAthena#3530: Actually it did get that one with a couple more tries
StellaAthena#3530: ```java
import java.util.Scanner;
public class Coding_comp_challenge {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
System.out.println("Enter a $4\times 4$ matrix of real numbers");
double[][] m = new double[4][4];
double sum = 0.0;
for (int row = 0; row < 4; row++) {
for (int column = 0; column < 4; column++) {
m[row][column] = sc.nextDouble();
}
}
for (int row = 0; row < 4; row++) {
sum += m[row][row];
}
System.out.println("The sum of the numbers on the leading diagonal is: " + sum);
}
}
```
EricHallahan#1051: > ```java
> import java.util.Scanner```
:goose17:
bmk#1476: :ptsd:
bmk#1476: Scanner delenda est
bmk#1476: did you also compare to codex
StellaAthena#3530: No, not yet
gabriel_syme#3220: wait how are you doing this with J? Is it the finetuned model?
StellaAthena#3530: Literally just plugging questions in
StellaAthena#3530: And asking it for Java programs
gabriel_syme#3220: cool thx!
jesse#7865: wasn't there already a comparison to GPT-J in the codex paper?
bmk#1476: yeah but that's not on the ShittyUniJavaAssignment dataset
bmk#1476: we should really get the code tasks in eval harness
bmk#1476: does anyone here know how to properly sandbox stuff?
bmk#1476: I don't think docker containers are secure at all
thrasher#7261: gvisor or firecracker viable? what are you trying to secure from the code
StellaAthena#3530: @thrasher In reality, we need to be able to execute arbitrary code in isolation tbh.
bmk#1476: https://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html
bmk#1476: this is wild
bmk#1476: even after banning all globals and builtins, you can *still* break out of eval using this monstrosity: https://cdn.discordapp.com/attachments/729741769738158194/917246487849734184/unknown.png
kurumuz#5695: jesus christ
kurumuz#5695: the fuck is that
bmk#1476: the post explains
kurumuz#5695: wait why can i read that
bmk#1476: but it's some wild shit
kurumuz#5695: what is wrong with me
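For context, the core of the trick from that post, cut down to a form that is harmless to run: even with globals and builtins emptied out, plain expression syntax can walk from a tuple back up to `object` and enumerate every loaded class, which is the foothold the full payload builds on.
```python
# No builtins, no globals, and yet the expression reaches object.__subclasses__(),
# i.e. every class currently loaded in the interpreter.
payload = "().__class__.__bases__[0].__subclasses__()"
classes = eval(payload, {"__builtins__": {}}, {})
print(len(classes))  # hundreds of classes, including importers and file wrappers
```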
someKindaBean#8471: I tried using GPT-J on a chem test that my chem prof friend wrote and it could name most of the compounds correctly, although it sometimes uses naming conventions that are outdated (though still correct)
thrasher#7261: provision a small VM with no internet access for each eval?
StellaAthena#3530: @bmk that's nothing
StellaAthena#3530: Eval can evaluate `eval`
StellaAthena#3530: @bmk verifiable computation to the rescue!
StellaAthena#3530: Have the LM write the code, then instead of executing it to check correctness run a VC protocol
StellaAthena#3530: Sure, the client can cheat by not using the same code that the LM outputs. But they could also cheat by having a human answer the questions.
bmk#1476: we could also cannibalize an existing contest judge maybe
bmk#1476: https://github.com/DMOJ/online-judge
bmk#1476: a ton of people submit arbitrary python and C++ and whatever code to this thing
StellaAthena#3530: Also
StellaAthena#3530: We can just do the same BS we pretend is totally fine for natural language
StellaAthena#3530: Multiple choice questions with ranked perplexity and stuff
bmk#1476: im not sure i get what you mean by this
bmk#1476: im talking abouti mplementing, like, a task in eval harness
StellaAthena#3530: Right
bmk#1476: that we can run on gpt2
bmk#1476: where we ask gpt2 to generate somee code
StellaAthena#3530: The concern is that we need to run the output to check its correctness right?
bmk#1476: and then run it in a box
bmk#1476: yeah
bmk#1476: and we check to see if the output from the box is what we expected
thrasher#7261: this sounds like CI/CD
StellaAthena#3530: So you make the *client* execute the code and report the results
bmk#1476: what do you mean the client
bmk#1476: i *am* the client
StellaAthena#3530: The person with the language model
bmk#1476: that's me
StellaAthena#3530: Oh it's accidental self-hacking you're concerned about
bmk#1476: right
StellaAthena#3530: Not someone else submitting malicious code
bmk#1476: i dont trust gpt-j to not occasionally write something really dumb and delete everything on my computer
EricHallahan#1051: Yes
Sid#2121: is lambda not a global or builtin? :thonk:
bmk#1476: also i want to design as if the model was trying to do something malicious
bmk#1476: just to be on the safe side
bmk#1476: no it's a keyword
bmk#1476: keywords are not globals or builtins
Sid#2121: that post is cursed in the best way btw
bmk#1476: dismantling dmoj is like at the top of my list of things that might work
StellaAthena#3530: Step 0 is to examine your hardware with a scanning electron microscope
bmk#1476: lemme specify my requirements even more specifically
StellaAthena#3530: It's possible (in theory) to create back doors you can only detect at the atomic level
StellaAthena#3530: (Though you'd have to know what it's supposed to look like, which nobody does in the total absence of trust)
bmk#1476: my threat model is "if a popular public facing website that's been around for almost a decade that allows users to run arbitrary code on it hasn't been hacked yet that's secure enough for me"
bmk#1476: basically if it can keep out a skilled hacker who doesn't have access to crazy 0days or anything it's good enough
bmk#1476: https://github.com/DMOJ/judge-server
bmk#1476: looks promising https://cdn.discordapp.com/attachments/729741769738158194/917252550816829440/unknown.png
EricHallahan#1051: I was about to post that screenshot lmao
kurumuz#5695: run it in a vm?
kurumuz#5695: do a simple kvm setup
bmk#1476: not necessarily safe enough
bmk#1476: for example, consider the following dumb contrived example:
kurumuz#5695: LM is not gonna find a vm_exit bug
kurumuz#5695: probably
bmk#1476: you write a server that takes requests and returns responses using pickles
bmk#1476: the evil code modifies the return pickles to inject malicious code back to your requesting server
bmk#1476: bam
bmk#1476: pwned
bmk#1476: so you also have to make the code you put in the VM secure
bmk#1476: my threat model is to assume the model isn't better than a smart human hacker who doesn't have any zero days
bmk#1476: which I think is about as safe as we can make it without going overboard with paranoia
kurumuz#5695: uh, maybe dont use pickles
StellaAthena#3530: Uh, maybe don't use machine learning
bmk#1476: just giving an example
kurumuz#5695: wdym
kurumuz#5695: why would you literally need pickles to return something to the server
bmk#1476: my point is that VM security isn't enough
bmk#1476: you also need your server to not be vulnerable
kurumuz#5695: its enough if your data structures are safe and not pickles
bmk#1476: and I don't trust myself to make my server safe
kurumuz#5695: maybe talk to @OccultSage :berk:
AI_WAIFU#2844: air gaps
nshepperd#2316: ai boxes
nshepperd#2316: :thinkies:
AI_WAIFU#2844: actually, has openphil setup an airgapped supercomputer in a bunker somewhere yet?
AI_WAIFU#2844: cause' they should
AI_WAIFU#2844: sooner or later this stuff is gonna get dangerous enough that we should put in in a proper box
AI_WAIFU#2844: even if that's not exactly a great defence
bmk#1476: how am I going to ship that with eval harness
bmk#1476: also I wonder what is the current state of the art in terms of boxing
Kia#2550: :2box:๐ค:2box:.
bmk#1476: for the purposes of this question I consider both physical and software defences as part of the boxing
AI_WAIFU#2844: I've been thinking about this, unless someone puts in the effort to make a proper box immediately, there won't be a box
bmk#1476: and the model has to be able to communicate with us to a reasonable extent
bmk#1476: well uh then let's get on that
AI_WAIFU#2844: someone will just connect an agent directly to a bash terminal in the cloud
AI_WAIFU#2844: because nobody's got time for boxes
bmk#1476: idea: we should make a series of ready to use boxes of differing strengths
bmk#1476: the tradeoff is between ease of use and security, right?
Kia#2550: Ow wait you guys are serious
AI_WAIFU#2844: boxes are a lot of work and really inconvenient
bmk#1476: so what I'm thinking is we can make a really easy to use box that requires like two lines of code change
guac#4716: Idk it seems inhumane also
AI_WAIFU#2844: sure but I'm 100% ok with that given the stakes
bmk#1476: and then we can make a medium strength box which takes some work to set up and use but is a lot more secure
guac#4716: are there alignment posts about boxing/ai containment?
AI_WAIFU#2844: AI boxes as a service
bmk#1476: and then we can make a super ultra overkill box that involves a special faraday cage room and hardware designed specifically for the box
AI_WAIFU#2844: well there are a lot of posts explaining why it's a shitty idea
bmk#1476: right, if we actually do this, we will have to overcome the inertia against it
bmk#1476: AI boxing is kind of an anti-shibboleth at this point
bmk#1476: you know I think the first step is a really good one to start with
bmk#1476: we should make an execution environment for untrusted code that takes 2 minutes to set up
AI_WAIFU#2844: I don't have the resources or expertise to make that happen
bmk#1476: me neither
kurumuz#5695: its literally what wbrown did for years lol
bmk#1476: but I would really like if someone made this happen
kurumuz#5695: dude is a malware expert
guac#4716: wish i did :sadge: sounds kinda interesting
bmk#1476: @OccultSage pls halp
bmk#1476: your expertise is needed
bmk#1476: I'm kind of surprised that building a sandbox up to spec with current best practices is so hard for a non security person
bmk#1476: seems like a pretty important niche
AI_WAIFU#2844: realistically it's not a real priority
AI_WAIFU#2844: for almost everyone
AI_WAIFU#2844: nobody suffers any consequences for data leaks
AI_WAIFU#2844: with the sole exception of crypto busts
bmk#1476: so I feel like if we made such a library available it would be a huge net positive
thrasher#7261: you can just throw $ at the problem, aws, gcp, azure, etc run loads of untrusted workloads
AI_WAIFU#2844: yeah, and they're not really secure
bmk#1476: also us mere mortals don't really have access to their tech
bmk#1476: we can only use their security by paying for their instances
bmk#1476: which is.. bad
AI_WAIFU#2844: on a semi regular basis I hear about some exploit or another for breaking vm containment
guac#4716: like what if you wanted to purposely misalign a model to observe its behavior. wouldn't you need this kinda sandbox
bmk#1476: I want something where you need to change like 2 lines of code
bmk#1476: you don't want to
StellaAthena#3530: I just got an email about being in a data leak of 170M peopleโs accounts
AI_WAIFU#2844: you don't even want to go there, for the reasons covered in the aformentioned alignment posts
AI_WAIFU#2844: a box is wet tissue paper to a sufficiently powerful AI
AI_WAIFU#2844: it's only good for studying moderately powerful AIs
guac#4716: fair enough but i still don't see how you could faithfully tell when your model is misaligned without probing it in a sandbox
guac#4716: i should just read up on it lol excuse me
AI_WAIFU#2844: agreed, but it's one of those, "if it's misaligned and smart enough, you've already lost" kinda situations
guac#4716: ahh okay that's what i was getting to
guac#4716: thanks
kurumuz#5695: well even for a human intelligent level AI, I just imagine a George Hotz without any of the human drawbacks, no serotonin limit, no monkey motivation system
bmk#1476: I'm imagining trying to contain an AGI to look like that one Evangelion scene with ||the nanobots||
kurumuz#5695: would be hard to think that can't find a vm exit
kurumuz#5695: :berk:
bmk#1476: trying to find a good clip of it on yt rn
bmk#1476: it's so good
bmk#1476: https://youtu.be/2aorl5mX744?t=18 this is the best I can find, though it's cut and has irritating music put over it
Kia#2550: Noooo,Wish nothing personal got leak
bmk#1476: really love the animation work on the quarantine scenes https://cdn.discordapp.com/attachments/729741769738158194/917263519274123314/Screenshot_20211205-205712_YouTube_Vanced.jpg,https://cdn.discordapp.com/attachments/729741769738158194/917263519626461284/Screenshot_20211205-205651_YouTube_Vanced.jpg
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/917267429623726110/unknown.png
bmk#1476: NERV attempts AI boxing
Kia#2550: Is Generative Art(Umbrella Term) A field by itself in AI/ML?
EricHallahan#1051: I guess? It isn't really a formalized field.
Kia#2550: True, it's just something I'm thinking about lately (just noticing that more papers aim to get better results and beat previous papers on generative tasks/works)
OccultSage#3875: it's hard even for a security specialist.
OccultSage#3875: lol -- i'm reading some of the back chat
OccultSage#3875: it's an entire discipline, proving access levels ๐
bmk#1476: this is kind of a problem because we want people to use good containment whenever possible
bmk#1476: but lots of people have the mindset that if you put it in a docker container you're safe or something
OccultSage#3875: Ironically, Docker has the worst security among the containerization/VM methods out there.
bmk#1476: makes sense
OccultSage#3875: The people who did Docker were not virtualization security experts, or virtualization experts really in the beginning.
bmk#1476: and VMs aren't as secure as they're often made out to be either, right?
OccultSage#3875: Yeah.
OccultSage#3875: All the way back to Blue Pill/Red Pill.
OccultSage#3875: And before that, as a zero day.
OccultSage#3875: So, what exactly are we trying to do? What are we attempting to secure against?
bmk#1476: ok so here's the spec
bmk#1476: so a lot of people are developing models that generate code
bmk#1476: and to find out how good the models are, we have to run the code
bmk#1476: since the models are trained on a huge amount of code from the entire internet, who the heck knows what it might do
OccultSage#3875: lol ๐
bmk#1476: right now our models aren't smart enough to hack out of the box, but it's kind of worrying that they might eventually
OccultSage#3875: especially with temperature in there.
bmk#1476: and so what we want to do is make boxes easy to use *before* we have models smart enough to do that
bmk#1476: the problem is that for a non security person like me or most other ML folks, it's pretty intimidating to try and set up a proper box
bmk#1476: I'd probably just like use a VM/docker and a quick hand rolled socket server
bmk#1476: which is probably not the safest option
bmk#1476: so yeah any ideas?
OccultSage#3875: Probably not. So, the threat model is: 'AI code generated that has a small possibility of busting out of jail and becoming SKYNET'?
bmk#1476: yeah basically, though since that's a pretty nebulous target you can think of a proxy of like a really smart team of hackers trying to break out instead
OccultSage#3875: OK. What level of privilege does the code being run need? Does it need access to the Internet, the system itself?
bmk#1476: hmm
bmk#1476: the use cases I can think of shouldn't need internet access at all
bmk#1476: nor the system itself
OccultSage#3875: OS level stuff like opening files?
OccultSage#3875: Or interprocess?
bmk#1476: it shouldn't be able to touch any files outside its container/VM
OccultSage#3875: Well, a container is effectively an abstraction of an OS.
bmk#1476: hm
OccultSage#3875: I'm asking - -does the process even need OS access?
bmk#1476: tentatively I'm going to say no, for the kinds of things I'm interested in
bmk#1476: but i think internet and OS access are the kinds of things that some people might want to use it for (and access to underlying hardware/peripherals is probably not something that anyone is interested in)
bmk#1476: and since the goal of this is to get more people to use it, it would be bad if important use cases were shut out
bmk#1476: hm
bmk#1476: I guess there are two tiers of use cases here
bmk#1476: "run this python function, it doesnt touch the os, internet, etc" and "basically a VM but like safer"
OccultSage#3875: Is there a subset of languages you want to be able to generate? The language/runtime also makes things harder/easier.
bmk#1476: uh, probably python is the main language we care about
OccultSage#3875: Perfect. Run it in a JS VM in a fake web browser inside a VM!
bmk#1476: yeah ok so I think "python, no internet access, fake filesystem but no real OS" is the setting we want
bmk#1476: uh how would that work
bmk#1476: why JS in the first place
OccultSage#3875: Mostly joking -- but people have compiled CPython for WebAsm. ๐
bmk#1476: oh ok
OccultSage#3875: Which isn't a bad idea, as WebAsm is a limited target.
bmk#1476: also as for the interactions between the safe environment and the outside world: we need to be able to put the code in, obviously, and we want to take stdout/return values back out
bmk#1476: that last part seems like it could be a bit tricky
bmk#1476: pickling the return value would be Bad so we'd want to serialize it to some other format
OccultSage#3875: Stdout/return values is pretty easy.
Kia#2550: Is This like an actually AI box attempt?
bmk#1476: this is babby AI box
OccultSage#3875: Keeping SKYNET contained!
bmk#1476: I get that there's almost always a tradeoff between safety and convenience
OccultSage#3875: Yeah, but most people just want to run a `.py` file, right?
bmk#1476: yeah
OccultSage#3875: Invert the question. Why not a restricted browser sandbox?
bmk#1476: why a browser?
bmk#1476: seems kind of weird
OccultSage#3875: Because a lot of work has gone into securing browsers. ๐
bmk#1476: huh
OccultSage#3875: And it makes it easy. You interact with the code generation via a web server.
OccultSage#3875: You get back a .py file you can run.
bmk#1476: so it's meaningfully safer to have the model in a browser in a VM than to just have it run directly on the VM?
OccultSage#3875: Nah, on the user's browser. ๐
OccultSage#3875: Lol
bmk#1476: oh uh
bmk#1476: clarification for use case: this is to be used programmatically by code written by a researcher
bmk#1476: like, from the researcher's perspective it should look like `safe_eval(generated_code)`
OccultSage#3875: So you want to be able to run a bunch of generated code and evaluate the result programmatically.
OccultSage#3875: Got it.
bmk#1476: right
bmk#1476: and it'll probably be running inside a loop so it shouldnt be *too* slow
bmk#1476: oh right and aside from breakouts we also dont want the model to do a fork bomb and slow the computer to a crawl either
bmk#1476: so we need strict limits on memory and cpu usage, and timeout after n seconds
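For concreteness, a sketch of the resource-limit half of that `safe_eval` idea using only the standard library (POSIX-only; the helper name and the specific limits are made up for illustration). It caps memory, CPU time, and process count and kills the child on timeout, but it is not a real sandbox on its own (no filesystem or network isolation):
```python
import resource
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout_s: int = 5, mem_bytes: int = 512 * 2**20):
    """Hypothetical helper: run generated Python in a child process with crude limits."""
    def set_limits():
        # Runs in the child just before exec (POSIX only).
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))    # memory
        resource.setrlimit(resource.RLIMIT_CPU, (timeout_s, timeout_s))   # CPU seconds
        resource.setrlimit(resource.RLIMIT_NPROC, (64, 64))               # crude fork-bomb cap

    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, "-I", path], capture_output=True,
                              text=True, timeout=timeout_s, preexec_fn=set_limits)
        return proc.returncode, proc.stdout, proc.stderr
    except subprocess.TimeoutExpired:
        return None, "", "wall-clock timeout"

print(run_untrusted("print(sum(range(10)))"))  # (0, '45\n', '')
```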
OccultSage#3875: So -- https://doc.pypy.org/en/latest/sandbox.html
Kia#2550: You're fast:thinkies:
bmk#1476: hmm
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/917300406760124426/unknown.png
bmk#1476: sandboxlib: https://cdn.discordapp.com/attachments/729741769738158194/917300491443138570/unknown.png
OccultSage#3875: "One of PyPy's translation aspects is a sandboxing feature. It's "sandboxing" as in "full virtualization", but done in normal C with no OS support at all. It's a two-processes model: we can translate PyPy to a special "pypy-c-sandbox" executable, which is safe in the sense that it doesn't do any library or system calls - instead, whenever it would like to perform such an operation, it marshals the operation name and the arguments to its stdout and it waits for the marshalled result on its stdin. This pypy-c-sandbox process is meant to be run by an outer "controller" program that answers these operation requests."
bmk#1476: thats.. not super promising
OccultSage#3875: If it ain't broke, don't fix it.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/917300681147289610/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/917300709727272980/unknown.png
bmk#1476: that's a lotta warnings
OccultSage#3875: Yeah. It's the approach that is closest to what we want.
bmk#1476: makes sense
OccultSage#3875: And do you actually need the latest Python 3 hotness?
OccultSage#3875: Would Python 3.6 to execute code be ok?
bmk#1476: 3.6 seems fine
bmk#1476: having a clear upgrade path to newer versions would be a plus though
OccultSage#3875: Nice, this seems to do everything we want:
```
To limit the used heapsize, use the --heapsize=N option to pypy_interact.py. You can also give a limit to the CPU time (real time) by using the --timeout=N option.
```
OccultSage#3875: So, fork bomb prevention.
bmk#1476: oh wow
bmk#1476: do you think it would be overkill to stick this entire thing in a VM or nah
bmk#1476: just to be safe
OccultSage#3875: Generally, a VM is overkill with this level of runtime. PyPy is effectively a VM of its own.
bmk#1476: gotcha
bmk#1476: C++ would have been a different story right
OccultSage#3875: Unlike CPython, it's provably correct.
OccultSage#3875: Ohhh, stay far far away from C++ dude.
bmk#1476: https://www.pypy.org/posts/2019/08/a-second-life-for-sandbox-6848726729476245390.html
bmk#1476: C++ is the most popular competitive programming language by a long shot so there's a lot of data for it
bmk#1476: but yeah best to avoid for now
bmk#1476: looks like sandboxing for 3.6 might be a bit of a pita to set up though
OccultSage#3875: Yeah, did they get any takers?
OccultSage#3875: And there's a reason for it -- you can do literally anything, as C++ has the kitchen sink in it. Functional, heavy, light, etc. This means that one person's C++ is dramatically different from another person's.
bmk#1476: ironically everyone who uses C++ for competitive programming uses the same few things lol
bmk#1476: https://mail.python.org/pipermail/python-dev/2013-November/130132.html also found this entertaining read about pysandbox
OccultSage#3875: Yeah, that's why I said, 'uh-uh, not happening with CPython'. He did call out PyPy as the correct approach.
bmk#1476: yeah makes sense
chilli#5665: Today I figured out that a ML compiler problem is not NP-hard, and is fact merely a max-flow/min-cut problem ๐
chilli#5665: feels good man
chilli#5665: too many problems in compilers are NP-Hard
bmk#1476: i used to know a prof who loved max flow min cut problems a lot
chilli#5665: yeah, I mean, a lot of problems can be framed in that manner ๐
bmk#1476: surprisingly versatile indeed
chilli#5665: lol, I remember one year GCJ had a kinda funny max-flow/min-cut problem
chilli#5665: where the "easy" subproblem was basically a very obvious max-flow problem
chilli#5665: but the "hard" subproblem required you to reframe the problem, and you eventually got a different min-cut problem out of it
bmk#1476: galaxy brain
OccultSage#3875: Hmm. https://cdn.discordapp.com/attachments/729741769738158194/917305106888790056/IMG_5397.png
chilli#5665: (it was this one)
chilli#5665: https://codingcompetitions.withgoogle.com/codejam/round/0000000000432fed/0000000000433109#analysis
Kia#2550: Now Im worried, About ML models breaking out from the box
Kia#2550: It's a worth a shot tho:thinkies:
Napolean_Solo#2907: https://wellsaidlabs.com/
These guys have some of the best voice AI models i have heard so far
Napolean_Solo#2907: Do you think they are using Tacotron?
cfoster0#4356: I'd be kinda surprised if they were, but who knows
gabriel_syme#3220: Anyscale looks interesting right?
gabriel_syme#3220: https://www.anyscale.com/
tpapp157#3643: Could. The website doesn't really say much.
gabriel_syme#3220: yeah still waiting list for beta I guess
gabriel_syme#3220: there's a video, didn't look through all of it
bmk#1476: really glad that open source developers starting businesses based on their work is getting more normalized
gabriel_syme#3220: what are you going to call the pyfra startup
gabriel_syme#3220: only half kidding
gabriel_syme#3220: I think it's amazing for OSS getting there as well, can bring such great value imo to industry
bmk#1476: probably going to develop a full suite of tools around experiment automation first
kurumuz#5695: :goose16:
kurumuz#5695: :goose:
kurumuz#5695: :goose2:
kurumuz#5695: :goose3:
tpapp157#3643: Well I wouldn't call them open source developers any more. Now they're just like any other company that offers a stripped down free demo version of their full product. You can't have your cake and eat it too.
bmk#1476: this is a perfect attitude to ensure that there's no incentive to do open source
bmk#1476: I think it's totally counterproductive to treat people who build companies around open source projects as being morally inferior or something
bmk#1476: doing things for free is not sustainable and we should be creating incentives for open source
tpapp157#3643: Oh I think they made the right choice. I'm just saying you can't be indie and corporate at the same time.
gabriel_syme#3220: HF tries and does well I think
bmk#1476: what I'm saying is that we should make it possible to be both as much as possible
gabriel_syme#3220: personally I don't see it as stripped down though, I see it as something someone like me can (hopefully) use
circuit10#0158: If they make their source freely available to use then it's still open source
circuit10#0158: By definition
bmk#1476: I think the valence matters more than the literal definition
Sparkette#4342: I see people talking about this DALL-E model called DALL-3, which is freely available at https://dall-3.com, but I see next to no information about it. Google isn't any help, and even the website I linked is literally just an "index of" that only has the models. Where can I read more about this model?
Sid#2121: i've literally never seen anyone mention this and that website is extremely sus
Sparkette#4342: Actually, filtering the search helped: https://www.reddit.com/r/MachineLearning/comments/r8v4fq/project_dall3_generate_better_images_with_fewer/
Sparkette#4342: I added `site:reddit.com`
cfoster0#4356: It's a diffusion model that operates on VQ codes
cfoster0#4356: Or rather, it takes dVAE/VQGAN codes and decodes them to images with diffusion, guided with CLIP
alstroemeria313#1694: Which now
alstroemeria313#1694: Like diffusion in RGB space?
alstroemeria313#1694: Oh it's that guy!
alstroemeria313#1694: He got his diffusion decoder to work finally!
alstroemeria313#1694: > the condition for a VAE reconstruction error repairer is either the RGB image output by the VAE decoder or the VAE tokens directly. Actually I'm surprised feeding in the VAE tokens didn't work well, it should, I think!
โI commented on his GitHub thread
alstroemeria313#1694: Back before he got it working
alstroemeria313#1694: In the meantime I made a diffusion VQVAE trained end-to-end that worked
alstroemeria313#1694: i.e. not just a repairer for an existing encoder
Kia#2550: DALL-3 is A model Base on DALL-E Pytorch and uses Diffusion model at the end of the process
Kia#2550: Ok no nvm I was late in the convo like an hour
CarsonPoole#0640: random question about T5
CarsonPoole#0640: when you're doing huggingface's `generate()`, is it doing many forward passes or just one when doing a supervised task
CarsonPoole#0640: like for example if I have a T5 set up for summarization or paraphrasing
CarsonPoole#0640: is it doing an autoregressive style generate with every token
CarsonPoole#0640: or is it just doing one forward pass to get the output result
StellaAthena#3530: @CarsonPoole You should go ask HuggingFace. Or the people who made T5.
bmk#1476: don't think it's even possible to generate multiple tokens in one pass with T5
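For what it's worth, the usual `transformers` behavior (hedged; this is the generic seq2seq path rather than anything T5-specific): `generate()` runs the encoder once and then loops one decoder forward pass per new token. A sketch with the loop written out by hand; the real `generate()` also caches the encoder output and past key/values instead of recomputing:
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

enc = tok("summarize: The cat sat on the mat all day long.", return_tensors="pt")

# Roughly what generate() does under the hood: greedy decoding, one token per pass.
decoder_ids = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids=enc.input_ids,
                       attention_mask=enc.attention_mask,
                       decoder_input_ids=decoder_ids).logits
        next_id = logits[:, -1].argmax(dim=-1, keepdim=True)
        decoder_ids = torch.cat([decoder_ids, next_id], dim=-1)
        if next_id.item() == model.config.eos_token_id:
            break

print(tok.decode(decoder_ids[0], skip_special_tokens=True))
```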
gabriel_syme#3220: Do we think that prompting large language models can be a science in itself? I understand that we have a lot of work to do to formally understand the theoretical and practical consequences of interacting with large LMs, but I don't see why there isn't an avenue toward developing a scientific understanding of it, or even a discipline in itself
ilovescience#3282: I have two GANs that I have trained on two separate classes of images. They perform fairly well. I now train a GAN on both classes of images simultaneously. Performance worsens. What are some potential solutions for improving the model? Some things I am thinking of right now are increasing the model size or just training longer.
alstroemeria313#1694: increase model size yeah
alstroemeria313#1694: also. try making the GAN conditional
Some Point Process#3793: I read that as "make thy GAN conditional" at first glance
Some Point Process#3793: (need sleep)
gabriel_syme#3220: get some sleep, sleep is wonderful
tpapp157#3643: Not really. At least not until large LMs are much more well understood in both practical and theoretical terms. Things are moving too fast for serious research of sub-sub-fields to really be of lasting value.
tpapp157#3643: Plus there's some validity to the view that the need for prompt tuning is a deficiency of current language models that should be solved with future improvements.
cfoster0#4356: Can you describe this view in more detail?
Ravna#1831: I agree with him. Better wait for the current wave of scaling to run out of steam first. Studying a non-plateaued LM's prompting technique is likely just studying special quirks of the LM at a specific, medium size.
tpapp157#3643: The simple argument is that other types of models and other types of data don't require prompt tuning to provide useful results, so it's an outlier that LMs do. I think the peculiarities of text data mean that it'll be impossible to completely get away from any sort of prompting for LMs to be useful, but I suspect there's still plenty of scope for LMs to get better than where they currently are.
cfoster0#4356: Interesting. I would guess that prompt tuning (or something very much like it) would be useful for any kind of generative model where the conditioning leaves a lot of ambiguity in what should be generated
cfoster0#4356: If true :thisup: would predict we'll see papers on, say, prompt tuning for image DDPMs and an improvement in performance from it
cfoster0#4356: Anyways I definitely agree there's a lot of chips left on the table with current LMs
alstroemeria313#1694: huh
alstroemeria313#1694: How would you do that actually.
cfoster0#4356: If you use attention in your UNet you can put it there
alstroemeria313#1694: well i mean. what is the loss that would be used.
cfoster0#4356: Regular diffusion MSE loss
alstroemeria313#1694: oh but
alstroemeria313#1694: you mean like on a specific dataset of images, or
tpapp157#3643: Right. In other contexts this takes the form of conditional variables or similar approaches. Like in conditional GANs for example. But even here the approach is often much simpler and more straightforward than when dealing with current LMs. For example, in conditional GANs which are able to easily adjust output in a straightforward and predictable manner based on a single input variable.
cfoster0#4356: Yeah most likely
alstroemeria313#1694: Oh. I've already tried fine-tuning all the weights of the model on a single image
alstroemeria313#1694: To impart its style
cfoster0#4356: That makes sense
cfoster0#4356: Mm if you did this prompt tuning thing, you could train a bunch of soft prompts on different things, and then mix and match diffusion steps from them at runtime, right?
alstroemeria313#1694: hm, this particular diffusion model is conditioned on a CLIP embedding
alstroemeria313#1694: So like. I could hold its weights constant and optimize the input embedding with a diffusion MSE loss instead.
alstroemeria313#1694: Or optimize some sort of more parameterized thing than a single CLIP embedding, like a thing that generates different CLIP embeddings for different timesteps
alstroemeria313#1694: Like there's no soft/hard prompt distinction bc it is continuous to begin with but maybe optimizing different CLIP embeddings for different timesteps might be a similar sort of thing
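A hedged sketch of the "freeze the weights, optimize only the conditioning embedding with the diffusion MSE loss" idea; the model interface, the cosine noise schedule, and the embedding size are all assumptions standing in for whatever the actual model uses:
```python
import math
import torch
import torch.nn.functional as F

def tune_cond_embedding(model, dataloader, cond_dim=512, epochs=10, lr=1e-3):
    """Freeze the diffusion model; optimize only a conditioning embedding with the
    ordinary eps-prediction MSE loss. Assumes model(x_noisy, t, cond) -> predicted
    noise and a cosine alpha/sigma schedule (both assumptions for illustration)."""
    cond = torch.randn(1, cond_dim, requires_grad=True)
    opt = torch.optim.Adam([cond], lr=lr)
    for p in model.parameters():
        p.requires_grad_(False)

    for _ in range(epochs):
        for x in dataloader:                 # images whose "style" we want to capture
            t = torch.rand(x.shape[0])
            noise = torch.randn_like(x)
            alpha = torch.cos(t * math.pi / 2)[:, None, None, None]
            sigma = torch.sin(t * math.pi / 2)[:, None, None, None]
            x_noisy = alpha * x + sigma * noise
            eps_pred = model(x_noisy, t, cond.expand(x.shape[0], -1))
            loss = F.mse_loss(eps_pred, noise)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return cond.detach()
```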
alstroemeria313#1694: also. technically the "prompt" for a diffusion model is the init image
alstroemeria313#1694: like the actual equivalent.
alstroemeria313#1694: but that's hard to optimize bc unlike an AR transformer it does not take a sequence of them but rather just the cumulative effect of the steps thus far.
tpapp157#3643: I wonder if the diffusion step schedule is well conditioned enough that you could start with embeddings defining high level objects and transition to embeddings defining things like texture detail.
alstroemeria313#1694: it would probably work
alstroemeria313#1694: Since CLIP guidance works
tpapp157#3643: kind of like stylegan blending.
alstroemeria313#1694: And the way I do the conditioning it isn't "allowed" to ignore it, I think.
alstroemeria313#1694: I do it with conditional per-channel scales/shifts at each conv layer.
tpapp157#3643: I think stylegan2 transitioned the scales/shifts from the activations to the conv weight matrices to avoid some issues. I'd have to refresh my memory though.
alstroemeria313#1694: they did but the issues arose bc they were normalizing each channel to have mean 0 std 1 then doing the scales/shifts
alstroemeria313#1694: i think i am normalizing all of the channels together to have mean 0 std 1
alstroemeria313#1694: at any rate i haven't seen the artifacts they got
tpapp157#3643: cool
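A sketch of that conditioning scheme as described (my reading of it; using GroupNorm with a single group for the joint normalization, and the layer sizes, are assumptions):
```python
import torch
import torch.nn as nn

class CondScaleShift(nn.Module):
    """Normalize all channels together, then apply per-channel scale/shift
    predicted from a conditioning embedding."""
    def __init__(self, channels: int, cond_dim: int):
        super().__init__()
        self.norm = nn.GroupNorm(1, channels, affine=False)  # one group = joint norm
        self.to_scale_shift = nn.Linear(cond_dim, channels * 2)

    def forward(self, x, cond):
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=-1)
        x = self.norm(x)
        return x * (1 + scale[:, :, None, None]) + shift[:, :, None, None]

block = CondScaleShift(64, 512)
out = block(torch.randn(2, 64, 32, 32), torch.randn(2, 512))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```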
ethan caballero#6044: Does this validly resurrect interest in "folding@home" type efforts? :
https://twitter.com/Tim_Dettmers/status/1468259286715228165
https://twitter.com/m_ryabinin/status/1467947772280201221
StellaAthena#3530: Let me know when they actually train a large model
triggerhappygandi#0001: Basically
naclbbr#9203: I have a spare A100 today so sign me up
naclbbr#9203: I just read the page and sounds a little bit too good to be true from previous similar experiments
naclbbr#9203: maybe just me ๐ค
naclbbr#9203: basically a larger batch + low-bit gradient descent?
naclbbr#9203: wandb integration for individual participants is neat though
kurumuz#5695: yeah its not gonna work lol
kurumuz#5695: they are free to prove me wrong
chirp#4545: I mean they claim to have a path to a 1000x reduction in required bandwidth
chirp#4545: Certainly some of it might not work
chirp#4545: But they're doing much better than anyone else has as of yet, since they actually have an idea of how to do it
chirp#4545: You can watch their live dashboard here: https://huggingface.co/spaces/training-transformers-together/Dashboard
spirit-from-germany#1488: what exactly are they training?
chirp#4545: DALL-E
spirit-from-germany#1488: how many parameters
chirp#4545: 125M (but the model has just as many layers as the full DALL-E; they're doing a lot of cross-layer parameter sharing)
spirit-from-germany#1488: ok... Curious how long this will take to train
naclbbr#9203: we clearly need more participants
bmk#1476: is it actually robust to adversaries
chirp#4545: No
bmk#1476: then it's pretty useless
Sid#2121: 100 steps in 9 days... :harold:
Sid#2121: pretty sure you could train a model of the same size to convergence on a couple A100s in that time
naclbbr#9203: It looks like it's just averaging out
bmk#1476: so basically it's not at all robust
naclbbr#9203: maybe discarding extremes? I'm not sure
bmk#1476: just submit a single huge gradient update and you can break it
kurumuz#5695: hmm, is this DP only
kurumuz#5695: model is pretty small so should be DP only on this one
kurumuz#5695: but im curious in general if they are doing MP or PP at all
Sid#2121: i don't really understand what the point would be, with a 125M model
kurumuz#5695: yeah i know, Im saying if their framework allows it at all
kurumuz#5695: so they can train big models
kurumuz#5695: DALL-E at home: 125M https://cdn.discordapp.com/attachments/729741769738158194/917905868954157066/unknown.png
Kia#2550: Demo model
Kia#2550: `Hi! It's great that you noticed it slight_smile
The demo running there is only a proof of concept: we're training a relatively small model with an architecture chosen by a random guess.
Our plan is to make a few more technical improvements (a couple of days) and then come to you and discuss what's the best model and training config for the main training run.
/* posting to updates is entirely up to you */`
Kia#2550: From @yozh(One Of the T@H people from the DALL-E server)
Kia#2550: They can,But it's just demo for the moment
Kia#2550: I think it's best to ask @Alexander B.
TurnTrout#5101: Is anyone here Good At Twitter / Advertising / Research promotion?
bmk#1476: you should totally do like a session here for show&tell or the reading group or whatever
bmk#1476: dunno if Eleuther is exactly your target audience but would be worth a shot
ilovescience#3282: thanks for the suggestion, will try increasing the model size...
it's already conditional in terms of having an input image (it's an image-to-image translation problem)
alstroemeria313#1694: ahh
gabriel_syme#3220: is it one to many?
gabriel_syme#3220: sry i mean supervised or unsupervised I guess (the translation part)
ilovescience#3282: unsupervised...
gabriel_syme#3220: aha okay! which architectures are you trying?
ilovescience#3282: CycleGAN is my default arch...
DoesThisUnitHaveASoul#7264: anyone around here that works a lot with TPUs? Just wanted to ask some questions on tips and tricks.
bmk#1476: probably best to just ask your question and see who answers
gabriel_syme#3220: aha okay I was just thinking you could try some of the newer ones maybe they work better. Like MUNIT for e.g.
gabriel_syme#3220: I forget the others. SPADE works? Might be supervised
gabriel_syme#3220: I haven't touched that space in years so I'm waaaay behind. Also I did paired data mostly which is..easy I guess
ilovescience#3282: On other datasets, I tried DualGAN, GANILLA and CUT but none perform better than CycleGAN... There are definitely others I need to try but I haven't had time for a comprehensive comparison yet...
gabriel_syme#3220: ofc that's alright, was just suggesting
gabriel_syme#3220: I am also running around in gathertown in Neurips and found one if you're curious
ilovescience#3282: oh an unpaired translation paper?
gabriel_syme#3220: oh no my bad it's cGAN (kind of pix2pix
ilovescience#3282: yeah right now I am looking if there's a quick hack/fix but I probably might need to try some other models soon...
gabriel_syme#3220: https://github.com/samgregoost/Rethinking-CGANs
ilovescience#3282: oh thanks for sharing, it still helps me!
I might be working on some paired problems in the near future as well and I was looking at the literature recently and it didn't seem like there was much improvement since pix2pix (except for models like SPADE which are designed specifically for like scene synthesis)
gabriel_syme#3220: yeah I have to look into that literature myself, we're still using pix2pix models in my stuff (which tbh still just seem SOTA in this task very few care about)
ilovescience#3282: > To this end, we enforce a bi-lipschitz mapping between the latent and generated output manifolds while encouraging Euclidean shortest paths on the latent manifold to be mapped to the geodesics on the generated manifold
I only partly understand this, but there may be some way of enforcing similar constraints for unpaired problems too, with some benefit...
the math looks a little complicated though
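(For what it's worth, the bi-Lipschitz condition quoted above is, in my reading, roughly the statement below, with G the generator, d_Z and d_X distances on the latent and generated manifolds, and K >= 1 some constant. This is a paraphrase of the sentence, not the paper's exact formulation:)
```latex
\frac{1}{K}\, d_Z(z_1, z_2) \;\le\; d_X\big(G(z_1), G(z_2)\big) \;\le\; K\, d_Z(z_1, z_2)
```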
Kia#2550: https://huggingface.co/flax-community/gpt-neo-125M-code-clippy
Kia#2550: thoughts on this model?
chirp#4545: https://twitter.com/tejajuttu/status/1468448601063964673?s=12
chirp#4545: tbh it's a little weird to me how little code ML models take up
chirp#4545: Like in a lot of software domains you can write 500 lines of code in an hour
chirp#4545: But in deep learning land 500 lines could be the culmination of like 2 years of experiments
nostalgiahurts#3408: i wonder what the biggest performance increase from changing just a single character is. maybe something like tuning LR or changing an activation function
kindiana#1016: loss = 0
nev#4905: https://pytorch.org/live/
ilovescience#3282: okay i am thinking about this further...
any thoughts on whether i should increase discriminator model size, generator model size, or both simultaneously?
ilovescience#3282: okay i am trying with a larger discriminator for now..
gabriel_syme#3220: Can't remember which paper but there was one that did that, and kept inference efficient at the same time
ilovescience#3282: yeah i feel like i have read that having a more powerful discriminator can improve results...
m_wAL99#1923: https://nethackchallenge.com/
ari#9020: Humanity will defeat Moloch by training a sufficiently powerful RL agent on NetHack
naclbbr#9203: there was an algorithmic (non-DL) borg for Angband that can beat the game, but NetHack would be really tough with only the limited resources available in the game
naclbbr#9203: ๐
alstroemeria313#1694: both imo
alstroemeria313#1694: well i usually keep them ~symmetric
nev#4905: mom can we have wandb?
no, we have wandb at home
wandb at home: https://cdn.discordapp.com/attachments/729741769738158194/918137873994223676/unknown.png
alstroemeria313#1694: eheh
alstroemeria313#1694: wandb's EMA is so bad
nev#4905: just don't smooth
tpapp157#3643: Not really, discriminators can be a fraction of the size of generators and still provide good training signals. If the discriminator is too powerful then it'll adapt to the generator too quickly, the gradients saturate and lose meaning, and training collapses. This is why a lot of GAN papers have resorted to some rather crude techniques to hamper the discriminator, like directly injecting noise into the layer activations.
alstroemeria313#1694: that's even less useful
alstroemeria313#1694: it depends on the GAN type too, like with WGAN-GP the discriminator can be arbitrarily powerful and it should still work
alstroemeria313#1694: in other contexts the Lipschitz constraint (the GP) would count as "hampering" the discriminator and in fact you can take the GP and use it with a normal GAN loss and it will still make training more stable
alstroemeria313#1694: WGAN-GP is notable bc its loss function is pmuch guaranteed to diverge without the GP constraint
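(For reference, a minimal PyTorch sketch of the GP term being discussed, following Gulrajani et al. 2017. `critic`, `real`, and `fake` are placeholder names and `lambda_gp=10` is the paper's default; none of it is code from this thread:)
```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Per-example mixing weights, broadcast over the remaining dimensions.
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    # Detach inputs so the interpolates are leaves we can take input-gradients of.
    interp = (alpha * real.detach() + (1 - alpha) * fake.detach()).requires_grad_(True)
    scores = critic(interp)
    # Gradient of the critic's output w.r.t. the interpolated inputs.
    grads, = torch.autograd.grad(scores.sum(), interp, create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)
    # Penalize deviation of the gradient norm from 1 (the soft Lipschitz constraint).
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```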
peps#9152: anyone with an rtx 3060 here?
faraday#0862: dear Eleutherians, considering the great consolidation in AI (as mentioned here: https://twitter.com/karpathy/status/1468370605229547522), I wonder how you stay on top of ongoing developments in AI/ML/DNN? Do you have an effective approach?
faraday#0862: how many papers do you have to read per week just to stay in sync?
CRG#8707: (Just read #research :berk: )
tpapp157#3643: Also worth remembering that 99.9% of papers will no longer be relevant 5 years from now. So if you're feeling overwhelmed just take a step back and don't get lost in the constant churn of minutiae.
alstroemeria313#1694: hi i need help with routing tables
alstroemeria313#1694: `172.31.0.0/20 dev ens5 proto kernel scope link src 172.31.8.167` Why is that a /20
alstroemeria313#1694: I need to get from this box to 172.31.128.67
EricHallahan#1051: port?
alstroemeria313#1694: Which is a different subnet
EricHallahan#1051: IDK
alstroemeria313#1694: How do I like... get over to the different subnet
alstroemeria313#1694: This is on AWS
BoneAmputee#8363: <https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing>
alstroemeria313#1694: The other subnet is 172.31.128.0/24
alstroemeria313#1694: These are both my boxes on AWS
alstroemeria313#1694: They are in different subnets of the same VPC
alstroemeria313#1694: I need to like... do something to let them talk to each other
alstroemeria313#1694: One of them is a jump host and the other is in a private cluster subnet
BoneAmputee#8363: I use iptables with cidr notation to whitelist subnets for ssh :cat_thonk: is it a firewall issue?
alstroemeria313#1694: No the boxes literally don't know what router to use
alstroemeria313#1694: And I don't know if there *is* a router or how to make one
alstroemeria313#1694: I literally have to set up routing from scratch
alstroemeria313#1694: There is not routing by default
Kharr#7888: It doesn't matter what you read as long as you keep in mind that all data is some variation of value + coordinates and modality/application doesn't matter too much anymore. The things you learn from any of the areas can be transferred into others.
alstroemeria313#1694: Literally how do I add a route on Linux
rom1504#5008: I'd say it's about how many techniques you know and how deeply rather than the number of papers
alstroemeria313#1694: Like I do not know *what IP to put in for the gateway*
rom1504#5008: What does ifconfig say ?
alstroemeria313#1694: ifconfig is not there lol
alstroemeria313#1694: This is `ip route list`: ```
default via 172.31.0.1 dev ens5 proto dhcp src 172.31.8.167 metric 100
172.31.0.0/20 dev ens5 proto kernel scope link src 172.31.8.167
172.31.0.1 dev ens5 proto dhcp scope link src 172.31.8.167 metric 100 ```
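(Side note: a quick way to see why the destination isn't covered by the on-link /20 route above and therefore has to go through a gateway. This uses the standard-library `ipaddress` module with the addresses from these messages:)
```python
import ipaddress

onlink = ipaddress.ip_network("172.31.0.0/20")  # the "scope link" route above
dest = ipaddress.ip_address("172.31.128.67")    # the box in the other subnet

print(onlink[0], "-", onlink[-1])  # 172.31.0.0 - 172.31.15.255
print(dest in onlink)              # False -> traffic falls through to the default route
```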
alstroemeria313#1694: I cannot get into the other subnet
rom1504#5008: So a gateway is 172.31.0.1
alstroemeria313#1694: Which is now 172.31.128.0/18, I remade it bigger
alstroemeria313#1694: Right.
alstroemeria313#1694: But it can't route stuff into the other subnet somehow?
rom1504#5008: What does traceroute 172.31.128.67 print ?
alstroemeria313#1694: it just hangs
alstroemeria313#1694: * * *
rom1504#5008: So hmm
rom1504#5008: The gateway needs to know how to reach 172.31.128.67
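(If it helps, a small diagnostic sketch: ask the kernel which route it actually picks for that host. The command and address are from the thread; the interpretation is an assumption on my part, since on AWS an intra-VPC hang like this is often a security group or VPC route table issue rather than an on-host routing one.)
```python
import subprocess

# Prints something like "172.31.128.67 via 172.31.0.1 dev ens5 ..." if the
# default route is being used, i.e. the kernel already hands the packet to
# the VPC's implicit router.
out = subprocess.run(["ip", "route", "get", "172.31.128.67"],
                     capture_output=True, text=True)
print(out.stdout or out.stderr)
```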