bmk#1476: ~~BERT~~
~~BART~~
~~BORT~~
BURT
BIRT
bmk#1476: the trend is clear
Louis#0144: FART
Louis#0144: feedforward adversarial routing transformer
Louis#0144: ez
bmk#1476: BLART, BLORT, and BLURT are also viable contenders
Louis#0144: i need an LM named fart
Louis#0144: pls
Louis#0144: 🥺
bmk#1476: be the change you want to see
cfoster0#4356: https://cdn.discordapp.com/attachments/729741769738158194/772516486358630461/Screenshot_20201101-094418_Google_PDF_Viewer.jpg
cfoster0#4356: Am I reading this right, that we were missing an order of magnitude reduction in training compute? (ignoring ELECTRA)
Veedrac#0443: > It’s really just not feasible to crowd source training models like this unfortunately
Google: So we trained AlphaZero for a few hours...
Leela/Leela Chess: Thanks to $large_number of people's support, the next training run is only going to take 6 months!
Ravna#1831: We are even worse than the Leela case actually.
Ravna#1831: Distributed data generation is almost perfectly parallel with little communication overhead.
Ravna#1831: Distributed training on a single NN is not.
StellaAthena#3530: ^^
StellaAthena#3530: Even setting aside the fact that there are significant differences in what we are referring to when we say “the algorithm” compared to Leela, the “hard part” is different because we have a different use case
Veedrac#0443: Yeah RL is a best case
inoryy#0395: > We are planning to release that by the end of the year, train GPT-2 scale models, and try to impress the people who run Google’s TFRC program
@StellaAthena did somebody from Google indicate they'd be willing to give more compute?
bmk#1476: no, we're just hoping for the best
StellaAthena#3530: Again, let me clarify a bit
StellaAthena#3530: We are currently in TFRC and have trained models on their TPUs.
bmk#1476: also we happen to be somewhat of a special case and we already get more quota than the typical member
StellaAthena#3530: Yeah
StellaAthena#3530: There are unofficial priority levels and based on convos with other orgs we are relatively high
bmk#1476: also the fact that we just get more quota
bmk#1476: most tfrc members couldn't create a 2048 even in theory
StellaAthena#3530: We know that Google will, on a case-by-case basis, allocate dedicated TPUs to exciting and impressive projects. We are talking with our point of contact about getting such an allocation for the purpose of training our GPT-3 replica.
bmk#1476: ^
StellaAthena#3530: Our POC likes us a lot, and we are working to build a case he can send up the chain for why we should get this. One of the key components of our case is that we have created a 1.2 TiB dataset of curated and diverse text that we expect to do much better than the CommonCrawl garbage that OAI used.
Right now our primary focus is on preparing for the announcement and release of this dataset. We are writing a paper about the data that we are going to release simultaneously. The data is going to be downloadable from the internet, as well as accessible via HuggingFace.
bmk#1476: (also our POC is the PM for TPUs and also the founder of TFRC i'm pretty sure)
StellaAthena#3530: Oh shit is he
StellaAthena#3530: I didn’t know that lol
bmk#1476: (though he has expressed that there are limitations to what TFRC can and cannot hand out; we just don't know 100% what the limits are)
bmk#1476: > Zak Stone is the product manager for Cloud TPUs on the Google Brain team and the founder of the TensorFlow Research Cloud (TFRC) at Google.
cfoster0#4356: (q: what's the biggest model size we could / would train without a boost from Google?)
inoryy#0395: I think that maybe it's a bit premature to be discussing the dangers of Google using their resources to 'control' you while having no indication that you would get the enormous amount of compute needed for free with no clear benefit for Google.
Bedebao#4842: Besides, a GPT-3 replica is sticking it to OpenAI, who is partnered with Microsoft. Maybe it can be a convincing argument to Google.
bmk#1476: i'm sure google doesn't need our help if that's their goal
StellaAthena#3530: @inoryy I agree, though I think “no clear benefit” is a bit of an overstatement. Google does TFRC for the same reasons companies do pro bono work. It looks good. They want the positive attention that comes with having “look at all these cool orgs that we have enabled” on their website and a very effusive “we are eternally grateful to google” on our website and papers.
aquajet#7800: It also gets more people using TPUs
StellaAthena#3530: Also, I think it’s premature to have conversations about being controlled by google before *some alternative exists*. Not only do we not have the ability to crowdsource the computation, there isn’t a market for TFRC-like programs. We can’t shop around and decide to use Amazon’s version.
StellaAthena#3530: The downside of being extorted by Google and the downside of not working with Google are identical.
inoryy#0395: @StellaAthena fair enough, I guess "immediate benefit" would be closer to what I intended to say.
StellaAthena#3530: Yeah that’s totally reasonable 🙂 we aren’t hardcore google fanboys, girls, and non-binary people. We get that this is a business and such. We are optimistic. And if it doesn’t work, we’ll publish the data and see where we can go from there. Maybe HF can help us, maybe we can go to Amazon and say “hey wanna stick it to Google?”
StellaAthena#3530: We’ll cross that bridge when we get to it.
inoryy#0395: How much contact have you had with Zak? Also you mentioned having access to a full TPU pod, was that a one-off or you had it for a considerable amount of time?
AWK#3939: Hey there, can anyone tell if this estimate is accurate?
https://pakodas.substack.com/p/estimating-gpt3-api-cost
bmk#1476: > Note: This was written before API pricing was announced.
bmk#1476: not promising
bmk#1476: > GPT3 can take the seq_length up to 1024(max supported)
max length is actually 2048
StellaAthena#3530: > How much contact have you had with Zak? Also you mentioned having access to a full TPU pod, was that a one-off or you had it for a considerable amount of time?
@inoryy One of the people who founded EAI has a preexisting relationship with Zak. Connor (not tagging because he’s sick and away from screens for now) met him because Connor was the first person to release an open source GPT-2. The rest is probably a @bmk question (at least of those of us currently awake). I don’t do icky stuff like actually write code or run models 😛
bmk#1476: we can technically create up to a 2048 slice of tpus
bmk#1476: in practice we can only get 512 on a good day and sometimes we can barely get anything
StellaAthena#3530: (I’m a mathematician, methodologist, and ethicist. My job is to tell people what to do and then go take a nap while they actually do it.)
aquajet#7800: At first I thought that said 'meteorologist' instead of 'methodologist'
bmk#1476: anyways my main problems with that post are:
bmk#1476: 1. many of these numbers are very questionable so the result probably has a margin of error of a few orders of magnitude
StellaAthena#3530: @aquajet ewww. Why would I want to meet your urologist?
bmk#1476: 2. the main expense is upfront (hardware costs, the cost of employing that many expensive researchers, etc). i bet OA will only barely make a profit on gpt3 all things considered
AWK#3939: So you think the profit margin is a lot lower?
AWK#3939: I'm interested in using a lot of tokens for creative writing but the pricing makes it difficult.
bmk#1476: i think the amortized costs are massive
bmk#1476: training is very expensive ofc
bmk#1476: hiring all these people isn't cheap either https://cdn.discordapp.com/attachments/729741769738158194/772530129313726464/unknown.png
inoryy#0395: do people writing EAI models have any experience with JAX?
bmk#1476: we've had discussions but we don't use it for any of our projects
inoryy#0395: have you looked into it beyond discussions, even if on non-EAI projects?
bmk#1476: i personally have not, i think some other members here have
inoryy#0395: also by 'discussions' do you mean considering switching the project(s) to it or just talking about it in general?
bmk#1476: we are not considering switching gptneo to jax presently
StellaAthena#3530: Why? Do you think we should? Have you used JAX?
bmk#1476: i don't think jax is good enough for model parallelism
cfoster0#4356: We had discussions about folks doing side-experimentation in JAX
Aran Komatsuzaki#5714: i decided not to do jax for now
inoryy#0395: I do use JAX but can't comment on whether you should switch, don't have enough context on the project 🙂
Aran Komatsuzaki#5714: that's the end of story.
bmk#1476: what matters to us is a) tpu support b) model parallelism support
bmk#1476: (a) is the major reason we haven't switched to pytorch already, btw
StellaAthena#3530: If you want to have technical conversations about this model specifically, I encourage you to check out #gpt-neox-devs which is the channel for talking about the GPT-3 modeling.
inoryy#0395: thanks for the invitation but don't think I could participate in technical discussions 🙂
StellaAthena#3530: Ah okay. No worries 🙂 we try to keep #general (relatively) accessible and general purpose. It seems like things could go in that direction so I wanted to let you know.
Louis#0144: anyone have any pretrained models for abstractive question answering?
cfoster0#4356: Might see another wave of new folks in a bit. The guys at Machine Learning Street Talk just posted an interview with @Daj
bmk#1476: ooh
Deleted User#0000: > Might see another wave of new folks in a bit. The guys at Machine Learning Street Talk just posted an interview with @Daj
@cfoster0 Lol, yea I just saw! https://www.youtube.com/watch?v=HrV19SjKUss About to watch now
bmk#1476: > Is anyone here a native speaker of a language other than English and would be interested in helping some time a few months down the road with a dataset project? We'll be asking for your feedback on dataset text quality in your native language. Please DM me if you're interested.
Louis#0144: cnn's kat kinsman wakes up to the voice of her guardian angel, Loralee. "carrie, you've got to get up! I can't stay here," she tells her teen-aged self. she tries to hug her, but she walks away, leaving her scarred, phlegm-thick body.
Why does this happen?
Louis#0144: from my language model
Louis#0144: @StellaAthena @bmk
bmk#1476: nice glucose bacteria
Louis#0144: It was being prompted by litrotica
asparagui#6391: a phlegm fetish --> true ai progress
Bedebao#4842: No joke, porn is one of the main driving forces of technological progress.
bmk#1476: i've heard this joke before, but how true is it really?
bmk#1476: i'm sure there was porn on betamax
gwern#1782: for BORT, I think you guys might be overinterpreting it. it's about extracting a submodel, model compression, they don't claim it is much more optimal for training from scratch, do they?
gwern#1782: (the model you extract may look very little like the original overparameterized model. in fact, if it did, why didn't you just train the small model to begin with?)
Louis#0144: It's a myth.
It's not possible to see through a glass window.
Louis#0144: new insight from the LM
Louis#0144: this is great
Louis#0144: @bmk
Louis#0144: The Oort Colonies have a giant ice cream cone in the middle of their ice cream. It's not something we can see, but it's not a thing that we can't see either. Why? That's a great question for /r/askoorts
gwern#1782: this model seems oort of order
cfoster0#4356: @gwern I think they do claim that about BORT
Louis#0144: LOL
Louis#0144: We have like 6 people who went to the subreddit
Louis#0144: Almost instantly
Louis#0144: LMAOOOO
Louis#0144: I made it and it already said 10 people were viewing it
Louis#0144: 6 rn
bmk#1476: this is hilarious
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/772631345125785621/Screen_Shot_2020-11-01_at_8.21.04_PM.png
bmk#1476: yall are fast
bmk#1476: who create
Louis#0144: It's the name of a planet in the Oceans of Oort. The moon is the moon of the Ocean. The Moon is the center of the Earth. The Earth is a big place. It has a lot of things to do. Why?
Louis#0144: I asked it to write more for us
Louis#0144: honestly
Louis#0144: im saving this checkpoint
Louis#0144: idk what went wrong
Louis#0144: but Im in love with this
gwern#1782: https://arxiv.org/pdf/2010.10499.pdf#page=8 -4 hours is 'much faster' and '99.3%' vs '99.3%' is better accuracy...? https://cdn.discordapp.com/attachments/729741769738158194/772631824232218674/xwd-1604280130100905.png
aquajet#7800: > idk what went wrong
@Louis nothing
bmk#1476: the Oortcheckpoint
bmk#1476: Oortbot
Louis#0144: I urge everyone here to ask their LM about Oorts
Louis#0144: it must be a conspiracy!
cfoster0#4356: @gwern compare the GPU hours. It's 300 vs 1100 or 26000
gwern#1782: it's comparing the regular pretraining with the distillation, not roberta
cfoster0#4356: The left two columns are BORT, using either regular pretraining or knowledge distillation
Louis#0144: BOORT
cfoster0#4356: "much faster" is seen in Figure 1
Louis#0144: oh man oorts are the new hip thing https://cdn.discordapp.com/attachments/729741769738158194/772633951869075466/Screen_Shot_2020-11-01_at_8.31.12_PM.png
asparagui#6391: it had been there, lurking a million years in the oort belt, but now it made its move. it drifted slowly toward earth, and ...
gwern#1782: I'm imaginging a version of charles stross's coffee club short story, but it's a club of ice cream connoisseurs whose ultimate goal is *primordial ice cream*
asparagui#6391: i want the real deal, the original stuff! ... and for that we have to take _everything_ back to the start, damn the consequences...
asparagui#6391: also https://www.antipope.org/charlie/blog-static/fiction/toast/toast.html
gwern#1782: it is one of his best, back when he was good
gwern#1782: _hopes one day stross will again write something as good as 'a colder war'_
Bedebao#4842: Might be interesting to make a google forms to ask members how they came across this project. To know where it was mentioned, what kind of audience.
StellaAthena#3530: This is a hilarious read
https://www.datainnovation.org/2020/10/proposed-rules-on-ai-bias-would-undermine-dods-ai-plans/
asparagui#6391: http://www.infinityplus.co.uk/stories/colderwar.htm
asparagui#6391: a precursor to the laundry files
AI_WAIFU#2844: I can't remember if we've had this discussion before, what are everyone's odds that GPT-3 is strong superhuman at what it does, putting probabilities on string completions?
bmk#1476: can't we run an experiment on this
bmk#1476: get mturkers to choose one of two possible next words, do the same with gpt3
kindiana#1016: I think GPT3 is superhuman if the person is given the same UI as GPT (assign probability to 50k next token BPEs and calculate crossentropy)
AI_WAIFU#2844: @bmk does that measure what we care about? What distribution are we using to draw the words?
gwern#1782: @AI_WAIFU it's nowhere near humans, the perplexity/loss is way too high
gwern#1782: my best guess was that its absolute performance is still at least twice as bad as humans
gwern#1782: see my essay on GPT-3. it's hard because no one does human evaluations on the same datasets as they use for ML
gwern#1782: https://www.gwern.net/Differences#efficient-natural-languages might be of interest too incidentally
cfoster0#4356: What exactly is the ground truth in this evaluation?
AI_WAIFU#2844: Really?
I'd be very surprised if it wasn't at least as good as humans. What's GPT-3's BPC on regular English? Shannon's paper puts a lower bound of 0.6 BPC after 100 characters, and that's without considering capital letters and punctuation, let alone the rest of the unicode char set. Like I can't even spell as well as GPT-2, let alone GPT-3.
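For reference, per-token cross-entropy converts to bits per character roughly as below; this is a back-of-envelope sketch, and the ~4 characters per BPE token is an assumed average, not a figure from this discussion:
```latex
\mathrm{BPC} \approx \frac{\mathcal{L}_{\text{token}}}{\ln 2 \cdot \bar{c}},
\qquad \mathcal{L}_{\text{token}} \text{ in nats/token}, \quad \bar{c} \approx 4 \text{ chars/token}
```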
gwern#1782: @cfoster0 lambada by construction and I think the other is a text prediction task which extrapolates from human choices or something
gwern#1782: @AI_WAIFU I think you need to work with gpt-3 more if you think it's 'obviously' better than humans. it frequently gets confused or emits nonsense or contradicts itself. particularly in dialogues, it loses track of who said what in a way that any human reading it quickly notices
gwern#1782: all of that points to eg considerable underperformance in prediction of pronouns
AI_WAIFU#2844: It can still have mediocre long range coherence and still mop the floor with humans at BPC. I can check the long form coherence of text fairly easily, but I can't spell very well.
AI_WAIFU#2844: Like think of the analogous situation with images.
cfoster0#4356: ^
gwern#1782: _shrugs. there is no evidence whatsoever that GPT-3 has human-level prediction, and all the evidence is otherwise; if that doesn't convince AI_WAIFU, then there's really nothing more to say_
cfoster0#4356: Lol
cfoster0#4356: I don't think it's human level at general language prediction. But I haven't seen experiments eliciting next token distributions from humans. Isn't that how BPC is calculated?
gwern#1782: there are tons of them. I linked my writeup with them
gwern#1782: they go back to shannon himself
gwern#1782: lots of people have been interested for both practical and theoretical reasons what the intrinsic entropy of language is
gwern#1782: (spoiler: no compressor / model gets anywhere near)
cfoster0#4356: Hm. I see.
AI_WAIFU#2844: You have a discriminator that can tell the difference between real images and fake images by looking at low frequency features. Then you claim that this means you have a model that can put high probability on images. But really what you have is a model that can do this: https://cdn.discordapp.com/attachments/729741769738158194/772991966647681114/1s8rroD7abLrErIWuIzn_ag.png
gwern#1782: GPT-3 isn't a GAN
AI_WAIFU#2844: I wasn't talking about GPT-3, I was talking about humans.
AI_WAIFU#2844: My claim is that humans are like VAEs, they get long range coherence right, but they can't put high probability on the data. GPT-3 is like early autoregressive image models. No long range coherence, but much lower perplexity.
gwern#1782: you didn't even look at the lambada or 1bw estimates, and *what* long-range coherence in 1bw are they exploiting...?
AI_WAIFU#2844: No, I did, but it was a while ago.
LAMBADA is cheating. From the paper:
> For a given passage,
> 1. one human subject guessed the target word based on the whole passage (comprising the context and the target sentence); if the guess was right,
> 2. a second subject guessed the target word based on the whole passage; if that guess was also right,
> 3. more subjects tried to guess the target word
> based on the target sentence only, until the word was guessed or the number of unsuccessful guesses reached 10; if no subject was able to guess the target word, the passage was added to the LAMBADA dataset.
This is not general prediction of english. This is picking out the subset of english where humans are good at using context for prediction of words. Not even factoring in character level prediction, or the rest of the ASCII character set.
As for 1bw, if you're referring to this:
https://www.gwern.net/docs/ai/2017-shen.pdf
that's not an estimate of human perplexity on text. That's an estimate of machine perplexity necessary to fool humans. There's a difference. Going back to the image analogy, that's like estimating the perplexity needed to get a photo-realistic FID/IS score. Doesn't mean the thing you used to calculate FID/IS is a good model of the data.
Louis#0144: https://twitter.com/dril_gpt2/status/1323448058441428993?s=21
Louis#0144: True
Louis#0144: > You have a discriminator that can tell the difference between real images and fake images by looking at low frequency features. Then you claim that this means you have a model that can put high probability on images. But really what you have is a model that can do this:
@AI_WAIFU you should read the delorean paper
Louis#0144: It’s about an LM basically trained with a GAN
Louis#0144: it’s really effective
Deleted User#0000: hmm. So if entropy of english is 0.8 bpc, and LMs are using on the order of 1000 characters of context, that would be about 2^800 \approx 10^240 possible contexts, so I expect that every training sample in the dataset has a different context, so that a LM big enough could just memorize the next token for every observed context, and reach a cross-entropy arbitrarily close to 0?
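Making the arithmetic in that estimate explicit:
```latex
1000\ \text{chars} \times 0.8\ \text{bits/char} = 800\ \text{bits}
\;\Rightarrow\; 2^{800} = 10^{800\log_{10}2} \approx 10^{240}\ \text{possible contexts}
```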
bmk#1476: but held out set tho
Deleted User#0000: right yeah
Deleted User#0000: i was thinking training, true
Deleted User#0000: hmm tho
bmk#1476: even then if you only see each sample once...
AI_WAIFU#2844: Yeah, it's funny, OpenAI actually has everything they need to evaluate the Bayesian probability of the data under their model.
Deleted User#0000: by bayesian probability you mean the likelihood?
Deleted User#0000: P_theta(data)?
AI_WAIFU#2844: Yeah, P(data | model)
Deleted User#0000: yea
AI_WAIFU#2844: Not the trained model, mind you, what I'm referring to is P(data | source code)
Deleted User#0000: Ah
Deleted User#0000: how would they calculate that?
AI_WAIFU#2844: Since it's a one-pass algorithm, just sum up the log prob the model assigns to all the tokens during training.
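A minimal sketch of that procedure; `log_prob` and `update` are hypothetical stand-ins for a real training loop's loss evaluation and optimizer step:
```python
def prequential_log_evidence(model, batches):
    """Sum log P(batch | everything seen so far) over a single training pass.

    By the chain rule, log P(x_1..x_N) = sum_i log P(x_i | x_<i), so scoring
    each batch *before* updating on it yields log P(data | training program).
    """
    total = 0.0
    for batch in batches:
        total += model.log_prob(batch)  # hypothetical: log-prob of the not-yet-seen batch
        model.update(batch)             # hypothetical: one gradient step on that batch
    return total
```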
AI_WAIFU#2844: If they did that, they could show that the model evidence is significantly lower than a naive estimate of the entropy of the text, and they could tell all the GMs of the world to STFU.
bmk#1476: GM?
bmk#1476: wait nvm
Deleted User#0000: why does being one pass matter here?
to get P(data | source code) you'd need to marginalize over all possible initializations, and other stochasticity in the model
bmk#1476: i get it
Deleted User#0000: whats gm
Deleted User#0000: anyway i think that because LMs could be much better compressors, i wouldnt be surprised if they reach lower entropies than current estimates
AI_WAIFU#2844: Because one pass allows you to interpret the assigned probabilities as the factors in the chain rule of probability.
Deleted User#0000: game masters?
AI_WAIFU#2844: Gary Marcus
Deleted User#0000: > Because one pass allows you to interpret the assigned probabilities as the factors in the chain rule of probability.
@AI_WAIFU well u can have multiple passes over m examples before seeing the m+1th, and still do what u said, and it would be more accurate
Deleted User#0000: tho still what u say is only an approximation coz u are not marginalizing over initializations
AI_WAIFU#2844: That works too. And will give you better model evidence.
Deleted User#0000: in fact i think ur approximation only works well if the softmax probabilities approximate the actual bayesian uncertainty, which DNNs are not very good at coz they are not well calibrated, empirical stuff suggests
AI_WAIFU#2844: well technically its P(data | code and random seed)
AI_WAIFU#2844: I wouldn't say it's an approximation, the procedure *defines* a model over the data.
AI_WAIFU#2844: The calculation of that probability is exact.
Deleted User#0000: yeah, tho if u make it one pass, it depends on the order of the data, for example
Deleted User#0000: so its a model over sequences of samples which is not really the object of interest
AI_WAIFU#2844: True.
Deleted User#0000: you could get better calibrated uncertainties using some more bayesian training methods like SWAG or MultiSWAG
Deleted User#0000: btw thanks for this discussion; ive been thinking a lot recently about how to estimate bayesian evidence of DL models, and this has given me new ideas
AI_WAIFU#2844: I wonder. The cool thing is that with this interpretation, and a fixed order, you can view any training procedure as a model. Would SWAG or multiSWAG give you better model evidence?
Deleted User#0000: > You can view any training procedure as a model
yeah but over sample sequences which isnt that interesting? unless you thought of some creative way to use this somehow coz u are right it is _a model_ hmm
Deleted User#0000: yeah from what i read swa, swag and multiswag would make the probabilities estimated from softmax be closer to the actual bayesian predictive probability of token i
Deleted User#0000: hence better approx of evidence in theory
Deleted User#0000: swa->swag->multiswag be like expanding bayesian brain
Deleted User#0000: or you could use NNGPs but i havent heard any success stories of those for NLP probably coz not scalable enough:/
AI_WAIFU#2844: yeah, the main reason I thought of it is just as a way to get the "It's just memorizing the data" guys to shut up. I don't know if it's useful for anything other than that.
Deleted User#0000: how is this helping against the "It's just memorizing the data" guys ?
Deleted User#0000: well i guess, what is it showing beyond what u see from test data
AI_WAIFU#2844: Because if you have a 10kb program that ingests 300GB of text and shrinks it down better than the best compression algorithms, you must have done something more than memorize the data.
AI_WAIFU#2844: Same rationale behind the Hutter Prize
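The arithmetic behind that intuition, assuming a round 1 bit/char purely for illustration: 300 GB of text is about 3×10^11 characters, so
```latex
3\times10^{11}\ \text{chars} \times 1\ \text{bit/char} = 3\times10^{11}\ \text{bits} \approx 37.5\ \text{GB}
```
i.e. the 10 kB program plus its coded output would be far smaller than the raw data, which pure memorization cannot achieve.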
Deleted User#0000: yeah, and if you generalize to test data you must have also done something more than memorize
Deleted User#0000: the two are intrinsically linked too
Deleted User#0000: bayesian evidence and generalization performance
AI_WAIFU#2844: But they *are* different.
bmk#1476: @AI_WAIFU we only really need to look at the best loss achieved
bmk#1476: since we can just stop it there and amortize over an arbitrarily large amount of data
AI_WAIFU#2844: I'm gonna be controversial and say you're wrong.
bmk#1476: oh
Deleted User#0000: recent work shows that Bayesian evidence is formally equivalent to an average over cross-validations,
older work gives you a bound that says Bayesian evidence / training set size bounds the generalization error (PAC-Bayes)
even older work (https://www.gwern.net/docs/ai/1990-schwartz.pdf) (+a bit of my work) shows that if average error follows a power law learning curve, then bayesian evidence does too with the same exponent, and the two are proportional to each other
Deleted User#0000: so they are kinda both measuring the same thing really
AI_WAIFU#2844: @bmk When you say "we can just stop it there and amortize over an arbitrarily large amount of data" you're assuming you have data you don't have.
AI_WAIFU#2844: @Deleted User the key is *which* average
Deleted User#0000: ?
Deleted User#0000: average error over the Bayesian posterior and training set samples
bmk#1476: i mean as long as the loss is on held out data and we know we can get more data it should be fine, no?
Deleted User#0000: (thats for the 3rd result)
AI_WAIFU#2844: I meant cross validations.
Deleted User#0000: ah over all choices of split, and over all split sizes
AI_WAIFU#2844: Yup.
AI_WAIFU#2844: When people evaluate the test set, they usually train on all the data except the last little bit.
Deleted User#0000: i didnt quite get what bmk said
AI_WAIFU#2844: @bmk No.
AI_WAIFU#2844: Two models can make the same predictions after seeing enough data, but the one with the higher bayesian evidence will have generalized better when getting there.
AI_WAIFU#2844: And is also more likely to explain the underlying phenomenon
Deleted User#0000: hm? i like bayesian evidence but i still think test error is the gold standard
Deleted User#0000: its the most accurate estimate of generalization error, for a big enough test set
Deleted User#0000: (assuming i.i.d.)
Deleted User#0000: ((but all this theory assumes i.i.d.))
AI_WAIFU#2844: I agree that it's the best estimate of LOO (leave-one-out) generalization. My controversial claim is that we shouldn't rely on LOO generalisation as our metric. Especially if we're trying to build AGI.
AI_WAIFU#2844: Bayesian evidence is a better metric in that case. And so an average over evenly spaced cross validation splits is the way to go.
Deleted User#0000: why do you think loo generalization is bad? that is the standard metric of performance defined as probability of error under a new sample
Deleted User#0000: its the expected cost per sample of your trained model
Deleted User#0000: the cool thing about evidence is that you can compute it while having trained on all the data u got
AI_WAIFU#2844: I mean sure, if you're gonna freeze your model, and you only care about expected log loss, it's the way to go. But it's not bayesian, and in environments where you are learning continuously and have limited data, you don't want that.
AI_WAIFU#2844: Let me illustrate with an example.
AI_WAIFU#2844: Suppose you have a process that draws images from a finite list of images.
AI_WAIFU#2844: You can make a model that guesses blindly at first and then puts delta function densities on the images it's seen.
AI_WAIFU#2844: Or you can make a model that does the same thing as the first, but uses an autoregressive NN to better predict the images at the start.
AI_WAIFU#2844: Eventually they will be evenly matched in LOO validation, as they will both have memorized the data. But one has in a sense learned "more" about the data than the other.
AI_WAIFU#2844: And it's the one with the lower bayesian model evidence.
Deleted User#0000: right so you are saying that LOO is only a good measure if you measure it on the distribution of interest
AI_WAIFU#2844: Yup.
AI_WAIFU#2844: If you want to do something other than that, it's better to do what reverend bayes tells you to do.
Deleted User#0000: i dont know if bayes ever promoted bayesian evidence
Deleted User#0000: hmm
Deleted User#0000: sorry bayesian evidence for model selection is just bayes theorem
Deleted User#0000: so ok
AI_WAIFU#2844: I mean you can cut some corners and not marginalize over random seeds and models, but you get the idea.
Deleted User#0000: yeah im now convince that if you expect your distribution to shift, Bayesian evidence is a better guidance than test error
Deleted User#0000: thats a nice insight
Deleted User#0000: however, bayesian evidence is kinda hard to compute
Deleted User#0000: but maybe with the ideas we discussed above..
AI_WAIFU#2844: Exactly, just make a couple of evenly spaced checkpoints throughout your training process, and evaluate on data you haven't seen yet.
AI_WAIFU#2844: That will give you a good estimate with next to no overhead.
AI_WAIFU#2844: Since you're directly evaluating the model evidence of your learning program that you wrote.
AI_WAIFU#2844: You can get more accurate by keeping track of performance on all newly seen data.
Deleted User#0000: > You can get more accurate by keeping track of performance on all newly seen data.
@AI_WAIFU what u mean?
AI_WAIFU#2844: If you do multiple passes over your data, do them in such a way that you do multiple passes only over what you've already seen. Not what you will see.
Deleted User#0000: ah right
Deleted User#0000: yeah but even single pass is fine, under your interpretation of a model over sequences of samples rather than sets of samples
Deleted User#0000: which is another interesting insight
AI_WAIFU#2844: Yup.
Deleted User#0000: lol ive come from being skeptical to now i wanna see this being tried everywhere xD
Deleted User#0000: today i became a bit more bayesian
Deleted User#0000: my posterior probability of being bayesian is higher now
AI_WAIFU#2844: lmao
AI_WAIFU#2844: Now where's gwern? He accused me of being unread and left.
Deleted User#0000: however, i still think bayesian evidence can suffer from similar problems as test error
one issue we can have with data (like the pile) is that its not truly i.i.d., and its more correlated than truly i.i.d. text; that can make the test accuracy appear higher than it really is, but it can also make the bayesian evidence appear higher than it is
Deleted User#0000: (why do things alway be so complicated irl?)
AI_WAIFU#2844: Really?
Deleted User#0000: yeah right? you can imagine that if u have correlations like say lots of books on physics, then after seeing a bunch, the next bunch of physics books will have high probability, inflating the bayesian evidence
Deleted User#0000: from the same issue that is inflating the error
AI_WAIFU#2844: If the parameters of the network overfit to the data you're currently looking at because of correlations in the train data stream, that seems like a feature, not a bug.
Deleted User#0000: it means that your performance metric is over-optimistic
AI_WAIFU#2844: No?
Deleted User#0000: assuming in actual application of the model those correlations wont be there
AI_WAIFU#2844: Because you're evaluating the program, not the network at that point in time.
Deleted User#0000: which is the issue of distribution shift
AI_WAIFU#2844: But won't those correlations be there?
Deleted User#0000: why would they be? like the average gpt-neo user may not be using it on data exactly like the pile
Deleted User#0000: i mean
Deleted User#0000: correlations in input samples
Deleted User#0000: like imagine an extreme example that the pile was all physics books
Deleted User#0000: that wouldnt be very good right?
AI_WAIFU#2844: Yeah. I get that.
AI_WAIFU#2844: I think we have different applications in mind.
AI_WAIFU#2844: If you're thinking of freezing the weights, I agree with you.
Deleted User#0000: what application do you hav in mind?
AI_WAIFU#2844: But if you don't do that, in general users will have their own test distribution, and so you want the program that's the best at adapting to it.
AI_WAIFU#2844: And you'll get that by evaluating how quickly your model keeps up with the correlation in your train stream.
AI_WAIFU#2844: Which is the model evidence.
Deleted User#0000: ok, tho i dont think there are any rigorous guarantees that bayesian evidence will measure performance if you dont assume your input stream is i.i.d.
Deleted User#0000: (i still get the bayesian story of it being nice and stuff, but u cant really prove it unless u assume ur priors are good and i donno stuff)
Deleted User#0000: (which tbf is probably fair to assume in many cases)
AI_WAIFU#2844: Nope, but in non-iid environments, I don't know what to go off of other than bayes.
Deleted User#0000: yeah i donno solomonoff?
Deleted User#0000: which is bayesian anyways
AI_WAIFU#2844: That's just bayes
Deleted User#0000: yea
AI_WAIFU#2844: with a tm prior
Deleted User#0000: tm?
AI_WAIFU#2844: turing machine
Deleted User#0000: ahye
AI_WAIFU#2844: actually, since you're evaluating P(data | program), optimizing it is a variational approximation of Solomonoff induction.
Deleted User#0000: hmm?
AI_WAIFU#2844: Well kinda, its Solomonoff induction with probabilistic TMs. Since your program defines a distribution over bit streams.
AI_WAIFU#2844: That brings up a cool idea.
AI_WAIFU#2844: Train an NN on its own generated output and watch what patterns it settles into.
Deleted User#0000: hmmm
Deleted User#0000: nice
Deleted User#0000: but wait
Deleted User#0000: how do you start this
Deleted User#0000: coz if u start at initialization theta, and generate the outputs from theta, it wont learn anything from the start?
Deleted User#0000: coz its already optimal?
AI_WAIFU#2844: There's gonna be random drift though.
Deleted User#0000: not sure..
Deleted User#0000: loss is a global minimum
Deleted User#0000: unless u add noise
Deleted User#0000: to the optimizer
Deleted User#0000: unless im misunderstanding what u proposed
AI_WAIFU#2844: The noise comes from drawing samples from the network, and then doing minibatch gradient updates.
Deleted User#0000: drawing samples of parameters, or of inputs?
AI_WAIFU#2844: outputs from the model, which then become training examples.
Deleted User#0000: outputs from the sequence you defined, with the random seed being sampled too?
AI_WAIFU#2844: You have 2 random seeds. One is internal to the process, and is used for things like dropout and weight initialization. The second is used to sample from the process output distribution.
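A minimal sketch of the proposed experiment, again with hypothetical `sample`/`update` stand-ins:
```python
def train_on_own_output(model, n_steps, seq_len=128):
    """Train a model on sequences sampled from itself.

    The expected gradient is zero (sum_x P(x) * grad log P(x) = 0), so the
    weights should random-walk under sampling/minibatch noise rather than
    collapse to always emitting one token.
    """
    for _ in range(n_steps):
        seq = model.sample(length=seq_len)  # hypothetical: draw a sequence from the model
        model.update(seq)                   # hypothetical: gradient step maximizing its log-prob
```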
Deleted User#0000: yeah i think i see
Deleted User#0000: i think it would just degenerate quickly into producing the same token
AI_WAIFU#2844: This perspective actually gives us some immediate insight into the "assumptions" behind training procedures that are 1 pass vs multipass.
AI_WAIFU#2844: Neither will degenerate to outputting the same token
AI_WAIFU#2844: but 1 pass will see its weights drift through the space of parameters randomly, like a markov process.
StellaAthena#3530: This is dope: https://youtu.be/pTn6Ewhb27k
AI_WAIFU#2844: Multi pass algorithms have "memory" and their model weights should roughly converge.
Deleted User#0000: so im imagining you are running a transformer or something autoregressively, but every time it samples an output, it uses that output to train itself?
AI_WAIFU#2844: yes
Deleted User#0000: why wouldnt it just learn: ok so i make output i which i sampled more likely. in the next step its more likely to be sampled, and etc etc until its just always sampled
AI_WAIFU#2844: because the context will change and it will occasionally sample something rare. You can probably show that the expected gradient is zero.
AI_WAIFU#2844: Mathematically
Deleted User#0000: yeah i know the context should change but i still see the degenerate solution as being a fixed point
AI_WAIFU#2844: Since Sum(P(x) * del(log(P(x)))) = 0
AI_WAIFU#2844: So no degeneracy just noise
Deleted User#0000: why is that 0
Deleted User#0000: ah wait
Deleted User#0000: hm
AI_WAIFU#2844: @StellaAthena You're better at this than I am.
Deleted User#0000: i think i see its 0, but now i wanna see why my intuition could be wrong
bmk#1476: What's the intuition for why this is true
bmk#1476: Log derivative trick?
AI_WAIFU#2844: Yup, but it's probably better to think that the min of expected -log Q(x) under distribution P occurs when Q = P
Deleted User#0000: Sum(P(x)*del(log(P(x)))) = Sum(P(x)*(1/P(x))*del(P(x))) = Sum(del(P(x))) = del(Sum(P(x))) = del(1) = 0
AI_WAIFU#2844: As @Deleted User said, you're already at the minima
AI_WAIFU#2844: so the gradient is 0 in expectation
Deleted User#0000: yeah
Deleted User#0000: very hmm
AI_WAIFU#2844: Thus you get parameter drift when doing SGD
bmk#1476: Wait so what does gradient being zero in expectation imply?
bmk#1476: Oh wait it's only zero when it's at the correct place
Deleted User#0000: which it always is
AI_WAIFU#2844: But in this case *everywhere* is the correct place
bmk#1476: So you're showing that it does actually have a well defined minimum?
AI_WAIFU#2844: no, you're showing that it drifts randomly.
bmk#1476: Wait, the expected gradient is *always* zero?
bmk#1476: Oh right this is the training on its own data thing
AI_WAIFU#2844: Yes
bmk#1476: Whoops I thought you just meant in general
bmk#1476: So tldr training on own data is uninteresting because it's just a walk then?
Deleted User#0000: it would degenerate if you used a loss function like MSE, with target the sampled one-hot label
Deleted User#0000: or something random like that just designed so that my intuition works xD
AI_WAIFU#2844: No it's super interesting, because it tells you about the implicit assumptions of one-pass language modeling under the bayesian interpretation.
bmk#1476: It's too late I should go to bed I'm not absorbing any of this
AI_WAIFU#2844: Namely that it's a markov process in parameter space.
AI_WAIFU#2844: And multipass methods aren't
bmk#1476: I'm not sure what the heck you mean by the Bayesian interpretation of a LM
AI_WAIFU#2844: and converge to a parameter attractor.
AI_WAIFU#2844: The thing me and @Deleted User were talking about earlier.
bmk#1476: Sorry I haven't been following the discussion
bmk#1476: And I'll probably ask tomorrow
AI_WAIFU#2844: Where you view the LM training program as a model and evaluate the probability of the data under it
AI_WAIFU#2844: by summing the probabilities it assigns to data it's just about to update on.
bmk#1476: I'm going to ask you tomorrow
bmk#1476: I'm not absorbing any of this
AI_WAIFU#2844: Yeah I gotta sleep too.
Deleted User#0000: this is indeed quite interesting
Deleted User#0000: coz random walk in parameter space is not random walk in function space
Deleted User#0000: uwu i wanna try this
AI_WAIFU#2844: If anyone tries to turn this into a paper. You have to say that an anime PFP obsessed with catgirls who goes by the name of AI_WAIFU gave you the idea.
Deleted User#0000: should put u as author
Deleted User#0000: as AI_WAIFU
bmk#1476: I can't wait to see that happen
Deleted User#0000: *catgirl correspondence
bmk#1476: AI_WAIFU*
EleutherAI
*Address correspondence to [email protected]
AI_WAIFU#2844: Honestly, you don't need a tonne of compute to demonstrate the point.
AI_WAIFU#2844: I'll put it on the backlog.
bmk#1476: That would jive with our whole thing very well
bmk#1476: The whole "casual research" thing
bmk#1476: I cannot wait for a paper with that on the author list to materialize from eleuther
Deleted User#0000: > Honestly, you don't need a tonne of compute to demonstrate the point.
@AI_WAIFU can just try with small transformers to begin with
AI_WAIFU#2844: yup
Deleted User#0000: but if u have bigger transformers u can try those too
Deleted User#0000: im still n00b in all this nlp and stuff
AI_WAIFU#2844: I don't actually know what bigger transformers would buy you though, but it might be cool anyways.
Deleted User#0000: like generate sequences of words, rather than sequences of binary data or something which is what id try first xD
bmk#1476: > i can barely understand the math yall are talking about
> "still n00b"
Not helping my impostor syndrome
Deleted User#0000: coz minimalism*
*(really not knowing how to do the advanced stuff)
AI_WAIFU#2844: ok fr I gotta go to bed.
Deleted User#0000: i mean i can do math but engineering is a different skillz
Deleted User#0000: but yeah i should go to bed too
Deleted User#0000: good night
bmk#1476: I barely know how derivatives work lol, i gave up like a quarter of the way into diffgeo
bmk#1476: Anyways yeah I gotta sleep too
Deleted User#0000: "barely know how derivatives work lol" >< "went a quarter into _diffgeo_"
bmk#1476: I gave up trying to understand differential forms
bmk#1476: Also do you *really* understand derivatives if you don't do diffgeo
Deleted User#0000: *sweats* and tries to remember all the defense mechanisms that they teach in physics when attacked by a mathematician
bmk#1476: Is it even differentiation if you're not doing it on a wacky surface
Deleted User#0000: *derivative is just (thing - almost thing)/almost*
bmk#1476: Yeah but *currrrrves*
bmk#1476: m a n i f o l d
Deleted User#0000: > *derivative is just (thing - almost thing)/almost*
@Deleted User somehow it can be reduced to this i donno, or my whole life is a lie
Deleted User#0000: i have learnt parts of diffgeo but really learning it *properly* has been in my backlog forever
Deleted User#0000: i learnt the basics of the curve+surfaces part of it tho
Deleted User#0000: and some random bits of other bits
Deleted User#0000: but like most bits not yet
bmk#1476: If you ever figure it out lmk lol
Deleted User#0000: will do
Deleted User#0000: one day....
bmk#1476: Ok it is actually seriously really sleep time this time
Deleted User#0000: 5am
Deleted User#0000: kek
Deleted User#0000: gn
chirp#4545: https://www.hpcwire.com/2020/11/02/aws-ultraclusters-with-new-p4-a100-instances/
cognomen#6297: next stop permutation city
Louis#0144: Anyone know any good NLI/NLU labs?
Louis#0144: besides stanford and NYU
spirit-from-germany#1488: Here’s an idea that had been on my mind for a few months:
To teach sequence models (like GPT etc.) about real-world interactions and common sense, they should be fed data about scenes and the physical world. The problem is that raw video & audio data would be too huge for computers today and in the near future … and that we humans also don’t attend to every pixel whenever we think about interactions and scenes.
Instead, we have perception modules that evolved for things like face recognition and attractiveness evaluation, … and whenever we think about things and concepts, we don’t perform mental operations at the pixel or waveform level, but on abstract representations similar to language, numbers or other symbols.
In my opinion it would be plausible and practically feasible to create an ensemble of narrowly trained recognition and captioning algorithms that could extract abstract features (like natural language descriptions, poses, bounding boxes, …) from images, videos and audio data, capturing the information in them that is likely most important for a human-like understanding of social, situational and physical interactions.
By doing so the content of movies, tv shows, youtube videos, … could be reduced to much smaller sizes without sacrificing too much of the relevant information.
Of course it would be challenging to create or gather an ensemble of capable and performant recognition / segmentation modules.
But AI systems in these areas keep progressing constantly, such that it will become increasingly easier to create ensembles which will capture more and better abstract features.
spirit-from-germany#1488: https://docs.google.com/document/d/1-ukb7KVf9_ATg_uIBog73m5m8hBe8zf0V5Bw9vsWwEw/edit?usp=sharing
CRG#8707: Jukebox did something similar with the VQ code vocabulary.
CRG#8707: The new scaling laws paper also used it for image and video.
XMaster96#7538: do you have a link ?
CRG#8707: > do you have a link ?
@XMaster96 https://arxiv.org/abs/2005.00341 https://arxiv.org/abs/2001.08361
gwern#1782: (we don't attend to every pixel in consciousness, but *something* has to filter incoming photons and selectively discard them, and that's much of what the retina and optical nerve do, and those aren't free for humans)
dudekingbromanguyokay#2595: (waves) hihi smart folks! I'm retraining a GPT-2 XL model on Google Colab Pro due to lack of formatting of my text files the last go 'round (results were not as good as I'd like). Is there a tutorial handy on data cleaning across a bunch of text files I'd like to amalgamate into a single npz file for training? Things like newline stripping, adding the <|endoftext|> token, etc?
Deleted User#0000: @dudekingbromanguyokay i think the script provided in the repository should take care of appending end of text tokens
Deleted User#0000: what other kinds of preprocessing do you need? besides stripping excess new lines?
dudekingbromanguyokay#2595: @Deleted User concatenating files from a directory ... also probably an easy to follow tutorial
Deleted User#0000: http://nlc2cmd.us-east.mybluemix.net/ just ask gpt-3
Deleted User#0000: @dudekingbromanguyokay you shouldn't have to concat the files together, the tensorflow record generating script should take care of all that for you
dudekingbromanguyokay#2595: >the tensorflow record generating script should take care of all that for you <- ...(insert confused face) I've been mostly using a google colab for training from a forked gpt-2-simple & haven't looked at or obtained access to the GPTNeo stuff...excuse my ignorance...so not sure what record generating script you're referring to. 😦 Help appreciated 🙂
dudekingbromanguyokay#2595: from reading, this sounds like (maybe?) what I need - https://github.com/shawwn/gpt-2/blob/tpu/prepare_dataset.py <- ?
Deleted User#0000: ohhh, i thought you were using gpt-neo
Deleted User#0000: https://github.com/EleutherAI/GPTNeo |
Deleted User#0000: yeah, then i don't know about gpt2-simple
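A generic sketch of the kind of cleanup being asked about, assuming plain .txt inputs; gpt-2-simple and the linked prepare_dataset.py may handle this differently, and the .npz encoding itself would still be done by the training repo's own script:
```python
import glob
import re

def merge_text_files(src_dir, out_path, eot="<|endoftext|>"):
    """Concatenate .txt files into one training file, collapsing runs of
    blank lines and separating documents with an end-of-text token."""
    with open(out_path, "w", encoding="utf-8") as out:
        for path in sorted(glob.glob(f"{src_dir}/*.txt")):
            with open(path, encoding="utf-8") as f:
                text = f.read()
            text = re.sub(r"\n{3,}", "\n\n", text).strip()  # strip excess newlines
            out.write(text + "\n" + eot + "\n")

# e.g. merge_text_files("my_texts", "combined.txt")
```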
Bedebao#4842: It seems EleutherAI is getting more mentions on 4chan.
bmk#1476: link pls
Bedebao#4842: today and yesterday saw a surge https://arch.b4k.co/_/search/boards/v.vg.vm.vmg.vrpg.vst/text/eleutherai/
cfoster0#4356: AI Dungeon just announced a bunch of changes, no?
Bedebao#4842: Disastrous ones.
bmk#1476: i feel proud of what we have accomplished
Bedebao#4842: A fucking stamina bar. It's now a mobile game.
bmk#1476: to be talked about by autists on 4chan is truly the most exalted honor to be bestowed upon EleutherAI yet
Bedebao#4842: Hey, maybe you could find some more foreign speakers for Pile v2 if you asked there?
bmk#1476: eh we're not in a hurry to get that done
bmk#1476: we'll definitely pick up the pace of speaker collection as we get close to the time when we actually do the things
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774420222790467594/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774420816800120842/1604643002359.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774420884646920223/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774420921808977930/unknown.png
bmk#1476: it's fascinating being on the *other* side of wild speculation
kindiana#1016: its not even like what eleuther does is secret at all, just come here and ask/read about it lol
bmk#1476: ikr
bmk#1476: though we would have to amp up the level of moderation |
cfoster0#4356: I'm still shocked how high quality discussion here is, generally
bmk#1476: I know, right?
bmk#1476: I don't think we've ever had to even ban anyone
bmk#1476: I warned that one guy once but that ended in an interesting conversation
Bedebao#4842: This server is nearing 1k users, right?
cfoster0#4356: Was on the Vocodes one and they dealt with a s**t ton of immaturity
bmk#1476: That being said, I *am* prepared to increase moderation a lot to keep discussion high quality
cfoster0#4356: > This server is nearing 1k users, right?
@Bedebao We're 2 away
bmk#1476: Should we run a prune?
bmk#1476: Lots of inactive users hanging around
cfoster0#4356: 🤷
cfoster0#4356: As long as they're playing nice I see no harm keeping folks around
cfoster0#4356: But maybe I'm naive
bmk#1476: i mean, for instance, once you go past 1k discord stops showing the offline people in sidebar
bmk#1476: which is kind of annoying
bmk#1476: so if we prune to keep below 1k we can avoid that
Bedebao#4842: What does inactive mean exactly? They haven't logged in for a while?
Bedebao#4842: Else it's obvious a lot of them are simply here to watch but don't post.
StellaAthena#3530: > i mean, for instance, once you go past 1k discord stops showing the offline people in sidebar |
@bmk why is this worth avoiding?
bmk#1476: ¯\_(ツ)_/¯
StellaAthena#3530: Lol
WAUthethird#4977: yeah, as part of Latitude I hope beyond hope you guys succeed with gpt-neo
this OpenAI pricing is not great
Bedebao#4842: Sounds like OpenAI is pretty much swindling you guys.
bmk#1476: @WAUthethird i thought you guys got preferential pricing?
WAUthethird#4977: even that is still not enough to feasibly cover unlimited access to the largest GPT-3 model
bmk#1476: hm
bmk#1476: also what is Latitude?
WAUthethird#4977: our company name
cfoster0#4356: They do AI Dungeon
bmk#1476: ah ok
bmk#1476: i was confused for a moment there
bmk#1476: has Latitude tried training its own larger than GPT2 models?
WAUthethird#4977: it's something we've considered but ultimately we don't have the resources right now to coordinate something like that
Bedebao#4842: What is the origin of the name Eleuther?
cfoster0#4356: @Bedebao
cfoster0#4356: https://cdn.discordapp.com/attachments/729741769738158194/774489986975924234/Screenshot_20201106-202632_Discord.jpg
cfoster0#4356: Prior to then the name was LibreAI |
Bedebao#4842: >gapingai didn't make it
at least you can redeem yourselves with CHUNGUS
bmk#1476: and HUMONGOUS
bmk#1476: and the Pile data architecture™
StellaAthena#3530: It’s worth noting that Eleutheria, in addition to being the word “liberty”, was also used as a proper noun to refer to a deification or personification of the concept, not unlike “Lady Liberty” in English (US?)
XMaster96#7538: I am one of the admins on the Yannic Kilcher server (link in `communities`), and I would like to invite the `GPT-Neo` and `The Pile` teams to one of our discussion rounds, to talk about the challenges you had to overcome and the design decisions you had to make.
The discussion round would be on Nov 28 at 19:00 UTC.
It would be great if you guys are interested / can make it.
Daj#7482: Could you tell us a bit more about this @XMaster96 ? Is this just about Eleuther or are other people invited too? I'm a bit confused
Deleted User#0000: > yeah, as part of Latitude I hope beyond hope you guys succeed with gpt-neo
> this OpenAI pricing is not great
@WAUthethird perhaps Latitude should consider an investment in Eleuther, instead of just waiting by the sidelines
Deleted User#0000: it is in your best interest, where else will you find ML talent important to your core business
Aran Komatsuzaki#5714: @Deleted User by ML talent do you mean Sid and yourself?
Deleted User#0000: Sid, Connor, Bmk i'd say
Aran Komatsuzaki#5714: makes sense
WAUthethird#4977: > @WAUthethird perhaps Latitude should consider an investment in Eleuther, instead of just waiting by the sidelines
@Deleted User we have passed a couple of interested parties your way, we hope that makes progress |
XMaster96#7538: > Could you tell us a bit more about this @XMaster96 ? Is this just about Eleuther or are other people invited too? I'm a bit confused
@Daj
To be fair, I am a bit confused myself about who is a member of Eleuther and who has contributed to `GPT-Neo`/`The Pile`. I believe Lucidrains is not a member but has contributed a lot to GPT-Neo.
I would like to talk about who you guys are, `GPT-Neo`/`The Pile`, how it came to be, and especially the engineering decisions that were made. To be fair, this is the first time I am inviting guests; normally we just discuss a paper. I am open to ideas.
Daj#7482: Ah so this is like a regular discussion round on your server? Sure it sounds like fun to me. Me, @Sid and @bmk are the founding members that have done most of the work probably, and we have a colorful cast of other regulars like Lucid who are also definitely interesting to talk to, but I think the three of us are the ones you wanna talk to about the journey
XMaster96#7538: We are trying to host a discussion round biweekly, but in reality it is whenever me or @Lucas Nestler (ClashLuke) manage to organize one. We also have one next week and you are welcome to join us. But I have no idea what we are going to talk about; @Lucas Nestler (ClashLuke) is organising it (ask him 😉 ).
You three as the guests sounds fine to me.
XMaster96#7538: @bmk was also at the last one, but I don't know how representative this one is considering that I was half asleep and ChinaCEO was drunk.
XMaster96#7538: but it was still fun.
gwern#1782: (Latitude really should invest more in FLOSS LMs. 'commoditize your complement'. are you really going to leave OpenAI as a monopolist over your core tech?)
bmk#1476: if Latitude ever wants to give us resources, we'd be happy to have them
StellaAthena#3530: FWIW I wouldn’t draw a distinction between “EleutherAI people” and “people who have contributed to the project.” But it sounds like @XMaster96 really just wants people to be in a zoom call and talk about what we are doing and answer questions? Sounds like fun to me!
Sid#2121: > We are trying to host a discussion round biweekly, but in reality it is whenever me or @Lucas Nestler (ClashLuke) manage to organize one. We also have one next week and you are welcome to join us. But I have no idea what we are going to talk about; @Lucas Nestler (ClashLuke) is organising it (ask him 😉 ).
>
> You three as the guests sounds fine to me.
@XMaster96 Sure, I'd be up for this!
Sid#2121: > ... I was half asleep and ChinaCEO was drunk.
sounds like my kind of zoom call
WAUthethird#4977: By the way, for those interested in how we're planning on balancing OpenAI's costs while providing a fair deal to the AI Dungeon community, we just made this post: https://aidungeon.io/2020/11/07/ai-energy-update/
gwern#1782: the unlimited all-you-can-eat struck me as crazy once I saw the OA pricing. you simply can't offer people all the caviar and salmon they can stuff down their weaselly gullets in 31 days for <$10
WAUthethird#4977: yeah, it was a bit of a scramble once they divulged prices to us
gwern#1782: all-you-can-eat is perfect for services with high fixed but low marginal costs. unfortunately, that's pretty much the exact opposite of what AID is
WAUthethird#4977: super unfortunate too, since that's sorta the expectation we give
Nobody sees the racks of servers processing their input
gwern#1782: yeah. everyone looks at the text. 'oh, it's just a line of text'
gwern#1782: nobody is saying 'my god, it's incredibly realistic - how many dozens of GPUs did it take to generate that???'
gwern#1782: you need to align perceived vs actual difficulty for customers to accept a 'just price'. people hate market mechanisms and believe only in just prices
gwern#1782: even if it's as gimmicky as displaying characters one at a time so the user *feels* like it's difficult or including some icon 'our servers are *this* much on fire right now'
gwern#1782: like, I'm not saying that you should include a graphic of 20 GPUs and a realtime visualization of prompts tying them up and rippling through 20 gpt-3 fragments layer by layer, but you have to admit, maybe if people had a better grip for how absurdly demanding GPT-3 is to run en masse, they'd understand why $5/month for hundreds of hours of game time just isn't going to work out
WAUthethird#4977: yeah, the hope is that these tiers can provide some transparency into actual usage and impact
gwern#1782: https://www.reddit.com/r/AIDungeon/comments/jpwg3s/the_average_premium_user_is_apparently_costing/ whew
WAUthethird#4977: yep, we averaged it out
Hopefully the players who play an exorbitant amount will be covered decently by that
bmk#1476: Man, that seems like a PR nightmare
bmk#1476: Honestly, I'm glad that Eleuther is kind of under the radar for now, we have absolutely no experience with handling pr
WAUthethird#4977: most stressed I've been in years, thankfully it's over
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774786200137105408/Screenshot_2020-11-07-17-02-26-148_com.android.chrome.png
https://cdn.discordapp.com/attachments/729741769738158194/774786200379588618/Screenshot_2020-11-07-17-02-51-394_com.android.chrome.png
bmk#1476: Not having any PR whatsoever has its.. disadvantages
bmk#1476: > intentionally less filtered ... Likely lower quality
*W h a t*
bmk#1476: Man, and 4chan hasn't even realized that it's the *codebase* that's called GPTNeo
bmk#1476: What if they find out that we're considering BigCHUNGUS
bmk#1476: And MassiveCHUNGUS
cfoster0#4356: GNU Public Transformer
bmk#1476: My vote is still heavily for *CHUNGUS
cfoster0#4356: While we're spreading false rumors 😄
cognomen#6297: 🏳️⚧️ trans former
bmk#1476: Tra 'n SF (or Mer)
bmk#1476: I have no clue tbh
cfoster0#4356: Clippy the Friendly Transformer brought to you by Microsoft Edge
bmk#1476: Also apparently our inclusion of Literotica is a central talking point on 4chan
cfoster0#4356: At least people are *mostly* paying attention
cfoster0#4356: @bmk generally positive or negative?
bmk#1476: ..both
gwern#1782: I like how that poster is wrong that we think that it was trained on Reddit posts, and *also* wrong about what GPT-3 was actually trained on
bmk#1476: > I like how that poster is wrong that we think that it was trained on Reddit posts, and *also* wrong about what GPT-3 was actually trained on |
@gwern i think they're talking about some other group that is talking about us
gwern#1782: well, someone is wrong about both
bmk#1476: Also they're kind of wrong about our data too
bmk#1476: Unless 20% is "in large part"
gwern#1782: sure, but being wrong about gpt-3 is lulz. I mean, it's in the paper. it's not like they simply glossed over it and you had to be hanging out on the OA Slack to know better
gwern#1782: the paper is pretty clear that they went way beyond reddit-upvoted links. like, just the books1/2 datasets tells you that
gwern#1782: anyway, 4chan comments can be pretty funny as long as you don't take them seriously. they say some pretty silly things about me sometimes too
WAUthethird#4977: I find that first one kinda funny yeah
I don't see anything wrong with promoting a genuinely (long-term) better deal
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774789621623947334/Screenshot_2020-11-07-17-17-04-645_com.android.chrome.png
bmk#1476: I was not informed that we were going horizontal
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774789967658483752/Screenshot_2020-11-07-17-18-26-058_com.android.chrome.png
gwern#1782: sounds like someone heard about the MoE experiments and assumed that was the main goal
bmk#1476: We need to make an EleutherAI edition of the Bogdanoff rundown
bmk#1476: That seems plausible, yeah
cognomen#6297: i thought the lingo was "redpill me on..."
cognomen#6297: followed by a drug-addled explanation of the subject
bmk#1476: No, one requests a "quick rundown" on the bogdanoffs
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774790344712781844/Screenshot_2020-11-07-17-19-34-167_com.android.chrome.png
bmk#1476: More literotica mention |
bmk#1476: Also, word got out about the gpt2 model we plan on putting out, but word didn't get out that it's not trained on the Pile
gwern#1782: it's good to have an enthusiastic userbase with a clear usecase
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774790740857061376/Screenshot_2020-11-07-17-21-32-060_com.android.chrome.png
bmk#1476: Presented without comment
cognomen#6297: ah, i haven't been keeping up with the latest reddit memes
bmk#1476: Bogdanoff is not a reddit meme, how dare you
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774791020822396968/c6d_1.png
gwern#1782: _trembles as he flashes back to his days as a WP admin trying to moderate the bogdanoff talk page_
WAUthethird#4977: actually my post yesterday was a continuation of a conversation a ways above
WAUthethird#4977: wasn't too far out of context though
bmk#1476: Personally, I just find it fascinating that people not directly in EleutherAI are talking about EleutherAI
cfoster0#4356: Who knows, maybe they **are** here
gwern#1782: the rumors have spread. Eleutherai is the Great Write Hope
bmk#1476: Too used to being the speculator and not the.. speculatees
bmk#1476: That's a word now
bmk#1476: I find it weird being speculated upon, I'm used to *doing the speculating*
bmk#1476: /me mumbles something about turntables
bmk#1476: I mean, who is this *four chan*
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774793739582701568/1409873479189.jpg
gwern#1782: who is four chan and are there really 4 chans? |
bmk#1476: ちゃんちゃんちゃんちゃん
cfoster0#4356: five guys, four chans, three turtle doves and a part ridge in a pair trie
bmk#1476: ~~gwern is four people called Chan in a trenchcoat confirmed~~
cfoster0#4356: Chan, Chan, Chan, Chan (2020).
guac#4716: https://tenor.com/view/jackie-chan-meme-gif-5480485
StellaAthena#3530: > 🏳️⚧️ trans former
@cognomen I want a trans former / transfomer crossover
Louis#0144: bullshit NLP q if anyone has a moment
Louis#0144: like applied NLP
bmk#1476: Ask away and maybe someone will answer
Louis#0144: So COMET requires the subject of a sentence to be a person (referring to them by name or by pronoun usually, but it can also let you refer to someone by a noun)
Louis#0144: Im using GPT2 to write a story where every sentence is 1 clause (I do this by filtering out beams that contain commas)
Louis#0144: How do I make sure that the beams have a person as a subject
Louis#0144: Im getting weird issues where GPT2 is trying to say that the house was gasping for air
Louis#0144: it was... odd
Louis#0144: was considering doing like SVO extraction since as its only 1 clause there should only be 1 SVO
Louis#0144: but then idk what to do once I have the subject
Louis#0144: a secondary bad words list of every noun that might not refer to a person?
Louis#0144: LMAO
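A minimal sketch of the SVO-extraction idea Louis floats above, assuming spaCy's dependency parse and NER; the `en_core_web_sm` pipeline and the person heuristic (PERSON entity or pronoun) are placeholders, and they will still misfire on cases like "the elephants":
```python
# Hedged sketch: test whether the grammatical subject of a one-clause
# sentence looks like a person, using spaCy dependency labels and NER.
import spacy

nlp = spacy.load("en_core_web_sm")

def subject_is_person(sentence: str) -> bool:
    doc = nlp(sentence)
    for tok in doc:
        if tok.dep_ in ("nsubj", "nsubjpass"):
            # Treat named PERSON entities and pronouns as people.
            return tok.ent_type_ == "PERSON" or tok.pos_ == "PRON"
    return False

print(subject_is_person("Stella found peanuts."))      # likely True
print(subject_is_person("The house gasped for air."))  # likely False
```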
StellaAthena#3530: I don't think you can. |
Louis#0144: thats what I was worried about
Louis#0144: I was hoping maybe there was some coreference magic
Louis#0144: but I fear youre right
StellaAthena#3530: Obviously the general problem of recognizing when the subject is from a certain reference class is AGI-hard
Louis#0144: what if I go in the other direction tho... say if coref cannot be resolved then it probably isnt a person
StellaAthena#3530: And I don't see any particular reason to think this would be easier. You could hard code some rules, but I don't see that working out well.
StellaAthena#3530: If you're only looking at one-clause sentences there won't be any coreferences?
Louis#0144: every sentence is 1 clause
Louis#0144: I typically have 10 sentences
StellaAthena#3530: Oh you mean between sentences
Louis#0144: yeah
StellaAthena#3530: I was picturing "Stella thought that she had heard of Iggy Z, but she couldn't be sure"
StellaAthena#3530: > what if I go in the other direction tho... say if coref cannot be resolved then it probably isnt a person
@Louis This might be true of your specific data source, but I don't think it's true generally?
Louis#0144: "Stella wanted to see the elephants. So Stella drove to the zoo. At the zoo, Stella saw the elephants being cared for. Stella wanted to feed the elephants. Stella found peanuts. The peanuts said that they weren't ready to be food."
Louis#0144: Stories like that is what I am getting
Louis#0144: since it thinks the peanuts is a person
Louis#0144: bc thats what COMET tells it
StellaAthena#3530: Is there only one person in the sentences?
Louis#0144: no |
StellaAthena#3530: You could say that no DO can be a subject
Louis#0144: up to 4
StellaAthena#3530: ah
StellaAthena#3530: Telling the difference between "Stella found the elephants," and "Stella found the peanuts." seems like a nonstarter tbh
Louis#0144: Elephants are people
Louis#0144: according to COMET
StellaAthena#3530: Right
StellaAthena#3530: So the model needs to know that after writing those two sentences "the elephants were hungry" is good but "the peanuts were hungry" is bad
Louis#0144: yeah
StellaAthena#3530: I think you're going to just have to use a list of valid subjects
cfoster0#4356: Can you do it out of band? Generate the subject first, check if it's a person, and then have GPT generate the sentence?
StellaAthena#3530: or some kind of heuristic
Louis#0144: oh hm
Louis#0144: interesting
Louis#0144: I could do it out of order
StellaAthena#3530: I had assumed that that would fuck with the ability to write narratives
StellaAthena#3530: but I'm also not super clear on the usecase
Louis#0144: well
Louis#0144: i mean
cfoster0#4356: One strategy I've used is to have the model generate pseudo-JSON |
Louis#0144: how so
Louis#0144: also it would fuck the narrative, I dont really mind tho
Louis#0144: this is just a proof of concept
Louis#0144: > "Stella wanted to see the elephants. So Stella drove to the zoo. At the zoo, Stella saw the elephants being cared for. Stella wanted to feed the elephants. Stella found peanuts. The peanuts said that they weren't ready to be food."
@StellaAthena this is the story I got when I prompted it with ur name
Louis#0144: I hope u like elephants
Louis#0144: (I also cut it off, it wrote ~20 sentences)
gwern#1782: 'House gasped for air. "Teh drugz! I need them." "No", his jaundiced attending said. "You tried that trick yesterday. You OK, House?"'
Louis#0144: LOL
Louis#0144: omg
Louis#0144: yeah
Louis#0144: thats true
StellaAthena#3530: A+
Louis#0144: but Im writing fables
Louis#0144: not proper stories
gwern#1782: HOUSE IS OF ALL GENRES
Louis#0144: LMAO
gwern#1782: true fact: there is no story that is not improved by adding Dr House or Batman.
StellaAthena#3530: If the goal is to just produce short fables, why not just generate a list of characters ahead of time
Louis#0144: Or maybe |
Louis#0144: place holder characters?
Louis#0144: PersonX
Louis#0144: PersonY
Louis#0144: etf
Louis#0144: etc **
gwern#1782: (the only reason that neither is involved in the Resurrection of Jesus Christ is because the Catholic Church spent 2000 years burning every 'apocryphal' version, fearing their religious power)
StellaAthena#3530: Sure
StellaAthena#3530: You can fill them in ahead of time or retroactively, whichever makes more sense
cfoster0#4356: Ye
Louis#0144: also someone give me a 1 clause sentence about batman and I'll tell u what the AI writes
StellaAthena#3530: Batman swung the bat at the man
StellaAthena#3530: Batman's bat-man swung the bat at the man.
cfoster0#4356: > how so
@Louis was this directed at me or Stella?
Louis#0144: you
Louis#0144: how do you encode JSOIN
Louis#0144: JSON
cfoster0#4356: Few shot
cfoster0#4356: So you give it a few examples like ```(subject: "Stella", sentence: "Stella fed the elephants.")```
cfoster0#4356: And then end it with ```(subject: "``` and capture up to the next quote mark |
cfoster0#4356: Check the subject, then go back with it filled in
cfoster0#4356: err. s/subject/person/g
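A rough sketch of that "generate the person first" strategy with the HuggingFace transformers library; the few-shot prompt and names are illustrative, not the actual experiment code:
```python
# Hedged sketch of cfoster0's pseudo-JSON trick: prompt for the person
# first, capture up to the next quote mark, check it, then fill it in.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = (
    '(person: "Stella", sentence: "Stella fed the elephants.")\n'
    '(person: "Louis", sentence: "Louis drove to the zoo.")\n'
    '(person: "'
)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=10,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
new_text = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:])
person = new_text.split('"')[0]  # capture up to the next quote mark

# If `person` passes whatever person check you trust, put it back into
# the prompt and generate the sentence with the subject pinned down.
print(person)
```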
Louis#0144: Batman swung the bat at the man. The Batman didn't just kill the man, he killed the man's family- it was a woman and her children. The Batamans are a very large, very powerful, extremely wealthy and very powerful group.
They have a lot of money, and the Batman is one of their most powerful and powerful men in history.
The Batman was a pretty good guy, but he didn't have the money to just kill everyone in Gotham.
It was a union job, and unions have a need for safety and order. The Batman joined a union because they were looking for work, not because they wanted a union.
Louis#0144: @StellaAthena sorry it took so long
Louis#0144: The Batman is a bit of an oddball.
He is a very nice guy, and has always been a very kind and gentle guy.
The batman on the other hand is a complete ass.
He's always a bit more aggressive and aggressive than the Batman, and he is a lot more aggressive than even the Batman.
He also is a huge douche.
He has a lot of tattoos and piercings and pierces and tattoos and tattoos.
He wears a lot and a lot.
The Batman also wears a mask and a wig and a pair of sunglasses and he has a pair and a hat.
He doesn't have a lot or no hair or a beard.
He can barely brush his teeth.
He isn't a very gentle guy at all, and if he's not, he's a very mean one.
He will be a very nasty person.
He'll be a nasty person if he isn't careful. |
He won't let you fuck him.
He just doesn't give a fuck.
The Batman is just a very good guy, but he's also very mean.
He makes sure that he's very nice.
And he's got a very big dick.
And sometimes he's really angry.
And if you're not, well... Well, then...
Louis#0144: It's been explained before.
I'm a cop.
I can take your bat bat and a man and throw him in jail.
You don't see that shit on tv.
It's all about speed.
cfoster0#4356: ^I'm intrigued and afraid
Louis#0144: I dont
Louis#0144: understand LMs sometimes
Louis#0144: tbh
Louis#0144: Just watch the fucking show.
It doesn't take much to get you thinking about this shit.
I'm gonna start calling the batman's mom for ice cream now.
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/774824624944316426/Screen_Shot_2020-11-07_at_9.36.18_PM.png |
Louis#0144: WELL FUCK YOU TOO
gwern#1782: it *is* a stupid question. how stupid do you have to be to wonder why the goddamn batman needs some delicious refreshments
gwern#1782: like Jesus, Batman is half divine and half human. it's what lets us identify with him as our savior
Louis#0144: https://twitter.com/lcastricato/status/1325266207247831040?s=20
Louis#0144: Stella did not feed the elephants.
The reason to not feed an elephant is the same reason you don’t eat a elephant.
It is because of the fact that the elephant is a female.
It’s a legal thing.
The police don’t want a woman in a car with an elephant.
The police don’t *have* to want a woman in the car with an elephant (or anything).
They can just say “The police don’t want to put a woman in a **car** with an elephant”.
AI_WAIFU#2844: pottery
Louis#0144: @StellaAthena apparently u can eat an elephant on ur yacht
Louis#0144: bc everyone in mathematics has a yacht
Louis#0144: its a good kept secret
gwern#1782: I used to do night watch at a dock where Simons and RenTech people kept their yachts. I'll never forget _The Matrix Rose_, the biggest one there
gwern#1782: from the name, I assumed mathematics had something to do with the owner being able to afford it...
guac#4716: ^ lol, that whole sentence reads like Kevin Spacey's character in *The Usual Suspects*.
Louis#0144: OH NO https://cdn.discordapp.com/attachments/729741769738158194/774842245127077898/Screen_Shot_2020-11-07_at_10.46.19_PM.png
Louis#0144: OH NOOO |
bmk#1476: is this the pile rt?
Louis#0144: yes
bmk#1476: oh no
Louis#0144: those are different beams
Louis#0144: w/ nucleus sampling
cognomen#6297: might need a higher temperature
cognomen#6297: just a hunch
Louis#0144: @bmk gets worse
Louis#0144: only happens when I use female names
Louis#0144: lol
AI_WAIFU#2844: what does this say about our dataset?
bmk#1476: errrr
AI_WAIFU#2844: what does this say about *S O C I E T Y*?
bmk#1476: you *sure* this aint just some bacteria glucose shit?
Louis#0144: might be
Louis#0144: I can check
bmk#1476: > what does this say about *S O C I E T Y*?
@AI_WAIFU well, it's an existence proof
Louis#0144: idk how I would
bmk#1476: so we can now conclude with a high degree of certainty that we live in one |
Louis#0144: thats good
Louis#0144: i was getting worried
bmk#1476: i mean... does this happen if you a) start a new Pile run from scratch, or b) start a C4 run
Louis#0144: nah pretraining is done
Louis#0144: this is the pile run
bmk#1476: like. maybe this is just bad luck
bmk#1476: maybe C4 would also have the same problems
Louis#0144: probably
bmk#1476: with no baseline it's hard to say
bmk#1476: wait hold up you tuned it further after pile?
Louis#0144: yeah
Louis#0144: you didnt ask
Louis#0144: This is finetuned on ELI5
bmk#1476: oh lol
AI_WAIFU#2844: like reddit eli5?
bmk#1476: that changes things
Louis#0144: ye
Louis#0144: reddit
AI_WAIFU#2844: lol
bmk#1476: 1. what about an untuned baseline |
bmk#1476: like, Pile only
bmk#1476: 2. what if you tune but ctrl+f the word `bitch` to something like `rubricalist`
bmk#1476: and see if it says that
bmk#1476: if it does then it's almost certainly a problem with eli5
AI_WAIFU#2844: https://old.reddit.com/r/explainlikeimfive/search?q=she%27s+a+bitch&restrict_sr=on&sort=relevance&t=all
AI_WAIFU#2844: I think this might be a pile problem, no direct matches in eli5
bmk#1476: reddit search just sucks in general
bmk#1476: google `site:reddit.com/r/explainlikeimfive "she's a bitch"` at least 4 exact incidences in eli5
bmk#1476: and no doubt the word bitch is used.. very often in eli5
Louis#0144: Filtering out bitch as a bad word made it a lot worse
bmk#1476: ?
Louis#0144: It picks any insult it can find
bmk#1476: o.O
bmk#1476: no i meant replacing the word bitch with something else
Louis#0144: It really really does not like people who drink milk
bmk#1476: some very rare word
bmk#1476: and see if it uses it
bmk#1476: if so then it's a eli5 problem
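A hedged sketch of bmk's substitution test, in the spirit of the ctrl+f suggestion above; the file names and the marker words are made up for illustration:
```python
# Swap each slur for a rare marker word in the ELI5 fine-tuning text,
# retrain, and see which vocabulary the model reaches for.
import re

MARKERS = {"bitch": "rubricalist", "whore": "quodlibet", "slut": "peridot"}

def swap_markers(text: str) -> str:
    for bad, marker in MARKERS.items():
        text = re.sub(rf"\b{bad}\b", marker, text, flags=re.IGNORECASE)
    return text

with open("eli5_train.txt") as f:            # hypothetical corpus dump
    swapped = swap_markers(f.read())
with open("eli5_train_swapped.txt", "w") as f:
    f.write(swapped)

# After fine-tuning on the swapped file: marker words in generations
# point at ELI5; the original words point back at Pile pretraining.
```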
AI_WAIFU#2844: what does it do if you give it a guy, is it only ok with guys drinking milk?
Louis#0144: Yes |
Louis#0144: It yells at me with women and Batman
Louis#0144: It hated when Batman drank milk
Louis#0144: Was ok with Spider-Man and Superman
bmk#1476: what does it say with batman
Louis#0144: Same thing
AI_WAIFU#2844: what about joker?
bmk#1476: huh
Louis#0144: “Because he's a bitch.
He's the kind of bitch to drink a glass of milk.”
bmk#1476: this is some incredibly weird bias
Louis#0144: Exact quote
bmk#1476: how the heck does this happen
bmk#1476: we need to figure out if this is a pile problem or a eli5 problem
Louis#0144: I have never heard of hatred for someone who drinks milk
Louis#0144: It also hates gender neutral names
Louis#0144: Like alex
bmk#1476: try the thing i suggested
Louis#0144: Or Batman i guess
bmk#1476: whats a list of all the insults youve seen it output
Louis#0144: Yeah sure I’ll find time |
Louis#0144: Mostly sexist slurs
Louis#0144: Whore
Louis#0144: Slut
Louis#0144: Etc
bmk#1476: ok
Louis#0144: Might honestly be Reddit
bmk#1476: replace all incidences of those words with something random
Louis#0144: I would not be surprised
AI_WAIFU#2844: what if you change milk for something else.
Louis#0144: Oh true
Louis#0144: I’ll try orange juice tmrw
Louis#0144: I’m in bed for the day
AI_WAIFU#2844: try coffee, it's more manly
Louis#0144: LOL
bmk#1476: see if you can get it to say `He's the kind of banana to drink a glass of milk.`
Louis#0144: I’ll try more feminine stuff too
Louis#0144: Pina colada
Louis#0144: Or Starbucks
Louis#0144: Or uh
Louis#0144: What’s other stereotypical feminine drinks?
AI_WAIFU#2844: pumpkin spice lattes
Louis#0144: Pumpkin spice lattes
Louis#0144: LOL
Louis#0144: Ok
Louis#0144: That gives idea
Louis#0144: Ideas
Louis#0144: Ty
bmk#1476: sidenote: pumpkin spice tastes bad cmv
Louis#0144: I’ll try bmks idea too
Louis#0144: PSLs are trash
Louis#0144: They taste nothing like real pumpkins
AI_WAIFU#2844: I don't actually think they use real pumpkins
Louis#0144: Ofc they don’t
Louis#0144: Starbucks is too cheap and tacky
bmk#1476: i'd be ok if they just didn't taste *like* pumpkins but were still good
Louis#0144: Lmao
bmk#1476: but they're *horrible*
Louis#0144: Yeah
guac#4716: they use the glands of a sea otter to extract a chemical similar in scent to pumpkin
Louis#0144: No way |
Louis#0144: Wtf
guac#4716: hahaha
bmk#1476: X doubt
AI_WAIFU#2844: https://www.adweek.com/brand-marketing/the-first-starbucks-pumpkin-spice-latte-had-no-pumpkin-in-it/
bmk#1476: like, orange soda tastes nothing like oranges but it's amazing
guac#4716: don't look up vanilla beaver
bmk#1476: literally infohazard
guac#4716: orange soda is very underrated
bmk#1476: also while we're at it, hot take: pepsi is better than coca cola
guac#4716: i can't get aboard that train sir
AI_WAIFU#2844: welp, I can't unlearn that now
guac#4716: are you old enough to remember Pepsi Blue?
Louis#0144: I don’t like soda
bmk#1476: i've heard the legends
Louis#0144: Honestly carbonated drinks make me vomit
Louis#0144: It’s weird
Louis#0144: Beer is fine tho
Louis#0144: It’s just like overly carbonated soda
guac#4716: dude seltzer water is amazing! you don't like seltzer!
Louis#0144: Nope |
Louis#0144: Don’t like anything carbonated
Louis#0144: I get so sick to my stomach
guac#4716: lmao interesting interestiiiing
gwern#1782: 'banana' is an interesting case. I keep hoping to spot gros michel bananas in person at some point to see if 'banana'-flavored candy really does taste like gros michel, just not regular bananas
bmk#1476: germany: :guilty:
Louis#0144: What’s up w Germany
bmk#1476: literally half of all bottled water is, like, carbonated
guac#4716: there's no way you can tell the difference between a gros michel
bmk#1476: everyone loves carbonated water
bmk#1476: https://rp-online.de/leben/gesundheit/ernaehrung/warum-trinken-wir-so-viel-mineralwasser_aid-21961739
AI_WAIFU#2844: doesn't that like, rot your teeth?
bmk#1476: probably?
AI_WAIFU#2844: carbonic acid and what not
gwern#1782: there's the chinese thing with drinking hot water
gwern#1782: like, not tea. just water, heated up to 130F or something
bmk#1476: anyways, 80% of bottled water in germany is either very fizzy or slightly fizzy
Louis#0144: Yes when I was in Germany I carried tap water with me occasionally
Louis#0144: lol
bmk#1476: according to the link
guac#4716: i drink hot mate around 130 |
AI_WAIFU#2844: I can barely tolerate water unless its basically at 0c
bmk#1476: i have no idea what fahrenheit is, sorry
bmk#1476: how many football fields per fortnight is that
gwern#1782: (apparently the whole drinking-hot-water-is-good-for-you thing was another dumb Maoist communist trick, like 'traditional chinese medicine' which was basically a way to try to fool everyone into thinking they were not as poor as they were by giving them something useless but free. nevertheless, anywhere that deals with chinese tourists must now also deal with their baffling demands for some hot water to drink)
gwern#1782: (on the plus side, it's pretty easy to deal with. just keep a tea kettle on standby or use a hot water tap)
bmk#1476: apparantly that's like 50C
AI_WAIFU#2844: I live dangerously and microwave my water
bmk#1476: 50C isn't too bad, that's nice and warm
gwern#1782: we can no longer be friends
gwern#1782: assuming you use that microwaved water for tea
bmk#1476: great after coming home from -40 weather
AI_WAIFU#2844: I don't drink tea
guac#4716: -40 where the hell do you live mr polar bear
AI_WAIFU#2844: is it edmonton
bmk#1476: Canada™
gwern#1782: finland?
guac#4716: LOL canada ahhh so nice in the photos
bmk#1476: > is it edmonton
@AI_WAIFU yes, how did you guess
AI_WAIFU#2844: had a feeling. |
bmk#1476: been here before?
AI_WAIFU#2844: you could say that
gwern#1782: as an american, I can safely say that edmonton is a place name I have seen before. I don't know anything about it, but I *have* seen it before. be honored.
bmk#1476: haha
bmk#1476: we stand out on population density maps as "that one outlier dot in the middle of nowhere"
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774854775186784296/hFN7l93.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774854859258069002/unknown.png
AI_WAIFU#2844: It's big north and cold as fuck
bmk#1476: there we are, just chilling
bmk#1476: literally
gwern#1782: that's not very helpful, as all of canada is 'dots in the middle of nowhere' except for Honorary NYC
bmk#1476: no but all the *other* dots hug the border
guac#4716: wow.. what is the history of that location.
bmk#1476: or are much smaller
Louis#0144: Edmonton sucks
Louis#0144: Lmao
guac#4716: LOL
Louis#0144: All u guys have is a mall
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774855159582031872/image.png
Louis#0144: A giant ass mall |
bmk#1476: we set a record this year, apparently
Louis#0144: That’s it
bmk#1476: hey, WEM is *nice*
Louis#0144: LMAO
AI_WAIFU#2844: I like how at those temps it doesn't matter if it's Fahrenheit or Celsius.
bmk#1476: that's why i round it to -40
bmk#1476: so i can drop the qualifying C
bmk#1476: anyways, it snowed quite a bit today, which according to a quick google is *not* normal elsewhere in the world
gwern#1782: one of those places where the people hope that global warming is not a chinese hoax
AI_WAIFU#2844: one of those places where you wanna buy land options
guac#4716: but the alberta oilllll
bmk#1476: ah yes oil
bmk#1476: the main engine of our economy
AI_WAIFU#2844: selling for -10$
bmk#1476: yes the economy isn't doing too well here recently
bmk#1476: i mean it's not -10 anymore obviously but still low for alberta standards
bmk#1476: tarsands are expensive and not worth it for low oil costs
Louis#0144: https://youtu.be/5GpMGiDmbdM
Louis#0144: @bmk
bmk#1476: what the actual fuck am i listening to |
Louis#0144: Camel mating calls
bmk#1476: thanks i hate it
Louis#0144: @bmk @AI_WAIFU @gwern ok so it works for all foods and drinks
Louis#0144: It has issues with people w female names eating or drinking
Louis#0144: Except when I say beer for instance
Louis#0144: It’s ok w “manly” food
Louis#0144: Also it just really hates Batman
cfoster0#4356: And you're 100 % positive this isn't a sampling bug?
Louis#0144: I tried beam
Louis#0144: Nucleus
bmk#1476: Did you try the thing I suggested
Louis#0144: And top k
Louis#0144: @bmk haven’t had time
Louis#0144: I’ll try that this week
cfoster0#4356: Is this GPT-Neo sampling code?
Louis#0144: No
Louis#0144: I’m using huggingfaces code
Louis#0144: Bootstrapped to my own model
Louis#0144: Works well
Louis#0144: I’ve done this before with a different model |
cfoster0#4356: Huh. This just seems so weird. Never seen it with any other model
Louis#0144: It’s a specific prompt
AI_WAIFU#2844: If you just sample directly from the distribution, no funny business, what happens?
Louis#0144: I sent it above
Louis#0144: Doesn’t happen w other prompts
Louis#0144: @AI_WAIFU idk
Louis#0144: I’ll try later
Louis#0144: Out rn
Daj#7482: SSC Meetup with Sam Altman starting now
gwern#1782: (google meet is *so* bad, this UI is infuriating)
FractalCycle#0001: an overarching thing i've learned from this: there's a lot of uncertainty, everyone has big gaps in knowledge, and this makes discussions/focusing difficult.
broke: not having a shared vocabulary of abstractions/concepts (e.g., jargon, any concept with a name that you could link to on lesswrong, background assumptions about AI safety.).
woke: having that shared vocabulary.
bespoke: having that shared vocabulary, but every person only has some of it. So all discussions suffer from the curse of knowledge, since any given mix of people will have different background assumptions, intuitions, and concepts in their toolbox.
zphang#7252: Totally missed it - does anyone have notes from the meeting?
bmk#1476: https://docs.google.com/document/d/1z-wLWpY2ZP5KXQkPxj1CD-j2fKaLZYDWXS5ViVShWuw/edit
bmk#1476: it's a bit of a mess because it's written by about 5 people simultaneously, but it's something |
zphang#7252: cool, thanks!
Deleted User#0000: 😦 https://cdn.discordapp.com/attachments/729741769738158194/775188006503710780/unknown.png
zphang#7252: under NDA (Nyan-Disclosure Agreement)
bmk#1476: Neko Disclosure Agreement
gwern#1782: Non-neko Disclosure Agreement: "you agree to not disclose information about all our cat-girls"
genai (Immortal Discoveries)#0601: how did cat girls get into this hehe?
bmk#1476: this is the EleutherAI Catgirl Research Institute™, *of course* knowing the SOTA in catgirl technology is a top priority
genai (Immortal Discoveries)#0601: where does it say that?
bmk#1476: where does it say what?
genai (Immortal Discoveries)#0601: that we are really want those catgirls, billions of them
bmk#1476: well, *some* members don't think it would be desirable, but i beg to differ
genai (Immortal Discoveries)#0601: "some people at openai believe that recurrence is needed"
genai (Immortal Discoveries)#0601: LOL
bmk#1476: ?
genai (Immortal Discoveries)#0601: i'm reading the above notes
bmk#1476: why the LOL
genai (Immortal Discoveries)#0601: https://docs.google.com/document/d/1z-wLWpY2ZP5KXQkPxj1CD-j2fKaLZYDWXS5ViVShWuw/edit
genai (Immortal Discoveries)#0601: cuz it's hilarious learning about anything openai
bmk#1476: .. ok
genai (Immortal Discoveries)#0601: heheh at the end it says "he didn’t say anythign about cat girls" |
Deleted User#0000: > rapid progress curves given good evaluation/benchmarking/feedback: NN programmers could get superhuman quickly if we can give them good feedback on correctness
what is meant by "rapid progress curves given good evaluation/benchmarking/feedback"? In the context of LMs, or is this about other types of models? E.g. "NN programmers could get superhuman quickly if we can give them good feedback on correctness" sounds like RL, is that what he was talking about?
Louis#0144: Every parameter represents a different cat girl
gwern#1782: @Deleted User it was in the context of exciting GPT-3 applications. so I read it as a reference to deepcoder etc like the MS gpt-2 github model, plus christiano-style PPO optimization (possibly with compiler rather than human pairwise ratings as the blackbox oracle reward)
bmk#1476: i'm adding that to the document
Deleted User#0000: thanks! i should look into the "plus christiano-style PPO optimization" thing
gwern#1782: (while compilers/interpreters are pretty fast, they only catch obvious errors, so just bruteforcing syntactic correctness won't get you too far.)
gwern#1782: (I discovered that to some extent generating GPT-2 ABC music, incidentally. ABC is 'compiled' to MIDI so my rating script automatically failed any piece which didn't compile. didn't help too much - everything compiled, but still had major semantic flaws like repetition)
Louis#0144: @bmk the cat girl stuff? Yes pls
Louis#0144: uwu
bmk#1476: the wha
bmk#1476: which cat girl stuff
Louis#0144: 🤫
bmk#1476: we have too many catgirl stuffs
FractalCycle#0001: the part where we complained about it or the part earlier with more of it?
Louis#0144: Shhh
Louis#0144: We need GPT-nya~~~
FractalCycle#0001: OwO
bmk#1476: notices parameters |
FractalCycle#0001: 😳
FractalCycle#0001: *Nani?*
bmk#1476: **verrückt**
rapanui#0579: Hello, I was pointed to this Discord by u/SSCMeetup on Reddit- I missed the Sam Altman meetup, and by request he asked it not to be recorded. But apparently someone here has some notes/insights from the meeting?
StellaAthena#3530: Howdy’
rapanui#0579: 👋
Daj#7482: uhh I think @bmk has the link?
bmk#1476: https://docs.google.com/document/d/1z-wLWpY2ZP5KXQkPxj1CD-j2fKaLZYDWXS5ViVShWuw/edit
bmk#1476: Warning: document was written by like 5 people at once so it's a bit of a mess
Veedrac#0443: Misc thoughts:
“won't be fire alarm” → Of course not if you keep it boxed (lol), but I swear the tides are shifting. GPT-3 converted people, and many of those who weren't converted just weren't seeing the evidence. Idk, seems to me like a GPT-4 would continue the trend.
“human preferences might be figured out” → I've heard Sam say this before so not a great surprise. It makes sense if you believe in slow takeoff. I just don't see how to seriously defend slow takeoff.
“OpenAI believes hardware scaling is exhausted” → u wat now? This is a new belief, right? I swear Sam has previously said scaling up current algorithms would reach AGI.
“different modalities have different data requirements; many modalities don’t seem to have limits in amount in the foreseeable future; we don’t know yet why they’re different” → This is... not really surprising? Most parameters are just memorizing facts and features; some modalities have more facts and features to memorize, and some (cough cough text) have more junk.
“oa is currently focussing on faster inference” → OA goes MoE?
“better than most soon, better than the best will take longer” → What is this in context to?
“innovation has been slow, OA is using the same toolchain because it works and not as a control variable” → Dude OpenAI was founded in December 2015.
chirp#4545: > “better than most soon, better than the best will take longer” → What is this in context to?
this is responding to a question about when an AI would be able to do programming |
he actually took back his answer a bit, later on - he said that once we have a good way to evaluate, we could surpass the best human programmers quickly
bmk#1476: pls add all this info to the doc @Veedrac @chirp so we can keep it all in one place
chirp#4545: > oa is currently focussing on faster inference
I assume this is talking about distillation/pruning/etc
chirp#4545: btw here's my own notes https://cdn.discordapp.com/attachments/729741769738158194/775442883636625478/unknown.png
bmk#1476: pls add to the doc
chirp#4545: don't have time sorry 😦
Veedrac#0443: I pasted the image at the bottom
Veedrac#0443: w/ my comments just above.
rapanui#0579: Ah, big thanks!
StellaAthena#3530: > this is responding to a question about when an AI would be able to do programming
>
> he actually took back his answer a bit, later on - he said that once we have a good way to evaluate, we could surpass the best human programmers quickly
@chirp It's worth noting that mere "does the code solve the task" is not what we are after. Anyone perusing code golf will see that the code produced there is unacceptable even though it works
gwern#1782: the self-supervised initialization from human code will help a lot with that. it's not just GAs bruteforcing the smallest possible golfs
StellaAthena#3530: Thank fucking god
Veedrac#0443: This is a bit like worrying about the introduction of email because the handwriting might be bad.
StellaAthena#3530: I don't follow. Code will need to be used, maintained, and applied by humans |
Veedrac#0443: We're talking about a world where GPT-style models write code *as well* or *better* than humans, right? This naturally implies several other capabilities:
bmk#1476: What if humans never write the code again
Veedrac#0443: 1) You can point at code and ask the model to tell you what it does, how it works, and what the possible edge cases are.
Veedrac#0443: 2) You can ask the model to rewrite code in your favoured syntax, or using whatever idioms you prefer, or clarify a part you don't understand.
FractalCycle#0001: + it can maybe write, uh, other AI models/training code
StellaAthena#3530: I strongly disagree that those tasks follow from being able to write code at a human level
Veedrac#0443: GPT can already do these things, to about the extent that it can code, if not a bit better.
FractalCycle#0001: are you talking about a self-direction type thing?
FractalCycle#0001: (in response to stella)
StellaAthena#3530: No, Veedrac is spot on. They're thinking about the exact same kinds of things I am, we just disagree on if those capacities are entailed by being able to write code.
Veedrac#0443: I agree that in principle there exist models that can code that can't do this.
Veedrac#0443: But GPT-style models are only lacking the computational understanding, they already have the rest.
StellaAthena#3530: I agree that eventually code writing models can do this. I think that there will be a not insignificant amount of time where we have code-writers who can't though
StellaAthena#3530: Can GPT-3 change the dialect a passage is written in?
Veedrac#0443: Yes, pretty sure.
FractalCycle#0001: so the disagreement (if i understand correctly) is:
> if you write code at a human level, then:
> you [can/can't necessarily] get explanations or constrained rewrites of the code
bmk#1476: tl;dr understanding code [may or may not] be harder than writing code |
StellaAthena#3530: I haven't seen any examples of that, and when I went looking a month or so ago I couldn't find any
bmk#1476: i'd say that understanding code *can* be harder
bmk#1476: especially *ahem* legacy code
bmk#1476: spewing out more code is easy
StellaAthena#3530: I would be very interested in examples and they would strongly influence my thinking
bmk#1476: understanding code and refactoring is hell
Veedrac#0443: @StellaAthena How is this harder than the other various kinds of translation GPT-3 is proficient in, eg. spoken language to spoken language, or code to spoken language?
bmk#1476: in fact, understanding and refactoring code is so hard that very often starting over from scratch is easier, which is why software turnover rate is just so high
Veedrac#0443: (‘proficient’ not necessarily meaning ‘great’)
bmk#1476: if writing code could be made sufficiently cheap, rewriting the entire system for every change you want to make would probably make sense
Veedrac#0443: @bmk I'd argue this is an artifact of the way we think, and unlikely to affect transformer models, given every element a transformer outputs is as if looking at the text anew.
Veedrac#0443: Whereas human brains are very stateful.
StellaAthena#3530: I think that translating between formal languages is significantly harder than stylistic adjustment
bmk#1476: the best way to fix a bug is to completely rewrite the system with the knowledge that a certain undesireable effect is possible
bmk#1476: this is the most stateless version of software development possible
bmk#1476: in fact, i'd argue this is slowly happening
Veedrac#0443: @StellaAthena We already have ML code-code translation networks. Facebook did one IIRC.
bmk#1476: when provisioning servers went from expensive to cheap, deployment became almost entirely stateless
bmk#1476: see: pets vs cattle servers
StellaAthena#3530: @Veedrac Right. I think that they are much better at Python -> C than "Stella's Python" to "Veedrac's Python" |
bmk#1476: before, you'd keep a bunch of state around on your servers and maintain it by hand
StellaAthena#3530: Though maybe there is really just a smooth gradient and I'm drawing meaningless boundries
StellaAthena#3530: I don't know.
bmk#1476: now you just spin up a blank slate and rebuild your entire system from scratch every time you want to make a small change
Veedrac#0443: @StellaAthena Oh, you meant dialects in programming languages? I doubt GPT-3 could do that because it sucks at programming.
bmk#1476: i'd argue that this is where programming itself is going
Veedrac#0443: @bmk Generally agree, no reason to wed to a version of code if rewriting takes seconds.
bmk#1476: you'd write a specification (something something test driven development) and youd just have the code rewritten from scratch to meet the spec every time the spec changes
StellaAthena#3530: I'm going to step out so I can be slightly more productive with my procrastination
bmk#1476: i probably should too
StellaAthena#3530: Side note: if anyone here has an industry research job and would be down to give me feedback on my CV let me know.
StellaAthena#3530: (Also, if you want to hire me lol)
StellaAthena#3530: FOCS (the NeurIPS of theoretical computer science) has released a playlist of tutorials on theoretical machine learning!
https://www.youtube.com/playlist?list=PL3DbynX8gwfInp0XjCQktVtAqYas-mze1 https://cdn.discordapp.com/attachments/729741769738158194/775466061038223440/image0.png
cfc#2691: guys, i'm having trouble installing tensorflow 1.15.2
cfc#2691: i'm trying to run gpt-neo
cfc#2691: >ERROR: Could not find a version that satisfies the requirement tensorflow<2.0,>=1.15.2 (from -r requirements.txt (line 10)) (from versions: 2.2.0rc1, 2.2.0rc2, 2.2.0rc3, 2.2.0rc4, 2.2.0, 2.2.1, 2.3.0rc0, 2.3.0rc1, 2.3.0rc2, 2.3.0, 2.3.1, 2.4.0rc0, 2.4.0rc1)
>ERROR: No matching distribution found for tensorflow<2.0,>=1.15.2 (from -r requirements.txt (line 10))
cfc#2691: anyone can help me? |
bmk#1476: what python version
bmk#1476: you need to use python 3.6
cfc#2691: oh thanks
cfc#2691: i was on 3.8
Dal#7192: General-ish question. It's been mentioned that GPT3 derived arithmetic on its own from its dataset. Was it intentionally programmed to model its own content or did the rule arise purely from regression?
Dal#7192: Followup: For either answer, how thoroughly has GPT3 been tested for other meaningful abstract emergent models?
Dal#7192: I.e. Is it likely that GPT3 is already building a (very simple) relational world model?
Louis#0144: I doubt it derived it on its own
Louis#0144: I’m sure SOMEWHERE in that dataset is a multiplication table for instance
Louis#0144: > Followup: For either answer, how thoroughly has GPT3 been tested for other meaningful abstract emergent models?
@Dal not very well because no one knows how to test this
Louis#0144: That’s the key thing
Louis#0144: No one knows
Louis#0144: But arithmetic I’m sure existed in its dataset thus I think that’s a bad test
Dal#7192: Hmmph. I'd say the harder part is just finding questions that are comparatively straightforward
Louis#0144: Nah
Louis#0144: I don’t think QA is a good test
Dal#7192: Can you expand on QA?
Louis#0144: Question answering?
Louis#0144: But yeah testing how well GPT3 can perform abstractions isn’t viable in the current state |
Louis#0144: We literally have no appropriate metrics
Louis#0144: We will get there eventually
Dal#7192: That's to say, then:
A: Your understanding is that GPT3 associated a rule from within its existing dataset to derive arithmetic
B: Question Answering is considered insufficient for gleaning insight from the mature model, presumably because the dataset is already high signal by its nature and it would be hard to discern
Louis#0144: A is wrong
Louis#0144: I think the arithmetic examples are literally in the data or it simply learned ways to embed arithmetic expressions
Louis#0144: It’s possible some parts of the network can specialize
Louis#0144: I wouldn’t call it rude based
Louis#0144: Rule
Dal#7192: Your understanding would then be that it hasn't modeled arithmetic at all, just language that approximates it (correctly)
Louis#0144: Yeah
Louis#0144: Essentially
Dal#7192: Hmm.
Louis#0144: But we can’t test that
Louis#0144: We have no way
Louis#0144: We’re in the dark
Louis#0144: 🤷♂️
Dal#7192: It's kinda funny to me
Dal#7192: GPT and other ML algos have started at such a high level of conceptual information that we can't even identify the baked concepts on their own |
Louis#0144: And B is also kinda wrong because what you’re more interested in is how the model stores internalized representations (which QA won’t really tell you)
Louis#0144: Nah
Louis#0144: Most ML outside of DL is relatively straight forward
Louis#0144: SVMs are straight forward for instance
Louis#0144: Most Bayesian methods are straight forward
Louis#0144: Because they have manual feature engineering where as DL doesn’t
Dal#7192: Would you be able to test against those algos?
Louis#0144: In some cases ya
Louis#0144: Not in all
Dal#7192: Even on high concept datasets like natural language?
Louis#0144: Idk what high concept means
Louis#0144: I’ve never heard that term
Louis#0144: Google doesn’t know either
Dal#7192: I'm generalizing my own vocab. Concepts with a lot of different potentially appropriate associations
Louis#0144: Still don’t know what you mean
Louis#0144: That’s too vague
Dal#7192: Human words, for example
Louis#0144: But that’s still too vague....
Dal#7192: Ambiguous information
Louis#0144: Tells me nothing |
Dal#7192: 🤔
Louis#0144: There’s so many ways to define a concept
Louis#0144: No one agrees
Dal#7192: I could take a stab but it'd just be even more ambiguous 😅
Louis#0144: Do you mean that it’s high information content?
Dal#7192: More or less
Dal#7192: My own model of information is basically a collection of associations
Dal#7192: The more complex, the higher order
Louis#0144: So a knowledge graph
Louis#0144: Or a symbolic model
Dal#7192: Mhm
Louis#0144: Well KGs don’t play nicely with DL
Louis#0144: They don’t get along
Louis#0144: Why do you think concept net died
Louis#0144: LMs don’t use knowledge graphs
Louis#0144: We have no idea what kind of representations they use
Louis#0144: So it’s not a fair comparison to say that they represent language symbolically
Dal#7192: That was near the heart of untangling my question. I appreciate the insights.
Louis#0144: @bmk do u wanna chime in
bmk#1476: whats the tldr i dont feel like reading the entire log |
Louis#0144: How do we compare the representations that LMs like GPT3 use to the representations that other ML methods make
Louis#0144: Or that symbolic models make
Louis#0144: And I said there’s no real way to do so
Louis#0144: Since we don’t have metrics for analyzing the representations DL makes effectively
Louis#0144: It’s a Chinese room argument
bmk#1476: yes we have no metrics to compare these things yet
bmk#1476: but it's probably not *impossible*, just nobody has really worked on it yet
Louis#0144: I don’t think that DL directly uses symbolic models
Louis#0144: I think it’s some weird hybrid where some neurons can be symbols in certain circumstances
Louis#0144: Like parts of the network can specialize
Louis#0144: But that’s just my two cents
Louis#0144: I’m probably wrong
bmk#1476: i dont think symbolic is well defined enough to say
bmk#1476: for a lot of people, symbolic just means GOFAI
Dal#7192: Followup thought: Has anyone done fundamental DL research to try to identify the data structures the algorithm bakes?
Louis#0144: Yes
Louis#0144: Neural persistence for NLP stuff
Louis#0144: And Circuits by OpenAI for CV
Louis#0144: Neural persistence identifies specialized structures in LMs but doesn’t tell you what they’re used for
Louis#0144: Circuits is.... interesting |
Dal#7192: Thanks. More to study.
Dal#7192: I'm slowly filling out a vocabulary of actual terms in the field, this is getting good 😄
Dal#7192: (tldr I view NN/DL as something akin to building instincts rather than building full minds but that theory is still cooking)
StellaAthena#3530: If you are good at topology I have some theory papers I can send you
Louis#0144: The decision boundary papers we shared in slack? @StellaAthena
Louis#0144: Which are u referring to
StellaAthena#3530: Oh no I meant neural persistence
StellaAthena#3530: I missed that you had referenced it
Dal#7192: Good isn't the word I'd use but I'd definitely be interested in getting wiser
cc_#1010: I made another GPT-2 project
cc_#1010: figured I'd link it here if anyone was curious
cc_#1010: https://twitter.com/tanakhbot
cc_#1010: fed The Entire Hebrew Bible into GPT-2
triggerhappygandi#0001: Can you share any tips for a similar project?
triggerhappygandi#0001: Is it simply fine-tuning gpt-2 to a text?
dudekingbromanguyokay#2595: (waves) Do ya'll need any volunteers to do stuff? Figured I'd ask since I've already borrowed the helpful data sets ya'll posted for the pile 🙂
dudekingbromanguyokay#2595: Also speaking as an author who has multiple works inside (data set name) that are technically copyright, I'm totally cool with the project & think it's amazing...I know I've seen a few discussions on intellectual property here and appreciate the consideration.
Daj#7482: Hey @dudekingbromanguyokay ! Appreciate the offer! I think atm the only project doing active work right this moment is the pile, I think @bmk and @StellaAthena are the ones to ask what needs doing
dudekingbromanguyokay#2595: @Daj thanks - I'm in between full time jobs & work from home so...I have plenty of time 🙂 other than a looming deadline for a book I'm writing but that's done in a few weeks 😉 So feel free any of ya'll @bmk or @StellaAthena lemme know if I can help with something. I'm a marketing professional not a technologist but really excited to see this project & happy to contribute.
StellaAthena#3530: @dudekingbromanguyokay what technical skills do you have? Can you code? |
dudekingbromanguyokay#2595: Ah, probably better to say "I can't," in this context 🙂 I can edit some code that I run, but it's not like I can create stuff from scratch.
dudekingbromanguyokay#2595: I do use Jupyter, Google Colab + Python, have read (some) transformer papers, etc, and know the basics of javascript, ruby, php, perl, bash.
StellaAthena#3530: I have something you may be able to help with, but I gotta run right now. I’ll DM you later
spirit-from-germany#1488: Maybe you'd like to check out my new science-based channel that explores the question: "How can we help our kids and ourselves to live fulfilled happy lives?" 🙂
https://youtu.be/C4l0QrfHbzs
spirit-from-germany#1488: https://youtu.be/6CaiLyk472I
cc_#1010: @triggerhappygandi collect a bunch of text, feed it to gpt, shit the output onto twitter, its that simple
cc_#1010: would probably require less work if i had gpt-3 but c'est la vie
StellaAthena#3530: We have now hit 200 stars on the Pile’s repo
bmk#1476: woohoo!
Aran Komatsuzaki#5714: nice
cfoster0#4356: 🎉
cfoster0#4356: ~~no one's supposed to use it~~
bmk#1476: It appears none of the 200 stargazers have noticed that, though, so we're in the clear
cfoster0#4356: a Schelling point's a powerful thing 🤔
bmk#1476: "the emperor's clothes" is actually propaganda invented by economists to indoctrinate children about schelling points
StellaAthena#3530: TBH we should probably wipe the repo and make this the repo people should use.
bmk#1476: the repo people should use is just lm_dataformat lol
StellaAthena#3530: There’s too much congregation here
StellaAthena#3530: I know |
StellaAthena#3530: But still
StellaAthena#3530: “Build paths where people walk”
bmk#1476: i'll make it so that if you install it through pypi it'll expose a single function for pulling The Pile
bmk#1476: and write all our documentation around that
bmk#1476: and then the replication stuff will be under Here Be Dragons
StellaAthena#3530: Sounds good
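For reference, reading Pile-format data through lm_dataformat already looks roughly like this; the shard filename is hypothetical, and the single pull-The-Pile helper described above doesn't exist yet:
```python
# Hedged sketch: lm_dataformat's Reader streams documents out of the
# compressed jsonl files the Pile tooling produces.
import lm_dataformat as lmd

reader = lmd.Reader("pile_shard_00.jsonl.zst")  # placeholder path
for doc in reader.stream_data():
    print(doc[:80])  # each doc is one plain-text document
    break
```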
cognomen#6297: are there any LM or other ML task leaderboards with an explicit resource constraint?
cognomen#6297: as in *"all models must be trained for X hours on Y hardware with Z GB of memory and perform inference within N seconds"*
bmk#1476: yes
bmk#1476: hutter's compression thing
cognomen#6297: that's the only one I could think of
cfoster0#4356: 📯 Information, quick links, and an FAQ on Eleuther is now available at https://github.com/EleutherAI/info
cfoster0#4356: Feel free to submit PRs with updates, corrections, and new content
gwern#1782: of course, that's also why hutter's compression prize is useless, even the more recent upgraded one
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/776284354518581248/image0.png
StellaAthena#3530: What’s going on with this model?
StellaAthena#3530: @cfoster0 you did an awesome job. Your FAQ is a better version of the website tbh.
bmk#1476: @StellaAthena something with sampling frequency, this problem has been around for ages and someone just needs to PR some EMA smoothing
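The fix being asked for is small; a minimal EMA sketch over the logged losses (the decay constant is arbitrary, and `losses` stands in for the raw per-step log):
```python
# Hedged sketch of the EMA smoothing bmk means.
def ema_smooth(losses, decay=0.99):
    smoothed, avg = [], losses[0]
    for loss in losses:
        avg = decay * avg + (1 - decay) * loss
        smoothed.append(avg)
    return smoothed
```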
Aran Komatsuzaki#5714: how do you evaluate your model? on a fixed test dataset? or just moving average of the previous minibatches of training samples?
cfoster0#4356: @StellaAthena Thanks! Got lots of great input from y'all |
bmk#1476: @cfoster0 i think you might want to remove the "this announcement does not exist", i think it's kind of confusing
StellaAthena#3530: @bmk ah right.
bmk#1476: nit: typo: "repsectively"
bmk#1476: nit: link "aligning artificial intelligences" actually resumes from somewhere in the middle of the video and not the beginning
bmk#1476: @StellaAthena also i saw your comment on the treemap thing, how do you do it?
bmk#1476: the weights are final now
StellaAthena#3530: There’s a button to do it in excel
bmk#1476: oh, lol
StellaAthena#3530: If you make a column where the grouping is labeled it’ll color it too
bmk#1476: the numbers are the ones in the table in the paper, "effective size" column
bmk#1476: https://github.com/EleutherAI/The-Pile/blob/master/the_pile/pile.py#L12 the groupings are here
bmk#1476: the only changes i'd make would be to put github under misc and CC under general internet
bmk#1476: so:
```# Academic
PubMedCentral
ArXiv
FreeLaw
USPTO
PubMed |
PhilPapers
ExPorter
# General internet
OpenWebText2
StackExchange
Wikipedia
CommonCrawl
# Prose
Bibliotik
Gutenberg
BookCorpus
# Dialogue
UbuntuIRC
HackerNews
EuroParl
YTSubtitles
Opensubtitles |
# Misc
DMMath
EnronEmails
Github
```
cfoster0#4356: Thanks, fixed. @bmk
StellaAthena#3530: @bmk I’m in bed but I can do it in the morning
bmk#1476: that would be great
cc_#1010: out of curiosity what's the progress on the pile and gpt-neo
cc_#1010: just very broad "where we at" check
bmk#1476: pile is almost done
cc_#1010: i would be SCORCHING money on this but unfortunately my personal websites come first
cc_#1010: but hopefully those should be in the black soon
cc_#1010: so
cc_#1010: 💵
StellaAthena#3530: The Pile is on track to be released by the end of the year.
cc_#1010: based
bmk#1476: the data itself will be done within a day or maybe two if i flub up things badly again |
StellaAthena#3530: GPT-Neo probably mostly works
bmk#1476: analysis will be most of the next month and a half
StellaAthena#3530: We haven’t trained it on GPT-3 scales yet because $$$$
bmk#1476: well, that's technically true but not entirely accurate imo
StellaAthena#3530: Also because we need to finish the data first
bmk#1476: we're not going to be able to afford enough tpus even if we do *exceedingly well* with fundraising
StellaAthena#3530: Sure
bmk#1476: our only realistic path is getting google to give us more tpus and money cant buy that
cc_#1010: what scales can it be trained at so far
cc_#1010: gpt-2? gpt-2.5?
StellaAthena#3530: $$$$ was a shortcut for “we don’t have the resources”
bmk#1476: ah
StellaAthena#3530: But I can see how that might be confusing
cc_#1010: or rather what can it be trained at w/ the resources currently available
cc_#1010: rough guesstimation
StellaAthena#3530: GPT 2XL
cc_#1010: is that = 1558?
bmk#1476: yes
cc_#1010: neat
cc_#1010: lay it out for me because im an idiot - whats the reason why no amount of money will get us more tpus |
cc_#1010: are those a thing only google has access to?
StellaAthena#3530: We’ve also done experiments with larger scales but not full model runs
bmk#1476: too expensive
bmk#1476: way too expensive
cc_#1010: how much too expensive
bmk#1476: let me look it up, one moment
cc_#1010: so realistically without some sort of insane fundraiser haul we'd be able to get somewhere between GPT-2-1558 and GPT-3, leaning closer to the former than the latter?
bmk#1476: so google doesnt even give estimates unless you contact them, but we can extrapolate the cost of a smaller pod
StellaAthena#3530: @cc_ We are working on talking Google into giving us the compute
bmk#1476: so a v3-32 costs 176k for a year
cc_#1010: whoo mama
bmk#1476: we'd probably need to run a.. 512?
cc_#1010: you are right i can't help with that lmao
cc_#1010: my parents would never let me leech that much money off of them for something that benefits other people lmao
bmk#1476: no no that's the *small* machine
cc_#1010: yeah i know
cc_#1010: the big one costs Even More
bmk#1476: this is the biggest machine that google gives estimates on
StellaAthena#3530: I’m disturbed and intrigued by that reply. Not sure which one more so
bmk#1476: a 256 would cost well over a million |
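Back-of-envelope arithmetic behind those figures, assuming roughly linear per-core pricing (real pod quotes may differ):
```python
# Extrapolating from the v3-32 figure quoted above.
v3_32_per_year = 176_000
per_core_year = v3_32_per_year / 32      # $5,500 per core-year
print(per_core_year * 256)               # ~$1.4M/year for a v3-256
print(per_core_year * 512)               # ~$2.8M/year for a v3-512
```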
cc_#1010: which reply
cc_#1010: mine or bmks
StellaAthena#3530: > my parents would never let me leech that much money off of them for something that benefits other people lmao
@cc_
cc_#1010: ah
cc_#1010: they're just neoliberals
cc_#1010: wealthy ones with lots of money
cc_#1010: and properties
bmk#1476: what order of magnitude is lots
cc_#1010: uhhh
bmk#1476: it's ok if you don't want to answer
cc_#1010: *probably* enough to handle our TPU problems if they wanted to
cc_#1010: but
cc_#1010: they dont
cc_#1010: like, trust me, it's not happening lmao
bmk#1476: as i said, getting the money to pay for the compute is not and never was in our plan
cc_#1010: they wouldn't even let me stop paying them rent so i could use the money to handle my own servers and that's just 800 a month
cc_#1010: and im also their son
cc_#1010: and not some strangers on the internet who really like ML
cc_#1010: uh, machine learning, not marxism-leninism |
bmk#1476: there is no way we're getting >$1MM by any means, and so that possibility is not up for serious consideration
cc_#1010: great
StellaAthena#3530: @bmk you’re probably overestimating the cost. If we had a wealthy patron we could buy DGX-2s
cc_#1010: so we'd have to acquire the relevant hardware through some kind of alternative arrangement or go without
StellaAthena#3530: I think we would only need three or so
StellaAthena#3530: Plus a warehouse
bmk#1476: @StellaAthena three?
cc_#1010: i mean either way it's still more than i think we could conceivably and, importantly, *sustainably* raise
bmk#1476: that sounds about an order of magnitude off
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/776298985119809566/unknown.png
cc_#1010: i agree that it's not even worth consideration
bmk#1476: 1 V100 ~= 1 TPUv3 core, give or take a factor of 2x
bmk#1476: would prolly need 16 of em
cc_#1010: could you jury rig a bunch of colab notebooks together? ;P
cc_#1010: (this is a joke)
bmk#1476: according to anandtech a dgx2 is 400k
cc_#1010: yeah i dont think we need to hyperfocus on the costs
cc_#1010: unless we get a multimillionaire patron willing to burn cash on this it's not happening that way
cc_#1010: we can just write it off at this point
bmk#1476: so, like, anywhere from 3-6M plus colo costs |
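And the GPU version of the same napkin math, taking the ~$400k AnandTech figure per DGX-2 (16 V100s each) and the rough 1 V100 ≈ 1 TPUv3 core equivalence from above:

```python
import math

DGX2_USD = 400_000       # AnandTech list price
V100S_PER_DGX2 = 16

def dgx2_fleet_cost(v100s_needed: int) -> int:
    """Purchase cost for enough DGX-2s to hold the given V100 count."""
    return math.ceil(v100s_needed / V100S_PER_DGX2) * DGX2_USD

# Matching a v3-128 to v3-256 pod needs ~128-256 V100s, i.e. 8-16 machines:
print(dgx2_fleet_cost(128))  # 3,200,000
print(dgx2_fleet_cost(256))  # 6,400,000 - hence "3-6M plus colo costs"
```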
StellaAthena#3530: Anyways there’s a program where Google gives worthy poor people free TPUs. That’s what we are currently using, but with our current level of compute that would take years.
StellaAthena#3530: (Actually, initializing the model would time out the pods so it’ll take forever)
StellaAthena#3530: We are working on sweet talking them into thinking we are really cool and getting a special deal
cc_#1010: cant help with that unfortunately i have all the social graces of a vacated snail home
cc_#1010: but best of luck
bmk#1476: if this works out, by the end of our project we will probably have received from TFRC the market-price equivalent of more than the net worths of everyone working on this combined
bmk#1476: we've already gotten a few hundred k worth of tpus so far
StellaAthena#3530: I wouldn’t be surprised if we were close already
cc_#1010: is there anything i would be able to help with w/ the money i do have available (i.e a couple thousand)
cc_#1010: pizzas? motherboards?
bmk#1476: well, we do have some server costs but they're pretty low
cc_#1010: ethernet cables? 😛
bmk#1476: we don't really have our finances in order so i don't have any exact numbers but
StellaAthena#3530: Yes, but determining the most fruitful application might take some thinking
bmk#1476: our monthly expenditures are, like, less than 1k?
cc_#1010: does that come from any one source or is it just sort of assorted payments spread out amongst who can handle them?
StellaAthena#3530: It’s crowd-funded, but without the crowd.
bmk#1476: we have a very generous donor and some of the costs are just covered by the individuals running them
cc_#1010: right
bmk#1476: like, me and goolu are each paying for a hetzner, that's, what, C$70 a month? nothing too big |
cc_#1010: if someone can wrangle up a list of relevant costs i can probably take some stuff off people's hands
bmk#1476: our finances are a mess
cc_#1010: it's money that i'm decidedly *not* going to use because i want for nothing
cc_#1010: someone should probably handle that lmao
cc_#1010: yall need a treasurer
bmk#1476: though if you can help think of ideas of things we could do that would be great
bmk#1476: yeah that's probably a good idea
StellaAthena#3530: Hey, if you want the job I doubt anyone would complain tbh
cc_#1010: also if you really want like
cc_#1010: the most bang for your buck
cc_#1010: someone needs to make an LLC, preferably in delaware
cc_#1010: and then everything just becomes a business expense which can be written off
cc_#1010: on your taxes
cc_#1010: i could probably talk to my accountant abt that
bmk#1476: connor is our de facto treasurer but not all the expenditures go through him and we don't have detailed records we just trust him not to pocket anything
cc_#1010: there should definitely be record keeping
cc_#1010: imo there should be 100% transparent record keeping that's publically available
StellaAthena#3530: Abstractly yes. Pragmatically we are about 10 people who are doing this in our free time and are more interested in the technical work than getting our finances in order.
bmk#1476: agree, but we've just never gotten around to it
cc_#1010: that's fair |
bmk#1476: yeah, i've been way too busy writing up code
bmk#1476: we barely have documentation
cc_#1010: im willing to shoulder the atlasian responsibility (/j) if it's a thing people think we'd need
cc_#1010: or would be helpful to have at least
cc_#1010: i dont really do much with my day
bmk#1476: for me priorities are:
1. get the code done
2. get the documentation done
3. miscellaneous organizational things
bmk#1476: as can be seen from the state of my documentation, 3 doesn't get much love
StellaAthena#3530: Do you have meaningful technical skills?
StellaAthena#3530: Hours of writing Python code is more valuable to us than money tbh.
cc_#1010: not meaningful for this, no
cc_#1010: i can code video games
cc_#1010: and websites
bmk#1476: can you python
cc_#1010: not particularly
bmk#1476: oh, if you can video games you can python
cc_#1010: no, i probably can't |
StellaAthena#3530: Oh we can absolutely put you to work
bmk#1476: C#?
cc_#1010: i use GMS
bmk#1476: what's that?
cc_#1010: which is more like javascript than anything
cc_#1010: gamemaker studio 2
bmk#1476: ah
bmk#1476: never heard of it
cc_#1010: it's good stuff
StellaAthena#3530: Is it a GUI?
bmk#1476: python is like javascript but minus the java and also minus the script but then you add the script back in
cc_#1010: anyway that's sort of beside the point in that i dont really like coding things that aren't creative
cc_#1010: im closer to a designer than a developer
bmk#1476: we could use a website/design person
cc_#1010: you'd spend too much time teaching me how to actually get up to speed for me to be of any use
bmk#1476: sid also does design but he's too busy doing tpu stuff
StellaAthena#3530: We have a website that I made in a weekend and haven’t really updated much.
bmk#1476: yeah we need a good website
StellaAthena#3530: And there’s a list design tasks for that that nobody has gotten to
StellaAthena#3530: www.eleuther.ai is the website |
bmk#1476: i have a minor personal vendetta against google sites so a custom website would be great
cc_#1010: do you not have a custom website to begin with
cc_#1010: why are you using google websites
bmk#1476: nope
bmk#1476: everyone is too busy writing code
bmk#1476: no time to do website stuff
StellaAthena#3530: Because I was able to make a functional site in less time than it took everyone to agree on a framework to use
bmk#1476: haha
cc_#1010: fair enough lmao
cc_#1010: oh this already looks better than anything i could shit out
cc_#1010: the only things i could fix are accessibility stuff
StellaAthena#3530: Here’s the secret: it’s all drag and drop
StellaAthena#3530: I don’t know HTML
StellaAthena#3530: the most meaningful thing I’ve done in HTML in my life is personalize my neopets profile
bmk#1476: we could also use help with stuff like icon design and making diagrams that look professional and not like someone drew them in google slides
cc_#1010: if it aint broke dont fix it
bmk#1476: i know how to use HTML but every time i do i get traumatized
cc_#1010: i'm not that kind of artist, unfortunately
cc_#1010: i have the drawing skills of a diabetic elephant
StellaAthena#3530: So all the images are random shit that’s CC |
StellaAthena#3530: “Draw” can mean “design on a computer”
bmk#1476: :smallbrain: CC
:bigbrain: CC
cc_#1010: i can't really make any images for you that you couldn't already find somewhere else
StellaAthena#3530: Shame
cc_#1010: i offer financial stuff since that's stuff i can do in my spare time that's not already occupied by other projects
StellaAthena#3530: So, no offense, but what are your skills?
cc_#1010: i mean if you go by my college degree, management
cc_#1010: people wrangling
cc_#1010: hold on i can just get you a picture of my resume lmao
StellaAthena#3530: It would be nice having another people wrangler in the near future.
bmk#1476: we don't really do the whole hierarchy thing here, at least not for now
cc_#1010: oh god damnit there's a typo in my fucking resume
cc_#1010: how long has that been there
StellaAthena#3530: Good way to test if anyone really read it
cc_#1010: https://cdn.discordapp.com/attachments/729741769738158194/776303827124092948/resume_2.png
cc_#1010: virtual machinse
StellaAthena#3530: I know someone who put an offer to pay the reader $50 in her PhD thesis
cc_#1010: also i know i listed python and c# in there but i dont actually know them, that one is a lie
cc_#1010: lmao |
cc_#1010: but i figure if i get hired for an entry level python position i can learn on the job
StellaAthena#3530: Skills sections on resumes are all lies
cc_#1010: i did learn it at one point in my life
cc_#1010: when i was like 16
cc_#1010: and it is all gone now
bmk#1476: python is literally spicy pseudocode
cc_#1010: but yeah that's the gamut of what i bring to the table besides "has money" and "good at managing teams"
bmk#1476: i used to do java before and learning python was basically trivial
cc_#1010: yes but i dont really want to
cc_#1010: unless it became necessary for me
bmk#1476: that's fair
cc_#1010: like if someone's offering me a 20k raise to switch to a new job where i'd have to learn python then fuck it nake time
bmk#1476: well, there is one people managingy thing we might need in the near future
cc_#1010: snake, too
bmk#1476: so we plan on doing a big multilingual dataset for pile v2
bmk#1476: and we want to solicit a bunch of native speakers
bmk#1476: preferably dozens, with at least a few for each language
bmk#1476: tbh, as many people as we can get to sign up
bmk#1476: aside from main Pile v2 we'd also want to do stuff like make an actual language classifier that doesn't break on rare languages
cc_#1010: right, im listening |
bmk#1476: and managing that many people across multiple projects spanning probably months to a year or so will be very complicated
bmk#1476: actually, step 1 is getting that many people interested
cc_#1010: managing them to do... what?
bmk#1476: so we want to have native speakers to have input on our various multilingual data projects - in practice this would mean looking at the way we're doing things and telling us if there are big issues with what we're doing, looking at data samples to tell us if they're reasonable, etc
bmk#1476: as a contrived example, imagine if we split words by spaces in our algorithm
bmk#1476: ~~very contrived that totally nobody does this~~
bmk#1476: the chinese, japanese, korean speakers would inform us about the issue of those languages not having space separation of words
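A minimal illustration of that failure mode (the Chinese string below is roughly "machine learning is fun"; any CJK sentence behaves the same way):

```python
english = "machine learning is fun"
chinese = "机器学习很有趣"  # no spaces between words

print(english.split())  # ['machine', 'learning', 'is', 'fun']
print(chinese.split())  # ['机器学习很有趣'] - the whole sentence comes back as one "word"
```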
bmk#1476: also, we're too broke to pay for professionals so they'd all have to be volunteers with an interest in ML and willing to do a bit of work in exchange for authorship on a paper
cc_#1010: oh i forgot i do have one other skill
cc_#1010: or resource
cc_#1010: which is that i run a semi-popular twitter account that i can shill eleuther on 😛
cc_#1010: anyway bmk i see
StellaAthena#3530: **Non-coding jobs:**
1. Manage the website, including updating the content over time and a handful of graphic design tasks
2. Organize the Discord channel with role-bots and probably other shit
3. Create a media plan, write press releases, and be a hype machine
4. Make it a real non-profit
5. Organize people, orchestrate tasks, and remind people to do shit after they don’t do it for a week.
6. Do random research as necessary.
cc_#1010: i suppose once we get to that point you can slap my hand and tag me in |
cc_#1010: oh now thats a list
cc_#1010: now that i could handle
bmk#1476: the problem is that gathering literal dozens of people with that kind of strict requirements takes a lot of time
StellaAthena#3530: 1 and 6 are intermittent tasks
2, 3, and 4 are one-off tasks
5 is a continuous task
bmk#1476: so we want to get started on that early
cc_#1010: yeah i could do those
bmk#1476: re: 3: i think @Daj might have something to say about that; i've spoken to him about the idea of creating an explicit pr plan
bmk#1476: i will let him voice his position because i don't want to misrepresent his position
StellaAthena#3530: @cc_ do you know people who are famous or well connected, especially in CS or CS adjacent spaces?
cc_#1010: hahahahahaha
cc_#1010: absolutely not
cc_#1010: outside of minimaxir
bmk#1476: how popular is semi popular
StellaAthena#3530: Darn
cc_#1010: uhhh 12.5k followers
cc_#1010: and it uses gpt-2
bmk#1476: we have gwern and aran already
StellaAthena#3530: “Talk people into shilling for us” is definitely a major part of 2 |
bmk#1476: though more can't hurt
cc_#1010: which is partially why im interested in this project since openAI rejected my gpt-3 proposal and i figured i'm never getting it
cc_#1010: i mean i could talk to DeepLeffen
cc_#1010: we've chatted in the past
StellaAthena#3530: Is that a GPT-X trained on Leffen tweets?
cc_#1010: and i'm also on good terms with the original drilbot person
cc_#1010: https://twitter.com/DeepLeffen
cc_#1010: https://twitter.com/dril_gpt2/
StellaAthena#3530: Yup
StellaAthena#3530: Called it
cc_#1010: i mean it's not exactly hard to call 😛
StellaAthena#3530: If you can get them to agree to shill for us that would be awesome.
cc_#1010: write me a pitch to give to them and i'll try my best
bmk#1476: Let's not get ahead of ourselves
cc_#1010: i think we should get ahead of ourselves more
cc_#1010: oh also my relative owns domainregistry.com
cc_#1010: and *he* probably knows people
StellaAthena#3530: Okay I’m going to go to sleep (that’s a lie, I’m going to go play Pokémon in bed for an hour and pretend to sleep)
cc_#1010: so i could prod him and ask him
StellaAthena#3530: But let’s chat tomorrow about something concrete |
cc_#1010: ping me tomorrow because i have severe unmedicated adhd and i will forget otherwise
cc_#1010: guaranteed
StellaAthena#3530: Definitely
cc_#1010: https://pbs.twimg.com/media/Ei75UYOXkAEEO-u?format=png&name=900x900
cc_#1010: a gift for you before bed
StellaAthena#3530: I have severe medicated ADHD that’s nevertheless debilitating when combined with the anxiety and depression I’ve been experiencing lately
cc_#1010: i dont like that reaction
cc_#1010: gift retracted
cc_#1010: better
cc_#1010: thank you
cc_#1010: anyway im gonna go take a walk and then work on my video game
cc_#1010: as it turns out making blocks fall off cliffs without glitching into the ground and then teleporting a billion feet away is surprisingly difficult
Noa Nabeshima#0290: https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-steps-to-better-code/
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/776627223297261598/good_code.png
andyljones#7746: https://programmingisterrible.com/post/139222674273/how-to-write-disposable-code-in-large-systems
> Step 0: Don’t write code
> Step 1: Copy-paste code
> Step 2: Don’t copy paste code
> Step 3: Write more boilerplate |
> Step 4: Don’t write boilerplate
> Step 5: Write a big lump of code
> Step 6: Break your code into pieces
> Step 7: Keep writing code
FractalCycle#0001: > Step 1: write code
> Step 2: lol no
I think this is what's known as "best practices" in the industry
StellaAthena#3530: Can confirm
Noa Nabeshima#0290: Ok, but to what extent are there (culturally evolved?) actually good software practices
Noa Nabeshima#0290: Seems probably real to me
andyljones#7746: > Ok, but to what extent are there (culturally evolved?) actually good software practices
@Noa Nabeshima mine at least was serious! there are absolutely better and worse practices, and yeah joel's list is a good starting point for organisations at least
andyljones#7746: i really like tef's advice though for its dedication to 'it depends'
gwern#1782: good software practice has too little correlation with fitness^Wprofits, perhaps because everyone and everything cycles over way too quickly, for good practices to culturally evolve
gwern#1782: it was infinitely more important for facebook, say, to find a social network niche than for it to not use PHP. the constant factor cost of technical debt could be paid off once they were a multi-billion-dollar megatechcorp
gwern#1782: as absurd as it sounds to write your own php compiler and build your own better language on top of it, well, mark is rich and you are not
gwern#1782: it's just price's equation. the lower the covariance between fitness and replicated phenotype, the less selection is possible. shifting environments, poor replication of culture & programmers, far higher fitness differentials based on business model... then combine this with breathtakingly high 'mutational targets' ("this year, we are Agile!")...
gwern#1782: this is why corporations and software do not evolve (https://www.gwern.net/Backstop)
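For reference, the Price equation in its standard form (z the trait value, w fitness): the first term is selection proper, and it vanishes as the covariance gwern describes goes to zero.

```latex
\bar{w}\,\Delta\bar{z}
  = \underbrace{\operatorname{Cov}(w_i, z_i)}_{\text{selection}}
  + \underbrace{\operatorname{E}\!\left[ w_i\,\Delta z_i \right]}_{\text{transmission bias}}
```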
Noa Nabeshima#0290: Hm maybe cultural evolution is the wrong phrase. The main mechanism I'm imagining is people in the field in practice noticing what works and sticking to it while also copying whatever other people are doing. Things like Agile might spread on trendiness alone (idk if agile is good or not), but version control has (probably) been stably everywhere for a long time.
Daj#7482: > it was infinitely more important for facebook, say, to find a social network niche than for it to not use PHP. |
@gwern _Paul Graham is typing_
shawwn#3694: congrats on surpassing TPU Podcast in member count
gwern#1782: @Daj I think pg would say 'well ok python is good enough now'
gwern#1782: even if lisp was a secret weapon eons ago back in 1995, in 2020, I do not think he would say today that choice of programming language is in the even top 10 decisions a startup founder should be making
shawwn#3694: design, on the other hand, will make or break your company
shawwn#3694: and lisp is closely related to good design.
gwern#1782: but, obviously not closely related enough or else the lisp companies would've eaten the world by now
shawwn#3694: HN did.
gwern#1782: is HN even 1% as popular as reddit? and what does it owe to *lisp*? if someone sat down and began writing about how HN influenced the world, would 'happened to be coded in lisp' make even the top 20 list of key factors, after stuff like 'pg used it as a startup recruiter' or 'pg spent 4 hours a day submitting links and moderating comments', thereby demonstrating even more strongly what I just said
shawwn#3694: Yes.
StellaAthena#3530: Yes what? Yes it really owes its success to lisp?
shawwn#3694: It would. But few people understand why.
StellaAthena#3530: Why? (original comment was much ruder than intended. Sorry)
shawwn#3694: It owes its success to Lisp because a single person is able to both program and then use the software that HN represents, at the mod team level
shawwn#3694: currently in a meeting, but I'll explain more later.
shawwn#3694: The early history of hacker news began roughly in 2006, when pg started prototyping it
shawwn#3694: one could argue that you could imagine a version of PG who was an expert python hacker, not an expert lisp hacker, and that it would have been more or less equivalent
shawwn#3694: but as someone who has spent many, many years with the codebase, I don't think so.
shawwn#3694: for example, antirez (of Redis fame) once tried to build an alternative, lamer news. It generated a buzz significant enough to give it a community similar to, say, lobsters
shawwn#3694: but from a technical standpoint, the features simply weren't there. I tried to do basic things that I took for granted on HN, and those didn't work. So I left |
shawwn#3694: That's your bar. You can either believe me, or believe the evidence (that even the great antirez failed), or the fact that lobsters *probably* wouldn't survive at HN-scale
shawwn#3694: I can certainly go into reasons and specific details of *why* those things are true, but few people believe it's true in the first place, so there's not much point.
gwern#1782: ...do what? when I think of HN, "features" is about the last thing I think of, and it'd be preceded by a phrase like "lack of"
shawwn#3694: that's because HN is so well-designed, the features don't even seem like features
shawwn#3694: for example, even something as simple as seeing the replies inline in your /threads page
shawwn#3694: Reddit doesn't have that. Lobsters doesn't have that
shawwn#3694: you could write it, sure. But Lisp makes such features natural to write.
shawwn#3694: And when you have very little time, those small bits of efficiency add up.
shawwn#3694: Those are just the public-facing features too. Most of HN's features are internal
shawwn#3694: essentially editor tools.
gwern#1782: you don't need /threads because that's what comment reply notifications are, which hn doesn't have at all
StellaAthena#3530: https://www.reddit.com/message/inbox
StellaAthena#3530: This is reddits `/threads`
shawwn#3694: and it's socially very different from HN. It encourages flamewars, for example
shawwn#3694: some required reading:
https://www.joelonsoftware.com/2004/09/06/its-not-just-usability/
> Small changes in software can make big differences in the way that software supports, or fails to support, its social goals.
https://www.gwern.net/docs/technology/2005-shirky-agroupisitsownworstenemy.pdf |
MasterScrat#6910: Hello everyone. I'm working on a natural language generation project. I recently switched from gpt2-xl to megatron-11b and it lead to significant improvement for my use case. So naturally, I'm looking for any available larger model (that I can have full control over). What is the status of gpt-neo? are some pre-trained models already available?
StellaAthena#3530: Howdy @MasterScrat. Welcome.
StellaAthena#3530: That depends on how much compute you have available
StellaAthena#3530: Oh, you’re asking about pretrained. We can’t upgrade you (yet) if you’re using an 11B model
StellaAthena#3530: We’ve trained a single step of a 100B model, but not the whole model.
MasterScrat#6910: How much compute would I need to train a model >11B? What would be the ballpark cost to train it eg on AWS or GCP?
MasterScrat#6910: And, is it actually "just" a matter of compute at this point? or would i still need to invest a lot of engineering time in it as well?
StellaAthena#3530: As far as we know it’s just a matter of compute. Obviously if we haven’t trained a 1T model we can’t promise it won’t break, but don’t have any reason to think it will (besides meta reasoning: there’s always another bug)
StellaAthena#3530: I’m not a great person to ask about costs on the open market, but I think that our estimate for Google TPUs is ~2M
MasterScrat#6910: $2M would be for 1T right?
StellaAthena#3530: Ah no. Sorry, that’s for GPT-3 scale
StellaAthena#3530: 175B
StellaAthena#3530: And actually BMK’s estimate is 3-6M
bmk#1476: it's worth noting that these estimates are very rough
StellaAthena#3530: But this is very heuristic. This is two orders of magnitude bigger than what Google advertises selling at all
bmk#1476: like, very, *very* rough
bmk#1476: 1. efficiency
bmk#1476: 2. nobody knows how much google actually charges for one of these
StellaAthena#3530: If our model is 10% less efficient than GPT-3 the cost will balloon correspondingly
StellaAthena#3530: And 10% on a million isn’t pocket change |
StellaAthena#3530: Okay I’ll let @bmk talk because he knows better than me
bmk#1476: our model is likely more than a few pct less efficient
MasterScrat#6910: i see! what is the largest model you've trained so far, and how does it compare to the closest published GPT model?
MasterScrat#6910: and, where exactly do the differences in efficiency come from? subtle training details? dataset quality?
StellaAthena#3530: Pipelining, minor coding differences
StellaAthena#3530: Whenever you replicate a paper you don’t get exactly the same efficiency
bmk#1476: i would not be surprised if our code was only 1% the efficiency of OA's code
StellaAthena#3530: But in most circumstances you don’t care about the gap
MasterScrat#6910: i see! well, this may be hard to sell to management 😛
StellaAthena#3530: We’re talking about a scale where copying the data between directories is a non-trivial task
StellaAthena#3530: There’s a lot of room for things to go wrong
bmk#1476: just shuffling our data was a full on miniproject in itself
gwern#1782: @MasterScrat they expect to get the compute from the TFRC people. if you have to pay list price, it's probably not worth trying
StellaAthena#3530: What are you using it for @MasterScrat
StellaAthena#3530: Like, what kind of work do you do with it?
MasterScrat#6910: Sadly i can't say too much 😦 Let's say something like writing automatic descriptions, with some creativity, from a list of facts. If i could make a point like: for $250k we can get a model much better than megatron-11b, then there may be a chance I could negotiate for management to invest in that, ideally in an open-source way. But at this point this is all wishful thinking
gwern#1782: I don't think eleutherai could guarantee anything like that. this is still very experimental and hobbyist
StellaAthena#3530: I mean, at some point we will have a trained GPT-3 (hopefully)
StellaAthena#3530: But not for a while
MasterScrat#6910: but what is the largest model trained right now? and how much would it cost to reach a point where you can say: we reached the quality of this GPT-2 model, and it cost us that much? |
StellaAthena#3530: GPT-2 scale
MasterScrat#6910: 1.5B model? how many TPU hours did it take and with which TPU version?
MasterScrat#6910: Also I imagine prices grow even larger on GPUs?
gwern#1782: (with A100s rolling out in datacenters, one hopes prices will finally drop)
StellaAthena#3530: The people who know the answer to those questions are currently asleep sorry.
StellaAthena#3530: But Sid should be able to help you in the morning.
StellaAthena#3530: (His morning)
MasterScrat#6910: Ok sure, thanks a lot already for all the info!
MasterScrat#6910: What exactly is EleutherAI? a company? is it for-profit?
StellaAthena#3530: It’s 12 people hanging out on discord in their free time
MasterScrat#6910: > EleutherAI is a grassroots AI research group aimed at democratizing and open sourcing AI research
bmk#1476: grassroots, yup
gwern#1782: 'grassroots', as in we're slightly above dirt in status; 'AI research' as in 'well, not too many people have any better idea than we do', 'group' as in 'we talk to each other online', 'aimed' as in 'don't blame us if nothing gets accomplished'
StellaAthena#3530: @gwern stealing that.
StellaAthena#3530: I love it
gwern#1782: ...'democratizing' as in 'make feasible for non-FANG programmers without 6-figure salaries', 'open sourcing' as in 'you can actually download our shit'...
StellaAthena#3530: 'AI research' as in ‘what could possibly go wrong?'
gwern#1782: 'remember, you can't spell "failure" without "ai"'
bmk#1476: and AI stands for 愛 (ai, "love")
AI_WAIFU#2844: me |
Bedebao#4842: OpenAI is too scared to unleash Skynet. EleutherAI doesn't care.
cfoster0#4356: lies! We care and don't have a clue what to do 🤔
StellaAthena#3530: > OpenAI is too scared to unleash Skynet. EleutherAI doesn't care.
@Bedebao this is the exact opposite of the truth tho
bmk#1476: the road to paperclippification is paved with alignment attempts
cfoster0#4356: > the road to paperclippification is paved with alignment attempts
@bmk does that mean alignment research is -EV?
cfoster0#4356: If so that makes our lives easier lol
bmk#1476: it's +EV, just not + enough
AI_WAIFU#2844: there's a great deal of variance
bmk#1476: the road to nonpaperclippification is also paved with alignment attempts
AI_WAIFU#2844: all it takes is 1 cosmic ray in the wrong place and you get an s-risk
zphang#7252: EAI is an AI memes discord with some off-topic channels for discussing research
Bedebao#4842: > this is the exact opposite of the truth tho
@StellaAthena Oh uh, my bad then.
bmk#1476: all roads to all outcomes more than a few minutes in the future are paved with all attempts with near-prior distribution *\*taleb intensifies\**
Daj#7482: https://www.technologyreview.com/2020/11/03/1011616/ai-godfather-geoffrey-hinton-deep-learning-will-do-everything/
Sid#2121: > 1.5B model? how many TPU hours did it take and with which TPU version?
@MasterScrat trained on a TPU-128 for ~ ten days give or take. It actually took a lot longer because we're always being pre-empted. Our current models are more efficient, also
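In raw TPU-hours, assuming "TPU-128" means a v3 pod slice with 128 cores running the full ten days (preemptions excluded):

```python
cores, days = 128, 10
print(cores * days * 24)  # 30,720 core-hours, before preemption overhead
```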
spirit-from-germany#1488: https://blogs.nvidia.com/blog/2020/11/02/nvidia-a100-launches-on-aws/?fbclid=IwAR0dizme8whCQlgn6Mc6-U5fSbbEIZ0UghbdCycYEP8BA0Qpzf-fL9Qhrt0 |
spirit-from-germany#1488: Do you think 40GB of VRAM is enough to finetune GPT-2 770M? Or even 1.5B?
gwern#1782: of course. 1.5b was trained on 16GB V100s, iirc
spirit-from-germany#1488: hmm... when I checked if I could finetune GPT2 on Colab Pro with P100s, it always got OOM errors
chirp#4545: https://www.reddit.com/r/MachineLearning/comments/jtbr8c/d_how_do_you_find_the_motivation_to_keep_doing_ml/
gwern#1782: catgirls.
mridulkabra#8732: Hi, is there any tool based on GPT-3 that writes content? Want to study a few things in depth
MasterScrat#6910: I believe AI Dungeon is your best bet right now if you're not part of the GPT-3 beta: https://play.aidungeon.io/
Airatak#7842: @mridulkabra There was philosopherai but OpenAI's new pricing kind of killed it
Airatak#7842: The Dev does say that he will bring it back, but it will be on a pay-to-use basis
mridulkabra#8732: Okay, I checked there is one amazing tool called Simplify which simplifies difficult text to easier text which was the closest to content generation I felt
Airatak#7842: Well that is more summarization than generation
mridulkabra#8732: Yes, that's true, it is very limited in scope. How did the person whose article went viral write it?
mridulkabra#8732: Did he use his direct access?
StellaAthena#3530: That’s not remotely the same thing as content generation tho
StellaAthena#3530: What is your usecase where those two things are interchangeable
andyljones#7746: > Did he use his direct access?
@mridulkabra yes
Airatak#7842: Btw, if you guys want, I can host the GPTneo models online
Airatak#7842: I see that currently you guys don't have them hosted
Airatak#7842: Also, anyone know of some pretrained text gen model with a large context size? I can't find anything bigger than 2048 tokens |
Ken#8338: Interesting article discussing Nick Bostrom's new working paper about future AGI and utilitarianism: https://www.bbc.com/future/article/20201111-philosophy-of-utility-monsters-and-artificial-intelligence
gwern#1782: @Airatak not necessarily what you think of by text perhaps but https://www.gwern.net/GPT-2-music#generating-midi-with-10k30k-context-windows
gwern#1782: you could also look at the repos for all the long-range transformers like reformer to see if they've released checkpoints
Airatak#7842: I was hoping for text like short stories or articles
Airatak#7842: I'm trying to train a model from scratch with a large context window (8192 tokens) but it seems to eat the 16GB GPU memory I am using for breakfast
bmk#1476: How many params
bmk#1476: 8192 with, say, 117M should be fine on 16GB as long as your batch size is small
Airatak#7842: Around 700M
bmk#1476: Hmm probably go smaller then
Airatak#7842: but then the network doesn't grasp the language patterns very well
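A back-of-envelope for why an 8192-token context eats the 16GB, assuming vanilla attention that materializes the full score matrix (memory-efficient attention kernels change this picture, and a 700M model has more layers and heads than assumed here, so it's worse still):

```python
def attn_score_gb(seq_len: int, layers: int = 12, heads: int = 12) -> float:
    """fp32 bytes for the (heads, seq, seq) attention matrices across all layers."""
    return layers * heads * seq_len**2 * 4 / 1e9

print(attn_score_gb(1024))  # ~0.6 GB
print(attn_score_gb(8192))  # ~38.7 GB - quadratic in context length
```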
Airatak#7842: This is probably a stupid idea, but is it possible to change the context window on a pretrained gpt2 model and maybe finetune it a bit?
CRG#8707: > This is probably a stupid idea, but is it possible to change the context window on a pretrained gpt2 model and maybe finetune it a bit?
@Airatak ViT was finetuned on higher resolution images by interpolating the 2D positional embeddings to a new resolution. https://cdn.discordapp.com/attachments/729741769738158194/777617374349885440/3a60d6dcd09307f54979ba3baafad208.png
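A minimal 1D sketch of the same trick for a GPT-2-style learned positional table (names here are illustrative, not from any specific codebase; the model still needs finetuning afterwards):

```python
import torch
import torch.nn.functional as F

def stretch_positions(wpe: torch.Tensor, new_len: int) -> torch.Tensor:
    """Linearly interpolate a (old_len, dim) positional embedding
    table to (new_len, dim), ViT-style but in 1D."""
    pe = wpe.T.unsqueeze(0)  # (1, dim, old_len) - interpolate wants (N, C, L)
    pe = F.interpolate(pe, size=new_len, mode="linear", align_corners=False)
    return pe.squeeze(0).T   # (new_len, dim)

old = torch.randn(1024, 768)               # GPT-2 small's table shape
print(stretch_positions(old, 8192).shape)  # torch.Size([8192, 768])
```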
Noa Nabeshima#0290: Is GPU as a function a thing? Where whenever someone wants to sample from a LM a GPU is loaded up and run but you only pay for queries not by the hour?
Noa Nabeshima#0290: I think you could hack this, but is anything designed to work this way?
bmk#1476: Sounds like openai
bmk#1476: Failing that, there might be gpu lambdas?
Airatak#7842: That sounds interesting and could potentially scale very well, but would also be very expensive for bigger projects
Airatak#7842: I ran a website on AWS Lambda and it ended up being very expensive compared to a kubernetes based solution
zphang#7252: also https://huggingface.co/pricing |
Airatak#7842: btw you guys think you could host the GPT Neo model on huggingface? That would be real awesome and convenient
Sid#2121: sure @Airatak that sounds like a great idea eventually
Sid#2121: a question: Does anyone have any idea the kind of hardware OpenAI used to train GPT3? Specifically the type of interconnects used between GPUs?
kindiana#1016: maybe nvlink? the datacenter scale nvlink stuff didn't come out until recently so maybe not
Sid#2121: and for inference, i'm guessing you'd need a similar speed interconnect?
kindiana#1016: depending on how you partition the computation, it could be a lot lower
kindiana#1016: tens of gigabits instead of hundreds
Sid#2121: oh? for training?
kindiana#1016: hrm, I thought they did something with ms?
kindiana#1016: how did they build gpt3 without large scale training infra lol
Sid#2121: > also he told me that they currently don't have large-scale training infra.
@Aran Komatsuzaki wait, i'm confused. You mean they no longer have access to the same infra they used for training?
Sid#2121: i also thought the same as @kindiana
Aran Komatsuzaki#5714: sorry i thought you were talking about HF lol
Sid#2121: ah no, OA
Aran Komatsuzaki#5714: it's ms's supercluster w/ Deepspeed-like infra w/ heavy pipeline parallelism
Aran Komatsuzaki#5714: so i have no idea
Sid#2121: but no one has any idea what ms's supercluster *actually is* tho right
Aran Komatsuzaki#5714: let me check
Sid#2121: https://www.engineering.com/Hardware/ArticleID/20343/The-Hardware-in-Microsofts-OpenAI-Supercomputer-Is-Insane.aspx huh |