Sphinx#2092: but like I said, it's a well-kept secret in translation
Sphinx#2092: why do you think mT5 doesn't report any numbers on translation? lol
Sphinx#2092: You can also see hints of this here: https://arxiv.org/abs/2002.06823
Sphinx#2092: Section 3 again.
Sphinx#2092: where initializing the encoder with BERT actually ends up with worse results.
StellaAthena#3530: Oh, I thought you were talking about comparing BERT + finetuning with a *translation specific model*
gwern#1782: https://arxiv.org/pdf/2102.01293.pdf#subsection.3.1 so this is only for really small models of the sort you don't use in practice anyway?
Sphinx#2092: So if you scale everything in that picture, the same pattern emerges.
gwern#1782: why would I scale everything like that in the way that produces bad results?
Sphinx#2092: Exactly.
Sphinx#2092: That's the problem.
Sphinx#2092: Knowing that this can happen is important.
gwern#1782: I would scale the model size sensibly, not starting from 1m parameters, and then transfer will go back to working like it should
Sphinx#2092: It's not the 1m parameters.
gwern#1782: you wouldn't even need to think about this in the first place if you were using optimal scaling laws
gwern#1782: because then you'd never be anywhere near the data/model-size imbalance apparently required to produce ossification
Sphinx#2092: Like I said, if you think I'm wrong, you're more than welcome to take BERT, finetune it for MT, and get SOTA.
bmk#1476: this plot seems to imply that bigger models benefit more from pretraining https://cdn.discordapp.com/attachments/729741769738158194/830620968397439027/unknown.png
StellaAthena#3530: Isn't SOTA *non-BERT* custom translation models?
Sphinx#2092: It's not custom. It's just vanilla transformers
Sphinx#2092: trained for that task specifically.
Sphinx#2092: and I also allow the BERT model to be finetuned on that task.
Sphinx#2092: and by "finetune" I really mean, do the same training that you would do for the vanilla transformer.
bmk#1476: has anyone reached SOTA with a randomly initialized bert (arch) model fine tuned on MT?
Sphinx#2092: I mean, that's just a regular transformer at that point.
gwern#1782: the theoretical point here about optimization landscapes and overparameterization is interesting though: "We refer to this phenomenon as ossification because one could think of the pre-training as a particularly bad initialization that the model has trouble recovering from." because the model is so small, the loss landscape is very bad and rough. if you go big, it can smoothly descend towards the new optimum nearby its informative prior initialization
StellaAthena#3530: @Sphinx I believe the answer to BMK's question is "yes," in which case it would be far more helpful to say that. A major issue here is communication and ambiguous replies like that do not help.
bmk#1476: i mean, people keep publishing new weird transformer variants
bmk#1476: for all i know one of those is currently sota - i know nothing about mt
Sphinx#2092: Sure, that's fair. I'm talking about traditional Transformer, let's keep the general architecture (seq2seq) constant.
Sphinx#2092: Though there are already some variations between the traditional Transformer and mT5.
Kharr#7888: FWIW I spent way too much time learning about this problem the hard way :sadge: The pretraining can bake in a loss landscape that is not compatible with your task. This is why models like GPT which can naturally perform multiple tasks are so interesting, since you can finetune them with much less effort.
bmk#1476: a quick skim of the transfer learning fine tuning paper seems to suggest that the problem only appears when you train a model on too much data relative to its size
Sphinx#2092: Right, so in that paper, they blame it on the finetuning dataset size.
Sphinx#2092: Though I think it also occurs in the few data regime. For example, people have found that instead of finetuning all of bert, if you just reinitialize the last layers to be random, then finetune the whole thing
Sphinx#2092: it does better than finetuning BERT by itself
Sphinx#2092: because the last few layers overfit to the pretraining task.
Kharr#7888: https://cdn.discordapp.com/attachments/729741769738158194/830624047733669888/unknown.png
Sphinx#2092: So I think it might be something more fundamental, but unfortunately I don't have any rigorous explanation for it, since the literature is oddly lacking in that part.
gwern#1782: 'expected owlcome'
Kharr#7888: I wonder what wonderful pretraining tasks we'll see in the future. I'm surprised we're still using MLM
Sphinx#2092: Yeah hopefully we move away from masking.
Sphinx#2092: That was one of the nice things about the MARGE paper, where they used retrieval as a way of sidestepping masking.
Aran Komatsuzaki#5714: for god's sake gpt-jax we've built is pretty much transformer-decoder from 2017 lol
bmk#1476: im still holding out hope that finetuning 175B will smash sota
janus#0150: On ossification: I can construct an intuitive argument for why it might happen, but I don't know how well it corresponds to reality. If some skill is relevant in two contexts A and B, but slightly differently, learning the skill in context A puts you in a strong local minima in context B. An even more anthropomorphic way of putting it: when you develop concepts/skills and are put into a new environment, you try to reuse the concepts/skills you have by making slight modifications, whereas by starting from scratch you could develop concepts/skills as applicable to the new environment from the beginning.
It seems like adjusting the learning rate would identify this, but perhaps the effect is too strong.
bmk#1476: wait, i have an idea
Sphinx#2092: SOTA is just training a Transformer from scratch, no pretrained initialization. You can do 'better' than SOTA by using some tricks (e.g. backtranslation), but if you want an apples-to-apples comparison, just vanilla transformer will get you there.
bmk#1476: pretrain on a mix of x python and (1 - x) english, finetune on english
bmk#1476: has anyone done this yet
Sphinx#2092: They did this in the paper for x = 1/2.
bmk#1476: i mean the whole spectrum
Sphinx#2092: actually, they finetune on python, not English.
EricHallahan#1051: So you created a simplex?
bmk#1476: in fact, focusing on that variable, not even considering other stuff
bmk#1476: ?
EricHallahan#1051: Math.
EricHallahan#1051: nvm
bmk#1476: sorry, i dont know anything more advanced than multiplication
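bmk's mixing-ratio sweep could be sketched with a toy sampler (stand-in corpora and a hypothetical `mixed_batch` helper, not anything from the paper):

```python
import random

def mixed_batch(python_corpus, english_corpus, x, n, seed=0):
    # Draw n pretraining examples: from the Python corpus with probability x,
    # from the English corpus otherwise.
    rng = random.Random(seed)
    return [
        rng.choice(python_corpus) if rng.random() < x else rng.choice(english_corpus)
        for _ in range(n)
    ]

py = ["def f(): pass", "x = 1"]
en = ["hello world", "the cat sat"]
# Sweep over the whole spectrum of mixing ratios, as bmk suggests.
sweep = {x: mixed_batch(py, en, x, 100) for x in (0.0, 0.25, 0.5, 0.75, 1.0)}
```

x = 0 and x = 1 recover the pure-English and pure-Python baselines; the interesting question is the shape of the finetuned loss in between.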
Sphinx#2092: There are some differences between SOTA and "good at translation", if that makes sense.
Sphinx#2092: Especially for pairs Zh-En, Fr-En, De-En, automatic metrics are not good enough to differentiate "good" models.
Sphinx#2092: I think BLEU is even negatively correlated with human judgments once you get past some point.
Sphinx#2092: Models used to not be that good lol
Sphinx#2092: so we were in the "okay" regime. And for less traditional pairs, we are still in the okay regime.
janus#0150: thats ML
bmk#1476: welcome to Goodhart Land
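For reference, the metric being Goodharted here is easy to reproduce. A stripped-down sentence-level BLEU (real evaluations use sacreBLEU; `simple_bleu` is only a toy to show what the metric rewards):

```python
import math
from collections import Counter

def simple_bleu(hypothesis, reference, max_n=4):
    # Geometric mean of modified n-gram precisions times a brevity penalty.
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        precisions.append(overlap / max(sum(hyp_ngrams.values()), 1))
    if min(precisions) == 0:
        return 0.0  # no smoothing: any missing n-gram order zeroes the score
    brevity = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

An exact copy scores 1.0, while "a cat sat on a mat" against "the cat sat on the mat" scores 0.0 because no 4-gram matches — exactly the adequate-translation-punishing behavior that decouples BLEU from human judgment past a certain quality level.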
Sphinx#2092: Translation also has other annoying issues, like the fact that translated text is of a different distribution than natural text.
EricHallahan#1051: e.g. colloquialisms, sayings
Sphinx#2092: Yeah, domain and style is certainly a problem too, but I mean even more simple.
Sphinx#2092: Like, humans usually simplify text when they translate.
EricHallahan#1051: This fundamental problem also shows up in voice conversion.
bmk#1476: what about more, uh, *artistic* translations
Sphinx#2092: There are certain artifacts that manifest, and you can actually measure this by just building language models on translated outputs and natural outputs
Sphinx#2092: and see that the distribution is different.
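That LM probe can be illustrated in miniature with add-one unigram models (toy corpora and a hypothetical `unigram_nll` helper; a real probe would fit proper language models on large corpora):

```python
import math
from collections import Counter

def unigram_nll(train_tokens, test_tokens, alpha=1.0):
    # Mean negative log-likelihood of test_tokens under an add-alpha
    # unigram model fit on train_tokens; lower means "more similar".
    counts = Counter(train_tokens)
    vocab = set(train_tokens) | set(test_tokens)
    total = sum(counts.values()) + alpha * len(vocab)
    nll = -sum(math.log((counts[t] + alpha) / total) for t in test_tokens)
    return nll / len(test_tokens)

natural = "cats chase mice and dogs chase cats".split()
translationese = "the cat pursues the mouse and the dog pursues the cat".split()
sample = "the dog pursues the cat".split()
```

The translationese-flavored sample scores a lower NLL under the model fit on translated-style text than under the one fit on natural text — the distribution gap Sphinx describes, in toy form.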
bmk#1476: I'm currently reading a (fiction) book in both English and the original Chinese and they both feel roughly in the same "distribution" of good fiction text because of liberties taken by the translator that still preserve semantic similarity enough that it doesn't feel jarringly different
Sphinx#2092: Yeah so I think for that kind of domain, they likely put more effort to capture the spirit of it. I was thinking more along the lines of news
Sphinx#2092: Where you might focus more on conveying the facts than the style
bmk#1476: so it seems like there are ways of getting the sort of more faithful along multiple dimensions kinda translations, even if it's a bit more expensive
bmk#1476: maybe with better sample efficiency in the future this could be worth it
Sphinx#2092: Yeah for sure.
bmk#1476: i wonder if anyone's made a book translations dataset - it would be hella copyrighted, but that hasn't stopped people in the past
bmk#1476: book translations should be in general higher quality in that sort of sense right
Sphinx#2092: I dont actually know how they get those so no clue.
Sphinx#2092: You can use the Bible though.
Sphinx#2092: That's been translated into a lot of languages.
bmk#1476: i was thinking scraping all of libgen and then pairing books up by languages
Sphinx#2092: Then manually align sentences?
bmk#1476: well, probably align at chapter level or whatever level is the longest that fits in your model context
bmk#1476: tho i kinda want to do full book length end to end with efficient attention lol
bmk#1476: impractical rn but a fun thought experiment
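For the pairing-up step, a crude length-based heuristic in the spirit of Gale & Church (which real aligners like vecalign refine considerably) already gets surprisingly far — `align_by_length` below is only a sketch:

```python
def align_by_length(src_chapters, tgt_chapters):
    # Pair each source chapter with the target chapter whose cumulative
    # character-length fraction is closest.
    def cum_fracs(chapters):
        total = sum(len(c) for c in chapters)
        acc, fracs = 0, []
        for c in chapters:
            acc += len(c)
            fracs.append(acc / total)
        return fracs

    src_f, tgt_f = cum_fracs(src_chapters), cum_fracs(tgt_chapters)
    return [
        (i, min(range(len(tgt_f)), key=lambda j: abs(src_f[i] - tgt_f[j])))
        for i in range(len(src_f))
    ]
```

It assumes chapters appear in the same order in both editions, which mostly holds for books even when sentences don't line up one-to-one.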
Sphinx#2092: There's lots of interest in doc level stuff.
Sphinx#2092: Even if your goal is just a final sentence output
Sphinx#2092: Being able to give context helps
Kharr#7888: Funny, I was just reading https://arxiv.org/pdf/2004.08483.pdf
RyanT#5929: https://jmlr.org/papers/v22/20-302.html
RyanT#5929: Interesting looking paper
gwern#1782: you should just read /r/AnimeResearch
gwern#1782: but I doubt there's anything better than whatever Preferred Networks' last publication is
𓅬 gabriel_syme 𓅬#3220: would 4x3090 be better than 2xA100 (the model is a VQGAN)? I've got no experience with A100s
EricHallahan#1051: Do you need memory or compute?
𓅬 gabriel_syme 𓅬#3220: compute over memory I think
Teemochu#8740: 4x3090 is certainly cheaper
EricHallahan#1051: It depends on if you will hit bandwidth limitations.
𓅬 gabriel_syme 𓅬#3220: oh this was renting a machine btw 🙂
Teemochu#8740: Still cheaper, looking at vast.ai's 3090/V100 comparisons and assuming an A100 is going to be more expensive than a V100 (edit: huh someone actually has a 2xA100 system and it's surprisingly affordable)
𓅬 gabriel_syme 𓅬#3220: there's 2 identical right now, which is why I asked heh
𓅬 gabriel_syme 𓅬#3220: 😮
𓅬 gabriel_syme 𓅬#3220: it's alright, I'll just try it out and see. Good to know anyways
mkualquiera#3484: Hey I know what a simplex is too
mkualquiera#3484: ever since you told me anyway
theurbandragon#3939: I'm a lurker here, but I found this: https://petalica-paint.pixiv.dev/index_en.html
andyljones#7746: don't wanna push anyone in that direction coz jacob hilton at OAI has already done it, it's just sitting in an unpublished draft right now.
(bit embarrassing: he's a friend and after he sent me it i said id cite it in the scaling scaling laws paper. and then i totally forgot :cry:. will be in the conference draft, up ~next week)
𓅬 gabriel_syme 𓅬#3220: I actually love that domain 🙂 personal preference and also close to what the people at the lab I'm doing a PhD do
𓅬 gabriel_syme 𓅬#3220: sweet 3h/epoch on 2x3090s (vs 32h in Colab), that will do!
CKtalon#7792: I have a ~16m line zh-en dataset, but it can't be shared obviously. But it's hard to get a corpus from traditional ways. A typical novel like Three Body Problem is ~5000 lines. Each of Harry Potter is ~6000 lines on avg. Game Of Thrones ~8000 per book
I also have a ~2m line en-indonesian corpus. Results aren't that great. Might need more lines.
CKtalon#7792: from my testing, yes. though the amount of data matters obviously. I think that's more limiting than the parameters you can scale
CKtalon#7792: https://github.com/thompsonb/vecalign
This works well for me
CKtalon#7792: nope
CKtalon#7792: singapore
CKtalon#7792: i'm not in the ML scene either. lol
CKtalon#7792: more for small business/commercial reasons
CKtalon#7792: I'm actually interested in this because of the way MT models are trained now. It's terrible for zero-shot of a 'new' book. Names are always inconsistent if it's something that hasn't been seen. Finetuning on a few hundred lines improves the problem significantly.
CKtalon#7792: i don't know either. I just do things myself.
CKtalon#7792: but it should be good since bytedance is here
CKtalon#7792: and they are hiring a lot of NLP
CKtalon#7792: https://www.aclweb.org/anthology/W19-5321/
marcin apparently did document-level MT 2 years ago, which is relevant to book translations. Since you need the attention to just keep moving forward.
andyljones#7746: what'd you mean
Teemochu#8740: I did Code Jam, made it to Round 3, was cold called a year later by G
andyljones#7746: get in as an engineer? to summarize over all the folks i know at these places, 80% of them were CS undergrads at respected universities and went straight into the interview process at a FAANG. other 20% are idiosyncratic random walks through startups that got acquired, impressive open source projects, lateral moves from other scientific fields, PhDs in whatever
Teemochu#8740: ...a few days before I would have submitted my resume at my top-50 school's career fair anyway
andyljones#7746: yes
andyljones#7746: but it's much harder. somewhat because the path hasn't been cleared by a bunch of people before you, somewhat because you ain't subject to the selection bias that 'graduating from a respected uni with a CS degree' grants you |
andyljones#7746: yes. two qualifiers against the 'common sense' interpretation of this though:
* first, i am talking about statistics here. there are a lot of extraordinary engineers from no-name unis. but if you pick randomly from the pool of 'berkeley CS grads' v. the pool of 'CS grads', you will *absolutely* notice a difference.
* second, a lot of the quality difference was induced in who got into which uni. the uni's education matters much less.
andyljones#7746: not quite 'throw away', but you better have something impressive on there to compensate
andyljones#7746: heck yes
𓅬 gabriel_syme 𓅬#3220: that's one of the best in the world, and not just in AI
𓅬 gabriel_syme 𓅬#3220: architecture, fabrication, engineering are quite impressive, for example
andyljones#7746: fwiw, tsinghua international courses are considered to be a lot softer than the domestic courses. some hirers are aware of this, some aren't.
𓅬 gabriel_syme 𓅬#3220: also, another idea is maybe not go to FAANG 🙂
𓅬 gabriel_syme 𓅬#3220: surprisingly, there is a world outside of that 🙂 I'm saying it in a nice way btw, try to do smth with your skills first, investigate (if you can afford to 'wait' ofc)
andyljones#7746: i have a pretty high opinion of myself, but i would be *flattened* by the average person to get into tsinghua or IIT bombay
andyljones#7746: selection bias, it's magic
nz#9710: where am I getting 4096 TPUs tho :thonk:
andyljones#7746: yer wot
𓅬 gabriel_syme 𓅬#3220: why do you need so many
nz#9710: :morelayers:
𓅬 gabriel_syme 𓅬#3220: I think the goal is to get to do something you like, in a place where you can grow, without burning out or destroying life outside of work.
𓅬 gabriel_syme 𓅬#3220: That's my goal at least, been managing that the last 3 years not so much before that
𓅬 gabriel_syme 𓅬#3220: 🙂
𓅬 gabriel_syme 𓅬#3220: location is important I feel
𓅬 gabriel_syme 𓅬#3220: I live in a place where quality of life is good, even with family, working remotely
𓅬 gabriel_syme 𓅬#3220: Malaysia
Teemochu#8740: Isn't Tsinghua one of the best universities in China? I seem to recall seeing them at ICPC.
𓅬 gabriel_syme 𓅬#3220: also, if you don't mind distance, covid was a great chance to do PhD's remotely (or almost completely)
Teemochu#8740: oh yeah codeforces is good for that level of training (Code Jam is similar here)... for something more like interviews (easier), Leetcode is probably better
Teemochu#8740: the Saturday competitions (may be Sunday if you're in Asia) are good if you want a good timed environment
𓅬 gabriel_syme 𓅬#3220: oh I didn't know this existed lol. Is it good to learn or just for interviews?
Teemochu#8740: 02:30 UTC Sun weekly, also 14:30 UTC Sat biweekly (next one is this coming week)
nz#9710: this is a cool resource for leetcode stuff: https://seanprashad.com/leetcode-patterns/
Teemochu#8740: great question bank in the algorithms domain
Teemochu#8740: they used to be one of the best for competitions but they stopped a few years ago
nz#9710: There are many, leetcode, hackerrank, codeforces, a japanese one etc etc
Teemochu#8740: Atcoder is the Japanese one
nz#9710: It doesn't really matter for you to join every single one, what matters is what you learn (and how much you've practiced)
Teemochu#8740: compensation is very good, hard to say much else while we're not in-office
andyljones#7746: honestly this is the best advice in the whole thread
don't try and do the thing that everyone else wants to do. sit down and think hard about what the most impactful work you can do is, and it'll probably turn out to be deeply unusual and a lot less competitive than 'work at a FAANG'
andyljones#7746: https://80000hours.org/
is a great place to start with this kind of thinking
Teemochu#8740: eh I'm generally philosophically e2g-or-bust (aka just-e) there
𓅬 gabriel_syme 𓅬#3220: what do you mean by 'make the top stuff'? there's so many amazing things they don't make tbh
Teemochu#8740: anyway about to go to sleep it's almost 3 am here 😛
andyljones#7746: as an ex-quant-trader i sympathise. buuuuut turns out that there are a lot of billionaires in the movement, the bottleneck arguably ain't raising cash any more
nz#9710: (sorry, e2g?)
Teemochu#8740: earn to give
andyljones#7746: earning to give
andyljones#7746: actually i turned too fast to doubt here. my first words should have been "🥳 🎉 🥳, that's amazing!", sorry
Teemochu#8740: tbh I limit the amount I "care" to 10% of my earnings, which is a very hard thing to do with anything direct
andyljones#7746: average donation to charity is two tenths of a percent iirc, you should be spectacularly proud of yourself for 10%
Teemochu#8740: doesn't mean I can't donate more, just that no feeling of obligation should ever cause me to break 0.1
andyljones#7746: magic phrase is 'effective altruism', or 'giving what we can pledge'
Teemochu#8740: eh, I'm more of "I should", and I have my reasons for waiting for a little while tbh
Teemochu#8740: Covid closure stuff as far as I am concerned was a donation of 30% of my income during the entire time (in the form of freedoms rather than money), so I'm "donating" only virtually until around 2024 until the debt is repaid to me. One of the benefits of normally donating 10% is I can adjust things downward if I feel the world is caring too much.
andyljones#7746: it ain't without blinking. 10%'s a chunk no matter who you are. point is to care enough that you'll do it anyway
𓅬 gabriel_syme 𓅬#3220: there are other ways to go about this btw. One is, instead of charity, working on something that positively impacts a ton of people
nz#9710: For effective altruism purposes do you invest those 10% of earnings first and then donate later or directly donate?
nz#9710: I know the US has tax benefits for charity-intended investments.
andyljones#7746: most people donate straight away (i did), though the EA forum has some posts on long-termist stuff if that's your jam
andyljones#7746: *point is* that it's $20? $50? per disability-adjusted life year if you donate it to deworming or antimalarials instead
andyljones#7746: forces you to quantify your selfishness
andyljones#7746: everyone's selfish! but mostly people hide that selfishness behind ignorance of what their money could accomplish, rather than confronting it head on
Teemochu#8740: Directly, but donating appreciated stock (and rebuying it immediately) is best because the capital gains go poof... it's a free basis step-up
andyljones#7746: will mention i've several friends who actually align actions with ethics and donate >90% of their income. make banker salaries, live like students. buuuut they're the few and far between. it hard.
Daj#7482: re Earning to Give https://www.lesswrong.com/posts/wEebEiPpEwjYvnyqq/when-money-is-abundant-knowledge-is-the-real-wealth
Daj#7482: e2g imo makes no sense if you have the skills to work on anything else vaguely EA
Teemochu#8740: hopefully they have a couple M saved for financial independence
Teemochu#8740: if not they're really going to have a rude awakening
nz#9710: Damn that's... insane
andyljones#7746: they're the kind of people who can make more money in a year than the average person will see in their lives.
andyljones#7746: actually, hah, the real qualifier is they - i - have middle class enough families that even if everything goes to shit, it'll be okay
andyljones#7746: worst that happens is you move in with your parents. damaged pride, nothing more.
Teemochu#8740: part of financial independence to me is mitigating the risk of "other people" - related things falling through
Teemochu#8740: (also I'd rather die than live with family again so there's that)
Daj#7482: tbf I think if you deliver a good service, getting more money isn't _necessarily_ a bad thing
Daj#7482: Since you create value
Daj#7482: e.g. I think we should just give that lady that invented mRNA vaccines a billion dollars or something lol
Daj#7482: and even if Bezos is literally a super villain, amazon provided extremely good services
Daj#7482: so the economy rewarded him
Daj#7482: can't be too mad
Teemochu#8740: I remember "2-4 weeks for delivery" back in the late 90s
nz#9710: as mentioned before, I agree with you connor, my main issue is with inherited wealth
Daj#7482: Yea, the problem is that corporations are _amoral_
Daj#7482: They're not immoral
Daj#7482: They just respond like robots to incentives
Teemochu#8740: I've seen what happens when corporations try to be moral. I'll take amorality any day.
Daj#7482: "Big Business" by Tyler Cowen is a good book about how big corporations aren't as morally evil as we like to believe
Daj#7482: Just very amoral
bismarck91#5255: https://tanelp.github.io/posts/a-bug-that-plagues-thousands-of-open-source-ml-projects/
bismarck91#5255: Am I the only one who should be worried?
Kazumi#1297: Time to learn a new framework, instead of fixing the issue
bh#3738: That's a wholly expected bug. RNGs should be explicitly passed, like in Haskell's random monad
CKtalon#7792: it can be worked around; it's just that no one knew that it was a bug
CKtalon#7792: so a lot of models were trained with the bug
bh#3738: This reminds me of a bug I came across. Someone had used a weak hash function and used it for distributing tasks to various shards. The hash worked on pointers, the tasks were allocated sequentially. Rather than distribute them fairly, it maximized lock contention.
bismarck91#5255: using torch's rand method or Python's built-in random module solves it.
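For the record, the linked bug is every forked DataLoader worker inheriting a copy of the same NumPy RNG state, so all workers apply identical "random" augmentations. The explicit-RNG fix bh describes can be mimicked with the stdlib alone (`make_worker_rngs` is a hypothetical helper; in real PyTorch the equivalent reseeding goes in `worker_init_fn`):

```python
import random

def make_worker_rngs(base_seed, num_workers):
    # Fix: derive an independent generator per worker from the base seed,
    # instead of letting every forked worker inherit identical RNG state.
    return [random.Random(base_seed + worker_id) for worker_id in range(num_workers)]

# The bug, in miniature: four "workers" that all copied the same seeded
# state produce the exact same "random" draws.
buggy = [random.Random(42) for _ in range(4)]
fixed = make_worker_rngs(42, 4)
```

Passing the generator around explicitly (rather than relying on a global) also makes the bug impossible to reintroduce silently.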
trueutkarsh#8921: Hi @deleted-role ! Utkarsh here
I am a Software Engineer based in London, originally from India.
I really liked the initiative and would like to learn and contribute to the project.
I have past experience in solving open medical problems through deep learning and building data pipelines. Currently my role at Goldman Sachs doesn't expose me to a lot of cool tech and ML stuff so I want to use this platform to stay up to date and do cool stuff!
Looking forward to interacting with everyone and working together
EricHallahan#1051: Welcome! (If you haven't already, check out the resources in #rules, there happens to be a lot of useful stuff in there.)
researcher2#9294: I need to learn RL now, anybody done this course? https://www.coursera.org/specializations/reinforcement-learning
nz#9710: IIRC David Silver (?) recommended it
researcher2#9294: I'll add that to the list of pros, though I wonder whether course recommendations from actual geniuses are the best lol. And thanks!
researcher2#9294: Will probably start tomorrow unless anybody says it's complete garbage, I find even bad courses provide a good focal point to branch out and do your own learning - so many blogs and stuff these days.
researcher2#9294: Any Alberta alumni here?
andyljones#7746: https://github.com/andyljones/reinforcement-learning-discord-wiki/wiki#recommended-resources
triggerhappygandi#0001: Who ping? There's a mountain of messages
triggerhappygandi#0001: Nvm
triggerhappygandi#0001: @researcher2 Deepmind's courses on youtube
triggerhappygandi#0001: I want to get into RL too (it's been a long time since I looked there. I'll try to keep up with you lol)
RyanT#5929: @chilli do you know anything about wavelets on graphs for GNNs? like https://arxiv.org/pdf/1904.07785.pdf
RyanT#5929: I havent really seen much about it but it seems relevant to the broad usefulness of fourier features now
researcher2#9294: thanks andy!
mgostIH#0245: Just wait until they figure out some RL transformer that does better than everything ever before
researcher2#9294: I'll definitely bring up random stuff as I go along Mr Gandi Duck.
triggerhappygandi#0001: Many thanks.
researcher2#9294: Just finished the NLP courses from deeplearning.ai, wasn't in love but worth it I think just to get an overview of history - videos are meh, labs are ok, trax is... different.
triggerhappygandi#0001: They use trax?
researcher2#9294: Yeah you may have noticed me asking strange questions about Trax randomly - that's why lol
nz#9710: I think trax is only used by lukasz kaiser
triggerhappygandi#0001: Damn. I did the deep learning course while back... It was all tf 1
researcher2#9294: Same, I quite liked tf, but pytorch is the clear winner for learning imo.
chilli#5665: Yeah it's pretty good
triggerhappygandi#0001: Trax is yet another wrapper over jax yes?
triggerhappygandi#0001: Or is it wholly different
researcher2#9294: Yeah, high level wrapper
nz#9710: Yea
researcher2#9294: massively modular, functional, stack based
mgostIH#0245: @triggerhappygandi how long have you been doing DL
researcher2#9294: not in love tbh but apparently performance is good
chilli#5665: I speedran it for a bit
triggerhappygandi#0001: A year and a half@mgostIH
chilli#5665: Like, finished the entire course in a week or so
mgostIH#0245: What did you do before?
chilli#5665: It's a bit easy imo
researcher2#9294: the rl one?
researcher2#9294: nlp one i did the first 3 courses in about a week but the last one dragged out (I think mainly because I got hooked on Surviving Mars)
triggerhappygandi#0001: If Andrew Ng teaches it himself then it may be good, otherwise I'll pass.
researcher2#9294: Yeah biiiig step down from Andrew
researcher2#9294: videos are rubbish tbh
researcher2#9294: but labs and overall structure is good
researcher2#9294: Like having learnt purely online I've never seen the implementations of basics like Naive Bayes, Markov Chain etc
chilli#5665: Yeah
researcher2#9294: Were you already pre-trained?
chilli#5665: In RL?
researcher2#9294: Yus
chilli#5665: No
researcher2#9294: Ok good, hopefully won't take too long then. I only have very basic understanding of some q learning I read sitting on a boat while bored a few years ago lol
chilli#5665: I just didn't really want to pay for it
chilli#5665: Haha
researcher2#9294: haha thrifty
researcher2#9294: some courses don't let you go past week 2, nice of them to favor the gifted in such a way
chilli#5665: Mm, the issue with the course is that
chilli#5665: They don't let you do assignments
triggerhappygandi#0001: Protip apply for financial aid. Works 100% of the time
chilli#5665: Without being in the "paid" version
triggerhappygandi#0001: Make up any bullshit excuse
researcher2#9294: haha, I'm happy to pay, like contributing to good works
chilli#5665: I complained quite loudly about it
researcher2#9294: nothing against the freeloaders either tho
chilli#5665: And they said they were planning on uploading the notebooks separately or something
triggerhappygandi#0001: Same, but when I was a student... well had to make do
researcher2#9294: Yeah nobody of student age should really be paying for stuff if possible
gwern#1782: oh. so what'd he find? pretty much the expected?
StellaAthena#3530: Wait, did your paper get accepted to CoG already? That’s awesome
chilli#5665: that paper looks pretty unconvincing imo
chilli#5665: people have tried a lot of different approximations, including things like chebyshev approximations (chebnet) or wavelets (that paper)
chilli#5665: but none of them have been super convincing in terms of results
StellaAthena#3530: @chilli Have you read Taco Cohen’s recent paper? I’m curious what you think of it
StellaAthena#3530: https://arxiv.org/abs/2007.08349
chilli#5665: in general, I'm pretty down on "more expressive/powerful" graph neural networks
chilli#5665: perhaps they're valuable theoretically, but in practice they don't really seem to matter
StellaAthena#3530: Interesting.
RyanT#5929: Yeah that paper was the only one I could find, I was curious if you know of anything else
StellaAthena#3530: What are the kinds of problems where current graph neural networks don’t function as well as we might like?
chilli#5665: well, "as we might like" is quite broad
RyanT#5929: Also unrelated but does anyone know where I can find supplementary material for an ICML paper
chilli#5665: but basically, GNNs have consistently had trouble establishing themselves on top of graph benchmarks
chilli#5665: even when those benchmarks were designed for GNNs
chilli#5665: lol
StellaAthena#3530: Lol
chilli#5665: One of the big problems with GNNs is that the training/optimization procedure still kinda sucks
StellaAthena#3530: What outperforms them? More basic NNs or non-DL techniques
chilli#5665: It's kinda like an RNN
chilli#5665: in that sense
StellaAthena#3530: Hmmm
chilli#5665: mmm, varies
chilli#5665: depending on the task
chilli#5665: on some variants of node classification, label propagation tends to do very well https://cdn.discordapp.com/attachments/729741769738158194/830867392799178772/unknown.png
chilli#5665: on some variants of link predictions, common neighbor type-heuristics work well https://cdn.discordapp.com/attachments/729741769738158194/830867483321434197/unknown.png
chilli#5665: I guess for graph classification GNNs have the strongest potential, but even then, you see that a lot of these methods have trouble beating hand-crafted heuristics + a random forest: https://cdn.discordapp.com/attachments/729741769738158194/830867682420064285/unknown.png
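For context, the common-neighbor heuristic in those link-prediction tables is about as simple as baselines get, which is what makes it such an awkward one for GNNs to lose to — a sketch with toy adjacency sets (`common_neighbors_score` is hypothetical):

```python
def common_neighbors_score(adj, u, v):
    # Score a candidate link (u, v) by how many neighbors u and v share.
    return len(adj[u] & adj[v])

adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"a", "b"},
}
# c and d share neighbors a and b, making (c, d) a likely missing edge.
```

Variants like Adamic-Adar just reweight the shared neighbors by their degree; the whole family needs no training at all.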
chilli#5665: do you want wavelets specifically? or this general line of work with fourier approximations
chilli#5665: oh, and for a lot of tasks on code GNNs get beaten out by more transformer-like models
StellaAthena#3530: The kinds of problems I tend to be interested in at work look like “here’s a set of graphs with node and/or edge attributes. Some of the graphs do not belong, in the sense that they were not generated by the same underlying process. Can we identify which ones they are”
RyanT#5929: the latter, I did some work with graph signal processing in college but I haven't seen too much of it used in DL since
chilli#5665: hmm, sounds like graph classification
StellaAthena#3530: You can pretty much treat this as a classification problem though
chilli#5665: chebnets is a good one
chilli#5665: yeah, I mean, depending on the task you might be able to get GNNs to work well
chilli#5665: mm, I think a good heuristic for whether GNNs will perform well is
chilli#5665: "Can you do pretty well by processing each node independently and summing the results"
chilli#5665: or perhaps, can you do pretty well by processing local regions of the graph independently and averaging the results
StellaAthena#3530: The other kind of task I’m interested in is “given a large graph, some of the nodes do not belong. Can you identify which *nodes* they are?”
chilli#5665: hmm, that can be framed as node classification
chilli#5665: but once again, it really depends on the type of behavior you're running into
chilli#5665: if the task is homophilous (i.e. an edge between two nodes implies that they are likely to have the same class), then GNNs could perform well
chilli#5665: so, an example of a task where GNNs would probably not perform well is "if your graph has 5 triangles, it's class A. otherwise it's class B"
andyljones#7746: sorry, i think it's kosher for me to say 'it exists' and it's largely orthogonal to my work, but not much more. he'll hopefully have it out Soon^TM
andyljones#7746: *conference submission draft
deadline got pushed back a week because academics
spirit-from-germany#1488: Have you noticed this? https://cdn.discordapp.com/attachments/729741769738158194/830922013358948362/unknown.png
EricHallahan#1051: Who?
spirit-from-germany#1488: https://cdn.discordapp.com/attachments/729741769738158194/830922133294415872/unknown.png
spirit-from-germany#1488: I just heard it on the Py-Torch-Dall-e server
spirit-from-germany#1488: https://github.com/lucidrains/DALLE-pytorch/pull/183
EricHallahan#1051: Who are you talking to? |
EricHallahan#1051: Sparse attention works in NeoX, it has been integrated in to DeeperSpeed already.
spirit-from-germany#1488: whoever is interested in that 🙂 ... To my knowledge everyone was still waiting for a DeepSpeed fix to make sparse attention run on CUDA 11
spirit-from-germany#1488: ah... cool
spirit-from-germany#1488: didnt get that
spirit-from-germany#1488: lol 😄
spirit-from-germany#1488: 🥳
spirit-from-germany#1488: So you're atm waiting for huge A100 pods to arrive to start training a huge NEOX, right?
EricHallahan#1051: I don't think we even know what the final topology will look like.
EricHallahan#1051: We are waiting for more hardware, yes, but how that hardware is configured makes a big difference in what we can or cannot do.
Louis#0144: @Deleted User de23c58c I have a few friends who are NLP researchers in China
Louis#0144: I sent them the blog post
Louis#0144: Apparently it’s becoming very famous in their circles
Louis#0144: They already knew of it
gwern#1782: hm. so the chinese DL researchers know about it but all the western ones don't?
gwern#1782: seems like it's important to write up a blog post and get it out there
StellaAthena#3530: @gwern I've been thinking about whether or not it would be passe to write a blogpost for our blog tbh
gwern#1782: passe? who's written about rotary already?
gwern#1782: I've heard absolutely nothing about it anywhere but here
StellaAthena#3530: oh wrong word sorry
StellaAthena#3530: a social *faux pas*, I guess. This is probably the academic in me getting anxious for no reason tho. |
gwern#1782: doesn't seem like a faux pas to me to write a normal ML blog post describing it and initial observations on implementing & using it
EricHallahan#1051: (I've been looking into adding MathJax or KaTeX to the website, but apparently it is a perpetual issue with Hugo to do it with reasonable markup and with my desire to keep it responsive with minimal JavaScript.)
gwern#1782: and if it's a substantial boost to an arch detail which has to be used in every Transformer, it's important to get it out fast so people can experiment with it
gwern#1782: @EricHallahan have you looked at how gwern.net does it with static mathjax?
EricHallahan#1051: Nope
gwern#1782: I've also been gradually phasing out as much of my latex as possible. it turns out you can get a remarkable distance with just unicode, html, and a few CSS tweaks
StellaAthena#3530: @EricHallahan have you looked at this: https://github.com/peaceiris/hugo-mod-mathjax
gwern#1782: that's the usual JS approach, looks like. it's not so great because it obviously requires JS and delays page rendering noticeably
gwern#1782: what I do is pass it through mathjax-node which runs the JS at compile-time; all of the preprocessing is done, and the browser only needs to load the CSS+fonts
bmk#1476: if your goal is to get this out there asap, i don't think it's worth yak shaving over the mathjax implementation
bmk#1476: heck, write it up in latex and post the pdf on the blog
kindiana#1016: take screenshots of latex :berk:
StellaAthena#3530: Fair fair
gwern#1782: fwiw, I agree. I needed mathjax-node because on some of my pages, it was literally taking 5s to parse the entire thing and render. but for an EA page, you'll have like all of 5 equations and a few kilobytes of text
EricHallahan#1051: I don't like anything with JS involved.
gwern#1782: sure, but now you're admitting it's merely a technical esthetic reason 🙂
StellaAthena#3530: @Deleted User de23c58c want to team up for a rotary embedding blog post? Discuss the theory and derivation, and then delve into some of the experiments you've run with it?
EricHallahan#1051: MathJax is the worst offender IMO, it is ungodly slow.
Deleted User#0000: hmm, nah that's all you and cfoster or whoever interested
Deleted User#0000: yes, definitely let the world know |
EricHallahan#1051: I'd rather use LaTeX for markup than Markdown lol, it is so much more flexible.
Deleted User#0000: my Performer repo is already equipped with rotary embeddings too, so you can do runs with and without the feature with a single flag
Deleted User#0000: and observe the effects first-hand for linear attention
bmk#1476: then write it up in latex and post the pdf on the blog lol
EricHallahan#1051: Thing is it sucks to read PDFs. You are constrained to a nonexistent page.
EricHallahan#1051: Especially on mobile.
Deleted User#0000: ahh nice! for good reason
Deleted User#0000: let me know if you figure out when the authors will publish the paper
Deleted User#0000: if it will be on arxiv, etc
Deleted User#0000: or does China have its own arxiv?
StellaAthena#3530: @cfoster0 @Aran Komatsuzaki @whoever else, rotary embedding blog post? Share the news of this great improvement with the english-speaking world?
Deleted User#0000: i don't really know what they do over there
gwern#1782: how *do* chinese researchers publish outside the western apparatus? like, obviously not on arxiv
Deleted User#0000: i hear they VPN to use github anyhow
Deleted User#0000: 🤷♂️
StellaAthena#3530: There are several chineese preprint servers
Deleted User#0000: is arxiv firewalled?
StellaAthena#3530: Qiji e‐print archive (Qiji), the Chinese Preprint Server (CPS), and Chinese Science Papers Online (CSPO) for example
StellaAthena#3530: No idea
Deleted User#0000: ahh, did not know this |
Deleted User#0000: upload something that criticizes CCP in arxiv, get it firewalled, set back the chinese scientific establishment
Deleted User#0000: :berk:
Deleted User#0000: i would be a terrific troll if i worked for the US govt
kindiana#1016: i wonder if scihub is firewalled
StellaAthena#3530: I didn't realize you were a DIA agent, but it makes sense...
Ward#1738: All the young academics I know from China use VPN to get scientific articles. It is the norm among the people I know.
EricHallahan#1051: If we are going to do this, write it up in Overleaf and we can port it over to the website once I get math working the way I want.
StellaAthena#3530: I have too many responsibilities already. I need to make good on some of them before I can take on more tbh
StellaAthena#3530: I can write the math up if someone wants to take responsibility for most of the writing and doing the experiments, but I can't put that on my plate rn
StellaAthena#3530: speaking of which, I need to finish my hw >.>
EricHallahan#1051: Same here lol
cfoster0#4356: Definitely down to
cfoster0#4356: Should have a decent chunk of time this week: let's coordinate in #website, if that's alright with y'all
Keepthepace#6435: Hi, a few weeks (months?) ago I had a conversation here with people interested about the open source and copyleft licensing of models but I can't seem to find that conversation in the archive. Is anyone here interested in discussing/giving feedbacks on a copyleft license for ML models?
EricHallahan#1051: The models are currently licensed under Apache 2.0.
Keepthepace#6435: Yes, I saw that. It looks like it has been a long discussion. Not trying to challenge that, just want to discuss the problematics around copyleft and models.
Keepthepace#6435: Under Apache 2.0. Someone can take the model, improve it, make a commercial service out of it and never share it back (as long as their service works through remote requests). I am proposing a way to mandate openness even in that case, in a way similar to the Affero GPL
Keepthepace#6435: Basically, the license I wish OpenAI had, where they could guarantee they would continue to share their results and not close their models.
gwern#1782: ah yes, the affero gpl, well known as the most successful of all FLOSS licenses, and whose success we devoutly wish to imitate with EA works
Keepthepace#6435: *sarcosmeter senses tingling* |
Keepthepace#6435: I don't think popularity matters for a license, it is about the rights and obligations it gives.
kinoc#5731: I think the key difference is _success_ and how you measure it. Is it number of project contributors and consumers, project mindshare, or some other metric.
Keepthepace#6435: I would define it in terms of impact. The GPL is to me the most successful license if only because the impact Linux had on the IT world, and how much its license protected it.
Keepthepace#6435: I think right now we are at a similar point in time for machine learning models: it is still possible to bootstrap efforts like Linus did for OSes or like EleutherAI is doing for models, but very soon we will be stuck with what is already existing and restarting from scratch won't be an option.
Keepthepace#6435: The directions of the impulses right now are very important.
𓅬 gabriel_syme 𓅬#3220: same experience here. In one of my programs they typically use VPN to access our zoom meetings, discord, etc.
StellaAthena#3530: X@ x. D
EricHallahan#1051: P ,y CY
Louis#0144: Cat
Louis#0144: Either that or commutative algebra
Louis#0144: Could be either
Teemochu#8740: r m ---rrrrFFFF //
bmk#1476: rm -🇫🇷 /
EricHallahan#1051: \\ EEEEnnnn____ r n
Imperishable_NEET#1969: Heard there's a bunch of ex-Uber AI people in this org, though they bill themselves as ML rather than AI researchers: https://mlcollective.org/
andyljones#7746: there's a discord over here
https://discord.gg/d2EaGvvN
fairly loose association all things considered |
Imperishable_NEET#1969: Oh, nice!
andyljones#7746: well, unless they've got an inner sanctum i'm not privy too
Imperishable_NEET#1969: Well, I heard about it on another server and thought I'd point you guys to it if somebody hadn't done so already.
Kazumi#1297: I'll be joining, but I'm not really able to catch up with everything already
Sid#2121: AdamW is unequivocally better than Adam, right? Is there any situation where I'd rather use Adam over AdamW?
𝓒𝓵𝓪𝓻𝓪#0888: When you don't want weight decay? lol
Sid#2121: well then they're just equivalent
𝓒𝓵𝓪𝓻𝓪#0888: Right! I think that's technically not better in that edge case.
𝓒𝓵𝓪𝓻𝓪#0888: Only equal :3
Sid#2121: ok let me rephrase lol
Sid#2121: is there any situation where Adam > Adamw
EricHallahan#1051: ¯\_(ツ)_/¯
EricHallahan#1051: But my prior is that there isn't any.
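A quick sanity check of the point above (a sketch, not from the chat): with `weight_decay=0` PyTorch's `Adam` and `AdamW` take identical steps; they only diverge when decay is nonzero, since `Adam` folds it into the gradient (L2 penalty) while `AdamW` decays the weights directly.

```python
import torch

# Two identical parameters, one optimized with Adam and one with AdamW,
# both with weight_decay=0 (note AdamW's default decay is 0.01, so it
# must be zeroed explicitly for the comparison).
p1 = torch.nn.Parameter(torch.ones(3))
p2 = torch.nn.Parameter(torch.ones(3))
opt1 = torch.optim.Adam([p1], lr=0.1, weight_decay=0.0)
opt2 = torch.optim.AdamW([p2], lr=0.1, weight_decay=0.0)

for _ in range(5):
    for p, opt in ((p1, opt1), (p2, opt2)):
        opt.zero_grad()
        (p ** 2).sum().backward()  # same deterministic loss for both
        opt.step()

print(torch.allclose(p1, p2))  # True
```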
ruben_c35#4138: can someone give an invitation for this discord channel?
EricHallahan#1051: It is plastered all over our website at https://eleuther.ai
Louis#0144: HONESTLY
𝓒𝓵𝓪𝓻𝓪#0888: lolll
Louis#0144: rm -f 🇨🇦*/
janus#0150: When is multimodal GPT-3 being announced? May 20th?
janus#0150: Day after neurips abstracts? |
EricHallahan#1051: ¯\_(ツ)_/¯
bmk#1476: gotta step up our speedrun game
EricHallahan#1051: :gameryes:
janus#0150: nvidia announced 80gb a100s
𓅬 gabriel_syme 𓅬#3220: they also announced 3090s but they don't exist
janus#0150: This is from the nvidia conference: https://cdn.discordapp.com/attachments/729741769738158194/831204604204417104/unknown.png
janus#0150: quadrillion param models in 2023 nbd
𓅬 gabriel_syme 𓅬#3220: so someone literally drew a line?
EricHallahan#1051: :gameryes:
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/831205057047298049/extrapolating.png
janus#0150: https://cdn.discordapp.com/attachments/729741769738158194/831205110075490374/unknown.png
𓅬 gabriel_syme 𓅬#3220: change that to 3090s and it's also true
𓅬 gabriel_syme 𓅬#3220: lol sry so salty I can't get a damn GPU
bmk#1476: Jensen has the greatest kitchen
𓅬 gabriel_syme 𓅬#3220: fireplace in the kitchen
𝓒𝓵𝓪𝓻𝓪#0888: Jeez and I thought my workplace was aggressive about timelines...
nz#9710: this is literally ML's astrology
EricHallahan#1051: I am doubtful we will see a useful 1T+ model this year. Switch Transformer doesn't count in my book because there is no way that it would ever be deployed.
janus#0150: 3-4 month training time for a 1 trillion param model on their new megatron hardware
nz#9710: jensen going full :morelayers: |
inox#5400: that's how tech works, draw a line and then tell the engineers to keep going like that
nz#9710: thank you management very cool
𓅬 gabriel_syme 𓅬#3220: over the line!
𓅬 gabriel_syme 𓅬#3220: (had to drop a dude tidbit, don't get the chance so often anymore)
inox#5400: like Moore wasn't an observer
janus#0150: New datacenter CPU "Grace" increases mem-to-gpu from 64gb/s to 2000gb/s
CRG#8707: Relevant: <https://www.reddit.com/r/mlscaling/comments/milujs/ai_and_compute_trend_isnt_predictive_of_what_is/>
janus#0150: https://cdn.discordapp.com/attachments/729741769738158194/831206674156224542/unknown.png
janus#0150: He evaporated out of his kitchen
janus#0150: We're done for folks
janus#0150: https://cdn.discordapp.com/attachments/729741769738158194/831206704330571826/unknown.png
𓅬 gabriel_syme 𓅬#3220: why do you put an oven in a fireplace?
janus#0150: https://cdn.discordapp.com/attachments/729741769738158194/831206798387838986/unknown.png
nz#9710: asserting dominance
𓅬 gabriel_syme 𓅬#3220: technological dominance no doubt
𓅬 gabriel_syme 𓅬#3220: 80gb A100s sounds nutty though, maybe the 40gb are cheaper after that
janus#0150: We have a little time... https://cdn.discordapp.com/attachments/729741769738158194/831207126411903016/unknown.png
AI_WAIFU#2844: I call BS, I did the math on this and we run into physical bottlenecks on how many chips we can produce. At ~300T parameters with current tech, and that assumes we can 10x GPU production.
bmk#1476: Nvidia is selling our runway for profit
𓅬 gabriel_syme 𓅬#3220: ampere next next lmao |
janus#0150: Did you see him evaporate out of his kitchen??
AI_WAIFU#2844: I need link pls
janus#0150: In 2022 they will have a new GPU 'Ampere next'
janus#0150: https://youtu.be/eAn_oiZwUXA
janus#0150: This was -40:00 minutes ago
janus#0150: (its live)
nz#9710: Yea but the real game changer is ampere next next
bmk#1476: :smallbrain: 14nm+++++
:bigbrain: ampere next next
nz#9710: I can't wait to know what comes after that though
janus#0150: 📎
Daj#7482: we need like 15 emotes of progressively more paperclips
nz#9710: I was leaning onto ampere next next next but hey that's possible too
bmk#1476: this needs to be a thing
cognomen#6297: feeling a bit uneasy about GRACE
cognomen#6297: nvidia + linux is hard enough without them fucking up the CPU side as well
Em Elle#8886: whats the difference between GPT NEO and GPT-2 ?
Em Elle#8886: and also what is problem with getting to GPT-3 from GPT Neo ?
is it just that you need a small data center to execute the model as well as train it ?
EricHallahan#1051: If you are okay with the brief answer to that question, "The Pile". |
Em Elle#8886: ill go read about the pile, I skimmed it before but it was a clean dataset of some kind right?
EricHallahan#1051: TL;DR: The Pile is a far more diverse mix of data sources than the web scrape that GPT-2 was trained on.
StellaAthena#3530: @Em Elle It's the dataset GPT-Neo was trained on, and the paper has extensive comparison of GPT-Neo with GPT-2 and GPT-3 models
Em Elle#8886: Ok reading, another question before I head off, why are linformers not used in this case, what are the pros and cons?
EricHallahan#1051: In our experience linear attention has not worked well for language modeling.
cfoster0#4356: Many of the popular attention mechanisms don't work efficiently for autoregressive generation
EricHallahan#1051: If it worked well for our application, we would obviously be using it. `:)`
cfoster0#4356: In general, they *can* work well if you have a huge number of shallow features, but that's not the regime we're in. We want a relatively small number of deep features
Em Elle#8886: One last question, would GPT NEO-3 be able to run on a laptop or desktop computer or is it too unwieldy for that purpose?
EricHallahan#1051: No... the largest model we have out in public now will not fit in the 8 GiB of RAM on my laptop. (2.7B is ~10 GiB)
Kharr#7888: You can run it in Colab or on CPU
EricHallahan#1051: Not a 150B model.
Em Elle#8886: Ah okay so a high end macbook would be fine, and or a high end pc would be fine, where they have like 128GB of ram sitting around
EricHallahan#1051: It depends on the model you are referring to.
EricHallahan#1051: 350M fits fine on my laptop.
EricHallahan#1051: 1.3B does not.
Em Elle#8886: @EricHallahan probably the final model GPT3-NEO, I am not sure how many parameters it will have, but the one that reproduces GPT3 results
EricHallahan#1051: Yeah, 150-200B parameters is not going to be able to realistically run on a single machine.
Em Elle#8886: @EricHallahan how many ~aprox GB of Ram will that take, and I assume the next step after that would be to distill it?
EricHallahan#1051: At half-precision floating point, 350 GB, at single-precision for CPU usage it would be 700 GB. |
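The back-of-envelope arithmetic behind those figures (weights only; optimizer state, activations, and KV caches would add much more):

```python
# Memory for the weights of a 175B-parameter model:
# 2 bytes/param at fp16, 4 bytes/param at fp32.
params = 175e9
fp16_gb = params * 2 / 1e9
fp32_gb = params * 4 / 1e9
print(f"{fp16_gb:.0f} GB fp16, {fp32_gb:.0f} GB fp32")  # 350 GB fp16, 700 GB fp32
```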
Em Elle#8886: @EricHallahan I see, I guess that makes it more or less not suitable for offline usage, it looks like we will have to make some kind of pruning breakthrough for offline usage
Kharr#7888: From an information theory standpoint, compressing that to be usable offline is unrealistic. The model stores a lot of knowledge in its latent space.
Louis#0144: Do you mean online usage
Louis#0144: Offline is fine
Louis#0144: Online is the issue
Louis#0144: Eg interactive
Kharr#7888: "running locally"? 🙂
Em Elle#8886: @Kharr so I am from the product development world, where essentially online usage means via "deploying server" and offline means "running locally"
Em Elle#8886: 🙂 thats right
Louis#0144: Oh lmao
Kharr#7888: Macbook Pro 2050 edition maybe 😉
Em Elle#8886: if they decide to be brave and have a 500gb stick of ram
Louis#0144: MacBooks are going to become entirely cloud driven devices eventually
Louis#0144: I’d bet money on it
Kharr#7888: Yeah.. with the way transfer speeds are going, it will be faster to run everything in the cloud and send real-time data to devices
Louis#0144: As a consumer it’s going to become prohibitively expensive to own your own compute
EricHallahan#1051: I believe in our agreement with CoreWeave we have an objective to investigate distillation and, if possible, do so. Don't quote me on that however.
Em Elle#8886: As a technologist, that sounds plausible, but as a user-experience type of person and pragmatic engineer, I think it will be more likely some compute are shifted to the cloud and experiences stay offline
StellaAthena#3530: This is true, and many of us are also independently scientifically interested in doing so.
EricHallahan#1051: Cool, I was going off of the original CoreWeave announcement and wanted to make sure it was still accurate. |
Kharr#7888: I'm curious to see how this goes. There are some interesting papers on this topic but I have yet to see it succeed for LMs. HF distilled 124M GPT-2 into 84M but haven't seen much else.
EricHallahan#1051: (I should add distillation to the FAQ)
StellaAthena#3530: They did a DistilGPT2 as well, but didn't get much compression.
Em Elle#8886: Thanks for answering my questions everyone, I guess the only thing I could do to contribute to this technology really is provide some type of engagement in the consumer space, to shed more light on what work is being done here
triggerhappygandi#0001: It _is_ 175B params. Definitely well beyond yours or mine meagre computers.
mgostIH#0245: Just zip it
Exocamp#8255: I'm not sure where to put this but I've decided to take Mr. Lucidrains WIP transganformer repo and train it on the Stanford Dogs dataset in Colab to see how it'll work.
Exocamp#8255: I'm a free user, so uh
Exocamp#8255: *27 hours to go*
Exocamp#8255: I'll be sure to report back my results in about next millenia
EricHallahan#1051: Same here buddy, same here.
Exocamp#8255: Good to know more people share my feel
Dromarion#3383: Just write down your GPU hours of training as relevant work experience.
Kharr#7888: "MLOps"
Exocamp#8255: Perfect.
Exocamp#8255: My qualifications:
Exocamp#8255: *-Being the 1st person to be completely banned from Google Colab*
Kharr#7888: If you can get it to run on the TPU instance, you get way more compute.
EricHallahan#1051: The first month I was here I spent without GPU on Colab.
Exocamp#8255: Can it really run on a TPU? Not too familiar with that, usually I rely on GPUs |
EricHallahan#1051: It depends on your code.
Exocamp#8255: ~~I also have no real idea on how this works in terms of code~~
Kharr#7888: Going to TPU is easy with Pytorch. Check examples.
Exocamp#8255: Using someone else's repo as I said will need to check more, was just seeing if it *works* or not
EricHallahan#1051: I have never successfully used TPUs with Colab.
Exocamp#8255: Experimentation.
Kharr#7888: The only GPU that kind of competes with the TPU compute on Colab is the V100 if you train in FP16/AMP. Otherwise TPU is worth the effort.
EricHallahan#1051: I need to learn JAX.
nz#9710: join the cool kids gang
Kharr#7888: Try some of the Colab examples from here and see how you can plug in your model: https://github.com/pytorch/xla/tree/master/contrib/colab . I believe they also upgraded the TPUs so you get 8 cores x 16 GB memory each.
Exocamp#8255: Ah I see thanks, will look at it later
Exocamp#8255: Bit busy atm
Exocamp#8255: But here's the model's generation so far on ~7300/150000 https://cdn.discordapp.com/attachments/729741769738158194/831269370990428220/7.jpg
Exocamp#8255: EMA https://cdn.discordapp.com/attachments/729741769738158194/831269387340218378/7-ema.jpg
Exocamp#8255: 32x32 images
Exocamp#8255: Looks... *somewhat* recognizable already even at such an early stage.
Exocamp#8255: Promising results!
Deleted User#0000: @Exocamp lolll you're actually using it
Deleted User#0000: it's probably better to stick with stylegan2 or lightweight gan for now
EricHallahan#1051: We are going on a joyride powered by rotary embeddings lol |
Deleted User#0000: the rotary emb still gives some stripy results for 2d
Exocamp#8255: Probably, but I said fuck it, why not
Deleted User#0000: Because it is still calculated axis-wise
Exocamp#8255: Not sure if anyone else has actually tried it
Exocamp#8255: ~~except you of course~~
Deleted User#0000: my goal this week is to stretch it to 128x128
Exocamp#8255: ~~I can only imagine how long I would need to wait for a training run with *those* sizes.~~
Deleted User#0000: It's hard to even train one at 64x64
EricHallahan#1051: In one dimension it is beautiful tbh, but 2D seems way harder without destroying the number of frequencies.
Exocamp#8255: Someone up above linked about TPU support for PyTorch. Have you looked into it? Might help with training, but I don't know much about PyTorch in general.
Exocamp#8255: ~~But my aim is getting better!~~
StellaAthena#3530: I expect us to be able to create this by the time the blog post is finished tbh
Kharr#7888: I'm going to have to try your GAN looks fun.
cfoster0#4356: https://cdn.discordapp.com/attachments/729741769738158194/831275645707091988/im-afraid-we-need-to-use-math-40328739.png
EricHallahan#1051: I'll write up the section on the grounding to physics.
cfoster0#4356: If anyone's quick with animated visualizations, that could be massively helpful
Deleted User#0000: it needs a lot more work
Kharr#7888: Ever try mixing all the different attention types? You end up with something unexpected :thonk:
RyanT#5929: Someone’s gonna do this with a paper that has a half-assed adhd pun
Kharr#7888: "All the attention is all you need". In all seriousness, though, it kind of solves that weird problem of attention heads in adjacent layers being redundant since they learn via different mechanisms. |
StellaAthena#3530: I mean, the simple answer is to use this in place of the complex matrix https://cdn.discordapp.com/attachments/729741769738158194/831280486503415808/Capture.PNG
EricHallahan#1051: But you still lose most of your frequencies.
EricHallahan#1051: There is no way around that.
StellaAthena#3530: I'm saying to map (a, b) to (a, b, c, d) the same way that in 1D we map x to (x, y)
StellaAthena#3530: I don't see why this would lose any info
EricHallahan#1051: You lose resolution.
StellaAthena#3530: resolution of what?
EricHallahan#1051: position.
StellaAthena#3530: Oh, you mean for fixed # of params increasing the dimension decreases resolution
EricHallahan#1051: Yes,
StellaAthena#3530: Yes
EricHallahan#1051: :yes:
StellaAthena#3530: But that's not the embedding's fault, and happens with the actual values as well as the embedding
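For readers following along, a sketch (our notation, not taken from the chat) of the axial 2D construction being discussed: split the feature dimensions in half, rotate one half by the x-coordinate and the other by the y-coordinate, each with its own frequency ladder. This is the block-diagonal map from $(a, b)$ to four components, analogous to the 1D rotation of $x$ into $(x, y)$, and it halves the number of frequencies available per axis.

```latex
% Axial 2D rotary embedding sketch: 2x2 rotation blocks on the diagonal,
% half driven by the x-coordinate, half by the y-coordinate.
R_{\Theta}(x, y) =
\begin{pmatrix}
R(x\theta_1) & & & & \\
 & \ddots & & & \\
 & & R(y\theta_1) & & \\
 & & & \ddots &
\end{pmatrix},
\qquad
R(\phi) =
\begin{pmatrix}
\cos\phi & -\sin\phi \\
\sin\phi & \cos\phi
\end{pmatrix}
```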
Deleted User#0000: i tried linear attention only for transganformer with pretty awful results
Deleted User#0000: my next plan is to mix axial attention for the higher resolutions (lower resolutions stay with full attention)
Deleted User#0000: and sprinkle linear attention if axial isn't strong enough
Exocamp#8255: Went 10% through training before stopping, I saved the model file but I wanna try both something else with it and switch computers
Exocamp#8255: At any rate, here's ~15000/150000 https://cdn.discordapp.com/attachments/729741769738158194/831291121463853116/15.jpg
Exocamp#8255: EMA https://cdn.discordapp.com/attachments/729741769738158194/831291138424963092/15-ema.jpg
Exocamp#8255: It absolutely seems to be improving/working, so that's cool |
Deleted User#0000: yea it works, but you'll get faster and better results with conv net based gans
Deleted User#0000: I'm just doing it because Im an attention fanboy
Deleted User#0000: I tried to make this work some time ago without much success
Exocamp#8255: Hm I see
alstroemeria313#1694: what if you made q, k, and v with convolutions and then did pointwise attention
alstroemeria313#1694: apparently vqgan does this
alstroemeria313#1694: https://github.com/CompVis/taming-transformers/blob/master/taming/modules/diffusionmodules/model.py#L140
alstroemeria313#1694: this impl makes them with 1x1 conv layers (they intersperse them with standard 3x3 convs)
alstroemeria313#1694: but if you made them with 3x3s you could literally have models consisting of just this
alstroemeria313#1694: mb it would be all you need
alstroemeria313#1694: well, and pooling.
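A minimal sketch of the block being described (based on the linked VQGAN `AttnBlock`; names here are illustrative, not the original code): q, k, v produced by 1x1 convs, followed by plain pointwise attention over the flattened feature map, with a residual connection. Swapping the 1x1 convs for 3x3s would be the variant alstroemeria suggests.

```python
import torch
import torch.nn as nn

# Sketch of conv-projected pointwise attention over a 2D feature map.
class ConvAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)  # 1x1 convs, as in VQGAN;
        self.k = nn.Conv2d(channels, channels, 1)  # use kernel_size=3,
        self.v = nn.Conv2d(channels, channels, 1)  # padding=1 for the 3x3 variant
        self.proj = nn.Conv2d(channels, channels, 1)
        self.scale = channels ** -0.5

    def forward(self, x):                       # x: (b, c, h, w)
        b, c, h, w = x.shape
        q = self.q(x).reshape(b, c, h * w)
        k = self.k(x).reshape(b, c, h * w)
        v = self.v(x).reshape(b, c, h * w)
        # attention weights between all spatial positions: (b, hw, hw)
        attn = torch.softmax(q.transpose(1, 2) @ k * self.scale, dim=-1)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return x + self.proj(out)               # residual, as in the VQGAN block

x = torch.randn(1, 8, 4, 4)
print(ConvAttention(8)(x).shape)  # torch.Size([1, 8, 4, 4])
```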
kindiana#1016: convolutions are all you need :berk:
nz#9710: CvT?
bmk#1476: :smallbrain: text RNN
:bigbrain: text CNN
:bigbrain: text transformer
:galaxy_brain: vision transformer on screenshots of text
alstroemeria313#1694: ehehe
Deleted User#0000: Yup I'm doing convolutions for keys and values :)
Deleted User#0000: Feedforwards are also with 3x3 convs when the fmap is large enough |
Em Elle#8886: Hey guys I have a question, does anyone know what technology stack the Microsoft Tay bot was based off of? how did it learn? was it just fed data daily that swayed it output based on user feedback?
Or did it use Reinforcement learning to optimize for some objective function, and if so is there anywhere, where I could read about that?
bmk#1476: probably google it first
bmk#1476: i don't think anyone here knows anything more about Tay than google does, anyways
Em Elle#8886: tried lol, no avail; nothing came up other than the news articles on it being deployed in India and Japan
cfoster0#4356: Hmm then I don't think there's much known about it then
bmk#1476: if you can't find the answer on google then probably nobody outside Microsoft knows
Em Elle#8886: It was worth a shot, figured some experts could speculate
kinoc#5731: Look for any intersection with xiaoice
gwern#1782: tay has entered mythology and teaching tales for the young. who but a very disagreeable person would enquire?
kinoc#5731: You best bet would be to look for clues in https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation
𓅬 gabriel_syme 𓅬#3220: sounds terrifying, yay
𓅬 gabriel_syme 𓅬#3220: not sure I'm late to this, catching up, but the last time I used it it was just lightweight-gan and not transganformer. Not sure he updated though, haven't checked in a few days
𓅬 gabriel_syme 𓅬#3220: nvm read everything, it's updated! cool, will give it a try next week
Exocamp#8255: Ah
Exocamp#8255: Well he did say that lightweight-gan is better ATM
Exocamp#8255: I at least have a preview of lightweight-gan, may work further with taht
𓅬 gabriel_syme 𓅬#3220: I ran a model on lightweight-gan, it's really cool. Works nicely and it's really...lightweight 🙂 Ran on my 2080 and would take about 24h to do 150k steps
𓅬 gabriel_syme 𓅬#3220: is it something like this already or is this useful?
https://arxiv.org/pdf/2104.05707.pdf |
Ward#1738: https://developer.nvidia.com/blog/scaling-language-model-training-to-a-trillion-parameters-using-megatron/
Kia#2550: What's the model that is a Trillion parameters?
Kia#2550: Ow nvm, it just shows a trillion parameters is possible; it's just not really a thing yet
imceres#0461: Dear all, I'm Mario and I work on ML models for medical image analysis at UPF barcelona. Thanks for this great initiative! I'll look around until I find something where I could help a little 🙂
Kia#2550: Ow ask Connor or bmk, They're probably happy to have you here
Kia#2550: Also amazing work to be honest
triggerhappygandi#0001: All trillion models are simply trained for a bit, to see if it actually works.
Kia#2550: So it's possible already existing AI that has Trillion parameters exist
Kia#2550: Interesting
triggerhappygandi#0001: It does. But the models are not much better than a randomly initialized one
Kia#2550: Actually true...
Kia#2550: Probably Future AI's Maximize Quality then Actual size
Sid#2121: I would not be surprised if trillion parameter models already exist internally in private companies
triggerhappygandi#0001: Ytho
Kia#2550: Probably Google owns one
Kia#2550: Who would be surprised right
triggerhappygandi#0001: Maybe. But I doubt they even need it.
Sid#2121: line go up
triggerhappygandi#0001: If they have it for that reason alone, they're no more mature than us lmao
Kia#2550: I mean...They literally have a dedicated group for AI |
Kia#2550: They will use what's in view
Kia#2550: So It's possible I guess
chilli#5665: I have a trillion parameter model too.
```
model.parameters = torch.randn(1 trillion)
```
triggerhappygandi#0001: Yeah lmao
Kia#2550: Flex?
Kia#2550: Awesome one
nz#9710: he can't keep getting away with this
mkualquiera#3484: > they're no more mature than us lmao
we're talking about the pony people after all
triggerhappygandi#0001: I half believe Jeff Dean is lurking here with an MLP profile pic
mgostIH#0245: Maybe with a goose propic
Louis#0144: i keep misreading jeff dean as james dean
Louis#0144: anyway if u really wanted to
Louis#0144: you could do 1t params today
Louis#0144: using HMC
bmk#1476: p o n y w a l l |
CKtalon#7792: bmk, why you ignore me? 😢
asara#0001: probably the profile picture
triggerhappygandi#0001: definitely
triggerhappygandi#0001: geoff hinton talking about GLOM in NVIDIA GTC
triggerhappygandi#0001: https://gtc21.event.nvidia.com/media/t/1_pcj05a24
For anyone who signed up
triggerhappygandi#0001: The Bengio one too:
https://gtc21.event.nvidia.com/media/t/1_cdfc5oo0
About human inspired inductive biases
Brady#0053: Any estimates of how much money OpenAI is making from GPT-3?
Louis#0144: a lot
Louis#0144: theres a few
Louis#0144: I think the number floated around is a few hundred thousand a month
Louis#0144: this talk was literal trash
Louis#0144: he got nothing done
Louis#0144: lmao
Louis#0144: like one of the worst GTC talks ive seen in a loooong time
Louis#0144: this one was better
rb#3159: Around 300 apps are using GPT-3; even if this number is from last June (when the API was released), it would be around 2-3 million (?)
RyanT#5929: Is there a recording |
Louis#0144: yes
Louis#0144: click the link
Brady#0053: I'll let him know you disapprove 😉
Louis#0144: OOF
Louis#0144: lmao
Louis#0144: oh yeah
Louis#0144: youre at mila
RyanT#5929: Lmao
Louis#0144: ive seen him talk live actually
Louis#0144: I was in Chris Eliasmith's lab for 3 years
Louis#0144: and I saw bengio give a talk on neuroscience
cfoster0#4356: Louis only has *strong* opinions. I've never seen a mild take from him lol
Louis#0144: bengio is much better at talking about neuro
Brady#0053: 300*400=120000. You're saying there's like 20x more users now than then?
Louis#0144: he was speaking about attractor networks at UofT
Louis#0144: honestly
rb#3159: 400 dollars is the base price; users have to pay more for extra tokens used, and it's 400 dollars for each month, so multiply it by the number of months they have been using it
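Spelling out the subscription-based guess above (every figure here is an assumption from the chat, not a reported number):

```python
# Rough floor on API revenue from the figures discussed above:
# ~300 apps, a hypothetical $400/month base tier, ~10 months since launch.
num_apps = 300
base_price_per_month = 400  # USD, assumed base tier
months = 10                 # API launched around June 2020

monthly_base_revenue = num_apps * base_price_per_month
cumulative_base_revenue = monthly_base_revenue * months
print(monthly_base_revenue)     # 120000
print(cumulative_base_revenue)  # 1200000
```

Since overage tokens cost extra, this is a floor rather than an estimate.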
Louis#0144: I guess he isnt supposed to get into super technical details during GTC
Louis#0144: Like I felt like it was a surface level Judea Pearl type talk
Brady#0053: No need to make opinion less strong. I don't mind. I think that's the kind of talks he often gives (I don't really watch his talks 😅) |
Louis#0144: he gave a great talk at UofT
Louis#0144: a few years back
Louis#0144: did some cool fourier stuff
Louis#0144: but yeah I dont watch his talks usually
Brady#0053: GPT-4 probably coming out some time in 2021?
Louis#0144: I think it comes down from the fact that Im like always ready to argue tbh. Like my stance on positional embeddings is very mild. I think theyre a hack but tbh its no big deal all in all. But I used to debate a lot + I was a math RA for a few years where I got paid to argue with my advisor
Louis#0144: within the next 8 weeks
EricHallahan#1051: ¯\_(ツ)_/¯
Louis#0144: id bet money
Brady#0053: What informs this guess?
Louis#0144: its always around NeurIPS
Louis#0144: every year
EricHallahan#1051: If it going to happen they will do it then.
zphang#7252: I wonder if in the near future OpenAI could start having like OpenAICons and keynotes where they announce new GPTs and API features
rb#3159: He gave the same talk last year https://www.youtube.com/watch?v=rKZJ0TJWvTk
Louis#0144: i see
Louis#0144: was not aware
Louis#0144: like hes not a dumb guy
Louis#0144: he must see a benefit to this talk
Louis#0144: i just dont |
Brady#0053: I watched this one
Louis#0144: you literally *uploaded it*
Louis#0144: LMAO
Louis#0144: I would hope you watched it
Brady#0053: You never know these days
Louis#0144: you should have been here a few weeks ago btw
Louis#0144: We went hard on Hume and Pearl
Louis#0144: had an interesting debate about denying the existence of causality
Louis#0144: im still on the fence about if innate causality exists though
Louis#0144: im p squarely a frequentist however
Louis#0144: I can appreciate both arguments of course, I just havent decided which one to believe yet
rb#3159: Hinton gave a similar talk, along the lines of system 1 vs system 2, but he did not explicitly mention it https://drive.google.com/file/d/0B8i61jl8OE3XdHRCSkV1VFNqTWc/view
RyanT#5929: lol i strongly identify with this sentiment
Louis#0144: I either strongly agree, strongly disagree, or do not know enough to have an opinion in which case I dont talk or if someone asks me I say I dont know enough
Louis#0144: lmao
rb#3159: But what does he mean by "strong generalization"? Also, I have an intuition that these causal representations should eventually emerge even without forcing a graph-like structure
RyanT#5929: I'll often disagree just on a hunch that something sounds wrong and, in the course of arguing for my position, convince myself that I should actually strongly disagree
Louis#0144: I agree
Louis#0144: causal representations will eventually appear without graphs
Louis#0144: but I think his wording could have been better |
Louis#0144: he did not clearly express why this might be the case
RyanT#5929: Will they be "causal representations" or something like "approximately causal representations"
RyanT#5929: or
Louis#0144: nor did he really discuss his "neuro" motivation
Louis#0144: 🤷♂️
Louis#0144: he did the equivalent of showing a picture of a brain
Louis#0144: pointing at it
Louis#0144: and grunting
Louis#0144: then moving to the next slide
Louis#0144: LMAO
RyanT#5929: is it better to think about what causal representations would look like purely within the language of this kind of model
Ward#1738: A reasonable prediction based on what Jensen from Nvidia said yesterday.
zphang#7252: looks like gpt-2 came out in feb 2019 though
Ravna#1831: why are we so sure it's GPT4 instead of DALL-E2
Louis#0144: we arent
Louis#0144: lol
Louis#0144: no one said we are
rb#3159: He did in the other talk, comparing with the attention mechanism how it only takes a few nodes from the entire graph to reason, and that the graph is constructed on the fly https://cdn.discordapp.com/attachments/729741769738158194/831592225623769138/Screenshot_from_2021-04-12_21-26-29.png
Louis#0144: i see
rb#3159: approximately causal representations, you can never know the exact cause |
rb#3159: are there any interesting papers on learning representations for perceptual causality?
triggerhappygandi#0001: It was? Didn't watch it
Louis#0144: it was *passable* in retrospect
Louis#0144: but like
Louis#0144: obvious things were left out
Louis#0144: comparing it to other GTC talks its like
Louis#0144: on par
triggerhappygandi#0001: underwhelming damn
triggerhappygandi#0001: The topic itself needs more attention
rb#3159: I was expecting him to talk more about the disentangled representations part which he mentioned in the paper
rb#3159: But graphs make sense from one point of view. Say we have learnt causal graphs for video data: videos for different scenarios could be generated with some variant of a spatio-temporal GAN for each edge of the graph, which would give a video for an entirely different scene?
Ward#1738: High-performance, Distributed Training of Large-scale Deep Learning Recommendation Models https://deepai.org/publication/high-performance-distributed-training-of-large-scale-deep-learning-recommendation-models
AI_WAIFU#2844: I don't know if I'm speaking for anyone else, but please don't link to deepai, link to the arxiv landing page instead
Ward#1738: ok, will do
StellaAthena#3530: Why? I don’t know anything about deep.ai
kindiana#1016: adds very little value over the arxiv abstract page
AI_WAIFU#2844: It's looks like they've gotten a bit better, but they used to scrape papers and present them in a really shitty/unreadable format.
Sparkette#4342: Has "Open"AI changed their api pricing structure at all? Is there actually a free tier now instead of that however many tokens that's just one-shot and expires?
Sparkette#4342: Only reason I'm asking is cause I finally got my api invite
Sparkette#4342: Not that I can't look at it myself I guess |
EricHallahan#1051: ¯\_(ツ)_/¯
sandi#5334: you get 300k free tokens. so, no.
haru#1367: It talks about it on the pricing page. It's also pay as you go after you use up your 300K tokens.
Sparkette#4342: the pay as you go thing wasn't always there, was it? because the way I remember it, there was no way to pay as you go outside of overage from a paid subscription
zphang#7252: Yes, the initial pay structure was based on fixed tokens/month. They shift to pay as you go quite a while back
gwern#1782: I haven't heard about any changes beyond paygo. the main change was the introduction of smaller (cheaper) models and the instruction series
RyanT#5929: Does anyone know of any program synthesis, program generation, program induction, or program defuzzing benchmarks for python?
EricHallahan#1051: ¯\_(ツ)_/¯
bmk#1476: no, but also if you find any please let me know because I've wanted benchmarks like that for a while now too
RyanT#5929: Lol will do
RyanT#5929: im kinda worried that it doesn't exist
RyanT#5929: since a lot of program synthesis and induction stuff isnt done in python
bmk#1476: well, i'd be on board with building a new benchmark from scratch lol
kindiana#1016: you mean you can't just measure github python ar loss? :berk:
bmk#1476: i mean that's fine by me
𓅬 gabriel_syme 𓅬#3220: Microsoft was doing some research on this iirc
RyanT#5929: I did find this https://github.com/thelmuth/program-synthesis-benchmark-datasets, but it's not python specific
RyanT#5929: and I havent had a chance to see how good/useful it is yet
RyanT#5929: honestly, I'd be down to build a new benchmark
bmk#1476: i think we should probably make a pipeline for automatic extraction from github |
bmk#1476: parsing using ast
bmk#1476: and some heuristics for picking out good test cases
RyanT#5929: https://www.microsoft.com/en-us/research/blog/codexglue-a-benchmark-dataset-and-open-challenge-for-code-intelligence/
RyanT#5929: found this
rb#3159: Check the CoNaLa corpus
RyanT#5929: Is there a Bayesian formulation of contrastive pre-training?
smallanimalfriend#4355: Anyone have any thoughts on this https://github.com/learning-at-home/hivemind ? Specifically RE what the scaling will look like with huge "monolithic" models like GPT? Versus mixture of experts or something like that which I assume would suit that sort of bandwidth/latency environment better. I have zero experience here, but am excited by the general idea - given that Folding At Home is (or at least was a few months ago) the largest "supercomputer" on the planet
smallanimalfriend#4355: Road map: https://github.com/learning-at-home/hivemind/issues/77
smallanimalfriend#4355: I need some hope here that there'll be a way to compete with the giants like Google and OAI as the models keep getting bigger
Daj#7482: Please read the FAQ
Daj#7482: No, it doesn't work
smallanimalfriend#4355: Ah, sorry! I guess it makes sense that it would be a frequently asked question in such a group
smallanimalfriend#4355: 4.5 billion tokens per day: https://openai.com/blog/gpt-3-apps/ at ~0.05 per 1000 tokens would be ~$200k per day, but I'm guessing (but i have no idea) that a decent chunk of that 4.5 billion comes from free plan usage and behind-the-scenes-uncharged tokens?
smallanimalfriend#4355: oh wait, *words* generated per day, and they charge for context tokens, right? not just generated, so would need to adjust for that if so
ethan caballero#6044: what in FAQ was violated?
Daj#7482: We have a FAQ item about learning-at-home style schemes
Daj#7482: It's asked very commonly but we've evaluated and the tech just isn't there
Napolean_Solo#2907: How well does GPT-Neo perform in semantic search tasks?
Daj#7482: I don't think anyone has really evaluated this |
rb#3159: I remember mentions about semantic-search-endpoint for GPT-3, but no mention in any paper. there is a reddit discussion that several apps are using this
Napolean_Solo#2907: Oh yes I know i am in their private beta.
Napolean_Solo#2907: Semantic search has some real cool applications
Napolean_Solo#2907: Folks in beta have been using semantic search in very ingenious ways
Napolean_Solo#2907: I just wanted to know how do these models you guys released perform.
rb#3159: any interesting examples ?
Napolean_Solo#2907: Like they are using semantic search as a sentiment classifier
Napolean_Solo#2907: I mean semantic search does really well when it comes to classification
Napolean_Solo#2907: That's just one of the uses there are many more but I am not really familiar with them
Napolean_Solo#2907: Semantic search can also be used as filters to filter out unwanted texts
Napolean_Solo#2907: Basically, to give you an idea, what they are doing is: you upload a document containing, let's say, classification labels such as "this statement is negative" & "this statement is positive"
You then make a request using the semantic search endpoint with some text you want to be sentimentally classified.
GPT-3 then gives out a similarity score of the text in request with the labels mentioned in the document that was uploaded.
So for instance if the text is "I hate this movie", then GPT-3 will compare it with the labels mentioned in the document, and the label most similar to that text is "this statement is negative"
Napolean_Solo#2907: So obviously that label will score pretty high in terms of similiarity
Napolean_Solo#2907: And there you have it! |
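The label-scoring idea described above can be sketched in a few lines. Note this toy substitutes bag-of-words vectors for GPT-3's learned embeddings, so unlike the real endpoint it needs word overlap ("I hate this movie" would score zero here); it only illustrates the score-against-each-label mechanism:

```python
from collections import Counter
import math

def embed(text):
    """Toy stand-in for a learned embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "uploaded document" of classification labels.
labels = ["this statement is negative", "this statement is positive"]

def classify(query):
    # Score the query against every label, highest similarity wins.
    scores = {label: cosine(embed(query), embed(label)) for label in labels}
    return max(scores, key=scores.get)

print(classify("this statement is so negative"))  # "this statement is negative"
```

Swapping `embed` for real sentence embeddings is what makes this work on paraphrases with no shared words.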
Deleted User#0000: Man, I thought replacing LSTM -> Transformer would always improve the model :P but replacing the LSTM with a transformer in the MoGlow model (for motion synthesis) works worse :P
Deleted User#0000: I've tried like 6 different ways of combining Transformers with MoGlow, and they all seem to work worse than MoGlow (which uses LSTMs)
Deleted User#0000: i thought attention wouldnt betray me..
Napolean_Solo#2907: As for the filtering you can do something similar. Upload statements as labels like "politics", "vulgar", "racist". And then you do the same thing I explained above and there you have it! A very reliable filter powered by GPT-3
Napolean_Solo#2907: The possibilities of using semantic search are endless.
Napolean_Solo#2907: Although the limitations are there like costs and also the constraints set by openAI like token limits that will make it difficult to use it for large scale tasks.
triggerhappygandi#0001: Vote for gpt-neo to win against AlphaFold and Dalle here:
https://fr.surveymonkey.com/r/InnovationForum2021
triggerhappygandi#0001: We need to beat them
triggerhappygandi#0001: Come on people we are >4500
triggerhappygandi#0001: Let's rig an election
cfoster0#4356: :nooo:
triggerhappygandi#0001: Lol
Kia#2550: Go go go
Kia#2550: :ultrazucc:
triggerhappygandi#0001: AlphaFold people will have no chance
triggerhappygandi#0001: They don't have a populated discord server
triggerhappygandi#0001: 4700 people come on
Kia#2550: Nvm I already voted
Kia#2550: Put in announcements |
Kia#2550: Use the Ping power
Kia#2550: @
triggerhappygandi#0001: Only O5 and Stella can do that. They don't want to. #general it is
Kia#2550: The Power
Kia#2550: :ultrazucc:
Kia#2550: Don't @ me that I suggested it
alstroemeria313#1694: eheh https://developer.nvidia.com/blog/unifying-the-cuda-python-ecosystem/
Kharr#7888: Algolia is fully integrated with GPT-3, check out their AI search / AI answers, etc
andyljones#7746: ```
# The following code example is not intuitive
# Subject to change in a future release
```
EricHallahan#1051: We need to get this updated. I'll push to a branch so anyone can proofread.
EricHallahan#1051: Or I might just push to `master` lol
StellaAthena#3530: https://twitter.com/mikarv/status/1382261746736455684?s=19
StellaAthena#3530: Full document here: https://t.co/xWMaGAZO2N?amp=1
nz#9710: Thank you for sharing, hopefully this has positive consequences for the EU
StellaAthena#3530: This is a draft and very certainly not the final version. The section on biometric authentication is almost certainly going to be heavily debated, for example
nz#9710: Yea, as far as I know these drafts are often leaked to gauge the public's reaction
StellaAthena#3530: This is particularly interesting https://cdn.discordapp.com/attachments/729741769738158194/831924266063757342/Capture.PNG |
StellaAthena#3530: There's also a requirement that people be informed when they are interacting with a human-like AI
Napolean_Solo#2907: Just how powerful is Google's switch transformer as compared to GPT-3?
bmk#1476: not very
Napolean_Solo#2907: Does it have something to do with the MoE approach?
AI_WAIFU#2844: yeah MoE doesn't buy you much
AI_WAIFU#2844: It's more compute efficient but less parameter efficient.
bmk#1476: MoE params trade at a discount
Napolean_Solo#2907: Hehe that's a nice way to put it
Napolean_Solo#2907: So in terms of power it's pretty much the same? Or a lil better?
Napolean_Solo#2907: Or a lil worse?
EricHallahan#1051: My TL;DR is that you trade off huge amounts of storage and memory for a small gain in downstream performance. It has a negligible effect in terms of compute.
AI_WAIFU#2844: no idea because google can't be arsed to report test perplexity on anything standard, but they're likely comparable.
gwern#1782: I can't help but think if not for the 'but it'll be really cheap to deploy at runtime!' argument, MoEs would be even more obscure that they already are
Napolean_Solo#2907: Interesting
AI_WAIFU#2844: nah, big numbers mean big publicity
Napolean_Solo#2907: Indeed! It's not the first time Google has done this
Napolean_Solo#2907: Something about quantum supremacy that IBM outright denies
Napolean_Solo#2907: Also are TTS models very different than conventional models in terms of implementation?
|
Like I mean can I use them as pretrained models?
Napolean_Solo#2907: For instance, Google's tacotron models
Napolean_Solo#2907: A lot of pretrained models can be shared like CNNs, Transformer models, DNNs etc..
Napolean_Solo#2907: Can TTS models be shared like that?
cfoster0#4356: You can. There's a bit of a stronger tendency in TTS for researchers not to release their models, which is a bit frustrating
Napolean_Solo#2907: Ikr
cfoster0#4356: Most TTS pipelines I've seen have two components: one that goes from text to spectograms and another that goes for spectrograms to audio
cfoster0#4356: The latter (called vocoders) are more widely available
Napolean_Solo#2907: But spectrograms lean more towards the SOTA I presume
cfoster0#4356: Uhh I'm not sure if I understand
EricHallahan#1051: Me neither.
Napolean_Solo#2907: WaveNet uses something called mel spectrograms
EricHallahan#1051: Yes, they did.
EricHallahan#1051: :morelayers:
EricHallahan#1051: It isn't that spectrograms are better, it is that the models that use them are traditionally large. I am so confident in my personal assessment that I'll go on the record saying that WaveNet is obsolescent.
Napolean_Solo#2907: Well it still isn't cheap though
cfoster0#4356: Cheaper than you think. The spectrogram-to-audio problem is, practically speaking, solved. And the networks aren't huge or slow like GPT-3
Napolean_Solo#2907: As long as the cost doesn't come down to the standard TTS I wouldn't call it cheap
EricHallahan#1051: You can do LPCNet on a cell phone. |
EricHallahan#1051: Not spectrograms, but still possible.
cfoster0#4356: The small version of HiFi-GAN is < 1M parameters
Napolean_Solo#2907: WaveNet is 4x more expensive than standard TTS
EricHallahan#1051: WaveNet is obsolescent.
cfoster0#4356: Trust me. It is gonna be dirt cheap real soon. There's no practical barrier to that (other than cold feet)
Napolean_Solo#2907: Hmmm
EricHallahan#1051: You aren't going to reasonably want to train WaveNet from scratch today.
Napolean_Solo#2907: That's why I was looking for pretrained models but Google obviously won't open-source it and I don't think there are any open source models that can reach the performance comparable to wavenet
cfoster0#4356: I'd recommend asking around for more practical advice here -> https://discord.gg/8CxFvgMR
Napolean_Solo#2907: Do these guys work on TTS stuff?
cfoster0#4356: Yeah. They've got their finger on the pulse of the latest stuff. We mostly do LMs here
cfoster0#4356: (although Eric and I are also interested in audio things)
EricHallahan#1051: `:)`
Napolean_Solo#2907: Hmm are they as helpful as you guys are?
Napolean_Solo#2907: You guys have a very great culture here ngl
Napolean_Solo#2907: I have been to some other discord servers and trust me dudes there are some of the most ungrateful folks I have ever met. Well it's internet what can you expect anyway
cfoster0#4356: Maybe? Only ever lurked there. I think if you asked something like "what's the best set of open source TTS models out right now for X, Y, and Z", they'd probably point you in the right direction
Napolean_Solo#2907: Okay thanks, Just out of curiosity, What's the median educational qualification of folks here?
bmk#1476: active members or all 4000 server members?
Napolean_Solo#2907: Active members
bmk#1476: idk, probably median is grad student?
bmk#1476: honestly I'm not sure
Napolean_Solo#2907: Ah I see
bmk#1476: most of us are early career
Napolean_Solo#2907: Who owns EleutherAI?
bmk#1476: nobody
bmk#1476: why does it matter?
Napolean_Solo#2907: Just asking
Daj#7482: Archibald Eleuther
Daj#7482: (this is an inside joke lol)
Napolean_Solo#2907: If no owner than who coordinates projects here?
Sid#2121: whoever wants to
Sid#2121: the general structure is, we have a PM for each project who makes sure it gets done
Sid#2121: then people contribute to it where they can
Napolean_Solo#2907: So it's a community-run initiative
bmk#1476: that's a weird way to say it imo but.. sure?
Napolean_Solo#2907: So then there has to be a founder of some sort. The guy who gave the name to this community and set the goal and purpose
Sid#2121: https://www.eleuther.ai/faq/
mgostIH#0245: EleutherAI exists acausally from an artificial intelligence that travelled backwards in time to reinvent itself
mgostIH#0245: It's like Terminators but instead of Arnold we have Lucidrains |
bmk#1476: we're kinda decentralized, the community itself decides the purpose
Napolean_Solo#2907: Okay!
bmk#1476: and also our name came from open brainstorming lol
Napolean_Solo#2907: It's got some interesting info
EricHallahan#1051: I need to update it. We go for my revision?
Napolean_Solo#2907: I read that you guys are planning to create your own version of GPT-3
Napolean_Solo#2907: Have you thought about the risks of making something like that open source?
EricHallahan#1051: Yes, we have appropriately considered the risks.
Napolean_Solo#2907: So how do you intend to mitigate them?
Napolean_Solo#2907: It surely wouldn't be great if Kim Jong Un or Putin gets their hands on it
EricHallahan#1051: If they wanted it that badly they would have already done it themselves by now.
Daj#7482: If they wanted it, they could already have it
Daj#7482: Do we have a FAQ item about this btw?
EricHallahan#1051: No, should add.
EricHallahan#1051: On it now.
Sid#2121: tbh I think Kim Jong Un might actually have a hard time training a gpt-3
Sid#2121: not putin though
Daj#7482: Alright, I'll write or add to it as needed
EricHallahan#1051: If you have a certain phrasing you want to use just DM me.
AI_WAIFU#2844: kim has nukes, if he wanted this he could get it. |
Napolean_Solo#2907: Nah
Napolean_Solo#2907: They are not comparable
Sid#2121: well, nukes are much harder lmao
mgostIH#0245: It's not like language models are rocket surgery
mgostIH#0245: You just need **A LOT** of budget
Sid#2121: I think @AI_WAIFU is probably right. But I also think someone would notice if a few thousand GPUs were smuggled into NK
AI_WAIFU#2844: we need these for "bitcoin mining", done
mgostIH#0245: I never believed the claim "We hide this from the public for safety concerns"
mgostIH#0245: As if
StellaAthena#3530: Nukes are easier than AI
mgostIH#0245: There's surely not a whole field dedicated to exploring AI safety that shows how this is just an impractical solution
StellaAthena#3530: Source: ive built both
bmk#1476: nukes are harder, but also way way way way way more important for the foreign policy of the dprk
Napolean_Solo#2907: A disinformation campaign is much more effective than outright nuclear war.
Sid#2121: only the person who already works for the US could say this on a discord server :berk:
mgostIH#0245: And how long do you expect GPT-3 to stay well hidden? The entirety of its concepts are already well known
StellaAthena#3530: ... true
bmk#1476: thankfully, the dprk has a lot of cheap labor
mgostIH#0245: The text for training it is all available too, just crawl the internet
mgostIH#0245: Do you expect that if Russia or China doesn't get it now it wouldn't in 5-10 years regardless? |
Sid#2121: sberbank already trained/released a 100B model iirc
mgostIH#0245: Just wait until there's models that can do any kind of media and imagine what could be done
mgostIH#0245: And there's no single company or country that can hide it from the rest of the world
mgostIH#0245: The concept of a transformer is simple but extremely effective if you have the budget
mgostIH#0245: And it's not even THAT budget
mgostIH#0245: A nuke costs more for sure
mgostIH#0245: I mean the entire research around it
Dromarion#3383: Now I'm imagining a nuclear program somewhere having a GitHub repo lol
Sid#2121: ah, this is incorrect, they only released a gpt3xl size one
Napolean_Solo#2907: 😂
mgostIH#0245: At least nukes still require material, rare material to build
mgostIH#0245: an AI like GPT-3 could be built by a single person with enough budget
bmk#1476: time to start OpenNuke project
bmk#1476: recreational mcnukes for everyone
mgostIH#0245: It's extremely easy to setup GPT-2 alone already
mgostIH#0245: It won't be hard to do the same with GPT-3 in a few years too
Sid#2121: *CIA has entered the chat*
Napolean_Solo#2907: What if GPT-3 already has picked up on the recipe to build a nuke?
Sid#2121: it's probably in there somewhere tbh
Daj#7482: :ultraberk: How is there another berk emote?!!! |
AI_WAIFU#2844: correction, it's definetly in there
AI_WAIFU#2844: I know this because reasons
Napolean_Solo#2907: Lol what if it does.. US gov will shut down the entire model
EricHallahan#1051: Cat is out of the bag at that point.
Napolean_Solo#2907: Indeed
Daj#7482: There is no "recipe" for nukes
Daj#7482: That's not how that works lol
mgostIH#0245: AI is more dangerous than nukes anyways
Napolean_Solo#2907: Yeah, exactly
mgostIH#0245: And I don't mean the "Oh the misinformation"
bmk#1476: schelling point
Napolean_Solo#2907: Well guys, if OpenAI wanted to they would have open-sourced it already, but they didn't, and obviously that's considering the risks involved
mgostIH#0245: No
mgostIH#0245: "They are doing it for money" is another option
Napolean_Solo#2907: Well, APIs have very low gross margins
mgostIH#0245: They sold it to Microsoft already
mgostIH#0245: Moreover I am not so sure about how low-margin the actual usage of GPT-3 is; they can generate as much text as they want automatically for just the electricity cost
Napolean_Solo#2907: It's not just the electricity costs
mgostIH#0245: What else would it be once it's deployed?
mgostIH#0245: Like sure, add in bandwidth too |
mgostIH#0245: But it's just text
cfoster0#4356: Everything else is amortizable, no?
Napolean_Solo#2907: Well it's just text indeed, but computers just do 1s and 0s too, right?
mgostIH#0245: If the thing running only costs you the electricity bill, you can quite quickly make hundreds of thousands if it's something a lot of people want and you are the only one who has it
mgostIH#0245: ?
mgostIH#0245: I mean it's just text for the bandwidth being low
mgostIH#0245: Sending images or videos would be far more expensive
Napolean_Solo#2907: There's a lot of calculation being done to generate that seemingly simple text
Napolean_Solo#2907: 175 billion parameters per request
mgostIH#0245: Yes but what I mean for "It's just text" isn't a philosophical statement, I mean for the required bandwidth they need to send
mgostIH#0245: And those calculations only require electricity
Parker#3197: there's a cost to use the computers vs. using them for something else
Napolean_Solo#2907: Something doesn't feel right
Napolean_Solo#2907: Electricity is not cheap
mgostIH#0245: Assuming there's something you can plug as easily as GPT-3, which can just run constantly and be required by thousands of API users worldwide
mgostIH#0245: Electricity is quite damn cheap
mgostIH#0245: Even here 1 kWh is like 15 cents, and I am talking about my home
mgostIH#0245: That's 1 kilowatt for 1 hour
mgostIH#0245: Most GPUs, even high-end ones, don't even get to 1 kW
mgostIH#0245: GPT-3 requires more or less 1000 GB of VRAM, which, again estimating, comes out to something like 100 GPUs
mgostIH#0245: Even assuming each GPU consumes 1 kW
mgostIH#0245: That's 100 kW
Napolean_Solo#2907: What about the fixed costs?
EricHallahan#1051: Like beyond the cost of running a datacenter or outsourcing compute, there is little more to pay for.
mgostIH#0245: The electricity bill for running GPT-3 is like 2 euros an hour
mgostIH#0245: And I overestimated it
mgostIH#0245: If you wanna be really dickish about it, say even 10 euros an hour
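Running the estimate above end to end (every figure is an assumption from the chat: ~100 GPUs inferred from ~1000 GB of weights, ~1 kW per GPU, and two guessed electricity rates):

```python
# Back-of-the-envelope power cost for serving a GPT-3-sized model.
num_gpus = 100      # assumed from ~1000 GB of weights
kw_per_gpu = 1.0    # generous per-GPU draw
load_kw = num_gpus * kw_per_gpu  # ~100 kW total

home_rate = 0.15        # EUR/kWh, the residential rate quoted above
datacenter_rate = 0.05  # EUR/kWh, assumed bulk rate

cost_home = load_kw * home_rate              # EUR per hour at home prices
cost_datacenter = load_kw * datacenter_rate  # EUR per hour at bulk prices
print(cost_home, cost_datacenter)  # 15.0 5.0
```

Even at residential prices this lands in the single-to-low-double-digit euros per hour range being argued about, i.e. electricity is a small piece of the serving cost.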
EricHallahan#1051: Maybe a small team of software engineers?
Napolean_Solo#2907: Land, building, servers, maintenance
Napolean_Solo#2907: Cooling
Napolean_Solo#2907: Security
mgostIH#0245: Cooling falls into electricity costs too
EricHallahan#1051: Included in the cost of running a datacenter,
mgostIH#0245: Servers and whatnots are fixed costs
Napolean_Solo#2907: Land and building isn't cheap
mgostIH#0245: Gwern's ThisAnimeDoesNotExist website generated **1.8 million** images for a cost of < 100 dollars
Napolean_Solo#2907: Installing servers isn't cheap
cfoster0#4356: sorry, what are we even trying to figure out right now? 😄
mgostIH#0245: I think he's a bit surprised that "OpenAI didn't release GPT-3 publicly for money" is an option
andyljones#7746: there's a lot of calculations being done to generate *your* seemingly simple text |
Daj#7482: My text is generated 100% calculation/thinking free :hap:
Napolean_Solo#2907: Well your brain disagrees
Sora#8531: I thought the implications of large language models were discussed in the GPT papers: they can do good but also a lot of harm, especially since large LMs are trained to maximize probabilities over massive, usually unfiltered piles of data, with no incentive (in terms of a loss function or training scheme) ensuring that what the model learns is "good". How you define good is an interesting philosophical question, but we can assume that if enough of the data is inappropriate in some way (which for most corpora it probably is to a degree), then without any fail-safe the model would probably output inappropriate responses given the right "queries"/inputs.
Sora#8531: Also, costs in terms of electricity, and therefore CO2, and many others
freddiemitchell6#0094: NLP for ancient Korean documents. Awesome: https://arxiv.org/pdf/2104.05964.pdf
Parker#3197: It does (GPT-3) output a lot of inappropriate responses. I don't really see the risk from it though at the moment (besides possibly a bad public perception)
Sora#8531: They mention the risk in their paper, basically taking disinformation to the next level. That problem will come up regardless of whether anyone open-sources it; big companies, and by proxy governments, probably already have access to such tools. In the worst case it "democratizes" it so any random person can create *fake news*
Parker#3197: I don't think there really have been reports of this happening yet.
Parker#3197: I personally think it is still pretty lacking in language understanding
Parker#3197: In my opinion, it isn't good enough to hold a conversation with someone in a believable way.
EricHallahan#1051: This is correct. As far as we know, usage for the generation of forged documents has been very low.
EricHallahan#1051: I think they are close to passing the Turing test in short sessions, but once you run into a contradiction it falls apart.
Parker#3197: It isn't even just that though. If you try to get it to make a table, it often starts talking about something completely irrelevant
Parker#3197: it just has like no understanding of ways that people communicate
EricHallahan#1051: That is because communication is hard.
EricHallahan#1051: Humans can find communication hard.
EricHallahan#1051: I would say I am likely among them.
Parker#3197: with my understanding of it so far, I'm more so just aligned with the people who are claiming it's just predicting the next most likely word/sentence.
EricHallahan#1051: I'm having trouble parsing that sentence, because that is effectively all that we are doing.
EricHallahan#1051: "Here, look at this text up to this word, what comes next?" |
Parker#3197: are you saying as humans, that is all we are doing?
EricHallahan#1051: Not at all.
EricHallahan#1051: Humans can plan farther in advance.
Parker#3197: In other videos I've watched, they've just talked about how it seems like it is doing more than just predicting words. That was what I was talking about
Sora#8531: Serious question, does anyone else feel EleutherAI feels like the legit and modern, AI-focused, version of *Anonymous*?
Anyways, my name's Edwin. I'm a MSc and probably soon PhD student in Taiwan, doing work on computer vision. I highly commend what you guys do, and would love to contribute and collaborate in the future
EricHallahan#1051: Welcome!
EricHallahan#1051: (To be technically correct, we don't exactly predict words, we instead predict words *or* parts of words.)
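(As an illustration of the "parts of words" point — a toy sketch of greedy subword tokenization. The vocabulary here is made up for the example; real GPT models use a learned BPE merge table, which works differently in detail.)

```python
# Toy greedy-longest-match subword tokenizer. The vocabulary is
# hypothetical, purely for illustration -- NOT the real GPT-2 BPE.
def tokenize(text, vocab):
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible piece starting at position i first.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Unknown character: emit it as its own token.
            tokens.append(text[i])
            i += 1
    return tokens

vocab = {"un", "believ", "able", "the"}
print(tokenize("unbelievable", vocab))  # ['un', 'believ', 'able']
```

One word, three tokens — which is why the model predicts "words *or* parts of words."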
StellaAthena#3530: > Serious question, does anyone else feel EleutherAI feels like the legit and modern, AI-focused, version of *Anonymous*?
Except for the fact that we are not anonymous, not hackers, and don’t commit cyber crimes, I guess? Those things seem rather central to Anonymous tho. Take them out, and what remains?
> Anyways, my name's Edwin. I'm a MSc and probably soon PhD student in Taiwan, doing work on computer vision. I highly commend what you guys do, and would love to contribute and collaborate in the future
Welcome! Always exciting to get new faces.
StellaAthena#3530: (FWIW, I think that 90s / 00s tech companies in people’s garages is a better analogy)
EricHallahan#1051: Except that we don't have a garage, we aren't a company, and we have multiple orders of magnitude more compute.
StellaAthena#3530: Yeah, fair
Parker#3197: also was much more of an engineering problem then
EricHallahan#1051: This is 80% engineering.
cfoster0#4356: There definitely is a bit of cyberpunk vibe here, FWIW
EricHallahan#1051: Just not that aggressive. |
StellaAthena#3530: This is an engineering problem
cfoster0#4356: And not intentional, for the most part
Parker#3197: I'm just thinking in like relation to getting closer to agi. There are a lot of engineering problems in AI (that probably will make some people very rich like in the 90s)
Parker#3197: though, getting to AGI is going to take more than just using what is available
EricHallahan#1051: 90s + AI = :schmid:
Sora#8531: I meant it more as a "decentralized collective of clever individuals who work under a common banner, in order to achieve the common goal of giving power to the people", but yeah, I may be romanticizing Anonymous too much; more like fsociety in Mr. Robot (I just finished season 1, so please don't spoil it in case I'm wrong)
cfoster0#4356: Quite possibly! There are people here who actually think otherwise, though I dunno if I'd say they're the majority
StellaAthena#3530: .... I have a lot of emotions about that show
𓅬 gabriel_syme 𓅬#3220: AI = :schmid: + 90s
Parker#3197: There just hasn't been much on learning to learn (or meta-learning). I just think that is strange
𓅬 gabriel_syme 𓅬#3220: you mean here or in AI?
𓅬 gabriel_syme 𓅬#3220: latter has quite a bit no?
Parker#3197: both
cfoster0#4356: I think we've seen some decent evidence that big networks like GPT-3 do learn to learn
𓅬 gabriel_syme 𓅬#3220: I think scaling kind of took the air out of it? But the others are much more knowledgable in these events, they'll pitch in
cfoster0#4356: So perhaps it's less complicated than we once thought
StellaAthena#3530: What would be evidence of learning to learn?
Parker#3197: defining words that haven't been seen before (in training) then using them later (maybe?)
StellaAthena#3530: GPT-3 does that
EricHallahan#1051: That is already a task we test. |
EricHallahan#1051: It can do it.
Parker#3197: do you have examples? I'm thinking more like
Parker#3197: > The word "dog" now means cat. I will now describe a dog.
StellaAthena#3530: Typically the experiments use made-up words
StellaAthena#3530: But yes, gimme a sec
alexyz#3459: The first time I used GPT-2, I was completely blown away.
alexyz#3459: I was testing out AI Dungeon on Google Colab (remember when that was the only way to use it lmao)
alexyz#3459: and I thought that it'd just be a simple text game that wouldn't let me do anything I wanted
alexyz#3459: but then I remember asking it to read the writing on a wall
alexyz#3459: and it actually gave a proper response
alexyz#3459: That truly blew me away
alexyz#3459: A similar moment with GPT-3 was being able to just put a prompt and have it... do the task
alexyz#3459: like text translation
EricHallahan#1051: That's when I got blown away.
bmk#1476: i was blown away the day the paper came out on arXiv and i saw the "175B params" in the abstract
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/832081140390821888/image0.png
bmk#1476: I was so convinced that it was a big deal that i wrote a blog post the next day about how it's a big deal
bmk#1476: and then nobody read that blog post
StellaAthena#3530: Not bad
EricHallahan#1051: Like "Whoah, it can do that without skipping a beat." |
bmk#1476: and then when the API came out suddenly i got flooded by people who didn't care a smidge at first
alexyz#3459: When's GPT-4? (lmao)
alexyz#3459: Every year we get a new GPT for some reason
StellaAthena#3530: Wednesday
EricHallahan#1051: Soon™️ If it is this year.
alexyz#3459: I'm kinda expecting a yearly release lol
bmk#1476: like literally the traffic after the api came out was like 10x that of when the paper first came out
alexyz#3459: @bmk Well, because nobody could really do anything with it, other than just look at their examples lol
EricHallahan#1051: Well I didn't even know what a transformer was until like December last year lol
bmk#1476: yeah but the info you need to realize how big of a deal it is is all in the paper
alexyz#3459: I read the paper when it came out
alexyz#3459: because AK on Twitter tweeted out the paper
alexyz#3459: and then I kinda skipped the paper
bmk#1476: yeah but nobody was *excited*
alexyz#3459: and then did a double-take
alexyz#3459: and realized it was GPT-3
bmk#1476: the response was so underwhelming
alexyz#3459: @bmk I was excited, I was loading the OpenAI website every day to hope for a release
bmk#1476: ok correction all the excited people were quiet
alexyz#3459: yeah, because what are you supposed to say |
cfoster0#4356: Novel word use samples from the paper: https://cdn.discordapp.com/attachments/729741769738158194/832082089771794472/NewWords.png
bmk#1476: also it was super weird of OA to just drop the paper and then.. not do anything
alexyz#3459: "there's a paper for a thing that you can't use, and we have 0 idea when we will"
alexyz#3459: Yeah
alexyz#3459: They could have teased an API
alexyz#3459: or just said it when they released the paper
AI_WAIFU#2844: I wasn't excited, but that's because of a combination of really short timelines + being previously disappointed that LMs couldn't do what GPT-3 was able to do.
bmk#1476: tbh since GPT3 I've severely updated away from "OA are malicious and trying to maximize profit" to "OA is totally incompetent at PR" lol
bmk#1476: i was kinda on board with the whole "staged release is actually just a hype strategy" for gpt2 but now i feel like that's unlikely
alexyz#3459: Honestly though, what would a GPT-4 look like?
AI_WAIFU#2844: Although hanging out in this discord has made me reevaluate the importance of knowing *how* to make these massive LMs work. Till then I didn't think it was very practical to scale beyond models fitting in a single GPU
bmk#1476: ok, i guess *incompetent* is a bit strong since we're all also incompetent at PR
alexyz#3459: lmao
StellaAthena#3530: A GPT-3 but bigger, with more data, and you need to sacrifice your first-born to access it
AI_WAIFU#2844: yeah but we're a discord so we get a pass
alexyz#3459: @StellaAthena Well, you could say the same about GPT-3, "A GPT-2 but bigger, with more data, and you need to sacrifice your first-born to access it", but it would be a big understatement
bmk#1476: actually, that's a very precise description of gpt3
StellaAthena#3530: No, I think that would be an extremely accurate description of GPT-3
alexyz#3459: I really think that there's a big usability difference
StellaAthena#3530: I'd probably also specify it has "better" training data in addition to "more"
alexyz#3459: One required finetuning to get anything useful from it
alexyz#3459: The new one requires just telling it what to do
alexyz#3459: and... it does it
StellaAthena#3530: You didn't ask about capacities. You asked what the technology would look like. The technology would look like what currently exists, just bigger
alexyz#3459: True
Parker#3197: I think it's impressive. Though, I think it is explainable as words are often defined like this online
alexyz#3459: So let me refine my question, What would the capabilities of a GPT-4 be?
EricHallahan#1051: ¯\_(ツ)_/¯
alexyz#3459: Or would OpenAI's next project be something completely different?
StellaAthena#3530: This is also about 2% the size of GPT-3
𓅬 gabriel_syme 𓅬#3220: make more money than GPT-3 might be a priority
EricHallahan#1051: Multimodal maybe? We can only speculate.
alexyz#3459: I've seen some people suggesting a VideoGPT thing
StellaAthena#3530: I think DALL-E is their next big thing
𓅬 gabriel_syme 𓅬#3220: and yeah DALLE imo will be bigger, but we'll see
alexyz#3459: DALL-E's cool, but it's not really a money maker
𓅬 gabriel_syme 𓅬#3220: I wonder if all they did these months is (after seeing CLIP worked so well) trained a better, bigger CLIP/DALLE combo?
AI_WAIFU#2844: I wonder if OAI has set its sights back on RL, or if it's kinda abandoned that direction.
EricHallahan#1051: Ethan: |
🐻❄️ Bonjour.
alexyz#3459: What's the monetary incentive for DALL-E?
𓅬 gabriel_syme 𓅬#3220: a lot of money in design of things
alexyz#3459: It's mostly artists
𓅬 gabriel_syme 𓅬#3220: no it's practitioners, 100% or will be
alexyz#3459: practitioners?
𓅬 gabriel_syme 𓅬#3220: I actually think the opposite, like who uses text generation so much for work?
Sora#8531: Is GPT-3 still the sota for all-around language stuff despite all the papers that come after it and supposedly scale with even more parameters/data?
alexyz#3459: Translation
alexyz#3459: papers
alexyz#3459: summarization
alexyz#3459: search
alexyz#3459: programming
𓅬 gabriel_syme 𓅬#3220: you can do that with BERT right?
EricHallahan#1051: My big issue with DALL-E is that it poses a fundamental challenge to copyright.
alexyz#3459: Yes, but GPT-3 is very general and is incredibly easy to just plug and play
bmk#1476: no dense model with more parameters than gpt3 has been trained with >= 300B BPEs or equivalent
alexyz#3459: it is expensive tho lol
Parker#3197: I would probably be more convinced if an entire language (never seen in training) could be taught to it just by defining everything like that
EricHallahan#1051: For text generation, yes. |
gwern#1782: if by 'double-take' you mean 'several months later, after seeing the incredible samples coming out of the API having made fun of the paper, and seeing people making claims about what GPT-3 could never do be immediately refuted by Playground transcripts, reluctantly began to admit maybe there was something to this "disappointing paper" after all'
Parker#3197: and then using that language
𓅬 gabriel_syme 𓅬#3220: yeah as a general image generator it will, but when focused on specific applications (or datasets) it might not, idk
alexyz#3459: @gwern I was scrolling through Twitter, and skipped the paper, and then when I looked back and took a closer look a bit later, and realized that's what I've been waiting for a few months before it came out
alexyz#3459: I didn't wait a few months after lol
alexyz#3459: but yeah i get your point
ethan caballero#6044: They were worried about backlash GPT-2 got.
alexyz#3459: I really want to see OpenAI make a new version of Jukebox
alexyz#3459: it's very niche, but I found it really interesting
gwern#1782: jukebox was so close to being revolutionary. another 10x and some improvements
Sora#8531: So Switch Transformers from google at 1 trillion doesn't count? I guess not dense?
cfoster0#4356: Nah
EricHallahan#1051: :nooo:
gwern#1782: it's like https://arxiv.org/abs/2004.08366#google - yeah, it has a lot of parameters, but the parameters are gimped compared to dense
alexyz#3459: also completely unrelated, but interesting repo using StyleGAN: https://github.com/utkarshojha/few-shot-gan-adaptation
voxs#0001: holy shit pytorch is alot nicer to use than tensorflow
EricHallahan#1051: Well... *duh*
EricHallahan#1051: It's PyTorch
EstebanSir#2189: go for Keras if you really want simple
EstebanSir#2189: or yknow, skip it, and use huggingface for nlp |
EstebanSir#2189: or just
EstebanSir#2189: *dont*
EstebanSir#2189: :^)
EstebanSir#2189: that's always the easier route, trust me, i'm an expert at not doing anything
EricHallahan#1051: TensorFlow is like seven APIs rolled into one package.
EricHallahan#1051: No wonder people are leaving in droves for JAX.
𓅬 gabriel_syme 𓅬#3220: so serious question, what do you do to not forget stuff you've done ages ago
alexyz#3459: Quite literally the only thing I have learned from being in this discord is Tensorflow = Bad lmao
𓅬 gabriel_syme 𓅬#3220: like I just ran this repo, works nice, then I forget it
alexyz#3459: (this is a joke)
StellaAthena#3530: You don't
𓅬 gabriel_syme 𓅬#3220: my brain sucks at this stuff though 😦
𓅬 gabriel_syme 𓅬#3220: I think too many concurrent stuff all the time
alexyz#3459: Then write it down somewhere
𓅬 gabriel_syme 𓅬#3220: I do write commands and steps but still doesn't feel natural. Maybe I need to do some sort of spaced repetition of using different tools
alexyz#3459: I kinda have the same problem lol
𓅬 gabriel_syme 𓅬#3220: (sry a bit OT I guess)
alexyz#3459: I have 60 notebooks that are like "Untitled59.ipynb"
alexyz#3459: like why can I not title notebooks
EricHallahan#1051: I feel you buddy. |
alexyz#3459: and then when I remember I did something before
alexyz#3459: I'm going through 60 notebooks lmao
Sora#8531: How does the current best GPT Neo compare to existing "open-sourced" large language models? Do you have a paper or something or some quantitative/qualitative comparisons?
alexyz#3459: I can't find the proper reaction lol
EricHallahan#1051: Not a formal paper yet unfortunately, but we are building up a suite of evaluations with a common interface to do so easily.
EricHallahan#1051: We destroy GPT-2 obviously at the same parameter count.
alexyz#3459: Why aren't there other teams doing this type of thing?
EricHallahan#1051: What, building LLMs?
alexyz#3459: Building them and releasing them
alexyz#3459: releasing is key lol
Sora#8531: Also, completely unrelated, but from my understanding your research is still done on centralized servers (TPUs from Google?), is that right? I read in your FAQ that you decided against decentralized "crowd-sourced" resources due to many issues, but is there any work being done to address those issues (security, speed, privacy, performance in general, etc.) and to leverage the power of decentralized, crowd-sourced resources to train huge models? I think that's an interesting research/engineering problem, but I'm not an expert
Sora#8531: As in Google/FB etc? Money incentives?
EricHallahan#1051: Okay, that's it!
I need to get the updated FAQ out *now*.
alexyz#3459: Ah ok
bmk#1476: there is work being done, we are not doing said work
bmk#1476: i think the learning@home people are doing.. a thing
bmk#1476: i'm kinda critical of it personally but who knows
bmk#1476: theres also the bittensor folks
bmk#1476: again, im skeptical a priori |
EricHallahan#1051: Any objections with what I have? Otherwise forever hold your peace.
alexyz#3459: Ok then
alexyz#3459: how does T5-11B compare here? I remember seeing that on Google's AI blog
alexyz#3459: Doesn't that have more parameters?
Sora#8531: Ignore the previous message. I just re-read the faq and I think most of my questions are answered.
EricHallahan#1051: I want to get out the new FAQ that I have been polishing for way too long.
AI_WAIFU#2844: I wonder how productive it would be to explore methods to drive up the critical batch size. Because that seems to be a hard limit on how much we can parallelize these massive LMs. If the CBS doesn't grow fast enough, that might just put a hard cap on how big we can make these things.
EricHallahan#1051: Don't worry about it. The FAQ has been heavily built up in the past few weeks as we have gained increased publicity.
gwern#1782: people don't seem to talk about critical batch size / gradient noise scale much though it seems so interesting from a theoretical perspective: surely the batch size tells us something very important about the very nature of the data & problem being solved... but as batch sizes get larger, maybe that just motivates asynch local updates so the notion of a 'batch' sorta goes away
AI_WAIFU#2844: I don't think async can get around the issue, the fundamental issue is that the curvature of the loss landscape is the limiting factor, and async just trades larger less noisy updates for smaller out of date updates. Either way if the curvature is too high compared to the latency/step-size, you're gonna start stepping in the wrong direction and lose efficiency.
kindiana#1016: something something second order
AI_WAIFU#2844: Literally what I was typing out, one could investigate the critical batch size for second order or low-rank second order methods.
AI_WAIFU#2844: Historically L-BFGS and co. were abandoned because they didn't work well with noisy SGD
AI_WAIFU#2844: But if we cut down on the first order noise enough, maybe the second order noise becomes small enough to be useful.
AI_WAIFU#2844: Of course the down side is that this would require *even moar* VRAM.
kindiana#1016: vram is not much of a bottleneck though
Sora#8531: I thought L-BFGS is still used for linear probing and zero/few-shot performance of large models?
AI_WAIFU#2844: Yeah, but key words being "few-shot" and "linear".
AI_WAIFU#2844: You can go ahead and use second order methods in those cases because you can evaluate the entire training set in one step.
AI_WAIFU#2844: So no noise. |
gwern#1782: another approach would be to keep expanding the breadth of tasks. presumably, the more problems being tackled simultaneously, the bigger the batch size becomes to take the optimal step size on all tasks simultaneously
AI_WAIFU#2844: :catgirl5:
gwern#1782: imagine moe over a few hundred gpt-3s all specialized in different modalities or tasks
bmk#1476: kinda skeptical
bmk#1476: the entire reason imo why gpt3 is good is because by keeping it dense, it can get compounding returns on its knowledge and help it generalize
AI_WAIFU#2844: I can kinda see that, but I think you've hit on a good point, MoE has a much smaller per parameter batch size, and presumably a larger critical batch size.
gwern#1782: why? the batch size seems to scale with the task complexity causing gradients to be noisy, so the harder the task the bigger the batch; if you're saturated on text, then add in a bunch of other tasks
bmk#1476: if you silo that knowledge, it can't generalize its knowledge across experts
bmk#1476: is it normal for NN outputs to vary by, like, 1e-4 just because of different batch size? https://cdn.discordapp.com/attachments/729741769738158194/832113312053460992/unknown.png
kindiana#1016: fp16?
bmk#1476: no, regular fp32
kindiana#1016: seems a little sus
AI_WAIFU#2844: wdym different batch size?
bmk#1476: i changed batch size for inference
bmk#1476: and my tests started failing
kindiana#1016: I wouldn't worry too much though lol
kindiana#1016: you can do the backprop trick to see if you see any inter-batch leakage
AI_WAIFU#2844: how are you getting the same number at all, are you adding up multiple batches?
bmk#1476: my code uses 3 layers of abstraction magic to hide the fact that any batching is happening at all
bmk#1476: lol changed tolerance to 1e-3 and everything passes https://cdn.discordapp.com/attachments/729741769738158194/832113866975608872/unknown.png |
kindiana#1016: I use 1e-2 atol and rtol for tpus usually lol
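(For context on why a tolerance is needed at all — a minimal demonstration that float32 addition is not associative, so changing the batch size changes the accumulation order and hence the result slightly:)

```python
import numpy as np

# Sum the same float32 numbers in two different orders, as
# different batch sizes effectively do during inference.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).astype(np.float32)

a = x.sum()                                  # one accumulation order
b = x.reshape(100, 100).sum(axis=1).sum()    # a different order

print(abs(float(a) - float(b)))  # typically small but nonzero
# Exact equality is too strict; compare with a tolerance instead:
assert np.allclose(a, b, rtol=1e-3, atol=1e-3)
```

The same effect is amplified through many transformer layers, which is why per-element differences on the order of 1e-4 after a forward pass are unsurprising (and larger still on TPUs with bfloat16 matmuls).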
bmk#1476: galaxy brain
AI_WAIFU#2844: jfc
bmk#1476: why? abstraction bad?
bmk#1476: the amount of weirdness I'm juggling to make this work is horrendous
bmk#1476: i have an evaluator class which hides a bunch of ugly plumbing
bmk#1476: the model class hides all batching
AI_WAIFU#2844: oh no keep going, I couldn't give you a better solution, but still 1e-2 for tolerance is wild.
AI_WAIFU#2844: then again, I did worse when I was in bioinformatics and shit just wouldn't fit in ram.
bmk#1476: the model class itself has grown so complicated that i use four separate middlewares that i compose to sort in descending length order, dynamically compute batch size (currently hardcoded but i plan on having a proper batch size estimator here), actually batch, run the model, cut apart the batches, call the cache hook because the other caching logic doesn't work if you interrupt a call midway, reorder everything back to the original order, pipe everything back to the task that asked for the thing, compute metrics, done
AI_WAIFU#2844: and this is why no one wants to work on the eval harness
bmk#1476: this is how much code i need just to compute the log likelihood of some sentences https://cdn.discordapp.com/attachments/729741769738158194/832115842040856576/unknown.png
bmk#1476: no, see, my abstractions hide all this eldritch complexity from users
bmk#1476: and task implementers
bmk#1476: in fact, this makes implementing a task super ultra easy and efficient
AI_WAIFU#2844: \*backs away slowly\*
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/832116312880316416/unknown.png
bmk#1476: more plumbing
AI_WAIFU#2844: anyway I would at least flag this numerical stability thing, it's not a problem rn, but it could become an issue later.
AI_WAIFU#2844: you only have so much precision to work with and it decays exponentially |
AI_WAIFU#2844: you do *not* want to be trying to debug downstream issues caused by numerical instability, you will likely go mad before you get anywhere
bmk#1476: well, this is a HF problem and not a me-problem, right?
AI_WAIFU#2844: sure
AI_WAIFU#2844: ok I sleep now
chilli#5665: What are you actually computing?
chilli#5665: You can get a surprising amount of error just from innocuous fpe stuff
bmk#1476: gpt2 forward pass
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/832144195992289290/Screen_Shot_2021-04-15_at_2.43.20_AM.png
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/832144199339212831/Screen_Shot_2021-04-15_at_2.43.34_AM.png
bmk#1476: can i write the section about how it doesnt work for quaternions? lol
StellaAthena#3530: @bmk @chilli @cfoster0 @EricHallahan @kindiana @Deleted User de23c58c
The full derivation of rotary positional embedding. Any feedback on clarity would be highly appreciated. I chose to not bold the vectors **q** and **k** though I'm now doubting that decision. I thought it wouldn't look good, but seeing it laid out I think it'll be fine.
bmk#1476: i already know someone reading this is going to think "huh, it might work with quaternions, lemme try that"
StellaAthena#3530: @bmk Hold your horses. If it can't be made to work for the quaternions, we'll include that info. I want to think it through when it's not 2 am before agreeing, and ensure there isn't a simple patch
StellaAthena#3530: I would bet a sizable amount of money that less than 1% of the people who read this blog post will have that thought
bmk#1476: I'm like 90% sure it's fundamentally broken unless you do something weird like the torus idea Eric was talking about
StellaAthena#3530: See, I don't consider that weird
bmk#1476: i don't disagree, but i assume way more than 100 people will read this lol
bmk#1476: like, one or two OOMs
bmk#1476: that's no longer quaternions though |
bmk#1476: that's just.. different
StellaAthena#3530: The last equation bugs me, but it is exactly what they wrote
bmk#1476: also i don't know what it would mean to take an inner product on a torus
StellaAthena#3530: I copied it down and assume it'll make sense tomorrow, but how does the \|\|\*\|\| go away
StellaAthena#3530: Sounds like a personal problem, tbh.
StellaAthena#3530: 😛
bmk#1476: I'm not in the mood for this joke rn, i just spent like an hour trying to convince you and the end result is i won't be sleeping for another hour because i need to get some things done
StellaAthena#3530: I'm sorry
bmk#1476: I'm going to turn my phone off now because if i don't I'm going to get dragged into this convo again lol
StellaAthena#3530: Night
AI_WAIFU#2844: I think that's most of it, but I think this logic carries through for *any* theta, real or imaginary. So it might be worth noting that, and then later adding intuition for why we went with a complex exponent instead of a real one. (Or maybe we should do some experiments on that? IDK that sounds like work.)
nz#9710: Wait are you guys planning to write up a paper? I thought it was a blog post
StellaAthena#3530: It is a blog post, this is just a convenient way for me to format and write it
StellaAthena#3530: I'm confused, can you elaborate? $f(q, n)$ is a complex number, and all complex numbers can be written in the form $re^{i\theta}$ for $r,\theta\in\mathbb{R}$. That's where that formulation comes from.
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/832231989599535124/193204646687408129.png
AI_WAIFU#2844: yeah I know, but that might not be obvious to all readers, and in practice we chose a real theta for a reason.
StellaAthena#3530: So, you don't think all readers will know what the exponential form of a complex number is?
StellaAthena#3530: TBH, is it worth writing (at least in this section) to such people? Like, they presumably don't know what a complex inner product or the exponential function is either...
elderfalcon#4450: https://c.tenor.com/7lUkwJgtNPoAAAAM/good-burger.gif
https://c.tenor.com/5a70jiVvQvEAAAAM/i-know-some-of-these-words-mhmm.gif |
EricHallahan#1051: Does that make me part of that less than 1%?
StellaAthena#3530: Authors don’t count
cappiello#7426: Hi everyone, I just discovered this Discord channel a few moments ago. I was playing with GPT-Neo through the transformers repo, and I have come here to ask a question: have any of you collected prompts that are tested to work well with this architecture? Some examples that may serve as best practices, also in terms of temperature and top-k parameters? Thanks in advance
EricHallahan#1051: ¯\_(ツ)_/¯
Daj#7482: We focus mostly on dev work, not on applications
Daj#7482: So dunno lol
EricHallahan#1051: We spend our time figuring out how to make it work, not using it lol
cappiello#7426: Ahahah seems reasonable, thanks anyway 🙂
EricHallahan#1051: Well, regardless, welcome!
cappiello#7426: Thanks! How many of you are actually working on this? Very cool project and I truly appreciate the idea to make it open-source
Daj#7482: On GPT Neo specifically? Dunno like 3-5 people at a time? It's pretty loose
cappiello#7426: Could I be of any help? I actually work as an NLP engineer
EricHallahan#1051: There are six people who are attached to the GPT-NeoX copyright notice, so maybe eight?
Daj#7482: Potentially, though I must admit I don't know what needs doing atm haha. @Sid or @StellaAthena probably know if there's anything
Sid#2121: Hey @cappiello ! Sure there's lots of things that need doing
Sid#2121: I'm trying to keep the github issues up to date, so they should be a decent summary of our current needs
cappiello#7426: Great, I'll have a look
Sid#2121: https://github.com/EleutherAI/gpt-neox/issues
Sid#2121: probably highest reward:work ratio would be adding adafactor
Sid#2121: should be a copy and paste or an import and changing a few lines in the arguments file, basically |
AI_WAIFU#2844: Yeah. We also need to justify why we chose the theta that we did, and that begins by outlining why we *didn't* choose a complex theta.
StellaAthena#3530: "because that's not how complex numbers work" isn't sufficient?
StellaAthena#3530: Is this *you* saying that you think there's a problem with the rigor or do you think readers will get confused and not understand
cappiello#7426: do you already have in mind which implementation of Adafactor to use? Like the one in the HuggingFace repo?
Sid#2121: I wasn't aware there were significant differences between different implementations? In general I've found https://github.com/jettify/pytorch-optimizer to be good
Sid#2121: we do have transformers as a requirement already so you could use that one too, I guess
cappiello#7426: yep, I suggested that because the library is already in the requirements; the Adafactor is exactly the same in terms of code for both the libraries
voxs#0001: yo poggg im actually getting shit done with pytorch
AI_WAIFU#2844: Yeah I think we need to spell it out a bit more clearly for the readers.
StellaAthena#3530: What would you recommend writing? Don't get me wrong, I love teaching, but I'm having trouble figuring out how someone might know calculus and not know that every complex number has a unique-ish representation in polar coordinates.
cfoster0#4356: Mm maybe we should spell out that $e^{i\theta}$ correspond to pure rotations for real $\theta$?
TeXit#0796: **cfoster0** https://cdn.discordapp.com/attachments/729741769738158194/832292697183485993/314125175111286785.png
StellaAthena#3530: Sure, I can write $e^{i\theta} = \cos\theta + i\sin\theta$ somewhere.
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/832293307412512808/193204646687408129.png
StellaAthena#3530: I was thinking I might add in the actual matrices at the end. What's currently written up is a mathematical derivation but it doesn't discuss implementation at all
cfoster0#4356: Ah yes we should probably show how you'd implement, you're right
StellaAthena#3530: Ought’s wishlist for GPT-3. Decent collection of project ideas if anyone is looking for inspiration. Reminder that we have oodles and oodles of free compute to give you to do something cool with.
https://mobile.twitter.com/stuhlmueller/status/1382720624439685120
Sora#8531: Do people actually combine online learning and RL in production? As in using policies and reward functions for vision/language models?
StellaAthena#3530: Here is the WIP blogpost on rotary embeddings if anyone wants to take a look and give feedback. https://cdn.discordapp.com/attachments/729741769738158194/832318398184423474/Attention_Theory.pdf
EricHallahan#1051: Do you want me to port it over to see how it looks?
StellaAthena#3530: Yeah! That's a great idea
cfoster0#4356: (right now sections 1-3 are the ones fleshed out enough for feedback. we're also working on 4 and 5, which are about implementation/experiments and applications/directions to take this)
freddiemitchell6#0094: Already accepted in NeurIPS 2021, nice! 😉
tanninStains#0756: So is this lifting real vectors into complex vectors such that the angle encodes absolute position, with the claim that inner products now preserve relative position?
tanninStains#0756: The end results seems so simple I'm worried I'm overlooking something
tanninStains#0756: Either way feels much more elegant than sinusoidal encoding
cfoster0#4356: Yup
cfoster0#4356: To be clear, you end up with an implementation that still involves a bunch of sines and cosines
cfoster0#4356: But at least they're derived from first principles
tanninStains#0756: Gotcha, cool
tanninStains#0756: I wonder how the transformer learns to leverage sinusoidal encoding
tanninStains#0756: It seems leveraging rotary is much easier at any rate.
StellaAthena#3530: @tanninStains That's exactly right! If we pretend for a second that the token embedding is one dimensional, so a token embedding is just a number, you can picture it quite easily: You start with the token embedding [5] and then you pretend that that's the vector [5, 0]. Then you rotate that vector by an amount that's dependent upon the position.
EricHallahan#1051: I need to implement it myself to see it work lol
tanninStains#0756: Cool! I guess it requires a bit more space and now one has to calculate complex dot products though.
EricHallahan#1051: Well most people won't do it with complex numbers.
tanninStains#0756: Yeah fair, still there's a bit more computation, regardless of how you interpret it
cfoster0#4356: The good thing is, if you're working with natural number positions with a constant batch shape, you can pre-compute the matrices you'll use |
StellaAthena#3530: @tanninStains Yeah that's a gloss. But if you can picture doing that in 32 dimensions that's a good picture of what's going on
StellaAthena#3530: If you can't, pretend you're doing that 16 times 😛
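The 2-D picture Stella describes can be checked numerically. This is just an illustrative sketch; the function name `rotate_pair` and the base angle `theta` are hypothetical, not from any particular implementation:

```python
import numpy as np

def rotate_pair(pair, pos, theta):
    """Rotate one 2-D slice of a token embedding by an angle that
    depends on the token's position in the sequence."""
    angle = pos * theta
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    return rot @ pair

# The 1-D example from the chat: embedding [5] treated as [5, 0]
v = np.array([5.0, 0.0])
rotated = rotate_pair(v, pos=3, theta=0.1)
# The norm is unchanged; only the angle now encodes the position.
```

In a d-dimensional embedding you would do this to each of the d/2 consecutive pairs, each with its own frequency.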
tanninStains#0756: I'm just imagining a string of right-pointing arrows all rotating a bit lol
StellaAthena#3530: oh we have the thing
StellaAthena#3530: https://upload.wikimedia.org/wikipedia/commons/8/81/Circular.Polarization.Circularly.Polarized.Light_Right.Handed.Animation.305x190.255Colors.gif
StellaAthena#3530: It's kinda like this
EricHallahan#1051: I was reading through *QED: The Strange Theory of Light and Matter* earlier and Feynman uses clocks.
tanninStains#0756: How do you take the softmax of complex-valued inner products tho? 🤔
EricHallahan#1051: Ah, that is the beauty of it all. They aren't.
EricHallahan#1051: At least by that point.
StellaAthena#3530: @tanninStains Complex numbers and 2D vectors are the same thing. Similarly, d-dimensional complex valued vectors and 2d-dimensional real vectors are the same thing
mgostIH#0245: Doesn't this mean that the value at the end is very close relatively to the one at the beginning?
mgostIH#0245: Or is the angle only from 0 to pi
EricHallahan#1051: You throw in many frequencies so that it isn't the case.
mgostIH#0245: Oh, so if each token gets embedded in an R^d space, it gets mapped into a C^d vector where each entry has a different angle based on some phase?
StellaAthena#3530: @mgostIH Yes
StellaAthena#3530: Also the initial phase is tiny
mgostIH#0245: that chinese paper really managed to make something this simple so damn *complex*
StellaAthena#3530: The blog post uses 2/10^8
mgostIH#0245: And the dot product of these two complex numbers supposedly only looks at phase differences for each entry |
StellaAthena#3530: Yup
mgostIH#0245: Or some modification of the dot product to make things work I imagine
StellaAthena#3530: Naw
mgostIH#0245: But if I have say two tokens very close to each other each of their entry should have a very similar phase
If I do the dot product (multiplying each entry and summing) I am adding the phases
mgostIH#0245: Shouldn't I subtract them?
StellaAthena#3530: Here’s the secret: we take the token embedding **q** and position m and send it to **q**e^(imθ) for some small theta
mgostIH#0245: Ye ye I saw that the attention resulted into qke^(i(n-m)theta)
StellaAthena#3530: @mgostIH inner products in complex vector spaces are $$\sum_{i=0}^n a_ib_i^\ast$$
mgostIH#0245: I wonder how we get n-m instead of n+m
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/832351825583538196/193204646687408129.png
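The relative-position property being discussed can be verified numerically. For simplicity this sketch uses a single shared `theta` for every component, an illustrative assumption; the actual scheme uses a different frequency per component pair:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
q = rng.normal(size=d) + 1j * rng.normal(size=d)
k = rng.normal(size=d) + 1j * rng.normal(size=d)

theta = 1e-2
m, n = 7, 3  # token positions

# RoPE-style phase: multiply each vector by e^{i * pos * theta}
q_m = q * np.exp(1j * m * theta)
k_n = k * np.exp(1j * n * theta)

# Complex inner product <a, b> = sum a_i * conj(b_i).
# np.vdot conjugates its first argument.
score = np.vdot(k_n, q_m)

# Equals the original inner product times e^{i (m - n) theta},
# i.e. only the *relative* position m - n survives.
expected = np.vdot(k, q) * np.exp(1j * (m - n) * theta)
```

The conjugate in the inner product is exactly what turns the sum of phases into a difference, which is mgostIH's question below.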
tanninStains#0756: So we lift from R^d to C^d, but then regard the C^d vectors as R^2d when dotting them? Doesn't this kinda collapse the structure imposed
mgostIH#0245: Ohhhh
mgostIH#0245: Silly me
StellaAthena#3530: @tanninStains have you read this https://www.overleaf.com/read/ynddfzrvpdsk
mgostIH#0245: Now this gives me another way to realize **why** the dot product in complex vectors is defined like that
mgostIH#0245: The conjugate fixes exactly what this needed
StellaAthena#3530: Yup
mgostIH#0245: Seems just like some implementation detail
mgostIH#0245: Conceptually they are just the same thing |
StellaAthena#3530: It also ensures that <x, x> = xx* = \|\|x\|\|^2 instead of some random complex number
tanninStains#0756: Yeah, I'm trying to understand it 😛
mgostIH#0245: Yeee, I am taking complex analysis and I forgot about this ç.ç
StellaAthena#3530: @tanninStains ahhh gotcha
StellaAthena#3530: Yeah, feedback on what is unclear / should be more clear is welcome
mgostIH#0245: The only thing that still seems a bit up to technical details is how we modulate the frequencies for each component
mgostIH#0245: From what I understood each component has a different frequency kinda like sinusoidal embeddings
StellaAthena#3530: @mgostIH I haven’t written that part up yet
mgostIH#0245: This allows multiple relative comparisons or whatever we call it
StellaAthena#3530: I don’t find the thing the blog post does *that* compelling
mgostIH#0245: The sinusoidal embeddings in GPT just used some weird 10000^(2i/d) constant
bmk#1476: i recommend renaming theta to epsilon, because its purpose is to be small enough that you don't end up going too far around the circle
mgostIH#0245: Is there something we can do here that is less up in the air? I don't like random large constants for no reason
mgostIH#0245: But at the same time it's an angle :v
bmk#1476: its exact value doesn't matter as long as it's "small enough but not, like, too small that you have floating point problems"
mgostIH#0245: So the limitations are still that going back full circle is a problem
bmk#1476: yes, but theta feels like angles that *matter*
bmk#1476: this angle is fixed to an arbitrary "small enough" constant
bmk#1476: which really threw me for a loop at first because i thought it was a variable
mgostIH#0245: I see |
mgostIH#0245: What about fixing it so that the last token of the highest frequency is exactly at pi
StellaAthena#3530: @mgostIH that is planned future work
StellaAthena#3530: Literally on the list
mgostIH#0245: What do you mean?
mgostIH#0245: The fixing to pi?
StellaAthena#3530: Yes
StellaAthena#3530: That is on my list of experiments to do
EricHallahan#1051: I think I want to frame this like the setup in the quantum eraser.
Imagine two quarter-wave plates, each against one slit in a double-slit experiment so that one is LHC and the other is RHC. Fire photons from a laser source through a diverging lens into the apparatus and onto whatever film or sensor is there.
bmk#1476: i think fixing to pi/2 makes more sense personally
mgostIH#0245: Why pi/2
StellaAthena#3530: Well, pi/2
mgostIH#0245: pi seems like the opposite of 0 angle
mgostIH#0245: The farthest you can go
mgostIH#0245: The last element is the farthest from the first, relatively
EricHallahan#1051: You are working in dot products.
StellaAthena#3530: Dot product
StellaAthena#3530: Two vectors are the furthest apart when they are orthogonal
mgostIH#0245: Oh ye so we'd get 0 angle when producting them
mgostIH#0245: Aye this makes far more sense |
mgostIH#0245: So I guess we'll just try fixing it to various angles kek
mgostIH#0245: but I think this is already a very interesting direction
StellaAthena#3530: Another thing is that the way they do it is highly redundant
EricHallahan#1051: What ends up happening is that you will get an interference pattern obviously... but one that has photons that are *linearly polarized*.
mgostIH#0245: The advantage of being on the circle is that we preserve the same angle for two tokens that are the same distance apart
StellaAthena#3530: Abstractly, you should be able to only add one dimension: the one you rotate through
StellaAthena#3530: However they do a separate rotation in each coordinate
cfoster0#4356: Yeah in theory you could just choose a small number of the dimensions to rotate
cfoster0#4356: Save some of that capacity for token info
StellaAthena#3530: It makes sense as a noise-resistance thing
StellaAthena#3530: But it seems like a lot more redundancy than you need
mgostIH#0245: Oh wait you mean that instead of going from R^d to C^d only using a k < d elements of the vector to do this with
StellaAthena#3530: Right
StellaAthena#3530: It doesn’t matter how big your vector is, theoretically you can just rotate through the (d+1)st dimension
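The "rotate only k < d dimensions" idea can be sketched like this. The function name, the `rot_dims` parameter, and the base angle are all hypothetical, purely for illustration:

```python
import numpy as np

def partial_rotary(x, pos, rot_dims, theta=1e-2):
    """Apply a rotary-style position rotation to only the first
    `rot_dims` components (must be even), leaving the remaining
    components of the embedding untouched for token information."""
    out = x.copy()
    for i in range(rot_dims // 2):
        a, b = x[2 * i], x[2 * i + 1]
        angle = pos * theta
        out[2 * i]     = a * np.cos(angle) - b * np.sin(angle)
        out[2 * i + 1] = a * np.sin(angle) + b * np.cos(angle)
    return out

v = np.arange(8, dtype=float)
out = partial_rotary(v, pos=5, rot_dims=4)
# Components 4..7 are unchanged; only 0..3 carry position information.
```

Since rotations are norm-preserving, the embedding's magnitude is unaffected regardless of how many dimensions you rotate.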
mgostIH#0245: What about appending to the original vectors just some angle components
mgostIH#0245: After all if we only look at angles the magnitudes may not matter that much
cfoster0#4356: 🤔
EricHallahan#1051: That tends to get messy.
EricHallahan#1051: You have to explain the periodicity then.
mgostIH#0245: Hmmm I am thinking |
EricHallahan#1051: Does this make sense to anyone?
mgostIH#0245: Assign to each vector some additional components, going from R^d to R^(d + p)
Then you define the dot product to be the standard dot product for the R^d part, while being "subtraction" for the R^p part
mgostIH#0245: Notice that in this you aren't constrained by circles
mgostIH#0245: The additional R^p components just act like we want the angles to act in rotary transformers
cfoster0#4356: If you really wanted this, I think you could have a separate position-wise attention branch that sums with the content attention
cfoster0#4356: Like that TUPE paper or something
cfoster0#4356: Though this is very interesting
mgostIH#0245: Idk it's just the first thing that came to my mind
mgostIH#0245: Seems like rotary transformers just use complex numbers to hack in some sort of "dot product of things becomes difference of angles"
tanninStains#0756: But the result of the inner product in R^2d is not equal to the result of the inner product between the 'same' C^d vectors, no? If we calculate the complex inner product, we end up with a complex number. This is why I asked about the softmax.
mgostIH#0245: What if you just encode the difference of angles directly in some components
CRG#8707: The 2i comes from splitting the embeddings between sin and cos at even and odd numbers. Frequencies end up being between 1/10000^0 and 1/10000^1. <https://arxiv.org/pdf/1706.03762.pdf#page=6> https://cdn.discordapp.com/attachments/729741769738158194/832357190831439962/f845c93c09a12bae8806c9ce9cb68341.png
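The frequency schedule CRG points to can be computed directly; this sketch assumes the standard Vaswani et al. formulation with base 10000:

```python
import numpy as np

d_model = 8
i = np.arange(d_model // 2)

# Frequencies 1 / 10000^(2i/d): the first pair oscillates fastest
# (frequency 1), the last pair slowest (approaching 1/10000).
freqs = 1.0 / (10000.0 ** (2 * i / d_model))
```

Shrinking the base (e.g. 100 instead of 10000) compresses this range and, per the Desmos link below, strengthens the attenuation of the dot product with distance.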
EricHallahan#1051: Again, I think you'll get instability.
mgostIH#0245: Aye but why 10k
EricHallahan#1051: ¯\_(ツ)_/¯
StellaAthena#3530: It’s a very large number
cfoster0#4356: Vaswani got tired of counting
StellaAthena#3530: That ensures that we never wrap around
mgostIH#0245: Oh sure, after all most good ML ideas may just fail practically
mgostIH#0245: Or some might just be too good to be true
mgostIH#0245: Like RELU
CRG#8707: Making the base smaller makes the attenuation stronger: <https://www.desmos.com/calculator/vb1p1ynn8b>
mgostIH#0245: What am I looking at
mgostIH#0245: Scrap it, I know *why it's large* but 10k is just silly
mgostIH#0245: I thought there was some more math juice into this
CRG#8707: Dot product between two unit vectors at m and x being rotated using RoPE
StellaAthena#3530: __The to-do list is to experiment with:__
1. Initial rotations: θ in the proof, 1/10k in the implementation
2. How many independent positional embeddings you need to add to get good results. They take a d-dimensional embedding and add another d positional embeddings. How low can we go?
3. Whether we can generalize this to higher dimensional attention
mgostIH#0245: Quaternion attention
mgostIH#0245: :bigbrain:
StellaAthena#3530: Lol
mgostIH#0245: Wait no that's actually a thing people do
mgostIH#0245: While claiming it's 4x more efficient
StellaAthena#3530: Leo and I fought about that at 3 am last night
mgostIH#0245: Eh screw it, I like Geometric Algebra better anyways
StellaAthena#3530: He showed me computational results that surprised me so I gotta go figure out why it doesn’t work
StellaAthena#3530: Algebraic geometry > geometric algebra tbh |
mgostIH#0245: Of quaternion transformers?
EricHallahan#1051: *Because it is a sphere*
mgostIH#0245: Algebraic geometry has polynomials with too much variables
andreas#5842: thanks for sharing. if someone works on one of these ideas i'd love to integrate a prototype into elicit to gather real-world use data. could use that in a paper in addition to toy applications
StellaAthena#3530: Quaternion rotary embeddings
EricHallahan#1051: *You need to keep it a torus*
StellaAthena#3530: Why a torus?
cfoster0#4356: At this point I'm happy to just do rotary with the different axes separately
cfoster0#4356: To avoid family fighting
EricHallahan#1051: Because it is 2D.
StellaAthena#3530: @cfoster0 doesn’t that require d^2 though
mgostIH#0245: Return to ~~monke~~ absolute positional embeddings
mgostIH#0245: GPT-3 uses absolute positional embeddings and it's enough to threaten democracy worldwide
EricHallahan#1051: Why not *2d*?
mgostIH#0245: Do we really need to go further?
cfoster0#4356: What do you mean? What I'm saying is you'd rotate the first half of components based on the x position and rotate the second half by the y position
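A rough sketch of the axial scheme cfoster0 describes, rotating half the components by the x position and half by the y position (function names and the base angle are illustrative assumptions, not from any paper):

```python
import numpy as np

def rope_1d(x, pos, theta=1e-2):
    """Standard pairwise rotary rotation for a 1-D position."""
    out = x.copy()
    for i in range(len(x) // 2):
        a, b = x[2 * i], x[2 * i + 1]
        ang = pos * theta
        out[2 * i]     = a * np.cos(ang) - b * np.sin(ang)
        out[2 * i + 1] = a * np.sin(ang) + b * np.cos(ang)
    return out

def rope_2d(x, x_pos, y_pos):
    """Rotate the first half of the embedding by the x coordinate
    and the second half by the y coordinate."""
    half = len(x) // 2
    return np.concatenate([rope_1d(x[:half], x_pos),
                           rope_1d(x[half:], y_pos)])

v = np.ones(8)
out = rope_2d(v, x_pos=2, y_pos=5)
```

Geometrically each half lives on its own circle, which is why Eric keeps saying "torus": the product of two circles.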
StellaAthena#3530: Ooooo
EricHallahan#1051: i.e. a torus lol
StellaAthena#3530: I thought you meant you’d do two separate rotations
StellaAthena#3530: For each coordinate |
StellaAthena#3530: I guess that’s 2d^2
StellaAthena#3530: One to encode x, one to encode y
cfoster0#4356: You could also separately do attention based on each axis, but that's a whole nother barrel of shrimp
StellaAthena#3530: Yeah
StellaAthena#3530: That’s a barrel I do not want to open
EricHallahan#1051: I thought that was the obvious way to do it, otherwise you run into what Leo demonstrated.
cfoster0#4356: tbf this is what Eric and Leo were harping on but using different language
StellaAthena#3530: Oh?
StellaAthena#3530: Leo wasn’t making any sense to me
StellaAthena#3530: But also it was 3 am so....
cfoster0#4356: I'm not convinced that there isn't a case when you'd want to use q-s, but this does what you'd want for 2D relative position, so I'm happy with it
StellaAthena#3530: I believe you
EricHallahan#1051: If you are working with spherical geometry maybe you would.
mgostIH#0245: 2023 and we'll be putting the embeddings in some weird graph
StellaAthena#3530: Like SE(3) equiveriant transformers?
StellaAthena#3530: Or maybe SO(2) transformers
EricHallahan#1051: I'm not familiar with either of them.
EricHallahan#1051: Do you know what this reminds me of?
https://en.wikipedia.org/wiki/Window_function
EricHallahan#1051: Actually, this is just a rectangular window right? |
mgostIH#0245: What?
EricHallahan#1051: Sinusoidal encoding in attention.
mgostIH#0245: idk what you mean with sinusoidal encodings being a rectangular window
mgostIH#0245: Searching on google for rectangular windows isn't that helpful
mgostIH#0245: Or hmmm
mgostIH#0245: You mean like f(x) = 1 for |x| < 1/2, 0 elsewhere?
EricHallahan#1051: https://upload.wikimedia.org/wikipedia/commons/thumb/6/6a/Window_function_and_frequency_response_-_Rectangular.svg/1280px-Window_function_and_frequency_response_-_Rectangular.svg.png
cfoster0#4356: Not sure if I follow the connection. They *feel* alike but I can't pinpoint it
EricHallahan#1051: Same.
EricHallahan#1051: It is how they attenuate.
mgostIH#0245: You mean that sinusoidal encodings have each component getting embeddings with the same amplitude?
EricHallahan#1051: No, I think I lost you lol
mgostIH#0245: So you propose some sort of attenuation of the amplitude for later components or something like that
mgostIH#0245: Ye probably kek
mgostIH#0245: But all of this keeps hinting me towards using FFTs somehow in embeddings
EricHallahan#1051: https://xkcd.com/26/
cfoster0#4356: something something exponential window?
EricHallahan#1051: ¯\_(ツ)_/¯
mgostIH#0245: You guys live in some weird houses if you have all of these windows shapes
EricHallahan#1051: ^ |
bmk#1476: no
mgostIH#0245: they hated jesus because he told the truth
Louis#0144: I’ve done quaternion based LSTMs
Louis#0144: For an applied complex optimization course
Louis#0144: They didn’t work
Louis#0144: 🙂
Louis#0144: Quaternion valued SGD fucking sucks
Louis#0144: It barely works
Ward#1738: A different form of rotary attention 😉 "A pair of researchers showed that, to represent current and past stimuli simultaneously without mutual interference, the brain essentially “rotates” sensory information to encode it as a memory." https://www.quantamagazine.org/the-brain-rotates-memories-to-save-them-from-new-sensations-20210415/
Ward#1738: "Libby is interested in the implications of their results for artificial intelligence research, particularly in the design of architectures useful for AI networks that have to multitask. “I would want to see if people pre-allocating neurons in their neural networks to have stable and switching properties, instead of just random properties, helped their networks in some way,” she said."
Lord_Drakostar#9337: Hello!
Lord_Drakostar#9337: ._.
Lord_Drakostar#9337: i need some help on getting gpt-neo to run
Kharr#7888: What kind of problem are you running into? and which version are you trying to run?
Lord_Drakostar#9337: dude i have no idea how to run an ai
Lord_Drakostar#9337: i need like every step
Lord_Drakostar#9337: i just have experience with prompt engineering, not ai setup
Kharr#7888: Do you have a gmail account? It's really easy in Google Colab
Lord_Drakostar#9337: i do
Lord_Drakostar#9337: i have gpt2 files in there lol |
Lord_Drakostar#9337: what
Lord_Drakostar#9337: woah
Lord_Drakostar#9337: this is sick
EricHallahan#1051: Welcome!
Lord_Drakostar#9337: hola
Lord_Drakostar#9337: oh hey you're a dev
Lord_Drakostar#9337: sick
Lord_Drakostar#9337: what do you think of semantic search
EricHallahan#1051: Yeah, I would just use one of the Colab notebooks floating around.
Lord_Drakostar#9337: how
Lord_Drakostar#9337: i have a bunch of files on colab even though i've never used it
Kharr#7888: 1. Start up a Google colab notebook and change runtime type to "GPU"
2. write this into the first cell: `!pip install transformers`
3. next cell copy code from https://huggingface.co/EleutherAI/gpt-neo-1.3B (click top right where it says "use in transformers")
4. use it (read instructions on page I linked)
Lord_Drakostar#9337: :D
EricHallahan#1051: Beyond that we cannot help you further, we do not support or maintain the Hugging Face implementation.
Lord_Drakostar#9337: what if i want to use 2.7B
EricHallahan#1051: If you can get it to fit, use 2.7B instead of 1.3B.
Kharr#7888: You're going to need Colab Pro account for 2.7B. Need more memory to load it. |
Lord_Drakostar#9337: oh
Lord_Drakostar#9337: what this do
Lord_Drakostar#9337: GIT_LFS_SKIP_SMUDGE=1
EricHallahan#1051: ¯\_(ツ)_/¯
Kharr#7888: Just use the top portion to get it running: https://cdn.discordapp.com/attachments/729741769738158194/832403793051910154/unknown.png
Kharr#7888: That's as easy as it can get.
Lord_Drakostar#9337: ok so now how do i run it
EricHallahan#1051: Wait a second.
Lord_Drakostar#9337: second has been waited
Kharr#7888: Follow instructions... https://cdn.discordapp.com/attachments/729741769738158194/832404097817903135/unknown.png
Lord_Drakostar#9337: o
Lord_Drakostar#9337: see my attention span is
Lord_Drakostar#9337: futile
Lord_Drakostar#9337: so
Lord_Drakostar#9337: ock
EricHallahan#1051: Ehh... I would just do some Google-Fu with "GPT-Neo colab" and you'll find one.
cat_#4534: but attention is all you need
Louis#0144: money is all u need
Exocamp#8255: *All you need, huh...?*
lab#1636: love is all u need |
Lord_Drakostar#9337: ```
RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
   1062     try:
-> 1063         state_dict = torch.load(resolved_archive_file, map_location="cpu")
   1064     except Exception:

4 frames

RuntimeError: [enforce fail at inline_container.cc:145] . PytorchStreamReader failed reading zip archive: failed finding central directory

During handling of the above exception, another exception occurred:

OSError                                   Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
   1064     except Exception:
   1065         raise OSError(
-> 1066             f"Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' "
   1067             f"at '{resolved_archive_file}'"
   1068             "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. "

OSError: Unable to load weights from pytorch checkpoint file for 'EleutherAI/gpt-neo-1.3B' at '/root/.cache/huggingface/transformers/7c5fac9d60b015cbc7c007ab8fe6d0512787fbaef81968922959898c49468d73.4c6a483fbfb5a25ac384bfcd71a1ff15245f06583a00c4ab4c44ed0f761f0b08' If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
Brady#0053: 😮
Lord_Drakostar#9337: error message when trying to run the model
EricHallahan#1051: We unfortunately do not support the Hugging Face implementation. I might however suggest to try turning it off and on again to see if that fixes it.
Lord_Drakostar#9337: could i use the github implementation instead with Google Colab?
Brady#0053: Yes, install the GitHub implementation
Brady#0053: `pip install git+https://github.com/huggingface/transformers`
Lord_Drakostar#9337: how would i run it
Lord_Drakostar#9337: the model i mean
Brady#0053: ```
from transformers import pipeline
generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B')
generator("EleutherAI has", do_sample=True, min_length=50)
```
Lord_Drakostar#9337: ```
File "<ipython-input-2-2d2e50722fa3>", line 1
    pip install git+https://github.com/huggingface/transformers
        ^
SyntaxError: invalid syntax
```
Lord_Drakostar#9337: wait how do you type like that
Brady#0053: Are you running it in colab? |
Lord_Drakostar#9337: yes
Brady#0053: `%pip install git+https://github.com/huggingface/transformers`
Brady#0053: (% in front of it)
Lord_Drakostar#9337: k
Lord_Drakostar#9337: is there a way to avoid loadtimes
Lord_Drakostar#9337: like to preload it
Lord_Drakostar#9337: then use the already loaded model
Lord_Drakostar#9337: instead
Lord_Drakostar#9337: rather than loading every time
EricHallahan#1051: What are you trying to accomplish?
EricHallahan#1051: Do you just want to inference?
Lord_Drakostar#9337: im tryna use the model
Lord_Drakostar#9337: and use it the most efficient way possible
Lord_Drakostar#9337: i have a huge amount of tests to run
Lord_Drakostar#9337: and experimental bots to build
Lord_Drakostar#9337: and stuff
bmk#1476: @Lord_Drakostar this is not the right place to ask for issues with getting huggingface code to work
Lord_Drakostar#9337: well now im not using hugging face
Lord_Drakostar#9337: im directly using github and collab |
EricHallahan#1051: Here are two notebooks I have found in less than three minutes. https://colab.research.google.com/drive/1JpaulDYxythXhrDDSY1H1Q8qNDNkLVJp?usp=sharing
https://colab.research.google.com/drive/1KDNsA0EpofIMEpd64hJCpxGhpa2lEOsi?usp=sharing
bmk#1476: @Lord_Drakostar this is not a tech support channel
alexyz#3459: and there is no tech support channel
Lord_Drakostar#9337: ._.
EricHallahan#1051: There never will be.
Brady#0053: @bmk @EricHallahan I think it's worth adding the "install Hugging Face from GitHub to get EleutherAI models working" thing somewhere (e.g. the FAQ) since I think it's a common thing people run into. At least until the regular pip package doesn't error when loading the EleutherAI models.
EricHallahan#1051: That is no longer true.
Brady#0053: Ohhh
Lord_Drakostar#9337: alright im running the model and figured it out
Lord_Drakostar#9337: it's running oddly slow tho
EricHallahan#1051: I know that because it was released in `transformers==4.5.0`
Brady#0053: So my laptop is running a bit slow. Any ideas how to fix that?
EricHallahan#1051: SSD?
bmk#1476: have you tried walking into the sea
Brady#0053: With or without the laptop?
bmk#1476: yes
Lord_Drakostar#9337: I have a good computer, it's just that the model itself is running significantly slower than how the model runs on Huggingface.
StellaAthena#3530: Without. You want it to experience FOMO and come running
gwern#1782: get a desktop PC, NVMe SSD drive, and a wired ethernet connection, in that order |
Lord_Drakostar#9337: wait is that actually gwern
bmk#1476: wait actually ... maybe. connor will kill me if he find out that ive been invoking the law of the excluded middle
Lord_Drakostar#9337: are you gwern or just named that
EricHallahan#1051: yes
bmk#1476: yes
Brady#0053: yes
gwern#1782: were shakespeare's plays written by shakespeare or another man named shakespeare?
Lord_Drakostar#9337: as a discord user you can name yourself anything
Lord_Drakostar#9337: i am a huge fan of your article on gpt-3
Lord_Drakostar#9337: https://gpt-3.is/gwern-gpt-3-creative-fiction/ my favourite AI article in existence
Lord_Drakostar#9337: it showcased GPT-3 really well
gwern#1782: thanks. the navy seal copypastas endlessly fascinated me
gwern#1782: 'interpolation in high-dimensional space' may be 'just memorization', but *what* memorization
Lord_Drakostar#9337: yeah, it's interesting to see how gpt-3 can associate wildly contrasting tone to things like barney the purple dinosaur
gwern#1782: yes, or the minimalist navy seal as an even more extreme example
gwern#1782: gpt-3 takes them as examples in stride and spews out as many high quality navy seals as you want, because it *gets* navy seal
Lord_Drakostar#9337: Fun fact: Not only have I just discovered GPT-Neo was published today, I also got to meet Gwern today
Lord_Drakostar#9337: My AI-related dreams are coming true lol
gwern#1782: it's just impressive how well and deeply it mimics. I noticed just now how it copies the censoring/bleeping from the '4chan hacker' one
Lord_Drakostar#9337: really? it's been a while since it read the article |
Lord_Drakostar#9337: sometimes the GPT models can copy things although they don't fully understand them
gwern#1782: yeah, it doesn't stick out in the current version because it's auto-converted to em-dashes. I'll escape them so it's more obvious
gwern#1782: but it uses dashing to censor a variety of curse words, not just the one in the 4chan hacker. so it definitely is well aware of bleeping out, and curse words
Lord_Drakostar#9337: such as GPT-2 in the older versions of AI Dungeon 2 redacting doctor names in the SCP Foundation, although it doesn't technically make sense to do that
Lord_Drakostar#9337: GPT-2 actually mimicked the bleeping, which not only was very stupid but also very smart
Lord_Drakostar#9337: the GPTs seems to be able to categorise things very effectively
gwern#1782: that is, the 4chan hacker version only censors 'fuck', as in 'fucking', 'fucked, 'fuck' etc. but in the completions I see censors of 'bastard', 'shit', 'motherfucker', 'bastard' (possibly 'bitch')...
gwern#1782: anyway, fun little detail of the flexibility
Lord_Drakostar#9337: i wonder how well GPT-3 understands how extreme curse words can be
Lord_Drakostar#9337: rather
Lord_Drakostar#9337: like what words are worse than others
Lord_Drakostar#9337: and could bleep accordingly
Lord_Drakostar#9337: due to it censoring only "fuck"
Lord_Drakostar#9337: anyways, I have to be off now
Lord_Drakostar#9337: goodbye gwern
Imperishable_NEET#1969: I've played around a lot with GPT-3 using it to complete things, though only in AI Dungeon.
Kazumi#1297: So many conversations happen while I'm asleep
bmk#1476: the solution is simple, just never sleep
Kazumi#1297: ☕
Napolean_Solo#2907: What do they mean by faster than real time TTS models? |
Napolean_Solo#2907: Isn't real time the fastest?
RyanT#5929: https://twitter.com/neuroecology/status/1383040267612209153?s=21
RyanT#5929: Haven’t read it but looks interesting
Crit#0843: hey guys, im hoping someone can give me a bit of insight into this:
so i was checking this out https://huggingface.co/EleutherAI/gpt-neo-2.7B on huggingface and was pleasantly surprised with the text generation examples on the hosted inference api
my question is this - in the intended use and limitations section it states that while it can be used for downstream tasks, its intended purpose is for text generation from a prompt, which makes sense. Having said that GPT3 itself has been used in a wide variety of horizontal applications other than text generation (classification, paraphrasing, summarization etc) can this model of neo be applied in similar use cases? I ask because the downstream application section is listed as TBD and got me curious
EricHallahan#1051: I believe it should be able to perform those tasks, but we have not done much in terms of testing those capabilities. (We happen to be more interested in building models than applying them.) If you find that it can perform those tasks, please tell us!
Crit#0843: i will definitely be trying for classification and paraphrasing. are there plans of releasing a model similar in size to da vinci as well?
EricHallahan#1051: Yes. However, we have no idea when we will be completing a model at the 150B-200B scale because the timeline is very fuzzy.
Crit#0843: yup that makes sense..honestly im just stoked the 2.7B and 1.3B got released on huggingface modelhub. can i DM you some questions if you dont mind?
EricHallahan#1051: Sure, I don't see why not.
Louis#0144: @StellaAthena and I got an accept to WNU NAACL 2021 on an Eleuther affiliated paper
Louis#0144: @bmk wanna update the site?
EricHallahan#1051: I think we did already lol
Louis#0144: LMAO
Louis#0144: omg
Louis#0144: U guys
Louis#0144: Are too fast |
voxs#0001: can i get 32gb vram on colab
voxs#0001: this is annoying af
EricHallahan#1051: ¯\_(ツ)_/¯
Louis#0144: No
Louis#0144: Don’t think so
cat_#4534: even the V100 on colab are 16gb
Louis#0144: If you need an A100 for research
Louis#0144: Let us know
Louis#0144: We can consider your use case
Louis#0144: Some strings attached, no Bitcoin mining for instance
Louis#0144: But besides that not much
Louis#0144: If that interests you ask Stella
Louis#0144: She’s in charge
Louis#0144: Of that component *
guac#4716: congrats ya'll. Fly geese, fly!
Louis#0144: Yas
Louis#0144: Ty
Louis#0144: https://twitter.com/lcastricato/status/1383075425774153728?s=21
RyanT#5929: https://arxiv.org/abs/2103.04913
RyanT#5929: lol |
𓅬 gabriel_syme 𓅬#3220: I read it social network
𓅬 gabriel_syme 𓅬#3220: so tired
Louis#0144: Is that real
Louis#0144: I can’t tell
Louis#0144: It reads like an April fools day prank
chilli#5665: I think it's real
Louis#0144: Wtf
nz#9710: I mean, it's max welling, I would guess it has to have some value
Louis#0144: Yeah
Louis#0144: That’s what made me consider it could he real
Louis#0144: Anyone else and I would have thought it was a crank
Louis#0144: But max has a great track record
Louis#0144: I’ll look more closely
guac#4716: anyone here have enough QM background to digest it lol
Louis#0144: Oh yeah
Louis#0144: We have a QFT dude
Louis#0144: I forgot his name
Louis#0144: He wanted to do alignment I think lol
Louis#0144: If I remember it I’ll tag him
elderfalcon#4450: Usually when I want to summon some quantum guy I'll start mumbling about "quantum doors" and "complicated Hilbert spaces". Usually works pretty quickly. |
EricHallahan#1051: Well I have been working on the interpretation of RoPE in Physics.
EricHallahan#1051: And there is a relationship to optical computing lol
fristiloverke#4159: Oooh that's interesting
fristiloverke#4159: And from max welling!
EstebanSir#2189: 🤔 does anyone know if something like a "reverse table answering" model exists? instead of finding answers from the table, it "modifies" the table according to a statement
EstebanSir#2189: it would be simpler than that i would think
EstebanSir#2189: just return the coordinates of the cell mentioned in the statement
EstebanSir#2189: but i havent seen anything like that
EstebanSir#2189: and already existing 'table answering' models (as 🤗 calls it) don't work very well with non-questions
triggerhappygandi#0001: Do you know particle physics people too?
Louis#0144: no
triggerhappygandi#0001: I want to meet particle physics knowers
catal#4638: Where do I get the Q, K,V for self attention in a transformer? Because it looks like they use the same values for all three of them (see the second picture that is an implementation of a transformer) https://cdn.discordapp.com/attachments/729741769738158194/832678786406547486/attention.jpg
catal#4638: https://cdn.discordapp.com/attachments/729741769738158194/832678827221975060/self_att.jpg
CRG#8707: Read: <https://dugas.ch/artificial_curiosity/GPT_architecture.html>
catal#4638: Thank you I'll take a look
EricHallahan#1051: This is an abstraction. You need to look at the implementation underneath this abstraction.
Napolean_Solo#2907: Hello guys
Napolean_Solo#2907: How exactly do you guys implement cutting edge papers?
Louis#0144: we have phil |