bmk#1476: uhhhh so thats just doing backward on multiple batches but vectorized? chilli#5665: yeah DoesThisUnitHaveASoul#7264: This is a neat little group you guys have here, I like it 🙂 chilli#5665: There's some other cool stuff you can do with the combination of `vmap` and `grad` chilli#5665: or well, `grad` like functions chilli#5665: You can use `vmap(vjp(f))(x, v)` to compute the jacobian of `f` bmk#1476: what does vjp do chilli#5665: hmm, so imagine you have a function `f` from R^n to R^m. chilli#5665: Are you familiar with what a jacobian is? bmk#1476: yeah chilli#5665: So the Jacobian of this function is a `N x M` matrix chilli#5665: However, let's say you only wanted to compute the gradient wrt one of the outputs (say the first one) bmk#1476: oh thats what vjp does? chilli#5665: Then, the quantity you want is `jacobian @ [1,0,0,...]` chilli#5665: well, that's the quantity it computes chilli#5665: the way it's actually implemented is reverse-mode AD, except instead of initializing your gradient value with `output_scalar`, you initialize it with `v * output_vector` chilli#5665: so, `vjp(f)(x, v)` computes `vector * (Jacobian of f evaluated at x) ` bmk#1476: and so basically vmap runs that once for every single element in the output bmk#1476: except vectorized chilli#5665: mmm, well, it depends on what you're vmapping over
bmk#1476: i mean in this example chilli#5665: I omitted some notation since it was probably gonna be confusing, so lemme explain it bmk#1476: it's fine, i'm not in big brain mode rn so i probably wont be able to absorb it rn lol chilli#5665: If you do `vmap(vjp(f), in_axes=(0, None))(batched_x, v)` you'll compute `vjp(f)(x, v)` for every x in your batch chilli#5665: However, if you do chilli#5665: `vmap(vjp(f), in_axes=(None, 0))(x, batched_v)`, you'll compute `vjp(f)(x, v)` for every v in your batch. bmk#1476: and the result is same just transposed right chilli#5665: If `batched_v` is the identity matrix, then `vjp(f)(x, v)` for every unit vector is your jacobian 🙂 bmk#1476: oh bmk#1476: ohh bmk#1476: ok chilli#5665: There's other cool stuff you can do with the combination of vmap + grad, such as training a batch of "models" (i.e. training 1000 small models), or per-sample gradients (like mentioned above), or batching multiple meta-learning tasks together AI_WAIFU#2844: should we do this? https://minerl.io/competition/ AI_WAIFU#2844: I know we all hate RL bmk#1476: what would be our competitive advantage EricHallahan#1051: BiG MoDeL AI_WAIFU#2844: fuckloads of compute + andy jones for consultation AI_WAIFU#2844: actually wait there's a compute cap, we have no competitive advantage ethan caballero#6044: #scaling-laws for MineRL Louis#0144: does anyone know a more UX friendly version of git
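A minimal, runnable version of the `vmap`-over-`vjp` Jacobian trick described above. This is a sketch written against the current `torch.func` API (`torch.func.vjp` / `torch.func.vmap`); the chat only uses shorthand like `vjp(f)(x, v)`, so the exact call shape here is an assumption:
```
import torch
from torch.func import vjp, vmap

def f(x):  # f: R^2 -> R^3
    return torch.stack([x[0] * x[1], torch.sin(x[0]), x[1] ** 2])

x = torch.tensor([1.0, 2.0])
out, vjp_fn = vjp(f, x)  # vjp_fn(v) computes v @ (Jacobian of f at x)
# vmap the pullback over the rows of an identity matrix:
# each unit vector picks out one row of the Jacobian.
jacobian = vmap(vjp_fn)(torch.eye(out.numel()))[0]
print(jacobian.shape)  # torch.Size([3, 2])
```
Batching over inputs instead (the `in_axes=(0, None)` case above) would `vmap` over `x` while holding the cotangent `v` fixed.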
Louis#0144: like something that wraps the git CLI AI_WAIFU#2844: yeah it's called git gud Louis#0144: lmao Louis#0144: I stand by what I said that git is completely useless Louis#0144: but everyone insists on using it Louis#0144: so whatever Louis#0144: i miss SVM so much bmk#1476: git is amazing wdym AI_WAIFU#2844: now this might be your hottest take Louis#0144: I have never had a good time with git Louis#0144: yeah Louis#0144: leo Louis#0144: wtf Louis#0144: git is trash Louis#0144: like everyone knows git is trash AI_WAIFU#2844: no I'm talking about you Louis#0144: theres just no better alternative Louis#0144: @AI_WAIFU have u ever used SVM Louis#0144: its like Louis#0144: magnitudes better
Louis#0144: but for some reason people left it for git because 1% of people need the customizability git offers Louis#0144: no one knows what git actually does Louis#0144: they just remember the commands Louis#0144: Git is designed from the ground up to be dense Louis#0144: the UX is honestly some of the worst in the industry Louis#0144: being "good at git" is not something to be proud of Louis#0144: https://news.ycombinator.com/item?id=25123014 Louis#0144: this thread sums it up nicely Louis#0144: (also the fact that git deletes your files every time you fuck up branching is not fun) Louis#0144: Reeee Louis#0144: anyway Louis#0144: I installed ungit Louis#0144: it seems to have what I want Louis#0144: its a nice wrapper around git that removes all the bullshit bmk#1476: [citation needed] bmk#1476: git is super simple bmk#1476: it all makes perfect sense Louis#0144: Yes I mean Louis#0144: The algorithm does bmk#1476: which part *doesn't* make sense to you
Louis#0144: But the UX doesn’t chilli#5665: i like git Louis#0144: How tf do I make a new local branch to push my changes without deleting everything chilli#5665: but maybe I'm just used to it chilli#5665: lol Louis#0144: Like git checkout -b deleted stuff Louis#0144: Always Louis#0144: 😦 bmk#1476: git stash chilli#5665: `git checkout -b <branch>` Louis#0144: I just don’t want git deleting my changes anymore chilli#5665: yeah, if you have unmodified changes then `git stash` Louis#0144: I did this Louis#0144: Oh chilli#5665: if you have unmodified changes then `git checkout -b `is just gonna fail lol chilli#5665: and throw a warning Louis#0144: I’m using ungit now Louis#0144: It’s a lot easier Louis#0144: Also I’ve been failing to use git since I was 14 now Louis#0144: So almost ten years
bmk#1476: maybe youre just bad at git Louis#0144: Maybe Louis#0144: But I miss SVM Louis#0144: I used SVM when I was 12/13 Louis#0144: It was so easy Louis#0144: Just drag and drop files and it did everything for you Louis#0144: It was Dropbox on steroids Louis#0144: Did all your version control and merging too chilli#5665: lol chilli#5665: when you were 12 you probably weren't developing with anybody else Louis#0144: Oh Louis#0144: That’s true Louis#0144: hm bmk#1476: lol chilli#5665: and you also probably weren't developing very large projects Louis#0144: Yeah just games in openGL Louis#0144: nothing massive chilli#5665: eh, i guess even for large projects chilli#5665: you don't really need branches per se Louis#0144: I did have branches
Louis#0144: It had a nice GUI to manage branches Louis#0144: And compare them chilli#5665: also, pretty sure it's SVN :thonk: bmk#1476: just use git diff Louis#0144: SVN oooo Louis#0144: Yeah you’re right SVN Louis#0144: it’s been a long time bmk#1476: maybe he uses a support vector machine for version control bmk#1476: dont judge Louis#0144: LMAO Louis#0144: You gotta admit though Louis#0144: The commands are not thoughtfully laid out bmk#1476: no Louis#0144: And the man page for git is DENSE Louis#0144: Archwiki helps a bit Louis#0144: Like I’m no stranger to dense terminal stuff bmk#1476: tell me something that you think git should be able to do bmk#1476: and ill tell you how to do it chilli#5665: bruh Louis#0144: LMAO
chilli#5665: SVN Is centralized? chilli#5665: I never used, so I don't know chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/834554265053823036/unknown.png chilli#5665: if you don't have internet connection to the SVN server you can't commit chilli#5665: lmao Louis#0144: Yeah it’s centralized bmk#1476: lol Louis#0144: Tbf that isn’t an issue modern day chilli#5665: I never really used anything before git, so I'm pretty happy with it Louis#0144: No one is going on planes anyway 😉 chilli#5665: I think I'm quite proficient with it haha chilli#5665: let's see, what are the commands I use often Louis#0144: Merge > 2 branches at once where each branch is in its own fork chilli#5665: pull, push, checkout, stash, reset, status, log, diff, add, cherry-pick, rebase, bisect, reflog chilli#5665: I think that's pretty much all you need bmk#1476: git doesnt have forks so i assume you mean two remotes Louis#0144: Yeah bmk#1476: cant you just merge them one at a time Louis#0144: No lets you want all the options for a certain line of code presented to you Louis#0144: I’ve had this happen
bmk#1476: then just do `git checkout master && git merge branch-a branch-b` bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/834555747702538321/unknown.png bmk#1476: i dont see what the problem is Louis#0144: its u leo https://cdn.discordapp.com/attachments/729741769738158194/834572259888070666/b5a0be5.png inox#5400: http://ratpoison.nongnu.org/ AI_WAIFU#2844: he's right tho Louis#0144: no thanks Louis#0144: id prefer to be able to copy text without going through vim Louis#0144: im v proficient at vim Louis#0144: but dear god is it slow inox#5400: you can copy in tmux `ctrl-b [`, space then select, `ctrl-b ]` to paste Louis#0144: mice were created originally to highlight text Louis#0144: I like to think they knew what they were doing Teemochu#8740: I am quite :catgirl3:, and you? 𓅬 gabriel_syme 𓅬#3220: What the :goose2: 𓅬 gabriel_syme 𓅬#3220: Oh for :goose2: sake 𓅬 gabriel_syme 𓅬#3220: OK I'll keep working on this Kia#2550: Go for it StellaAthena#3530: https://fosspost.org/researchers-secretly-tried-to-add-vulnerabilities-to-linux-kernel/ StellaAthena#3530: Ooof
Kia#2550: Oof that's a dumb idea StellaAthena#3530: This is more than just a bad idea. StellaAthena#3530: It’s a felony StellaAthena#3530: You don’t get out of committing cyber crimes by saying “JK! It was just a test!” Kia#2550: That's hilarious Stupid...that's Just stupid AI_WAIFU#2844: that and the bit where vulnerabilities get into the freakin' linux kernel. AI_WAIFU#2844: that can be pretty disastrous AI_WAIFU#2844: on the other hand I've added "inject obscure vulnerabilities into open source projects" to the list 𓅬 gabriel_syme 𓅬#3220: Why would u ever do that lol 𓅬 gabriel_syme 𓅬#3220: Are you auditioning for some obscure cyber crime syndicate? gwern#1782: what felony is it? AI_WAIFU#2844: No, I'm just wargaming ways in which AGI can fuck us all. gwern#1782: (it *is* depressing how many vulns they apparently snuck in, without even being great coders providing the bait of awesome new features or fixes. i had hoped for better) bmk#1476: i feel bad for any aspiring kernel devs who happen to have gone to this university lol alexyz#3459: EleutherU gwern#1782: @bmk one can still submit pseudonymously, right? EricHallahan#1051: https://www2.ed.gov/admins/finaid/accred/accreditation-handbook.pdf bmk#1476: well, you're way more accustomed to this whole pseudonymy thing than like 99.999% of the population gwern#1782: guess now they'll learn the joys 𓅬 gabriel_syme 𓅬#3220: Oops my bad I was referring to the students not you
bmk#1476: the students are AGI trying to fuck us all gwern#1782: the students are lower bounds on *real* adversaries gwern#1782: honestly, what even is the NSA doing screwing around with dual_ec or waiting for heartbleed-like bugs 𓅬 gabriel_syme 𓅬#3220: I now want to go and rewatch wargames for some reason EricHallahan#1051: It was on YouTube for free with ads. Louis#0144: What’s going to happen to university of Minnesota is that their IRB is going to get gutted Louis#0144: I’ve seen this happen before EricHallahan#1051: I used that as an excuse to watch *Contact* as well. Louis#0144: This doesn’t seem like grounds to revoke tenure Louis#0144: I saw people discussing that on twitter and in IRCs Louis#0144: He’s gonna get slapped on the wrist yes Louis#0144: But the IRB gave him exemption Louis#0144: Imho the IRB is equally to blame here gwern#1782: tenure is pretty hard to revoke but I wonder if they'll manage it Louis#0144: It very very rarely happens Louis#0144: Except like Louis#0144: For criminal charges Louis#0144: Which this isn’t gwern#1782: stella claimed it was a felony 𓅬 gabriel_syme 𓅬#3220: Great movies tbh
Louis#0144: Oh shit Fr? Louis#0144: I didn’t see anyone mention that Louis#0144: She could totally be right though gwern#1782: I'm not sure what felony tho bmk#1476: shit, it's actually a felony? bmk#1476: errrrr 𓅬 gabriel_syme 𓅬#3220: I mean did the professor actually do this? Or it is his/her responsibility to know 𓅬 gabriel_syme 𓅬#3220: Because students are adults, mostly Louis#0144: They got the IRB to exempt them Louis#0144: But reading their paper Louis#0144: Im getting the impression they lied to the IRB Louis#0144: basically saying code reviewers aren’t human participants Louis#0144: So I might take back what I said earlier Louis#0144: Did Minnesota make a statement? 𓅬 gabriel_syme 𓅬#3220: Oh it was within a project? bmk#1476: sounds accurate, I'm basically braindead when i review prs bmk#1476: git clone -> run tests -> quick visual check that they didn't add an ascii art dong in the code -> approve Deleted User#0000: this happens in industry too Deleted User#0000: don't worry about it asparagui#6391: 8==()
Deleted User#0000: a lot of PRs are just ceremonial Deleted User#0000: software veterans will attest Louis#0144: I’m gonna add lots of ascii art Louis#0144: Next PR Louis#0144: goose girl ascii art Deleted User#0000: except when Louis makes the PR Deleted User#0000: that's why PR's exist Louis#0144: HAHAHA Deleted User#0000: there's always a Louis at a company asparagui#6391: ___ ,-"" `. ,' _ e )`-._ / ,' `-._<.===-' / / / ; _ / ; (`._ _.-"" ""--..__,' | <_ `-"" \ <`- : (__ <__. ;
`-. '-.__. _.' / \ `-.__,-' _,' `._ , /__,-' ""._\__,'< <____ | | `----.`. | | \ `. ; |___ \-`` \ --< `.`.< hjw `-' Teemochu#8740: I've put a :ditto: ascii art on my reviews in the past when I meant "ditto" (as in, "here's the third place in which you need to change the thing I mentioned above") bmk#1476: i hereby authorize Louis to include one sfw goosegirl ascii art in the code of a task he PRs to eval harness Louis#0144: Ty guac#4716: put it in quac! bmk#1476: i don't care which task it goes in as long as louis contributes a task EricHallahan#1051: But that is for the duck Teemochu#8740: Put it in honc bmk#1476: incentives™ asparagui#6391: https://en.wikipedia.org/wiki/Howard_the_Duck_(film) bmk#1476: BONUS POINTS to @Louis if he invents a new task called HoNK/HoNC or something similar and prs that task
𓅬 gabriel_syme 𓅬#3220: Nice movie Louis#0144: Ok wait Louis#0144: Bird classification? Louis#0144: Does anyone know a bird related NLP task asparagui#6391: http://www.vision.caltech.edu/visipedia/CUB-200.html Louis#0144: NLP 𓅬 gabriel_syme 𓅬#3220: Only dounds 𓅬 gabriel_syme 𓅬#3220: Sounds Louis#0144: oh this reminds me are we making an eval harness for DALL-E-Neo Louis#0144: Or whatever we are calling it 𓅬 gabriel_syme 𓅬#3220: Now I can contribute to that guac#4716: mm-eval-harness 𓅬 gabriel_syme 𓅬#3220: I'm making metrics for dalle generations right now but they would probably be too specific Louis#0144: No architecture previz is an amazing eval Louis#0144: I think that’s good Louis#0144: I also wanna do text adventures 𓅬 gabriel_syme 𓅬#3220: Cool I can add them once I have them there 𓅬 gabriel_syme 𓅬#3220: I really liked your adventures idea 𓅬 gabriel_syme 𓅬#3220: I was thinking about maps or strategy games, something of the sort later Louis#0144: Oooo
Louis#0144: That’s cool Louis#0144: Have you seen my advisors Lovelace paper Louis#0144: https://arxiv.org/abs/1410.6142 Louis#0144: This is what I think we should go for with a DALL E harness 𓅬 gabriel_syme 𓅬#3220: Nope but I'll take a look thx Brady#0053: @Louis @StellaAthena https://cdn.discordapp.com/attachments/729741769738158194/834650883731226634/Screen_Shot_2021-04-22_at_12.44.00_AM.png Louis#0144: WHAT Louis#0144: Invite him to the discord 😉 bmk#1476: holy shit bmk#1476: we need to reach out to him guac#4716: we need a yoshua bengio emote now lol Louis#0144: Yes bmk#1476: i have absolutely no idea where he got the idea that we're interested in "continual learning and OOD" Louis#0144: Yeah we aren’t really but if he wants to talk about it we’ll listen Brady#0053: Stella posted something about helping with continual learning on another Slack, so I sent that to Irina, a Mila prof who does continual learning. Irina told Yoshua Louis#0144: Which slack? bmk#1476: if i had to list the 3 words that describe eleuther the most, they would be: LMs, scaling, alignment Brady#0053: Elicit Louis#0144: Nah Louis#0144: LM, scaling, multimodal
Brady#0053: Nah Louis#0144: Alignment is fifth maybe sixth Brady#0053: LM, scaling, tech support Louis#0144: True Brady#0053: My printer recently stopped working. How do I fix it? Louis#0144: Install Jax bmk#1476: louis im not going to argue with you on this, i know youre just trying to stir shit up with me Louis#0144: I’m not Louis#0144: Most ppl here don’t do alignment Louis#0144: Like genuinely most Dont bmk#1476: most people here dont do multimodal either Teemochu#8740: Maybe 3 is the wrong k then Louis#0144: Anyway it’s irrelevant. Alignment is clearly important to EAI Louis#0144: I won’t disagree there Louis#0144: Continual learning though Louis#0144: What do we even do that slightly relates to that bmk#1476: continual learning definitely comes *after* alignment lol Louis#0144: Yeah Louis#0144: I agree Teemochu#8740: ~~where is goose~~
bmk#1476: let's go message bengio lol guac#4716: what grassroots project has resources for proper cont learning Louis#0144: None Louis#0144: lol Louis#0144: I feel like normativity stuff can be turned into applied continual learning cfoster0#4356: Tbf we are very interested in OOD if OOD is codeword for bootstrapping approaches to alignment :hap: Louis#0144: I don’t know of any alignment project we have in that direction tho bmk#1476: i mean, connor is doing model splintering stuff for aleph, which is basically OOD on steroids bmk#1476: he plans on bringing that to eleuther soon™ Louis#0144: I don’t really know what aleph does tbh bmk#1476: lmao and you applied anyways Louis#0144: I have a call in a few days too Louis#0144: Jason said he wants to meet with me personally Louis#0144: So Louis#0144: 🤷‍♂️ cfoster0#4356: Is Bengio really compute-limited? Louis#0144: No way Louis#0144: Mila has insane resources bmk#1476: no, which is why we need to set the record straight bmk#1476: at least imo, the purpose of eleuther is *not* to be a middleman for taking in compute donations and then handing them out to people with proposals
bmk#1476: that's what it's been recently because wasting compute is bad bmk#1476: but this should be temporary bmk#1476: there are way too many people in the niche of being a compute-distributing middleman cfoster0#4356: Yeah..... it doesn't help that we don't *own* any compute. We've always depended on the kindness of strangers bmk#1476: we need to emphasize that we're a research group with research goals bmk#1476: which means we need to get better at getting our ideas into papers Louis#0144: Which is why I’m trying to solidify a comp creativity subgroup so storytelling becomes part of the EAI research goals bmk#1476: which means we need better infrastructure to support research, which means i need to stop dragging my heels and get the framework done bmk#1476: computational creativity doesn't seem super related to the kinds of things i personally think eleuther should be focusing on EricHallahan#1051: I can *try* to look at TriviaQA tomorrow. bmk#1476: like sure i guess the normativity stuff is fine bmk#1476: I was talking about a different framework lol Louis#0144: Ye that’s why I’m having an issue rn relating it Louis#0144: I’ll find a Way that makes everyone hair Louis#0144: Happy * bmk#1476: what if you lean more on normativity bmk#1476: i can help you make it sound alignmenty Louis#0144: Yeah that’s true Louis#0144: We can discuss this tmrw Louis#0144: Bed
Louis#0144: I’m falling asleep EricHallahan#1051: But that one needs to get done too. So much for "get it done within a month if everyone pitches in." bmk#1476: i don't *personally* want to work on much normativity but i would totally support you doing eleuther normativity stuff Brady#0053: Do you guys like have meetings to organize? bmk#1476: nah lol EricHallahan#1051: Maybe EricHallahan#1051: ¯\_(ツ)_/¯ EricHallahan#1051: Meetings are never planned. cfoster0#4356: all of the organization is over discord basically Brady#0053: Interesting Brady#0053: Impressive EricHallahan#1051: Text is better for archival purposes. bmk#1476: inefficient tbh lol cfoster0#4356: In fact idk if I've interacted with anyone here substantively outside of this, Overleaf, and GitHub guac#4716: you ever make it to nyc dinners on me pal bmk#1476: I'm still working on the new and improved experiment tracking framework bmk#1476: by tracking i guess i mean more provisioning bmk#1476: experiment provisioning framework EricHallahan#1051: It looks really good tbh Louis#0144: I live near nyc
Louis#0144: Where’s my dinner bmk#1476: logging will be wandb Louis#0144: I’m in nyack rn guac#4716: hey we're both vaccinated let's hit up shops in nanuet bmk#1476: @Louis did you forget lol bmk#1476: you and guac live super ultra close guac#4716: he forgot about me already sheesh Louis#0144: OH YEAH Louis#0144: LMAOOOO Louis#0144: we should Louis#0144: Holy shit bmk#1476: when louis guac meetup Louis#0144: True cfoster0#4356: oo. Might pay a visit in the fall 🌆 bmk#1476: this needs to happen bmk#1476: livestream pls guac#4716: i'll make a twitch just for the occasion hehe Louis#0144: Ok yeah let’s organize that for sometime next week bmk#1476: excitinf Louis#0144: I’ve had both vaccines
bmk#1476: lol imagine living somewhere where the vaccine rollout is happening at a reasonable rate bmk#1476: jk vaccine rollout here is reasonable i guess guac#4716: i've only had one. next one is may 12th ... if you want to wait for me to be completely poked...i understand lmao Teemochu#8740: Canada is a lot slower than the US bmk#1476: but it could be worse Louis#0144: Nah idc Louis#0144: One is enough Louis#0144: I’m not gonna get like inches from u Louis#0144: Sorry guac Louis#0144: Not into that Teemochu#8740: First dose of pf/mrna is as effective as J&J bmk#1476: we only have az here rn Teemochu#8740: only reason it wasn't authorized for one is that the companies decided to use two in the studies (similarly why the dosage is so high compared to what probably could be used and still meet authorization criteria [which is >60% effectiveness iirc]) bmk#1476: and only for >55 i think 𓅬 gabriel_syme 𓅬#3220: I'm always learning in here, if that counts 𓅬 gabriel_syme 𓅬#3220: for real though, it's not a boring topic or anything StellaAthena#3530: For the record, people were asking about continual learning + language models and I said we are trying to get into that because of #deleted-channel, which I view as in that sphere or at least working towards that goal. 𓅬 gabriel_syme 𓅬#3220: I'm down for this btw, if you accept design as part of it Louis#0144: Yes ofc we will StellaAthena#3530: I Zzzzzzz now
Louis#0144: Gn StellaAthena#3530: @Louis remind me to tell you about Spectral Katz tomorrow Louis#0144: Nya ~ Louis#0144: Ok StellaAthena#3530: It's a way we might be able to measure the relative importance of different links in a large causal chain StellaAthena#3530: graph theoretically bmk#1476: (for the record, i view EEGI more as a sort of value learning / alignment-by-default kind of direction thing, since "post-deployment" doesnt make sense when you dont have a real deployment environment, but also idk what connor has in mind) Brady#0053: I'm interested (causality is my thing) 𓅬 gabriel_syme 𓅬#3220: wasn't there a discussion about RL and scaling recently? could that be put under continual learning? 𓅬 gabriel_syme 𓅬#3220: or do I have a wrong understanding of the latter? StellaAthena#3530: Anyways, night. I'll message @Louis and @Brady tomorrow bmk#1476: ;----------------------------------------------------------; https://cdn.discordapp.com/attachments/729741769738158194/834667168846184488/unknown.png bmk#1476: (while trying to do multii-gpu) bmk#1476: why is everything cursed and broken EricHallahan#1051: ¯\_(ツ)_/¯ rb#3159: I'm interested Daj#7482: It seems some people here are misinformed, this is _absolutely_ what I am interested in Daj#7482: Though it does not represent the bulk of work done at Eleuther thenightocean#6100: whats this? https://www.youtube.com/watch?v=00ROgQUBvxw thenightocean#6100: someone reads the paper in youtube stream?
nev#4905: https://openreview.net/forum?id=HklBjCEKvH nev#4905: discuss StellaAthena#3530: Train it on a real dataset, give it hard problems, and wake me up when it can solve them. Sphinx#2092: You might then be interested in this: https://arxiv.org/abs/2010.00710 Sphinx#2092: Testing the limits of what you can achieve when you sacrifice any and all practicality lol StellaAthena#3530: Oh? How so? Sphinx#2092: They get very amazing results, basically domain adaptation for free by simply swapping the datastore (i.e. no additional training) Sphinx#2092: I believe the method also improves the underlying MT system as well, which is nice. Sphinx#2092: The potential is incredible...except for the fact that it's like two orders of magnitude slower. StellaAthena#3530: Oof Sphinx#2092: In retrospect, it's sorta reminiscent of @andyljones 's comment on test-time vs train-time compute. Sphinx#2092: Though this is far more powerful than just using very large beams. StellaAthena#3530: In the OpenReview link the authors mention that their method is about half as fast on Enwik8 and refer to this as “comparable training performance.” 😦 Sphinx#2092: Yeah I dunno about the LM paper, I only read the related MT paper lol Sphinx#2092: It's the same first author though. Sphinx#2092: In the kNN-MT paper, they claim: " During inference, retrieving 64 keys from a datastore containing billions of items results in a generation speed that is two orders of magnitude slower than the base MT system." StellaAthena#3530: Not nearly as bad as an order of magnitude, but even a factor of 2 is very sad when you measure train time in months Sphinx#2092: The whole thing is nice philosophically though. I think the upshot is that this is kinda like an oracle model. Sphinx#2092: Like if you trained a big enough model, you shouldn't need to lug around the datastore, since you can just memorize it. Maybe in some sense, regular training is like distilling these nearest neighbor models.
CKtalon#7792: but could this just be bleu-mining, instead of coming up with something that's idiomatic? Sphinx#2092: Dunno what you mean exactly by "bleu-mining" or "idiomatic" but I think the WMT experiments seem pretty convincing. The domain adaptation seem good as well, though the bleu scores are pretty high for those datasets, so it can be a bit hard to tell. Sphinx#2092: The premise of domain adaptation by simply swapping datastores, or even the more naive "just add more entries to the datastore" is quite nice if you ignore actually deploying this. CKtalon#7792: i mean it might do well on bleu, but it might not actually 'read' well for a human 𓅬 gabriel_syme 𓅬#3220: does this mean it will be fast enough (to be practical) in the next gen of hardware? Sphinx#2092: Dunno. Not sure if next gen hardware is really two order of magnitude faster. Or if it is, if we wouldn't just re-invest the compute into building an even bigger model. But I dunno, I don't do anything practical so I'm not the best person to ask about these things. 𓅬 gabriel_syme 𓅬#3220: Thanks. I was just thinking if bigger models will be able to do "domain adaptation for free by simply swapping the datastore"? Or maybe that's not as valuable as it sounds? CKtalon#7792: it will be valuable if true CKtalon#7792: for instance outdatedness of the model can be fixed CKtalon#7792: without expensive retraining/finetuning CKtalon#7792: but 2 orders of magnitude slower makes it quite unusable in practice CKtalon#7792: if a normal model takes 0.5 seconds to generate a paragraph, and this takes 500 seconds.. a human will probably have done it from scratch StellaAthena#3530: Follow-up, re deliberately submitting vulnerabilities to the Linux kernel as "research." https://twitter.com/SarahJamieLewis/status/1384871385537908736?s=19 Sid#2121: :thonk: first time i'm hearing of gradio hub Sid#2121: there's not really a paper for gpt-neo, we didn't really change much / introduce anything new so Sid#2121: we can add a citation thing to the github i guess? Deleted User#0000: ah ok, I have a temporary demo link to look at here https://48003.gradio.app/ EricHallahan#1051: Oh, it is 350M? :thonk:
Sid#2121: cool! I would request that you don't use the 350M because we didn't intend to release it *cough* @bmk *cough* and i think it sucks Sid#2121: what is this running on? can you go bigger at all lol bmk#1476: just take it down Deleted User#0000: yeah 1.3B, 5 GB is too big bmk#1476: i literally put it up for testing and we never made any promises Sid#2121: this is using hf in the backend right? bmk#1476: there's absolutely nothing wrong with just removing it EricHallahan#1051: If there really needs to be a demo, just use Colab+HF IMO. bmk#1476: if there are no objections, I'm deleting 345M from hf EricHallahan#1051: I am really confused. I'm not talking about the inference API. StellaAthena#3530: Are there not a bunch of models on gradio? It sounds like you’re saying “putting it on an obscure website is more accessible because it’s far less used” which seems incorrect. Louis#0144: https://twitter.com/thom_wolf/status/1385246156075192320?s=21 inox#5400: beyond frontiers of ethics?? inox#5400: uhhhhhhh Daj#7482: This is not what he meant, but I kinda wish it was lol Daj#7482: and yeah I/others here are involved with that StellaAthena#3530: The RoPE blog post was a massive success, attracting 2.5k views in the 36 hours it's been live. EricHallahan#1051: Is that from site analytics? StellaAthena#3530: Yup cfoster0#4356: Nice! Glad we were able to direct so many eyeballs towards Jianlin's work :hap:
chilli#5665: oh you're on this server - out of curiosity, are you affiliated with gradio? Deleted User#0000: it survived /mlreddit which is rare StellaAthena#3530: survived? Deleted User#0000: mlreddit is the place where your ideas usually get chewed up Deleted User#0000: in a very frank manner Deleted User#0000: im saying it as a positive thing StellaAthena#3530: ah StellaAthena#3530: Link? Tinytitan#5596: https://old.reddit.com/r/MachineLearning/comments/mvf7ho/r_rotary_positional_embeddings_a_new_relative/ Tinytitan#5596: or https://www.reddit.com/r/MachineLearning/comments/mvf7ho/r_rotary_positional_embeddings_a_new_relative/ Louis#0144: mlreddit is like Louis#0144: where all review 2s go Louis#0144: sad DoesThisUnitHaveASoul#7264: The best way is to be pre-emptive about it DoesThisUnitHaveASoul#7264: It worked when I tried it DoesThisUnitHaveASoul#7264: https://www.reddit.com/r/MachineLearning/comments/btnj4s/r_learning_to_learn_by_selfcritique/ DoesThisUnitHaveASoul#7264: >You can be as harsh as you want. You can't top reviewer #2 anyway. Lucy Stripes#1932: Hey guys! I'm just here because I have a question: will EleutherAI have a “text in, text out” interface? I know nothing about computer science and I love the accessibility of OpenAI's beta. cfoster0#4356: Hey there, @Lucy Stripes ! Lucy Stripes#1932: hi!
cfoster0#4356: At the moment, the best text-in, text-out interface to the models we've released so far is from HuggingFace. At some point, CoreWeave may also choose to offer a similar kind of API experience, I believe AI_WAIFU#2844: see: https://huggingface.co/EleutherAI/gpt-neo-2.7B cfoster0#4356: In general though, the folks here have been much more focused on doing the research and engineering behind training the models than setting up a smooth UX. Other parties are welcome to do that, since everything's open source here Lucy Stripes#1932: Ok this might sound like a weird question but would any of these models be good at poetry? lol (also i can totally see why you'd want to finish the engineering before starting the platform) Lucy Stripes#1932: (i'm a poet who's fascinated by the relationship between art and AI) AI_WAIFU#2844: Yesn't CRG#8707: Have you seen: https://www.gwern.net/GPT-3 ? AI_WAIFU#2844: ^ AI_WAIFU#2844: also feel free to check out #art Lucy Stripes#1932: you guys have been so helpful, thank you! EricHallahan#1051: Why haven't I added this to the FAQ lol Daj#7482: (add a suggestion to check out #art to the FAQ because it's rad lol) EricHallahan#1051: I keep forgetting to add it. DoesThisUnitHaveASoul#7264: @Lucy Stripes I have a colleague that worked on NLP for sarcasm. Getting a model trained on such a massive corpus to be a poet might be highly dependent on the prompt given, or, the more technical angle which would be some 'fine tuning' as we call it, i.e. tuning the model on a poetry dataset to prime it more towards that direction Lucy Stripes#1932: here's a preset i made based on my own poetry, it generates super depressing stuff lol: https://beta.openai.com/playground/p/onAg8Jc4TGYvuYtVY8dmJbAy?model=davinci DoesThisUnitHaveASoul#7264: I do not have an invite DoesThisUnitHaveASoul#7264: Maybe copy the text? 🙂 Dromarion#3383: Once I've scraped together the skillset, I'll be working on a way to use NEO as an assist tool for writing projects. It's certainly something I want to use. DoesThisUnitHaveASoul#7264: Grammarly is doing a pretty good job overall. I am assuming they'll be soon be using a transformer under hood. Lucy Stripes#1932: ok, here's the last poem that my preset generated:
Lucy Stripes#1932: Topic: sensory overload Poem: there is a drone in my ear a buzz in my ear it’s loud it’s loud can you hear it? a drone in my ear a buzz in my ear it’s loud it’s loud can you hear it? i can hear it i can hear it i can hear it i can hear it i can hear it i can hear it i can hear it i can hear it
i can hear it i can hear it i can hear it i can hear it i can hear it i can hear it i can hear it i can hear it DoesThisUnitHaveASoul#7264: https://www.grammarly.com/blog/how-grammarly-uses-ai @Dromarion DoesThisUnitHaveASoul#7264: Interesting. Apparently it can hear it Lucy Stripes#1932: That's the biggest problem that it has lol, it repeats itself a lot nz#9710: it can *really* hear it Dromarion#3383: Wait does grammarly generate text? I was thinking of doing more like what AI Dungeon does but for more general purpose writing. EricHallahan#1051: No, just edits AFAIK. Dromarion#3383: Yeah, I'm after something that'll basically write my stories for me, or at least carries some of the weight in writing the narrative. DoesThisUnitHaveASoul#7264: oh DoesThisUnitHaveASoul#7264: no, I like writing too much to let an AI take it from me for now DoesThisUnitHaveASoul#7264: but perhaps I can write something neat and then have the AI generate a bunch of angles, and I can choose the one that I like to adapt or something Dromarion#3383: I get blocked a lot so I personally like having an AI there to continue my train of thought or just provide ideas or directions on where to take things. DoesThisUnitHaveASoul#7264: I see. That makes sense.
Lucy Stripes#1932: i don't think AI could ever fully take over creative writing. One of the appeals of reading something is knowing that a person with their own life and perspective wrote it. alexyz#3459: I think it will DoesThisUnitHaveASoul#7264: what about an AI with it's own training experience and inductive biases alexyz#3459: just wait a bit more alexyz#3459: it'll never fully take over alexyz#3459: but like 10 years from now we'll have mainstream AI generated books (possibly) cfoster0#4356: Seeing what folks like janus make in the #art channel with (relatively) minimal human intervention has really challenged me in this regard cfoster0#4356: The outputs are just *so damn painterly* Dromarion#3383: I just realized that instead of writing my book, I'm taking dense machine learning coursework to get an AI to write my book for me. This is procrastination right 🤔 EricHallahan#1051: https://xkcd.com/1319/ theurbandragon#3939: Have GPT-NEO write the code... Dromarion#3383: *Working on an automation program to automate working on an automation program* inox#5400: @Lucy Stripes have you tried https://www.shortlyai.com/ ? inox#5400: iirc it's GPT-3 under the hood bmk#1476: i feel personally attacked by this xkcd bmk#1476: pyfra: :guilty: EricHallahan#1051: Though `pyfra` just makes things easier to automate. Lucy Stripes#1932: that looks cool! super pricey though theurbandragon#3939: https://news.ycombinator.com/item?id=23908820 EricHallahan#1051: But React doesn't automate anything.
EricHallahan#1051: https://xkcd.com/2451/ theurbandragon#3939: write the build script for it too? theurbandragon#3939: *have it write inox#5400: 3 days free 😅 Lucy Stripes#1932: so, i know you guys probably have more important things to focus on, but would you ever be interested in adding something like this to the pile? https://www.kaggle.com/johnhallman/complete-poetryfoundationorg-dataset cfoster0#4356: We considered adding a poetry dataset to the Pile way back when. I think the consensus was that most of the datasets were too small to be worthwhile or too hard to scrape Lucy Stripes#1932: :/ oh well! maybe someday someone who's as passionate about ai poetry as me but who can actually code will go through the trouble of scraping Lucy Stripes#1932: i wonder how OpenAI did it. they obviously have a ton of poetry Lucy Stripes#1932: oh wait i just saw you have Project Gutenberg in your pile! That's awesome!!!! As a future librarian I am OBSESSED with Gutenberg!!!! aze#1010: can gpt neo generate random sentences describing an object? e. a pink llama on fire, a frog with a hat on its head triggerhappygandi#0001: Depends on your input sequence I guess. EricHallahan#1051: Technically it can? aze#1010: ive been trying but its not very consistent finetune#0907: A quick thanks for pointing me in the direction of the local attention regarding that memory issue. I modified the huggingface implementation a bit and can now easily run inference with the full context window in a free colab instance with the 2.7B model. There's still a slight chance that I subtly broke things, but it seems to be working fine. EricHallahan#1051: I doubt it would generate that kind of sentence. EricHallahan#1051: Do you happen to have the code somewhere? We can try to go verify that. aze#1010: any ideas? or maybe references to models id need to train with those sentences finetune#0907: I posted a patch in my huggingface issue, so hopefully somebody from there will look into it finetune#0907: The issue was caused by the input being split into blocks in an odd way when the input length is not divisible by the window size, so now I am padding it and creating a mask to mask out the padding before it goes through the part where it's split into blocks EricHallahan#1051: Yeah, I just brought up the issue from GitHub to take a look, and it does look like it fixed the memory problem.
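A rough sketch of the padding-plus-mask approach finetune describes above; the helper name and exact shapes are hypothetical, and the actual patch in the Hugging Face issue differs in detail:
```
import torch
import torch.nn.functional as F

def pad_to_window_multiple(hidden_states, attention_mask, window_size=256):
    """Pad the sequence dimension up to the next multiple of window_size and
    extend the attention mask with zeros so the padding is ignored when the
    input is later split into local-attention blocks."""
    seq_len = hidden_states.size(1)  # hidden_states: (batch, seq, dim)
    pad = (-seq_len) % window_size
    if pad:
        hidden_states = F.pad(hidden_states, (0, 0, 0, pad))       # pad the seq dim
        attention_mask = F.pad(attention_mask, (0, pad), value=0)  # 0 = masked out
    return hidden_states, attention_mask
```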
finetune#0907: Yes, it's definitely much better. I think it might even fit into 8GB now with a full length sequence. I still don't exactly understand what's going on with the block splitting in the original implementation there, but the generated outputs look fine. :hap: EricHallahan#1051: We utilize the HF implementation in our evaluation suite, so we can run your patch across multiple tasks to see if performance is unchanged. I'll set up an experiment at some point to verify it, and if it looks good, I'll report my results in the issue. finetune#0907: That would be very cool, thanks a lot for your help EricHallahan#1051: `v4.5.1`:```md | Task |Metric|Value | |-------|------|-----:| |lambada|ppl |7.4978| | |acc |0.5721| ``` `finetune-memory-fix`:```md | Task |Metric|Value | |-------|------|-----:| |lambada|ppl |7.4978| | |acc |0.5721| ``` They look to be identical, at least at small sequence lengths. The only difference I observed was that your patch was 8.4% slower on the benchmark. finetune#0907: That's very promising. If it's slower that means the sequences are long enough for it to do something. It pads to the next multiple of 256 tokens, so it makes sense for it to be slower as well, but 8.4% is quite a bit. Maybe padding to window_size/4 would already help with memory and reduce the speed penalty. I'll look into that. freddiemitchell6#0094: There are faster approx nearest neighbor libraries than FAISS though, like ScaNN (which I believe was used in an improved kNN-LM that attended over the nearest neighbors plus a gating unit). I personally believe kNN-LMs have lots of potential with better approx nearest neighbor libs - at least for applications that don't mind slower inference. EricHallahan#1051: I need to do more testing, but I don't have time for that right now. A single datapoint isn't good enough to really say if it is slower/faster than what is there. `:|` Bruce23#6204: Hi ! 🙂 Can I run GPT-NEO on my webserver, that's using a CPU?
cfoster0#4356: Hey there! 👋 cfoster0#4356: I'd encourage you to take a look at the FAQ on our website, if you haven't already. There's some info there that might answer your question Bruce23#6204: Thanks 🙂 Bruce23#6204: So if my plans are to interfere, my best bet would be the huggingface implementation (if I got this right) Bruce23#6204: If I don't want to train my own models right now EricHallahan#1051: Yes. Bruce23#6204: thank you 🙂 EricHallahan#1051: Update: further testing shows that this could be within run-to-run variance, and if it is anything, it is only around about half as bad as I originally suggested. EricHallahan#1051: More runs required to see if it is statistically significant or not. BIGBOSSHEAD#5071: Really grateful to be here with you guys I just join 💫🙏 paws#3311: https://twitter.com/ml_collective/status/1385392556976971778?s=19 :o Daj#7482: I encourage people attending ICLR to come by! I'm part of the organizing committee and I think it'll be good fun :) Sora#8531: That's really fucking cool! If ICLR this year is going to be virtual (I read so, or is this a misunderstanding?) how would this work? Daj#7482: It'll be a mix of Zoom and gather.town adamShimi#8350: Maybe I'm just bad, but I can't find a schedule link on the website: https://iclr.cc/Conferences/2021 adamShimi#8350: Is it not up yet? Daj#7482: It's not up yet, still being organized, but you can submit RFPs (5 min quick pitches of projects, it's explained well on the webpage in the tweet) Sora#8531: Do we need to pay the fee to ICLR to attend your social? Also, do you know if it's still possible to register for ICLR?
adamShimi#8350: Thanks @Daj Just wanted to look at the kind of topics presented, to see if I'm interested. ^^ Daj#7482: afaik you have to be registered for ICLR, yes. I think registration is still possible Sora#8531: Okay, thanks for the info @Daj ! finetune#0907: Cool, if there ends up not being a significant difference or just a smaller one, that's all the better nev#4905: what's the paper where they use NeRF with CLIP for faces called? 𓅬 gabriel_syme 𓅬#3220: huh I missed that one, let me know if you find it. The only one I'd seen was the putting nerf on a diet paper nev#4905: that's the one! nev#4905: has anyone tried pretraining bert-like language models without masking, i.e. only NSP or similar (like SOP)? or contrastive loss only? EricHallahan#1051: *WE ONLY DO AUTOREGRESSIVE GENERATIVE LANGUAGE MODELS HERE* /s EricHallahan#1051: On a serious note, ¯\_(ツ)_/¯ EricHallahan#1051: I'm not that familiar with them tbh nev#4905: how much will I be immolated if I say that BERT is based EricHallahan#1051: That is very much a joke. nev#4905: that is also very much a joke bmk#1476: re: the MLC thing https://cdn.discordapp.com/attachments/729741769738158194/835199827587235890/unknown.png Louis#0144: btw does anyone have an example of using logits processors to write a custom beam search w HF Louis#0144: im working on a prompt eng paper right now Louis#0144: and I cant find much documentation on this Louis#0144: man Louis#0144: there is absolutely *zero* documentationj
Louis#0144: on logit processors Louis#0144: why is this Louis#0144: there arent even comments Louis#0144: lmao StellaAthena#3530: Google Analytics thinks "EleutherAI Site" and "EleutherAI" are different things, resulting in a duplication of many of our pages in the per-page breakdown. Does anyone more familiar with Google Analytics / web stuff know what's up? https://cdn.discordapp.com/attachments/729741769738158194/835220965234573332/Capture.PNG gwern#1782: the meta/title tags are garbage anyway so I'm not surprised there's weirdness gwern#1782: i would check the dates on that first, it may have been fixed already bmk#1476: its cause you changed the title at some point lol bmk#1476: and for the better, "EleutherAI site" is a horribly redundant thing to have in the title and I'm glad we changed it freddiemitchell6#0094: Has anyone actually downloaded C4 from Huggingface? I can't seem to DL it Sid#2121: the eye mirrored it https://the-eye.eu/eleuther_staging/c4/ rb#3159: Hi everyone, just wanted to know if there is any project in need of collaborators. or any open issue i can look at nev#4905: I forgot nev#4905: is there an eleuther AI project nev#4905: that's basically clip for audio StellaAthena#3530: #sp3 is the audio project cfoster0#4356: CLAP is the name of the project. There are 2 audio projects in the works in #sp3 Aran Komatsuzaki#5714: is either of the projects about generating waveform? cfoster0#4356: Yeah, Eric's project Methane is Aran Komatsuzaki#5714: thanks
nev#4905: what's the dataset used for CLAP btw? nev#4905: or is that not decided yet? EricHallahan#1051: Many cfoster0#4356: There are a couple of datasets I'm pretty sure we'll use because they're so big, incl: Common Voice, SPGISpeech, Facebook MLS, and Spotify Podcasts. Then there are a whole lot of smaller datasets that would require a bit more individual work. And then, we haven't fully decided whether we'll extend beyond English cfoster0#4356: I'm also mildly curious about collecting a whispered speech corpus, so triggerhappygandi might look into it nev#4905: ah, it's speech nev#4905: I imagined something like wikimedia nev#4905: with other natural sounds and captions nev#4905: maybe even music cfoster0#4356: That's totally doable, once we've got the codebase set up nev#4905: great cfoster0#4356: Only reason I suggested speech first is it's easiest and we've got the most data for it nev#4905: there's a lot of potential for audio AI art nev#4905: agreed gwern#1782: _looks sadly at https://15.ai/ which is STILL down for no good reason_ cfoster0#4356: Reminds me: anyone know how many hours of audio/transcripts the PPP has? EricHallahan#1051: I forget, but we should include it. milestones95#9376: Does anyone know how feasible it is to write 2-3 page stories using gpt-neo? And if so, how would you get started? EricHallahan#1051: With or without intervention of a human?
milestones95#9376: with works for a start milestones95#9376: I just want a story that is coherent to start, even if a human is helping EricHallahan#1051: I would say it is possible, but these kind of models tend to be only locally coherent. So you definitely need a human involved in the pipeline if you want to have something decent. milestones95#9376: How long is “local” 10 sentences? milestones95#9376: How do You define local is a better question EricHallahan#1051: I mean like the next sentence can entirely contradict the one before. milestones95#9376: Oh okay EricHallahan#1051: But it gets better with context. milestones95#9376: By context do you mean the initial prompt I give it? jimm#8158: Newbie Question: Where can I find tutorials for training the GPT-Neo model. EricHallahan#1051: All the models we have released have maximum context lengths of 2048 tokens. EricHallahan#1051: Both that and whatever it generates. EricHallahan#1051: Do you mean fine-tune? EricHallahan#1051: It depends on hardware. jimm#8158: I'd like to give the model past erotic stories, so that it can learn to write better ones. milestones95#9376: @EricHallahan do you have time to hop on VC? EricHallahan#1051: I would suggest one of the many Colab notebooks out there, as they tend to be written in that style. A quick search of your favorite search engine or social media platform will likely turn up multiple. EricHallahan#1051: Not right now sorry. `:\` finetune#0907: if you mean finetuning gpt-neo-2.7B on that kind of material, it might already have been done EricHallahan#1051: I highly suspect that it has lol
gwern#1782: (gpt-2-1.5b definitely has been but I haven't herd of neo yet) jimm#8158: Thanks I found this article https://medium.com/geekculture/fine-tune-eleutherai-gpt-neo-to-generate-netflix-movie-descriptions-in-only-47-lines-of-code-40c9b4c32475 finetune#0907: i'm quite sure neo has been too :smiley: gwern#1782: wink wink nudge nudge say no more eh EricHallahan#1051: By the way, I did further testing. The performance difference is masked by run-to-run variance on lambada, so as far as I can tell, it is negligible. Deleted User#0000: @EricHallahan do you have any pointers at what are good ways to make a transformer model "style-conditioned". I saw you mentioning AdaIN, but I'm not sure if it'd work for transformers as well, as it was designed with CNNs in mind. EricHallahan#1051: AdaIN should work fine as far as I know. I wasn't working with tokenized data though. finetune#0907: that's very good to hear, thanks a lot Deleted User#0000: hm, is there anywork which has used it for style conditioning in transformers? finetune#0907: i definitely don't have any suspicious colab notebook on my github EricHallahan#1051: I actually didn't use true AdaIN, but the PyTorch Instance Normalization implementation with the `affine` parameter set to true which I use an embedding to switch between. Deleted User#0000: ah i see, you just hold a finite set of IN layers that you adaptively switch between? Deleted User#0000: and is ur model a transformer model? EricHallahan#1051: It is technically just local attention layers without dense layers in between lol EricHallahan#1051: It also only used a single head. Deleted User#0000: ah ok. but looks closer to a transformer than to a cnn Deleted User#0000: and it works well for conditioning on style? EricHallahan#1051: But it overfit on my data ¯\_(ツ)_/¯ Deleted User#0000: lol hmm EricHallahan#1051: It was able to switch between each of them yes.
Deleted User#0000: well i guess i can try. the other way i could condition is just feeding an extra input token that represents the style Deleted User#0000: maybe i'll try that first hm EricHallahan#1051: Yeah, that is where I am at. EricHallahan#1051: You can do either. Deleted User#0000: or even just add a learned latent to every input, but attention with extra token could simulate that anyway Deleted User#0000: so yea EricHallahan#1051: Oh, right that is another project that sp3 will eventually work on. Deleted User#0000: what, style conditioning? EricHallahan#1051: We wanted to literally just run a LM-style transformer on codec frames. Deleted User#0000: if i find one or the other works better for me, i'll let you know Deleted User#0000: codec frames? Deleted User#0000: how do those look like? why that and not a spectrogram? Sphinx#2092: People have tried with this language, with limited success. Deleted User#0000: have people compared adain vs just feeding a latent as extra token? Sphinx#2092: Using special tokens is how most people do multilingual MT, so yes. Sphinx#2092: Though maybe not adain as-is. I'm thinking something naive like just, using language-specific layer-norms. Deleted User#0000: so i guess extra token is working better? Sphinx#2092: Well, the language-specific layer norm definitely works better, just not that much better. Sphinx#2092: Using language tokens is really not ideal Sphinx#2092: for a variety of reasons.
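A minimal PyTorch sketch of the embedding-switched norm being discussed, i.e. per-style (or per-language) affine parameters selected by an id while the normalization itself is shared. All names here are made up for illustration and are not taken from the papers mentioned above:
```
import torch
import torch.nn as nn

class ConditionalLayerNorm(nn.Module):
    """LayerNorm whose gain/bias are looked up per style (or language) id."""
    def __init__(self, dim, num_styles):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.gain = nn.Embedding(num_styles, dim)
        self.bias = nn.Embedding(num_styles, dim)
        nn.init.ones_(self.gain.weight)
        nn.init.zeros_(self.bias.weight)

    def forward(self, x, style_id):
        # x: (batch, seq, dim), style_id: (batch,) of long ids
        g = self.gain(style_id).unsqueeze(1)
        b = self.bias(style_id).unsqueeze(1)
        return self.norm(x) * g + b
```
The alternative discussed above, prepending a special style/language token to the input, needs no architectural change at all, which is a large part of why it is the common default for multilingual MT.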
Sphinx#2092: It's just really efficient, in terms of parameter count and simplicity. kindiana#1016: have you tried just asking the model nicely :berk: Deleted User#0000: isnt that just feeding extra token, but a nice one? kindiana#1016: like, state what style you would like in natural language Deleted User#0000: yeah, and maybe feeding it an extra input, could allow for interpolation even Sphinx#2092: https://arxiv.org/abs/2004.11867 Sphinx#2092: You could perhaps be interested in that. Sphinx#2092: They looked at using language-specific layer norm and also introducing a language-specific dense layer at the end of the encoder Sphinx#2092: which is pretty ridiculous but alas EricHallahan#1051: > how do those looke like? It would be just like an autoregressive LM. You would just use frames from Codec2 because it is open source, low resource, and pretty good at compression. Feed them in like any other token data. It should be pretty trivial to set up, I bet it is possible to adapt any of the repos to do it. > why that and not a spectrogram? Please read the description of #research and get back to me. Deleted User#0000: i meant how do codec2 frames look like. I just have no intuition, but I'll look for the description in research EricHallahan#1051: It very much depends on what mode. It ranges from 3200 baud down to 450 baud, so obviously the format changes a lot between them. Deleted User#0000: which descrition in #research are you referring to? EricHallahan#1051: > Science isn't about WHY. It's about WHY NOT. Why is so much of our science dangerous? Why not marry safe science if you love it so much. In fact, why not invent a special safety door that won't hit you on the butt on the way out, because you are fired. EricHallahan#1051: - Cave Johnson EricHallahan#1051: Pretty much it is "It should take maybe a few hours of work to set up. Why not give it a shot?" Deleted User#0000: i guess. i'd like to understand codecs a bit better tho lol
EricHallahan#1051: Yeah, I'm looking through the source now to remember what the format is. EricHallahan#1051: ``` FUNCTION....: codec2_encode_3200 AUTHOR......: David Rowe DATE CREATED: 13 Sep 2012 Encodes 160 speech samples (20ms of speech) into 64 bits. The codec2 algorithm actually operates internally on 10ms (80 sample) frames, so we run the encoding algorithm twice. On the first frame we just send the voicing bits. On the second frame we send all model parameters. Compared to 2400 we use a larger number of bits for the LSPs and non-VQ pitch and energy. The bit allocation is: Parameter bits/frame -------------------------------------- Harmonic magnitudes (LSPs) 50 Pitch (Wo) 7 Energy 5 Voicing (10ms update) 2 TOTAL 64 ```
EricHallahan#1051: ``` FUNCTION....: codec2_encode_700c AUTHOR......: David Rowe DATE CREATED: Jan 2017 Version c of 700 bit/s codec that uses newamp1 fixed rate VQ of amplitudes. Encodes 320 speech samples (40ms of speech) into 28 bits. The codec2 algorithm actually operates internally on 10ms (80 sample) frames, so we run the encoding algorithm four times: frame 0: nothing frame 1: nothing frame 2: nothing frame 3: 18 bit 2 stage VQ (9 bits/stage), 4 bits energy, 6 bit scalar Wo/voicing. No spare bits. Voicing is encoded using the 0 index of the Wo quantiser. The bit allocation is: Parameter frames 1-3 frame 4 Total ----------------------------------------------------------- Harmonic magnitudes (rate k VQ) 0 18 18 Energy 0 4 4 log Wo/voicing 0 6 6
TOTAL 0 28 28 ``` EricHallahan#1051: ``` FUNCTION....: codec2_encode_450 AUTHOR......: Thomas Kurin and Stefan Erhardt INSTITUTE...: Institute for Electronics Engineering, University of Erlangen-Nuremberg DATE CREATED: July 2018 450 bit/s codec that uses newamp2 fixed rate VQ of amplitudes. Encodes 320 speech samples (40ms of speech) into 28 bits. The codec2 algorithm actually operates internally on 10ms (80 sample) frames, so we run the encoding algorithm four times: frame 0: nothing frame 1: nothing frame 2: nothing frame 3: 9 bit 1 stage VQ, 3 bits energy, 6 bit scalar Wo/voicing/plosive. No spare bits. If a plosive is detected the frame at the energy-step is encoded. Voicing is encoded using the 000000 index of the Wo quantiser.
Plosive is encoded using the 111111 index of the Wo quantiser. The bit allocation is: Parameter frames 1-3 frame 4 Total ----------------------------------------------------------- Harmonic magnitudes (rate k VQ) 0 9 9 Energy 0 3 3 log Wo/voicing/plosive 0 6 6 TOTAL 0 18 18 ``` EricHallahan#1051: Note that 450 is *technically* experimental. Deleted User#0000: ah interesting. Seems relatively interpretable, which makes me think it's more likely to work EricHallahan#1051: It is 100% interpretable. You could even pass them in with the VQ features if you wanted, which is probably the better idea, but it isn't as stupidly simple to implement. Deleted User#0000: which vq features? as in jukebox ? EricHallahan#1051: Also, 450 is terrible quality, it is really only meant to be a demonstration that it can remain intelligible at these bitrates. EricHallahan#1051: Most, if not almost all pure speech codecs use a codebook. The algorithm quickly searches the codebook based off of the perceptual quality of decoding in an iterative process, and the best entry is the one that is transmitted/stored. EricHallahan#1051: I highly recommend the Speex manual as it describes CELP at a high level very well. https://www.speex.org/docs/manual/speex-manual/ EricHallahan#1051: (The process here is called *Analysis-by-Synthesis*, and it is seems like an inefficient way of of performing this task. However, it is incredibly simple and is completed in real time no sweat by any competent processor, even down to cheap embedded microcontrollers.) CKtalon#7792: can someone tell me how distillation is done for MT models? I've read around and it sounds really trivial (saying it's just training on the output.. what output?), but I'm not sure what to do to actually distill it. StellaAthena#3530: That’s basically it, yeah.
EricHallahan#1051: Just train on the output.
CKtalon#7792: what output?
CKtalon#7792: lol
EricHallahan#1051: From the model.
CKtalon#7792: so i feed in text, and i get out text
EricHallahan#1051: Yes,
EricHallahan#1051: that is what you do.
CKtalon#7792: so, isn't this just training an MT model using MT-translated texts?
CKtalon#7792: and i assume i set smaller hyperparameters for this model?
EricHallahan#1051: :gameryes:
CKtalon#7792: actually, is there a difference between distillation, teacher-student, pruning?
CKtalon#7792: i'm looking into shrinking/speeding up a big MT model
CKtalon#7792: i think the first two are the same. and pruning is more involved?
StellaAthena#3530: T-S is a methodology that can be used to do distilling
StellaAthena#3530: Distilling is a task
StellaAthena#3530: And pruning is a methodology that’s related but distinct
CKtalon#7792: ok, so how is pruning done?
CKtalon#7792: my take is it's looking at which nodes aren't important, and then slowly remove them
StellaAthena#3530: That’s one way to do it
CKtalon#7792: which sounds more involved than "training on the output"
StellaAthena#3530: Or, that’s really the only way to do it. But people have different notions of what it means to be “unimportant” CKtalon#7792: but is "training on the output" equivalent to training an MT model using MT-translated texts, albeit smaller hyperparameters CKtalon#7792: if so, i don't understand why this actually improves the BLEU scores based on the papers i read CKtalon#7792: aren't you using worse data to train a model when you had perfectly fine data to begin with (the corpus used to train the big model) CKtalon#7792: it's like back-back translation kindiana#1016: pruning doesn't help speed (unless you prune a lot) kindiana#1016: you can do distillation on soft model outputs, which helps the student learn more compared to one hot gt CKtalon#7792: can you elaborate? as i said, i only have a rough idea, but don't know the real specifics of what it entails kindiana#1016: https://arxiv.org/pdf/1910.01108.pdf kindiana#1016: this talks about the objective kindiana#1016: maybe not the best paper but you can follow the references CKtalon#7792: ok, thanks. will read it CKtalon#7792: can anyone answer this? haha 45#2247: idea: fine-tuning GPT-Neo on 5-10 cover to save time in applying to jobs 45#2247: (sth like 5-10 sentences input, added incrementally after generating like one paragraph, refreshing 10-50x per sentence) 45#2247: would that be worth it in EV (time not writing cover letters, learning about fine-tuning, insights from trained model, etc.), or just straight impossible? 45#2247: also: not sure if I could pre-finetune on my tweets? It's already pretty good finetune#0907: realistically, you're gonna need at least a few MB worth of cover letter text 45#2247: huum, ok so I should maybe fine-tune on some generally good cover letters online finetune#0907: you can try with less, but if you do one epoch, it won't make a big difference
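For what it's worth, a minimal sketch of the soft-target distillation objective kindiana describes above, roughly following the DistilBERT recipe (the `teacher`/`student` models, temperature `T`, and mixing weight `alpha` are placeholders, not anything from this discussion):
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    # Soft part: KL divergence between temperature-softened teacher and student
    # distributions; the T*T factor keeps gradients on a comparable scale.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard part: ordinary cross-entropy against the one-hot ground truth.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

# Usage sketch: run the big frozen model to get teacher_logits, the small
# model to get student_logits, and backprop only through the student.
```
The difference versus plain "training on the output" is that the student sees the teacher's full probability distribution over tokens, not just its argmax translation.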
45#2247: lmao it's already in production https://www.reddit.com/r/GPT3/comments/ltqxjs/gpt3_cover_letter_builder/ 45#2247: wait, are you saying that with < few MB I'd overfit when doing > 1 epoch? finetune#0907: probably depends on how long you keep training Kharr#7888: You're better off using the "few-shot" technique over finetuning when your data is that limited. 𓅬 gabriel_syme 𓅬#3220: Was curious, do you imagine those techniques, like the generative one you described for e.g., will be possible with multimodal models going forward (as long as one mode is text ofc)? Kharr#7888: It should work for anything. All you're doing is providing a prompt which gets converted into a latent vector and impacts generation of other items via attention. Technically, you can convert an image --> vector using another network and use that as prompt for a text model like GPT. Should work for any modality. 𓅬 gabriel_syme 𓅬#3220: yeah I thought to myself if it works with text it shouldn't be limited to it 𓅬 gabriel_syme 𓅬#3220: thanks, sounds super intriguing especially is these multimodal models come forward (it's totally intriguing now but I'm just not doing NLP heh) Kharr#7888: Have a look at https://arxiv.org/abs/2102.10772 which uses a shared decoder and task specific heads. Pinsith#5697: Private test site ilaw.lk:9000 Runs on gpt2 code 45#2247: so maybe few shots with say 500 words examples, given sub-structure? not sure if there are constraints in context windows etc. ``` these are examples of cover letter: <sep>Job title: X; Name of company: Y; main text: my awesome 500 words <sep>Job title: Z: ... <sep>Job title: W; ... main text: [that's where AI completes] ```
Kharr#7888: You'll have to see how many tokens that is after you tokenize it. GPT-Neo has a context of 2048 tokens so it should fit a few in there. 500 words usually translates into about 800ish tokens depending on how many subwords it has to use. tick#5512: Is there a "how to get started" document somewhere (gpt-neo) EricHallahan#1051: We do not have one, but there are many guides and notebooks out there already. Sid#2121: @tick the readme EricHallahan#1051: I guess the readme tick#5512: I haven't seen any readme EricHallahan#1051: In the repo? tick#5512: Where can I find link to the repo EricHallahan#1051: https://github.com/EleutherAI/gpt-neo tick#5512: Thank you EricHallahan#1051: If you just want to run the models, I suggest you use Hugging Face, you will have a far better user experience. See the FAQ for more details at https://eleuther.ai/faq tick#5512: Cool. Never heard of hugging face before, but will def check out milestones95#9376: In the documentation for setting up training, how do I know if the TPUs are connected? Create your VM through a google shell (https://ssh.cloud.google.com/) with ctpu up --vm-only so that it can connect to your Google bucket and TPUs and install the requirements with pip (see above). milestones95#9376: the command i ran https://cdn.discordapp.com/attachments/729741769738158194/835574502074744853/Screen_Shot_2021-04-24_at_10.54.23_AM.png EricHallahan#1051: Are you trying to just get started? Have you tried HF? It is a lot easier to use. godspeed#4450: Sorry to interrupt but does anyone have a good idea on how to best prompt Gptneo to output a cohesive paragraph of text? milestones95#9376: I want to train the model milestones95#9376: not just use the pretrain
EricHallahan#1051: You mean fine-tune? EricHallahan#1051: There is a big difference. milestones95#9376: what's the difference EricHallahan#1051: Tuning takes an existing model and specializes it toward a certain task. cat_#4534: Try lower temperatures, that usually helps make things more cohesive for me milestones95#9376: can any model be fine tuned to learn any topic? I want the model to understand how to write sex stories lol EricHallahan#1051: I personally consider training to "starting from scratch" with blank slate. milestones95#9376: Oh okay. I wasn't thinking start from scratch EricHallahan#1051: Yeah, that is really hard, that is why we did it for you. `:)` milestones95#9376: thank you thank you. So can you help me with the above question? to get it set up so I can fine tune godspeed#4450: Thank you, is there an editor you use for testing temperatures? cat_#4534: I just change the parameter in the code usually StellaAthena#3530: https://twitter.com/charles_irl/status/1386050080860377088?s=20 EricHallahan#1051: Me, looking at the singularity function yesterday on my exam: EricHallahan#1051: :guilty: bmk#1476: this but it's category theory and basic arithmetic bmk#1476: you, in tears: please, you cant just keep redefining basic mathematical operations as special cases of categories, this has to stop me, eating a burrito and pointing at addition: look, it's a coproduct in a bicartesian closed category StellaAthena#3530: @bmk I'm failing to find it but there's a book on category theory that begins by defining addition and multiplication via category theory
Kazumi#1297: TIL einsum was short for einstein summation EricHallahan#1051: https://xkcd.com/1053/ StellaAthena#3530: Pinned a message. Kazumi#1297: I'm today's lucky 10000 Max Brashear#4099: Not every model can be fine-tuned for every task. For instance, BERT isn’t trained autoregressively so you can't generate text with BERT. If you want to generate stories your best bet is fine-tuning GPT-2 IKEA#9631: Or gptneo, whenever it comes out EricHallahan#1051: I believe he was implicitly talking about generative models. Max Brashear#4099: My b I jumped in mid convo neko#5937: How to do text generation from fairseq megatron 11b? EricHallahan#1051: I have no idea. EricHallahan#1051: I know there is a draft PR for integration to HF, but it isn't done yet. neko#5937: Even that has bad results according to the demo neko#5937: Also how many gpu is needed to run fairseq megatron 11b? neko#5937: How much gpu vram neko#5937: 32gb enough? neko#5937: Even fairseq never answered how to do text generation in their issues >.> EricHallahan#1051: Back of the napkin math says it is only at fp16. neko#5937: How much gpu vram is that EricHallahan#1051: 22 Gigabyte neko#5937: Wow nice
EricHallahan#1051: But your not going to be able to fine tune that because gradients would push you over. neko#5937: I don't need any fine tuning EricHallahan#1051: Just felt that it was worth mentioning. neko#5937: I tried using anton's fork but got stuck at keyerror 'megatron'. I tried using fairseq and got stuck at 'please install the megatron submodule' neko#5937: *anton's fork of HF transformers EricHallahan#1051: My only usage of megatron is the heavily modified version that is NeoX. EricHallahan#1051: So I don't know exactly what the problem is. EricHallahan#1051: But I assume you need to submodule megatron? neko#5937: It's a strange error that has an unanswered issue, so no matter how many times the submodule install is run the same error appears neko#5937: unresolved issues https://github.com/pytorch/fairseq/issues/2719 Oct 11, 2020 How do I generate sentences using the pre-trained model? https://github.com/pytorch/fairseq/issues/3398 Mar 25, 2021 Please install the megatron submodule https://github.com/huggingface/transformers/pull/10301#issuecomment-785720421 Feb 25, 2021 HF low text-generation quality neko#5937: i spent 3 weekends on this and failed to get megatron11b running, tried both fairseq and HF neko#5937: it looks like even HF gave up and decided to do pull request on the very low quality unsolved version, it's a step forward but, surprised me that HF being so experienced and skilled were unable to figure it out easily neko#5937: idk i wasn't even able to run the HF code to do low quality generation neko#5937: i wanted to try using the nvidia megatron repo to run megatron 11b, but they required files i didn't have neko#5937: CHECKPOINT_PATH=checkpoints/gpt2_345m VOCAB_FILE=gpt2-vocab.json MERGE_FILE=gpt2-merges.txt
GPT_ARGS=<same as those in GPT pretraining above> MAX_OUTPUT_SEQUENCE_LENGTH=1024 TEMPERATURE=1.0 TOP_P=0.9 NUMBER_OF_SAMPLES=2 OUTPUT_FILE=samples.json python tools/generate_samples_gpt.py \ $GPT_ARGS \ --load $CHECKPOINT_PATH \ --out-seq-length $MAX_OUTPUT_SEQUENCE_LENGTH \ --temperature $TEMPERATURE \ --genfile $OUTPUT_FILE \ --num-samples $NUMBER_OF_SAMPLES \ --top_p $TOP_P \ --recompute neko#5937: ^nvidia generation code neko#5937: https://github.com/NVIDIA/Megatron-LM#evaluation-and-tasks neko#5937: by any chance do you think i can use their 345M VOCAB_FILE=gpt2-vocab.json and MERGE_FILE=gpt2-merges.txt to run megatron11b?
neko#5937: idk is there any way to run megatron 11b neko#5937: i haven't tried using parlai though https://github.com/pytorch/fairseq/issues/2358#issuecomment-694910124 EricHallahan#1051: I have no idea. If we did, I would think that we would have already tried to fine-tune it on Pile by now. neko#5937: i thought gpt neo uses megatron architecture, interesting that megatron11b was still too challenging StellaAthena#3530: We heavily modded it. And if you’re taking about training, we haven’t trained an 11B model yet EricHallahan#1051: One of our primary criticisms with launching the 1.3B and 2.7B checkpoints were that they weren't very big. We were commonly compared to Megatron 11B and told that, because we did not have the largest publicly available generative model, we were overhyped (which it likely was, and it wasn't our problem to fix it). EricHallahan#1051: Yes, GPT-NeoX is heavily modified. neko#5937: i was able to run gpt neo really quickly, but have zero luck with running megatron 11b StellaAthena#3530: Were you trying to run an 11B GPT-NeoX? EricHallahan#1051: No neko#5937: no, just the fb 11b megatron neko#5937: facebook's model EricHallahan#1051: I have a suspicion that very few people have successfully run Megatron 11B. neko#5937: i tried both the models from facebook and from anton's HF fork neko#5937: yes, the issues are unanswered and have multiple participants neko#5937: https://cdn.discordapp.com/attachments/729741769738158194/835673159856029696/unknown.png neko#5937: (https://github.com/pytorch/fairseq/issues/2719)^ EricHallahan#1051: Half of that is likely because the current GPT-Neo models are not built upon Megatron. neko#5937: i used HF to run gpt neo, it worked easily EricHallahan#1051: Yeah, the HF port is really easy to use.
EricHallahan#1051: I recommend it to everyone who walks in and asks how to run it. EricHallahan#1051: So much in fact that I wrote it in the FAQ: https://eleuther.ai/faq EricHallahan#1051: Yeah, I had been pretty annoyed by this, because I highly doubt that anyone with this criticism has ever run Megatron 11B. Anyway, our training data is really different to GPT-2, Megatron 11B, and the GPT-3 models. It makes a big difference to downstream performance. neko#5937: Gpt neo is the largest usable model imo neko#5937: At least from this perspective EricHallahan#1051: I think that 2.7B is going to be the largest readily obtainable model for a while. EricHallahan#1051: It is really hard to go larger without expensive hardware. neko#5937: Megatron 11b was released over 12 months ago and it's still hard to run it EricHallahan#1051: (Just ask CoreWeave.) neko#5937: Looks like everyone is working on 1T models StellaAthena#3530: More like “1T” models tbh finetune#0907: i'm curious what the biggest size would be that can be run on colab GPUs. i think 6B could still fit, but it'll be hard to actually load StellaAthena#3530: @finetune If you want to figure out what that number is we will train a model that size. It’s on our TO DO list EricHallahan#1051: 2.7B is actually 86.4 billion parameters, if you look at it from the perspective of bits at binary32 EricHallahan#1051: No one has figured out how large we can go. finetune#0907: maybe i'll try some things with untrained models to see what fits then neko#5937: maybe i'm doing it wrong but gpt2xl despite starting off at like 8.6gb gpu vram, will end up maxing out 16gb gpu vram after a bunch of long sequences neko#5937: kept crashing my gpu EricHallahan#1051: I had trouble running GPT-2 XL on Colab.
neko#5937: i wasn't using colab i don't know it's performance on colab EricHallahan#1051: I think it is very close to the limits of what Colab GPUs can handle reliably. finetune#0907: avoiding things being left on the gpu after they're not needed anymore takes some care neko#5937: yeah my implementation leaves stuff on the gpu neko#5937: no gpu garbage collection EricHallahan#1051: Nah, just leave it to the garbage collector, what could go wrong? /s neko#5937: it's ok, at least it works finetune#0907: i was manually loading the weights for the hf 2.7B model directly in gpu because loading it in system memory oomed there StellaAthena#3530: Colab TPUs seem like they can go bigger than GPUs finetune#0907: then i deled the snapshot but it still stuck around finetune#0907: had to iterate over the keys and delete the values too EricHallahan#1051: (I oppose Java explicitly because you can never override the garbage collector.) EricHallahan#1051: They should, but I have never run anything successfully on Colab TPUs. neko#5937: tpu=64gb, gpu=16gb neko#5937: i guess neko#5937: "TPUv2 available from within Google Colab comes with a whopping 180 TFlops, give or take. It also comes with 64 GB High Bandwidth Memory (HBM)."-https://jannik-zuern.medium.com/using-a-tpu-in-google-colab-54257328d7da neko#5937: personally i don't like tpus neko#5937: that's just me though EricHallahan#1051: I haven't even touched our TPUs we access to through TRC, despite theoretically having access to them. neko#5937: yeah my TFRC lab was trying to get me to use TPUs but it was i found it too hard so i gave up and used GPUs
EricHallahan#1051: Though then you have people like Ben who are wizards with them. EricHallahan#1051: They look so attractive until you start trying to use them. kindiana#1016: tpus are more difficult to prototype on, but they make scaling much easier imo EricHallahan#1051: Well, when you have them connected via high speed networking, it does make it a lot easier. godspeed#4450: Hello Everyone! I am trying to get GPT-NEO to produce a 300 word block of text, so I am setting min_length to equal 300. However, gpt-neo is producing less than 300 words. Am I misinterpeting how min_length works? godspeed#4450: here's my code: text = generator(prompt , do_sample=True, min_length=300, temperature=1) EricHallahan#1051: Have you tried changing `max_length`? bmk#1476: length is in tokens godspeed#4450: does a signle token = a single word? EricHallahan#1051: Not always. godspeed#4450: What does a token equal? EricHallahan#1051: It could be a word, a letter, a part of a word/syllable... Louis#0144: I recommend HFs tutorial on how to generate text @godspeed Louis#0144: It’s very helpful Louis#0144: I use it for training with my assistants EricHallahan#1051: I am *very* tempted to put together an official Eleuther guide to using the HF GPT-Neo models. bmk#1476: it would be a single line saying "Go read the official HF docs [here]" bmk#1476: using the hf gptneo models is identical to any other hf model EricHallahan#1051: I just want something that I can say "Read this and be off on your way." bmk#1476: i mean how would it be different from official hf docs
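Picking up the `min_length` exchange above: a minimal sketch of asking the HF GPT-Neo port for a longer block of text (the model name and numbers are only illustrative; both length arguments count tokens, prompt included, rather than words):
```python
from transformers import pipeline

# any of the released checkpoints (125M / 1.3B / 2.7B) works the same way
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

prompt = "The year is 1910."
out = generator(
    prompt,
    do_sample=True,
    min_length=400,   # ~300 words is usually well over 300 tokens
    max_length=500,   # hard cap, also in tokens
    temperature=0.9,
)
print(out[0]["generated_text"])
```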
bmk#1476: https://huggingface.co/blog/how-to-generate point people to this EricHallahan#1051: I don't know, I don't like the HF docs that much. I find them hard to navigate. bmk#1476: or this https://www.kaggle.com/tuckerarrants/text-generation-with-huggingface-gpt2 bmk#1476: or (ugh tds) this https://towardsdatascience.com/text-generation-with-pretrained-gpt2-using-pytorch-563c7c90700 EricHallahan#1051: It would be an ultra low effort thing. bmk#1476: https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-generation or this bmk#1476: good code is self documenting, after all EricHallahan#1051: Until people try to use it because it is so simple they don't need to be able to code. EricHallahan#1051: "No code" solutions are becoming the hot trend today. bmk#1476: writewithtransformer is "no code" bmk#1476: we just need to convince them to add gptneo EricHallahan#1051: I thought you opposed that. bmk#1476: opposed what? EricHallahan#1051: Them using GPT-Neo in WWT. bmk#1476: wat bmk#1476: when did i say that EricHallahan#1051: Twenty one days ago. bmk#1476: link pls EricHallahan#1051: > if you make it so anyone who can type things in can use it, then people are going to start asking you how to fix their broken keyboard EricHallahan#1051: > and since a lot more people are going to be using it, you're going to have net more support requests
bmk#1476: i mean you suggested "no code" and WWT is no code
bmk#1476: also im pretty sure the context of that was me saying that adding to WWT isnt a solution to the request volume
bmk#1476: not that i blanket oppose adding to WWT
EricHallahan#1051: I was saying that self-documenting code cannot be documenting if the person cannot code.
bmk#1476: oh i thought you meant we should go "no code"
EricHallahan#1051: No.
bmk#1476: i mean then i dont see where the disagreement is
EricHallahan#1051: We never disagreed.
chilli#5665: I think it depends on what your setup is
Muennighoff#9764: i still got like 400K openai gpt tokens left for >2 mon; does sb need any? what did u guys do with ur unused tokens?
bolein#8956: I'd take some
Deleted User#0000: anyone has run a model with encoder-decoder attention on TPUs with torch_xla/pytorch lightning? I've tried two implementations (default pytorch one and x-transformer's one) and both failed
janus#0150: lmao thats a great line
janus#0150: bmk is that original?
Deleted User#0000: btw given that TPUs are a pain in the ass for pytorch.. I was wondering about the message I saw above about the possibility of using some of coreweave's GPUs? What should I do to be considered for that? @StellaAthena
Kharr#7888: If you're using the default Transformer from the PyTorch library and it isn't working properly, you should raise the issue with the pytorch_xla team
Deleted User#0000: Yeah I was thinking of doing that now
Kharr#7888: I'm curious what kind of error you're running into. I'm personally training a decoder-only transformer on Colab TPUs with pytorch xla just fine.
Deleted User#0000: yeah the issue only happens when you have encoder-decoder attention. I don't know what the issue is. It just freezes at the first iteration of training; no error. I played with changing the code inside pytorch's MHA layer, and could basically confirm that it only happens with encoder-decoder attention, i.e. when the query is different from the value and key
Deleted User#0000: However, when I started playing with the code, I found some very weird stuff (you can see the tpu channel in TPUPodcast if you wanna see what), so that I really don't know what's happening
Deleted User#0000: like, it would freeze when there was a branch in the code (even though both branches executed the exact same code), but not when there was only one branch
Deleted User#0000: and also it didn't seem to be fully consistent
Kharr#7888: Sounds very strange indeed. I can't think of a reason why it wouldn't work in that case. Might be worth trying to tune some of the Hugging Face models which have encoder/decoder to see if the issue persists with that code as well.
Deleted User#0000: yeah i haven't tried huggingface because i just want a simple transformer. I have tried lucid's x-transformers though, and it also fails, but differently. Rather than freezing it throws an error complaining that some variable "IsIntegral"
Deleted User#0000: hmmm, when i tried to build a dumb minimal example using `nn.Transformer` with encoder-decoder attention, it works....
Kharr#7888: Might be time to debug on cpu 🙂
Deleted User#0000: oh, on cpu or gpu all my models work fine lol
Kharr#7888: It could also be your training loop for TPU. When I first started working with TPUs I constantly ended up with weird locks (especially when breaking training loop to checkpoint)
godspeed#4450: Good morning, everyone! So I wrote a half-page novel script. I discovered that it begins to break down and lose meaning after about three sentences. So, while decreasing the temperature, I ran a recursive script that fed each output back into the parameters as a prompt. But I'm still getting unreadable text; is there something wrong with my logic? How can I ensure greater cohesion by increasing fidelity?
godspeed#4450: ```python
prompt = "The year is 1910. Adolf Hitler, a troubled artist, has survived hundreds of assassination attempts by time travelers, but this one is special. This traveler has no desire to assassinate Hitler. He wishes to show him how to paint."
outputs = []
temperature = 1
min_length = 75
text = generator(prompt, do_sample=True, min_length=min_length, temperature=temperature)
outputs.append(text)
for i in range(3):
    temperature -= 0.05
    min_length *= 2
    print('I AM MIN LENGTH, ', min_length)
    text = generator(outputs[i][0]['generated_text'], do_sample=True, min_length=min_length, max_length=min_length, temperature=temperature)
    outputs.append(text)
    prompt = text[0]['generated_text']
```
alstroemeria313#1694: what's the best way to do gradient descent w/ line search?
Louis#0144: Coordinate descent
alstroemeria313#1694: so also like, the answers are constrained to the unit sphere
Louis#0144: So you want to restrict yourself to when derivatives are rotations?
Louis#0144: Something something SE(n)
Deleted User#0000: there is a recent optimizer called Nero, that only rotates weights
StellaAthena#3530: Tbh this seems pretty easy to hardcode lazily
StellaAthena#3530: Like, can’t you just do coordinate descent in spherical coordinates?
Louis#0144: Ye that’s what I was running
Louis#0144: Thinking *
chilli#5665: I can probably answer this
chilli#5665: It sounds like you're running into recompiles
Deleted User#0000: im now on the process of finding a minimal example~
Deleted User#0000: i tried going bottom up for it, but i didnt find the problem, so now im going top down
Deleted User#0000: reducing my code down
chilli#5665: There are some debug flags you can set
chilli#5665: https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md
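One quick way to confirm the recompile theory, assuming a stock torch_xla install: print the metrics report between steps and watch the `CompileTime` counter (a minimal sketch; the flags in the troubleshooting doc above give more detail):
```python
import torch_xla.debug.metrics as met

# ... run a couple of training steps here ...

# Dumps accumulated counters/metrics; if the number of CompileTime samples
# keeps growing step after step, XLA is recompiling the graph (typically
# because tensor shapes keep changing, e.g. varying batch or sequence sizes).
print(met.metrics_report())
```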
Deleted User#0000: so...... all this debugging to find out that it wasnt completely frozen. It just seems that when using decoder-encoder attention, its time to compile grows (verrrry fast) with batch size. So that with batch size of 1 it does it pretty quick, but for batch size 32, it takes so long that TFRC complains about my TPU being idle for too long xD. With normal just encoder or just decoder attention this doesnt happen though Deleted User#0000: so I just made the batch size smaller, and now it seems to be working~ chilli#5665: 😆 chilli#5665: yeah, sometimes XLA takes a very long time to compile chilli#5665: lol chilli#5665: If you've seen the mlperf timings, some of the programs take longer to compile than run Deleted User#0000: yeah.. lol Deleted User#0000: so~ was this message for real? Louis#0144: Ask Stella Louis#0144: She’s in charge of that stuff EricHallahan#1051: Not entirely. Louis#0144: I have no authority there EricHallahan#1051: Not really. Deleted User#0000: ah ok Deleted User#0000: :P. Deleted User#0000: would've been nice EricHallahan#1051: Thing is that most of our pods are being used right now. EricHallahan#1051: We had quickly expanded with #carp, #sp3, #vision, and realized that we might have been a little aggressive with offers. Louis#0144: If it’s multimodal grounding related then I can organize something with you because we want to run some experiments with that bmk#1476: * most of our pods are *claimed*
Louis#0144: Multimodal grounding does not need six GPUs btw bmk#1476: they aren't all being used Louis#0144: We at most only need four EricHallahan#1051: tru Louis#0144: So I can spare some GPUs if it’s grounding related Sora#8531: Serious question, where would you guys recommend to go if you need some GPUs for performing experiments? Any website or provider in particular? Louis#0144: Colab Louis#0144: If you want to be up and running ASAP EricHallahan#1051: I think we should start making several users lol guac#4716: sora uses colab lol try gcp if you want to flex on some tpus Louis#0144: true Deleted User#0000: Well, it will be multimodal grounding related, in the future, as I want to use my same model, to expand on the "Grounding language from play" work, but at the moment I'm using it to do music-conditioned dance generation. Siggraph deadline is quite soon, which is why I want compute power xD. I would like to publish this work as soon as I can, because I really wanna open source it sooner rather than later~ Sora#8531: Colab offers at most one gpu and is not even instant Sora#8531: Im talking about a pack of at least 2-4 gpus if not 8 EricHallahan#1051: Just `mchorse` is probably not a good idea. Louis#0144: I’m interested Louis#0144: Let’s talk some time Louis#0144: If you have a serious plan I would love this Deleted User#0000: Yeah I'm quite serious Deleted User#0000: we can talk when u like
Sora#8531: I know you need to pay but Im serious since the academic provider where I live charges 4 times as much as what I found with a brief google search, but I was curious if you guys know any cloud providers who are trustable and you've used in the past EricHallahan#1051: We obviously are biased towards CoreWeave, but that is because they have very nicely given us compute. EricHallahan#1051: #art uses vast.ai a lot EricHallahan#1051: Maybe ask there. Sora#8531: Thanks. I just looked it up. It says 1 v100 for 0.60 USD. For your reference where I live the academic cloud provider costs like 3/hr EricHallahan#1051: Yeah, the GPU compute servers I can access through my university are paltry and way overpriced in comparison to what you can get commercially. EricHallahan#1051: Do you mean like image synthesis or doing manipulation of rigging? EricHallahan#1051: I think the later is more interesting but harder to get data for. Louis#0144: Yeah guille and I are chatting Louis#0144: I think it’s the latter EricHallahan#1051: Yeah, the latter is definitely more interesting and should be lower bandwidth. Deleted User#0000: yea Louis asked me, so im writting a proposal; it'll summarize also what ive done already. It is manipulating the rig, not image synthesis Louis#0144: Do@you know zhiyu Louis#0144: He does something similar Deleted User#0000: i dont think so Deleted User#0000: whois Louis#0144: Ah he left the super Deleted User#0000: eh? Louis#0144: Look up zhiyu Lin at Georgia tech Louis#0144: He’s in my lab
Deleted User#0000: https://scholar.google.com/citations?user=_YsSQ6gAAAAJ&hl=zh-CN ? Deleted User#0000: ah. thanks for sharing. I didn't know him Louis#0144: He does osu! PCG research Deleted User#0000: lol. and i worked on beatsaber PCG Louis#0144: Used to do DDR research Deleted User#0000: loool Deleted User#0000: no way Louis#0144: Oh wait you worked on that beat saber AI didn’t you Louis#0144: The other discord Deleted User#0000: half of my model was based on the dance dance convolution Deleted User#0000: yeah Louis#0144: WAIT Louis#0144: you have met zhiyu Louis#0144: I introduced you two before Louis#0144: LMAO Deleted User#0000: wat Deleted User#0000: when? Louis#0144: he’s in that server too Deleted User#0000: lol Louis#0144: Last year
Deleted User#0000: wait Deleted User#0000: i cant remember Deleted User#0000: u are in that server too? Louis#0144: I was Louis#0144: Zhiyu and I wanted to make a beatsaber PCG RL model using inverse kin Louis#0144: So like model based RL for PCG Louis#0144: We didn’t have find tho Deleted User#0000: do you mean using a surrogate player to judge the levels? Louis#0144: Yeet Deleted User#0000: yeah i thought about that too Louis#0144: That’s what he’s doing for osu! now Louis#0144: It works rly well Deleted User#0000: oh i see Deleted User#0000: nice Deleted User#0000: thats pretty cool Louis#0144: I know your prior work then Louis#0144: I’ve read your DDR paper Deleted User#0000: and i guess he's using some measure of intermediate difficulty to judge levels Louis#0144: Yes Deleted User#0000: or interestingness
Louis#0144: Yep Deleted User#0000: no that isnt mine Deleted User#0000: i only did the model for beat saber Louis#0144: Oh Louis#0144: Wait ok so you worked with the DDR person? Louis#0144: The Stanford dude? Deleted User#0000: Donahue? Louis#0144: I can’t remember his name for the life of me Louis#0144: Yes Deleted User#0000: Not directly. We've spoken quite a few times, while I was doing the deepsaber project Louis#0144: Wasn’t he in the discord tho Louis#0144: He was always active there Deleted User#0000: yeah Deleted User#0000: he then made his own beat saber level generator Louis#0144: Beat sage Louis#0144: That was him Deleted User#0000: and pushed it much harder, publicity wise Deleted User#0000: yeah Louis#0144: Right? Louis#0144: Ah ok
Louis#0144: So zhiyu knows the beat sage devs Louis#0144: I thought you meant you were donahues coworker Deleted User#0000: nop. we just have talked/helped each other Louis#0144: Ah ok Louis#0144: I went from thinking you were Donahue to thinking you worked with Donahue to now Louis#0144: LOL Deleted User#0000: lul Deleted User#0000: hopefully its not with increasing disappointment lol Louis#0144: donahue is a beast Louis#0144: its very hard to compare to him Louis#0144: LOL Louis#0144: so no extra disappointment after that step Louis#0144: dw Louis#0144: if u were donahue I wouldnt have bothered asking for a proposal Louis#0144: 😛 Sphinx#2092: Chris Donahue? Louis#0144: Yes Sphinx#2092: Interesting. I emailed him many years ago. He helped me with my first dl project. Louis#0144: Ya he’s v friendly Sphinx#2092: But I was too dumb and mostly just wasting his time lol
Sphinx#2092: At least he liked my NES music generations Kharr#7888: Colab gives you 8 TPU V2 cores with 8GB memory each (works out to about 15ish if you use bfloat16) if you want to figure out how to use them. With mesh TF you can train some very large models. It is a lot of compute. AI_WAIFU#2844: What has deepmind been up to? The two most notable things I can think of both came out over a year ago, Agent57, AlphaFold2. Deleted User#0000: altho it didnt produce much buzz, their learning interactive intelligence paper is one of my favourites they've put out AI_WAIFU#2844: link? Deleted User#0000: https://arxiv.org/abs/2012.05672 Deleted User#0000: worth checking their video demos https://www.youtube.com/watch?v=510xBEcef_o&ab_channel=TimothyLillicrap nz#9710: That one was absolutely awesome IMO Deleted User#0000: they spent like 200K dollars to get the data. Could we match their data size on a more interesting environment with only free volunteers? hmm bmk#1476: can i glue a camera to my head and walk around town gwern#1782: the amount of money was really quite something. I keep waiting for a followup. you don't spend $200k on VR datasets like that and just... use it once, do you? Deleted User#0000: $200k on VR datasets ? Deleted User#0000: they didn't use VR here Deleted User#0000: (tho i wouldnt be surprised at all if they are working on that) Deleted User#0000: you can get VR and walk around VRChat IKEA#9631: imagine getting paid to fuck around in vrchat while using some anime/pony/furry avatar Deleted User#0000: (ive actually paid people to do that) AI_WAIFU#2844: Alright, this is pretty impressive, but I'm dissapointed both at how they chose to tackle the problem and the sheer amount of hacks deployed to make it work. Deleted User#0000: what do you mean? AI_WAIFU#2844: For starters, imitaiton doesn't seem to be the way to go, at least in general, It requires you to drop a bunch of cash on data collection, and that's if getting humans to do the same thing is even possible.
AI_WAIFU#2844: Then they have a bunch of regularization schemes and aux losses they use to make it work Deleted User#0000: they do behavioural cloning and then GAIL, a small tweak of GAIL (to be more like the learning from human preferences work) could push beyond the demonstrator's performance Deleted User#0000: well i thought that was interesting. They show that contrastive auxiliary losses give even more improvement than GAIL Deleted User#0000: also, I just haven't seen any method be able to achieve anything like this that isnt based on imitation learning. I think that's why they are doing it, we havent found any alternative AI_WAIFU#2844: Right, but by going that direction, you forgo the possibility of having a near-AGI tier system that you can deploy in an environment and expect it to learn to be useful, instead you have this very expensive and laborious process that relies on humans to do most of the heavy lifting, and then your copying that. Deleted User#0000: thats the starting point not the end Deleted User#0000: they say that in their paper. First get agents interesting enough that humans will want to interact with them, and then learn online from the actual interactions Deleted User#0000: They are tackling step 1 AI_WAIFU#2844: That's fair. We'll see where it goes. bmk#1476: should i do a test run of taping my phone to my bike and posting a short clip of it for quality check to see if e.g the jitter is too much EricHallahan#1051: ¯\_(ツ)_/¯ AI_WAIFU#2844: nah just go on youtube and type in "fpv gopro bmx" Deleted User#0000: level up Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/836028884876263455/unknown.png Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/836028908951306272/unknown.png Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/836028933458755634/unknown.png Deleted User#0000: (thats a friend, not me btw) bmk#1476: no, the entire point is testing feasibility of not purchasing expensive equipment AI_WAIFU#2844: the typical phone camera now-a-days is probably on-par with a gopro AI_WAIFU#2844: the latter are just more durable
guac#4716: this dude is living in 2085 AI_WAIFU#2844: I mean shit I was using them for this kind of stuff *over a decade ago*. StellaAthena#3530: Actually, the VR simulation is set to 1941 bmk#1476: yeah but i have all this duct tape, would be a shame not to use it EricHallahan#1051: I like to think it is somewhere between June 1944 and May 1945. bmk#1476: the cat ears tho bmk#1476: :catgirl3: bmk#1476: :goosegirl: Louis#0144: yeah wtf Louis#0144: @Deleted User get urself some goose ears Louis#0144: smh Deleted User#0000: i have a goose avatar in vr bmk#1476: :goose: bmk#1476: wait bmk#1476: i have a radical proposal Deleted User#0000: free radical bmk#1476: what if we combine cat ears and goose Deleted User#0000: proposal bmk#1476: catgoosegirl Deleted User#0000: hmm
gwern#1782: ...do they *do* something? like are they wifi antennas Deleted User#0000: i like to imagine they let him hear electromagnetic waves~ bmk#1476: *visible confusion* cat ears need to do things? Louis#0144: all cats actually have wifi alexyz#3459: those are called eyes Louis#0144: to enable it you need to hold the cat above your head and scream activate Deleted User#0000: every sense is a kind of eye bmk#1476: i look forward to a video of you doing this Deleted User#0000: even touch is just electric field measurement Louis#0144: ACTIVAAAATE Deleted User#0000: does it do this sound then? https://www.youtube.com/watch?v=xQ49jtlz_3I Deleted User#0000: actually caracals have antennas on their ears for real Deleted User#0000: https://www.youtube.com/watch?v=wQSvsEajxSU Louis#0144: so Louis#0144: my decode method Louis#0144: is 108sec/tok Louis#0144: on 2x v100s Louis#0144: 🙂 Deleted User#0000: that sounds bad Louis#0144: nah its actually much faster than before
Louis#0144: it used to be 2hrs per token Deleted User#0000: kek? Louis#0144: sadge Deleted User#0000: what tokens are these? Deleted User#0000: wikipedia per token? Louis#0144: just normal english Louis#0144: but every token does an NLI step Deleted User#0000: ahm Louis#0144: its zero shot planning as a decode method Louis#0144: weird shit Deleted User#0000: oh i see Deleted User#0000: then i guess it makes sense then Trainmaster9977#3932: is there any good way to finetune gpt-neo? EricHallahan#1051: Depends on your hardware. Trainmaster9977#3932: is it possible on colab? Louis#0144: a bit EricHallahan#1051: Depends on what model, but I know that 1.3B should be tunable with Colab TPUs. Louis#0144: if youre willing to use a lot of tricks and wait a long time Louis#0144: tbf though Neo's main benefit is the prompt engineering side Trainmaster9977#3932: i mean I tried prompts because couldnt get any finetuning working but....can't use thousands of words as a prompt
Trainmaster9977#3932: or at least couldnt work for me Trainmaster9977#3932: so hard to use as a replacement anyways Louis#0144: you typically do not need a prompt that long Louis#0144: try prompt eng + a pipeline method Louis#0144: like an EM algorithm Louis#0144: or ranker Louis#0144: thats what I do Trainmaster9977#3932: gonna be honest. i just came here to try to figure out how to train it on my own dataset, and I have no clue what you're talking about. EricHallahan#1051: (I don't either) EricHallahan#1051: Prompt engineering is useful only until a point. EricHallahan#1051: I am not too familiar with tuning our GPT-Neo models, but there is a Colab notebook in the repo that should be able to handle it IIRC. gwern#1782: (don't we have an FAQ for these questions yet) EricHallahan#1051: Nope. Trainmaster9977#3932: honestly tried and. couldnt get it to work, at all. tried tons of different notebooks and methods but nothing worked......and while you said you dont exactly do technical support, I had no other ideas so. figured I might as well ask here EricHallahan#1051: Yeah, I don't blame you. EricHallahan#1051: You will need to have a GCP account to store that data in though if you go the Colab TPU route, because for some reason Colab TPUs cannot read from Colab storage. EricHallahan#1051: ¯\_(ツ)_/¯ EricHallahan#1051: No idea why they force you to do that. Louis#0144: did u get memed into a lower VRAM GPU Louis#0144: that happens sometimes on colab
Trainmaster9977#3932: google cloud platform? made an account, still did not work, at all Trainmaster9977#3932: also tried multiple times so I assume not Trainmaster9977#3932: I have colab pro ftr. Louis#0144: I mean youre gonna need to be more specific than that Trainmaster9977#3932: well, more specifically, EricHallahan#1051: That is useful to know. Louis#0144: also if you do not mind can we move this to #off-topic Trainmaster9977#3932: sure! EricHallahan#1051: No, it is fine here. Louis#0144: oh ok EricHallahan#1051: I've heard people have success with this repo: https://github.com/Xirider/finetune-gpt2xl UnsupervisedLearner#4148: I been having thinkings on the nature of intelligence systems, language models, and information retrieval. So, GPT-N is impressive to people from both query and synthesis perspective. No one is surprised with google returning a relevant link to your query. Often with highly specific knowledge. You would be highly surprised if google returned a fresh-baked article for you written in clear clean language. This is where GPT models are unchallenged. But this synthesis is in a sense a response to a sort of query. The prompts for question answer and for news articles are just priming differently. So can we consider these giant models as a sort of differentially compressed datacenter? In that sense, can we apply datacenter techniques for scaling these knowledge centers? Has anyone been working on something like this?
Naively, I see this happening like: User: can you write me an article about pickles? -> interpretation -> soft prompt model -> routing model decides on information it needs to fulfill soft prompt -> query models x, y, z -> synthesis model fulfills user request EricHallahan#1051: Are you talking about retrievers? UnsupervisedLearner#4148: So in this sense instead of One Big Model™ you would have lots of narrow models each specialized to particular aspects of the process. Some for interpreting humans, some for storing neurally compatible information, and some for extraction of neural information, and finally some for transforming this 'neural' information into human interpretable/useful information EricHallahan#1051: i.e. MoE? Louis#0144: So interpretable MoE Louis#0144: ...? UnsupervisedLearner#4148: I suppose something similar Louis#0144: MoE with a retriever head sounds cool but I’m not sure it would work Louis#0144: Especially with current retrievers UnsupervisedLearner#4148: It seems like transformers are compressed knowledge centers with little mlps attached UnsupervisedLearner#4148: In that sense, it would make sense to treat them as a datacenter is treated. UnsupervisedLearner#4148: So instead of having to have these super expensive queries where you feed images and video and text through the entire giant GPT
You have a smaller model to design queries to the giant model. Limit information movement to the bare minimum needed to fulfill query UnsupervisedLearner#4148: Yeah it's not a fully baked idea yet UnsupervisedLearner#4148: But this Transformer-as-datacenter thing seems fruitful to me EricHallahan#1051: I would have used :thonk: if I thought that. EricHallahan#1051: I'm looking for a 🕳️ neko#5937: @UnsupervisedLearner sounds like #deleted-channel neko#5937: eegi = steer large pretrained models using comparatively small auxiliary networks to learn to do sophisticated, useful tasks EricHallahan#1051: Not really IMO. EricHallahan#1051: As far as I can tell it has nearly zero relation. EricHallahan#1051: But what do I know, I am not really sure what their goal is exactly. neko#5937: to explore the capability of our NLP models when learning from human feedback EricHallahan#1051: But there is no human feedback element. neko#5937: The idea is to create a web interface where users could rate the quality of the NLP model output (like summarization) and use this data to improve the model EricHallahan#1051: That is why I have trouble seeing why they relate. EricHallahan#1051: "Users" are implied to be human. cfoster0#4356: The *datacenter* thing doesn't have a human feedback element, not eegi neko#5937: https://cdn.discordapp.com/attachments/729741769738158194/836106236452798474/eegi.jpg neko#5937: ^eegi
neko#5937: oh i get it EricHallahan#1051: I know what the concept is, I don't know what they are doing under the hood to do it. neko#5937: Replicate Learning to Summarize with Human Feedback neko#5937: "We want to not only try PPO, but also Babble and Prune and PPLM" neko#5937: oof i'm just copy pasting the docs at this point neko#5937: i wish there was a more decentralized approach to this neko#5937: EleutherAI is the first time i've seen NLPers talk positively about RL tbh neko#5937: I like RL personally but kinda surprised people are receptive to it here paulbricman#2527: > Has anyone been working on something like this? I've been messing around with combining GPT-2 (switching to GPT-Neo soon) with several retrieval models here: https://psionica.org/docs/workshop/dual/#architecture So you have this knowledge base (in this case it's based on your working notes). And then when you ask it a question, it first retrieves relevant notes, concatenates them into a prompt, appends the query, and then goes on generating the response. Currently working on extending this in some exciting ways with other models in a user-defined way, hmu if you're interested in this stuff _wink_ EricHallahan#1051: I would say that most of us have *negative* experiences or opinions of RL in NLG, as it tends to be very difficult. However, I think we all at least see the benefit to what RL can provide to downstream performance and keep an open mind to it's potential applications. neko#5937: ok EricHallahan#1051: Again, I don't speak for everyone here, but I think I have a pretty good idea now that I am almost three months into all this. neko#5937: gotcha Louis#0144: Is there anything in the pangu alpha code base thats of interest to us Louis#0144: Or no EricHallahan#1051: Not the code, see #research as to why.
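A rough sketch of the retrieve-then-generate flow paulbricman describes above (everything here is a placeholder: `embed` could be any sentence-embedding function, `notes` any list of strings, and `generator` e.g. the HF text-generation pipeline):
```python
import numpy as np

def answer(query, notes, embed, generator, k=3):
    # Embed the query and all notes, rank notes by cosine similarity,
    # pack the top-k into the prompt, then let the LM continue from the query.
    note_vecs = np.stack([embed(n) for n in notes])
    q = embed(query)
    sims = note_vecs @ q / (np.linalg.norm(note_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(notes[i] for i in np.argsort(-sims)[:k])
    prompt = f"{context}\n\nQ: {query}\nA:"
    return generator(prompt, do_sample=True, max_length=256)[0]["generated_text"]
```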
Louis#0144: Just skimming it now it looks kinda conventional EricHallahan#1051: The architecture is nothing new. Louis#0144: Yeah Louis#0144: I see cfoster0#4356: Hey! Your project looks very very cool, not gonna lie. What do you think of Ought and the stuff they're building? Aran Komatsuzaki#5714: was working on this a while ago. this is a relevant paper by a third party: https://arxiv.org/abs/2007.01528 Aran Komatsuzaki#5714: not working on it anymore tho 😦 paulbricman#2527: They're building cool related stuff as well, and we're actually collaborating because we're both non-profits. Andreas had some really pertinent feedback for Dual, such as: https://github.com/Psionica/Dual/issues/45#issuecomment-825586492 paulbricman#2527: That's super interesting, will take a look, thanks! CKtalon#7792: just days after alibaba's PLUG, huawei one ups them CKtalon#7792: https://twitter.com/cHHillee/status/1386541912279064578 EricHallahan#1051: It was discussed extensively in #research. CKtalon#7792: ah EricHallahan#1051: Notable for two reasons: - Chinese custom silicon - Chinese dataset EricHallahan#1051: Otherwise the 200B model is designed to be a flex for the purpose of marketing. CKtalon#7792: because less than a week ago, alibaba announced a 27B model for Chinese CKtalon#7792: a 27B parameter GPT-like model using 128 A100 for 120 days (300B tokens) for the Chinese language CKtalon#7792: Huawei did a bigger model, but on less tokens
CKtalon#7792: https://nlp.aliyun.com/portal#/BigText_chinese it's accessible here, but you need an AliCloud account (and probably pay, akin to OAI) IKEA#9631: Huawei makes silicon for AI workloads now?? :thonk: EricHallahan#1051: Always have been. IKEA#9631: goddayuuuuum :ultrazucc: https://cdn.discordapp.com/attachments/729741769738158194/836215889861935164/unknown.png nev#4905: @Deleted User de23c58c how to be awesome like you? alstroemeria313#1694: write lots of code inox#5400: how https://cdn.discordapp.com/attachments/729741769738158194/836230134892855326/Screenshot_2021-04-26_9.19.44_AM.png EricHallahan#1051: Because he is not human obviously. EricHallahan#1051: He is a dog. inox#5400: please no one tell me this is older than most of the people here https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog alstroemeria313#1694: i remember it! Louis#0144: I wonder what happened last may UnsupervisedLearner#4148: Definitely interested but have a lot on my plate the next few months 😦 Louis#0144: Mohit Louis#0144: I recommend talking to Mohit paulbricman#2527: What/who's Mohit? User on this server? Org? Louis#0144: No he’s a prof at umass Louis#0144: Super smart Louis#0144: Does lots of works with big LMs and retrieval Louis#0144: Kalpesh is also good to talk to
Louis#0144: I’m friends with him Louis#0144: He’s v nice and v helpful Deleted User#0000: truth is, this (and ice cream) is how i stayed sane during this pandemic Deleted User#0000: hopefully that becomes more white as the city reopens and i go out more chilli#5665: well, I think the actual compute put into it is also fairly impressive EricHallahan#1051: It is. EricHallahan#1051: Just not as much as GPT-3. chilli#5665: I agree chilli#5665: but "second most compared to GPT-3" is still impressive chilli#5665: and a big deal EricHallahan#1051: OpenAI spoiled it all. Mike#1327: months ago I bookmarked gettingstarted.ml, I visited it today and noticed it was down, do any of you where it has been transferred? EricHallahan#1051: ¯\_(ツ)_/¯ EricHallahan#1051: I've never heard of it TBH. Mike#1327: I am really sorry, a friend sent it to me and told me that someone from here told him about it EricHallahan#1051: I'm looking through the logs, but I don't see anything mentioning it here. It definitely looks to be down though, because it just redirects to an advertisement for some shady DNS service. The only reference I see to it is https://github.com/getting-started-ml/getting-started-ml.github.io and that just redirects to the aforementioned page. Mike#1327: thank you, I looked into it also, looking at the repo I figured that it wasn't what I was looking for anyway, sorry for wasting your time EricHallahan#1051: No problem.
Vova Zakharov#2625: Morning fellas EricHallahan#1051: I haven't seen you for a while. I would say good morning... but it is almost midnight local. :berk: FishofFlight#3096: GPT-NEO maaayyyy have a large influx of users soon EricHallahan#1051: Why? FishofFlight#3096: AI dungeon is doing a lot of censorship FishofFlight#3096: Lots of people are disgruntled bmk#1476: anyone can make their own ai dungeon competitor using gptneo EricHallahan#1051: Ah, I had heard so thing about that, but I haven't really paid any *attention* to it. EricHallahan#1051: (Sorry, more attention jokes) bmk#1476: at the moment we're not really doing work on any downstream applications with neo EricHallahan#1051: I have never used AI Dungeon before tbh bmk#1476: so don't expect us to build any ai dungeon competitor ourselves lol bmk#1476: me neither lol EricHallahan#1051: lol cfoster0#4356: Tbh the most likely company to use Neo to build the next AI Dungeon is Latitude :berk: bmk#1476: honestly probably bmk#1476: that would free them from OA's requirements EricHallahan#1051: Like our goal is to effectively give others the ability to do what they please. bmk#1476: also unrelated but i just checked and we now have more members than fast.ai server bmk#1476: next target: discord science network ai server
EricHallahan#1051: How much larger is that? bmk#1476: we need to double in size bmk#1476: and then double in size again to beat ai dungeon server EricHallahan#1051: Wouldn't be impossible. bmk#1476: yeah EricHallahan#1051: Like I could see at least one of those happening within the next six-to-eight months. EricHallahan#1051: Especially if we get another wave from a model release. :ptsd: bmk#1476: pls no EricHallahan#1051: Though I highly expect future model releases to be far tamer because I expect that the number of people that can run models beyond 2.7B drops off a cliff. EricHallahan#1051: Like 2.7B is the limit for realistic local consumer use. bmk#1476: it better bmk#1476: i dont want a repeat of the whole 1.3B/2.7B release fiasco :ptsd: bmk#1476: that one especially took us by surprise, lol bmk#1476: if we had known it would cause so much disruption we probably could have planned a bit betterfor it EricHallahan#1051: Unless NVIDIA either makes multi-GPU support good on GeForce or comes out with something better than the 3090, this is the case IMO. Teemochu#8740: A $10k computer is no longer "realistic" though Teemochu#8740: especially when >half that cost doesn't marginally improve gaming EricHallahan#1051: I don't know how, because I would have bet a significant sum (read: 1 USD) on that happening. bmk#1476: see, i see training bigger models as an absolute win:
- less attention because of the impracticality - more useful for our own research projects (EEGI, scaling laws, etc) EricHallahan#1051: Hence you get Megatron 11B bmk#1476: didnt you arrive here *after* model release tho Teemochu#8740: I'd absolutely love a GPT-3 that could be finetuned on a computer with a single [American 120v] power supply. I'd also love a purple flying unicorn. bmk#1476: hindsight 20/20 something AI_WAIFU#2844: It's the porn isn't it? They refuse to let it serve it's one true purpose. bmk#1476: yeah, about the unicorn, i know exactly why you, uh, would EricHallahan#1051: The thing is that nobody knows how to get it working. bmk#1476: $20 says that even if you get it working, it will be way worse quality than even gptneo-2.7B despite being like 3-4x bigger. if the result isnt patently obvious, this bet can be settled through blinded a/b testing Teemochu#8740: purple flying unicorn, cute little orange pegasus, same thing EricHallahan#1051: I arrived here on 2021-01-27. Teemochu#8740: https://app.inferkit.com/demo uses 11b zphang#7252: I'm not sure if running GPT-Neo would be cheaper than paying for GPT-3 Teemochu#8740: If you can buy the hardware yourself you'll probably do better by a *long* way after a few months zphang#7252: you'd have to pay those costs upfront + have engineers to set up + maintain them bmk#1476: highly sus given how bad the generations from megatron11b ive seen are bmk#1476: maybe they fine tuned it Teemochu#8740: they directly claim to be using 11b, it may be finetuned though zphang#7252: In theory it could be cheaper, but I think it takes a huge investment before it starts being cheaper than just paying OpenAI per query
EricHallahan#1051: But if you will be serving a lot of queries, then it will quickly become attractive if you can deal with the logistics. zphang#7252: from what little I've heard I don't think Latitude can handle such an undertaking yet bmk#1476: why speculate when you can have it from the horse's mouth bmk#1476: we have wau in here, we can ping him Teemochu#8740: I saw "attractive" and "horse" and then had to read the rest. Thank you brain. zphang#7252: on the flipside, coreweave+gptNeo might be competitive Teemochu#8740: also re:AI Dungeon, sorry to hear that, hadn't actually used the Dragon model and I guess I still won't Teemochu#8740: > I think it's kinda based that AI Dungeon community guidelines (which are similar to Discord's in their scope) basically have a section saying "hey these rules only apply to our forum and stuff you share, have fun by yourself as you wish" - Me, 2 weeks ago, on another server. Guess I was technically referring to a counterfactual world. :soweary: finetune#0907: guess i got my gpt-neo dungeon notebook working just in time Louis#0144: yo EricHallahan#1051: yo Louis#0144: there is a paper on training self driving cars with online learning EricHallahan#1051: ¯\_(ツ)_/¯ Louis#0144: I thought karpathy wrote it but Im wrong EricHallahan#1051: ¯\_(ツ)_/¯ Louis#0144: does anyone know the paper Im referring to EricHallahan#1051: ¯\_(ツ)_/¯ EricHallahan#1051: :berk:
Louis#0144: thanks eric Louis#0144: helpful as always EricHallahan#1051: Your welcome. Louis#0144: my welcome what EricHallahan#1051: No, If you find it, definitely share it here. finetune#0907: I'm seeing what kind of model size I can fit in colab. is there any intuition about which are more important when scaling up a model, heads or layers? EricHallahan#1051: There is a certain aspect ratio that is optimal, I am pretty sure it is described in the scaling laws paper. bmk#1476: you can use gpt3 paper configs bmk#1476: tldr is it doesn't really matter bmk#1476: heads increases cost/params quadratically while layers only linearly though finetune#0907: I'd kind of end up between sizes I think bmk#1476: i guess just eyeball it finetune#0907: I see, thank you bmk#1476: Kaplan paper showed that ratio doesn't matter a ton finetune#0907: 42 layers and 28 heads leave about 200MB VRAM free after doing inference on a 2048 token sequence bmk#1476: how many params is that? finetune#0907: :thonk: EricHallahan#1051: As long as you don't just make one gigantic head or a head size of one, you'll probably be fine. nev#4905: wait, a head size of one :thonk: nev#4905: that would be interesting
finetune#0907: it ooms when I try to rerun inference. guess it's cutting it too close
EricHallahan#1051: I've considered the concept, because if you want to do attention over a periodically-sampled time series (read: raw audio samples), you need to consider how to do it, because you have no orthogonal dimension.
EricHallahan#1051: You are forced to project to a higher dimensional space.
finetune#0907: the gpt-3 6.7B config works, 32 layers, 32 heads, 4096 hidden size. seems to end up as 6.66B parameters
finetune#0907: I also found a config with 6.73B parameters. 24 heads, 3072 hidden size, 58 layers :morelayers:
finetune#0907: loading a dummy checkpoint works too, after splitting it up into per-layer checkpoints
finetune#0907: source: https://colab.research.google.com/github/finetuneanon/misc/blob/master/SizeTest.ipynb
joaogui1#8461: Hey folks, do you have a handy formula for estimating BERT and GPT memory consumption based on sequence length and number of parameters?
EricHallahan#1051: Minimum memory consumption follows the rule of thumb of assuming a certain floating point format and multiplying by the number of network parameters. That is how I ballpark the memory consumption of our models in the FAQ.
kindiana#1016: inference or training?
EricHallahan#1051: However, I don't often do the estimates for dynamic scenarios, which is far harder because it depends on the size of the activations as well as any optimizer state during training.
Sid#2121: number of parameters estimate: ```python
def num_parameters(hidden_size, num_layers, seq_len, vocab_size, human=False):
    # transformer blocks contribute roughly 12 * num_layers * hidden_size^2 weights;
    # frac_1 adds the per-layer biases/layernorm params (~13 * hidden_size per layer)
    # and frac_2 adds the token + position embeddings (the `human` flag is unused here)
    frac_1 = 13 / (12 * hidden_size)
    numer = vocab_size + seq_len
    denom = 12 * num_layers * hidden_size
    frac_2 = numer / denom
    x = 1 + frac_1 + frac_2
    x = 12 * num_layers * hidden_size ** 2 * x
    return x
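# Illustrative check (my own example, assuming GPT-2's 50257-token vocab):
# plugging in the GPT-3 6.7B-like config mentioned above gives roughly 6.66e9
# parameters, matching the number finetune reported.
# num_parameters(hidden_size=4096, num_layers=32, seq_len=2048, vocab_size=50257)
# -> ~6.66e9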
```
memory would be 4x num parameters for fp16 and 8x for fp32 iirc? if training, multiply that by whatever your optimizer overhead is
joaogui1#8461: training
kindiana#1016: for training you need parameters * 12 bytes or so at least with mixed precision, usually 2x that if you want good utilization
joaogui1#8461: doesn't the sequence length affect memory consumption?
kindiana#1016: yeah
kindiana#1016: the 2x is for activation memory
kindiana#1016: you want to keep batch size in tokens relatively constant
kindiana#1016: and if you do that the activation memory is also relatively constant
kindiana#1016: the sequence length doesn't really matter unless it's super long
joaogui1#8461: got it, thanks!
Bruce23#6204: Hi, what is the do_sample parameter for? I can't find information about it 😐
EricHallahan#1051: It toggles sampling.
Bruce23#6204: Hm 😐
EricHallahan#1051: If you set it to false it greedy samples.
Bruce23#6204: in what cases would I use greedy sampling?
EricHallahan#1051: You would greedy sample if you want to generate text fast or maximize the likelihood of each token individually.
EricHallahan#1051: (Sorry, away from my computer)
bmk#1476: greedy sampling doesnt change speed
EricHallahan#1051: I guess it doesn't change the overhead.
EricHallahan#1051: So yeah, that makes sense.
EricHallahan#1051: Ignore that part.
Bruce23#6204: Hm alright
Bruce23#6204: "maximize the likelihood of each token individually." that remains true?
EricHallahan#1051: https://huggingface.co/transformers/main_classes/configuration.html?highlight=do_sample
Bruce23#6204: ty 🙂
EricHallahan#1051: Yeah, I would have provided that right away if I had not been on mobile. It is quite hidden in the docs.
Bruce23#6204: I appreciate that. I just don't have enough knowledge to translate that into pros and cons 😄
Bruce23#6204: I'll dig a bit deeper! 🙂
EricHallahan#1051: I'll admit the Hugging Face docs are not the most intuitive thing to navigate.
Bruce23#6204: Any suggestions for a hoster that hosts the models I am playing with? Would love to have one with GPU. Possibly Google Cloud or AWS?
Bruce23#6204: (Or maybe a cheaper one)
finetune#0907: for playing around? or production?
Kharr#7888: Colab is the easiest free service to play in. Comes with almost everything pre-installed and GPU is ready to go.
Bruce23#6204: for production
kindiana#1016: pay hf :berk:
cfoster0#4356: I'd recommend hiring an expert
cfoster0#4356: Any advice you're gonna get for free is probably worth what you paid for it
Bruce23#6204: aha
Kharr#7888: In all seriousness, like Ben said, HF offers an api service with hosting. Worth looking into instead of doing it yourself.
Bruce23#6204: thanks 🙂
Bruce23#6204: from reading the plans, it appears that the lab plan ($199/mo) is CPU based. So maybe I don't really need a GPU server
Bruce23#6204: I'll test 🙂
weakman54#2171: So, reading the rules and FAQ, I get that this isn't the place for a beginner of ML stuff, but where could I find something to get me up to speed with the basics? I'm specifically looking to just tinker around with the gpt-neo model to see what I can get. (Also, is it feasible to run the model on a local machine?)
CKtalon#7792: depends on your GPU
Kazumi#1297: I recommend google colab
weakman54#2171: For running the model or getting up to speed on the basics?
Kazumi#1297: yes
weakman54#2171: Ah, I see
weakman54#2171: I've got a relatively beefy one, so I guess I'll try it and see
Kazumi#1297: well depends how you want to run it, you can't run something you want to have long term, but it's great for doing experiments for like a day at a time
weakman54#2171: Yeah, I figured there were some limitations like that
weakman54#2171: which is why I'd like to run it locally if possible
CKtalon#7792: 3090?
CKtalon#7792: otherwise, pretty much no hope
weakman54#2171: Ah, damn
Kazumi#1297: *cries in 1660*
CKtalon#7792: colab's your only chance
CKtalon#7792: i don't think you can do the 2.7B model though
Kazumi#1297: the biggest gpt2 fits on the TPUs they offer, I'm not up to date on gpt neo's on google colab
weakman54#2171: GTX 1060, so yeah..
CKtalon#7792: might be easier to use the hugging face version
CKtalon#7792: although don't ask for tech support here
CKtalon#7792: the EAI devs don't provide support for HF haha
Kazumi#1297: TPU podcast people probably could help tho
weakman54#2171: Does huggingface support longer text generation as well?
CKtalon#7792: i think it's limited
CKtalon#7792: haven't actually played with any of GPT Neo 😛
Kazumi#1297: I should... once I'm done with my image captioning thing
weakman54#2171: mm, I guess I'll do some testing with gpt-2 on colab for now, seems simpler to get into
CKtalon#7792: not sure if changing this allows it to generate more (I assume so) https://cdn.discordapp.com/attachments/729741769738158194/836888678994542592/unknown.png
CKtalon#7792: that's for HF
Kia#2550: Is Hugging face like a Hosting company?
Teemochu#8740: Partially, but the thing they're most known for is their library
Kia#2550: Ow that's cool
Kia#2550: Very interesting nonetheless
Teemochu#8740: And basically that they have things set up for most LMs available. And quite a few getting started docs as well.
inspiration101#2728: hello, is there demand for something like gpt-3 sandbox, but for gpt-neo?
EricHallahan#1051: Not really. We were hoping that developers outside of our organization would develop sandboxes, dashboards, and other front-end interfaces.
EricHallahan#1051: We are more in the business of training models and preparing them for production rather than using them in production environments.
inspiration101#2728: I mean I am thinking about making one
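For reference, a minimal sketch of the Hugging Face route being discussed. It assumes the `EleutherAI/gpt-neo-1.3B` checkpoint and the standard `transformers` generation API; `max_length` is the setting in CKtalon's screenshot and `do_sample` is the flag asked about earlier:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# gpt-neo-1.3B is the smaller released checkpoint; 2.7B loads the same way
# but needs considerably more RAM/VRAM.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

inputs = tokenizer("In a shocking finding, scientists discovered", return_tensors="pt")
output = model.generate(
    **inputs,
    do_sample=True,   # False -> greedy decoding: always pick the single most likely token
    temperature=0.9,
    max_length=200,   # total length in tokens, prompt included
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Greedy decoding maximizes each token's likelihood individually and tends to loop on open-ended prompts, which is why sampling is usually left on for story-style generation.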
ersatz#0001: the shitstorm about dungeon ai rn may be a strong boost for the project Kazumi#1297: boost of what? cognomen#6297: noise to signal ratio ersatz#0001: interest of course ersatz#0001: it's still a pretty unknown project ersatz#0001: but people are talking about it as an alternative to the api CKtalon#7792: oh what happened to dungeon ai? Kazumi#1297: drama CKtalon#7792: seems like it just went down without much notice? bmk#1476: we don't really need more interest tbh we just wanna train models in peace mgostIH#0245: Artificial Centrism EricHallahan#1051: Anyone who thinks we are an alternative to the OpenAI API is misinformed IMO. bmk#1476: we're just a bunch of researchers who want to train some models for fun and for our own research ersatz#0001: people are upset because openai imposed a change in the acceptable content or something, unclear tbh CKtalon#7792: epeen? mgostIH#0245: tbh we are indirectly that, after a model is trained it shouldn't be too hard to setup ersatz#0001: the model will hopefully be at the level of gpt-3 and running it in-house could be an alternative to using the api imho mgostIH#0245: But I doubt anyone will have a 175B parameter model for free use anytime soon kek CKtalon#7792: the computing power needed isn't gonna be cheap EricHallahan#1051: We will never serve an API. Our partners like CoreWeave might, but we will never serve it ourself.
mgostIH#0245: Well in the long run it is going to be cheap bmk#1476: yeah good luck running inference of a 175B model without a dedicated engineering team ersatz#0001: that's an alternative for startups not for random people mgostIH#0245: it's not much of the computing power imo, but the hardware itself bmk#1476: inference is *hard* CKtalon#7792: yea, the hardware is at least 3-5 years away from off the shelf hardware CKtalon#7792: probably 10 mgostIH#0245: But I am positive that we may even get better results than GPT-3 with fewer params mgostIH#0245: Like with better quality input text or alterations of other stuff EricHallahan#1051: It is definitely off-the-shelf already. EricHallahan#1051: It is not consumer accessable. finetune#0907: just an expensive shelf mgostIH#0245: And with hardware becoming cheaper it will become a reality in say 8 years ersatz#0001: 3-5 years is impossible or I'm missing something big CKtalon#7792: like a shelf of server racks? XD mgostIH#0245: with GPUs getting a lot of memory you may only need to run like 4 of them in the future for a model that's currently qualitatively equal to GPT-3 imo mgostIH#0245: It's still going to be expensive in 3-4 years ofc CKtalon#7792: it's only in the last generation GPUs got a memory boost though CKtalon#7792: (at least consumer wise) mgostIH#0245: But not "millions of dollars and an engineering team" kind of expensive
EricHallahan#1051: You are going to be waiting a long time lol CKtalon#7792: and it's still gaming focused CKtalon#7792: ain't need so much vram for gaming EricHallahan#1051: Same applies to ZeRO-Infinity. ersatz#0001: nvidia is going crazy with the ai pipeline optimizations now so maybe bmk#1476: tldr don't count on being able to run an Eleuther model CKtalon#7792: waiting for the distilled model XD mgostIH#0245: Imo 8 years and anyone with 10k will be able to afford what we now call "huge models", maybe not the training, but the inference for sure EricHallahan#1051: Again, refer to our FAQ at https://eleuther.ai/faq ersatz#0001: yeah that's an alternative for startups not for random people that's my point CKtalon#7792: isn't GPT3 level inference requiring something around 100+GB vram? mgostIH#0245: But competition may still make it extremely cheap for the average person to get output from the model! ersatz#0001: true EricHallahan#1051: 700 GB at binary32 for parameters alone. CKtalon#7792: oops mgostIH#0245: I wonder what a 20B parameter model could do in the future, with a lot of high quality input data and better methods mgostIH#0245: That'd be "only" 100GB of VRAM CKtalon#7792: even 8xA100 can't fit it then mgostIH#0245: Which seems feasible on 4 GPUs EricHallahan#1051: We do not expect any larger models we produce to run on consumer hardware any time soon.
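For a rough sense of where these VRAM figures come from, here is a back-of-the-envelope sketch (parameters times bytes per parameter; it deliberately ignores activations, KV cache, and framework overhead, which is why real requirements come out higher, and training needs far more, on the order of the ~12 bytes per parameter mentioned earlier):
```python
def param_memory_gb(n_params, bytes_per_param):
    # storage for the weights alone -- activations and everything else are on top of this
    return n_params * bytes_per_param / 1e9

print(param_memory_gb(175e9, 4))  # a GPT-3-sized model in fp32: ~700 GB
print(param_memory_gb(175e9, 2))  # the same model in fp16: ~350 GB
print(param_memory_gb(20e9, 2))   # a 20B model in fp16: ~40 GB
print(param_memory_gb(10e9, 2))   # 10B in fp16: ~20 GB, borderline on a 24 GB 3090
```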
mgostIH#0245: Like keep in mind GPT-3 was only trained once, maybe a lot of it could be improved far more mgostIH#0245: I'm overall very positive for the future! CKtalon#7792: i think meta learning happens somewhere from 13B to 175B, so not sure if 20B works finetune#0907: back of the envelope, 10B in fp16 should be able to inference on a 3090 mgostIH#0245: We'll get AGI once AMD joins the GPU scene **for real** finetune#0907: for 175B you can try getting 3x p4d instance from AWS for $100/hr total :hap: EricHallahan#1051: Back of the envelope doesn't contain activations or anything else that needs to reside in memory. mgostIH#0245: I am tired of the man with leather jacket, bring us Su bmk#1476: also the latency will be abysmal ersatz#0001: no one expects this? it would require a 3 orders of magnitude increase in performance at least, maybe 4 EricHallahan#1051: Hey, I'm all for AMD or Intel becoming competitive. finetune#0907: i extrapolated from peak memory use of 2.7B and 6.7B I've seen on colab CKtalon#7792: i think performance will rise faster than the vram available EricHallahan#1051: ¯\_(ツ)_/¯ CKtalon#7792: we ain't gonna get a few hundred GB of vram with Geforce 9090 bmk#1476: getting low latency inference at this scale is highly nontrivial finetune#0907: for sure EricHallahan#1051: You can get high throughput easily, but the latency is way harder to improve. EricHallahan#1051: The latency is what bounds a model size for a given application. CKtalon#7792: btw, @bmk did you see my comments on book translations
bmk#1476: uhh no ersatz#0001: maybe with the crazy photonic stuff from the former MIT group thing ersatz#0001: I don't know EricHallahan#1051: Hey, I'm not counting out photonic computing, but it is very much in its infancy. CKtalon#7792: u can see the few comments bmk#1476: oh that bmk#1476: i mean i think not being able to share the dataset is a bit of a limitation ersatz#0001: Lightmatter Inc iirc EricHallahan#1051: Like there should be clear advantages to optical systems. They can compute highly complex functions with relatively simple manipulations. EricHallahan#1051: But applications are mostly unproven. CKtalon#7792: i can help figure out how to scrape from libgen ersatz#0001: they are doing great for inference not much for training mkualquiera#3484: but it's going to take decades to get them to the scale needed for something like ML CKtalon#7792: we can chat in private if you are interested in pursuing it bmk#1476: right now dataset isn't top priority bmk#1476: but I'll ping you when we get around to pile v2 CKtalon#7792: sure ersatz#0001: Lightmatter photonic tech (former MIT people) is 10x faster than nvidia gpus using 90% less energy iirc ersatz#0001: for inference mr_seeker#1337: @CKtalon @ersatz as someone who is interested in AI for creating stories (and a user of dungeon AI), I can say this:
Latitude (the ones running Dungeon AI) changed the way private content generation was handled by adding a "ToS" filter to all private content without prior warning. They also added monitoring for "flagged" content, which meant that if your content was flagged by the ToS filter, not only was your public content reviewed by Latitude, but your private stories would also come under scrutiny (which violated their own ToS). This was all done without any discussion with anyone in the community (not even moderators), and without realising the backlash it would cause. The idea was this: any output involving minors, animals or cruelty would get censored by Latitude, and they would look into your account and review each and every story to check whether you violated the ToS for OpenAI. This meant that a lot of "hardcore" stories, including all kinds of NSFW content, would suddenly get flagged for review because they would technically "violate the ToS". Not only would the AI refuse service, but your account would get flagged for review too.
TL;DR: Your account would get flagged for review, even if the AI was the one causing the red flag...
CKtalon#7792: i guess it's a result of risk deemed by investors
bmk#1476: *taps sign*
EricHallahan#1051: ~~Celebras~~ Cerebras has a better chance of succeeding than Lightmatter, and they just made a big chip lol
ersatz#0001: can't find anything on "Celebras"
EricHallahan#1051: https://cerebras.net/
ersatz#0001: thanks
ersatz#0001: ceRebras lol
EricHallahan#1051: lol
ersatz#0001: i remember this
ersatz#0001: a full datacenter in a single chip or something
EricHallahan#1051: They are insane, I don't know how they would get the yields for this.
mkualquiera#3484: very informative embed, "-2797" :berk:
EricHallahan#1051: Agreed.
EricHallahan#1051: :berk:
bmk#1476: well probably they dont, they just route around the ded ones
bmk#1476: getting a full wafer of working dies is basically impossible
kindiana#1016: they have <1.5% extra cores apparently
kindiana#1016: and "100% yields" mgostIH#0245: I hope new hardware can improve things, but I am skeptical until they try some big model and show the result publicly EricHallahan#1051: Same. kindiana#1016: well, they can't do big models lol kindiana#1016: only 40GB bmk#1476: >big model >no off-die memory >only 18gb sram bmk#1476: oh, their new one is up to 40? bmk#1476: still tiny kindiana#1016: yeah EricHallahan#1051: Yeah mgostIH#0245: Aye but with claimed performance they could still try stuff like resnets and whatnot mgostIH#0245: Past SOTAs on various tasks EricHallahan#1051: 40 is a good number actually, most production applications should be below that. mgostIH#0245: If they could show that they can train a Resnet in 10x less time it'd still be a huge point in their favour EricHallahan#1051: 40 GB of SRAM is insane. CKtalon#7792: Huawei just did that apparently with their own hardware ersatz#0001: based on RISC-V iirc EricHallahan#1051: We all use DRAM for a reason lol
mgostIH#0245: Aye but one thing is some pseudo TPU, another is "a datacenter in a small box" mgostIH#0245: But link, I am curious EricHallahan#1051: I don't like conspiracies, but RISC-V is a ploy so that China can produce chips without licencing ARM designs. EricHallahan#1051: I am officially convinced. EricHallahan#1051: Literally no one but China has produced designs at large scales. cognomen#6297: why did MIPS jump on the bandwagon though kindiana#1016: huawei's tpus are very similar architecture to google ones ersatz#0001: nah just capitalism EricHallahan#1051: Oh, there are other companies, like Esperanto Technologies. https://www.esperanto.ai/ kindiana#1016: they did make their own riscv but that's not really related to ai stuff EricHallahan#1051: I was all on the RISC-V bandwagon until nothing seemed to come of it. ersatz#0001: china is doing stuff with it cognomen#6297: https://www.eejournal.com/article/wait-what-mips-becomes-risc-v EricHallahan#1051: There are a few microcontrollers and processors from China, and you also have SiFive which is the poster-child of RISC-V, but other than that, it effectively doesn't exist. cognomen#6297: and I'm assuming the terms of joining risc-v international were that they agreed to non-assertion of IP or licensing their patents ersatz#0001: patents related to risc-v only EricHallahan#1051: Also, the other major problem is the fact that the V Extension for Vector Operations is still not ratified. EricHallahan#1051: All of these "AI" processors are not sharing a common instruction set.
ersatz#0001: https://github.com/riscv/rvv-intrinsic-doc
laterbit#7218: Tenstorrent is going to use RISC-V for their AI chips
Shay#2039: hello all
45#2247: ok so I'm interviewing/podcasting connor again tonight, this time about EleutherAI 🎊

the "pinned github public projects" are: 1) GPT-Neo(X) + the pile, 2) Mesh-Dalle & 3) lm-evaluation-harness. my general feeling on this is that smh the main project was reproducing GPT-3 results, pile/GPT-Neo(X) are byproduct/first steps, and then the Dall-e things are like :lucid: -> :mesh:

obviously not an expert on all of this, so I might take 1-2h to prepare and learn about these projects beforehand. To make this whole EleutherAI ad / podcast even better:
- a) is there **any eleutherAI private/public project that we should definitely talk about**?
- b) among those mentioned before & answers to a), **is there any where connor is esp. involved / would have cool insights on**?

(from layman parsing of projects I see there's also vd-vae, agi experiments, "contrastive learning against CLIP cls tokens" (not sure what's going on in lm-thunderdome, speech processing and vision, & I understand smh alignment but don't know if there are actual projects attm or if it's more debating etc.))

if we actually have a call & I have the editing time I will post a link here beginning of next week 😄
Louis#0144: Connor is not really interested in grounding
Louis#0144: I wouldn’t ask about that one
Louis#0144: Anyway ask him about goose girls
Louis#0144: Do it at the end tho