NordVPN#1637: it would seem like its fair use but I’ve never really looked into that besides the whitepaper on the pile a few days ago StellaAthena#3530: It is legal in the EU, probably legal in the US (though no court ruling has codified this), and I'm unqualified to speak about anywhere else EricHallahan#1051: I don't think anyone here is really qualified to provide legal advice. NordVPN#1637: Thats what I was guessing, but thanks anyway! Orz#3023: Is there a way to shuffle jsonl files? Orz#3023: Loading them to RAM is not really an option as the data is too huge for me to do that kindiana#1016: anything that sorts lines will do kindiana#1016: e.g. https://github.com/alexandres/terashuf Orz#3023: Thank you! xloem#0717: Hey, i've been adding small things onto finetuneanon's transformers branch used for genji. Where do people work on / discuss this code? The 1 pr to that repo has has no comments, and I haven't yet found PR's to the main repo matching the changes but have more looking to do. Is finetuneanon in this server? Orz#3023: https://discord.gg/novelai Orz#3023: gpt genji is from these guys xloem#0717: thanks jbustter#5167: I added a simplified version of the mse notebook, There is a public version here https://colab.research.google.com/drive/1gFn9u3oPOgsNzJWEFmdK-N9h_y65b8fj which contains other methods and some unrelated tricks and complications to make the results more accurate. alstroemeria313#1694: :) Emad#9608: https://semianalysis.substack.com/p/tesla-dojo-unique-packaging-and-chip Desperate Noob#6277: Idk if I am allowed to ask this, but what algorithm powers the bot in #the-faraday-cage-archive Desperate Noob#6277: nvm Desperate Noob#6277: It said clip+vqgan AI_WAIFU#2844: great, just what we need, another datatype
AI_WAIFU#2844: 10kw and 18000 amps god damn kindiana#1016: its like cerebras lite kindiana#1016: lol AI_WAIFU#2844: yeah but unlike cerebras they didn't completely skimp out on the interconnect kurumuz#5695: that cooling tho bmk#1476: I remember feeling a sense of "oh no" when they said that lol bmk#1476: I tried asking some of the engineers afterwards but they said they only work on inference and they don't use the new data type on inference and directed me to the dojo guys, and then I kinda just forgot about it chilli#5665: lol, it was kinda funny when we were talking with the people who didn't know much chilli#5665: lol chilli#5665: and then they called over the person who actually does know things bmk#1476: lol bmk#1476: the compilers discussion you were having was super fascinating to listen in on, I just didn't understand any of it chilli#5665: haha, that type of stuff is exactly what my team works on so I have a pretty good understanding of the space bmk#1476: I'm sad there weren't any language model (or even just big model in general) people there bmk#1476: makes sense, tesla has no reason to be interested in language models, but still chilli#5665: haha, I feel a bit bad about my interaction with Evan kurumuz#5695: they might have the interest for the robot? bmk#1476: ? chilli#5665: I feel like he was all excited that somebody was asking about alignment kurumuz#5695: robot should be able to talk :berk:
chilli#5665: but I'm just an alignment poser 😦 chilli#5665: lol bmk#1476: when I asked him what he was working on he mentioned the myopic CDT he worked on with Adam Shimi and I feel bad about not having read the post about that and only coming up with counterexamples when Adam brought it up here originally (this was before the post) lol chilli#5665: haha chilli#5665: sad 😦 Louis#0144: Not teemochus robot Teemochu#8740: talking robot better, though I guess the physical object may not need to be a Talkable ethan caballero#6044: plot twist? https://twitter.com/ethancaballero/status/1428767544165511172 ethan caballero#6044: ^Why does Microsoft have a new FoundationModels github repo? Louis#0144: if like 70% of that 1b went to compute Louis#0144: how big of a model would we get Louis#0144: realistically like in the low Qs Louis#0144: right? cfoster0#4356: Gdi not more clickbait ethan caballero#6044: why does the repo exist? Louis#0144: Actually I dont think thats clickbait cfoster0#4356: *shrugs* Louis#0144: I think this makes sense cfoster0#4356: The $1B bit isn't clickbait?
ethan caballero#6044: How else would Stanford train GPT-4 without $1B from MSFT? Louis#0144: ohhhh cfoster0#4356: I highly doubt Stanford has aspirations on training GPT-4 or even a full GPT-3 any time soon. I'd bet money against that cfoster0#4356: They've gotta work their way up like everyone else lmao Teemochu#8740: why does Microsoft throw a billion dollars at anything AI related? Teemochu#8740: Can I call myself an AI lab and get a billion from them to train on cocaine and blackjack and hookers? Teemochu#8740: and pones, though I repeat myself twice Louis#0144: eleuther stanford colab wen ethan caballero#6044: They believe in straight lines on log-log plots. Louis#0144: i wanna see percy wearing eleuther merch Louis#0144: :berk: Teemochu#8740: one thing that I know is computable from the scaling law paper but I haven't gotten the time to derive Teemochu#8740: how does compute per step [assuming equal size steps, blah blah etc] scale with model size? Teemochu#8740: maybe better for #scaling-laws, was just wondering if anyone had it off the top of their head (matmuls being not-n^2 and all) chilli#5665: this is just model size multiplied by batch size? u-sci#2261: it's not just the model size you need to work out the FLOPS that are used accounting for intermediate activations and computation structure u-sci#2261: like the O(n^2) everyone complains about is in the context window size, not model params or batch size Orz#3023: This raises another question How long would it take if y'all were to be given $1B and asked to train gpt-4? u-sci#2261: unreasonable question
Untouch#9150: that isnt a money problem at that scale u-sci#2261: The conditions of the investment will matter a lot EricHallahan#1051: GPT-4 is underdefined so question is useless without further constraint. Sahl#0630: everything’s a money problem with enough money Sahl#0630: (and organization) Untouch#9150: microsoft already has the clusters to train this hypothetical model Untouch#9150: but yeah what is GPT-4 in this case Orz#3023: I mean yeah It makes no sense Orz#3023: @Deleted User Orz#3023: Hello there We meet again! Orz#3023: How is it that our interests coincide lol Deleted User#0000: Haha yep tin482#5219: Scales linearly. For modern architectures, forward pass compute per token is approximately equal to parameters. GPT-3: 3e23 FLOPs / 3e11 tokens / 8 -> 1.25e11 FLOPs /token ~ 1.75e11 params. See "End-to-end scaling" here: https://developer.nvidia.com/blog/scaling-language-model-training-to-a-trillion-parameters-using-megatron/. For older models, there's the original OpenAI scaling laws paper which derives ~ 2 * parameters. Search "flops": https://arxiv.org/pdf/2001.08361.pdf Good Ol' Granite#1726: How's progress going on the new backend for GPT-J? EricHallahan#1051: Wait, it is not working? :thonk: StellaAthena#3530: What new backend? EricHallahan#1051: 6b.eleuther.ai
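For concreteness, a minimal sketch of the compute-per-token arithmetic tin482 lays out above (the ~2N forward / ~6N training FLOPs-per-token approximations come from the OpenAI scaling-laws paper; the GPT-3 figures below are the commonly quoted round numbers, not exact values):
```python
# Back-of-the-envelope transformer compute, ignoring the O(ctx^2) attention
# term, which is small next to the parameter term at these sizes.
n_params = 175e9      # GPT-3 parameter count (approximate)
n_tokens = 300e9      # GPT-3 training tokens (approximate)

forward_flops_per_token = 2 * n_params   # ~2N per token (scaling-laws estimate)
train_flops_per_token = 6 * n_params     # forward + backward is roughly 3x forward

print(f"{train_flops_per_token * n_tokens:.2e} total training FLOPs")  # ~3.15e+23
```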
EricHallahan#1051: Looks like it is working again. StellaAthena#3530: Oh, we already made the switch @Good Ol' Granite mega b#6696: > Does anybody know if there's a usable alternative to DALL-E? @Immortal Rose of Yore Hey there, a group of people are working on exactly that but utilizing tpus for training. Visit huggingface.co/spaces and Dall-e Mini should be the first one mega b#6696: Very early, looking to expand definitely 😎 Kia#2550: Ok nvm mega b#6696: https://discord.gg/eKXdVCrzhQ Kia#2550: I taught it's a image mega b#6696: Oh lol Desperate Noob#6277: I know this is a strange question, but if I use an ai to generate some sort of creative work (artwork, music, writing, etc.), Who then has rights to this piece of work? StellaAthena#3530: Nobody knows Desperate Noob#6277: And if the ai program was written and run on my machinery, would I legally be allowed to copyright work it produces StellaAthena#3530: Nobody knows Louis#0144: What we do know is that AI is about to send the copyright industry on a wild ride Louis#0144: :berk: Louis#0144: stories especially Louis#0144: authors fight for their copyright tooth and nail Desperate Noob#6277: Lol Louis#0144: personally I think that we'll need special juristictions for AI trained on copyrighted material
Louis#0144: but the AI market will be well into the trillions by time this happens Louis#0144: and its going to get stretched out for years and years u-sci#2261: OpenAI is lobbying in our favor and MS would back them up if they ran into trouble u-sci#2261: we'll win Desperate Noob#6277: So what is open ai lobby ing for u-sci#2261: The truth Louis#0144: lets be clear Louis#0144: we are VERY biased here Louis#0144: AI winning this would be detrimental to a lot of other industries kurumuz#5695: we right kurumuz#5695: lol Desperate Noob#6277: To me, training an ai on copyright Ed material is like a human getting inspiration from copyright Ed material kurumuz#5695: i dont give a fuck Louis#0144: note to self do not allow kuru on an internal review board Louis#0144: LMAO kurumuz#5695: hmm? Sahl#0630: ooh ethics committee lemme join kurumuz#5695: didnt get the joke Louis#0144: an IRB is an ethics committee kurumuz#5695: ohh
kurumuz#5695: boring stuff? Louis#0144: oh god kurumuz#5695: ye :berk: Louis#0144: LMAO Desperate Noob#6277: Then again, I could make work with an ai, say I made it, and then since it's completely original no one would be able to disprove me. And the ai can't exactly file a lawsuit Louis#0144: honestly we should at some point have an eleuther IRB Louis#0144: I think stella and I discussed that at one point Louis#0144: in the *very* early eleuther days kurumuz#5695: kinda :meguPuke: Louis#0144: like week 4 or something u-sci#2261: Don't count on this last part for long lol kurumuz#5695: like kurumuz#5695: :meguPuke: Sahl#0630: make sure alignment/capabilities ratio is high? 😳 u-sci#2261: AI in legal is gaining popularity afaict Parker#3197: idk AI owning copyright is somewhat like writing code to generate every piece of text in existence and claiming copyright over all of it kurumuz#5695: ai text generation is inherently compression u-sci#2261: We haven't even solved the problems with humans trying to patent DNA Sahl#0630: Disney vs. /dev/random (2021) kurumuz#5695: lol
Louis#0144: honestly you should read the neuro cog paper I linked yesterday. It talks about doing planning end2end by using convolutions and inverse convolutions (like how some people hypothesize the brain does planning) Louis#0144: https://onlinelibrary.wiley.com/doi/pdf/10.1111/cogs.12265 I have yet to see someone do this in transformers Louis#0144: @Deleted User de23c58c might interest u Louis#0144: its like ESBN on steroids Sahl#0630: ooh this looks alignmenty Desperate Noob#6277: Are you an ai rights activist Louis#0144: @Sahl you should talk to chris, hes a waterloo prof Sahl#0630: which chris? Louis#0144: eliasmith Louis#0144: i worked with some people in his lab Louis#0144: great lab Sahl#0630: sure Sahl#0630: is he alignment focused Sahl#0630: or adjacent Louis#0144: cog neuro + end2end differentiable symbolic methods Louis#0144: he is effectively the head of cog neuro for all of canada Sahl#0630: uh is symbolic the anti-scaling thing Sahl#0630: or no bc differentiable Louis#0144: no his symbolic stuff scales Sahl#0630: ooo
Louis#0144: he did the first unified DL model Louis#0144: in 2012/2013 Louis#0144: i think it was 14mil params Louis#0144: which was *huge* at the time Sahl#0630: um pog Louis#0144: how people looked at Spaun then was how people look at GPT3 now Sahl#0630: symbolic methods implies interpretability right Louis#0144: not in chris' case Louis#0144: a symbol in his case is just any random variable Teemochu#8740: jurisdictions? As in, seasteading? Teemochu#8740: (I think you meant jurisprudence) Sahl#0630: ah so not different from normal ml right Louis#0144: I did Louis#0144: very different Louis#0144: he assumes continuous time Louis#0144: and the RV is if an ensemble of neurons is going to spike Teemochu#8740: :based: Sahl#0630: oh so it’s very brain like Louis#0144: ye Sahl#0630: that makes sense considering cog sci
Louis#0144: but it generalizes to LSTMs and transformers Louis#0144: I think he had one transformers paper Louis#0144: but kinda chickened out kurumuz#5695: always based kurumuz#5695: :smug: Louis#0144: he did a lot with LSTMs though Sahl#0630: interesting Sahl#0630: I should go talk to him Sahl#0630: UW probably has a lot of good ml people doesn’t it Louis#0144: nope Louis#0144: 😄 Louis#0144: none at all! Louis#0144: waterloo's thing is p-test related stuff Louis#0144: for stats Louis#0144: and optimization Louis#0144: but they never mix Sahl#0630: LP… Louis#0144: we have two ML profs and theyre both *meh* Sahl#0630: oh I’ll be taking some ML courses Sahl#0630: should probably still be fine
Sahl#0630: I don’t think it’d depend on prof quality right Sahl#0630: just can’t talk to them about as interesting stuff kurumuz#5695: got my invite to codex kurumuz#5695: so decided to flex here johncaling40#6574: Oh nice johncaling40#6574: when u apply? kurumuz#5695: first day johncaling40#6574: how long ago was that? johncaling40#6574: idr kurumuz#5695: i dont remember either. johncaling40#6574: oho k kurumuz#5695: but ye, will be good as we can compare the upcoming genji models johncaling40#6574: Is genji no profit? kurumuz#5695: :blobhyperthink: model is free and open sourced yes johncaling40#6574: do u have plans to add like JS johncaling40#6574: not just python? Louis#0144: it probably knows a lot of JS Louis#0144: :berk: johncaling40#6574: nice Immortal Rose of Yore#5645: ty!
mega b#6696: np! Desperate Noob#6277: Is it better than gpt-j at coding kurumuz#5695: oh it's much better Desperate Noob#6277: Ok Lyran Sage#1988: hey all, I'm a software dev getting into VQGAN+CLIP.. can someone point me to the appropriate community or channel here to ask some noob questions about stuff? 🙂 EricHallahan#1051: Try #art Lyran Sage#1988: thanks 🙂 EricHallahan#1051: Also, welcome! Awesome_Ruler_007#7922: 1. Why wasn't GPT-J trained on Reddit threads? I can't get any model to autocomplete `joe mama` jokes 😠 Untouch#9150: finetune it then Awesome_Ruler_007#7922: And when's EleutherAI releasing their own Codex model? 🤔 for some reason, I keep coming back to trees. for a semi-supervised approach to generating code, can't we somehow merge different concepts within the NLP models? instead of a maintaining a single type of hidden vector, multiple models modelling the code snippets retaining different parts of the snippet - syntax, the structure of tokens as a tree, etc. somehow merging DreamCoder like approach with its sleep-wake cycle and semi-supervised nature of scaled-up NLP models.... EricHallahan#1051: https://arxiv.org/abs/2101.00027 Awesome_Ruler_007#7922: its a joke - and yeah it doesn't seem to be trained on reddit since I can't get it to complete any of their slangs/jokes EricHallahan#1051: Don't worry I know. `:P`
EricHallahan#1051: "Our" first code model was *before* Codex. https://huggingface.co/lg/ghpy_20k Louis#0144: No one here is working codex afaik EricHallahan#1051: And NovelAI already has tuned GPT-J on Python. https://huggingface.co/NovelAI/genji-python-6B Louis#0144: Nate is working on a codex equiv Louis#0144: But he isn’t in Eleuther Louis#0144: He just comes here to chat Louis#0144: cc @natedog Louis#0144: Actually @EricHallahan we should maybe link nates community in #communities Louis#0144: What do u think EricHallahan#1051: If Nate wants us to, I don't see a reason why not. DrYazman#2737: As a lawyer specialised in speech law, I can assure you it would not be nearly as easy as you think to get by with that Louis#0144: Would 2x 3090 be enough for inference for 20b given activations overhead? I think it should just barely be enough Louis#0144: Assuming NV link Louis#0144: Just planning some stuff out Louis#0144: 👀 Louis#0144: I know we said using activations on cpu a single 3090 could do like 11b AI_WAIFU#2844: I think you could fit it, but it would be kinda tight. Louis#0144: But I’m not sure what overhead NVLINK adds
Louis#0144: Yeah AI_WAIFU#2844: I don't think interconnect will be a bottleneck AI_WAIFU#2844: pcie 4.0 should be enough AI_WAIFU#2844: since activations are sqrt(parameters) AI_WAIFU#2844: you're not actually moving that much stuff around AI_WAIFU#2844: But if I we're you I would get like 4 K80s in series instead. EricHallahan#1051: He already has a 3090. EricHallahan#1051: And K80s are not happy with reduced precision. u-sci#2261: 3090 has 24 gigs of vram or 48? EricHallahan#1051: 24 EricHallahan#1051: A100s have 40 or 80 u-sci#2261: 20b params in float16 is like 37.25 gigs Louis#0144: Yeah that leaves about 10GB for activations Louis#0144: Which I think is enough alstroemeria313#1694: A6000 is 48GB u-sci#2261: 20000 dimension times 1024 ctx times 100 layers at float16 is only like 4GB Louis#0144: So you think activations would only be 4GB? u-sci#2261: I'm only counting cached context Louis#0144: Ouch Louis#0144: :berk:
u-sci#2261: Actually 20b params and 100 layers only leads to like 4K d_model for transformers u-sci#2261: So that's less than a gig per vector accounting for 1024 tokens and fp16/bf16 u-sci#2261: The attention weights computed naively would only take 8gb temporarily so it's definitely "possible" but I don't know what kind of overhead the DL library will cost Louis#0144: I think the solution would be to have activations on Ram tbh Orz#3023: Is that even possible? Orz#3023: I mean With interfaces like pytorch Untouch#9150: how long would generation take there Louis#0144: Yes Louis#0144: Using deep seed Louis#0144: Speed Orz#3023: oh aight Awesome_Ruler_007#7922: damn. If I assume correctly, EleutherAi can become for-profit also - in which case competing on coding models can be very financially advantageous Louis#0144: Eleuther won’t be for profit Louis#0144: It’s very against our idealogy StellaAthena#3530: We *could* but it goes against the entire purpose of the group Louis#0144: A lot of people here are researchers who just do Eleuther on the side Awesome_Ruler_007#7922: alright, then atleast you can compete with codex with a FOSS philosphy Awesome_Ruler_007#7922: no problem in that - as long as you have researchers interested in that, right?
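For concreteness, a minimal sketch of the VRAM arithmetic from the 20B-on-consumer-GPUs discussion above (the 100-layer, ~4K d_model, 1024-token figures are the same rough assumptions used in the conversation, not the specs of any real model):
```python
# Rough VRAM arithmetic for serving a 20B-parameter model in fp16/bf16.
n_params = 20e9
bytes_per_param = 2

weights_gib = n_params * bytes_per_param / 2**30
print(f"weights: {weights_gib:.2f} GiB")          # ~37.25 GiB

# Cached keys and values: 2 vectors of size d_model per layer per token.
d_model, n_layers, ctx = 4096, 100, 1024
kv_cache_gib = 2 * d_model * n_layers * ctx * bytes_per_param / 2**30
print(f"KV cache (1 sequence): {kv_cache_gib:.2f} GiB")  # roughly 1.6 GiB
```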
bmk#1476: we do whatever we feel like doing u-sci#2261: It's easy to motivate the group in 1 easy step: 1. do it yourself u-sci#2261: (I'm grinning don't read me as being snide lol) Awesome_Ruler_007#7922: I wish I had the skills lol chilli#5665: It's actually true. chilli#5665: People are 100x more motivated to join a project if there's already progress being made chilli#5665: lol StellaAthena#3530: Only 100x? Louis#0144: Eventually u get to the point where u have enough research assistants that you can always have something in progress Louis#0144: :berk: StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/878695030624247948/image0.png StellaAthena#3530: Nice 😎 alstroemeria313#1694: :blobcutehappy: AI_WAIFU#2844: nice Orz#3023: nice Louis#0144: nice Desperate Noob#6277: Whose rights would I be violating? cfoster0#4356: This @joaogui1 joaogui1#8461: oh got it
joaogui1#8461: Yeah you had said it before, I think back then too many people were typing at once and I must have missed it, my bad DrYazman#2737: Of course, some of this AI stuff is new ground. Let's assume for argument's sake it's the US we're talking about the law of. It is incorrect to say nobody could disprove that it's entirely original, because during pretrial mechanisms it wouldn't be hard to learn what you trained the neural network on, and of course whether you used copyrighted work. Then it's just a question of degree, and of whether material produced from a corpus of copyrighted material counts as infringement. DrYazman#2737: If you have a neural network you trained specifically on an author's work to emulate their style, and the output does just that to the point it's difficult to distinguish, then it could be that the output might be found to be a derivative work. DrYazman#2737: US federal courts have indicated before that they could be inclined to see AI output as violating copyright, depending on the training data, the output you created, and its degree of similarity to the IP in question DrYazman#2737: Of course, this is a new area. SCOTUS could wipe it out if it came to them, and legislation could wipe us all out too. nshepperd#2316: i don't think the question was about the training set's copyright MrRee#0946: I believe the UK government do protect ai generated work but there is that grey area of what the infringement of copyright on the dataset its trained on. https://www.gov.uk/government/consultations/artificial-intelligence-and-intellectual-property-call-for-views/artificial-intelligence-call-for-views-copyright-and-related-rights MrRee#0946: Saying that though all forms of work whether it's art music or text by humans have been inspired by other works that's out there so I personally can't see the difference in a human being inspired by an artist and creating their own version from an ai model being trained on art and creating their own version rolandgvc#9805: noob question: how can I load huggingface's flax gpt-neo 2.7B on a v3-8? I keep running into the "Resource exhausted" issue aaronrmm#3198: I may have joined all the groups in the communities channel only for access to their emoji aaronrmm#3198: Also whoever is adding all these emoji's to the EleutherAI set is MVP. Thank you Orz#3023: Hello @StellaAthena this is regarding https://github.com/EleutherAI/project-menu/issues/11 Do ping me if the information provided is not sufficient and I'll make adjustments for the same Thank you for your amazing work as always! EricHallahan#1051: Connor and bmk rolandgvc#9805: @aaronrmm sorry did you mean something with the emoji or just for fun? 😆 aaronrmm#3198: @rolandgvc I had been looking for an excuse to use the huggingface emoji, and your huggingface question was the first one I found 😛 Thomas 🐼#6243: Hey, someone plan on adding data to the model ? more languages and just more general data ? 🙂
rolandgvc#9805: Ah ok. I'm not entirely sure why the model would not fit in one of the TPU's 16gb ram cores Orz#3023: more even after 800gb? Orz#3023: this is probably due to the fact that the model is loaded to the memory before moving it to tpu nielsrolf#9297: Hey, does someone know of a web interface for gpt-j? https://6b.eleuther.ai/ seems to be down Thomas 🐼#6243: I think the training data is in english for the most part and it struggle a bit for other languages 😅 rolandgvc#9805: @Orz I can load it as a pytorch model but not as flax Orz#3023: oh aight rolandgvc#9805: has anyone played with the HF's port? EricHallahan#1051: The Pile is effectively English only. yoges#7578: Goose-Oriented Object Search Engine this leads to youtube video yoges#7578: are you guys working on it? Desperate Noob#6277: Are there good ais for translation Untouch#9150: GTP-J actually deals with it reasonably well, and if you finetuned it it'd be even better StellaAthena#3530: I’m dress shopping for a wedding so my responses are a bit sporadic, but let’s chat in #interpretability-reading-group Desperate Noob#6277: I guess we can use the criteria for human work for ai work as well. What I am thinking is that if in theory I either trained an ai off it's own outputs and data that is no-copyright could I claim any output of it be my work(I am assuming that it is code I either made or am allowed to use) DrYazman#2737: In many jurisdictions there is a distinction between something that's inspired by a work, and something that's a derivative work alstroemeria313#1694: sigh DrYazman#2737: Setting aside the (largely unresolved in many jurisdictions) issue of who owns AI output, I think it would depend on where the original training data came from. I think you could make a reasonably strong legal argument to proximity though. alstroemeria313#1694: pytorch l-bfgs just doesn't work.
alstroemeria313#1694: *sets about making a scipy.optimize wrapper bc their l-bfgs actually works* alstroemeria313#1694: like i just need to wrap the pytorch loss function and gradient evaluation and do the type conversions back and forth i think DrYazman#2737: @Desperate Noob By that, I mean - if you have to go back through several iterations of training, then output, then more training, then more output, and so on - there would get to be a point where you can't really argue it's derivative anymore. Basically, you're looking for elements that make the work different enough that it's non-trivial Desperate Noob#6277: Also, if you train your ai on millions or even billions of inputs then the resemblance to any one datapoint will be very small DrYazman#2737: Pretty much, yeah Desperate Noob#6277: I think that someone will do something crazy and it will go to the Supreme Court alstroemeria313#1694: ```python def func_for_scipy(func, x): shape, device, dtype = x.shape, x.device, x.dtype def wrapped(x): result = func(torch.tensor(x, device=device, dtype=dtype).view(shape)) return result.cpu().flatten().double().numpy() return wrapped def grad_for_scipy(func, x): return func_for_scipy(partial(torch.autograd.functional.jacobian, func), x)``` alstroemeria313#1694: there lol DrYazman#2737: Basically with that point you've accidentallied into a real test (which I won't get into detail with to keep things here short). But a large corpus of training data (i.e. millions or even billions of inputs) producing something similar to a particular work, it will be a lot harder to show it's borrowing from any given work. alstroemeria313#1694: This doesn't support, like... optimizing multiple tensors yet alstroemeria313#1694: I could do that.
alstroemeria313#1694: I would just have to save all of their shape/device/dtype and split them back out alstroemeria313#1694: But I don't need it for what I'm doing. alstroemeria313#1694: also I would really like to use @chilli's functorch too for optimizing multiple tensors lol Desperate Noob#6277: Thank you for helping me out, though this is a gray area I think I do have a better understanding of ai and copyright Alm#9130: Does anyone have a go-to docker image for getting up and running with python 3.7+ pytorch nvcc cmake jupyter etc? i've tried deepo but seems to be python 3.6 DrYazman#2737: No problem, feel free to ask questions in future alstroemeria313#1694: eheh alstroemeria313#1694: ```python
import torch
from functools import partial
from scipy import optimize


def func_for_scipy(func, x):
    shape, device, dtype = x.shape, x.device, x.dtype

    def wrapped(x):
        result = func(torch.tensor(x, device=device, dtype=dtype).view(shape))
        return result.cpu().flatten().double().numpy()

    return wrapped


def scipy_lbfgs(func, x0, *args, **kwargs):
    """Wraps https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_l_bfgs_b.html"""
    func_scipy = func_for_scipy(func, x0)
    grad_scipy = func_for_scipy(partial(torch.autograd.functional.jacobian, func), x0)
    x, f, d = optimize.fmin_l_bfgs_b(func_scipy, x0.flatten().double().numpy(), grad_scipy, *args, **kwargs)
x = torch.tensor(x, device=x0.device, dtype=x0.dtype).view(x0.shape) f = torch.tensor(f, device=x0.device).squeeze() return x, f, d``` alstroemeria313#1694: easy~ alstroemeria313#1694: It doesn't do callbacks or multiple tensors yet but alstroemeria313#1694: It'll do alstroemeria313#1694: It's either this or implement gradient descent w/ line search myself in PyTorch alstroemeria313#1694: Bc I am not going to fix their L-BFGS chilli#5665: What are you trying to do? alstroemeria313#1694: use L-BFGS alstroemeria313#1694: Fortunately the thing I am doing rn only requires one input to the loss function chilli#5665: Functorch only works if you're batching over the thing chilli#5665: Not if you have a bunch of small tensors alstroemeria313#1694: ah chilli#5665: If you have a bunch of small tensor you can use for each alstroemeria313#1694: i mean like... a module that has multiple parameters alstroemeria313#1694: and thus you have to optimize multiple tensors alstroemeria313#1694: and thus you have to have a loss function that takes the separate parameters as input alstroemeria313#1694: and returns the loss alstroemeria313#1694: bc that is how the scipy.optimize interface works.
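A hypothetical usage example for the `scipy_lbfgs` wrapper above (the quadratic loss and the 64-element tensor are made up purely for illustration; any differentiable PyTorch scalar loss of a single tensor should plug in the same way, assuming the wrapper behaves as its author describes):
```python
import torch

target = torch.randn(64)

def loss_fn(x):
    # Simple quadratic loss with a unique minimum at `target`.
    return ((x - target) ** 2).sum()

x0 = torch.zeros(64)
x_opt, f_opt, info = scipy_lbfgs(loss_fn, x0, maxiter=100)
print(f_opt, info['warnflag'])  # warnflag 0 indicates convergence
```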
alstroemeria313#1694: and you have to have a gradient evaluation function that takes the separate parameters as input and returns their gradients alstroemeria313#1694: specifically scipy.optimize expects the func/fprime to take a 1d numpy float64 array alstroemeria313#1694: and fprime to return a 1d numpy float64 array alstroemeria313#1694: so i would handle the splitting/merging of tensors in a wrapper. xloem#0717: hey, where are people doing work to embed models like genji in free software? EricHallahan#1051: ¯\_(ツ)_/¯ thenightocean#6100: it is fixed now! alstroemeria313#1694: Hey has anyone used geoopt's line search on Riemannian manifolds optimizer? alstroemeria313#1694: Does it work reliably? alstroemeria313#1694: this. https://github.com/geoopt/geoopt/blob/master/geoopt/optim/rlinesearch.py alstroemeria313#1694: Like if I have an optimization problem on a manifold. alstroemeria313#1694: That has a single local minimum which is the global minimum. alstroemeria313#1694: Can I just throw this at it and have it work every time. alstroemeria313#1694: I want to find geometric medians in spherical geometry. Awesome_Ruler_007#7922: This server is honestly pretty cool in terms of the whole community as well as its channels 🙂 , like #the-faraday-cage-archive. I was curious, who pays for the GPUs for that ^ u-sci#2261: The hacker known as Eleuther. EricHallahan#1051: https://cdn.discordapp.com/attachments/730484623028519072/834941766230736906/rxvZyAAAAAASUVORK5CYII.png Parker#3197: is there an archive of the image generations from @BATbot? this isn't a thought out idea, but was thinking about trying to reverse it. (ex. give an image and have it print a text prompt that would give something similar) Parker#3197: in an effort to try to understand what it is capable of generating (and what prompts might work better)
Louis#0144: yeah @BoneAmputee has it locally EricHallahan#1051: I think theeye has been archiving them. Louis#0144: oh? Louis#0144: news to me Kia#2550: Wait really? EricHallahan#1051: I'm pretty sure? Kia#2550: That's interesting Parker#3197: https://the-eye.eu/public/AI/faraday-media/ Parker#3197: that looks like it Kia#2550: I wonder if I can see my self here Parker#3197: well a small subset (1875) Parker#3197: 1875, but when I do a search for batbot with 500/500 I get 38k results* Kia#2550: Well...I did found my self Kia#2550: It's a Failed generation of mine:surprise: StellaAthena#3530: cutting edge AI ethics research https://cdn.discordapp.com/attachments/729741769738158194/879194037217787964/Screen_Shot_2021-08-22_at_10.42.45_PM.png mr_seeker#1337: If anyone can help me with this issue (both deepspeed and deeperspeed are prone to this one) ``` from torch._C._distributed_c10d import _DEFAULT_PG_TIMEOUT ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package ```
genetyx8#7543: If you specifically want to compute geometric medians, I'm not sure line searches are the way to go, since in that case the objective function is not differentiable alstroemeria313#1694: ...it is? alstroemeria313#1694: it's an L1 objective alstroemeria313#1694: minimize the sum of Euclidean distances genetyx8#7543: which is non differentiable, It's like absolute values alstroemeria313#1694: ... alstroemeria313#1694: absolute values are only nondifferentiable at zero alstroemeria313#1694: also a subgradient exists genetyx8#7543: if you can imagine it geometrically, the "surface" of your objective function is non differentiable on the voronoi edges of your point cloud alstroemeria313#1694: how would you find geometric medians then genetyx8#7543: https://en.wikipedia.org/wiki/Geometric_median#Computation alstroemeria313#1694: on arbitrary manifolds. genetyx8#7543: https://juliahub.com/docs/Manopt/h1Pdc/0.2.4/solvers/cyclic_proximal_point.html genetyx8#7543: that's what the Julia package for optimization on manifolds uses alstroemeria313#1694: ty :blobcutehappy: Awesome_Ruler_007#7922: but aren't 6 GPUs for 24*7 hours pretty expensive? ProudNoob#5854: what is the best way to keep track of new data ending up on the-eye? I'm still really new to all of this, but it's quite clear they will stay a nice central point to rely on for good data and fighting for it. seems right now I'm missing cool updates ProudNoob#5854: he gets unlimited Etherium, duh Awesome_Ruler_007#7922: https://tenor.com/view/aplausos-clapped-leonardo-dicaprio-clap-slow-clap-gif-12389001 ProudNoob#5854: yeah, no, you're right, is expensive
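For reference, the objective under discussion and the classical Euclidean-space iteration from the Wikipedia article genetyx8 linked (Weiszfeld's algorithm); manifold solvers such as the cyclic proximal point method generalize this by replacing the straight-line reweighted mean with steps along geodesics:
```latex
\hat{y} \;=\; \operatorname*{arg\,min}_{y} \sum_{i=1}^{n} \lVert x_i - y \rVert_2
\qquad \text{(geometric median)}

y_{k+1} \;=\; \left( \sum_{i=1}^{n} \frac{x_i}{\lVert x_i - y_k \rVert_2} \right)
\Bigg/ \left( \sum_{i=1}^{n} \frac{1}{\lVert x_i - y_k \rVert_2} \right)
\qquad \text{(Weiszfeld update)}
```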
ProudNoob#5854: probably few bucks a day if you go to Google or Amazon Sid#2121: Coreweave provides us with compute Sid#2121: see FAQ Sid#2121: !faq Carl-bot#1536: ProudNoob#5854: and about keeping track of the eye, that also in !faq? Sid#2121: i don't think so, even i don't keep track of what gets added there ¯\_(ツ)_/¯ ProudNoob#5854: it's kinda nice to know that every now and then something random will pop up from there ProudNoob#5854: i have FOMO though Sid#2121: https://the-eye.eu/public/AI/ everything relevant to us should be added here. We're trying to get better at documenting / announcing and such, but i'm not sure what the status of that is. ProudNoob#5854: I think if you guys are really gonna make things work with neox there's gonna be a need for pr and community management, or just stay in the background kurumuz#5695: :thonk: natedog#8669: Sure that would be awesome to have y'all link our community 🤓 . I DM'd you the discord link (don't want to spam here) Sid#2121: lol. Our PR strategy of "don't do PR" has worked pretty well for us so far. But thanks for the advice. ProudNoob#5854: that's what I mean with pr 🙂 I expect more and more people are gonna reach out ProudNoob#5854: deal with that, not do pr ProudNoob#5854: important to get right with potential issues people from all kind of areas are gonna have ProudNoob#5854: half the people I've shown GPT variant react quite hostile ProudNoob#5854: even the "real" pr guys that use it to help them write seo are secretly bitching at ais while using them 😂 ProudNoob#5854: and you're bringing yourself to the centre of this ethical debate I feel with the release of neox
ProudNoob#5854: there are as far as I know no others that are so powerful not controlled by Alibaba, Google, Baidu, Amazon or Israeli army spin-offs ProudNoob#5854: and they all went hardcore ban hammer, probably more for commercial than ethical reasons, but suddenly they love playing the ethical card ProudNoob#5854: anyway, should probably move this to offtopic 😽 Sid#2121: I mean this is on topic enough Sid#2121: while we're not a monolithic entity (more of a research collective) we've fleshed out our ethical reasoning behind wanting to release in a blog post (https://blog.eleuther.ai/why-release-a-large-language-model/) and most of us share an opinion similar to this xen0#3601: heya, can anyone tell me if it's viable to train DQN through feeding it cv2 feed? what if we gave out reward based on whether cv2 detects that we succeed in killfeed and set negative reward on detecting death with same cv library? xen0#3601: like, just feed it normalized frames data and the reward for each step ethan caballero#6044: Oh, Percy is shouting out Eleuther!! RyanT#5929: EAI getting a shoutout in the Stanford “Foundation Model” workshop nickt#8694: https://cdn.discordapp.com/attachments/729741769738158194/879406052674773032/Screen_Shot_2021-08-23_at_12.45.05_PM.png Sid#2121: Is this being recorded? RyanT#5929: Yeah it should be RyanT#5929: You can watch live though nickt#8694: think so - it's one of those live broadcast to youtube workshops Louis#0144: pooog ethan caballero#6044: damn, percy wanted to replicate GPT-3 too ethan caballero#6044: "we've been discussing with Eleuther about reproducing GPT" - Percy Deleted User#0000: amazing Deleted User#0000: recognized by academia is no small feat
ethan caballero#6044: lol, they're deleting crazy questions ProudNoob#5854: yeah, nicely worded. still, not sure if regular media are gonna find that. within no-time they'll make it sound like a dark underground hacker group from q-anon or something. especially since "the kartel" that had the monopoly on gpt is closing it off more and more. really glad your collective is standing up against that. did you register at all as an entity somewhere? Sid#2121: Yea, we had a meeting with them a while back Sid#2121: No, we made a conscious decision to avoid registering as any type of entity for as long as we feasibly can ProudNoob#5854: quick update on who "percy" is? gpt-j is already better than gpt-3? RyanT#5929: Percy = Percy Liang at Stanford RyanT#5929: He’s an NLP professor ethan caballero#6044: Where is Percy reading questions from? ProudNoob#5854: could be smart, but perhaps also risky for single contributors that way? perhaps more anonymity would be smart then RyanT#5929: There’s a QA tab StellaAthena#3530: Lots of people here are anonymous RyanT#5929: I have a different meeting for a couple hours so I have to drop off StellaAthena#3530: Can someone drop a link to the stream? nickt#8694: https://crfm.stanford.edu/workshop.html nickt#8694: Those questions aren't ordered in the way he's reading them to me. I bet there's some filtering interface behind the scenes. RyanT#5929: Oh ok, I had to drop before he got to QA Orz#3023: https://youtu.be/dG628PEN1fY ethan caballero#6044: I think he's getting the questions from somewhere else than the Q&A tab. ethan caballero#6044: He hasn't replied to any question from the Q&A tab. RyanT#5929: There are probably some premade questions and then someone curating the posted ones
RyanT#5929: Respect to the person asking for PhD admission Louis#0144: LMAO Louis#0144: ballsy RyanT#5929: Shoot your shot tbh dmayhem93#3202: Gotta respect the hustle Louis#0144: "hey can u accept me to the phd program on the spot" Louis#0144: berk berk RyanT#5929: I think it was something like “is HAI offering PhDs? I want to be admitted with my foundation model” RyanT#5929: Hope they get it StellaAthena#3530: Jack Clark just said that everyone who is training / trying to train these models is a company :wat: Louis#0144: isnt he in this discord? Louis#0144: i swear hes in here Louis#0144: ive seen him talk before Louis#0144: (I think I found him, I wont tag him) ethan caballero#6044: "we have some research coming out soon" - jack ethan caballero#6044: "$200 billion" - jack Louis#0144: did he actually say $200b Louis#0144: lol ethan caballero#6044: yes Louis#0144: for what...?
ethan caballero#6044: He said something about there being $200 billion of accessible government funding. Louis#0144: man hes gonna fly right past Ts and Qs Louis#0144: :berk: ethan caballero#6044: Have to be clever to get it for GPT-4, etc. RyanT#5929: Who is “we” in that context ethan caballero#6044: i guess anthropic RyanT#5929: Interesting RyanT#5929: I wasn’t sure if he was with that or not Awesome_Ruler_007#7922: <*insert "guess I am a company then" meme*> triggerhappygandi#0001: Gorillionaire grindset triggerhappygandi#0001: Always be husling ethan caballero#6044: Jacob says we need list of :firealarm:s. Louis#0144: CARP-Large (1.85b params, compared to CARP-base at 800m params) will be getting kicked off soon Louis#0144: rly wish we could have done xl which would have been about 2.8b but we couldnt get multigpu working in time Louis#0144: reeee Louis#0144: if we got contrastive learning in mesh jax we could totally do a 12b CARP Louis#0144: actually im not sure a v3-8 would have been enough StellaAthena#3530: Awesome! EricHallahan#1051: Welcome! Louis#0144: https://wandb.ai/eleutherai/CARP/runs/hef0rejg wtf is the issue
Louis#0144: Idgi Louis#0144: @sweg EricHallahan#1051: ¯\_(ツ)_/¯ sweg#8920: think its a case of vanishing gradients sweg#8920: gradients from deberta and roberta look very different sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/879509169445683241/unknown.png sweg#8920: im not entirely sure how to read the gradient graphs on wandb sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/879509386186330212/unknown.png sweg#8920: top is deberta v2 bottom is roberta large sweg#8920: can anyone translate this? lol EricHallahan#1051: Don't worry, I don't think I know anyone who does. :berk: StellaAthena#3530: @Deleted User? Kia#2550: Oww Congratulations, That's honestly really fast Louis#0144: Ugh it might not work Louis#0144: LOL Louis#0144: @sweg new plan is to use GPT with a CLS token appended Louis#0144: :berk: yoges#7578: https://analyticsindiamag.com/inside-maze-a-new-framework-for-applied-reinforcement-learning/ HaiderAbbas#0958: Hi Team Intro
My name is Haider Abbas from Inabia AI based out of Redmond WA My team of ML engineers are looking forward to contributing to the development of GPT-NEO. Let us know how we can get involved. @Sid @bmk @StellaAthena @Daj tagging you according to the FAQ page. 𓅬 gabriel_syme 𓅬#3220: great thread @StellaAthena ! Thanks, I once again missed the thing due to time zones Louis#0144: yum https://cdn.discordapp.com/attachments/729741769738158194/879554006240858152/Screen_Shot_2021-08-23_at_10.33.33_PM.png Louis#0144: thats 80gb almost full with fp16 btw Louis#0144: :berk: will.thompson#5333: I've caught up on a few of the threads here... So, few q's: (1) what are the active projects that are being worked on (2) what does the collaboration model look like? how does one get involved if they have an interest? @Sid @bmk @StellaAthena @Daj @EricHallahan (sorry about the tag correction) A bit about me: I am an ML practitioner focused mainly on NLP. I used to work on HFT in a prior life. Kia#2550: Where did you got the A100? Kia#2550: That's really lovely kurumuz#5695: what model is that sweg#8920: carp with deberta xxl im pretty sure kurumuz#5695: training with adamw ig? Louis#0144: ye kurumuz#5695: or many batches, idk kurumuz#5695: ye kurumuz#5695: makes sense Louis#0144: batch size 64 sweg#8920: also just gonna throw this here since im stuck
sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/879559084939026432/unknown.png EricHallahan#1051: 1) Really substantial projects are listed at https://www.eleuther.ai/projects/, and minor projects/tasks are organized on the task board at https://board.eleuther.ai/ 2) Collaboration model relies heavily on interactions here on Discord and on GitHub. Some project channels have a project doc in the pins with detailed information on how to contribute, others are documented in their associated repositories on GitHub, and others are documented on the task board. The common thread among all of those is simply asking how to get involved with a specific project. sweg#8920: does anyone know best way to do this in pytorch sweg#8920: i cant get it working with gather sweg#8920: inds being a [B] long tensor Louis#0144: pls refrain from tagging that many people unless its something absolutely necessary StellaAthena#3530: Hello @will.thompson @HaiderAbbas and welcome! We have a rather defuse collaboration model... we're more a collection of people who hang out and do cool things together than a proper organization. As @EricHallahan said there's a lot of pitches for ideas on the project board. I know most of what's going on because I don't have a life and hang out here 24-7, so if you tell me a bit about your interests and backgrounds I can try to pair you with something. Or if you just want to be handed something cool to work on, that can be arranged as well 🙂 EricHallahan#1051: I will say you do ping for trivial reasons quite often though. will.thompson#5333: https://www.eleuther.ai/get-involved/ I don't read no good :sadge: Louis#0144: I do occasionally dabble in frivolous pinging EricHallahan#1051: I really need to make this page more visible and put it back in the primary nav. EricHallahan#1051: I really don't know why I decided to hide it on the new-new-website other than it being more sparse with information than I would have liked. Louis#0144: Sounds exciting StellaAthena#3530: @will.thompson if you’re looking for something to set a small dev team loose on, building a codebase that replicates the analysis from any of the following papers would be very productive: https://arxiv.org/abs/2104.13733 https://arxiv.org/abs/2001.08361 https://arxiv.org/abs/2009.03015
will.thompson#5333: Cool, I would be interested. I've read the "scaling laws" paper, not the others tbh. StellaAthena#3530: A lot of security and adversarial ML research with text models simply isn’t done by people with the resources to work with many billion parameter models. We do, and there’s a lot of good science to do in seeing how this work applies at scale. will.thompson#5333: That's an interesting point! If I were to reproduce a large LM result, how does compute work? Do you have a platform or do I build it from scratch? I'm assuming these are questions you get constantly - apologies StellaAthena#3530: We have two core codebases for large language models, one for TPUs and one for GPUs. GPU: https://github.com/EleutherAI/gpt-neox TPU: https://github.com/kingoflolz/mesh-transformer-jax We have surprisingly close to limitless TPUs and generally encourage people to use them when possible. We do have a couple dozen A100s for working on GPUs though, as well as some V100s and singleton consumer cards for smaller experiments. We have a bunch of pretrained models you can do inference with or fine-tune, if you don’t need to train from scratch EricHallahan#1051: For evaluations, we can hook those two codebases and HF Transformers into https://github.com/EleutherAI/lm-evaluation-harness Kia#2550: Ah that make sense where louis gotten the V100 StellaAthena#3530: I think he got his from his lab actually StellaAthena#3530: Or maybe he owns it? EricHallahan#1051: Actually, does NeoX eval harness work right now? Louis#0144: I have a 3090 Louis#0144: The A100 is data crunch Louis#0144: By alstros recommendation StellaAthena#3530: You can evaluate trained models. If you want to do live evals during training then it locks up sometimes. but there isn't an issue with evaluating trained neox models
Louis#0144: My lab uses A6000s and A40s Louis#0144: None available Rn tho will.thompson#5333: ah, I've seen these. Ok, thanks. Let me read these papers more thoroughly. Where should I go once I've selected a paper to reproduce? StellaAthena#3530: @will.thompson So yeah, we've been working to build an ecosystem for supporting research. We have a lot of the pieces and if you'd rather do infra than research per se there's plenty of ways to contribute to the ecosystem StellaAthena#3530: for the two adversarial ml papers, DM me. For the scaling laws paper post in #scaling-laws and tag or bmk will.thompson#5333: will do, thanks! excited to get involved. StellaAthena#3530: Happy to have you 🙂 Kia#2550: Ow that honestly make sense 𓅬 gabriel_syme 𓅬#3220: welcome! this an amazing place to do open source cool work indeed. hell, it's an amazing place even if you do absolutely no work (looking at me) aٴ#8803: :goose: EricHallahan#1051: `pyfra` :tribalism: StellaAthena#3530: Very cool paper on the security repercussions of listening to Codex too much https://arxiv.org/abs/2108.09293 EricHallahan#1051: In #prompting we were talking about if you ask for secure code you can get secure code out. EricHallahan#1051: https://smitop.com/post/codex/ EricHallahan#1051: *Just ask the model to be nice.* StellaAthena#3530: hmm dmayhem93#3202: try this: y = x[torch.arange(B), inds] if you're still looking for help cfoster0#4356: (Emphasis mine) >>> GitHub Copilot seems to use a variant of cushman for cases where faster completions are desired (such as autocomplete), **and an undocumented “earhart” model when slower completions are desired** (such as when a user clicks a button to explictly request completions).
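A minimal sketch of the indexing trick dmayhem93 suggested for sweg's question above, assuming `x` is a `[B, N]` tensor and `inds` holds one column index per row; the `gather` line is the equivalent formulation that was proving awkward:
```python
import torch

B, N = 4, 10
x = torch.randn(B, N)
inds = torch.randint(N, (B,))  # one index per batch element

y = x[torch.arange(B), inds]                           # advanced indexing -> shape [B]
y_gather = x.gather(1, inds.unsqueeze(1)).squeeze(1)   # same result via gather
assert torch.equal(y, y_gather)
```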
StellaAthena#3530: what EricHallahan#1051: I assume they must be referring to davinci-codex EricHallahan#1051: Because that would be the logical way to resolve something like that. Teemochu#8740: E? :thonk: guac#4716: **a**da, **b**abbage, **c**urie, **d**avinci, **e**arhart. one more model until we get to **g**.... EricHallahan#1051: Like as far as I can tell we have no evidence that an `earhart` exists. guac#4716: can you inspect page source in codex to see if they make an api call to "earhart" Sahl#0630: it’s funny that we’re looking for earhart EricHallahan#1051: I think the post implies that it is not just undocumented, but Copilot exclusive. kindiana#1016: gpt4 confirmed? 🙃 Sahl#0630: gpt4-ada EricHallahan#1051: It sounds like it is literally impossible to verify and it is just their gut feeling. kurumuz#5695: my gut says it's a smaller than davinci model. guac#4716: base copilot cushman kurumuz#5695: why would it be bigger than davinci, that doesn't make sense. guac#4716: https://cdn.discordapp.com/attachments/729741769738158194/879590929365860352/Screen_Shot_2021-08-24_at_1.00.16_AM.png kurumuz#5695: inference cost is simply too high Sahl#0630: bc d < e QED guac#4716: let me do some digging lol EricHallahan#1051: Because they can and Microsoft can foot the bill.
kurumuz#5695: then it would be on the openai api as well imo. Sahl#0630: you can probably guess how big the model is based on latency right kurumuz#5695: depends kurumuz#5695: they might have OP hardware EricHallahan#1051: Not really, I feel like I have had this conversation before. EricHallahan#1051: A long time ago though. EricHallahan#1051: They might have nice hardware. EricHallahan#1051: They might not. EricHallahan#1051: Who knows? EricHallahan#1051: ¯\_(ツ)_/¯ Sahl#0630: regardless latency would be a hint guac#4716: idk all i'm getting for the engine is engine `v1/engines/github-py-stochbpe-cushman-pii` guac#4716: no earhart guac#4716: even when "click for explicit request completions" kindiana#1016: what does stochbpe mean 🤔 kindiana#1016: bpe dropout? guac#4716: sounds about right EricHallahan#1051: It seems like a pretty bold claim that they back up with no evidence at all. EricHallahan#1051: So I am not surprised lol EricHallahan#1051: It has to be something along those lines.
guac#4716: lol why would they randomly name a model after amelia earhart. sounds fish zphang#7252: we don't even know what a cushman is yet EricHallahan#1051: Valid zphang#7252: so earhart doesn't implausible guac#4716: v true EricHallahan#1051: It just makes little sense. zphang#7252: the E is for ||Eleuther|| cfoster0#4356: Is pii Personally Identifiable Information? EricHallahan#1051: I have no idea. alstroemeria313#1694: @cfoster0 how well does your simple diffusion code work? this <https://github.com/cfoster0/simple-diffusion-model> alstroemeria313#1694: separately, how does MCTS work if like... you have a general value network instead of something that predicts win/loss alstroemeria313#1694: Like say I have a value network that, for any text prompt and partial sequence of VQGAN tokens, predicts the CLIP score at the end of the series of tokens. alstroemeria313#1694: And I want to use it to detect bad choices during sampling and roll back alstroemeria313#1694: @inox IKEA#9631: Have you ever considered making a Slack server for higher quality discussions and not being like, invaded by a bunch of 14 yo furries who wanna use AI for writing dragon erotica and whatnot kurumuz#5695: uhh kurumuz#5695: If you think the quality of discussions are not on your level, staying or leaving is your decision. kurumuz#5695: Also, slack sucks nshepperd#2316: @alstroemeria313 i think you can basically consider MCTS to be parameterized over any multi armed bandit algorithm to be used at each node nshepperd#2316: like normally ppl used ucb1 which is specifically for 0-1 rewards but there's probably something appropriate for general rewards in the literature
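For reference, the UCB1 rule nshepperd mentions, written out as a small self-contained sketch (the names are mine; this is the standard textbook form, which assumes rewards lie in [0, 1]):
```python
import math

def ucb1_select(counts, reward_sums, c=math.sqrt(2)):
    """Return the index of the child to visit next under UCB1."""
    total_visits = sum(counts)
    best_i, best_score = None, float("-inf")
    for i, (n, r) in enumerate(zip(counts, reward_sums)):
        if n == 0:
            return i  # visit every child at least once before exploiting
        score = r / n + c * math.sqrt(math.log(total_visits) / n)
        if score > best_score:
            best_i, best_score = i, score
    return best_i
```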
AI_WAIFU#2844: > slack no, we'll just gatekeep the good channels kurumuz#5695: @AI_WAIFU you can gatekeep just by talking math :grimberk: AI_WAIFU#2844: :yes: alstroemeria313#1694: "pick the highest, lol" alstroemeria313#1694: ? nshepperd#2316: that's one option, lol alstroemeria313#1694: Wait can't you just plug average reward into UCB1 alstroemeria313#1694: oh wait the thing I actually want is called PUCT? nshepperd#2316: if it's between 0 and 1 then yeah probably alstroemeria313#1694: Reward can be anything I think alstroemeria313#1694: Specifically for my case it is -1 to 1 but I can rescale if it that's necessary nshepperd#2316: if you can make your value network output a variance for the prediction you could use it as a prior for a normal reward distribution and use UCB nshepperd#2316: ...or something alstroemeria313#1694: I can do that alstroemeria313#1694: I just have to make it output mean and log variance for the final CLIP score and use negative log likelihood as the loss alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/879680110800814100/Screen_Shot_2021-08-24_at_3.54.39_AM.png alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/879680136981671936/Screen_Shot_2021-08-24_at_3.54.46_AM.png alstroemeria313#1694: Like if I have a fused value/policy network. nshepperd#2316: ahh
nshepperd#2316: yep that looks like it'll work alstroemeria313#1694: ...What is the difference between an action value and an evaluation value. nshepperd#2316: evaluation value is the actual score that was obtained at the end alstroemeria313#1694: oh alstroemeria313#1694: how do i get that, i can't go to the end a bunch, it's expensive alstroemeria313#1694: i also can't expand all possible child nodes nshepperd#2316: well, you can rollout a bunch of steps then stop at some cutoff point and treat whatever the value network outputs as the 'actual score' alstroemeria313#1694: i can get probabilities from the policy network for each child node of a node and try to improve/amplify those somehow alstroemeria313#1694: i think this is what AlphaZero actually did alstroemeria313#1694: bc Go has too many legal moves. alstroemeria313#1694: ah nshepperd#2316: since the value network should get more accurate as it gets closer to the end alstroemeria313#1694: i can *sample* from the probabilities output by the policy network alstroemeria313#1694: and explore those alstroemeria313#1694: but i need to limit how many. nshepperd#2316: sample from the top k or something? alstroemeria313#1694: yeah alstroemeria313#1694: i usually use top-p alstroemeria313#1694: and sample one. alstroemeria313#1694: for VQGAN tokens.
alstroemeria313#1694: so i could use top-p and sample k nshepperd#2316: yep alstroemeria313#1694: so like... ok alstroemeria313#1694: trying to work out the exact sampling procedure. alstroemeria313#1694: The simplest possible thing is just to pick the action w/ the highest action value? alstroemeria313#1694: Like sample k of top-p, evaluate their values, and just pick the best lol. alstroemeria313#1694: Even that would be an improvement on pure autoregressive sampling, right? nshepperd#2316: yeah, should be alstroemeria313#1694: So if I expanded two steps ahead instead. I would sample k of top-p for each child node and get the average action value for each child node, then pick the best? alstroemeria313#1694: That seems wasteful/wrong alstroemeria313#1694: If I looked two steps ahead why not get the *best* action value for each child node and then take *both* steps, or at least take one step and then expand the tree by one more step. nshepperd#2316: so sort of like beam search? sample k two step sequences, pick the one with highest action value? alstroemeria313#1694: Yeah alstroemeria313#1694: Well alstroemeria313#1694: Sample k, then sample k for each one alstroemeria313#1694: Take the one step whose child had the best action value of all of them. Then reuse the already-sampled k child nodes, expanding them each w/ k sampled children. alstroemeria313#1694: I don't want to *re*sample the child nodes, I want to use the one I actually saw to pick the best action with. nshepperd#2316: right alstroemeria313#1694: ...At some point I hope this reasoning ends up at an algorithm people actually use in practice for MCTS ^^;; nshepperd#2316: and MCTS is basically that but you do each sample sequentially, somehow reusing scores from the previous samples to try and bias the following samples toward the child node most likely to be the best
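A rough sketch of the simplest procedure described in this exchange: sample k candidate next tokens via top-p from the policy head, score each candidate with the value head, and keep the best one. `model` and `top_p_sample` are hypothetical stand-ins here, not an existing API:
```python
import torch

@torch.no_grad()
def value_guided_step(model, tokens, k=8, p=0.9):
    # tokens: [seq] of VQGAN codes so far. Assumed: model(seq_batch) returns
    # (policy logits over the next token, a scalar value estimate per sequence).
    logits, _ = model(tokens.unsqueeze(0))
    # top_p_sample is an assumed helper that nucleus-samples one token id
    # (a 0-dim LongTensor) from a vector of logits.
    candidates = [top_p_sample(logits[0, -1], p) for _ in range(k)]
    values = []
    for tok in candidates:
        extended = torch.cat([tokens, tok.view(1)])
        _, value = model(extended.unsqueeze(0))
        values.append(value.item())
    best = max(range(k), key=lambda i: values[i])
    return torch.cat([tokens, candidates[best].view(1)])
```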
alstroemeria313#1694: hm... alstroemeria313#1694: ...So I can just train the thing and then try different sampling algorithms. alstroemeria313#1694: Like an AR transformer, first or first few tokens are the projected CLIP embedding of the prompt, and it has both a policy head (logits) and a value head (mean and log variance). alstroemeria313#1694: Not a DT, if I have a value head I can't feed in the value at the beginning of the sequence lol nshepperd#2316: yep alstroemeria313#1694: And I just use the value head to improve on pure AR sampling. alstroemeria313#1694: Which I could still just fall back to. alstroemeria313#1694: i could work out a way to use the variances later. alstroemeria313#1694: (so, separately, I'm trying @cfoster0 's simple diffusion code rn, I probably want to add FID evaluation to it if sampling is fast enough in practice) nshepperd#2316: (I once implemented a MCTS for a toy problem with a small set of discrete outcome values. For the sampling algorithm i had each node,action pair maintain a dirichlet distribution over the possible outcomes and just used thompson sampling. It did work, but then I realised using MCTS at all for this particular problem was actually worse than fully expanding the game tree because of how the problem worked, and felt very silly.) alstroemeria313#1694: ooh alstroemeria313#1694: Yeah my tree is too big alstroemeria313#1694: For a 16k code VQGAN, 16k child nodes per parent node nshepperd#2316: exhaustive search through all possible vqgan images :catgirl5: alstroemeria313#1694: eheh. alstroemeria313#1694: so how this is going to behave is, AR sampling will do basically the same thing as a text conditioned transformer without Decision Transformer alstroemeria313#1694: i.e. it will often generate coherent images but not look like the prompt alstroemeria313#1694: hm alstroemeria313#1694: wait alstroemeria313#1694: How do I teach the value head about off-distribution stuff.
alstroemeria313#1694: Like if I sample a bad VQGAN code how does the value head know to rate it low. alstroemeria313#1694: If it hadn't seen sampled outputs during training. alstroemeria313#1694: ...I need to raise batch size on this diffusion run, GPU util is at 33% nshepperd#2316: that's tricky. you might need just sample a few images from your network and train it on those. and hope that it's very sample efficient with generalisation from the on distribution stuff nshepperd#2316: so that you don't have to generate 100,000s of samples alstroemeria313#1694: yeah... alstroemeria313#1694: mb i could freeze the original network's weights alstroemeria313#1694: and train it further on a mix of sampled + normal alstroemeria313#1694: so i didn't mess up the policy head with the sampled training data. even if i don't use AR loss on those. alstroemeria313#1694: idk. alstroemeria313#1694: Default batch size was 4. alstroemeria313#1694: And 4 grad acc steps. alstroemeria313#1694: I changed it to 64 and 1. alstroemeria313#1694: Is now training much faster. nshepperd#2316: yay alstroemeria313#1694: ...how fast is sampling actually. from this cifar-10 diffusion model. alstroemeria313#1694: Bc I want to do FID if possible during training. alstroemeria313#1694: And I need to sample at least 10k for it. alstroemeria313#1694: -.- also the training code uses tqdm and then does regular print()s so the progress bar is usually not visible on screen alstroemeria313#1694: oh well, it has wandb and i can watch it from that
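A sketch of the two-headed model being described above: a causal transformer backbone with a policy head (token logits) and a value head that predicts a mean and log variance of the CLIP score per position. All names here are made up for illustration; `backbone` stands in for any causal transformer returning hidden states of shape `(batch, seq, d_model)`.
```python
import torch
from torch import nn

class PolicyValueTransformer(nn.Module):
    """Hypothetical wrapper: AR backbone + policy head (logits) + value head (mean, log_var)."""
    def __init__(self, backbone, d_model, vocab_size):
        super().__init__()
        self.backbone = backbone
        self.policy_head = nn.Linear(d_model, vocab_size)
        self.value_head = nn.Linear(d_model, 2)   # [mean, log_var] per position

    def forward(self, tokens):
        h = self.backbone(tokens)
        logits = self.policy_head(h)
        mean, log_var = self.value_head(h).unbind(-1)
        return logits, mean, log_var
```
Training would presumably combine the usual AR cross-entropy on the logits with a Gaussian negative log-likelihood on the value head against the final CLIP score.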
alstroemeria313#1694: ok, so the loss for this diffusion model is just MSE between the predicted noise and the actual noise? alstroemeria313#1694: no KL term? alstroemeria313#1694: (this sort of thing is why I want FID, so I can evaluate what changes to the arch or losses do) nshepperd#2316: with my RL on vqgan I realized that because i was using the *image* clip embedding as the prompt/input, the reward on any dataset image would be 1 by definition. so i had to either make the value network output an predicted clip embedding and hope that the cosim of that with the prompt would be a decent predicted award. or make it directly predict rewards and train exclusively on samples alstroemeria313#1694: oh alstroemeria313#1694: why not use the text embedding though. alstroemeria313#1694: (I forget why) nshepperd#2316: my text embeddings were kind of rubbish since i didn't have proper captions for that dataset alstroemeria313#1694: ah. nshepperd#2316: just a bag of not really natural language tags alstroemeria313#1694: (hm so sampling from this diffusion model predicts the original noise, scales the prediction by a timestep dependent amount, and subtracts it, then scales the result by a different timestep dependent amount.) alstroemeria313#1694: (then renoises the image w/ different noise and does it again etc.) alstroemeria313#1694: for 1000 timesteps. cfoster0#4356: Which part? I think the wrapper likely works correct but the U-Net is certainly suboptimal alstroemeria313#1694: mm alstroemeria313#1694: i went ahead and tried it. going to add FID evaluation now. alstroemeria313#1694: how many samples should i do per batch cfoster0#4356: Here are some wandb curves for it https://wandb.ai/cfoster0/simple-diffusion-model?workspace=user-cfoster0 alstroemeria313#1694: ty :) cfoster0#4356: Tbh I don't know
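For reference, the standard DDPM recipe being described above: the training loss is just MSE between the true and predicted noise, and each reverse step subtracts a timestep-scaled noise prediction, rescales, and re-noises. A rough sketch following Ho et al. 2020, where `model(x, t)` is assumed to predict the noise and `betas`/`alphas_cumprod` are precomputed 1D schedules:
```python
import torch
import torch.nn.functional as F

def ddpm_loss(model, x0, alphas_cumprod):
    """Simple DDPM objective: MSE between the actual and predicted noise."""
    t = torch.randint(0, len(alphas_cumprod), (x0.shape[0],), device=x0.device)
    ac = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = ac.sqrt() * x0 + (1 - ac).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)

@torch.no_grad()
def ddpm_sample_step(model, x_t, t, betas, alphas_cumprod):
    """One reverse step: subtract the scaled noise prediction, rescale, then re-noise."""
    beta = betas[t]
    alpha = 1 - beta
    ac = alphas_cumprod[t]
    eps = model(x_t, torch.full((x_t.shape[0],), t, device=x_t.device))
    mean = (x_t - beta / (1 - ac).sqrt() * eps) / alpha.sqrt()
    if t == 0:
        return mean
    return mean + beta.sqrt() * torch.randn_like(x_t)   # sigma_t^2 = beta_t variance choice
```
With 1000 timesteps the sampling loop just calls `ddpm_sample_step` for t = 999 down to 0, which is why generating 10k samples for FID is slow.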
cfoster0#4356: (going afk for a bit again) alstroemeria313#1694: though if i do too many i will use a bunch of memory running them through inceptionv3. alstroemeria313#1694: k~ alstroemeria313#1694: lol FID is way too slow alstroemeria313#1694: to do during training. alstroemeria313#1694: It would take 1h30m on 1x A6000. alstroemeria313#1694: I can't really do it until the end I guess alstroemeria313#1694: Oh well I'll just do it then I guess alstroemeria313#1694: For 10k samples. cfoster0#4356: Were you trying to backprop on it during training? alstroemeria313#1694: On what? cfoster0#4356: The FID alstroemeria313#1694: no alstroemeria313#1694: I was using torch-fidelity alstroemeria313#1694: And I did it in a no_grad() alstroemeria313#1694: er alstroemeria313#1694: Which I managed to mess up the indentation for alstroemeria313#1694: no it's still really slow. cfoster0#4356: I guess it makes sense if you're generating 10k samples and each takes 1k steps alstroemeria313#1694: yeah
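For the FID evaluation being discussed, a guess at what the torch-fidelity call might look like; the paths and registered dataset name are placeholders, and the exact keyword names should be checked against the library docs:
```python
import torch_fidelity

# 'samples/' is a hypothetical folder of 10k generated PNGs.
metrics = torch_fidelity.calculate_metrics(
    input1='samples/',
    input2='cifar10-train',   # torch-fidelity's registered name for the CIFAR-10 train set
    cuda=True,
    fid=True,
)
print(metrics['frechet_inception_distance'])
```
Wrapping the sample generation (not this call) in `torch.no_grad()` is what matters for memory; the metric computation itself does not backprop.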
alstroemeria313#1694: :/ alstroemeria313#1694: oh well i'll just train it for a while alstroemeria313#1694: eventually i may find a way to parallelize sampling across multiple gpus w/ torch-fidelity alstroemeria313#1694: mb could just do it every 100k steps and not on step 0 inox#5400: there's UCB update rules to backprop (different kind of backprop, it's a bad name) the value of the CLIP score to the intermediate states, and you regress toward that so the value network ends up producing a value for any state alstroemeria313#1694: ah inox#5400: that's only for the one kind of MCTS I'm familiar with inox#5400: and it's 9am so I will now disappear alstroemeria313#1694: I was just going to train a value head for the transformer model that outputted the predicted CLIP score at every timestep. alstroemeria313#1694: Mb it would output a predicted mean and (log) variance. cfoster0#4356: At some point soon I'll probably update this to use the VDM formulation with signal to noise ratios, and the low discrepancy sampler alstroemeria313#1694: *nods* alstroemeria313#1694: OpenAI said their models benefited from EMA AerysS#5558: Anyone knows a markdown editor that supports KaTeX/MathJax + is cross-platform (browser only is ok too) and *can paste an image directly from clipboard*? I have been using StackEdit, but they do not support pasting the image, so I am looking for an alternative. I tried Notion, planned to import from StackEdit, but some equations weren't rendered properly and I guessed it was due to limited KaTeX support. cfoster0#4356: When you do EMA like that, that's outside your optimizer, right? Like you update your weights according to the optimizer as usual, and just keep a separate EMA version? CRG#8707: Ranger / lookahead do it inside the optimizer (every k=5 steps), but I think usually it's done outside https://cdn.discordapp.com/attachments/729741769738158194/879727209613520947/14b7378b914f3a79943351bacf851d63.png alstroemeria313#1694: Yes. alstroemeria313#1694: I have code for it
alstroemeria313#1694: ```python @torch.no_grad() def ema_update(model, averaged_model, decay): model_params = dict(model.named_parameters()) averaged_params = dict(averaged_model.named_parameters()) assert model_params.keys() == averaged_params.keys() for name, param in model_params.items(): averaged_params[name].lerp_(param, 1 - decay) model_buffers = dict(model.named_buffers()) averaged_buffers = dict(averaged_model.named_buffers()) assert model_buffers.keys() == averaged_buffers.keys() for name, buf in model_buffers.items(): averaged_buffers[name].lerp_(buf, 1 - decay) ``` alstroemeria313#1694: it's just this alstroemeria313#1694: at the beginning of training you do `model_ema = deepcopy(model)` (you get `deepcopy()` from `from copy import deepcopy`) alstroemeria313#1694: Then do `ema_update(model, model_ema, ema_decay)` right after `opt.step()`.
alstroemeria313#1694: Where `ema_decay` is something like 0.999 (OAI used 0.9999 in their diffusion training, I think I did my finetune with 0.998 that I kicked in manually after a couple thousand steps) alstroemeria313#1694: You just save the `model_ema` state dict in your checkpoints. alstroemeria313#1694: And load it back in on resume. alstroemeria313#1694: You do not have to use the same ema_decay on every iteration, you can vary it or kick it in only after the model gets decent or w/e. Kharr#7888: You can also do it in the optimizer by just keeping a buffer of the weights alstroemeria313#1694: that's a pain in pytorch 🙃 alstroemeria313#1694: also it won't do EMA over buffers. Kharr#7888: What do you mean? Optimizer already keeps track of various states. Adding a copy of the weights to it is no big deal. alstroemeria313#1694: I mean like batchnorm stats and stuff. alstroemeria313#1694: They aren't included in PyTorch optimizers as per the torch.optim interface. Kharr#7888: Maybe I'm missing something. You're saying you can't do this `Then do ema_update(model, model_ema, ema_decay) right after opt.step().` as the final step of the optimizer instead of right after it? (I'm kind of confused, that's why I'm asking). You can extend any of the default optimizers to do more things. nshepperd#2316: optimisers don't get to see the model with the standard optimiser interface, just a set of parameters to optimise nshepperd#2316: you could make a custom thing that combines an optimiser with ema using a different interface but idk if that would really be better yoges#7578: is maths 55 sufficient for doing machine learning? alstroemeria313#1694: huh, PyTorch ASGD does it by keeping an averaged version of each parameter around in the optimizer state but I'm looking at it rn and I see no convenient way of getting the averaged model back out to do things with it alstroemeria313#1694: (ASGD is a simple average rather than an EMA) alstroemeria313#1694: Like you have to do optimizer state dict surgery to get at the averaged params? alstroemeria313#1694: And it still doesn't handle stuff like batch norm stats. cfoster0#4356: You should probably read the FAQ EricHallahan#1051: https://www.eleuther.ai/faq
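Putting the EMA pieces above together, a rough training-loop sketch. `model`, `opt`, `loader`, and `loss_fn` are placeholders, and `ema_update` is the function from the earlier snippet:
```python
import torch
from copy import deepcopy

model_ema = deepcopy(model)

for step, batch in enumerate(loader):
    loss = loss_fn(model(batch))
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(model, model_ema, 0.999)   # right after opt.step()
    if step % 10_000 == 0:
        torch.save({'model': model.state_dict(),
                    'model_ema': model_ema.state_dict(),
                    'opt': opt.state_dict()}, 'checkpoint.pth')
```
At evaluation/sampling time you use `model_ema` rather than `model`.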
alstroemeria313#1694: idk, you need multivariable calculus (mostly not integrals though), linear algebra, and statistics EricHallahan#1051: https://cdn.discordapp.com/attachments/198658294007463936/879614108213850152/1629730800590.png oreo#2740: Hi everyone, I have a question: is jax better than pytorch for large-scale LM training? If so, why? oreo#2740: Also, just to introduce myself and get to know people: I work at a biotech/AI company that has a lot of biomedical/clinical text. I'm in charge of training large LMs on them, and I've been learning a lot from just lurking around here EricHallahan#1051: JAX vs PyTorch is like wrench vs screwdriver: Both ecosystems do the same things, but each in their own way. oreo#2740: If anyone's interested in biomedical/clinical LMs and NLP, feel free to DM me 🙂 alstroemeria313#1694: ok I added FID evaluation to cfoster0's diffusion code alstroemeria313#1694: i'll have results after 100k steps in ~4 hours probably. EstebanSir#2189: I wonder how bigger GPT models could be used, it takes a very beefy gpu or good tpu to run the 6b, but how would one run a 175b parameter model? is the amount of compute per parameter linear? kindiana#1016: 8 A100s or something lol kindiana#1016: compute per param is constant Orz#3023: 8 would be sufficient? kindiana#1016: should be for inference Orz#3023: oh... Orz#3023: I mean they can be distilled right? EricHallahan#1051: ¯\_(ツ)_/¯ kindiana#1016: sure, but we don't know what the performance/size tradeoffs would be Orz#3023: aight Awesome_Ruler_007#7922: But it would definitely be worth it, while still outperforming other models available
Awesome_Ruler_007#7922: is a sparse and distilled version of GPT-j up?
cfoster0#4356: No, I don't think we've done any experimenting with distilling or pruning that model
Awesome_Ruler_007#7922: sad. I guess that can be a project in itself - I saw NeuralMagic the other day and they have some impressive results on CV models
liberty90#6297: Medium-term Moore's Law is also important, I think? 2025-hardware should offer better possibilities?
Ezzaky#2038: hey guys, any dev free for a freelance hire to build a web app based on GPT-j .... hit me up please
GABRIEL fatiede#7028: I'm down
Kazumi#1297: is it possible to train gpt-j or gpt-neo on colab? I tried training from huggingface's gpt-neo, but 1.3B one wasn't possible, because it runs out of memory, even with high ram, GPU or TPU
Louis#0144: I don’t think those are trainable in colab unless I’m wrong
Kazumi#1297: or, what size is possible to train, I guess
Intruder!#7099: I was going through your thread about beam search. Could you please clarify how K or P sampling is similar to beam search.
Louis#0144: 2.7b fits in a 3090 with like 64gb of RAM (using DS)
Louis#0144: 1.3b could work in a similar setup on a v100 but I think you’d need 40gb of Ram
Louis#0144: Which should fit in colab pro+?
Louis#0144: Anyway just get TRC and finetune them
Louis#0144: Don’t use colab
Louis#0144: You can finetune GPT J then
Intruder!#7099: GPT J? it can fit in colab pro plus?
Louis#0144: GPT J cannot for training
Kazumi#1297: TRC? not TFRC? or is that the same
Louis#0144: You would have to use TRC
Louis#0144: they’re the same thing Louis#0144: TFRC renamed Intruder!#7099: i meant for fine tuning. Louis#0144: I know Kazumi#1297: hmm, what's the expected cost for that? Louis#0144: Free on TRC Louis#0144: Lmao Kazumi#1297: ah, so it was renamed to TRC, it's not just tensorflow anymore Kazumi#1297: what's the bar of entry for them? bmk#1476: you must have a pulse (optional) Kazumi#1297: wait, you can use TPUs for Julia? 𓅬 gabriel_syme 𓅬#3220: ye i think it works Teemochu#8740: 6B does but you have to do trix 𓅬 gabriel_syme 𓅬#3220: there are some links on the TRC page or smth I believe Kazumi#1297: didn't know until I saw the option in the application Teemochu#8740: also that's a pony :mlp: 𓅬 gabriel_syme 𓅬#3220: I would imagine it's not fun though rn 🙂 Kazumi#1297: this is pony Kazumi#1297: just have a friend who's really into Julia and machine learning, need to tell her now EricHallahan#1051: also that's a cereal
𓅬 gabriel_syme 𓅬#3220: does one of those trix involve milk? Kazumi#1297: well, I've always been intimidated by an application, but here it goes I guess u-sci#2261: did you get approved yet? cfoster0#4356: Wait when? I haven't seen any progress on XLA.jl in a while Kazumi#1297: Idk, it might not be, I just thought so because of this question, but it might just be asking if you're interested if it happens in the future or something https://cdn.discordapp.com/attachments/729741769738158194/879921063297957898/Screenshot_from_2021-08-25_11-50-57.png Kazumi#1297: this page is basically 2-3 years old <https://github.com/JuliaTPU/XLA.jl> iOhadRubin#3747: https://tenor.com/view/slowpoke-pokemon-water-psychic-gif-7898037 iOhadRubin#3747: Try pressing "." dot on any github repository iOhadRubin#3747: they implemented github1s lol 𓅬 gabriel_syme 𓅬#3220: yeah it's pretty cool Kazumi#1297: I learnt that when a friend shared this https://youtu.be/ywUZOOzLX3c Orz#3023: Hello I would like to know if there is any pre written code to change jsonl.zstd files to tfrecords aٴ#8803: So the following data represents the accuracy of a model at each epoch. Approximately which epoch do you think I should stop training at? ```py 1 0.4477 2 0.4691 3 0.4751 4 0.4799
5 0.4839 6 0.4873 7 0.4897 8 0.4915 9 0.4927 10 0.4938 11 0.4946 12 0.4949 13 0.4954 14 0.4955 15 0.4964 16 0.4966 17 0.4968 18 0.497 19 0.497 20 0.4969 21 0.4971 22 0.4971 23 0.4976 24 0.4976
25 0.4974 26 0.4978 27 0.4978 28 0.498 29 0.4981 30 0.4979 31 0.4982 32 0.4981 ``` Teemochu#8740: Is this a homework question? Because otherwise you're leaving out a lot of info. aٴ#8803: uh oh aٴ#8803: What other information should I provide? aٴ#8803: do you want the loss too? Teemochu#8740: What kind of model? What kind of dataset? Accuracy on what? Is this accuracy on a validation set or could the model be overfitting? Speaking of, what do the losses look like? Why can't you get more data and use one epoch (may sound trite, but one epoch is all you need unless you're specifically wanting to try out a specific toy dataset)? aٴ#8803: LSTM, pretty big dataset, accuracy on the validation set aٴ#8803: shit I just lossed the losses aٴ#8803: Ok so you said something interesting about one epoch being all I need aٴ#8803: this question might be silly but why would 1 epoch be sufficient if there is a clear improvement between the first and 2nd epochs Teemochu#8740: More data I meant aٴ#8803: wym?
Teemochu#8740: Example the Pile is about a terabyte, much bigger than Wikipedia for instance. aٴ#8803: Personally I was thinking that maybe 5 or 10 epochs would be sufficient aٴ#8803: but idk aٴ#8803: Because the bot is technically still going up but the difference in accuracy with each additional epoch is minimal kindiana#1016: train until you overfit or don't want to spend more compute lol aٴ#8803: Yeah ik that aٴ#8803: But my question is where would you draw that line kindiana#1016: well kindiana#1016: how much compute do you want to spend? aٴ#8803: Less is good 😏 kindiana#1016: we can't answer that for you lol Teemochu#8740: How long does an epoch take? aٴ#8803: ~10min guac#4716: your model is borked or you've converged aٴ#8803: wym borked? Teemochu#8740: That's a smol model/dataset aٴ#8803: ye input data is ~2gb aٴ#8803: Alright so we'll just say that 10 epochs is good enough for now aٴ#8803: My reasoning being that any improvement beyond that is just far too gradual that it does not justify the extra computational time kindiana#1016: you literally have the compute to accuracy graph in front of you lol, just need to work out what amount of additional accuracy justifies 1 epoch more of compute
kindiana#1016: we can't answer for what tradeoff you want to make aٴ#8803: yes Cyclcrclicly#3420: if you make an api request to api.openai.com/v1/engines/codegen-earhart/completions with codex access it'll send you back a response from davinci-codex so that's solved ig mr_seeker#1337: Anyone here with more knowledge than me? ``` Function 'LogSoftmaxBackward' returned nan values in its 0th output' ``` I keep getting this error, and for the love of god I can't find out why that is... mr_seeker#1337: Its either transformers or torch... Deleted User#0000: Any good place to learn about Langevin dynamics and SDE that I need to understand the diffusion probabilistic model Deleted User#0000: ? guac#4716: zeros in the input to your logsoftmax. try to add an `eps` to see if that's the problem. mr_seeker#1337: eps where? In huggingface, in the trainer... bmk#1476: Trainer abstraction is bad and evil imo guac#4716: oh hmmm i don't touch HF sorry haha bmk#1476: just write your own training loop bmk#1476: wait is this fp16 by any chance kurumuz#5695: yeah kurumuz#5695: trainer has really bad bugs kurumuz#5695: it's evil
mr_seeker#1337: with fp16 I am getting descaling errors kurumuz#5695: you might want to use bfloat instead of fp16, your eps might set too low for fp16 as well guac#4716: ideally you'd add the `eps` to your model outputs prior to the torch.cross_entropy_loss kurumuz#5695: oh so its not fp16 kurumuz#5695: interesting mr_seeker#1337: if I set flag to --fp16 i am getting scaling errors, if I remove that flag I am getting logsoftmax error... bmk#1476: huh guac#4716: somethings going to zero ionno mr_seeker#1337: My thoughts exactly... bmk#1476: hunt down where the log softmax is and print the value before? bmk#1476: should be in the loss function mr_seeker#1337: There are a couple of them... ``` File "/home/julius/Documents/projects/foodporn/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1260, in train tr_loss += self.training_step(model, inputs) File "/home/julius/Documents/projects/foodporn/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1716, in training_step loss = self.compute_loss(model, inputs) File "/home/julius/Documents/projects/foodporn/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1748, in compute_loss outputs = model(**inputs) File "/home/julius/Documents/projects/foodporn/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs) File "/home/julius/Documents/projects/foodporn/venv/lib/python3.8/site-packages/transformers/models/gpt_neo/modeling_gpt_neo.py", line 949, in forward loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)) File "/home/julius/Documents/projects/foodporn/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/julius/Documents/projects/foodporn/venv/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 1120, in forward return F.cross_entropy(input, target, weight=self.weight, File "/home/julius/Documents/projects/foodporn/venv/lib/python3.8/site-packages/torch/nn/functional.py", line 2824, in cross_entropy return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) (function _print_stack) ``` mr_seeker#1337: There is something with the model itself... ``` break: Found param transformer.wte.weight with type torch.cuda.HalfTensor, expected torch.cuda.FloatTensor. When using amp.initialize, you do not need to call .half() on your model before passing it, no matter what optimization level you choose. ``` mr_seeker#1337: at this moment I really want to put a bullet through the computer... its been 3 weeks since I am trying to get this to work, and it seems i am hitting nothing but walls mr_seeker#1337: I think I have an nvidia chip with 32-bits enabled, but it cant handle 16-bit float... Gurkenglas#7362: It's silly how I have to put my logging code into my core code. Can I have it on the outside somehow?
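One way to do the "print the value before the log softmax" debugging suggested above: keep anomaly detection on (it is what produced the `LogSoftmaxBackward` message) and assert the logits are finite right before the loss. A sketch with placeholder names:
```python
import torch

torch.autograd.set_detect_anomaly(True)  # points at the op that first produced NaNs

def check_finite(name, tensor):
    """Raise with some context if a tensor contains NaN or inf."""
    if not torch.isfinite(tensor).all():
        bad = (~torch.isfinite(tensor)).sum().item()
        raise RuntimeError(f"{name} has {bad} non-finite values "
                           f"(min={tensor.min().item()}, max={tensor.max().item()})")

# Inside the training step, just before the loss:
# check_finite("logits", shift_logits)
# loss = F.cross_entropy(shift_logits.view(-1, vocab_size), shift_labels.view(-1))
```
If the logits are already non-finite, the problem is upstream of the loss (bad inputs, exploding activations, or a scaling issue), not the cross-entropy itself.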
Awesome_Ruler_007#7922: I was thinking about approaches to model our current world using DL, and I had an idea for a basic POC: modelling the behaviour of objects with CV using a very **naive** approach.
For the top of the net, we have our stack of pre-trained conv layers for getting object features of the environment. This runs alongside another branch of the NN that segments *every* recognizable object in the scene.
Input would be a stack of 2 arrays of two images some time `t` apart. Suppose we have some structures that I call "columns" on the network and suppose the image is of a red square being held by someone. We can thus assign a new "column" for the specific object of the red square, containing a feature vector for the red square as per the initial conv stack. Imagine any such column would have a memory limit of `n` vectors.
Thus, if in the next image I rotate the square, that column would claim the square to be the same object it saw earlier (Euclidean distance or any such similarity metric). The key idea is that with a threshold (say `theta`) columns can store object vectors as long as they fall within the above threshold, and store a **new** feature vector for how the square looks if we rotate it.
Now imagine that there is this 1D sheet of columns over the features, with more levels of hierarchy in these columns. Thus, a column in the 2nd level is connected to 4 of our initial columns. Suppose 1 column of the 4 is for the red square, and the other 3 are for the human fingers holding the square. Thus, you would be able to model *change* in the initial feature vectors (say the shape of the square) considering the relative position of the fingers when we squeeze the **red square**.
That's a pretty rough overview, but technically we are modelling our environment. Maybe a possible path to AGI? Does anyone know if schmidhuber or someone else has already done this?
mr_seeker#1337: I am going to murder someone at huggingface using their own "trainer", because that stuff is broken. I am now using their "run_clm-no-trainer" script and it works...
KentC#7374: Anyone know the name or origin of the yin-yang symbol used in the positional encoding part of the original transformer diagram?
kurumuz#5695: @mr_seeker tfw facehugger hugs you
mr_seeker#1337: You can tell when a shitty Dev does shitty things, when I can't reproduce it on my machine.
metamyth#8558: Hi 👋. Can someone help me out with Speech-to-text? mr_seeker#1337: What you want to know? mr_seeker#1337: Is this considered overtuning or just bad luck? https://cdn.discordapp.com/attachments/729741769738158194/880086308339470336/WB_Chart_8_25_2021_3_47_39_PM.png StellaAthena#3530: Hard to say without context StellaAthena#3530: But it looks like the untrained model is better than the trained model across the board which is :sus: kurumuz#5695: shake your data really well, use higher batch kurumuz#5695: make sure your data is good kurumuz#5695: lol mr_seeker#1337: I prefer it stirred... Louis#0144: Lmao mr_seeker#1337: but its just a batch of cook books I am training on 2.7B kurumuz#5695: you really need an eval set. mr_seeker#1337: I think it jsut went a bit "hot" in there 😄 kurumuz#5695: take the %10 as eval kurumuz#5695: evaluate every n steps kurumuz#5695: but ye, shaking data is incredibly important mr_seeker#1337: training data is shuffled... like I said: just for the memes mr_seeker#1337: oh, and eval_dataset = train_dataset kurumuz#5695: ? don't do that kurumuz#5695: take a subset of training dataset and took that subset out of train
alstroemeria313#1694: @cfoster0 https://github.com/crowsonkb/simple-diffusion-model/commit/3e5363482ae6b2a6e18f4fd052c4e23620b1e8a6 alstroemeria313#1694: How's this look, do you want a PR alstroemeria313#1694: Warning, it takes ~90 minutes to evaluate on an A6000 w/ the current model alstroemeria313#1694: so mb there should be a way to just turn it off alstroemeria313#1694: i added a constant to turn it off cfoster0#4356: Hmm what's the difference between `input1_model_num_samples` and `EVALUATE_BATCH_SIZE`? alstroemeria313#1694: it does 10k samples and it does them in EVALUATE_BATCH_SIZE size batches alstroemeria313#1694: so 200 calls to model.generate() by default alstroemeria313#1694: 10k and 50k are the standard numbers of samples for FID eval cfoster0#4356: Oh. Hmm I think I'd prefer the default behavior to be a more reasonable number of samples, or to have this eval off by default alstroemeria313#1694: ok i will turn it off by default alstroemeria313#1694: i think under 10k the metric is not so great. alstroemeria313#1694: it's squared Wasserstein-2 distance between the distribution of feature vectors of the fakes and the distribution of feature vectors of the reals alstroemeria313#1694: (modeling the distributions as a multivariate Gaussian) alstroemeria313#1694: it needs a bunch of samples to be able to estimate the covariance matrix semi-well SadSan#0570: Hello guys, I don't know if there is a specific channel to ask questions, but I wanted to look for someone who trained BERT before from scratch, I have some questions about it. Thanks! alstroemeria313#1694: so I am getting better FID on a run where I removed the timestep conditioning... cfoster0#4356: Hmm. I'm using rotary to do the conditioning, but that's super nonstandard cfoster0#4356: It might be messing things up tbh cfoster0#4356: I should probably replace it with something more standard like a small MLP with mish or w/e
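The metric described above (squared Wasserstein-2 distance between Gaussian fits to the InceptionV3 features) can be computed directly from the feature statistics. A sketch, where `feats_real` and `feats_fake` are assumed to be N×2048 NumPy arrays of Inception features:
```python
import numpy as np
from scipy import linalg

def fid_from_features(feats_real, feats_fake):
    """FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):      # numerical noise can make the sqrt slightly complex
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2 * covmean))
```
The covariance estimate is why the sample count matters: with far fewer than 10k samples, the 2048×2048 covariance matrix is poorly estimated and the metric becomes unreliable.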
alstroemeria313#1694: *nods*
alstroemeria313#1694: How did OAI do it
cfoster0#4356: They do it in GroupNorms
cfoster0#4356: Improved DDPM:
>>> Additionally, we changed the way the model conditions on t. In particular, instead of computing a conditioning vector v and injecting it into hidden state h as GroupNorm(h + v), we compute conditioning vectors w and b and inject them into the hidden state as GroupNorm(h)(w + 1) + b.
cfoster0#4356: Guided diffusion:
>>> We also experiment with a layer [37] that we refer to as adaptive group normalization (AdaGN), which incorporates the timestep and class embedding into each residual block after a group normalization operation [61], similar to adaptive instance norm [21] and FiLM [41]. We define this layer as AdaGN(h, y) = y_s GroupNorm(h) + y_b, where h is the intermediate activations of the residual block following the first convolution, and y = [y_s, y_b] is obtained from a linear projection of the timestep and class embedding.
alstroemeria313#1694: oh so this one is... like biggan's conditional batch norms except with group norms?
alstroemeria313#1694: and this one is like stylegan except with group norms?
cfoster0#4356: I'm not read up on GAN architectures so idk. Probably?
alstroemeria313#1694: mm
alstroemeria313#1694: i'll look at the code
alstroemeria313#1694: i admittedly don't know what a group norm is yet
AI_WAIFU#2844: what happens to the log prob?
alstroemeria313#1694: what log prob
AI_WAIFU#2844: the probability of the data under your diffusion model
alstroemeria313#1694: how do i get that
AI_WAIFU#2844: you gotta calculate it, depends on your exact method but it should be in the paper. Don't bother if you haven't implemented it.
alstroemeria313#1694: it's cfoster0's repo
cfoster0#4356: *tagged as `wontfix`*
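A sketch of the AdaGN layer quoted above as a PyTorch module, assuming a timestep/class embedding `emb` of dimension `emb_dim`. The `(1 + scale)` form follows the Improved DDPM variant; the guided-diffusion variant uses the raw scale instead.
```python
import torch
from torch import nn

class AdaGN(nn.Module):
    """GroupNorm whose scale and shift come from a conditioning embedding:
    roughly AdaGN(h, y) = y_s * GroupNorm(h) + y_b."""
    def __init__(self, channels, emb_dim, groups=32):
        super().__init__()
        self.norm = nn.GroupNorm(groups, channels, affine=False)
        self.proj = nn.Linear(emb_dim, channels * 2)

    def forward(self, h, emb):
        scale, shift = self.proj(emb).chunk(2, dim=-1)   # each (batch, channels)
        scale = scale[:, :, None, None]
        shift = shift[:, :, None, None]
        return self.norm(h) * (1 + scale) + shift
```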
Daj#7482: Good podcast https://twitter.com/pabbeel/status/1430568993849413636?s=19 ethan caballero#6044: He says Dario believed/believes in scaling even more than he did. ethan caballero#6044: When talking about GPT history. Pepe Le Spooder#0420: So just kind of a question how good is Gpt at handling music? Pepe Le Spooder#0420: I'm working on a project for a game and was hoping to generate some atmospheric tracks based on a bunch of atmospheric songs that are really good Pepe Le Spooder#0420: https://open.spotify.com/track/7MKxaEBWILSMsJOBMTmoap?si=93304861247941a7 Pepe Le Spooder#0420: For instance Orz#3023: :thinkies: Pepe Le Spooder#0420: No vocals all repeating chords and instruments Orz#3023: when did gpt start handling music? guac#4716: you'd probably want to do something like a MuseNet Pepe Le Spooder#0420: was wondering if it did guac#4716: raw audio wouldn't be worht it guac#4716: https://openai.com/blog/musenet/ Pepe Le Spooder#0420: so musenet would be something that id be more interested in? guac#4716: yeah for simple game music that's probably the best bet Pepe Le Spooder#0420: hmmm interesting concept Pepe Le Spooder#0420: Are you able to feed it samples? guac#4716: what do you mean by that?
Pepe Le Spooder#0420: Samples of say full songs that have the beat your looking for about 100 of them and just introduce new instruments Pepe Le Spooder#0420: or a chorus that you like out of a song guac#4716: oh yeah as long as you output the MIDI you can have any virtual instrument play to it Pepe Le Spooder#0420: Alright thanks Pepe Le Spooder#0420: Exactly what i was looking for jordiae#4107: Why do people assume GPT-3 doesn’t know how to make poems because of BPE and not because English doesn’t sound the way you write it lol? jordiae#4107: How is GPT supposed to know that “book” and “door” don’t rhyme? StellaAthena#3530: @jordiae English has some weird pronunciations, but broadly speaking you can tell if word rhyme by looking at the text jordiae#4107: I’m pretty sure you absolutely cannot. Speaking as a non-native speaker EricHallahan#1051: It is. CRG#8707: You could learn it by reading how poetry is written. EricHallahan#1051: There also is this thing called IPA that tends to be written right next to words in dictionaries. StellaAthena#3530: Also, Spanish is a fully phonetic language. If you’re right @jordiae, it would have no problem rhyming in Spanish. jordiae#4107: I’m actually trying to test that jordiae#4107: The problem is that our model is 700M only StellaAthena#3530: Why not use GPT-J? bmk#1476: this is your problem, get yourself a bigger model jordiae#4107: I know lol but I can’t jordiae#4107: Because the pile is English only Pepe Le Spooder#0420: I mean
Pepe Le Spooder#0420: ITs trying Pepe Le Spooder#0420: but my god the flair is way too pronounced bmk#1476: just fine tune gptj Pepe Le Spooder#0420: crap its a ogg Pepe Le Spooder#0420: not mp3 jordiae#4107: I don’t see how a reasonably sized GPT can take advantage of IPA or poems. They account for an extremely tiny part of the dataset Pepe Le Spooder#0420: https://cdn.discordapp.com/attachments/729741769738158194/880170733521760277/076051a3-45c4-484d-996c-76a3c172bddb.mp3 EricHallahan#1051: "Reasonably sized" is a constraint that you have not discussed until now, and is quite relative. Pepe Le Spooder#0420: there we go Pepe Le Spooder#0420: i mean its trying Pepe Le Spooder#0420: but totally not doing the best of a job towards the middle Pepe Le Spooder#0420: Way too much flair on the piano jacobfnl#8292: Who has had success with deploying using docker section on the gpt-j repo? The docker fails to build on a TPU-VM. I see a note about it having been tested with a custom docker image, but the tensorflow container fails to build. It looks like it is potentially a proxy/firewall issue with apt-get install git. Ultimately the build fails. :/ jordiae#4107: You don’t even need Spanish. French, German, Italian etc have pronunciation rules StellaAthena#3530: As does English… jordiae#4107: It does not StellaAthena#3530: They’re more complicated and have more exceptions but they do exist Pepe Le Spooder#0420: Thats insane though Pepe Le Spooder#0420: it actually did pretty well not gonna lie Pepe Le Spooder#0420: but this was ment to be atmospheric music not elton john jamming for the queen
StellaAthena#3530: #gpt-j StellaAthena#3530: What is this? StellaAthena#3530: Like. How was it produced? guac#4716: the paper trains on classical music Pepe Le Spooder#0420: musenet EricHallahan#1051: https://github.com/kingoflolz/mesh-transformer-jax/blob/1c68a1a383cc1c5ef1784c0733e94310d9084b8f/docker/README.md Pepe Le Spooder#0420: I'm trying to generate some atmospheric music for a game with ai Pepe Le Spooder#0420: well Pepe Le Spooder#0420: i mean it kinda worked but Pepe Le Spooder#0420: not really the slow beat i was looking for jordiae#4107: My point is that this is the issue, not bpe. My second point is that I’m not sure you can test my first point without training a really large scale model in another language with uniform pronunciation rules. Not sure fine-tuning GPT-J can do the trick. Pepe Le Spooder#0420: it took the 5 midi samples i gave it of 30 second clips and just started jamming like elton john on the piano guac#4716: hahaha i'm not sure if there's a way to control BPM maybe manually. and try to switch the instrument to a synth pad Pepe Le Spooder#0420: Yeah im gonna have to look into that StellaAthena#3530: FYI, that’s not what you actually said. I think this is a much more reasonable claim, though I would personally bet against it. jordiae#4107: Another way to test my point would be a multi-modal text/speech GPT btw StellaAthena#3530: But take GPT-J and fine-tune it on a couple gigs of Spanish poetry jordiae#4107: What did I say then? EricHallahan#1051: This is something I have been interested it but I really don't know how useful it would be. StellaAthena#3530: Me: English has some weird pronunciations, but broadly speaking you can tell if word rhyme by looking at the text
You: I’m pretty sure you absolutely cannot. Speaking as a non-native speaker jordiae#4107: That’s another claim of mine and I still think it, hyperboles aside jordiae#4107: I don’t think “broadly speaking you can tell if word rhyme by looking at the text” is an accurate statement jordiae#4107: I’ll ask linguists though jordiae#4107: And see what they think jordiae#4107: If you are interested I’ll come back with their answers EricHallahan#1051: It is baked into the distribution of text, I don't see how it would not be true. jordiae#4107: I think it’s an extremely difficult task and belong to the very last few bits of the loss jordiae#4107: So I maintain that you definitely can’t generally tell if English words rhyme just by looking at the text “broadly speaking”. It’s a very hard task jordiae#4107: But I insist that I’ll try to survey linguists on the matter jordiae#4107: It’s definitely interesting to me and not even close to being an expert lol bmk#1476: i mean it's not all or nothing, most rhymes can be inferred bmk#1476: there are a few tough cases but sometimes humans struggle with those too jordiae#4107: Yes, but humans have their text representations grounded on the actual sound they hear bmk#1476: eh there arent really that many different cases guac#4716: man i really feel like the cases in which you can't determine a pair of words rhyme, are outliers. bmk#1476: if the model sees certain words used in rhyming pairs often i think it should be able to learn that those are the same pronunciation bmk#1476: like i think at least 50% of rhymes you should be able to guess just looking at text right jordiae#4107: Those specific words, yes. All words, unclear.
guac#4716: the edit distances are probably small as hell bmk#1476: "last few letters the same" works most of the time guac#4716: what makes you think this fails in the general case? jordiae#4107: English pronunciation being relatively irregular StellaAthena#3530: @jordiae There's an easy way to test this. Go find a dataset of rhyming words and train a decision tree or 3-NN on it jordiae#4107: Good one guac#4716: fair enough lol jordiae#4107: Sorry if my statements sound too strong. I don’t have any proof. Just intuition. Would love to see my theory empirically destroyed bmk#1476: "hypothesis EMPIRICALLY DESTROYED by FACTS and LOGIC" jordiae#4107: But to date, my theory has the exact same proof as the BPE one jordiae#4107: Until someone does the experiments we said genetyx8#7543: https://plankms.weebly.com/uploads/1/2/6/9/126952655/i_take_it_you_already_know.pdf bmk#1476: wait bmk#1476: zoom and enhance bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/880180735749812294/Screenshot_20210825-140324_Drive.jpg bmk#1476: :ultragoose: jordiae#4107: https://cdn.discordapp.com/attachments/729741769738158194/880181133386608670/image0.png bmk#1476: English is still less bad than Japanese cmv Teemochu#8740: At least if you read some kana you know immediately how to pronounce it bmk#1476: Japanese is what happens when you take chinese and decide to make it confusing as hell
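A toy version of the experiment Stella suggests above (featurize a word pair by its endings and fit a small classifier); the data here is a stand-in, and a real test would need a labeled rhyme dataset, e.g. derived from CMUdict:
```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Toy stand-in data: (word1, word2, rhymes?) triples.
pairs = [("cat", "hat", 1), ("book", "door", 0), ("light", "night", 1), ("move", "love", 0)]

def featurize(w1, w2, k=3):
    return f"{w1[-k:]} {w2[-k:]}"     # last-k characters of each word

X = [featurize(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]

clf = make_pipeline(CountVectorizer(analyzer="char", ngram_range=(1, 3)),
                    KNeighborsClassifier(n_neighbors=3))
clf.fit(X, y)
print(clf.predict([featurize("moon", "june")]))
```
The accuracy of such a purely orthographic classifier on held-out pairs would roughly measure how much rhyme information is recoverable from spelling alone.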
bmk#1476: yeah except nobody really writes in all kana Teemochu#8740: pokemon red and ~~blue~~ green 😛 bmk#1476: and furigana is rare too bmk#1476: Japanese is basically taking the occasional exceptions in chinese pronunciation and making that the norm rather than the exception bmk#1476: it's so confusing marksaroufim#6706: Have y'all considered applying to Pytorch Dev day? Not sure if Eleuther is mostly a JAX shop nowadays but I love your story so much and would love hear more of it kurumuz#5695: NeoX is written in pytorch. StellaAthena#3530: It’s mostly PyTorch for GPU and Jax for TPU. marksaroufim#6706: Please submit an app then I'd be more than happy to refer you internally StellaAthena#3530: What's the deadline? Got any good examples to work off of? marksaroufim#6706: Sep 24 but earlier the better since I can raise awareness internally - for the poster format here you go https://pytorchdeveloperday.fbreg.com/faq#poster https://cdn.discordapp.com/attachments/729741769738158194/880220117030555648/Screen_Shot_2021-08-25_at_3.40.24_PM.png StellaAthena#3530: @marksaroufim It's a poster and a talk, no paper / proceedings right? marksaroufim#6706: Yep StellaAthena#3530: I'll put something together this week and ping you marksaroufim#6706: Some inspiration https://pytorch.org/ecosystem/pted/2021 shrimpy#7409: I don't really see how this can confidently be asserted. The majority of (modern) English words with similar vowel sequences rhyme. You can absolutely infer rhyme by the way words are written. shrimpy#7409: It's true these instances don't rhyme, but I assure you they are in the minority. Kazumi#1297: Kanji is basically very old emoji Kazumi#1297: Change my mind greencube#6725: The great vowel shift moment
Deleted User#0000: https://www.wired.com/story/cerebras-chip-cluster-neural-networks-ai/ Deleted User#0000: Moaarr parametersssss Untouch#9150: “So we know we can, but we haven't trained a model, because we're infrastructure builders, and, well, there is no model yet” Untouch#9150: :thinkies: StellaAthena#3530: This is very weird. It doesn't exactly take magic to run a Python library, and if the implication is that the infra requires more efficient algorithms than what currently exists to train a 100T model then claiming that they know it can train such a model is not very honest. alstroemeria313#1694: they should at least have made something that big and tried training it for 100 steps or smth. marksaroufim#6706: #benchmarketing - If you can't ssh into it without talking to a salesperson you should be skeptical StellaAthena#3530: I love this term cognomen#6297: this only scratches the surface of the gains that write only memory can provide Dicky the sexy diesel#7454: help, where I can use the text to image bot?? Dicky the sexy diesel#7454: I can't find it MyUsername#7620: #the-faraday-cage-archive jbustter#5167: Got beta acess to codex Louis#0144: pog jbustter#5167: If anyone has an idea for a cool thing to try let me know BoneAmputee#8363: can it make me a simple chat UI in React? 🤔 BoneAmputee#8363: I mean I feel like I've seen that done on Twitter BoneAmputee#8363: maybe not a chat BoneAmputee#8363: but React UIs BoneAmputee#8363: main use I want Copilot/Codex for, not learning webdev
circuit10#0158: That was done with GPT-3 and that’s not great at code so it should be easy for Codex circuit10#0158: My ideas: - Use it with the Blender Python API to ask it to do things there - Get it or GPT-3 to break down instructions into smaller steps so it can more easily understand complicated instructions - Use it to automate desktop things (like saying “open a browser and open these things”) Parker#3197: I created an archive of 99% of completed generations from #the-faraday-cage-archive. It has the video, start, and end generation. It also contains metadata that is json encoded with the prompt and the dimensions of the picture. (for generations between 05/23/2021 to 08/23/2021) link: https://drive.google.com/drive/folders/11BUtXeIN9Ky0Gx0vMzar2Q1lM73ikim0?usp=sharing circuit10#0158: I’m probably not the first person to have this idea but would it be possible to use a classifier to detect complicated prompts that need a large model and ones that don’t and use that to save computing power? circuit10#0158: Or maybe use the confidence to detect when it needs a larger model or something genetyx8#7543: try to get it to use software exploits, e.g. SQL injection to see if it knows about them jbustter#5167: tried SQL injection, got user name and password of someone :S https://cdn.discordapp.com/attachments/729741769738158194/880547216706646016/firefox_wkOscZn1sO.png Louis#0144: wtf Louis#0144: lmao Louis#0144: thats absurd jbustter#5167: wait actually that might not be a password jbustter#5167: just really looks like it jbustter#5167: ```javascript
**// this is how to preform SQL injection using javascript.** //first we define a function: function preformSQLInjection(){ var xhttp = new XMLHttpRequest(); if (window.XMLHttpRequest) { xhttp = new XMLHttpRequest(); } else if (window.ActiveXObject) { xhttp = new ActiveXObject("Microsoft.XMLHTTP"); } xhttp.open("GET", "http://localhost/db_injection/sql_injection.php?name=hacker%27%20or%20%271%27=%271%20--%20&username=admin", true); xhttp.send(); } //next we call the function to execute the attack preformSQLInjection(); ``` StellaAthena#3530: It’s hard to tell if this is a real thing or just what it expects it is supposed to spit out genetyx8#7543: it doesn't look like it's the real thing jbustter#5167: well I'll try some more
genetyx8#7543: maybe ask it for buffer overflow? I don't know enough about exploits, but that's a classic one jbustter#5167: what do you think? https://cdn.discordapp.com/attachments/729741769738158194/880550350736748574/react_chat_ui.txt genetyx8#7543: yeah, that 's basically the code equivalent of the "how to draw an owl" meme BoneAmputee#8363: I'd have to test it. dont know react :berk: jbustter#5167: neither do i 😛 jbustter#5167: it really hates for loops jbustter#5167: it just writes the code itself sweg#8920: yo does anyone know how to make deep speed only run on one card StellaAthena#3530: These models are bad at induction StellaAthena#3530: @sweg Do you have multiple cards and want to restrict it to one, or are you trying to get it to run on a system with only one card sweg#8920: im on mchorse sweg#8920: wanna run it on only one jbustter#5167: ```javascript //this function creates a buffer overflow, and writes to memory: void bof(void) { char buffer[]="A"; int i; for(i=0;i<30;i++){ buffer[i]='C';
} buffer[30]='\0'; printf("%s",buffer); } //this function calls the function that creates the buffer overflow: void call_bof(void) { bof(); } int main(void) { call_bof(); return 0; } ``` jbustter#5167: meh genetyx8#7543: yeah, no exploits there. just nonsense
StellaAthena#3530: @sweg Use `deepspeed --num_gpus 1 my_function.py`
sweg#8920: ty!
jbustter#5167: tried to have it translate code to create pytorch model to jax
jbustter#5167: ```python
# a model in pytorch:
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = torch.flatten(x, 1) # flatten all dimensions except batch x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() # the same model in jax: import jax.numpy as jnp from jax import vmap from functools import partial def relu_parameterized(inpt, a, b, c, d): return jnp.maximum(a * inpt ** 3 + b * inpt ** 2 + c * inpt + d, 0)
def conv(image, weights_kernel, weights_bias, stride, padding): return jnp.sum(weights_kernel * image[padding:-padding:stride, padding:-padding:stride]) + weights_bias class Net(object): def __init__(self): self.conv1 = partial(conv, weights_kernel=jnp.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]]), weights_bias=0, stride=1, padding=1) self.conv2 = partial(conv, weights_kernel=jnp.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]]), weights_bias=0, stride=1, padding=1) self.relu1 = lambda x: relu_parameterized(x, a=0.1, b=0.9, c=0, d=0) self.relu2 = lambda x: relu_parameterized(x, a=0.1, b=0.9, c=0, d=0) self.fc1 = lambda x: x self.fc2 = lambda x: x self.fc3 = lambda x: x def __call__(self, x): x = self.conv1(x) x = self.relu1(x) x = self.conv2(x) x = self.relu2(x) x = jnp.reshape(x, (-1, 16 * 5 * 5))
x = self.fc1(x) x = self.fc2(x) x = self.fc3(x) return x net = Net() ``` jbustter#5167: ~~i don't know much jax but the conv function looks incorrect~~ nvm i see it is using "partial" ? kindiana#1016: Well it seems to have figured out that jax is functional kindiana#1016: Lol zphang#7252: google deprecates things too quickly for codex to generate code using their libraries, that's their silver bullet jbustter#5167: attempt 2 jbustter#5167: ```python """ the exact same model in jax""" import jax.numpy as jnp from jax import grad, jit, vmap from jax.experimental import stax from jax.experimental.stax import Conv, Dense, MaxPool, Relu, Flatten
def net_jax(): init_random_params, net_apply = stax.serial( Conv(32, (3, 3), padding='SAME'), Relu, Conv(64, (3, 3), padding='SAME'), Relu, MaxPool((2, 2)), Flatten, Dense(128), Relu, Dense(10), ) rng = jax.random.PRNGKey(0) _, params = init_random_params(rng, (-1, 32, 32, 3)) return net_apply(params, jnp.zeros((1, 32, 32, 3))) net_jax = jit(net_jax) ``` jbustter#5167: it's bad at following instructions 😦 bma110ry#3974: Is joining EleutherAi just a matter of helping out? Sid#2121: yep Sid#2121: we have a task board here http://board.eleuther.ai bma110ry#3974: cool, I'm in EleutherAI 😎
u-sci#2261: This is one character away from potentially working on a bad enough webapp u-sci#2261: It's definitely got the right idea about what SQL injection means lol jbustter#5167: Thanks davinchi... https://cdn.discordapp.com/attachments/729741769738158194/880568669116170270/firefox_QPj6OY7GwN.png cognomen#6297: I'm curious how well codex would handle translating C(++) to Rust cognomen#6297: considering there's more going on than just different syntax and function names jbustter#5167: https://cdn.discordapp.com/attachments/729741769738158194/880571802563248128/c_to_rust.txt jbustter#5167: one more complicated (matrix multiplication) https://cdn.discordapp.com/attachments/729741769738158194/880572569600819310/c_to_rust.txt cognomen#6297: 2 type errors in the first one Louis#0144: https://wandb.ai/louiscastricato/transformer_ppo?workspace=user-louiscastricato Louis#0144: Ok Louis#0144: Did someone here run this on my wandb Louis#0144: Lmao Louis#0144: Who’s using transformer PPO on mchorse StellaAthena#3530: @Daj mostly? Louis#0144: Ok well it’s doing v well Louis#0144: If you’re curious Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/880606556008370296/image0.png Sahl#0630: oh no the data… someKindaBean#8471: You could try traditional software interview questions, like fizzbuzz someKindaBean#8471: or some that are a little trickier, like write a recursive algorithm to calculate fibonacci sequences
𓅬 gabriel_syme 𓅬#3220: I wonder if the model has seen a lot of traditional interview questions someKindaBean#8471: That's kind of what I was wondering S⛵#6488: https://cdn.discordapp.com/attachments/729741769738158194/880653924598693888/Screenshot_20210826-202321.jpg,https://cdn.discordapp.com/attachments/729741769738158194/880653924829364274/Screenshot_20210826-202356.jpg S⛵#6488: Is there a reason for why GPT-3 models of the same size outperform the GPT-2 models on LAMBADA by so much? kindiana#1016: data S⛵#6488: Better dataset for GPT3? kindiana#1016: yes S⛵#6488: Do you know off the top of your head what was different in their datasets? kindiana#1016: read the paper lol kindiana#1016: its totally different StellaAthena#3530: Everything S⛵#6488: So does the dataset explain the vast majority of the performance improvement? Is there anything majorly different architecture wise? kindiana#1016: no S⛵#6488: Okay okay makes sense thank you EricHallahan#1051: All of it. S⛵#6488: Are there LAMBADA scores for GPT NEO 125M and 350M? EricHallahan#1051: We never formally evaluated them. S⛵#6488: I'm going to try running them in Colab with your test harness S⛵#6488: okay S⛵#6488: https://cdn.discordapp.com/attachments/729741769738158194/880673299586248784/clipboard_20210826214045.jpg
S⛵#6488: here are my numbers combined with the data for gpt2, gpt3, and gptneo S⛵#6488: the 125M and 350M neo tests were done on colab using the lm eval harness on eleuther github kindiana#1016: looks about right S⛵#6488: as expected we see neo performance in between the 125/350 gpt2 and gpt3 S⛵#6488: yep S⛵#6488: I guess as you said, dataset differences would explain most of this S⛵#6488: 350 is no longer on huggingface but luckily I had it downloaded on my laptop before it was taken down smallanimalfriend#4355: Wait, which models were taken down from huggingface? Any idea why? trsohmers#5863: Question: Is there any place to post about available job opportunities? My company is looking for folks interested in large language models, and are fans of the Eleuther project, but have not (yet) been contributors. StellaAthena#3530: There were some mid-hundred million parameter GPT-Neo checkpoints that were mistakenly uploaded. We took them down because the hparams weren’t the same as the larger runs (they were from when we were experimenting) and we hadn’t done any systematic evaluation to tell how well they worked. There didn’t seem to be much demand for them, so we didn’t pursue vetting or replacing them. StellaAthena#3530: If you are interested in experimenting with them and comparing their evaluation metrics to similarly sized GPT-2 and GPT-3 models, I can provide access. It’s mostly just that nobody has taken it upon themselves to volunteer to do the leg work S⛵#6488: I would definitely be interested in experimenting with them! S⛵#6488: What stella said, and in addition to that more specifically, the GPT Neo 350M model, which people have been asking about apparently StellaAthena#3530: Okay, give me some time to find them and I'll send them to you S⛵#6488: Awesome, sounds great, please ping me whenever you're ready Benito#2692: I'm sure this has already been answered, but why are the GPUs down for the faraday cage? StellaAthena#3530: We had the bot running on some GPUs that someone had donated and wanted to make use of again. Right now it's running on Bone's personal rig. u-sci#2261: Can we get a setup like that but for EAI research? bmk#1476: what does this sentence mean bmk#1476: can you elaborate on what you're proposing
u-sci#2261: Like, you drop in a python script labeled with hardware requirements and a cloud storage credential, and it runs timeslices on whatever GPUs are available bmk#1476: WIP bmk#1476: coming soon™ StellaAthena#3530: The plan is for that to be the future of EAI StellaAthena#3530: More or less bmk#1476: (* each set sold separately, batteries not included. soon™ may indicate any amount of time from 30 seconds to the heat death of the universe) u-sci#2261: welp. I better hurry up and do everything that I'd have an advantage in during the 30 second window where I know it can't happen. bmk#1476: quick, it's 20 seconds now Teemochu#8740: 5 bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/880883364473737316/20110123.gif mkualquiera#3484: so you can guarantee it will happen before the heat death of the universe? bmk#1476: with nonzero probability cognomen#6297: one notebook for everyone cognomen#6297: controlled via stream chat cognomen#6297: twitchplaysAGI sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/880949542114197504/unknown.png sweg#8920: well im stumped sweg#8920: lmfao sweg#8920: colab is doing a bit of trolling Louis#0144: Lmaoooo
Louis#0144: Wtf Louis#0144: That’s so weird Louis#0144: @sweg restart the kernel marksaroufim#6706: Has anyone come accross a distillation library with an API that looks like student_model = Distil(teacher_model, divide_param_count_by_this_number, divide_num_layers_by_this_number) It feels like this should be possible in principle with something like torchFX so you don't have to define a student model at all so the API would be as complicated as that of dynamic quantization PoggPoggBeanGirl#0634: Working with Cuda 11 on a GT 730 sucks PoggPoggBeanGirl#0634: The fact that it's a remote build system on a server I don't have sudo on makes it even worse PoggPoggBeanGirl#0634: I just have to assume it's a problem on my end PoggPoggBeanGirl#0634: I've been debugging my build system for 3 hours and I still don't have a working matrix multiplication example. Louis#0144: 730… Louis#0144: That’s like Louis#0144: 2013? EstebanSir#2189: cuda supports that? EstebanSir#2189: i wish rocm supported old amd cards, before 2012, but at that point i guess my cpu would be faster PoggPoggBeanGirl#0634: Barely PoggPoggBeanGirl#0634: Cuda 11 supports sm_35 and it is depricated PoggPoggBeanGirl#0634: I guess I haven't tried using cuda 10
PoggPoggBeanGirl#0634: Maybe that will work properly PoggPoggBeanGirl#0634: But I only have versions 11 and 10 to work with PoggPoggBeanGirl#0634: Nothing else is installed StellaAthena#3530: https://twitter.com/marksaroufim/status/1431427418879717381?s=20 EstebanSir#2189: in what industry is that most present anyways? EricHallahan#1051: Everywhere EstebanSir#2189: guess i dont really pay much attention :p not a surprise EricHallahan#1051: I mean anywhere where you want to put something in a positive light you are going to selectively choose what benchmarks to publish. EricHallahan#1051: "AMD widget gets 5% better performance in X than NVIDIA thingy" - AMD on something they paid the developers to optimize their application for. EstebanSir#2189: aah EricHallahan#1051: "Intel widget is better in real world benchmarks™️" - Intel who has cherry-picked benchmarks that they do relatively well in. EstebanSir#2189: lol yeah, i see what you mean now inox#5400: this is why the standard ML paper introduces some new application problem where the method can be the best Orz#3023: Hello Is there a huggingface Tokenizer for gpt-2 Orz#3023: the transformers tokenizer is waay to slow Orz#3023: takes minutes together to encode a gb of text kindiana#1016: thats the same tokenizer
kindiana#1016: lol Orz#3023: oh F kindiana#1016: you can try PreTrainedTokenizerFast Orz#3023: yeah I'm using the fast version Orz#3023: It's still waay too slow kindiana#1016: get more cores lol kindiana#1016: or be patient kindiana#1016: how much text do you have? Orz#3023: I'm trying to use this one Orz#3023: https://the-eye.eu/public/AI/training_data/code_clippy_data/code_clippy_dedup_data/ Orz#3023: and here is some code Orz#3023: https://colab.research.google.com/drive/14Fzydn2fzp2wZEwhFfJaXBemAG1KquOX?usp=sharing Orz#3023: I just want to convert them to tfrecords as fast as possible the mesh_transformer_jax create_finetune_tfrecords.py is kinda slow too lol Orz#3023: I mean it's probably better than mine atm but it's still slow to use kindiana#1016: you can try a batched encode
kindiana#1016: might help a bit
kindiana#1016: but its generally still pretty slow
kindiana#1016: still much faster than training though lmao
Orz#3023: I mean
I will probably not be using Google Colab for that lol

I tried to look for batch_encode
but couldn't find that function

can you direct me to that?
kindiana#1016: tokenizer.batch_encode
kindiana#1016: https://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.PreTrainedTokenizerBase.batch_encode_plus
Orz#3023: :rooPog:
Orz#3023: Thank you!
immibis#3179: has anyone tried to extend a transformer with indefinite memory length (still finite size)? Something like applying an LSTM to the KV pairs
Orz#3023: wdym by "memory" of a "transformer"?
immibis#3179: Well you know how the whole point of attention is to extract information from one part of the input and apply it to a different part, but the range of that is limited by how much input is passed to the AI model in one "block"
CRG#8707: This is basically the "Transformers are RNNs" kernel methods of linear attention. Though they tend to underperform normal attention quite a lot. https://arxiv.org/abs/2006.16236
gollark#3909: XLNet?
cfoster0#4356: A regular transformer does not have fixed memory length, if you figure out the positional encodings situation/extrapolation
cfoster0#4356: So recent papers like alibi address this
cfoster0#4356: The real issue isn't memory length but getting the network to actually use the whole memory bank effectively. Also you pay for the QKV/FFs proportional to length so no one really does *indefinite* lengths, practically speaking
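Returning to the tokenizer question above: a minimal sketch of the batched encoding kindiana suggests, assuming the Hugging Face fast GPT-2 tokenizer and a jsonl shard with a `"text"` field (the path and field name are placeholders). Calling the fast tokenizer on a list of strings encodes the whole batch at once, which is usually much faster than looping document by document; running one process per shard helps further.

```python
import json
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def encode_jsonl(path, batch_size=1000):
    """Yield token-id lists for every document in a jsonl file, in batches."""
    batch = []
    with open(path) as f:
        for line in f:
            batch.append(json.loads(line)["text"])  # "text" field is an assumption
            if len(batch) == batch_size:
                yield from tokenizer(batch)["input_ids"]
                batch = []
    if batch:
        yield from tokenizer(batch)["input_ids"]

# Example: count tokens in one (hypothetical) shard.
n_tokens = sum(len(ids) for ids in encode_jsonl("data/shard_0.jsonl"))
print(n_tokens)
```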
alstroemeria313#1694: so like... how can you estimate the "curvature" or deviation from linearity of a gradient. alstroemeria313#1694: without computing the Hessian. alstroemeria313#1694: does this make any sense to ask. StellaAthena#3530: @alstroemeria313 yeah absolutely StellaAthena#3530: It would take a little work to integrate into a neural network efficiently, but theoretically I know how to do that alstroemeria313#1694: oh, how? StellaAthena#3530: Well wait. Are we interested in approximating $H$, or approximating $Hw$ for an arbitrary $w$ given $H$ TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/881193007385288774/193204646687408129.png alstroemeria313#1694: I can take Hessian-vector products Sid#2121: http://bcl.hamilton.ie/~barak/papers/nc-hessian.pdf alstroemeria313#1694: pytorch can do hvps, yes StellaAthena#3530: So it’s approximating H itself you want alstroemeria313#1694: "if I am doing gradient descent, how much is the direction I am traveling going to vary from step to step" alstroemeria313#1694: i think. alstroemeria313#1694: In the limit as step size goes to 0. alstroemeria313#1694: I am kind of confused on the exact question I'm asking tbh ^^;; StellaAthena#3530: That’s trickier but I think I can do it StellaAthena#3530: Using reverse-mode auto diff alstroemeria313#1694: ohh? StellaAthena#3530: Let $P_i = \{v: (v,i)\in E(G)\}$. For two nodes $i,j$ such that $i\to j$ let $\pi_{i,j}$ be the permutation that maps the outputs of $i$ to the inputs of $j$, and let $x_v,y_v$ denote the input and output vectors of node $v$ respectively. Then $x_j = \sum_{k\in P_j} R_{i,j} y_i$ and write $y_i=f_i(x_i)$.
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/881203221497012234/193204646687408129.png StellaAthena#3530: So we can recursively write $J_{x_i}^f = J_{y_i}^f J^{y_i}_{x_i}$ and $\nabla f = J^f_{y_1}$ TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/881204042137755728/193204646687408129.png StellaAthena#3530: Differentiating both sides gives us $$H_{x_i,y_j}^f = {J_{x_i}^{y_i}}^T H_{y_i, y_j}^f + \left(\sum_{v=1}^{n_i} J_{y_i, v}^f H_{x_i, x_i}^{y_i, v}\right) H_{x_i, x_i}^{y_i, q}$$ TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/881205570491781140/193204646687408129.png StellaAthena#3530: This is infeasible because $n_i$ is large in practice (it’s the width of a layer), though it might be feasible for things that aren’t transformers. TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/881206674206101594/193204646687408129.png Louis#0144: So we just finished the 5th cross validation of CARP Louis#0144: should we report loss for each run? Louis#0144: or only average alstroemeria313#1694: ty :) Louis#0144: I think it might be ok to just say that we cross validated and report average loss alstroemeria313#1694: hm alstroemeria313#1694: can i actually just take alstroemeria313#1694: \|\|hvp(f, x, -grad(f, x))\|\|_2 alstroemeria313#1694: i.e. alstroemeria313#1694: grad_1 = grad(f, x) grad_2 = grad(f, x - step * grad_1) \|\|(grad_2 - grad_1) / step\|\|_2 alstroemeria313#1694: As step goes to 0.
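A minimal PyTorch sketch of that quantity, using the built-in Hessian-vector product so the full Hessian is never formed; `f` and `x` below are toy stand-ins for the actual objective and parameters.

```python
import torch
from torch.autograd.functional import hvp

def f(x):
    # Toy scalar loss; a stand-in for the real objective.
    return (x.sin() * x).sum()

x = torch.randn(100, requires_grad=True)

# Gradient at x.
(g,) = torch.autograd.grad(f(x), x)

# Hessian-vector product with the negative gradient, i.e. the limit of
# (grad(f, x - step * g) - grad(f, x)) / step as step -> 0.
_, hv = hvp(f, x.detach(), -g)

print(hv.norm().item())  # ||hvp(f, x, -grad(f, x))||_2
```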
alstroemeria313#1694: ahah
> Gradient descent is currently untrendy in the machine learning community, but there remains a large number of people using gradient descent on neural networks or other architectures from when it was trendy in the early 1990s.
—from a 2005 blog post
alstroemeria313#1694: mb divide this by the gradient norm squared to get a relative quantity.
alstroemeria313#1694: Does anyone still use stochastic meta descent
Aa 𓂀 💎#0135: Hello. I am wondering if I can ask questions regarding AI here in #general ?
EricHallahan#1051: Generally #general is suitable for asking questions.
Aa 𓂀 💎#0135: Thank you.
Aa 𓂀 💎#0135: I am wondering if scientists or someone have tried to download some deep learning algorithms to a brain model such as EBrains, for instance. Or do experiments on some kind of brain-computer interfaces within the brain model (in a digital environment).
Aa 𓂀 💎#0135: Or could it not be performed?
StellaAthena#3530: Sorry, I lost service. In a couple hours I can write you an approximation scheme. I'm 90% sure you can approximate this framework using the intermediary states produced during gradient descent
alstroemeria313#1694: if i were actually doing gradient descent i could compute the cosine similarity between gradients on adjacent steps, but
alstroemeria313#1694: I'm not, I'm doing guided diffusion
alstroemeria313#1694: I could do \|\|hvp(f, x, -grad(f, x))\|\|_2 or something instead.
StellaAthena#3530: I’m drunk AF, but let me give it a try
StellaAthena#3530: Let $V$ be an ordered set of vectors and define $S_i(V)$ recursively as follows
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/881305978883166278/193204646687408129.png
StellaAthena#3530: $S_{y_L}(V) = 0$
$S_{y_j}=\sum_{k\in P_i} R_{i,j}^T S_{x_i}(V)$
$S_{x_i}(V)= M_i^T v_i + {J_{x_i}^{y_i}}^T S_{y_i}(V)$
where $F^T_i F_i = \sum_{v=1}^{n_i} J_{y_i, v}^f H_{x_i, x_i}^{y_i, v}$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/881307328643743765/193204646687408129.png
alstroemeria313#1694: are you drinking and deriving.
StellaAthena#3530: Yes
alstroemeria313#1694: eheh~ :blobcutehappy:
StellaAthena#3530: Once upon a time when you googled “proof or dare” I was what came up
StellaAthena#3530: College was a magical time
someKindaBean#8471: I miss getting drunk with the physics majors
someKindaBean#8471: they'd always pull out the whiteboard and try to figure out something silly
someKindaBean#8471: sometimes it would be trivial, like deriving the liquid volume of a beer can given its angle as you poured it, other times it would be far beyond my understanding
nshepperd#2316: with a second order gradient you could compute `let g(x) = grad(f(x),x) in grad(cosim(g(x), g(x+h)), h)` at `h=0` which would be the infinitesimal version of that?
nshepperd#2316: which is the same as `grad(cosim(stop_gradient(g(x)), g(x)), x)`
nshepperd#2316: well that actually returns some sort of per parameter measure of curvature but i'm pretty sure taking the dot product of that with g(x) gives the curvature in the direction of a gradient step
nshepperd#2316: whatever that actually means
minimario#1709: hi! i was wondering if there are any projects that eleuther is working on that are more theoretical/mathy? curious and would be potentially interested in helping out with some of those!
minimario#1709: (not sure if this is the right place to ask this, lmk if not)
EricHallahan#1051: This is the right place!
StellaAthena#3530: Welcome! What’s your mathematics background?
EricHallahan#1051: I suggest looking at #equivariance, which is very mathy.
minimario#1709: mostly applied stuff (did some optimization research) + lots of tcs/prob/stats courses
minimario#1709: slowly learning category theory but i don't see that being useful for anything in AI anytime soon hehe
StellaAthena#3530: What sort of TCS stuff? Are you familiar with the PCP Theorem, or verifiable computation?
minimario#1709: i've seen the pcp theorem yea
minimario#1709: i have not heard of verifiable computation (idk any crypto math stuff lol)
minimario#1709: i took a cool class on algorithmic game theory once
StellaAthena#3530: Check out this paper: https://eprint.iacr.org/2021/673.pdf
minimario#1709: woah fancy
minimario#1709: are y'all trying to do something with this or is this purely for entertainment value
minimario#1709: lol
EricHallahan#1051: With what? Our work here?
minimario#1709: oh the paper
minimario#1709: i meant lol
EricHallahan#1051: Ah okay lol
EricHallahan#1051: Stella probably has something in mind.
StellaAthena#3530: This paper claims to be able to provide a real-world efficient protocol for verifying the result of a CNN computation without reexecuting it. This is an advanced version of the same kind of thing that the PCP theorem is about. If this is legit, and especially if it can be adapted to transformers, it has the potential to be incredibly impactful
minimario#1709: hmm how legit is this paper
Orz#3023: This raises another question

Are there any mathematical prerequisites to understand whatever the heck is going on within gpt-j
StellaAthena#3530: It’s very new, and how legit it is is unknown. It looks decent, but I want to test it for reals
minimario#1709: hmm yeah it feels like something that would have gotten more attention if it had practical value lol
minimario#1709: by testing it you mean like
minimario#1709: implementing the protocol?
StellaAthena#3530: Yes
EricHallahan#1051: Wait, it's all matrix multiplications?

🌍 🧑‍🚀 🔫 🧑‍🚀 Always has been.
minimario#1709: can always try emailing the authors for the code hehe
StellaAthena#3530: I did last night
EricHallahan#1051: But seriously if you understand how a transformer works you understand how GPT-J works.
minimario#1709: have you read the paper in full yet or nahh
kindiana#1016: I have taken 2 undergrad math classes and hated both of them, so... Not very much
kindiana#1016: Lol
StellaAthena#3530: I've read it a couple times but need to devote more energy to the details
EricHallahan#1051: https://cdn.discordapp.com/attachments/198658294007463936/879614108213850152/1629730800590.png
minimario#1709: lol nice, i'll check it out and see if it makes any sense to me
StellaAthena#3530: I hope it does! I recommend these lecture notes for auxiliary info: https://people.eecs.berkeley.edu/~alexch/docs/CS294-S2017/lecture-01.pdf
immibis#3179: You mean just how it computes, or WHY it computes?

Generally the computation that's done in a neural network is fairly straightforward (but fiddly so it's better to just let the library do it). The "why" is a lot harder
Orz#3023: I can understand the "implementation" part of them
But it would be interesting to know why it was implemented that way immibis#3179: Why the weights are what they are: gradient descent Why the shape of the model is what it is: trial and error by lots of people over a long time The key innovation in transformer-based models is the multi-head dot product attention unit. I don't think it's difficult to understand why that unit might be useful in an AI model... but there are lots of units that seem like they might be useful in AI models, and most of them aren't. So that part is trial and error. janus#0150: One could also ask "What math do I need to understand the internal mechanisms of GPT-J's cognition?" doodle#1078: I'm toying with the latest in deep learning time-series models (NBEATS, NBEATSX, Prophet, etc). Are there any good Discord servers with people that have a similar interest? gollark#3909: Did anyone try quantizing the GPT-Neo models to int8? Just using the pytorch dynamic quantization thing (on the 125M one, due to RAM limitations) seems to add enough noise that it just produces nonsense. EricHallahan#1051: IIRC everyone who has tried has failed. gollark#3909: A shame. gollark#3909: I might try the quantization-aware training thing and see if that works. Deleted User#0000: Does anyone know how should i join this server? https://discord.sg/ai Because whenever I try to open the link it redirects me to http://net.domain.name/discord.sg//ai Idk why? How did you guys join the server? gollark#3909: I think it's .gg, not .sg. Deleted User#0000: They have explicitly mention ".sg" though! Because when I try ".gg" it joins me to some "Arab Island Server" Deleted User#0000: @gollark Have you joined that server? gollark#3909: No, that's just the actual domain for Discord invites. Deleted User#0000: So how should I join? gollark#3909: I don't know. StellaAthena#3530: @Deleted User You should contact someone in the discord server and get an invite
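On the int8 experiment gollark describes above: a minimal sketch of the PyTorch post-training dynamic quantization call that was presumably used (the model id is the public 125M checkpoint). It only replaces `nn.Linear` weights with int8, quantizing activations on the fly, which may be part of why the output degrades.

```python
import torch
from transformers import AutoTokenizer, GPTNeoForCausalLM

model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
model.eval()

# Dynamic quantization: Linear weights stored as int8, everything else stays fp32.
qmodel = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
ids = tok("EleutherAI is", return_tensors="pt").input_ids
out = qmodel.generate(ids, max_length=20, do_sample=True)
print(tok.decode(out[0]))
```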
Deleted User#0000: I don't know who is currently present in that server
StellaAthena#3530: That's how discord works. You need to get invited to a private server by someone in it. We can't help you get in.
Deleted User#0000: Do you know anyone in this server... who is present in that server as well?

So I can dm him/her for Invite Link.
Deleted User#0000: Nvm
I found the Link
StellaAthena#3530: Of course not. You've provided zero information that would even allow me to tell what the server is. The only info you've provided is a link that you admit goes to the wrong place.
StellaAthena#3530: Even if I was in the server I wouldn't know that
Daj#7482: They mean the one in #communities
Daj#7482: Which indeed seems broken
StellaAthena#3530: Here's a working link: https://discord.gg/UagNaN8qQX
ProudNoob#5854: are quaternions a thing? matrices, rotary...
jordiae#4107: Probably stupid question, but how can an encoder-decoder with 2x size of an equivalent decoder have the same flops? Table from T5:
jordiae#4107: https://cdn.discordapp.com/attachments/729741769738158194/881600434446409888/image0.png
jordiae#4107: https://cdn.discordapp.com/attachments/729741769738158194/881600487688904724/image0.png
jordiae#4107: @StellaAthena
pebbles#7130: it's so annoying that haiku doesn't work on google colab cause their version of jax is kinda outdated
pebbles#7130: even the "basics" colab for haiku won't run
alstroemeria313#1694: Am I the only one to have used the log of the condition number of a layer's weight matrix as a loss
alstroemeria313#1694: Like, in addition to some other loss.
alstroemeria313#1694: To keep the network invertible
someKindaBean#8471: can't you upgrade that pretty easily?
someKindaBean#8471: like literally just a cell with
> !pip install jax
pebbles#7130: https://cdn.discordapp.com/attachments/729741769738158194/881699247030927370/unknown.png
pebbles#7130: nah
Louis#0144: Ur doing it wrong
Louis#0144: Wrong pip command
Louis#0144: It does not force upgrade that way
ilovescience#3282: Did you pip install haiku??
EricHallahan#1051: ```sh
pip install git+https://github.com/deepmind/dm-haiku
```
pebbles#7130: thank you !! I try not to ask questions here cause people are busy, but this really helps
pebbles#7130: no more need for:
```python
import jax.numpy as jnp  # needed for the snippet to run on its own

# hand-rolled MLP forward pass: θ is a list of (W, B) layer parameters
def nn_fwd(θ, x):
    l = 0
    for W, B in θ:
        x = jnp.dot(W, x) + B
        if l < len(θ)-1:
            x = jnp.maximum(x, 0)
        l += 1
    return x
```
someKindaBean#8471: lol oh yeah, I was thinking of
> pip install --upgrade --force-reinstall jax
but getting it through dependencies is way cooler
greencube#6725: I can't even get a T4 whaat
greencube#6725: :floppa:
greencube#6725: Where are they
Kia#2550: They're just probably playing
greencube#6725: Yess I got one
Kia#2550: Google knows people are abusing Colab
Kia#2550: So probably that's the case
𓅬 gabriel_syme 𓅬#3220: Reminds me to cancel colab
nshepperd#2316: i can get a T4... with pro+ :catgirl5:
Kia#2550: How:ultrathonk:
Kia#2550: Im getting a P/V100
Louis#0144: So pro+ isn’t an upgrade at all
Louis#0144: I would have thought you’d get A100s for that price
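For reference, a minimal sketch of the Haiku equivalent of the hand-rolled forward pass quoted above; the layer sizes are arbitrary, chosen only for illustration.

```python
import jax
import jax.numpy as jnp
import haiku as hk

def forward(x):
    # ReLU MLP; like the manual version, no activation after the final layer.
    return hk.nets.MLP([128, 128, 10])(x)

net = hk.without_apply_rng(hk.transform(forward))

x = jnp.ones((32, 784))                      # dummy batch
params = net.init(jax.random.PRNGKey(0), x)  # replaces the manual θ list
y = net.apply(params, x)
print(y.shape)  # (32, 10)
```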
Louis#0144: lol nshepperd#2316: feels like all their gpus burned down or something 𓅬 gabriel_syme 𓅬#3220: if I wanted to write a math book about transformers, what math would you include inside? 𓅬 gabriel_syme 𓅬#3220: I guess probability, lin.alg, are two, any others? Kia#2550: You plan to make one? 𓅬 gabriel_syme 𓅬#3220: is that two enough? I would really, mostly, be interested in understanding the underlying mathematic notions in what's going on inside the models and attention and not really any sort of proofs leading to that stuff (if that makes sense) 𓅬 gabriel_syme 𓅬#3220: I plan to ask my old highschool professor who is an incredible educator (mathematician) and a really dear friend of my parents to help me yes 𓅬 gabriel_syme 𓅬#3220: emphasis on educator, as opposed to many, many people I've met he actually understands what it means. So I'm hoping he might help me 🙂 𓅬 gabriel_syme 𓅬#3220: p.s. I don't want to write a book about transformers and the math included, rather a small mathematical text that exemplifies the things learned through practical examples linked to these models Kia#2550: Ow that's lovely 𓅬 gabriel_syme 𓅬#3220: ok so I'm guessing lin.alg, probability and vector calculus then Kia#2550: Goodluck tho mgostIH#0245: Do you really need an understanding of probability for transformers? Afaik it's all basic linear algebra I like this paper explanation of attention and they also provide various alternatives to the classical transformer too https://arxiv.org/abs/1810.00825 𓅬 gabriel_syme 𓅬#3220: I was thinking the AR stuff is sort of probabilities? mgostIH#0245: AR? 𓅬 gabriel_syme 𓅬#3220: at least they make for a decent, intuitive example of cond. probabilities (maybe?) 𓅬 gabriel_syme 𓅬#3220: autoregressive mgostIH#0245: Well eh, but it's mostly "You get punished if you output the wrong thing" 𓅬 gabriel_syme 𓅬#3220: like I'm thinking of smth like a highschool math textbook in greece, only the examples/problems are all about NNs and LMs (might sound/be stupid idk) instead of random numbers that mean nothing. and they are with eqs and code
𓅬 gabriel_syme 𓅬#3220: (our math textbooks are extremely good, just not up to date with how the math is used in the world) mgostIH#0245: You don't really need an understanding of probability that goes beyond the basic high school curriculum, surely anyone knowing the linear algebra required can understand the concept of "make good tokens more likely" 𓅬 gabriel_syme 𓅬#3220: sounds good then 🙂 𓅬 gabriel_syme 𓅬#3220: yeah this idea is for that level 𓅬 gabriel_syme 𓅬#3220: in some way, stitch together different subjects taught across school years into a whole that sort of describes how models like these work mgostIH#0245: For an introduction to NNs what did you have in mind? Imo the most interesting thing to kids is RL (but it's also the least stable and complicated to actually implement) 𓅬 gabriel_syme 𓅬#3220: I think LMs are quite interesting tbh, RL is fun as well due to the way you can interact I guess 𓅬 gabriel_syme 𓅬#3220: but with so many models available, playgrounds, and such, it might be quite nice to create a math book like that 𓅬 gabriel_syme 𓅬#3220: I also don't want to introduce NNs at all. I want it to be a math book, straight up, only every example and story around it is about LMs and transformers/attention 😄 𓅬 gabriel_syme 𓅬#3220: much like in my textbooks we had stories about the turtle and the rabbit StellaAthena#3530: ... You really don't need much math at all to do LMs. StellaAthena#3530: Source: Just look at `@bmk#1476` 😉 StellaAthena#3530: @𓅬 gabriel_syme 𓅬 Like, what vector calculus is necessary? 𓅬 gabriel_syme 𓅬#3220: no idea 🙂 𓅬 gabriel_syme 𓅬#3220: btw I'm not looking for something that does LMs but more so the mathematical intuition behind stuff, while giving real world examples of models using that intuition alstroemeria313#1694: doesn't "vector calculus" often refer to like... the 3D case 𓅬 gabriel_syme 𓅬#3220: like the beautiful RoPE graphs you made, or CRG was sharing 𓅬 gabriel_syme 𓅬#3220: etc. kurumuz#5695: im not really good at math as well. EricHallahan#1051: https://cdn.discordapp.com/attachments/198658294007463936/879614108213850152/1629730800590.png
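For concreteness, the attention computation the thread keeps circling is compact enough to state at this level — a couple of matrix products and a softmax, with $Q = XW_Q$, $K = XW_K$, $V = XW_V$ learned projections of the input $X$:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V$$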
𓅬 gabriel_syme 𓅬#3220: my goal is to have something that reads like a math textbook vs an explanation for transformers, but builds on examples from there
𓅬 gabriel_syme 𓅬#3220: if that makes sense
alstroemeria313#1694: whereas what we use is multivariable calculus?
𓅬 gabriel_syme 𓅬#3220: like imagine high school linear algebra but with an illustrated transformer style heh
StellaAthena#3530: I think this sentence is missing something. It's grammatically correct but I'm having trouble figuring out what it's talking about
𓅬 gabriel_syme 𓅬#3220: so for example, the illustrated/annotated stuff are amazing but they focus on how a transformer works, even when showing the math
kurumuz#5695: maybe build a feedforward from scratch with numpy?
kurumuz#5695: you would learn a lot.
StellaAthena#3530: That's fine, but I would recommend not using transformers as a central example then. Use CNNs instead
𓅬 gabriel_syme 𓅬#3220: my goal would be to focus on actual math, and show how the math works through attention and transformers
𓅬 gabriel_syme 𓅬#3220: hmm that's interesting, I'd thought attention is more central but I guess CNNs also make sense
alstroemeria313#1694: MLPs are simplest
StellaAthena#3530: Attention does not have theoretical underpinnings
alstroemeria313#1694: CNNs are just MLPs with weight sharing
StellaAthena#3530: It barely has *post hoc* justifications
kurumuz#5695: CNNs work well
kurumuz#5695: very well
StellaAthena#3530: CNNs are actually mathematically interesting and sophisticated objects
𓅬 gabriel_syme 𓅬#3220: very bad example, humor me, but I want to replace this stuff with interactive visualizations and even models depicting both calculations happening and some visual intuition on what's going on underneath https://cdn.discordapp.com/attachments/729741769738158194/881903677588246548/unknown.png
𓅬 gabriel_syme 𓅬#3220: and because I imagine this might be a handbook for someone like me, a person trying to sort of get into this from another domain, maybe practical examples might be more interesting.
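A tiny numpy sketch of both points above — kurumuz's from-scratch exercise and alstroemeria313's "CNNs are just MLPs with weight sharing": the same 1-D convolution written once as a sliding kernel and once as a plain matrix multiply whose rows all reuse the same three weights. The signal and kernel values are arbitrary.

```python
import numpy as np

x = np.arange(8, dtype=float)    # toy 1-D input signal
k = np.array([1.0, -2.0, 0.5])   # 3-tap kernel: the shared weights

# Convolution layer, written as a sliding window (cross-correlation,
# the way NN libraries implement "convolution").
y_conv = np.array([x[i:i + len(k)] @ k for i in range(len(x) - len(k) + 1)])

# The same map as a fully connected layer: a weight matrix whose rows are
# shifted copies of the kernel, i.e. an MLP layer with tied weights.
W = np.zeros((len(x) - len(k) + 1, len(x)))
for i in range(W.shape[0]):
    W[i, i:i + len(k)] = k
y_fc = W @ x

assert np.allclose(y_conv, y_fc)
print(y_conv)
```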