Keepthepace#6435: Information theory goes well over their heads
Aspie96#5177: The thing about this is that the final representation would still contain specific details which are not simply algorithms or high-level ideas. The word "algorithm" is to be interpreted a bit more abstractly than just code without variable names.
Keepthepace#6435: But no court has ever pointed to a list of what these details are
Aspie96#5177: I think copyrighting all forms of programs, not just source code, is well within the intent of the law, or else binaries would be expressly excluded.
Aspie96#5177: It's not the job of a tribunal to decide what the law is, and I think in this case it is applied correctly
Keepthepace#6435: otherwise we could have written a "decopyrighter" of code/machine code, and that would be something that runs contrary to what the copyright system is designed for (protecting IP rights)
Aspie96#5177: I don't think it would be possible to write it.
Aspie96#5177: Law is a humanistic subject, it's not programming.
Keepthepace#6435: Heh
Aspie96#5177: There is no algorithm, it's a case-by-case evaluation.
Keepthepace#6435: Which is why I do believe courts will rule weights are copyrightable, even though a logical case for the contrary could be made. The human case goes strongly in the other direction
Aspie96#5177: By the way, one could apply a similar reasoning to every law in existence.
Keepthepace#6435: In French tax law, there is some actual code
Aspie96#5177: Laws do not neatly sort, in a deterministic way, physical states of the universe into "legal" and "illegal". It couldn't be done. Laws are way more high-level than that.
Keepthepace#6435: but sadly, yes
Aspie96#5177: That's neat, but that's not the norm.
Keepthepace#6435: just nitpicking, had to react to the "every law in existence" 😉
Aspie96#5177: I actually don't believe so.
Keepthepace#6435: So the goal of a license or a practice is not to give a full shield but ammo in a lawyers' fight, and to lay down assumptions from our side
Aspie96#5177: Copyrightable weights, to me, go against the most sensible reading of both the text and the intention of copyright law.
Aspie96#5177: Oh, I fully believe a license should be given to weights, yes.
Aspie96#5177: Just in case they are found to be copyrightable in the future, but I think that may require an update to copyright law.
Keepthepace#6435: The intention of copyright laws was never "70 years after the death of the author", yet here we are. When NVidia and Google weigh in, courts and lawmakers will quickly recognize that. I'd be happy to be surprised, of course
Aspie96#5177: Hang on a second.
Aspie96#5177: That was a decision from the legislator.
Keepthepace#6435: yes
Aspie96#5177: I am saying that if a judge applies the law, I don't see how they would see weights as copyrightable. I really don't think that's what the law currently says.
Aspie96#5177: Of course it could be that a different law is passed, saying weights are copyrightable.
Keepthepace#6435: I think it will really depend on which lawyer makes the best case
Keepthepace#6435: I still come back to Google v. Authors Guild, but I still can't get over the fact that one of the core questions was not whether or not Google Books was a derived work, but whether or not it made people lose money
Aspie96#5177: That's an unfortunate thing, yes.
Aspie96#5177: The focus on money.
Keepthepace#6435: But maybe if we agree on that, let's build on it
Aspie96#5177: However, it has no bearing on what the "theoretical" law says, just on the consequences.
Keepthepace#6435: what type of license do you think would be beneficial?
Aspie96#5177: Infringing copyright is always illegal, even though one might get away with it in some cases.
Aspie96#5177: But I, and many others, prefer to follow the law, also as a matter of principle, not just for fear of consequences.
Aspie96#5177: That's why we use licenses instead of promises not to sue.
Aspie96#5177: I think any permissive license is good enough.
Aspie96#5177: Although I believe public domain dedications are better for trained models (since we should want them to be in the public domain to begin with).
Aspie96#5177: I don't believe copyleft is a good idea for models if they happen to be found to be copyrightable.
Keepthepace#6435: That's the old copyleft vs public domain debate. Yes, we would all want everything to be public domain. A copyleft license just ensures all the public-domain freedoms stay on the work
Aspie96#5177: My rationale is that even if they are copyrightable (which I find unlikely), they are tightly related to scientific research.
Aspie96#5177: I do not oppose copyleft in general.
Keepthepace#6435: They are also tightly related to for-profit efforts, who will close these down as quickly as they can
Aspie96#5177: However, in the case of research, I believe its purpose for humanity is that of providing information to the rest of us, not that of telling anyone how that information is used.
Keepthepace#6435: Microsoft is not putting a billion in OpenAI for the sake of public scientific research
Aspie96#5177: This is particularly true for research which uses public money.
Keepthepace#6435: Well, I go one step further: it should be free and it should also STAY free.
Aspie96#5177: Because developing proprietary products is not illegal, the government shouldn't prevent it by disallowing it on works it produces (since it already has a stance on what should be legal).
Aspie96#5177: Likewise, researchers shouldn't require that others act in accordance with their values.
Keepthepace#6435: Who is talking about making it illegal to do proprietary products?
Aspie96#5177: I think that argument is fine for software but not for trained models.
Keepthepace#6435: how so?
Aspie96#5177: A piece of software (not a library, a working program) has the same uses for most people regardless of whether the license is copyleft or permissive, as long as it's free.
Aspie96#5177: A trained model built by researchers, however, is mostly useful as part of a larger program, probably after fine-tuning.
Aspie96#5177: A copyleft license would prevent such activities for those building proprietary products. Is building proprietary products a legitimate thing which should be allowed?
Aspie96#5177: The government has a stance on what is allowed and what is not. It is the law.
Keepthepace#6435: I fail to see how the same reasoning does not apply to software as well?
Keepthepace#6435: And copyleft licenses are allowed.
Keepthepace#6435: Legality, in a free country, only gives you the freedom to make many different choices, it does not tell you which to make.
Aspie96#5177: The law allows the making of proprietary products in general. It's true that government works might be copyrightable. I believe however they should go back to the public, who should be allowed to use them for all legitimate activities.
Keepthepace#6435: "It is legal" has never been a sufficient reason to do something
Aspie96#5177: No, but it is a sufficient reason for "the government shouldn't prevent it".
Aspie96#5177: I am saying that if a model is copyleft, the government shouldn't pay for it.
Keepthepace#6435: And why are you talking about government? I kind of agree that public domain makes sense for them, but here we are talking about EleutherAI, a private (non-profit) effort
Aspie96#5177: I wasn't referring specifically to EleutherAI. The reason I mentioned governments (I include in this every work fully or partially funded by a government, not necessarily made by a branch of the government directly) is that research is often government-funded, and models are a result of research more often than software is.
Aspie96#5177: In the case of other researchers, there is in principle nothing wrong with using any license at all, including of course copyleft licenses.
Aspie96#5177: However, I still believe it's better for the output of research to be built in such a way that such knowledge is simply given to the public with no strings attached at all, or as few as possible.
Keepthepace#6435: As long as researchers can apply for patents and create companies out of their research, I am going to say that copyleft is morally defensible.
Keepthepace#6435: Well, I am interested in EleutherAI-style efforts there.
Aspie96#5177: I think it is morally defensible, I just find it suboptimal.
Aspie96#5177: > However, I still believe it's better for the output of research to be built in such a way that such knowledge is simply given to the public with no strings attached at all, or as few as possible.
This includes EleutherAI.
Keepthepace#6435: Well, I disagree there, so I think that won't be easily resolved
Keepthepace#6435: I think DL models should not be controlled by private interests
Keepthepace#6435: I think public domain, which allows closing down of fine-tuned models, allows that
Keepthepace#6435: I think copyleft fights that
Aspie96#5177: I think the information in a paper, for instance, should have no strings attached.
Keepthepace#6435: I think research is best done in the open, and that companies, rationally, don't leave things open unless forced to
Aspie96#5177: Not even "if you build other info on top of this, there should be no strings attached".
Keepthepace#6435: Yes, talking about models there
Aspie96#5177: When I learn something from Wikipedia (which is copyleft, I will get to that in a second), I can use that info however I wish. I can mix it with other information I get from my dictionary. And I can use it in a private lecture, or even give that info to someone else non-free through an NDA.
Aspie96#5177: What's copylefted in Wikipedia is the text itself and how it is organized. But the information it conveys is truly in the public domain.
Aspie96#5177: In the case of software, it is a "work" in the classical sense of the term.
Aspie96#5177: But a neural network contains nothing but the description of a statistical distribution.
Keepthepace#6435: I don't think progress would be slower without copyright protection; I think all knowledge should be free by default. Copyleft is an attempt at reaching an approximation of that state
Aspie96#5177: I think that only counts as knowledge, and that's why it should come with no strings attached. Which is already the norm for most kinds of knowledge.
Aspie96#5177: That is already the case for knowledge, with the only possible exception of trained models.
Aspie96#5177: I am arguing they should not be an exception.
Keepthepace#6435: Yes, we both agree it should not be copyrightable; we both agree it has a risk of actually being recognized as copyrightable. If you think it should never be closed, then why oppose a copyleft license for them?
Aspie96#5177: Even if a work is not copyrightable, there are other ways of making it nonfree.
Aspie96#5177: Contracts are one way to achieve that goal.
Aspie96#5177: I could give you knowledge that I acquired from the book I have at my left and ask you to sign a contract that you will not pass that knowledge forward.
Keepthepace#6435: But we are both assuming it may be recognized as copyrightable. Assuming that, why don't you want them copyleft?
Aspie96#5177: The point is the book came to me with absolutely no protection at all on the information it contains.
Aspie96#5177: Because that would be a condition on knowledge. And I think the role of researchers in society should simply be that of pointing at things already there in the universe (either the physical one or that of maths) and saying "there, look!". I do not believe acting otherwise is immoral, I just don't think that's ideal for research.
Aspie96#5177: We treat all information that way. Whether the legal hook to "protect" information is copyright or any other law or even just moral agreement is, I think, secondary.
Keepthepace#6435: How does closed knowledge help research?
Aspie96#5177: I don't think it helps research. I don't think researchers should be restricting any activity, not even those not helping research.
Aspie96#5177: I think research is a very special thing in society, in a way in which typical software writing is not.
Keepthepace#6435: So you are arguing researchers have no responsibility in the creation of conditions that help research?
Aspie96#5177: I absolutely believe that, but it's not what I am arguing. Researchers, or anyone else in society, may very well say what the best way for research to help others is. What "conditions" are to be followed. I am doing that right now. I am also saying that scientific knowledge should not have any strings attached, not even that of not attaching strings.
Aspie96#5177: If I were to build a numbering system, derivative of rational numbers, I could give it to you under an NDA. It wouldn't be copyrightable, but that would be irrelevant. That's because numbers are simply a commons. They are mine as much as yours. I think the same should be true of all scientific knowledge.
Aspie96#5177: In essence, I think the job of a researcher is to point out things in the universe and to make others aware of them.
Aspie96#5177: You should not be liable to me, in any way, for how you use information that I provided to you.
Keepthepace#6435: That's weird, from every statement I'm getting the feeling that you are opposed to the idea of intellectual property of knowledge, but it perplexes me why you refuse to state it plainly
Aspie96#5177: What kind of IP are we referring to?
Aspie96#5177: Knowledge isn't copyrightable, it might be patentable. It might also be closed off with NDAs, but those are not IP.
Keepthepace#6435: Patents are considered a kind of IP
Aspie96#5177: Yes, patents are.
Aspie96#5177: I was saying NDAs aren't.
Aspie96#5177: I am not opposed to patents in principle, I just wouldn't support any research whose output is patented.
Keepthepace#6435: And pieces of knowledge are, sadly, copyrightable. Our world is not ideal. IP laws are used to prevent the spread of knowledge, why not accept putting some protection against that?
Aspie96#5177: All *representations* of knowledge are.
Aspie96#5177: Wikipedia is copyrightable and copyrighted.
Aspie96#5177: What I learned from Wikipedia is now mine, however.
Keepthepace#6435: And yet, if I were to discover pieces of knowledge but put them in a copyrighted document you are denied access to, I would effectively bar you from accessing that knowledge
Keepthepace#6435: (are we being off-topic here?)
Aspie96#5177: Yes, unless the same info is in other documents or I can discover it myself.
Aspie96#5177: But it wouldn't be the same as copyrighting the knowledge itself, even if it might have somewhat similar consequences.
Aspie96#5177: I don't think so, the topic is which license should be used.
Keepthepace#6435: I see, we are having a deontologist-consequentialist disagreement. I think the consequences are what is important, I suspect you think the actions are. We are not to solve this debate here, I fear
Aspie96#5177: You are correct.
Aspie96#5177: > We are not to solve this debate here, I fear
This I do not know.
Aspie96#5177: But even from a strictly consequentialist point of view, I think a copyleft license would not be ideal, for a few reasons:
1) Such models might be copyrightable only in some countries, if they ever are. This would create an uneven legal territory in AI and research if copyleft licenses are introduced.
2) Until the question is answered, the situation would be unclear. Meaning some would follow the copyleft condition, some would ignore it, and few if any would sue. I wonder if this could also result in attribution information being removed, in some cases, by parties not following the copyleft license which would be willing to follow permissive licenses.
3) If they turn out not to be copyrighted (which is what we should be aiming for) the license will be, at that point, useless. Those who decided to follow it will have been "punished" uselessly.
4) I argued why I think the government shouldn't fund copyleft models and how it's different from software. A lot of research is government-funded and we want to increase that amount. If licenses are permissive, that's a lot easier.
5) Even in software, sometimes permissive licenses do a good job at attracting contributions from companies.
This doesn't mean copyleft is a bad idea, but it is not necessarily the best strategy in each and every way. Do not forget the goal of copyleft. It is not that of reducing the amount of proprietary software. Instead, it's that of increasing the amount of free software. And it often does, by convincing parties to release software as free when they would otherwise release it as nonfree. In the case of pretrained models, because of the differences with software, I'd argue this effect is much weaker, if at all existent, while the opposite effect leading to possible contributions is a lot stronger.
Aspie96#5177: There is also a political point: trying to enforce a copyleft license relies on arguing that the model is copyrighted. If we want to argue it is not, we should not be the ones using copyleft licenses on them.
Aspie96#5177: Note that permissive licenses are not the same as the public domain. But in practice the difference is irrelevant in many cases for those using the model.
neko#5937: Copyleft is cancer
neko#5937: I think there's a nice mathematical solution to this situation but i haven't figured it out
Aspie96#5177: I would therefore argue that even from a strictly pragmatic point of view, in the specific case of models, including those trained by EleutherAI, a permissive license (which is the status quo) is best.
neko#5937: Yeah it's a general paradox, how do you share something to maximize innovation, while having people contribute back to it, and for both parties to profit
neko#5937: Even Tesla has this problem
Aspie96#5177: Which Tesla?
neko#5937: The company
neko#5937: https://www.vennershipley.co.uk/insights-events/does-teslas-open-source-patent-philosophy-mean-they-are-free-to-use/
Aspie96#5177: For all knowledge I have, I am not accountable to contribute back to anyone. I don't think mathematical distributions should be any different from how to start a fire.
neko#5937: It's complicated and sad but i haven't figured out a nice mathematical/engineering solution to it
Keepthepace#6435: You are making the assumption that models will eventually be recognized as non-copyrightable. Now redo the whole scenario assuming the opposite: people putting their research in the public domain and reusing models they assume to be free from Nvidia, FB, Google are suddenly in violation of copyright and need to secure licenses
neko#5937: Yeah this whole license system is a wreck that slows down innovation
Aspie96#5177: I am not. I am assuming it is currently unknown, and that we have no guarantee the result is the same worldwide.
Aspie96#5177: > Now redo the whole scenario assuming the opposite: people putting their research in the public domain and reusing models they assume to be free from Nvidia, FB, Google are suddenly in violation of copyright and need to secure licenses
I am not advocating for using a model if there is no license or waiver attached.
Keepthepace#6435: I have always assumed that Tesla's patent pool was like Google's: it is a "you are free to use these patents unless you sue us/someone for infringement", am I wrong? That would be pretty copyleft to me
Aspie96#5177: Nothing to do with copyleft, the way you describe it.
Aspie96#5177: It's a retaliation provision, which the Apache 2.0 license has.
Aspie96#5177: It would be considered non-free if applied to copyright, by the way, but it is considered free when applied to patents.
neko#5937: Wow time to use Google's patents
neko#5937: Thanks for the tip
Keepthepace#6435: That's very far from the permissive/public domain you promote
Aspie96#5177: It has been discussed only once by RMS to the best of my knowledge, but there is no decision to the contrary (to my knowledge).
Aspie96#5177: There is a difference, but it is not considered a form of copyleft.
Keepthepace#6435: "3) If they turn out not to be copyrighted (which is what we should be aiming for) the license will be, at that point, useless. Those who decided to follow it will have been "punished" uselessly." -> I don't see how they would be punished?
Aspie96#5177: The Apache license is universally seen as permissive (except by the OpenBSD community, which rejects it in part for this reason, but they don't really do a lot of defining).
Keepthepace#6435: That's much further than your position "let's do the minimum the law mandates, and not worry about other people's actions"
Aspie96#5177: I haven't read the article, just your wording, and I am assuming it's the same retaliation provision, which looks likely from your wording.
Aspie96#5177: It is meant to protect none but the licensor.
Keepthepace#6435: to be honest, I am not that knowledgeable about their patent conditions, and that's a distraction from the main subject, I believe
Aspie96#5177: If I sue someone else for violating my patent, I retain the license. The "problem" only arises if I sue the author of the program.
Aspie96#5177: Whether their provision is this one or not is secondary: the point is that it's not considered a form of copyleft.
Aspie96#5177: I think the Apache 2.0 license which EleutherAI currently uses is perfectly fine.
Keepthepace#6435: I am wondering if there has been discussion/votes on that issue?
Keepthepace#6435: I would be surprised if it were unanimous
Aspie96#5177: I think it was the result of internal discussion.
Aspie96#5177: There has been some discussion here, but the conclusion wasn't reached publicly.
neko#5937: It can be, if it's enforced by code
Aspie96#5177: This is the exception, and even when it does happen it's usually a mixture of both.
neko#5937: Let's just remove the human aspect
Aspie96#5177: You simply cannot define even something like "killing" rigidly, as the death of an organism cannot be defined in such a way that it would always be objectively identifiable.
Aspie96#5177: You can't just remake the laws. We have to deal with the way they are.
neko#5937: Ok my bad
Aspie96#5177: Also, the specification is a human factor, since it arises from what we want to obtain.
Keepthepace#6435: Then that's what it is, I'll promote copyleft models somewhere else then, I guess
Aspie96#5177: Computer programs are "objective", but the only reason they work is that they work in a very specific, narrow, artificial, virtual, abstract world we built specifically so that a machine could deal with it.
Aspie96#5177: Laws have to work with and describe the real world we live in instead.
Aspie96#5177: And give meaning to high-level ideas we want to address.
Aspie96#5177: I have two suggestions for you.
Aspie96#5177: First, if you are not a lawyer, **and even if you are**, have a lawyer review any complex license you write. Yes, copyleft licenses are complex. All of them.
Aspie96#5177: Secondly, discuss it with the main FLOSS communities if you want to make sure it is FLOSS. Don't assume it is if it goes further than the AGPL. If you don't, that's OK, but don't refer to it as "copyleft", as that term usually means a license that is both FLOSS and ShareAlike at once. "Reciprocal", with some adjective specifying your specific conditions, would be a much more neutral term.
Louis#0144: Oh yeah
Louis#0144: I tried setting up copyleft for finetuning
Louis#0144: We were in contact with FSF
Louis#0144: but they didn't really seem interested
Louis#0144: We were going to contact some lawyers at UW who specialize in FLOSS
Louis#0144: but we forgot Lol
Aspie96#5177: Interesting. But note that the FSF isn't always the most restrictive about whether a license is free or not.
Aspie96#5177: Most licenses are either free according to everyone or according to no one.
Louis#0144: We got put in contact with lawyers at FSF
Aspie96#5177: But there are licenses which are on the edge and are free only according to some.
Keepthepace#6435: @Louis I am interested in more info about that. I am about to contact people at the Eclipse Foundation
Keepthepace#6435: @Aspie96 thanks for the advice
Louis#0144: You should just go for it
Louis#0144: FSF said it was very complicated and kind of out of their scope
Louis#0144: lol
Louis#0144: Nothing really insightful happened
Aspie96#5177: What I am saying is that if a license is acceptable to the FSF, it might still not be for OSI and Debian, which are the two other main communities vetting which licenses are FLOSS. The same applies to either of the other two.
Louis#0144: We wanted to poison the data in such a way that finetunings were identifiable
Louis#0144: Like radioactive data
Aspie96#5177: UW?
Louis#0144: University of Washington
bmk#1476: unfortunately that never got off the ground, right?
Louis#0144: Yeah
Louis#0144: It's very hard at this scale
Louis#0144: Poisoning like that had only ever worked at very small scales
bmk#1476: I wasn't very involved, what was the main blocking factor?
Louis#0144: Just a few million params
Louis#0144: Oh it doesn't scale
Louis#0144: That was the issue
Aspie96#5177: You sure 'bout that?
Louis#0144: It scales super poorly
bmk#1476: we know that for sure?
Louis#0144: No
Louis#0144: Just, if I remember, that was the educated guess of the radioactive data authors
Louis#0144: It was a FB paper originally
bmk#1476: like, you guys actually got the code working but it got less effective for big models?
Louis#0144: Oh
Louis#0144: No, we didn't get the code working
Louis#0144: It was still in the theory stages
EricHallahan#1051: So what you're saying is that there is a chance.
bmk#1476: then scale isn't the blocking factor lol
cst#9766: check out the Software Freedom Conservancy, they provide pro-bono legal representation for FOSS
Louis#0144: Yeah there's a chance
bmk#1476: the blocking factor is that you just stopped working on it
Louis#0144: I wasn't working on it FWIW
Keepthepace#6435: From the linked discussion, it is clear that Debian has trouble with their definitions of "free" and the constraints of deep learning models
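For context on the technique being debated: the "radioactive data" paper Louis refers to (a Facebook AI paper, Sablayrolles et al.) marks training data with a secret direction in feature space, then tests whether a suspect model's weights align with that direction. The NumPy sketch below is a deliberately simplified toy of that idea, not the paper's actual procedure; the shapes, the additive mark, and the cosine test are all illustrative assumptions (the real method operates on image features through a pretrained network and comes with statistical guarantees this cartoon lacks).

```python
import numpy as np

# Toy sketch of the radioactive-data idea: mix a secret "carrier" direction
# into training features, then check whether a suspect model's weights
# correlate with it. Purely illustrative.
rng = np.random.default_rng(0)
dim = 512
carrier = rng.normal(size=dim)
carrier /= np.linalg.norm(carrier)  # unit-norm secret direction

def mark(features: np.ndarray, strength: float = 0.1) -> np.ndarray:
    """Nudge each feature vector toward the secret carrier."""
    return features + strength * carrier

def alignment(classifier_weights: np.ndarray) -> float:
    """Cosine similarity between suspect weights and the carrier.
    Values far above chance suggest training on marked data."""
    w = classifier_weights / np.linalg.norm(classifier_weights)
    return float(w @ carrier)
```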
Louis#0144: that was Stella's project and someone else's
Louis#0144: I forgot who the other person was
Keepthepace#6435: why not?
Louis#0144: Stella originally joined to work on this
Louis#0144: lol
Aspie96#5177: I don't think the FSF currently has any opinion about models.
Louis#0144: I remember we were discussing data poisoning when I first joined
Louis#0144: And I said I had a mathematician friend who could help
Louis#0144: And I brought Stella
Louis#0144: LMAO
Louis#0144: The blocking factor, I guess, was that there seemed to be lots of complications + we didn't know if we actually wanted it + the people working on it wanted to go to projects more immediately practical
Louis#0144: Stella went to the Pile
Louis#0144: And I don't remember who the other person was
Aspie96#5177: I was half kidding. As you may be aware, there are several controversies between FLOSS communities. I currently happen not to be too much of a fan of the EF, but that's outside the scope of this discussion. I said it jokingly.
bmk#1476: pile is over now, do we spin rad lab back up then
Louis#0144: I don't have time for it
Aspie96#5177: Interesting, because then EleutherAI decided to use a non-copyleft permissive license.
Aspie96#5177: (Which I support).
EricHallahan#1051: We could technically do that when we feel like it is time.
rom1504#5008: France has protests every year. That's the normal and stable situation. It doesn't affect much.
Louis#0144: Yeah
Louis#0144: I think one of the other things is that we decided we needed a legal subgroup to help if we decided to do this
Aspie96#5177: I thought the intention was to use Apache 2.0 for future models as well.
Louis#0144: I remember Connor explicitly discussing that
Louis#0144: Everything here is WIP
EricHallahan#1051: https://discord.com/channels/729741769192767510/730090096287547444/844571285131493387
Keepthepace#6435: #legal exists but is archived
Louis#0144: You know what we need at some point... a legal subgroup and IRB
Louis#0144: Yep
Louis#0144: That's where the discussion was
Louis#0144: It was exactly this
Aspie96#5177: Yes, I was referring to that tweet.
Louis#0144: Plus the Pile, briefly, at one point
Keepthepace#6435: From what I read there, it was not exactly a unanimous decision but more of a status-quo thing
bmk#1476: I would have voted for MIT but I don't care enough to go back and change anything
Aspie96#5177: Careful about voting!
Aspie96#5177: The voting system must not incentivize tactical voting.
Aspie96#5177: Otherwise the grouping will affect the result.
Aspie96#5177: The voting method Debian uses is almost immune to this.
alexyz#3459: University of Wisconsin?
Louis#0144: Washington
Aspie96#5177: ^^
alexyz#3459: ah ok
Aspie96#5177: The difference isn't too wide.
Louis#0144: But yeah, doing this is as much of an ethical and legal question as it is technical
EricHallahan#1051: Well, the thing is, MIT means that they could keep the name the same on a fine-tuned model, correct?
Louis#0144: Could potentially have alignment implications too
bmk#1476: huh?
Louis#0144: Like saying alignment has to be copyleft
Aspie96#5177: The MIT license means you can do anything at all as long as you follow its one condition.
Louis#0144: Which could be cool
EricHallahan#1051: I may be totally wrong lol
bmk#1476: does apache force you not to keep the name?
Aspie96#5177: However, trademarks might apply, as might moral rights. And misrepresenting a work can be wrong towards the user too. So if one changes it, they should still change the name.
alexyz#3459: Personally, I think the best license is a license that just allows people to do whatever they want with it, isn't that MIT?
Aspie96#5177: No, but it has this provision:
> You must cause any modified files to carry prominent notices stating that You changed the files;
And it clarifies this:
> This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor,
Aspie96#5177: Or the magical WTFPL.
bmk#1476: so for our purposes it doesn't differ from MIT much
Aspie96#5177: Jokes aside, for that a public domain declaration is better.
Aspie96#5177: Exactly.
EricHallahan#1051: So it really isn't a big deal lol
bmk#1476: it isn't
Aspie96#5177: Someone was suggesting using a copyleft license (I am opposed to it).
Aspie96#5177: The difference between copyleft and non-copyleft would be a big deal.
bmk#1476: I still vote for MIT just because simplicity is good imo, but I don't care enough to push hard for it
EricHallahan#1051: But they are both permissive?
bmk#1476: though it is a bit awkward that the model and the code have different licenses
EricHallahan#1051: Again, not a lawyer.
Aspie96#5177: I'd presume you'd also vote for permissive over copyleft in general, in case MIT isn't selected. Not because I want you to agree with me, but simply because it wouldn't make sense to vote for MIT yet consider copyleft better than Apache.
Aspie96#5177: In that case it doesn't change much for this project.
Keepthepace#6435: To be precise: for such a case CC0 is a good idea, some countries don't have a "public domain"
Aspie96#5177: Yes.
Aspie96#5177: For those who do not know, here is an utterly useless summary.
bmk#1476: it's more that my default is MIT and I need strong reasons to change from the norm
alexyz#3459: Wait, what is the GPT-Neo models' license?
Aspie96#5177: CC0 does 3 things to place something in the public domain:
bmk#1476: and I haven't heard convincing arguments for why we want copyleft
EricHallahan#1051: https://eleuther.ai/faq
Keepthepace#6435: As a side note, I encourage people who are not clear about the obligations of the different licenses to go and try to read the MIT and BSD at least, which are just 2 or 3 paragraphs. It is good to know what is and isn't in the license you use
Aspie96#5177: CC0:
1) It renounces all copyright and similar rights, placing it in the public domain.
2) For those rights that cannot be renounced, and where the public domain is not possible, it licenses them instead with no conditions ("license" = "permission").
3) If there are rights that cannot be renounced nor licensed, it promises not to sue.
EricHallahan#1051: The entirety of MIT: https://mit-license.org/

**The MIT License (MIT)**

Copyright ©️ 2021 <copyright holders>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Aspie96#5177: Note that it is not impossible for 1, 2 and 3 to all fail for some rights in some countries.
Keepthepace#6435: To have access to hundreds of fine-tuned GPT-Neo models 2 years after its release?
bmk#1476: I already said I don't think that's a convincing argument
EricHallahan#1051: But that isn't important to us.
AI_WAIFU#2844: that's gonna happen anyways
Aspie96#5177: I further encourage the reading of the Apache 2.0 license as well.
Keepthepace#6435: and the GPL 🙂
Keepthepace#6435: Though this one takes a bit longer
alexyz#3459: I... keep forgetting to read it lol
alexyz#3459: A license that forces you to release finetuned models is just... not good
Keepthepace#6435: I am genuinely curious: what do you consider EleutherAI is bringing that OpenAI is not?
Aspie96#5177: > EleutherAI is licensing models under Apache 2.0. If you use our models, we would highly appreciate you citing or displaying your usage of them.
It doesn't specify anything for future models.
neko#5937: Free models
bmk#1476: why would a country decide to make it impossible to renounce rights or make them licensed without conditions? o.O
bmk#1476: that seems like a weird thing to do
Aspie96#5177: Although I hope they will remain under the current license.
Keepthepace#6435: I think a license that does not do that is useless, but that's like, our opinions 🙂
Aspie96#5177: Just because I don't want more restrictive ones.
EricHallahan#1051: True, I assumed that would be inferred.
bmk#1476: what do you mean
Aspie96#5177: What? Without a license one cannot use the model **at all**.
bmk#1476: I just want these models for my own research
Aspie96#5177: Even the FSF would rather use permissive licenses than no license at all if it had to choose for its software.
Keepthepace#6435: I heard (but never checked) that this includes some US states.
Aspie96#5177: Responding to myself: not exactly true. Obviously it is only true for uses requiring a license.
bmk#1476: more permissive means the model is more useful for people to use, and I don't think a license is stopping really bad actors anyways
bmk#1476: truly misaligned actors can just, like, ignore the license lol
bmk#1476: the only people licenses restrict is people who are good enough to care about obeying licenses
Keepthepace#6435: And you don't want the models finetuned from this model to be usable in your research?
bmk#1476: those models won't be useful for my research
alexyz#3459: Forcing people isn't the best way to go about that...
Keepthepace#6435: You want to redo in the public domain all the training done privately, duplicating efforts and making research lag behind?
Aspie96#5177: It is possible for a right to be inalienable to such a degree that even if you sign, in writing, that you license it, that you renounce it, or that you will not enforce it, that provision of the contract is void.
Keepthepace#6435: How do you know?
bmk#1476: no, I just don't think any models of any value will be released if and only if the license is copyleft
Keepthepace#6435: What is the best way then?
Aspie96#5177: Note however that, in practice, there has been no issue related to this about CC0, to my knowledge.
alexyz#3459: Just... letting people release when or what they want?
Aspie96#5177: You are making the assumption that copyleft would work in practice the same way it has for software.
alexyz#3459: Because most finetunes are just bad, like my own
Keepthepace#6435: @bmk you don't think some companies will be happy to use a free model instead of spending millions to retrain one, in exchange for releasing their own improved model? There are tons of companies that do the equivalent of that with copyleft software
Keepthepace#6435: it is especially prevalent in the web world
alexyz#3459: like "wow, you finetuned on improperly formatted cookbooks, what a gamechanging model"
Keepthepace#6435: The very reason EleutherAI exists is because OpenAI failed to do so
Aspie96#5177: The reason copyleft works is that some parties want to release software and are willing to accept a copyleft provision for the sake of having that premade work.
alexyz#3459: That doesn't even make sense
alexyz#3459: What does that have to do with anything?
Aspie96#5177: If the party decides not to build that proprietary software at all, instead of freeing it, copyleft hasn't worked.
Keepthepace#6435: I doubt EleutherAI would exist if GPT-3 was released in the public domain?
Keepthepace#6435: Or am I wrong about its very goal?
alexyz#3459: So? What does that have anything to do with... anything?
alexyz#3459: Why does that matter to the conversation about copyleft?
Aspie96#5177: If the party decides to build it from scratch, copyleft has also not worked.
𓅬 gabriel_syme 𓅬#3220: serious deja vu
Aspie96#5177: Copyleft only works if it makes sense for a party to release the work as free, but they wouldn't do so if they didn't have that additional incentive: they are "rewarded" with premade work.
alexyz#3459: This conversation has happened at least 1 time before
bmk#1476: I cannot imagine any circumstances where a) the model is of any nontrivial value to anyone, b) the model would be released if and only if the license is copyleft, and c) the model would still be trained in the first place if copyleft
𓅬 gabriel_syme 𓅬#3220: it's an important one
Keepthepace#6435: I have a real-life example that happened to me
Keepthepace#6435: I can lay it down if you want
Aspie96#5177: There is also another case in which copyleft is useless: when a party would have released the software as free either way.
bmk#1476: eleutherai wouldn't exist if gpt3 was released under public domain or GPL or MIT or CC BY-SA NC ND
bmk#1476: I really don't see how copyleft has anything to do with it
Aspie96#5177: CC BY-SA-NC-ND is not a license that exists. An ND license cannot be SA.
bmk#1476: you get my point lol
alexyz#3459: EleutherAI releases the models, and doesn't care about whatever happens with them. People can finetune whatever they want and do whatever they want with it. Eleuther's goal is to get the models out into the world, not to moderate whether someone releases their finetune of them or not...
Aspie96#5177: I'd argue it'd exist in the case of ND or NC, because NC and ND licenses are nonfree.
Keepthepace#6435: It was a marketing company, they wanted to gauge the age and gender of the people in a video of a street in Japan. Their goal was to profile markets. They asked me if DL could do that. I found some trained models that kind of worked, BUT... a lot of their targets were old Japanese ladies (it happened in Japan) and while a human would have no problem taking cues from clothing and hair styles about the age of a person, that model was not trained on these cultural cues. The project did not go forward for other reasons, but this company did not care for the end model. They were happy not to have to train something from scratch, their added value was in the output the model was to produce, and they would have accepted releasing the model if it was a legal obligation, but opted not to, out of habit of protecting IP, since there was no obligation
bmk#1476: i don't care enough that i'd go to all the effort of training a fully free gpt if a nonfree-encumbered but open one existed
alexyz#3459: Yes, and? That was their choice, they can do whatever they want with their model.
Aspie96#5177: @Keepthepace Do you have any example of this happening:
1) Party A publishes a model as free.
2) Party B reuses that model and distributes another copy of the model, modified and somehow nonfree.
3) You have reasons to believe that if party A had used a copyleft license, using it would still have been convenient for party B.
alexyz#3459: Sure, I'd like there to be one more open model out there in the world.
𓅬 gabriel_syme 𓅬#3220: I think leaving it open for people who fine-tune to decide is the best. It allows a much wider adoption of the model and eventually spills out to the world. [citation needed]
alexyz#3459: But forcing people to release them would not help anyone.
Keepthepace#6435: But that's my point: despite a stated commitment to openness, OpenAI did not release GPT-3. You can't just trust that people will spontaneously do it, and it really perplexes me that people think MIT/BSD/public domain somehow incites people to do it.
alexyz#3459: OpenAI changed their commitments, what does that have to do with GPT-Neo finetuned models being released
𓅬 gabriel_syme 𓅬#3220: and I'm all about open source in my domain (which is as closed as it gets), but allowing the options does not deter OS activity imo
Keepthepace#6435: @Aspie96 I think I just did, yes
alexyz#3459: The thing is, we're talking about GPT-Neo finetuned models
alexyz#3459: They can't be that exclusive
alexyz#3459: like "wow, someone trained this on story books!"
alexyz#3459: That's not hard to do yourself
alexyz#3459: it doesn't matter if they release it or not really
bmk#1476: i think that all gptneo tuned models will be totally obsolete within 5 years, probably less
Aspie96#5177: Yes, I hadn't read it. Do you think that, if they had released their modifications, it would have added a significant amount of value to the community?
alexyz#3459: just because it's not hard to recreate, the tools are right there for you
alexyz#3459: definitely, for sure
Keepthepace#6435: How about "Wow, someone trained that on their private database of a million case laws"
Keepthepace#6435: That one is impossible to retrain
alexyz#3459: Good for them!
Aspie96#5177: Also consider that larger companies would have more means to retrain things.
EricHallahan#1051: If dependent on current events, way less.
bmk#1476: yeah prolly
alexyz#3459: You could probably find a similar dataset to recreate it yourself...
alexyz#3459: But that's besides the point
Keepthepace#6435: Yes, clearly. Most trained models out there make a lot of US-centric assumptions. Localized models are precious for actual field applications
alexyz#3459: The inconvenience and pain that it would cause is not enough justification for a few private-dataset models to be released.
alexyz#3459: Besides, if they had to release them, they'd just not use GPT-Neo
Keepthepace#6435: that's exactly the point. Copyleft gives a tool to allow far more sharing of data and resources than would otherwise naturally happen.
alexyz#3459: because private data could be leaked
alexyz#3459: Did you read my next message?
alexyz#3459: It's not worth it
alexyz#3459: especially not for GPT-Neo finetuned models
Keepthepace#6435: No, 99% of the companies that have a use for DL models are not DL companies. They want to keep the DL work minimal. If you offer them shortcuts that are profitable and with strings attached, they do a cost analysis, and copyleft is very often an acceptable cost to them
bmk#1476: 1. any model downstream of (finetuned from) gptneo will be obsolete within a few years
2. copylefting gptneo models doesn't cause future models that replace gptneo to be copyleft in any way
alexyz#3459: companies avoid GPL like the plague
Keepthepace#6435: Apple does
AI_WAIFU#2844: literally, it's a plague on software
Keepthepace#6435: Most web companies use GPL-ed soft routinely
alexyz#3459: That is a statement with no evidence to back it up...
AI_WAIFU#2844: It's more accurate to say that they keep a solid legal barrier between GPL stuff and their stuff
bmk#1476: :smallbrain: plandemic :bigbrain: GPlanL
Louis#0144: Should geese be copyleft
Aspie96#5177: To be fair, not all companies do, but they are more willing to contribute to non-copyleft projects.
Aspie96#5177: An example is TensorFlow.
Aspie96#5177: Another is PyTorch.
bmk#1476: copygoose
Keepthepace#6435: as is the plague statement. How many companies use WordPress? How many companies sell WordPress-related things? It is GPLed, like many CRMs
Louis#0144: That's what the G in GPL is for
AI_WAIFU#2844: A copyleft Neo would still be used, but companies just wouldn't build on top of it/finetune it
bmk#1476: Goose Plan License
alexyz#3459: permissive licenses allow the original thing to be released, while letting future applications of it be non-encumbered
bmk#1476: we need to make that a thing
Aspie96#5177: Oh wow. r/angryupvote if this was reddit.
alexyz#3459: Just please, Eleuther will not use a copyleft license, it's obvious
bmk#1476: "this project is licensed under the GPL (Goose Plan License)"
AI_WAIFU#2844: thank fuck it isn't
alexyz#3459: i need to go to bed
alexyz#3459: g'night, geese :goose6:
Aspie96#5177: Reddit isn't too bad.
Keepthepace#6435: I love "obvious" as an argument. Always a blast when it is accepted...
Louis#0144: Terms of the GPL are that all comments need to be replaced with pictures of geese
Aspie96#5177: Better than the Twitter community for sure.
Louis#0144: You also have to have a code of conduct saying that you'll honk at newcomers
AI_WAIFU#2844: You're suffering from Stockholm syndrome
bmk#1476: Reddit is Gell-Mann cranked up to 11
Aspie96#5177: I am not a huge fan of CoCs. I cannot support this.
Louis#0144: I hate Reddit
Louis#0144: Awful toxic site
Aspie96#5177: Am I the only redditor here? 😦
Aspie96#5177: I am scared, don't eat me.
Louis#0144: I go on there for pictures of puppies
Keepthepace#6435: No, am toxic too
bmk#1476: Reddit used to be good
Louis#0144: Nothing else
Aspie96#5177: r/angryupvote to you as well. Well done, bastard
Louis#0144: Reddit was good in 2013/2014
EricHallahan#1051: I once was a Redditor but now I spend my time here lol
AI_WAIFU#2844: I was on reddit 8 years ago, it was good back then.
bmk#1476: the only subreddit I frequent is r/ich_iel
AI_WAIFU#2844: Reddit was at its peak when it was 50/50 porn and programming
Keepthepace#6435: all social networks are what you make them out to be. FB changed for me after a huge contacts cleanup. Now it is mostly tech news (and my wife's friends' latest pictures of their meals)
Louis#0144: I deleted fb
Louis#0144: I use Reddit mostly for posting my projects on the machine learning sub
Aspie96#5177: What changed besides the theme? I know I am off topic, but I need to know why people hate Reddit.
AI_WAIFU#2844: FB is great if you get rid of the news feed.
Louis#0144: And I use twitter for networking
AI_WAIFU#2844: The userbase and the moderation policies
Keepthepace#6435: wish I could, but I joined a community that mostly uses Messenger 😕
Louis#0144: This is the only social media I use for social media
Aspie96#5177: I've never found very toxic behaviour except in a few subs.
bmk#1476: reddit is full of a) circlejerking over stale memes b) gell mann amnesia of the highest degree
Louis#0144: Most gaming subs are awful
Louis#0144: The Linux subs are bad too
Keepthepace#6435: the general public mostly hears about /r/redpill and /r/t_d before it got closed
AI_WAIFU#2844: With few exceptions, it became impossible to have interesting discussions, especially about edgy topics.
Aspie96#5177: I think the userbase is truly great, mostly. But I am not sure about the moderation policy.
Keepthepace#6435: I use /r/france a lot, but also enjoy /r/rust and a few DL- and robotics-themed subs
Keepthepace#6435: There seems to be a phenomenon of moderation rot, especially in big subreddits
Louis#0144: Ew, French
Aspie96#5177: The most hateful things I've read have been on Twitter, personally.
bmk#1476: r/ML is unfortunately quite useless (sorry chilli) because people only upvote drama
Louis#0144: I'm of French origin
Keepthepace#6435: I saw it happen in several places, some people try to gain mod rights and then change the direction of the whole sub
Louis#0144: 🤮
Aspie96#5177: I do not think moderation should be strict, quite the contrary.
Keepthepace#6435: honk!
AI_WAIFU#2844: The thing is that power was consolidated by a few moderators in large communities, and multiple subs started to suffer from evaporative cooling/the problem of the witches
Louis#0144: le honk
Aspie96#5177: But I do believe it should be **very** consistent.
Aspie96#5177: In the sense that it shouldn't be unbalanced and biased.
Aspie96#5177: In any direction.
AI_WAIFU#2844: ~~Then they banned r/bigchungus and that was the last straw~~
Louis#0144: I always get my projects to the top of this sub
Keepthepace#6435: unbiased, strict, big subreddit: choose 2 🙂
Louis#0144: It's great marketing
Aspie96#5177: Unbiased and big subreddit. I specifically don't want it to be strict.
Keepthepace#6435: I am in favor of strong moderation, but I embrace the unavoidable consequence that it will cater to a specific community. That's why it is important to be able to fork a community and have different biases in another subreddit
Keepthepace#6435: ok, gotta spend some family time. Nice discussing, even if I feel the tone became a bit hot 🙂 ++ fellow geese
chilli#5665: Don't really agree with that lol
chilli#5665: There's just not enough drama
chilli#5665: To be the only thing upvoted
Louis#0144: The sub is all Schmidhuber alts
Louis#0144: Nothing else
Aspie96#5177: That's a real problem.
bmk#1476: I remember the Siraj Raval stuff topped the sub like 2 or 3 times
bmk#1476: Schmidhuber stuff makes it to the top occasionally too
Aspie96#5177: What about the drama about Yann LeCun?
Deleted User#0000: Is this data publicly available somewhere? https://cdn.discordapp.com/attachments/729741769738158194/848405760513933322/Screen_Shot_2021-05-29_at_11.41.14_PM.png
bmk#1476: we didn't scrape any fanfiction data, so you're on your own
asara#0001: it doesn't sound like they actually even downloaded it, based on just that excerpt
bmk#1476: can confirm, we didn't
asara#0001: oh, I should have realized
Keepthepace#6435: https://www.reddit.com/r/datasets/comments/b39dal/fanfic_just_all_of_it/
Keepthepace#6435: https://www.reddit.com/r/DataHoarder/comments/6cl44g/fanfiction_scrape_update_new_stories/
Keepthepace#6435: 2 yo and 4 yo links
Keepthepace#6435: and that's two subreddits I'll subscribe to
bmk#1476: i'm ten parallel universes ahead of you, i've already been afflicted by r/datahoarder and now own a storage array, but it's not big enough *the storage array must grow*
Kia#2550: You been reading a fanfic lately?
Keepthepace#6435: Insufficient storage is a void in the heart that can't be filled no matter how many HDDs are thrown at it.
BoneAmputee#8363: I am hoarding the most useless data https://cdn.discordapp.com/attachments/729741769738158194/848421944771870790/data_hoarding_gone_wrong.png
Kia#2550: What that can be
Kia#2550: Ow...
Kia#2550: I can read, I forgot :mittwoch:
Keepthepace#6435: Maybe I need to unsubscribe, I don't need to drain money into storage, I have to invest in GPU RAM first...
AI_WAIFU#2844: You know, JAX is supposed to cut down on the amount of suffering we've been experiencing, but I haven't even turned JAX on and I'm flipping through a firehose of logs and errors.
Teemochu#8740: For ponies in particular there's also Fimfarchive, depends on whether MLP is all you need or not. https://www.fimfiction.net/user/116950/Fimfarchive
bmk#1476: imagine not merging all the disks you own into one gigantic super partition
BoneAmputee#8363: I've considered it but it just seems like it'd complicate things
Teemochu#8740: this kills the SSD space
bmk#1476: just use bcache
Teemochu#8740: all of my drives are different lol
bmk#1476: sell them and buy homogeneous drives
bmk#1476: we are borg
Teemochu#8740: I'm off from work, no thinking about Borg until Tuesday
bmk#1476: is the google thing still called borg?
bmk#1476: huh
Teemochu#8740: yes
bmk#1476: it's a perfectly suiting name
bmk#1476: also it looks like bcachefs is really coming along well
Teemochu#8740: there are a lot of wrappers now but it's so leaky of an abstraction that it's not even really an abstraction
bmk#1476: erasure coding and snapshots just got implemented a few months ago apparently https://www.patreon.com/bcachefs
bmk#1476: i can't wait to use bcachefs on my next super big storage cluster
bmk#1476: also i just realized from the embed that the header is an image of Battersea Power Station
Dupelet#9080: As a suggestion - given that there's currently no central place for newbies to discuss or ask questions about GPT-Neo, have you guys considered creating a separate channel for beginners? That would allow amateur enthusiasts to learn while mostly not burdening the regulars
Daj#7482: We've considered it, yeah, and there definitely are good arguments in favor. But we kind of want to politely nudge beginners to go elsewhere, since having a dedicated channel would be endorsing an influx of a certain kind of content (and associated moderation burden) that we'd rather not deal with
Daj#7482: We're just a bunch of volunteers doing this in our free time for fun
Daj#7482: There are plenty of good places for beginners to go and very few that cater as explicitly to advanced practitioners as Eleuther, so we consider it sort of our competitive advantage
Dupelet#9080: That's unfortunate, as I've scoured the existing resources pretty thoroughly and am reasonably sure there's nowhere for beginners to actually go that's GPT-Neo specific. But I get your perspective as well. Regardless, thanks for your efforts!
Daj#7482: Yeah, Neo-specific not really, but we really just wanna focus on our research in peace
Daj#7482: Neo is sufficiently similar to any other LM that any other place discussing GPT-type applications should be applicable to Neo as well anyways
chilli#5665: Lol, on TPUs?
Teemochu#8740: Best 10-minute intro to the transformer architecture I've seen: https://dugas.ch/artificial_curiosity/GPT_architecture.html
Dupelet#9080: Thanks, will check it out!
StellaAthena#3530: If anyone wants to contribute to large-scale language modeling research but isn't familiar with training large-scale language models, I have a couple of data processing tasks that would be incredibly helpful and don't require any familiarity with language models. Shoot me a DM if you're interested 🙂 The project is similar to this paper: https://openreview.net/pdf?id=DGIXvEAJVd
Sphinx#2092: lol is that the adult version of this? https://slatestarcodex.com/2020/01/06/a-very-unlikely-chess-game/
bmk#1476: use pyfra+coconut
Daj#7482: I regret telling you about coconut
bmk#1476: it's amazing tho
bmk#1476: also plotnine
Daj#7482: I can't wait for the official bmk-endorsed EleutherAI research stack
AI_WAIFU#2844: the whole thing will be coconut
Daj#7482: eegi is working on the human interaction and advanced finetuning infra
bmk#1476: pyfra+coconut+plotnine is my stack so far
EricHallahan#1051: Wait, it's all coconut? *Always has been.*
Daj#7482: speaking of bmk, I need to get a clean (unshuffled) version of the Pile and do some things to it, do you have a few minutes some time to show me how best to do that?
StellaAthena#3530: Yes
StellaAthena#3530: What is coconut
Daj#7482: Haskell-like Python language
Daj#7482: http://coconut-lang.org/
Daj#7482: invented by Evan Hubinger
bmk#1476: why unshuffled?
bmk#1476: and can't you just use the components directly in that case
Daj#7482: I could, that would be fine too
StellaAthena#3530: I don't really see how this helps me build a tokenizer for chess notation and a dataset of millions of chess games.
bmk#1476: https://github.com/EleutherAI/the-pile/blob/master/the_pile/datasets.py just iterate over the `documents()` of each thing here
Daj#7482: bmk is shitposting
bmk#1476: haskell helps with everything
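A minimal sketch of what bmk is suggesting, assuming the API of the linked `datasets.py`. Only the `documents()` generator comes from bmk's message; the import path and the dataset class names below are illustrative guesses, so check the linked file for the real names and for whether `documents()` yields plain text or (text, metadata) pairs.

```python
# Hypothetical usage of the-pile's component datasets; class names assumed.
from the_pile.datasets import WikipediaDataset, GutenbergDataset

total_chars = 0
for dataset in (WikipediaDataset(), GutenbergDataset()):
    # Each component exposes its raw, unshuffled documents as a generator.
    for document in dataset.documents():
        total_chars += len(str(document))
print(total_chars)
```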
StellaAthena#3530: Oh
StellaAthena#3530: Yeah, it's in a category all of its own in that way 😉
Daj#7482: Thank you!
bmk#1476: you think i'm shitposting, but little do you know i'm already using coconut everywhere https://cdn.discordapp.com/attachments/729741769738158194/848633809972363324/unknown.png
Daj#7482: I genuinely regret exposing you to this infohazard lmao
Daj#7482: your code is already unmaintainable
bmk#1476: we just all need to learn coconut
StellaAthena#3530: :"Change my mind" meme: @Louis writes better code than @bmk
bmk#1476: not even wrong
EricHallahan#1051: Someone please do this and post it to #memes.
UnsupervisedLearner#4148: @StellaAthena do you happen to know any good work tying together neural nets in an algorithmic information theory framework? Particularly interested if someone has used tools like Kolmogorov complexity to place bounds on model size, for example
Teemochu#8740: There's a very naive vocab for chess I can think of that has 2^12 tokens (most of which are illegal under all circumstances)*, so the space is pretty small
*well, a few more for promotions, since those do share square pairs
StellaAthena#3530: I just want algebraic notation
StellaAthena#3530: e4, c5
Nf3, \_\_\_
StellaAthena#3530: fill in the blank
StellaAthena#3530: There's some details to handle: should lines be numbered? Should White's and Black's moves be paired? What about disambiguating Re1 in the middle game? etc.
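To make Stella's design questions concrete, here is a toy move-level tokenizer for games in standard algebraic notation (SAN). It takes one position on each question: move numbers are stripped, White's and Black's moves stay interleaved, and disambiguation characters (Re1 vs. Rae1) stay inside the move token. Everything here is an illustrative sketch, not the project's actual tokenizer.

```python
import re

# One token per SAN move; castling, captures, promotions and checks are kept
# inside the token. Illegal-but-well-formed moves still tokenize, which is
# fine for vocabulary-building purposes.
SAN_MOVE = re.compile(r"O-O-O|O-O|[KQRBN]?[a-h]?[1-8]?x?[a-h][1-8](?:=[QRBN])?[+#]?")

def tokenize_game(moves: str) -> list[str]:
    tokens = []
    for field in moves.split():
        field = field.rstrip(".")  # strip "1." style move numbers
        if field.isdigit() or field in ("1-0", "0-1", "1/2-1/2", "*"):
            continue  # drop numbering and game results in this sketch
        if SAN_MOVE.fullmatch(field):
            tokens.append(field)
    return tokens

print(tokenize_game("1. e4 c5 2. Nf3 d6 3. d4 cxd4"))
# ['e4', 'c5', 'Nf3', 'd6', 'd4', 'cxd4']
```

Teemochu's 2^12 figure corresponds to the alternative design of tokenizing (from-square, to-square) pairs instead: 64 × 64 = 4096 possible tokens, plus a handful more for promotions.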
Louis#0144: LMAO StellaAthena#3530: But it's pretty easy, yes bmk#1476: the solution is clearly you guys need to get functionalpilled Louis#0144: That speaks volumes about Leo Louis#0144: I write shit code Louis#0144: LMAO bmk#1476: well, i write worse code StellaAthena#3530: Yes bmk#1476: fight me Teemochu#8740: *writes 60-line Java expression* Teemochu#8740: (actually have done this, one of them was indented almost as many spaces) Louis#0144: The thing is both Leo and I are scared of OOP Louis#0144: We only write FP Louis#0144: FP in the wrong hands is unreadable Louis#0144: LMAOO Teemochu#8740: protobuf building in a stream within a stream is something else bmk#1476: isnt this fucking beautiful tho: https://cdn.discordapp.com/attachments/729741769738158194/848635795907805254/unknown.png Louis#0144: Also I started using coconut Louis#0144: It’s so nice Louis#0144: Gorgeous
Louis#0144: Coconut + JAX is like
bmk#1476: the only problem is the vs code support isnt great
andyljones#7746: forgive him, for he knows not what he does
Louis#0144: :ultragoose:
Teemochu#8740: the expression basically said "take this list of things and filter it by whether any of the things in a list inside the proto match this statement, and then map to another proto that among other things only has the matches"
Louis#0144: I only use vs code as an editor
Louis#0144: I don't debug or step through code
Daj#7482: Oh no what have I done
andyljones#7746: you're one monad away from functionalising that loop too
AI_WAIFU#2844: What if we singlehandedly shifted the Schelling point from python to coconut, purging the n00bs from ML
Louis#0144: Nut
Daj#7482: get debugger support and I might be on board
Louis#0144: Why debug
Daj#7482: because debugging is nice
Louis#0144: If u can use
Louis#0144: Hundreds of print statements
Louis#0144: Each a different random animal noise
Louis#0144: So it's easy to know what failed
Louis#0144: :^)
Louis#0144: Oh no everyone is typing
Louis#0144: What have I done
andyljones#7746: but more seriously: use the idioms, luke. ain't worth it to swim against the flow here
Teemochu#8740: LOL I use the names of Pokémon or ponies all the time for that
Louis#0144: LMAO
Daj#7482: After that one time all my print statements got eaten in a C loop because it didn't flush the buffer I use real debuggers lol
AI_WAIFU#2844: TBH debugging gets pretty useless when you have 5 processes running on separate machines all talking to each other.
Teemochu#8740: "wait why did my thing never say Applejack... *oh*"
andyljones#7746: i love functional programming but trying to do it in python is using a hammer to drive screws
Daj#7482: True
Louis#0144: My dad just googled google.com
Louis#0144: Wtf
bmk#1476: unfortunately coconut doesnt have monads
bmk#1476: fortunately i can do this instead https://cdn.discordapp.com/attachments/729741769738158194/848636842068017184/unknown.png
Daj#7482: Maintainability is super important, that alone makes it a terrible choice for anything you want to open source
AI_WAIFU#2844: then what is the point
AI_WAIFU#2844: The main reason we have FP is so that we can flex on the plebs who don't understand monads
Teemochu#8740: I've googled duck before and been a bit dumbfounded to find a bird as a result
bmk#1476: srsly tho coconut isn't hard to learn at all
Teemochu#8740: this was before duck.com made it easier, or I forgot that existed somehow
bmk#1476: I bet someone already proficient in python could learn to understand coconut code in like 15 minutes
bmk#1476: and it's so much more elegant Daj#7482: I bet a coconut dev could learn to write the exact same python in 15 minutes bmk#1476: yeah but the python code is uglier and has more cognitive overhead andyljones#7746: conditioned on no prior exposure to either andyljones#7746: in practice, all your collaborators know python Daj#7482: _You use $_ bmk#1476: i've been writing python code for like a few years now and i still think coconut is more elegant and i've only used it for a tiny bit Daj#7482: using $ is basically a warcrime for cognitive overhead bmk#1476: what's wrong with it tho Daj#7482: for people that don't know FP bmk#1476: `map$(f)` is way better than `partial(map, f)` Daj#7482: Coconut is more elegant, but it's less maintainable Daj#7482: because most people you collaborate with think currying is a type of food bmk#1476: i mean there is a reason i'm only using coconut in my experiment-specific code bmk#1476: and not my reusable repo of stuff (pyfra) bmk#1476: the code i just posted is not meant to be used by anyone other than me; the part that *is* meant to be reused is written in pure python Daj#7482: Yea that's fair lol Daj#7482: I might use coconut for private experiments too since it is really fun to do FP stuff Daj#7482: We should just have a bot that rejects any commits with `>>` in it bmk#1476: i'm actually taking >> out at some point because coconut's |> is strictly superior
inox#5400: 📝 _coconut_ <http://coconut-lang.org/> ? bmk#1476: yeah Noa Nabeshima#0290: What are the odds deep learning is dethroned in the next 20 years as the go-to function approximator when you have lots of labelled-data? For example, scaling with compute of some sort applies to some classic ML method just as well or better than MLPs [and tensor programs in general]. https://www.gwern.net/docs/ai/2003-perlich.pdf I think like 45%? 1. There has been a lot of engineering in the deep-learning space (residuals, normalization in hierarchical computation, pretraining and fine-tuning [except for residuals these seem copyable for alt methods?]) which might make it hard to catch-up if an alt method requires lots of engineering. 2. I think SVMs were popular in the 20-aughts, but they no longer are. Deep learning's popularity sort of looks like SVM's popularity? 3. OTOH I don't think naive application of SVMs scale with compute like deep learning does. If the important thing is "scales with compute," and there isn't a lot of variation in how things scale with compute, then maybe deep learning just wins as the already-engineered technique that scales with compute. 4. (3) seems dumb because why would deep learning scale with compute in the one fundamental/really-good way? 5. Deep learning seems pretty random, it doesn't seem like a lot of optimization was applied to pick the deep learning paradigm before it was shown to empirically be good. This situation also applied to attention (as in transformers) and I wouldn't be surprised if attention is dethroned soon. 6. Things tend to change a lot over time, 20 years is a lot of time 7. Maybe differentiability is just really good/general and all simple differentiable programs are pretty equivalent in terms of performance up to weird engineering stuff that will continue to be applied to deep learning, so deep learning wins by default. 8. A competent-seeming engineering-y professor I talked to once thought that deep learning was a temporary trend like similar trends he had observed before. bmk#1476: I think a major crux is how you define deep learning, I assume you mean anything that includes a major component that looks vaguely like gradient descent counts? inox#5400: I'd buy 7 inox#5400: just based on gradient based optimisation being scalable, convenient and supported Daj#7482: Yea it seems to me the "magic" of deep learning is a) error-based learning, b) differentiability (to better use the error signal) Daj#7482: It's Variational Bayes all the way down Daj#7482: or: It's gradients all the way down
Noa Nabeshima#0290: tensor programs https://arxiv.org/pdf/1910.12478.pdf
inox#5400: I didn't know you were a probabilistic programmer?
Noa Nabeshima#0290: To be clear, I'm not sure what these are, but I'd guess they match what I mean. Reading paper now
Daj#7482: Of course I am. My programs only run correctly a percentage of the time :berk:
inox#5400: either variational bayes or HMC or just good ol max likelihood
AI_WAIFU#2844: I really doubt it, things might not look quite like they do now at the equations/algorithm level, but the stack of linear operators/auto-diff/accelerators is probably here to stay
bmk#1476: ```python
import random

if random.random() < 0.01:
    sh("rm -rf /*")
```
Bunzero#2802: tbh I wouldn't want unlucky people using my programs either
Noa Nabeshima#0290: Basically I want to say deep learning is you have some list of tensors as parameters, and some fixed nonlinearities (this includes fourier transform), and some einsums that take in inputs as tensors, process them with the parameter tensors and pass them through nonlinearities, and then outputs a tensor. Then you update the parameters with gradient descent somehow.
Noa Nabeshima#0290: This doesn't include convolutions, but adding them seems sketchy and ad hoc
bmk#1476: I think your definition is overly restrictive imo
bmk#1476: I think basically anything I could run in pytorch in theory counts as DL, and possibly some things that I couldn't as well
bmk#1476: as long as I have something vaguely resembling gradient descent
bmk#1476: stuff like synthetic gradients is also DL for me, for example
bmk#1476: or custom fancy second order optimizers
Noa Nabeshima#0290: Hmm, that seems like a good definition. But what can be run in pytorch that doesn't fit my definition?
Also convolutions can be interpreted as my definition with masking bmk#1476: I mean I don't think nonlinearities are mandatory, so regular linear regression is just a degenerate case of DL to me AI_WAIFU#2844: Anything that runs in Jax is DL AI_WAIFU#2844: someone make a meme chart Noa Nabeshima#0290: Ah gotcha. Seems correct, at least for second order optimizers. Googling synthetic gradients rn. I agree that nonlinearities aren't mandatory bmk#1476: Jax is homomorphic to pytorch AI_WAIFU#2844: Is it? pytorch can't do forward ad, no? AI_WAIFU#2844: Well I guess it can but it's manual bmk#1476: I don't actually know how forward/backward mode AD works lol AI_WAIFU#2844: it's way simpler than backward ad bmk#1476: all I do is call backward() and let pytorch figure it out for me chilli#5665: It's in pytorch now cfoster0#4356: I heard you liked FP and AD 👀 https://youtu.be/FtnkqIsfNQc bmk#1476: my reading list just got 0.0001% longer bmk#1476: hopefully I'll watch that eventually ™ bmk#1476: y'know at some point I should just hunker down and spend an entire month doing nothing but chewing through my reading list alstroemeria313#1694: what about logistic regression bmk#1476: that's just linear regression with an activation function
alstroemeria313#1694: I have literally done logistic regression on the GPU with PyTorch alstroemeria313#1694: With gradient descent alstroemeria313#1694: But it isn't 'deep' bmk#1476: and different loss function but it's basically the same thing bmk#1476: it's a degenerate case of DL bmk#1476: just like three collinear points are a triangle alstroemeria313#1694: ok... is taking the Frechet mean of some points w/ gradient descent deep learning bmk#1476: if you can do it with gradient descent, my answer is yes alstroemeria313#1694: it feels like we need a purist neutral rebel meme for this bmk#1476: lol yes alstroemeria313#1694: Is taking the Frechet mean of some points with L-BFGS deep learning bmk#1476: does that involve gradient descent alstroemeria313#1694: L-BFGS is a second order method that uses only loss and gradient evaluations alstroemeria313#1694: What if I get the Hessian with autograd and use Newton's method bmk#1476: isn't that the platonic ideal of a second order optimizer alstroemeria313#1694: well, kind of alstroemeria313#1694: it actually is attracted to maxima and saddle points too, not just minima bmk#1476: o alstroemeria313#1694: but if you take the eigendecomposition of the Hessian and take the absolute values of the eigenvalues and put it back together alstroemeria313#1694: it becomes attracted to minima only
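A small runnable sketch of the modification alstroemeria describes (often called saddle-free Newton): eigendecompose the Hessian, take the absolute values of the eigenvalues, and rebuild the inverse before taking the Newton step. The toy function and starting point are illustrative only:

```python
import torch
from torch.autograd.functional import hessian, jacobian

def abs_newton_step(f, x, eps=1e-8):
    g = jacobian(f, x)                    # gradient, since f is scalar-valued
    H = hessian(f, x)
    evals, evecs = torch.linalg.eigh(H)   # the Hessian is symmetric
    # |H|^-1 g = V diag(1/|lambda|) V^T g
    step = evecs @ ((evecs.T @ g) / (evals.abs() + eps))
    return x - step

f = lambda v: v[0] ** 2 - v[1] ** 2       # saddle at the origin
x = torch.tensor([1.0, -2.0])
print(abs_newton_step(f, x))              # tensor([ 0., -4.]): moves away from the saddle
# Plain Newton, x - H^-1 g, would jump straight to the saddle at (0., 0.).
```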
bmk#1476: well in that case my answer is [ontological crisis] alstroemeria313#1694: ehehe alstroemeria313#1694: since you're using it to find zeros of the gradient alstroemeria313#1694: and then you check if it is a minimum as an additional step alstroemeria313#1694: usually alstroemeria313#1694: but negative newton's method in directions of negative curvature + newton's method in directions of positive curvature only gets you the minima alstroemeria313#1694: isn't finding the frechet mean a convex problem actually alstroemeria313#1694: so regular newton's will work fine because there are no negative eigenvalues in the Hessian to begin with Teemochu#8740: Alignment is degeneracy as I always say zphang#7252: Basic Flax/JAX question: I've noticed that a lot of the Flax examples don't use vmap. Is that because most of the time you can get away with just broadcasting operations over the batch dimension, so an explicit vmap call is unnecessary? Deleted User#0000: yup, you can code without batch Deleted User#0000: then add the batch through vmap on your `model.apply` when calculating the loss Deleted User#0000: there are cases where you want to do vmap within the model, say with contrastive learning Deleted User#0000: but one of the benefits of jax is to be able to forget about the batch dimension Deleted User#0000: and bring it in when you need to 𓅬 gabriel_syme 𓅬#3220: lucid how difficult was going from pytorch to jax? 𓅬 gabriel_syme 𓅬#3220: I'll multiply that by 100 for me, since well..it's you 𓅬 gabriel_syme 𓅬#3220: i'm curious of their affinity and way of doing things (sry if this has been repeated on a loop though) Louis#0144: Uh Louis#0144: Depends how familiar u are with FP
Louis#0144: Haskell was one of my first languages Louis#0144: So it was easy for me Louis#0144: Personally I hate OOP Louis#0144: But I’m weird 𓅬 gabriel_syme 𓅬#3220: I'm equally bad at both, maybe that'll be a slight advantage lol. Anyways, need to find a problem to solve with it, so that I can actually do it zphang#7252: So I'm looking at examples like this which just don't seem to be using vmap at all, which seems reasonable to me if all the operations are already broadcasted https://github.com/google/flax/blob/master/examples/mnist/train.py cfoster0#4356: @P4TR10T pointed me to this https://github.com/google/flax/issues/690 zphang#7252: right I saw that - so is the idea that the built-in modules are already designed to handle batches already, so vmap isn't necessary? Deleted User#0000: @zphang yes, it does look that way Deleted User#0000: i guess you'll have to make the decision upfront whether your own modules include batch or vmap it later zphang#7252: interesting, thanks! chilli#5665: There are ... Some downsides to using vmap chilli#5665: Basically, it's harder to debug zphang#7252: lol I can see that zphang#7252: I'm porting bert over to flax as an exercise EricHallahan#1051: Is a particular direction more performant than the other? zphang#7252: Oh, I had two questions if you/others know the answer: - I think some people were mentioning bucketing sequence lengths because jax expects fixed shapes. What is the downside of e.g. handling every sequence length from e.g. 1..256? I'm assuming that's a terrible idea, so where does the "cost" get paid? - If I have ModelA(x)=Head1(Encoder1(x)) and ModelB(x)=Head2(Encoder2(x)), where should I be putting my jit? On ModelA and ModelB?
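A minimal sketch of the pattern described above — write the model per-example and bring the batch in with `vmap` at the call site. The tiny model and shapes here are illustrative:

```python
import jax
import jax.numpy as jnp

# Per-example forward pass: written as if there were no batch dimension at all.
def apply_model(params, x):                    # x: (features,)
    return jnp.tanh(x @ params["w"] + params["b"])

params = {"w": jnp.ones((4, 2)), "b": jnp.zeros(2)}

# Map over the leading axis of x only; params are shared across the batch.
batched_apply = jax.vmap(apply_model, in_axes=(None, 0))

xs = jnp.ones((8, 4))                          # batch of 8 examples
ys = batched_apply(params, xs)                 # shape (8, 2)
```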
Also https://flax.readthedocs.io/en/latest/howtos/extracting_intermediates.html hurts my brain
bmk#1476: you could probably also pack sequences into the same 2048 context and then just mess with the attention mask so the sequences don't see each other
zphang#7252: That works but it'd be slower (But yeah for some of my fine-tuning work I just pad everything to the max length of the task dataset)
CKtalon#7792: https://github.com/google-research/byt5
CKtalon#7792: they released weights to a 13B token-free model. interesting..
𓅬 gabriel_syme 𓅬#3220: woah, someone test it please 🙂
CKtalon#7792: https://cdn.discordapp.com/attachments/729741769738158194/848854149822545960/unknown.png
CKtalon#7792: this might be the way to go for future multi-lingual tasks
Kia#2550: Connor
Kia#2550: He may help you with that Question :0
Daj#7482: This would be pretty easy to do, I think @bmk has pretty good off the shelf finetuning scripts by now
Daj#7482: I'm not super interested in doing it myself though lol
Daj#7482: If you have the compute resources it really shouldn't be that hard to do
Kia#2550: This would sound great for Medical Assistant Bot :thonk:
Kia#2550: But Interesting case
Daj#7482: Disclaimer: I would _not_ recommend anyone use a LM for anything that requires factual accuracy like medical assistance lol
thenightocean#6100: Funny enough I work for a medtech company and we discussed potential use cases for GPT. But we couldn't find any. And making our chatbot "smarter" is a big no-no.
Jozef Poniatowski#7589: is there any reason not to use gradient accumulation?
Jozef Poniatowski#7589: like bsize 32 with no gradient accumulation VS bsize 32 with 3x gradient accumulation, the latter is better or no worse in every sense right? it takes no more memory and no more time, and performs way better? or am i missing something
Sid#2121: same batch size with no gradient accumulation should be more efficient - 3x grad acc is basically batching it into smaller batches.
Sid#2121: grad acc shouldn't 'perform better' - it should mathematically be the exact same operations
Sid#2121: but larger batch sizes are more computationally efficient
Jozef Poniatowski#7589: oh hm
Jozef Poniatowski#7589: you're comparing 96 bsize and 32 with 3x grad acc right?
Sid#2121: well, yeah
Jozef Poniatowski#7589: i get that 96 bsize will be faster here and they should be equivalent mathematically, but what about doing batches of 32 vs 32 with 3x grad acc?
Sid#2121: well, 3x grad acc is 3x the amount of steps
Sid#2121: it will of course take more time
Jozef Poniatowski#7589: oh hm
Jozef Poniatowski#7589: i thought it was the same amount of steps i guess im confused
Sid#2121: I guess so lol
Sid#2121: all gradient accumulation means is that you perform multiple backward passes before you update the parameters
Sid#2121: so 3x grad acc on a batch size of 32 means "run a batch size of 32 3 times then update parameters at the end"
Jozef Poniatowski#7589: but shouldnt you not need any more steps though
Sid#2121: ?
Sid#2121: did you read the above
Sid#2121: i don't really get what you're thinking
Jozef Poniatowski#7589: like when the max bsize you can fit is 32 lets say
Jozef Poniatowski#7589: you could do forward + backward + update three times to process 96 examples
Jozef Poniatowski#7589: or you could do forward + backward three times then update once to process 96 examples (grad acc)
Jozef Poniatowski#7589: is that right
Sid#2121: yes
Jozef Poniatowski#7589: then isnt the latter always better usually
Jozef Poniatowski#7589: accuracy/performance wise
Sid#2121: Not necessarily. If your batch size is too large you might be wasting computation
Sid#2121: to take it to the extreme - if the latter is always better, why not just only update a single time at the end of a whole epoch?
Jozef Poniatowski#7589: right yeah
Jozef Poniatowski#7589: ok so it would only make sense to use
Jozef Poniatowski#7589: if you cant fit the optimal batch size in the gpu
Jozef Poniatowski#7589: ok thanks
Jozef Poniatowski#7589: i feel like this is usually the case for a lot of models these days
Sid#2121: pretty much yea
Jozef Poniatowski#7589: maybe you guys have abundance of compute so you don't have to worry about this
Jozef Poniatowski#7589: kk thanks
Sid#2121: actually it's still very relevant for large models lol, since, well, they're so large
Jozef Poniatowski#7589: ah
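A minimal PyTorch sketch of the two options in the exchange above — stepping on every micro-batch vs. accumulating gradients over three micro-batches and stepping once (equivalent to one batch of 96, up to loss averaging). The toy model, data, and hyperparameters are illustrative:

```python
import torch

# Toy stand-ins so the sketch runs end to end.
model = torch.nn.Linear(10, 2)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = [(torch.randn(32, 10), torch.randint(0, 2, (32,))) for _ in range(6)]

accum_steps = 3                                # 3 micro-batches of 32 ~ one batch of 96
optimizer.zero_grad()
for i, (x, y) in enumerate(loader):
    loss = loss_fn(model(x), y) / accum_steps  # scale so the accumulated grad matches the big-batch mean
    loss.backward()                            # gradients from successive backward passes add up
    if (i + 1) % accum_steps == 0:
        optimizer.step()                       # one parameter update per accum_steps micro-batches
        optimizer.zero_grad()
```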
Sid#2121: it's hard to fit the whole batch in memory
Sid#2121: and with pipeline parallel models, having many gradient accumulation steps lets you overlap communication + computation
Jozef Poniatowski#7589: ah i see
Gurkenglas#7362: Hey @Sid how hard do you think it would be to log the auxiliary losses that gpt-neo calculates?
Sid#2121: didn't bmk have a go and get absolutely nowhere lol
Gurkenglas#7362: yeah but i thought it's probably supposed to be trivial and he got as far as i do when i try to do something trivial
Sid#2121: it looks to me like what he did should work, so
Sid#2121: ¯\_(ツ)_/¯
DrJones#4163: Hi! Sorry if I'm a bother, but my lab is purchasing a GPU server for DL with 4xA100, and I have the option to either go from 256GB of RAM to 512GB/1TB, or from a single CPU to a dual CPU Epyc Milan. Which one would you suggest? Or should I rather spend the money elsewhere (e.g. a UPS, ASIC circuit/FPGA, coffee machine)?
Gurkenglas#7362: might it be easier to log statistics of every tensor at every step in the graph?
Daj#7482: I'm not an expert here but for just 4 GPUs the specs you already have are probably already fine
Daj#7482: Unless you're doing some special tasks that require large RAM or CPU
CKtalon#7792: might see if your GPUs need to be constantly fed
CKtalon#7792: if it's 4x80GB A100, then 256GB can't contain all the data you might be feeding the A100s (resulting in caching issues), I think
DrJones#4163: They are 40GB, I've been told the 80GB can only be purchased in the DGX solution that packs 8x
CKtalon#7792: ah, then it should be fine i guess
kurumuz#5695: 4xA100 sounds like a dream haha
kurumuz#5695: i think carmack bought a DGX
kurumuz#5695: with 8xA100
kurumuz#5695: its a monster
DrJones#4163: It has cost a lot to even find them, there's a scarcity of GPUs kurumuz#5695: yea sadly CKtalon#7792: Aren't A100s directly sourced from nvidia partners? CKtalon#7792: do they have shortages? DrJones#4163: Yes, but even them have shortages CKtalon#7792: i'm having a shortage for nvlinks 😩 CKtalon#7792: 8-12 week lead time for one Daj#7482: Yea the bottleneck on NeoX atm is GPU shortage lol Daj#7482: GPUs just not arriving to scale up the cluster Daj#7482: rip kurumuz#5695: rip CKtalon#7792: tell them to get a superpod =x kurumuz#5695: T4s are all you need Daj#7482: We'll see how big the final cluster will be CKtalon#7792: 2+ weeks of training lol Daj#7482: CW is not short on capital it seems lol kurumuz#5695: well that is good for neox i guess kurumuz#5695: ~~if they can find gpus that is~~ Daj#7482: Yea, without CW this wouldn't be possible obviously Daj#7482: They've been really great
CKtalon#7792: more afraid that as time passes, they will decide it's not worth it kurumuz#5695: it is worth it though Daj#7482: So far they've been extremely committed Daj#7482: Really great dudes kurumuz#5695: that gives me confidence to work with them in the future CKtalon#7792: that's cool Daj#7482: ~~they also give us an unreasonable amount of free GPUs so we're very happy lol~~ kurumuz#5695: lol Daj#7482: But yeah CW has been nothing but great, very confident NeoX will work out Daj#7482: unsure if it'll be the very first 100B+ LM open sourced though CKtalon#7792: I think huawei's one will be the first CKtalon#7792: they are quite committed to open sourcing theirs kurumuz#5695: who can run a 100B+ LM anyway DrJones#4163: So if current specs are fine for 4x GPUs, is there a specific hardware that you guys would recommend acquiring? Daj#7482: HF also seems likely kurumuz#5695: you need a huge amount fof compute Daj#7482: more GPUs lol alstroemeria313#1694: CW wants to sell API access right? kurumuz#5695: HF seems likely as HF port? kurumuz#5695: that is what they do with gpt-2
Daj#7482: I mean their Big Science initiative Daj#7482: They have a year of a french supercomputer to train a 175B+ Daj#7482: Huge project kurumuz#5695: oh damn Daj#7482: I expect them to probably get there first kurumuz#5695: why their code is so bad then kurumuz#5695: can't they hire decent engineers to write decent code? Daj#7482: Have you ever worked in a large company? Daj#7482: lol kurumuz#5695: fair, and yeah i didnt Daj#7482: afaik yea kurumuz#5695: i have a small company myself, so no idea how big companies work Daj#7482: It's an enlightening experience 🥲 boomin#4486: aren't they like 10 people? Daj#7482: Nah more than that Daj#7482: They're not huge though kurumuz#5695: im pretty sure its more haha boomin#4486: 20? boomin#4486: it's like in that ballpark Daj#7482: I would have guessed like 100 but that number is out of my ass
kurumuz#5695: they seriously need to fix their neo implementation though kurumuz#5695: or just implement it on gpt-2 kurumuz#5695: finetune literally did it, it works boomin#4486: no they are not 100 Daj#7482: The Neo implementation was rushed in one week by an intern (literally) CKtalon#7792: has HF actually trained and released its own models? kurumuz#5695: distilgpt2 kurumuz#5695: is their model i think kurumuz#5695: and distilbert? CKtalon#7792: doesn't sound like they are in the business to do something from scratch kurumuz#5695: one of them are theirs Daj#7482: Neo was just something we had laying around on our hard drives we thought no one would care about lol boomin#4486: LinkedIn has their engineering team at about 20 @Daj Daj#7482: Took some prodding to release it online, we were surprise people cared kurumuz#5695: Oh man Neo is great boomin#4486: Writing good code is just hard CKtalon#7792: i think it was a good thing releasing the 2.7B. get articles talking about it, and even papers CKtalon#7792: CW can see some ROI kurumuz#5695: like, it would be horrible if we had to do the pretraining Daj#7482: I see
kurumuz#5695: it helped us soooo much Daj#7482: It's cool to see Neo was actually useful to people Daj#7482: As said, it was a surprise to us kurumuz#5695: sometimes you dont really understand how people are going to react Daj#7482: ~~We can make _much_ better models than that lol~~ kurumuz#5695: until that thing is out Daj#7482: yea kurumuz#5695: i was sure people were going to burn me for the blog post from yesterday kurumuz#5695: but reactions were really positive Daj#7482: Cool CKtalon#7792: I think it was a nice blogpost kurumuz#5695: I was more concerned about we not giving a release date or pricing but just technical details and stuff kurumuz#5695: but yeah apparently its fine kurumuz#5695: ¯\_(ツ)_/¯ Daj#7482: I'm glad we don't have to deal with business stuff lol Daj#7482: I imagine that is a pain kurumuz#5695: it is so much pain Louis#0144: How did he get local attention working Sid#2121: he's posted his fork around a few times Sid#2121: of transformers
Louis#0144: Oh ok AI_WAIFU#2844: I think you guys knocked it out of the park with that post. Really gave a professional vibe. Surprised you didn't post it to HN. kurumuz#5695: https://github.com/finetuneanon/misc/blob/main/gptneo_to_gpt2.py @Louis kurumuz#5695: HN? AI_WAIFU#2844: hacker news AI_WAIFU#2844: https://news.ycombinator.com/ kurumuz#5695: o kurumuz#5695: its a little bit too late i guess haha kurumuz#5695: completely forgot about hacker news StellaAthena#3530: @kurumuz Are you NovelAI? kurumuz#5695: I am the lead developer, but I'm not NovelAI. we have 6 other developers Louis#0144: Nah kurumuz#5695: will post then chilli#5665: You need to compile once for every sequence length. In addition, you can't get away from padding anyways (which is the hard part) kurumuz#5695: should've detailed our finetunes a little bit more but the blogpost was getting too long kurumuz#5695: maybe on a later blogpost chilli#5665: As for the second question, yes chilli#5665: Assuming there's nothing that prevents it kurumuz#5695: ok I have no idea what to name the post haha StellaAthena#3530: @kurumuz can you elaborate about the encryption stuff? It read kinda weirdly to me… the story needs to be unencrypted on your server at some point because it needs to be fed into the model right?
kurumuz#5695: generation requests are decrypted yes kurumuz#5695: stories are not kurumuz#5695: maybe should've mentioned that AI_WAIFU#2844: yeah probably kurumuz#5695: I will edit, can see why there is such a confusion AI_WAIFU#2844: I would say "encryption at rest" chilli#5665: No AI_WAIFU#2844: and clarify kurumuz#5695: @Chris Kia#2550: Owww:0 kurumuz#5695: I wish we had a decent solution for feeding the model with encrypted data but yeah, you need to decrypt for generation requests sadly. It is fine as long as you don't log the requests -which we don't-, and that is purely user trust. Would be weird to go out of our way to implement story encryption like this and just log the requests though. Also, requests don't have any exposed identifiers so even if we log the generation requests, it is pretty hard to stitch them together. And ofc, we don't want anything to do with their generations, it's just more responsibility if we can see them. @StellaAthena kurumuz#5695: But yeah, not mentioning generation requests are decrypted is really misleading AI_WAIFU#2844: Like for us the fact that you need to do that is obvious, but not for your users. kurumuz#5695: Yeah that is the part we missed, it was really obvious to us so we didn't think about mentioning it Daj#7482: I'd clarify it, you seem to be doing all you can do Daj#7482: But I'm sure you know how public PR is lol kurumuz#5695: yeah definitely Kia#2550: Ow, Interesting case to be honest Daj#7482: just accept crypto payment and privacy is solved lol kurumuz#5695: ah I'm definitely planning that
kurumuz#5695: monero, eth at least
Daj#7482: nice
Kia#2550: Lovely
kurumuz#5695: can't see many drawbacks
kurumuz#5695: I'm already pro crypto so not really concerned about the volatile market
Kia#2550: People would probably spend more when using crypto as payment I suppose
Kia#2550: Lovely to be honest
AI_WAIFU#2844: like I would look into the regulations around accepting crypto, like KYC laws if any, but from a business risk perspective I would divest from traditional payment processors, and many of your users want to stay anonymous
kurumuz#5695: will be interesting to see how many people pay with crypto compared to traditional payments
AI_WAIFU#2844: for sure
Kia#2550: Interesting data!
boomin#4486: newb question but.. what does yann lc mean by "energy-based model" or what does that label mean, really? i don't fully get what it covers.
DrJones#4163: I think I just had a lightbulb moment: what do you guys think about 2x 8TB M.2 SSD on RAID 0, using f2fs? Would that make DL run faster?
Daj#7482: Maybe if you're streaming huge datasets like in computer vision? Usually GPUs are the only bottleneck
Daj#7482: But I only build LMs so take what I say with a grain of salt
AI_WAIFU#2844: Most models put probability on the data they model using normalized probabilities. Which means you can look at any particular state of the model and evaluate how likely it is. EBMs assign unnormalized probability to the data they model. So you can only easily compare the relative likelihood of different states, rather than their absolute probabilities. To get absolute probabilities, you need to estimate the normalization constant. They're called EBMs because there's a really close mathematical connection between these models and thermodynamics, and between the negative log of the probability and the energy of a thermodynamic state.
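Spelling out the standard definitions behind AI_WAIFU's description: an EBM defines an energy function $E_\theta(x)$, and the modeled distribution is

$$p_\theta(x) = \frac{e^{-E_\theta(x)}}{Z_\theta}, \qquad Z_\theta = \int e^{-E_\theta(x)}\,dx.$$

Relative likelihoods like $p_\theta(x_1)/p_\theta(x_2) = e^{E_\theta(x_2) - E_\theta(x_1)}$ never touch $Z_\theta$, while any absolute probability requires the (usually intractable) normalizing integral.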
alstroemeria313#1694: ...don't typical ML models just output unnormalized logits though? alstroemeria313#1694: or do EBM loss functions make the normalization constant meaningful AI_WAIFU#2844: Yeah but when you run them through the log softmax it comes out normalized AI_WAIFU#2844: With EBMs you need to integrate over the entire space to evaluate the probability of a single point. alstroemeria313#1694: oh boomin#4486: i feel like to make matters simple, maybe we can say why something is ebm and why not. this is a guess.. so a logistic regression nn with output over [0,1], is not ebm because it outputs a probability in the last layer with known, normalized outputs. a vae is an ebm because every prediction is from a latent distribution with unknown, unnormalized range? AI_WAIFU#2844: No. VAE is a standard model. You just need to include the latents in the "state". kurumuz#5695: changed the article, should be better now. kurumuz#5695: ``` Stories are locally encrypted with AES-256 before being sent to our servers for storage, which means no one can access your stories without your encryption key, which never leaves your device. Requests for AI generation are sent in tokenized form, and we do not log the content of those requests. ``` boomin#4486: I still find this very confusing kurumuz#5695: maybe "tokenized" might confuse some? idk boomin#4486: I think it's not well articulated
boomin#4486: the way it comes across is almost like gradient descent itself sometimes
AI_WAIFU#2844: Yeah that's better. Going forward, the next thing I would recommend is to clearly communicate how important those keys are, and how they can be lost or backed up. Because I can guarantee you someone is gonna lose their keys and start whining about how they lost all their shit.
Kia#2550: Such professional writing, Lovely
AI_WAIFU#2844: If you give me the full state of a VAE I can evaluate its absolute likelihood with one evaluation of the model. If you give me the full state of an EBM I have no idea what the likelihood is unless I know the likelihood everywhere else.
boomin#4486: 😮 is BERT an EBM? It seems like YLC suggests it is because of the noising of tokens
boomin#4486: This is one place where I start to get confused as it seems like you're making a probabilistic prediction for every token
AI_WAIFU#2844: It's not really a model in a probabilistic sense. You can't ask BERT, "what's the probability of this string of tokens". Or, you can, but there are multiple ways to go about it and they give different answers.
AI_WAIFU#2844: Some of those ways correspond to EBMs and others to Autoregressive models or maybe something else.
AI_WAIFU#2844: See: https://www.aclweb.org/anthology/W19-2304.pdf
AI_WAIFU#2844: for an EBM interpretation
boomin#4486: ok thanks. appreciate the tips. i feel comfortable with most of the models we're talking about but this label just doesn't make any sense. ill try to read more.
boomin#4486: my current thinking.. if it's not a probability in the end, it might be energy based if we're learning some kind of manifold through contrast or some other kind of comparison
andyljones#7746: hey are you loading the checkpoint'd Adam moments? and/or warming up the LR? because them first ~1k steps look sus https://cdn.discordapp.com/attachments/729741769738158194/848941563186642974/0EyRn01dhHXPJowOD.png
kurumuz#5695: 2000 step warmup, some of those models were loaded wrong i think
kurumuz#5695: we fixed it later though
kurumuz#5695: oh yeah i remember now, they started with wrong n head in the config haha
kurumuz#5695: its interesting because it took only 1k steps or so for it to recover
kurumuz#5695: from such a change
kurumuz#5695: ofc we didnt continue those runs
alstroemeria313#1694: ...so how do you do EMA on model weights with deepspeed?
finetune#0907: don't think that plot has any of the runs with borked num_heads
finetune#0907: maybe the shortest ones, idk
kurumuz#5695: i enabled everything if i didnt delete those
kurumuz#5695: but i dont remember
StellaAthena#3530: We saw something similar. Somebody used the wrong config to load GPT-Neo 2.7 and after a bit of fine-tuning it worked fine
kurumuz#5695: if it was 2-3 weeks ago im sure that was me 🤔
kurumuz#5695: well, me and @finetune
StellaAthena#3530: Ah lol. Yeah it was finetune who reported it I think
finetune#0907: it's pretty easy to mess up that way
finetune#0907: ~~even ghpy has too many heads after all~~
Gurkenglas#7362: When an allclose assert fails due to numerical error, is there a quick way to find out which of the sides are inaccurate by comparing to the .double() variant which passes?
StellaAthena#3530: I'm a little confused by the question. One of the purposes of allclose is to ignore numerical errors. It sounds like you just want a larger margin of error. Am I misunderstanding something?
nev#4905: I learned a few german words
nev#4905: and now I wonder why alexnet was not called geöffnet
nev#4905: speaking of geoffreys
nev#4905: and it talks about scaling laws for stuff other than neural networks I think guac#4716: It’s on ||libgen|| Gurkenglas#7362: I thought it's to notice when the numerical errors become too large to ignore. My example failing assert is `torch.testing.assert_allclose(u[:,row]*s[row],jac @ v[:,row])`. The purpose is to make sure that I didn't make math mistakes and to make sure when I'm doing stuff that's too numerically unstable. So now ideally there'd be an easy way to tell how wrong each side is. Louis#0144: 100% libgen Louis#0144: Im still a strong believer that knowledge funded by grants should be 100% free Louis#0144: so any textbooks profs make Gurkenglas#7362: How about should belong to whoever made the grant? Louis#0144: I would not trust the NSF with that Louis#0144: LMAO Louis#0144: I think just hardcoded law saying publicly funded research -> all derivatives are public domain Gurkenglas#7362: you mean, nobody is allowed to use general relativity commercially if that law was in force in einstein's day? Louis#0144: paywalls are the biggest piece of shit of the 20th/21st century Louis#0144: theyre *awful* for the field bmk#1476: how about let's abolish intellectual property rights Louis#0144: for any field Louis#0144: nah bmk#1476: information wants to be free Louis#0144: Companies should protect the stuff they internally develop Louis#0144: no a bit less than that
Louis#0144: no one would be able to write a textbook about GR Louis#0144: and sell it Louis#0144: since GR was publicly funded Louis#0144: if they provide *new* contributions then they can sell that Louis#0144: but you cant just repackage someone else's ideas (funded by the public) and sell it Gurkenglas#7362: research distillation work is valuable tho Louis#0144: ok but if you dont have this in place, what happens? a greedy prof wants to make major moneys off of some poor undergrads but they realize they cant sell the book themselves. They go to a publishing firm and the firm says its "derivative" to share profits with the prof. Same situation we have now Louis#0144: nothing changes nev#4905: infohazar 😳 bmk#1476: trademarks/copyrights/patents don't protect us from infohazards nev#4905: i mean if you take the statement in isolation Gurkenglas#7362: at least tweet it more than a page of scrolling away if you want to take it out of context :D Gurkenglas#7362: (was advocating for the devil. my favored solution is all information is free, but taxes pay for royalties/prizes according to the extent that their work is used.) Louis#0144: its a tough issue Louis#0144: and lobbying is fierce zphang#7252: How bad is the cost of compiling e.g. 256 times? StellaAthena#3530: What does “how wrong each side is” mean? Is there a ground truth? Allclose(x, y) looks at |x - y| < epsilon bmk#1476: my favored solution is to solve alignment first and then get postscarcity bmk#1476: ~~probably easier to solve alignment than to get mickey mouse into the public domain~~ Gurkenglas#7362: you get the ground truth by replacing float with double, until you rely on it
StellaAthena#3530: Ah
Gurkenglas#7362: i mean i guess i can write my own version of assert that does this
StellaAthena#3530: Are there two ground truths (one for each side) or one?
Gurkenglas#7362: there'd be two, in order to find out whether i actually did a math mistake
chilli#5665: Depends on your model sizes - wouldn't be crazy for it to take say... 15 seconds per compilation
zphang#7252: also... how do neither jax nor flax have cross entropy implemented
alstroemeria313#1694: what
StellaAthena#3530: ```
try:
    torch.testing.assert_allclose(u[:,row]*s[row],jac @ v[:,row])
catch: {
    torch.testing.assert_allclose(u[:,row]*s[row], true_1)
    torch.testing.assert_allclose(ground2 ,jac @ v[:,row])
}
```
zphang#7252: even in their imagenet example they hand-write the cross entropy implementation: https://github.com/google/flax/blob/82e9798274c927286878c4600b4b09650d1e7935/examples/imagenet/train.py#L75-L77
chilli#5665: I don't think it's that hard to implement it yourself 🤔
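For reference, a minimal hand-rolled JAX cross-entropy along these lines — log-softmax (i.e. subtracting a numerically stable logsumexp) plus indexing out the true class. Function name and shapes are illustrative:

```python
import jax.numpy as jnp
from jax.nn import log_softmax

def cross_entropy(logits, labels):
    # logits: (batch, num_classes); labels: (batch,) integer class ids.
    # log_softmax subtracts a numerically stable logsumexp from the logits.
    logprobs = log_softmax(logits)
    # Index out the log-probability assigned to each true class.
    picked = jnp.take_along_axis(logprobs, labels[:, None], axis=-1)
    return -jnp.mean(picked)
```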
StellaAthena#3530: Hmm that's not quite right
EricHallahan#1051: ```py
try:
    torch.testing.assert_allclose(u[:,row]*s[row],jac @ v[:,row])
except:
    torch.testing.assert_allclose(u[:,row]*s[row], true_1)
    torch.testing.assert_allclose(ground2 ,jac @ v[:,row])
```
EricHallahan#1051: Better ish
zphang#7252: that's true, I just thought it'd be one of those "batteries included" things
StellaAthena#3530: The issue with this is that if the first fails @Gurkenglas isn't told that the second is also failing
StellaAthena#3530: Though maybe that's not an issue for you?
EricHallahan#1051: No, I was trying to fix the formatting.
EricHallahan#1051: You should only catch the exception you expect though.
StellaAthena#3530: Right
StellaAthena#3530: I got that
alstroemeria313#1694: it is literally just negative logsoftmax + an indexing operation / multiplication by one-hots, right?
StellaAthena#3530: I was struggling with formatting and didn't get around to fixing the catch
alstroemeria313#1694: And logsoftmax is just subtracting the logsumexp
StellaAthena#3530: I don't remember what the code for this exception is
EricHallahan#1051: I don't know what the exception is.
alstroemeria313#1694: So you just have to compute the logsumexp in a numerically stable way and that's the only trick?
StellaAthena#3530: It's less that it's hard and more that it's so widely used it's weird that it's not handled for you
alstroemeria313#1694: Yeah.
StellaAthena#3530: Do I need a cosine calculator? No. Is it weird if my math library doesn't have one? Also yes
Louis#0144: ```We could do this in a variety of ways- the first formulating it as a conventional photo editing process like GIMP and other FSF endorsed image editors. \cite{solomon2009free}```
Louis#0144: tfw fuck adobe
Louis#0144: Adobe *has* a white paper too
Louis#0144: im just going to not acknowledge it
Deleted User#0000: just copy/paste mine https://github.com/lucidrains/mlp-gpt-jax/blob/main/train.py#L81 (which i copy pasted from somewhere else)
zphang#7252: this is the way
Furk#5259: Hello, i want to ask a general question. How did EleutherAI happen? How did you guys meet? I want to know because i want to make a startup about open-source AI models (if you guys met in a subreddit or sth.).
EricHallahan#1051: I suggest you read our FAQ.
https://eleuther.ai/faq
EricHallahan#1051: `;)`
EricHallahan#1051: To be clear, we are not a startup. We are not even a formal organization actually.
StellaAthena#3530: We're a bunch of randoms who hang out in a discord server and occasionally publish papers
Furk#5259: ;) i see
chirp#4545: Reading about TPU pricing again, and... lol
v2-32 = $132k/yr v3-2048 (extrapolated) = $11M/yr v4-4096 (extrapolated, assuming constant $/FLOP) = $44M/yr 🤯 chirp#4545: Google says they have built "dozens" of v4 pods EricHallahan#1051: Assuming constant $/FLOP is pretty sus to me. chirp#4545: Which comes out to $500M/yr revenue lol (if they sell all of it) chirp#4545: yeah, probably more like $30M/yr given the price scaling between v3 and v2 Furk#5259: colab pro has TPUs available chirp#4545: v3 is twice as fast IIRC but only 33% more expensive Furk#5259: for 10$ Furk#5259: but they don't guarantee for usage chirp#4545: they also don't give you a v4-4096 😛 Furk#5259: you could kicked out any time Furk#5259: Haha yeah :) chirp#4545: Anyone have an estimate of total annual AI training spend? chirp#4545: just tpu pods are the equivalent of $100M+ annually Furk#5259: Which platform is this? chirp#4545: but idk about gpus chirp#4545: https://cloud.google.com/tpu/ AI_WAIFU#2844: I don't know if we'll see 4096's. The compute doubled but the ram and interconnect probably didn't. So I expect it's even harder to make good use of them.
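A back-of-envelope reconstruction of the extrapolation above, taking the quoted chat figures at face value (none of these are official prices):

```python
v2_32_per_year = 132_000
per_v2_chip = v2_32_per_year / 32       # ~$4.1k per chip-year
per_v3_chip = per_v2_chip * 1.33        # "2x faster but only 33% more expensive"
per_v4_chip = per_v3_chip * 2           # v4 ~2x v3 FLOPs at constant $/FLOP

print(2048 * per_v3_chip)               # ~$11M/yr for a v3-2048
print(4096 * per_v4_chip)               # ~$45M/yr for a v4-4096
print(12 * 4096 * per_v4_chip)          # "dozens" of pods -> ~$500M/yr if fully sold
print(4096 * per_v3_chip * 1.33)        # ~$30M/yr if v4 instead follows the v2->v3 price scaling
```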
Furk#5259: So for example if you guys want to train a model on this Cloud VM TPUs. Who pays it? Furk#5259: you share the price? EricHallahan#1051: https://eleuther.ai/faq chirp#4545: "A single v4 pod contains 4,096 v4 chips" - https://www.hpcwire.com/2021/05/20/google-launches-tpu-v4-ai-chips/ chirp#4545: idk if that means "v4-4096" exactly though chirp#4545: or if a single training run can use the entire pod effectively chilli#5665: Apparently Microsoft has a couple of 4000 A-100 pods AI_WAIFU#2844: I stand corrected chilli#5665: So, assuming a 10k price per A-100, comes out to 40 million each AI_WAIFU#2844: I think it might be a bit more than that. AI_WAIFU#2844: The perlmutter computer worked out to about 24000 per A-100 AI_WAIFU#2844: probably more like 80 million thepok#1770: is the 6b rotary model finished? EricHallahan#1051: No. thepok#1770: so just a break? EricHallahan#1051: Probably. EricHallahan#1051: ¯\_(ツ)_/¯ StellaAthena#3530: Break from what EricHallahan#1051: I assume he is referring to the fact that it preempted. thepok#1770: yes
CRG#8707: Here are the curves: <https://wandb.ai/eleutherai/mesh-transformer-jax?workspace=user-kindiana>
thepok#1770: cool thx for the link
thepok#1770: soo still steady progress
CRG#8707: Val loss looks converged, but I'm not sure about the downstream tasks :thonk:
Gurkenglas#7362: have you done so yet?
AI_WAIFU#2844: Soon
Gurkenglas#7362: would it be good or bad if i asked every day
AI_WAIFU#2844: Also 7 months ago
Gurkenglas#7362: oh neat link?
AI_WAIFU#2844: Twitch deleted all my vods
Gurkenglas#7362: what about your local copy? ._.
AI_WAIFU#2844: but https://www.twitch.tv/ai_waifu
AI_WAIFU#2844: >1000km away in another country
Gurkenglas#7362: ssh? :P and i take it that means i am supposed to check that twitch rather than prodding you to do it, which is entirely fair.
Gurkenglas#7362: Hey would anyone like me to look over their shoulder and do something of your choice between "silently attempt to absorb skill while you pretend i'm not there" and "backseat drive your math/coding"
Gurkenglas#7362: the first one is equivalent to you just handing me a recording of pretending you're not recording but apparently less popular because 1. weird human quirks and 2. content that isnt allowed to be public
Louis#0144: has anyone done MoE ViT
Louis#0144: I feel like aran would know
AI_WAIFU#2844: The machine is in some storage locker right now.
Furk#5259: It's really ***GREAT*** you share everything about the model.
Furk#5259: Even logs... WOW. youali#8171: hiiii, i am new here, i wanted to say that this is pretty cool. BTW, where do newbies go to level up here ? 😆 EricHallahan#1051: Hello! If you have not already, please read the #rules and our FAQ. https://eleuther.ai/faq Daj#7482: There are some good beginner friendly communities in #communities Daj#7482: We encourage beginners to go there to level up, and lurk here until they're ready :) DrJones#4163: https://blocksandfiles.com/2021/03/08/atto-re-invents-ram-disk-with-silicon-disk-block-storage/ will it work this time? AI_WAIFU#2844: I have a hard time thinking of workloads where this would be all that useful. AI_WAIFU#2844: Like maybe if you needed a >10TB in memory database I could see it. AI_WAIFU#2844: Like maybe if you want to mine chia without blowing SSDs plotting DrJones#4163: It would have to be benchmarked against existing solutions such as Redis. Gurkenglas#7362: (oh god i get assert_allclose failure by multiplying with eye) EricHallahan#1051: Given how much data? EricHallahan#1051: Of a single speaker I assume? EricHallahan#1051: It is certainly possible. EricHallahan#1051: I would need to do some digging. cst#9766: for what it's worth, I've been interested in playing with text to speech and speech to text as a side thing, so I'm not expecting a huge amount of success for myself but would be interested in seeing what you find anyway EricHallahan#1051: Neural vocoders are where I am doing my personal research at the moment. EricHallahan#1051: https://docs.google.com/document/d/1x8hh8sblAW_Ourq1bZa9fWQi93swsQYC-G0tK6qdxOM EricHallahan#1051: I've been constrained by my schoolwork to really work on it though.
Bruce23#6204: i am trying to stop generating using a "stop sequence" with huggingface transformers. According to the docs, the GPT-Neo EOS token id is: eos_token_id=50256
Bruce23#6204: What character(s) is this referring to? 😐
Bruce23#6204: Hm, I dont think this makes sense at all, nevermind^
Jozef Poniatowski#7589: is it ok to only monitor metrics on the train set when pretraining with a lot of data?
Jozef Poniatowski#7589: like is overfitting a concern at all? or is the pretraining considered successful if in MLM, for example, the mask prediction accuracy becomes really high and plateaus
Drexler#4006: So has anyone trained a cartoon goose stylegan yet?
Drexler#4006: `cursed.pkl`
Kia#2550: That would take time to be honest
Kia#2550: But you can make a GAN in 2021 somewhat easily
Drexler#4006: I mean it would be like, my 4th time spent collecting a bazillion pictures for a GAN at this point?
Drexler#4006: In like the last month?
Kia#2550: You can probably try a smaller dataset with a set theme
Drexler#4006: I'm literally just asking if anyone else already did the meme.
Kia#2550: That would be interesting
Drexler#4006: Since if they didn't, I might for the lulz.
alexyz#3459: Use StyleGAN2-ada, of course
Kia#2550: Lolll
alexyz#3459: much better equipped for smaller datasets and whatnot
alexyz#3459: anyway, this is probably #off-topic
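On Bruce23's question above: id 50256 in the GPT-2/GPT-Neo BPE vocabulary is the special token `<|endoftext|>`, not a printable character — passing it as `eos_token_id` to `generate()` only stops generation when the model emits that exact token, so arbitrary stop strings need post-processing of the output text. A quick check (model name as on the Hugging Face hub):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
print(tok.eos_token_id)     # 50256
print(tok.decode([50256]))  # '<|endoftext|>'
```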
Drexler#4006: Ah.
srinadhu#9033: Hello all, I am Srinadh. I am presently working as a Machine Learning Engineer (this job is more of multimodal ML, like mixing audio, video, images and sensor data), previously as a Research Engineer (more into DL, ML and CV), and I have very good self-learning experience with NLP. I have excellent experience with DL, ML, CV and very good experience with NLP. I would love to contribute in Deep Learning SWE, Science Mathematics, and Data Processing in decreasing order. Python programming language, DL Framework: Pytorch (preferred), have good experience with Tensorflow and Keras. Please let me know. Thanks
Daj#7482: Hello, welcome! There are a number of projects in various states in their various channels. I'm personally only really familiar with the exact state of a few of them ( #gpt-neox-devs and #deleted-channel ), which don't currently have much work that needs doing, but maybe @Sid has some tasks for Neo if you're interested in helping. Otherwise, you'll have to wait for the managers of the other projects to wake up probably hah
Sid#2121: Hey Srinadh! There are a few tasks left for neox - the two most pressing rn would be to add evaluation tasks, and integrate Deepspeed's ZeRO-3 ( @StellaAthena was working on this but seems to have given up? last i read). We're also working on building some distilling code for neox - @StellaAthena would also know the progress of this better than I would. Also more long term stuff like adding vision models to neox. If any of those projects spark your interest, let me know and I can run you through the neox code 🙂
alstroemeria313#1694: ooh! https://github.com/pytorch/pytorch/issues/58839
alstroemeria313#1694: yes please do this
alstroemeria313#1694: Older discussion here https://github.com/pytorch/pytorch/issues/49171
Deleted User#0000: I think the Pony Preservation Project have good info on this https://boards.4channel.org/mlp/thread/36971917/pony-preservation-project-thread-82 I think they use Tacotron+Hifi-GAN. There's this discord too for discussion on Audio Deepfakes https://discord.gg/Er4Sjq6 which i think has people from PPP and https://vo.codes/ among others and seems like a nice community to dive into this.
Deleted User#0000: ~~Yes~~. I dont question their motives. They release all their models and stuff for free and thats good enough for me
alexyz#3459: don't forget https://uberduck.ai
alexyz#3459: they have a nice discord and a nice community there
Deleted User#0000: ah nice. do they open source stuff too?
alexyz#3459: ¯\_(ツ)_/¯
alexyz#3459: They just have some nice support for creating models
cognomen#6297: you could consult @AstraliteHeart on this stuff
alstroemeria313#1694: you know, they aren't "pony voices", they're the voices of actual voice actors
alstroemeria313#1694: *nods*
alexyz#3459: anyway this is very #off-topic people lmao
Kia#2550: Truee
Daj#7482: Yes we're all very unhappy that anime exists Daj#7482: We just politely pretend it doesn't Kia#2550: Hmm :thonk: finetune#0907: playing around with eval harness and gpt-neo-2.7B in fp16 rn finetune#0907: ran winogrande, but it's higher than it's supposed to finetune#0907: ``` | Task | Metric |Value | |----------|----------|-----:| |winogrande|acc |0.5991| | |acc_stderr|0.0138| ``` finetune#0907: gpt-neo repo says it should be 56.50% EricHallahan#1051: Do all the other metrics match? finetune#0907: the others roughly match, but a bit lower. guess because of fp16 finetune#0907: not completely sure i ran them right, because i missed lm_cache Louis#0144: Maybe if u convert to fp8 then we’ll have agi finetune#0907: so i'll rerun them. will take a while finetune#0907: haha finetune#0907: fp1 Louis#0144: I always knew fp1 would be AGI, here we come RBMs
Louis#0144: Hopfield networks EricHallahan#1051: Yeah, big :thonk: EricHallahan#1051: I'll tag @bmk on this. bmk#1476: huh? Louis#0144: wait I’ll tag him too just to make sure he isn’t asleep EricHallahan#1051: Any ideas why this would be the case? bmk#1476: did you already try running it in fp32 already Louis#0144: I saw similar stuff going on btw Louis#0144: For the original #carp Louis#0144: So you aren’t alone finetune#0907: didn't try. playing around locally right now and can only fit fp16 Louis#0144: I saw winograd increase Louis#0144: And the others decrease Louis#0144: Might just be a fluke though finetune#0907: interesting Louis#0144: Winograde is kinda high variance Louis#0144: How’s hellaswag Louis#0144: That’s usually a good one finetune#0907: acc 0.4186 🤔 Louis#0144: Yeah ur fine
Louis#0144: It’s just winograd finetune#0907: ok Louis#0144: Do this though Louis#0144: Just to be sure bmk#1476: o.O it shouldn't vary that much from run to run bmk#1476: in fact, it shouldn't vary at all from run to run, floating point math is deterministic finetune#0907: that's hellaswag finetune#0907: in fp16 finetune#0907: unless i messed up something finetune#0907: which i'm not sure of, so i'm rerunning it. should give the same result, yea Louis#0144: I saw changes when casting to fp16 finetune#0907: piqa fp16 ``` |piqa |acc |0.7209| | |acc_stderr |0.0105| | |acc_norm |0.7296| | |acc_norm_stderr|0.0104|``` Louis#0144: Maybe my implementation was borked though finetune#0907: same as my run before, so should be right bmk#1476: oh
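For scale on the winogrande numbers reported above: with acc_stderr = 0.0138, the gap between the measured 0.5991 and the referenced 56.50% is (0.5991 − 0.5650) / 0.0138 ≈ 2.5 standard errors — larger than dataset-sampling noise alone should plausibly produce, and consistent with bmk's point that identical runs should be deterministic anyway, so a config or precision difference is the more likely explanation.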
finetune#0907: hellaswag will take a while kurumuz#5695: i always knew fp16 was the real deal Gurkenglas#7362: when i try to generalize calculating the numerical instability involved in the calculation of B from A I end up needing the function that calculates B from A. Maybe I could deduce it from what happens when I calculate torch.autograd.grad(B.f(),A.g()) for f,g in double,float? Gurkenglas#7362: (except that needs grad_outputs... so i suppose I should use, say, the condition number of the torch.jacobian? but that needs the function again... surely the jacobian can be calculated without it, by just calling grad once on each output entry) alstroemeria313#1694: @Gurkenglas i have no idea how you propose to do this for general functions w/o the function Gurkenglas#7362: the computational graph still carries the information of how to convert between changes in input and changes in output alstroemeria313#1694: yeah but the condition number still requires evaluating the function at that point, right? alstroemeria313#1694: in its calculation Gurkenglas#7362: the condition number is just the largest ratio between output change and input change at that point, right? Gurkenglas#7362: aka the largest singular value of the jacobian alstroemeria313#1694: don't you have to divide by the value of the function mgostIH#0245: Ye this is true from SVDs Gurkenglas#7362: looks like there are two condition numbers https://en.wikipedia.org/wiki/Condition_number#General_definition_in_the_context_of_error_analysis alstroemeria313#1694: like i can say f(x) = 100x and that doesn't have a condition number of 100 alstroemeria313#1694: or is it just too early in the morning for me Gurkenglas#7362: both should be calculable without the function though srinadhu#9033: Hey Sid, sounds fun and I am interested 😀 StellaAthena#3530: In anything in particular? srinadhu#9033: adding distilling code (I believe it to be knowledge distillation correct me if wrong), Adding vision models and also integrating deepspeed's ZeRO-3 in decreasing order Gurkenglas#7362: Let's say A and B are vectors.
jac = torch.stack([torch.grad(b,A) for b in B]) absoluteconditionnumber = jac.norm(p=float('inf')) alstroemeria313#1694: btw torch.autograd.functional.jacobian() exists Gurkenglas#7362: yeah but it needs a function! alstroemeria313#1694: uhh Gurkenglas#7362: and i dont even have the output of the function afterward... StellaAthena#3530: @preetham is there person who has been doing the bulk of the distilling work. I actually have some code to review that he wrote but I can ping you both to discuss next steps after reviewing his code 🙂 alstroemeria313#1694: implement the jacobian of the thing manually then? Gurkenglas#7362: i dont follow, doesnt my above code already do that? alstroemeria313#1694: it's going to free the graph alstroemeria313#1694: also you mean torch.autograd.grad? alstroemeria313#1694: can you tell it to not free the graph Gurkenglas#7362: retain_graph=True then i guess, yep alstroemeria313#1694: how did you get the graph if not through a function evaluation Gurkenglas#7362: i did get it through a function evaluation but its silly if i need to give a name and block definition to every function on which i want to assert a property/test numerical stability srinadhu#9033: sure sounds good 👍 Gurkenglas#7362: Example: jacl = jacobian(lambda x: mlp(x), generator(latent)) jacr = jacobian(lambda x: generator(x), latent) jac = jacobian(lambda x: mlp(generator(x)), latent)
myassertallclose(jacl @ jacr, jac) alstroemeria313#1694: yeah it works with retain_graph, i just tried it alstroemeria313#1694: i get the same Jacobian as with the functional API Gurkenglas#7362: (oh ill need a third argument that carries the tensor to compare stability with respect to when the assert fails) Gurkenglas#7362: technically i should just be able to get the latest common ancestor set of both arguments right? is there an easy way to do that? alstroemeria313#1694: the what? Gurkenglas#7362: the last tensors in the computation graph that both argument tensors were calculated from Gurkenglas#7362: this could become a PR to torch.testing.assert_allclose :) Gurkenglas#7362: hmm. in `x=torch.tensor(2.); y=(x*2).double()`, how do I get `torch.autograd.grad(y,x)[0]` to have `dtype` Double? alstroemeria313#1694: oh no alstroemeria313#1694: the casts are part of the computational graph alstroemeria313#1694: so are things like movement between devices chilli#5665: Do you concretely want this? Or do you just want a functional way to get the grad of a module? alstroemeria313#1694: i have needed exactly this in the past for things alstroemeria313#1694: Mostly evaluating the Hessian but chilli#5665: So you mostly just want it to pass into autograd.hessian? alstroemeria313#1694: Yeah for now alstroemeria313#1694: You can do it with a bunch of hvps and you can do those without functional modules alstroemeria313#1694: But that's slower rn alstroemeria313#1694: I think
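A self-contained sketch of the trick being discussed here, with a toy two-layer function standing in for the generator/mlp pair; the names and shapes are illustrative, not anyone's actual code:
```python
import torch

def jacobian_from_graph(outputs, inputs):
    # One autograd.grad call per output entry; retain_graph=True keeps
    # the computational graph alive across the repeated calls.
    rows = [torch.autograd.grad(o, inputs, retain_graph=True)[0] for o in outputs]
    return torch.stack(rows)

A = torch.randn(3, requires_grad=True)
W1, W2 = torch.randn(4, 3), torch.randn(2, 4)
mid = torch.tanh(W1 @ A)  # stand-in for generator(latent)
B = W2 @ mid              # stand-in for mlp(...)

jac_inner = jacobian_from_graph(mid, A)  # d mid / d A, shape (4, 3)
jac_outer = jacobian_from_graph(B, mid)  # d B / d mid, shape (2, 4)
jac_full = jacobian_from_graph(B, A)     # d B / d A,   shape (2, 3)

# Chain rule: the composed Jacobians should multiply out to the full one.
torch.testing.assert_allclose(jac_outer @ jac_inner, jac_full)

# The absolute condition number discussed above is the largest singular
# value of the Jacobian, i.e. its spectral norm.
print(torch.linalg.norm(jac_full, ord=2))
```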
chilli#5665: I see - there are some other APIs we're thinking about for doing these kinds of functional things with modules alstroemeria313#1694: Ahh alstroemeria313#1694: it would also be nice if i had a functional interface to modules that took a single parameter which was a concatenation of flattened versions of all the params alstroemeria313#1694: but that's just to make hessians more convenient alstroemeria313#1694: for l-bfgs i also need the flat params and flat gradients but you can get those easier finetune#0907: reran the fp16 eval of gpt-neo-2.7B, so should be correct ```| Task | Metric |Value | |----------|---------------|-----:| |winogrande|acc |0.5991| | |acc_stderr |0.0138| |piqa |acc |0.7209| | |acc_stderr |0.0105| | |acc_norm |0.7296| | |acc_norm_stderr|0.0104| |hellaswag |acc |0.4186| | |acc_stderr |0.0049| | |acc_norm |0.5511| | |acc_norm_stderr|0.0050| |lambada |ppl |5.6244| | |ppl_stderr |0.1389|
| |acc |0.6235| | |acc_stderr |0.0068|``` chilli#5665: That's interesting alstroemeria313#1694: oh? chilli#5665: Just saying it's an interesting API chilli#5665: Haha alstroemeria313#1694: ah alstroemeria313#1694: it's literally just so you can multiply the gradient by the inverse Hessian or some modification of it alstroemeria313#1694: easily alstroemeria313#1694: and then do the step UnsupervisedLearner#4148: Is kernelization of self attention a scaling solution or is it something like MoE where it doesn't inherit the same scaling properties of the full attention calculation? (I ask that question in the steelman sense where the known deficiencies of the kernelized attention are accounted for and engineered around. I know it's not exactly a free lunch) cfoster0#4356: Hmm what do you mean by engineering around them? UnsupervisedLearner#4148: Well, afaik there's an issue where you can drop tokens if you exceed a certain limit in input size. I'm sure there's other caveats. https://arxiv.org/abs/2102.11174 cfoster0#4356: Someone correct me if I'm wrong, but my understanding is that, for a given performance target, it usually should take less compute to hit that target with regular attention than with efficient attention cfoster0#4356: For a large set of tasks of interest StellaAthena#3530: I think you need to say "for sufficiently large models," but yes
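For reference, the kernel trick being debated here, stripped to a few lines: replace softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV), so the length-squared attention matrix never materializes and cost grows linearly in sequence length. This sketch uses the simple elu+1 feature map from the linear-transformer line of work; Performer swaps in random features instead, and a causal version needs a running prefix sum rather than one big contraction:
```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    # q, k: (n, d); v: (n, e). phi keeps features positive so the
    # normalizer below stays well-behaved.
    phi = lambda x: F.elu(x) + 1
    kv = torch.einsum('nd,ne->de', phi(k), v)  # sum_n phi(k_n) v_n^T, (d, e)
    z = phi(k).sum(dim=0)                      # (d,), normalizer terms
    num = phi(q) @ kv                          # (n, e)
    den = (phi(q) @ z).unsqueeze(-1)           # (n, 1)
    return num / den

q, k, v = (torch.randn(128, 64) for _ in range(3))
print(linear_attention(q, k, v).shape)  # torch.Size([128, 64])
```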
UnsupervisedLearner#4148: Do you have any links or anything to dig into? This is interesting to me
cfoster0#4356: Off the top of my head I can't think of one particular link that gives a head to head
cfoster0#4356: The findings from this paper likely generalize to attention variants, although they did not cover them comprehensively https://arxiv.org/abs/2102.11972
cfoster0#4356: There's also a weak argument from silence a la: *"All these papers on 'efficient attention' popped up with much buzz and yet none of the benchmark-winningest or scale upping-est models use them. Sus."*
UnsupervisedLearner#4148: Hmm
Kernelized attention is rather specific; I wouldn't trust much besides direct analysis of it. The argument from silence is itself rather strong, but I find the idea fairly elegant and will wait before declaring it dead
cfoster0#4356: Yeah, fair enough
cfoster0#4356: Lucid has worked with all sorts of attention variants and seemingly concluded that even though performer (one of the kernelized ones) is the best of them, it still doesn't match regular attention, even at smaller scales
cfoster0#4356: But yeah, the idea is definitely elegant
UnsupervisedLearner#4148: I'd like to see something more direct than the random sampling that performer uses
UnsupervisedLearner#4148: There's likely something you can do to favor/disfavor kv pairs.
n.kh.l#5814: should i use <|endoftext|> as a delimiter for my data? like
```
sentence 1.<|endoftext|>sentence 2.<|endoftext|>
```
alexyz#3459: (their data is a lot of independent one-line sentences (for finetuning GPT-Neo))
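On the delimiter question: <|endoftext|> is the end-of-text token in the GPT-2 vocabulary that GPT-Neo reuses, so joining independent samples with it gives the model a clean document boundary. A minimal sketch, assuming the HF tokenizer:
```python
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")  # GPT-Neo uses the same vocab
assert tok.eos_token == "<|endoftext|>"

sentences = ["sentence 1.", "sentence 2."]
text = tok.eos_token.join(sentences) + tok.eos_token
print(text)  # sentence 1.<|endoftext|>sentence 2.<|endoftext|>
print(tok(text)["input_ids"])
```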
chirp#4545: Super general question if anyone here has a view: how hot is the AI field today compared to, say, the beginning of 2019?
chirp#4545: That was right before GPT-2
UnsupervisedLearner#4148: Can you better specify what you mean by 'hot'? I wonder if paper submission numbers are still growing like they used to
Kia#2550: Kinda hot, considering the field is getting mainstream attention (misleading of course). But it's better that people are actually interested
inox#5400: definitely hotter
inox#5400: I am very precise
bmk#1476: well, thanks to global warming the earth gets 0.02C hotter every year
bmk#1476: so yes
bmk#1476: it's 0.04C hotter
aquajet#7800: Some part of that is due to training lms
chilli#5665: Hmm
chilli#5665: Did you guys see that the number of neurips submissions decreased this year?
chilli#5665: Maybe AI really is getting less hot
chilli#5665: 🤔
Deleted User#0000: AI Autumn coming?
Louis#0144: COVID + Less PhD recruits
Louis#0144: easy to explain
alexyz#3459: can geese get COVID?
Louis#0144: wanna find out
Louis#0144: ?
alexyz#3459: they apparently can't
Kia#2550: Not another AI winter
Kia#2550: :goosegirl3:
AI_WAIFU#2844: See for yourself https://trends.google.com/trends/explore?date=all&geo=US&q=%2Fm%2F0mkz
AI_WAIFU#2844: AI seems to be heating up
AI_WAIFU#2844: ML is past its peak https://trends.google.com/trends/explore?date=all&geo=US&q=Machine%20learning
AI_WAIFU#2844: Same with data science https://trends.google.com/trends/explore?date=all&geo=US&q=Data%20Science
AI_WAIFU#2844: lol https://trends.google.com/trends/explore?date=all&geo=US&q=Big%20data
Kia#2550: lol
aquajet#7800: https://trends.google.com/trends/explore?date=all&geo=US&q=AGI,%2Fm%2F01hyh_
chirp#4545: i think that "AGI" trend is really showing Tax Day
tylerlastovich#3263: Exactly https://trends.google.com/trends/explore?date=all&geo=US&q=AGI,%2Fm%2F01hyh_,%2Fm%2F054ljw
aquajet#7800: Yeah that makes sense, a lot of the related queries are about that
aquajet#7800: Why do more people care about it now than 2018 though
Kia#2550: Interesting, California is more interested in ML than AGI (because they're a tech hub after all)
tylerlastovich#3263: They are hoping ML leads to AGI (either acronym) 😉
Teemochu#8740: Advertising Google Intelligence
Teemochu#8740: "We don't test on user systems. In fact, we've never seen anyone other than us get the thing up and running."
Kia#2550: What
n.kh.l#5814: im trying to finetune the 1.3b model with colab... im using a tpu with 35 gb ram (not vram) but im getting an OOM when training. im using this: https://happytransformer.com/ because i dont know much about torch but if i have to i can train manually. any ideas on how i can prevent the OOM?
Louis#0144: Easier than HF o.o
Louis#0144: I genuinely have never heard someone say HF is too hard
Louis#0144: It might be in your best interest to go learn their API. It has support for deepspeed which could help with training on GPUs
Louis#0144: Also I’m pretty sure their implementation doesn’t have TPU support
Teemochu#8740: it's a joke on how Google stuff :works_internally:
Teemochu#8740: see emote name
Kia#2550: :works_internally: lololol
n.kh.l#5814: hf?
Louis#0144: Huggingface
n.kh.l#5814: oh really? because with gpu it runs out of vram
Louis#0144: Fp16+deepspeed+beefy GPU
n.kh.l#5814: hmmm
n.kh.l#5814: the beefiest i can do is p100
n.kh.l#5814: i dont think the compute matters (that just affects speed) its really the vram right?
n.kh.l#5814: for whether it works or not
Louis#0144: That’s enough
Louis#0144: 16gb
n.kh.l#5814: ok yeah ill set everything up and tell you how it goes
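A rough sketch of the fp16 route Louis is suggesting, via the HF Trainer; `train_ds` is a placeholder for a tokenized dataset you supply, and even in fp16 the optimizer state of a 1.3B model is tight on 16 GB, which is where DeepSpeed's offloading helps:
```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

args = TrainingArguments(
    output_dir="out",
    fp16=True,                       # halve activation/weight memory
    per_device_train_batch_size=1,   # keep the per-step working set small...
    gradient_accumulation_steps=16,  # ...while keeping the effective batch usable
    # deepspeed="ds_config.json",    # optional: offload optimizer state too
)

# Placeholder: a Dataset yielding dicts with input_ids/attention_mask/labels.
train_ds = ...

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```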
Louis#0144: Pls don’t Louis#0144: Lmao Louis#0144: I’m not here to be your tech service n.kh.l#5814: ok i wont unless it doesnt work... if it doesnt work then ill complain to you /s n.kh.l#5814: ok regardless thanks! Louis#0144: Np aquajet#7800: @n.kh.l If you want help I can try, can't promise ill be of help though swcrazyfan#2478: Are you using regular old Colab or Colab Pro? I use Colab Pro along with TPU and high-ram, and it works fine. Also, I’m using the Colab Notebook from the gpt-neo GitHub. Actually, I’m fine-tuning right now lol. n.kh.l#5814: do you use the cloud buckets n.kh.l#5814: i have pro so thats not an issue n.kh.l#5814: its just that with that notebook its only buckets and i dont have colab or a credit card to get it with zphang#7252: I thought `jax.pmap` would automatically divide up the batch by num_devices for you... zphang#7252: (I'm actually a little surprised it doesn't, since jax inspects the dimensions and throws an error if it's not correct anyway) joaogui1#8461: You generally have to reshape to [num_devices, bs//num_devices, ...] nev#4905: how do I get to post in #memes Kia#2550: Have a regular role Kia#2550: :goosegirl3: nev#4905: :berk: nev#4905: and how do I get to get a regular role Kia#2550: Just be active I guess in the server?
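To make the pmap point above concrete: jax.pmap maps its function over the leading axis, one slice per device, so the [num_devices, bs // num_devices, ...] reshape is on you (and the batch size must divide evenly). A minimal sketch:
```python
import jax
import jax.numpy as jnp

n_dev = jax.local_device_count()
batch = jnp.ones((32, 128))                       # (batch, features)
sharded = batch.reshape(n_dev, 32 // n_dev, 128)  # (devices, per-device batch, features)

p_fn = jax.pmap(lambda x: x * 2.0)  # runs once per device on its slice
out = p_fn(sharded)
print(out.shape)  # (n_dev, 32 // n_dev, 128)
```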
nev#4905: k
Kia#2550: Lololol ask Louis
Kia#2550: :berk:
Louis#0144: Ask me?
Louis#0144: Why
Aran Komatsuzaki#5714: yeah i wanna know too
Kia#2550: you're a regular and working on the multimodal paper
Kia#2550: :mittwoch:
kurumuz#5695: much wow
mega b#6696: I thought I was safe, I thought I was alright. That was before I entered #starboard
𓅬 gabriel_syme 𓅬#3220: enter at your own peril
Exocamp#8255: You mean you *don't* like to discuss sexual intercourse with aligned catgirls?
Exocamp#8255: ~~I dunno what "aligned" means lmao~~
Exocamp#8255: ~~some sort of math term I assume~~
mega b#6696: :guilty:
alexyz#3459: from reading in this discord, I assume aligned means you make sure that an AI is aligned with certain principles/rules
alexyz#3459: I am 50% sure of that
Exocamp#8255: Ah
Exocamp#8255: ***then yes, this is certainly a topic to discuss***
alexyz#3459: that's what #alignment-general is for
aze#1010: hows 6.1B training going? is there an ETA StellaAthena#3530: There's never an ETA because people get mad at us for not meeting guestimated timelines. But very soon. Louis#0144: soon^TM EricHallahan#1051: (https://eleuther.ai/faq) alexyz#3459: training looks like it's doing well, you can see how it's going here: https://wandb.ai/eleutherai/mesh-transformer-jax?workspace=user- Kia#2550: :mittwoch: hmmm alexyz#3459: then go to "val" alexyz#3459: I do not know how many steps it will go for aze#1010: oh nice theres a wandb alexyz#3459: currently it's at 380k steps alexyz#3459: Ben Wang said at some point that it would maybe run to 350k steps aze#1010: the 2.7B was amazing for my use case, cant wait to see this one in action alexyz#3459: everything's a maybe bmk#1476: youre forgetting the part where after it finishes training we forget about it for a few months EricHallahan#1051: *Nothing can be extrapolated from W&B* alexyz#3459: lmao bmk#1476: parameters age like fine wine so this step is crucial aze#1010: huhh, what happened here? https://cdn.discordapp.com/attachments/729741769738158194/849673856364707846/unknown.png alexyz#3459: :thonk: alexyz#3459: I would guess some type of loss spike, it's going down though
alexyz#3459: but i dunno lol
bmk#1476: no guessing needed it literally is
Louis#0144: lmao
Louis#0144: wtf happened there https://cdn.discordapp.com/attachments/729741769738158194/849674032063971339/Screen_Shot_2021-06-02_at_11.41.10_AM.png
alexyz#3459: go to #gpt-neox-devs
Kia#2550: God
alexyz#3459: nobody seems to know why
Kia#2550: Um
alexyz#3459: just scroll up there
Louis#0144: yeah wtf
Kia#2550: Should we be panicked :mittwoch:
aze#1010: im curious what kind of hardware this is being trained on?
Louis#0144: geese
bmk#1476: goose processing unit
mega b#6696: 😔 we had a malfunction at the processing unit
mega b#6696: the GAN was fed a bad dataset, and started sorting gooses as ducks.
Louis#0144: goose adversarial network?
mega b#6696: correct.
mega b#6696: it's really just my brain, though. I can look at a goose and call it a duck for no good reason
Louis#0144: geese are great optimizers. Just look at the paths they take over fields to come bite me
mega b#6696: 📠
Louis#0144: thats some grade-A grad descent
alexyz#3459: this is getting very #off-topic
Kia#2550: Always have been
alexyz#3459: The 360000 step section might as well not have happened with the loss
alexyz#3459: like the 350000 step section ended in 1.606 loss
alexyz#3459: and the 370000 one is back at 1.606 loss
alexyz#3459: anyway, imma 🙏 to the Eleuther gods for 6.1B release in June
Daj#7482: TPUs
Kia#2550: Perfect timing for the other model to come (CogView)
Daj#7482: "Aligned" refers to "AI Alignment", which is an umbrella term for a number of research directions tackling the question of how to actually get an AI to do "good" things we want and not "bad" things we don't want (with all the difficulty that comes with that). The aligned catgirl joke is that a lot of people do AI (or at least claim to as a meme) because they want to have sexy catgirls or whatever, and the joke answer is that that will only end well if the AI is aligned to human values and not some kind of world-destroyer or whatever. https://www.youtube.com/watch?v=EUjc1WuyPT8
EricHallahan#1051: CogView has been around for months though? :thonk:
alexyz#3459: No, like the code + models
alexyz#3459: They say an estimate for release is in 2 weeks
alexyz#3459: Not that online demo which doesn't even allow the word "movie"
Kia#2550: "Ruby"
Kia#2550: The model is what I meant, I suppose
Daj#7482: We should all download the model and use it to generate anti-China propaganda and post it all over social media
Daj#7482: lol
Daj#7482: Watch them shoot themselves in the foot
Kia#2550: Lololol
alexyz#3459: that disincentivizes further model releases from China
alexyz#3459: I'd rather live in a world where we have more open models
Kia#2550: Like what OpenAI do? 😄 🔥
kurumuz#5695: i can proudly say that
kurumuz#5695: we have the goose
Kia#2550: He's always in the server lurking
Bunzero#2802: The golden goose 😳
kurumuz#5695: not that goose
Kia#2550: Triggergandi?
kurumuz#5695: its not a hooman
kurumuz#5695: exactly
Kia#2550: Ow :mittwoch:
Louis#0144: the goose memes spread to my lab btw
Louis#0144: someone in my lab was talking about goose memes with me today
alstroemeria313#1694: ahah
Louis#0144: guys
Louis#0144: did I become a meme
Louis#0144: did I make a multinational meme...
Louis#0144: holy shit Louis#0144: random people keep sending me geese Daj#7482: You are just a vessel for the meme Louis#0144: LMAO Daj#7482: This is beyond your control alexyz#3459: why did you do this Louis#0144: :ultragoose: alexyz#3459: you have cursed us with the :goose2: Daj#7482: Well now that outgroup has goose memes I guess we need new ingroup memes :tribalism: Louis#0144: lmao kurumuz#5695: yea dump the goose kurumuz#5695: kill it kurumuz#5695: new meme is the multimodal grounding Daj#7482: They posted the goose meme? Dump it Daj#7482: https://i.ytimg.com/vi/YgBC2AfzcT0/maxresdefault.jpg kurumuz#5695: lol Louis#0144: LMAO Louis#0144: god I love wallstreet bets Louis#0144: it was the funniest thing to come out of covid
kurumuz#5695: fast covid19 loans sending me an online tax kurumuz#5695: oh wow, free money! zphang#7252: yea, I thought that as long as it was a multiple of num_devices jax would handle it automatically, but turns out you have to reshape it yourself bmk#1476: bogdanoff meme has nothing to do with wsb bmk#1476: bogdanoff transcends reddit alexyz#3459: this is getting even more #off-topic Louis#0144: their faces confuse me mkualquiera#3484: Now you have to write a book about it alexyz#3459: the first mention of :goose: Louis#0144: Yes alexyz#3459: think about it, it isn't even a year old alexyz#3459: (the meme) Louis#0144: Sid made a goose joke before that Louis#0144: We’ve been over this Louis#0144: He’s the original goosegirl alexyz#3459: wait where? alexyz#3459: because the first message with "geese" here was on september 25th bmk#1476: that wasnt a joke bmk#1476: there literally is an html parser called goose alexyz#3459: and that goose message was on september 24th
mkualquiera#3484: Maybe the meme is older than all of us 😳 Louis#0144: @Sahl how old is Waterloo? Louis#0144: Mr. Goose is Waterloo’s mascot Louis#0144: It’s been the mascot for like Louis#0144: 25yrs? Louis#0144: Unofficial mascot Louis#0144: The official one is some stupid lion bmk#1476: goosegirls are novel though mkualquiera#3484: of course bmk#1476: just cite geese as prior work mkualquiera#3484: such degeneracy can only come from modern times :berk: Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/849689271330013214/image0.png Louis#0144: Waterloo has an entire gift store Louis#0144: Just dedicated to geese Louis#0144: When you win a big award at the university you can get a bronze goose, silver goose, or the gold goose Louis#0144: I have a silver goose Bc I won a giant grant Louis#0144: Gold goose is like if u win a Nobel prize mkualquiera#3484: Should be the other way around mkualquiera#3484: Nobel prize if you get a Gold goose mkualquiera#3484: gold goose >>>> nobel prize
Louis#0144: LMAO
Louis#0144: Here’s my silver goose
mkualquiera#3484: Imagine having some silly lion as your ""official mascot"" and not embracing the goose smh
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/849689901573341265/image0.jpg
bmk#1476: ok we need to bring innovation back to eleuther
bmk#1476: when goosegirl meme revival
Louis#0144: Lmaooo
Louis#0144: All graduating students get a goose plushie btw @bmk https://cdn.discordapp.com/attachments/729741769738158194/849690360782258187/image0.jpg
Louis#0144: My ex gf added the bow (she wasnt an ex at the time)
Louis#0144: The goose watches me sleep https://cdn.discordapp.com/attachments/729741769738158194/849690591947522058/image0.jpg
mkualquiera#3484: https://cdn.discordapp.com/attachments/729741769738158194/849690605314899978/2021-06-02-114659_158x161_scrot.png
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/849690692954357760/image0.jpg
Louis#0144: Literally
Louis#0144: Pointed at my bed
Louis#0144: LMAO
Louis#0144: Oh I didn’t even know my dog was there
Louis#0144: LMAO
bmk#1476: is it possible to buy goose paraphernalia from waterloo and have it shipped
Louis#0144: Yes
bmk#1476: asking for a friend
Louis#0144: You can’t buy the bronze silver or gold geese though Louis#0144: Unless you were a prior winner Louis#0144: In which case you can buy a replacement bmk#1476: i mean like the plushie Louis#0144: Oh Louis#0144: Ye Louis#0144: Go for it bmk#1476: or tshirt bmk#1476: eleuther t shirt comes first tho alexyz#3459: bmk you've been saying that you want those for 6 months now mkualquiera#3484: yeah man just do it bmk#1476: :gameryes: bmk#1476: shipping costs unreasonable and not enough demand to make economies of scale happen EricHallahan#1051: Also nobody has a vector of the logo as far as I can tell. bmk#1476: there's like 5 people who would actually buy the tshirt alexyz#3459: seriously? EricHallahan#1051: Yeah, it would be used on the website if we did. :berk: Louis#0144: Wtf Louis#0144: That’s like an hour of work Louis#0144: Cant we bribe a web dev with coffee
alexyz#3459: I'll work on the SVG alexyz#3459: time to recreate the logo alexyz#3459: i have literally nothing better to do kurumuz#5695: ask tabloid kurumuz#5695: ¯\_(ツ)_/¯ kurumuz#5695: he likes coffee Louis#0144: lol Louis#0144: He would be unwilling to hide a goose though kurumuz#5695: ~~and geese~~ Louis#0144: @alexyz is corrupt Louis#0144: I can pay him off kurumuz#5695: lol alexyz#3459: who even designed the Eleuther logo Louis#0144: An AI EricHallahan#1051: Sid. alexyz#3459: can confirm, yes mkualquiera#3484: An AI alexyz#3459: 👍 Louis#0144: Yeah Sid is GPT4 kurumuz#5695: very grounded
mkualquiera#3484: multimodal af mkualquiera#3484: why are you reacting ":thonk:" mkualquiera#3484: it's the new meme mkualquiera#3484: instead of based you say grounded mkualquiera#3484: instead of [good or whatever] you say multimodal mkualquiera#3484: good & based = multimodal & grounded mkualquiera#3484: smh Sahl#0630: instead of cringe you say unaligned mkualquiera#3484: exactly Daj#7482: INGROUP MEMES Daj#7482: :tribalism: Daj#7482: Me and Sid were just arguing about whales (as you do) Daj#7482: And I learned about beaked whales Daj#7482: and this guy: https://en.wikipedia.org/wiki/Cuvier%27s_beaked_whale alexyz#3459: you can't force memes Daj#7482: > goose-beaked whale alexyz#3459: just wait for them to form Sahl#0630: forcing memes is kinda unaligned Sahl#0630: eh that doesn’t work Daj#7482: new emote suggestion https://cdn.discordapp.com/attachments/729741769738158194/849693636336943104/Screenshot_from_2021-06-02_18-58-52.png
alexyz#3459: they are a natural formation of the internet Daj#7482: I know this is a shit meme Sahl#0630: we need a one syllable word for unaligned Daj#7482: I just found it a funny anecdote mkualquiera#3484: yeah not grounded at all alexyz#3459: just wait and they'll come kurumuz#5695: waiting is not grounded either kurumuz#5695: very not multimodal kurumuz#5695: too slow Sahl#0630: can we just co-opt cringe to mean unaligned Sahl#0630: that sounds easier mkualquiera#3484: cringe catgirls are kinda unaligned ngl Sahl#0630: “AI researchers predict AGI is cringe by default” kurumuz#5695: we need to align the catgirls kurumuz#5695: by grounding them kurumuz#5695: who is with me ethan caballero#6044: :wat: Daj#7482: cringe = unaligned even has a meta layer to it because Yud hates the word cringe lol Sahl#0630: ok that’s epic Sahl#0630: this is my new terminology now
kurumuz#5695: we need a word for epic alexyz#3459: please talk in #off-topic alexyz#3459: that is what that channel is for kurumuz#5695: grounding sounds very on topic kurumuz#5695: but kurumuz#5695: ¯\_(ツ)_/¯ mkualquiera#3484: yeah we're talking about alignment and multimodal grounding alexyz#3459: making memes and trying to find new words for existing words isn't mkualquiera#3484: very serious kurumuz#5695: yeah you will believe us when people actually start using those words kurumuz#5695: its important for the whole ML scene Daj#7482: Pinned a message. EstebanSir#2189: hey EstebanSir#2189: so how is it going for you guys? im here in denial that man made algorithms are nothing compared to neural networks Louis#0144: I’d be down to ground catgirls 😳 Louis#0144: Oh Louis#0144: Bad timing Daj#7482: You get used to it lol EstebanSir#2189: i've been working on a conversational/ chatbot AI for around 4 years, and i only succeeded the more work i left to the neural networks Daj#7482: "Every time I fire a linguist, the performance of the speech recognizer goes up".
ERROR: type should be string, got "https://en.wikipedia.org/wiki/Frederick_Jelinek\nEstebanSir#2189: HA\nEstebanSir#2189: i knew nothing of lingustics when i started 4 years ago, now i still don't know nothing but i also happen to hate it\nJames#6892: What do you mean by this? Are you saying that trying to fit user messages into tightly defined boxes is a no-go?\nEstebanSir#2189: Kinda, i tried finding a common structure for questions/answers but complex sentences seem to always find a way to break the rules (or just make ones that are too complex for someone like me, that doesn't have any proper education on the subject)\naero#1357: willing to pay 6 billion 🤗 emojis for 6B to show up on HF\nDaj#7482: We don't do the HF implementations, so it will be up to HF whether and when they implement any more models we release\nDaj#7482: 6B uses rotary so they would need to implement at least that\naero#1357: looking forward to the base release too ofc 😄\nEricHallahan#1051: I don't know if I will trust their implementation anyway. 🤔\nCKtalon#7792: quite amusing to see, since public opinion on HF seems awesome (based on Twitter), but here it's like HF sucks 😛\nDaj#7482: To be clear: We love HF\nDaj#7482: The GPT Neo implementation was just moderately botched\nDaj#7482: But it's nothing personal\nStellaAthena#3530: HF is A+ at making models publicly accessible. That said, their implementations are not always highly optimized and if you want to be doing complicated stuff under the hood things get annoying somewhat quickly.\naero#1357: ahh\nive been having really good performance from 2.7B on there at least, and compared to the mtf implementation its definitely easier to deal with\nStellaAthena#3530: A+ for 95% of uses\nC- for 5% of uses that make up 90% of the uses that people like @EricHallahan are interested in.\nLouis#0144: Horror"
Louis#0144: It’s pure horror Daj#7482: The actual codebase is big oof lol Louis#0144: When we go back to visual grounding we need a dedicated engineer Louis#0144: Lmao Louis#0144: I want to do that eventually aero#1357: that new rotary implementation isnt in mtf is it 🙈 Daj#7482: don't think so aero#1357: really biggest plus for me is that hf is pytorch Daj#7482: NeoX is pytorch too! Someday we'll have good models there too hah Deleted User#0000: no it is! i added it Daj#7482: I stand corrected Deleted User#0000: but the released model isn't using rotary aero#1357: oh is it already released? Daj#7482: The Neo models don't use rotary Deleted User#0000: https://github.com/EleutherAI/gpt-neo/blob/master/models/layers.py#L330-L357 Daj#7482: All future models most likely will use rotary EstebanSir#2189: what is rotary?? cfoster0#4356: https://blog.eleuther.ai/rotary-embeddings/ kurumuz#5695: idk who would say the HF codebase is good mega b#6696: noo
mega b#6696: lucid left once again...
EstebanSir#2189: oof this is a bit much for me
alexyz#3459: Why does the thumbnail not use the EleutherAI logotype?
EstebanSir#2189: i guess i will just assume "its a thing that encodes stuff, and its really good"
Daj#7482: It's an alternative to the usual positional encodings
Daj#7482: and our tests show it's one of the few things that actually consistently makes transformers better
EstebanSir#2189: yup, that's good to hear
EstebanSir#2189: what other kinds of neural networks have you tried using this on besides transformers?
Daj#7482: who cares about non-transformers lmao
Daj#7482: Also, positional encodings are kinda a transformer thing
Daj#7482: Not sure what other networks would even use them
Sid#2121: most other networks are not position invariant
EstebanSir#2189: ? RNNs are not?
Daj#7482: RNNs are obviously not position invariant
Daj#7482: processing A1 then A2 is different than processing A2 then A1
Daj#7482: but with transformers without position encoding the ordering doesn't matter
EstebanSir#2189: oh, invariant
EstebanSir#2189: right, should have looked at the def of that word
StellaAthena#3530: Some CNNs are? But those are position invariant by design, and if you don't want that you just don't do it.
StellaAthena#3530: @EstebanSir If you take the sentence "This is a book" and feed it into a (bidirectional) transformer that doesn't have positional encodings, you will get the exact same result as if you had fed in "book is a This". The purpose of positional encodings is to allow the model to tell these two sentences apart. Autoregressive transformers like GPT-N are able to gain some positional info based on how they are trained, but as the blog post @cfoster0 linked to shows, rotary embeddings (RoPE stands for "**Ro**tary **P**ositional **E**mbedding") significantly increase performance over relying on the autoregressive structure.
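To make the rotary idea concrete, a compact single-head sketch in the spirit of the blog post linked above; real implementations differ in how they pair dimensions and cache the trig tables:
```python
import torch

def rotary(x, base=10000):
    # x: (seq, dim). Rotate each (even, odd) pair of channels by an angle
    # proportional to its position, so q·k depends on relative position.
    seq, dim = x.shape
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    angles = torch.arange(seq).float()[:, None] * inv_freq[None, :]  # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q, k = torch.randn(16, 64), torch.randn(16, 64)
scores = rotary(q) @ rotary(k).T  # attention logits now encode relative offsets
```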
EstebanSir#2189: ohhhhhhhh ok i got it
EstebanSir#2189: that's so strange, i guess it makes sense, should probably review my notes on how a transformer works
EstebanSir#2189: im glad i most likely won't have to implement any of that for experimenting
n.kh.l#5814: on the gpt neo 1.3b colab it says that there should be a folder where your dataset is. why not just 1 file? is there a difference between just using 1 file and putting it in parts?
StellaAthena#3530: It’s generally good practice to keep individual files small(-ish) in case something goes wrong during data transfer. If it’s all on the computer you’re using then go wild.
n.kh.l#5814: its not too big so i think it should be fine. one more thing. do i need to add <|startoftext|> and <|endoftext|> to the data because my data is split by newlines right now?
n.kh.l#5814: i know in the colab it tokenizes it but im not sure the extent of the tokenization
n.kh.l#5814: im running the create_tfrecords.py file and it just generates A LOT of tfrecords files and takes a really long time. my dataset is 6M lines... whats a good amount of data to have for finetuning?
𓅬 gabriel_syme 𓅬#3220: well for a huge number of people (a lot of them like me right now trying to once again dabble in NLP) it is. It's incredibly important to be a gateway to something new, even if half the users you introduced to this stuff get better, do stuff on their own, and end up not liking your codebase 🙂
𓅬 gabriel_syme 𓅬#3220: I guess you can set the size of your shards when creating tfrecords. I haven't used that colab but generally you should; I set it around 1GB I think, not sure what's best here
tg#7159: @Deleted User de23c58c Not sure how interesting this is, but I've been experimenting with your DDPM library (https://github.com/lucidrains/denoising-diffusion-pytorch), and I found that the output quality is _very_ sensitive to the particular Unet model.
tg#7159: I ended up repro-ing the original paper except that I was using a more conventional unet architecture, and it didn't seem to produce very good results. I have a suspicion that the ReLUs are problematic.
tg#7159: Swapping in your UNet implementation completely fixed the output samples, despite having almost no impact on the loss.
UnsupervisedLearner#4148: How are MLPs outperforming attention on a parameter to parameter basis? Isn't attention like a projected MLP in the first place?
UnsupervisedLearner#4148: A(X) = Q(X) dot K(X)^T == f_W,b (X) ? https://cdn.discordapp.com/attachments/729741769738158194/849843889313415168/IMG_20210602_215542.jpg
UnsupervisedLearner#4148: Is it just harder to train a projected matrix? Am I missing something obvious?
UnsupervisedLearner#4148: What I mean is, wouldn't a dynamic matrix have a higher representational power? But the empirical results show the opposite; more data gives gMLP the edge
lc#8952: Was gpt-neo trained on the entire pile?
Jozef Poniatowski#7589: wow wandb is so insanely good
Jozef Poniatowski#7589: i had heard about it before but only gave it a try from watching it used so widely here
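The contrast behind UnsupervisedLearner's question above, stripped down: self-attention builds its token-mixing matrix from the input on every forward pass, while a Mixer/gMLP-style block applies one fixed learned matrix to every input. A toy single-head sketch with no value projection, purely illustrative:
```python
import torch

n, d = 16, 32
X = torch.randn(n, d)

# Attention: the (n, n) mixing matrix is a function of X itself.
Wq, Wk = torch.randn(d, d), torch.randn(d, d)
A = torch.softmax((X @ Wq) @ (X @ Wk).T / d**0.5, dim=-1)
attn_mixed = A @ X

# Mixer/gMLP-style: the (n, n) mixing matrix is a plain parameter,
# identical for every input.
W_mix = torch.randn(n, n)
mlp_mixed = W_mix @ X
```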
inox#5400: wandb does everything I could get tensorboard to do after a week of config
inox#5400: and then it does some more stuff I didn't think of
inox#5400: it's nice
Jozef Poniatowski#7589: yes the extra stuff it saves is so useful for me
Jozef Poniatowski#7589: being bad at keeping track of experiments
bmk#1476: ~~omniboard also gud tho~~
inox#5400: keeping track of experiments easily and making it easy to share all the logs could really help close the gap between how companies run experiments and how other research groups do it
inox#5400: like I heard in Google you can go view logs from any experiment from any published paper
inox#5400: that does not happen in academia
bmk#1476: omniboard is like wandb but open source
inox#5400: sounds good!
tg#7159: Well, duh... I think the key difference is that the timestep embeddings are injected into each block. I was injecting only at the input to the model, which still converges to the same overall loss but seems to produce much worse samples earlier on in training.
tg#7159: I bet it would be more clear why this is happening if I broke the loss down by timestep.
𓅬 gabriel_syme 𓅬#3220: a shot in the dark from the mixer paper
> Perhaps, self-attention layers in ViT lead to certain properties of the learned functions that are less compatible with the true underlying distribution than those discovered with Mixer architecture
𓅬 gabriel_syme 𓅬#3220: also less inductive bias + :brr: I guess
Napolean_Solo#2907: hello folks
Napolean_Solo#2907: need some help
Napolean_Solo#2907: ```python
{