MasterScrat#6910: Could be a pre-trained general purpose RL system, like a decision transformer trained on thousands of envs that can do zero-shots on new ones Ravna#1831: it contradicts with its recent sentiments reflected on the interview on its robotics team Ravna#1831: they are abandoning RL for now MasterScrat#6910: Oh would love a link on that MasterScrat#6910: Sad though Fessus#9563: We've had this discussion before but DT's aren't really RL AI_WAIFU#2844: Watch it be Jukebox 2.0 MasterScrat#6910: Sure, for now MasterScrat#6910: (Assuming the point was that they’re doing offline learning only?) Fessus#9563: Could be Fessus#9563: There's nothing stopping some organization with huge resources from making DALL-E but for action spaces Fessus#9563: I.e. NLP description of goal -> sequence of actions Ravna#1831: they said in the interview that they think the breakthrough of RL should only be achieved after video-based unsupervised pre-training becomes computationally viable Ravna#1831: so they decided to wait first Fessus#9563: They're going to be waiting for a while unless they come up with a better sequence modeling framework AI_WAIFU#2844: So video GPT maybe, after like 4 years? Fessus#9563: 4 seems optimistic without a breakthrough in transformer efficiency Fessus#9563: but you never know Fessus#9563: maybe they'll just power though with more compute resources CRG#8707: <https://blog.deeplearning.ai/blog/the-batch-new-year-wishes-from-fei-fei-li-harry-shum-ayanna-howard-ilya-sutskever-matthew-mattina>
> This ability to process text and images together should make models smarter. Humans are exposed to not only what they read but also what they see and hear. If you can expose models to data similar to those absorbed by humans, they should learn concepts in a way that’s more similar to humans. This is an aspiration — it has yet to be proven — but I’m hopeful that we’ll see something like it in 2021. Fessus#9563: Well if anyone has the near infinite compute resources it's google Teemochu#8740: oddball guess, new paper on enhanced context length, comes with a ~2T model that costs the same as davinci Louis#0144: That’s what I was thinking Fessus#9563: That would be nice Louis#0144: I think it’s just another language model tbh Fessus#9563: Maybe they figured out how to make O(logN) transformers work Louis#0144: Kinda skeptical of that tbh Louis#0144: Like local attention wasn’t essential for GPT3 Fessus#9563: Hey, it works for us https://en.wikipedia.org/wiki/Forgetting_curve EricHallahan#1051: You must mean **O**(*N* log *N*)? Louis#0144: It felt kinda tacked on Fessus#9563: I do not EricHallahan#1051: :thonk: Teemochu#8740: it would be lg^2 n technically Fessus#9563: Doesn't need to work as well on a per-parameter basis as a vanilla transformer, just needs to keep scaling with more parameters CRG#8707: GPT-3 is already ~N^1.5, but it's not really the bottleneck. AI_WAIFU#2844: I actually have ideas for that. AI_WAIFU#2844: You use linformer + fenwick trees. EricHallahan#1051: I don't know, I am not really too familiar with sub-quadratic attention mechanisms.
AI_WAIFU#2844: Then since it's all linear you can pass gradients though n steps in log(N) time. Ravna#1831: 2T model is still pretty weak, far from gwern's "2.2 million more compute" requirement extrapolation estimation. Probably couldn't suddenly be much more disruptive than GPT3. Louis#0144: Also tbf dense 2T wouldn’t be crazy hard with grace or the super pods AI_WAIFU#2844: Yeah I think 2T is well within current capabilities. Louis#0144: We might have to shift the neox goalpost Louis#0144: 2t neox wen Louis#0144: 😉 AI_WAIFU#2844: $$$ James#6892: I feel like 2T is not significant from the user's perspective. It needs to be a much longer context or something James#6892: They won't release a 2T model if its just a bit better than gpt-3 Louis#0144: Sure they would MasterScrat#6910: Yeah - anyone else worried their current work/research is suddenly gonna get crushed when they announce 😅 Fessus#9563: Nope, all my research is biomedical EricHallahan#1051: Plot twist: It is a 2T model distilled to 175B. James#6892: LOL wtf Teemochu#8740: so my basic idea (haven't coded it up yet, I swear I'll try sometime) is basically do a standard gpt architecture with learned positionals, but for the context embeddings you learn multi-token... so each set of two tokens has its embeddings concatenated and then fed forward into a d*4 layer and back down to d. Stack for 4, 8, 16, 32, etc tokens. Now for the farther back embeddings in the context, use these multi-token embeddings in lieu of the single-token. Need to use learned positionals to my understanding since now the elements don't all consist of the same token size. James#6892: and they would call it gpt-3(x) Louis#0144: Feels like domain independent progressive generation? Louis#0144: Honestly that feels almost likely Louis#0144: Where did anyone hear about the OAI stuff anyway
Teemochu#8740: aka "we have 2000 dimensions, so *why the honk are we using them to encode a single word instead of using them to summarize entire sentences, especially farther back in the memory?*" bmk#1476: in my interviews with the alignment folks theyve talked about some pretty cool stuff but nothing that would be such a Big Deal with all the hype, unless it is and it turns out to be a big let down James#6892: Yeah, why does it seem like everyone knows about the *thing* CRG#8707: https://discord.com/channels/729741769192767510/747850033994662000/852607259078688815 EricHallahan#1051: CRG, the wonder retriever. James#6892: should just build CRG into an AI Teemochu#8740: > 2048 is more than enough yes, but imagine if you can do this with *just 72 embeddings in the context* James#6892: would be better than gpt-3 EricHallahan#1051: CRG is a retriever from the future. bmk#1476: i feel like the alignment people probably wouldnt allow me to be privy of the Big Thing though Fessus#9563: More context allows you to do more non-text things better bmk#1476: if they themselves are even privy of it James#6892: more context is definitely important imo, assuming the model can actually take into account everything inside the context Teemochu#8740: pretty sure they'd less let me, part of why I feel like capapilling myself instead of letting others capapill me Ravna#1831: Most shakespeare poems are way shorter than 2048 tokens. Before every poem a language model spews out is shakespeare level, the context size isn't the limiting factor yet. Teemochu#8740: I adjust what I hear for the likelihood I'd hear it given the agenda of the speaker ersatz#0001: I'm confused by what Yann LeCun is railing against here https://twitter.com/ylecun/status/1409940043951742981 is this another strike against the symbolists? James#6892: As much as I think its cool, I'm pretty sure poem generation is not what most of their users are doing and is not likely to be the use case they're optimizing for Ravna#1831: Poems are just an example. What I mean is that the context of 2048 already contains a huge amount of information and the current capacity of models aren't nearly making use of all of it.
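Going back to Teemochu's multi-token embedding sketch a few messages up: a rough PyTorch illustration of "concatenate adjacent embeddings, feed forward through a d*4 layer and back down to d, then stack for 4, 8, 16, ... tokens". Everything here (module name, GELU, sharing one pooling layer across levels) is a hypothetical reading of that description, not an existing model.

```python
import torch
import torch.nn as nn

class PairwisePool(nn.Module):
    def __init__(self, d):
        super().__init__()
        # concatenated pair (2d) -> 4d -> back down to d, as described above
        self.net = nn.Sequential(nn.Linear(2 * d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, x):                       # x: (batch, seq, d), seq assumed even
        b, s, d = x.shape
        pairs = x.reshape(b, s // 2, 2 * d)     # concatenate each pair of adjacent embeddings
        return self.net(pairs)                  # (batch, seq // 2, d)

d = 16
pool = PairwisePool(d)
x = torch.randn(1, 32, d)                       # 32 single-token embeddings
levels = [x]
while levels[-1].shape[1] > 1:                  # summaries spanning 2, 4, 8, ... tokens
    levels.append(pool(levels[-1]))
print([lv.shape[1] for lv in levels])           # [32, 16, 8, 4, 2, 1]
```

Farther-back context would then be represented by the coarser levels, so a fixed number of context slots can stand in for many more raw tokens.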
bmk#1476: i thought shakespeare was a made up poet, like how we made up schmidhuber Ravna#1831: yeah but who cares about physical historic actual persons Ravna#1831: what we make up is what matters James#6892: I don't think so, lots of valuable use cases I see making it to prod is long form Q/A, long-form summarization, classification with lots of examples CRG#8707: Yeah, shortformer saw gains from training on shorter contexts -> longer TrXL caching afterwards James#6892: but i guess they have special endpoints for these now James#6892: I'm surprised there even is a shortformer lol 45#2247: i've talked to one guy in robotics going to nlp, can confirm James#6892: are they abandoning RL in general, or just in robotics? Ravna#1831: in robotics, also only temporarily CRG#8707: Gains were pretty great (cheaper pretraining also: <https://arxiv.org/abs/2012.15832>) 45#2247: for aligning AIs, it can appear more productive to work on aligning NLP systems than RL systems in continuous/robotics setting 45#2247: like apparently this flopped https://openai.com/blog/safety-gym/ James#6892: yeah cuz one of their product leads told me RL using human feedback was one of the thing they were hoping would drastically improve GPT-3 James#6892: I hope it doesn't turn out to be a flop MasterScrat#6910: They already have 2 papers out for that right James#6892: I think so yeah 45#2247: hum the summarizing from human feedback and? James#6892: ~~adapting values to society or something~~ James#6892: its on their recent blog
James#6892: but not sure if its the same idea MasterScrat#6910: Fine tuning gpt2 from human feedback from 2019, similar work MasterScrat#6910: Wasn’t that the project going on in #lm-thunderdome Ravna#1831: Anyway i think someone should pull a novelai and make a programmer tool equivalent. Judging by the reacting speed of copilot, it seems to be based on a model smaller than 175B. Ravna#1831: It might as well be as the same of magnitude as gpt-neo's. MasterScrat#6910: New office in Europe would be welcome as well 🇪🇺 𓅬 gabriel_syme 𓅬#3220: it feels you'd write about that if you did, before the model was out StellaAthena#3530: He's quote tweeting someone. I'm not sure what the confusion is. He's disagreeing with Pinker, because Pinker spends most of his time commenting on things he knows nothing about 45#2247: would you start a research project from a gif demo? like that could 100% be pre-recorded Teemochu#8740: *mixes the two corpi just for chaos* 𓅬 gabriel_syme 𓅬#3220: the cool thing about working in architecture and engineering, no one is out to crush your work lol Ravna#1831: https://cdn.discordapp.com/attachments/729741769738158194/859550694654083072/Screen_Shot_2021-06-30_at_04.47.24.png Ravna#1831: It says it's fast enough to use as you type too. 45#2247: great https://cdn.discordapp.com/attachments/729741769738158194/859551366065815572/unknown.png Ravna#1831: well, fictions have fan-fictions, there's no reason why code shouldn't have fan-code:berk: dmvaldman#4711: I find it interesting that Microsoft Research has been doing DL research in the code autocompletion space for years (https://arxiv.org/abs/1912.00742) and then went with openai. 𓅬 gabriel_syme 𓅬#3220: yeah, I remember reading some work from one of their researchers 𓅬 gabriel_syme 𓅬#3220: it was quite a bit of work 𓅬 gabriel_syme 𓅬#3220: also they took 'pythia' from us damn it Ravna#1831: Yeah and they don't even try to merge their production lines. It's like they are releasing Door 3 and Floor 8 after Windows 11 and all three of them are operating systems.
dmvaldman#4711: Lol. Door 3. That took me a second. 𓅬 gabriel_syme 𓅬#3220: they were doing GNNs I think right 𓅬 gabriel_syme 𓅬#3220: I remember I was looking to train some on visual algorithmic programs we use in architecture, which are literally just xml in the background 𓅬 gabriel_syme 𓅬#3220: I was hoping to be able to 'autocomplete' graphs of components 𓅬 gabriel_syme 𓅬#3220: turns out, I might just do it with GPT-J chilli#5665: from what I hear, they pretty much don't work together chilli#5665: lol Teemochu#8740: :bigbrain: GPT-Neo and GPT-J are fanmodels of OpenAI's work 45#2247: gpt-4 is eleuther fanservice 45#2247: gpt-n is just a trick to have flesh humans generate more data for gpt-n+1 &.#0001: OpenAI Codex is an API. They said later this summer they will provide access. mega b#6696: hmm 🤔 EricHallahan#1051: hmmm mega b#6696: doesn't sound like the open in OpenAI makes much sense now chilli#5665: Let's just make a GPT-J-code chilli#5665: and release that 𓅬 gabriel_syme 𓅬#3220: I like that 🙂 𓅬 gabriel_syme 𓅬#3220: I'm definitely trying the XML thing in the 'near' future cfoster0#4356: https://cdn.discordapp.com/attachments/729741769738158194/859565362588418048/5eyidg.jpg AI_WAIFU#2844: Fuck that's good
&.#0001: Codex is an OpenAI model trained on code &.#0001: Source: GPT-3 waitlist &.#0001: I may desire to help fund this (if need be) chilli#5665: well, I don't think we'd need funding zzz#9120: Alex Polozov is fairly bullish on a language modelling approach https://twitter.com/Skiminok chilli#5665: we'd just need the desire/time zphang#7252: would it be different from just re-training GPT-J on a subset of the pile bmk#1476: neo:neox::code:codex bmk#1476: g h p y time zzz#9120: Polozov is also a major program synthesis researcher chilli#5665: well, he's been moving towards DL for some time now AI_WAIFU#2844: Yeah I just have to dig up my scripts and change some settings, then BMK can do a run after his interviews. AI_WAIFU#2844: Of course that's the same "just" as tweaking GPT-J to do 20B. zphang#7252: except there shouldn't be any technical challenges, unlike scaling up to 20B AI_WAIFU#2844: I was under the impression there were no technical challenges to scaling to 20B rn. AI_WAIFU#2844: It's just the codebase is hardcoded for MP=8 AI_WAIFU#2844: I mean 200B to run with Neo, not even Jax. AI_WAIFU#2844: Granted it was a bit slow and gRPC errors kept killing the runs. Kia#2550: I love this:berk: zphang#7252: is there a deeper meaning to "passion" on the smaller jug
cfoster0#4356: 🤐 mega b#6696: I'm sure the pile has more than enough data chilli#5665: That’s a technical challenge lol bmk#1476: it's a slightly bigger problem than that from what i understand bmk#1476: going above mp=8 means parallelizing across multiple tpus mega b#6696: hmm lemme start with this whole gpt-j-code thing: ```python print("W.I.P") ``` okay good luck 👍 chilli#5665: yeah, that's right bmk#1476: ah ok One#5919: what's stopping us from going up to 100,000 tokens of context? not enough RAM? what about supercomputers? AI_WAIFU#2844: we can do 100,000 now, but it won't be that good. One#5919: does attention get blurry when it goes that far back? One#5919: and also, why does it have to be back. we should be able to generate the text that would RESULT in a certain prompt One#5919: i.e. preceding not succeeding text One#5919: https://tenor.com/view/mc-hammer-hammer-time-cant-touch-this-gif-11068835 kinoc#5731: You could use ThePile to generate some synthetic tasks which would include MLM and reverse direction prediction. You would definitely have enough. Wondering if MLM with code would help ... chilli#5665: depending on what you mean, people have already done this
One#5919: call it "prompt guessing" kinoc#5731: I saw one HF model that was "question guessing". Instead of a sequence of [text,question => answer] it was [text, answer => question] , and good question generation is useful and the corpus is just re-arranging QA datasets or running symbolic parsers over existing text data. kinoc#5731: _```There are two apples on the counter. A:two apples Q:```_ ```What is name of the item that is on the counter? ``` alexyz#3459: any model that's [question, answer → text]? alexyz#3459: Would be interesting to see a model that has to generate the context for the question and answers One#5919: pre-text generation would be fun, the AI trying to guess what prompted you to give a certain prompt 😄 One#5919: or what kind of argument could lead to a certain conclusion One#5919: "Considering all of the above, it can't be argued that *X*" 𓅬 gabriel_syme 𓅬#3220: This reminds me of decision transformer and prefixing a reward. I'm trying that soon One#5919: can we train the AI by rating its generated outputs by coherence, and marking areas where coherence is weak in red One#5919: the longer blocks of text it can generate that make sense, the smarter it must be One#5919: and have several or more humans rating the same text's coherence One#5919: make it a heatmap instead of discrete sections One#5919: human-in-the-loop training One#5919: indefinite. or do transformers and continual learning not mix yet 𓅬 gabriel_syme 𓅬#3220: DT and PL are indeed a great thing to try, gwern wrote a nice write up 𓅬 gabriel_syme 𓅬#3220: The heatmap thing will be map elites for me One#5919: see @StellaAthena i can keep up even if i'm largely non-technical One#5919: we need a weekly voice chat "open forum" where the technical experts and the "big picture" generalists can intermingle and exchange ideas One#5919: collaboration seems super crucial for this particular alchemy
One#5919: building upon building upon and so on bmk#1476: in reverse mode AD, what's the typical way of tracking the children of each node? One#5919: are you asking me? bmk#1476: no im just asking everyone One#5919: 😄 bmk#1476: in my impl, i make each new node in the computational graph add itself to a children list of each of its parents bmk#1476: but this is kinda suboptimal bmk#1476: whenever i build a new graph that overlaps with an old graph, it always tries to propagate through the old graph too bmk#1476: also everything ends up referencing everything else so nothing ever gets garbage collected One#5919: make it hierarchical One#5919: each node and edge can't be equal to every other one One#5919: some children should be less valuable than others and weigh less in the overall scheme AI_WAIFU#2844: You build a graph right? bmk#1476: that.. doesnt really make any sense One#5919: ah well, i tried One#5919: with my limited understanding of what you're talking about One#5919: you got a free completion of a prompt from a real live human bmk#1476: https://gist.github.com/leogao2/ee670000ac1c98430617fd9c0c77f082 here's my code bmk#1476: tldr yes i do build a graph bmk#1476: but it's in kind of a suboptimal way
One#5919: you're trying to make a network whose nodes relate to each other in a meaningful efficient way One#5919: generate it and organize it bmk#1476: im trying to make it feel like pytorch bmk#1476: rather than tensorflow bmk#1476: but i dont really understand how pytorch works under the hood bmk#1476: so this is my closest-approximation One#5919: and it works? zzz#9120: I used this blog as a guide, https://rufflewind.com/2016-12-30/reverse-mode-automatic-differentiation AI_WAIFU#2844: 1 .Keep track of the order that you add nodes to the graph. 2. Make the nodes point at their parents 3. When you backprop accumulate the gradient in the parent and then delete the node. One#5919: hiearchy AI_WAIFU#2844: Run through the nodes in reverse order. bmk#1476: so toposort it and then run through that and instead of summing over the children from the perspective of each node, you have each node add itself to its parent? AI_WAIFU#2844: You don't have to do toposort just keep track of the order in which you did compuations in a stack or something AI_WAIFU#2844: But yeah Louis#0144: Is it a graph of knowledge AI_WAIFU#2844: Gradients are additive Louis#0144: How’s the headache @bmk bmk#1476: slightly better
Louis#0144: Pog AI_WAIFU#2844: It's called a graident *tape* for a reason. bmk#1476: ok yeah - so i thought of this, but i dismissed it because i was worried it would propagate down one path and then down a second path intersecting with the first meaning i'd have to do it again, and could possibly lead to exponentially more expensive backprop One#5919: yoooooooo AI_WAIFU#2844: No you nix the node after you propagate the gradient. zzz#9120: @bmk in the link I pasted there are several implementations starting from simplest to quickest which should help. Including one that uses a Wengert list One#5919: why don't we all join voice chat One#5919: i'm in there One#5919: MULTIMODAL One#5919: pls tho One#5919: 😄 bmk#1476: @One pls stop being disruptive One#5919: i'm trying to contribute bmk#1476: oh ill take a look at that Louis#0144: Most of us don’t really enjoy vc @One Louis#0144: Honestly I think Eleuther would totally survive as a mailing list One#5919: yeah it's very interesting One#5919: never VC AI_WAIFU#2844: You should hit the books first before trying to do that, else you'll just make noise. One#5919: yeah see above
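A minimal sketch of the tape-based approach AI_WAIFU and zzz describe above (hypothetical code, not bmk's implementation): every operation appends its output node to a Wengert list in execution order, and backprop walks that list in reverse, pushing each node's gradient into its parents.

```python
class Node:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value
        self.parents = parents    # nodes this one was computed from
        self.grad_fns = grad_fns  # one fn per parent: upstream grad -> contribution to parent grad
        self.grad = 0.0

class Tape:
    def __init__(self):
        self.nodes = []           # Wengert list: nodes in the order they were created

    def leaf(self, value):
        node = Node(value)
        self.nodes.append(node)
        return node

    def _record(self, value, parents, grad_fns):
        node = Node(value, parents, grad_fns)
        self.nodes.append(node)
        return node

    def mul(self, a, b):
        return self._record(a.value * b.value, (a, b),
                            (lambda g: g * b.value, lambda g: g * a.value))

    def add(self, a, b):
        return self._record(a.value + b.value, (a, b),
                            (lambda g: g, lambda g: g))

    def backward(self, output):
        output.grad = 1.0
        # walk the tape in reverse creation order; gradients are additive, so each
        # node just accumulates into its parents and is then finished with
        for node in reversed(self.nodes):
            for parent, grad_fn in zip(node.parents, node.grad_fns):
                parent.grad += grad_fn(node.grad)

# d(x*y + x)/dx = y + 1 = 5,  d(x*y + x)/dy = x = 3
tape = Tape()
x, y = tape.leaf(3.0), tape.leaf(4.0)
z = tape.add(tape.mul(x, y), x)
tape.backward(z)
print(x.grad, y.grad)  # 5.0 3.0
```

Because gradients are additive, no node ever needs a list of its children, and once a node's gradient has been propagated it can be dropped, which also avoids the everything-references-everything garbage-collection problem.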
One#5919: when i was talking to @𓅬 gabriel_syme 𓅬 One#5919: i was making suggestions that would actually work One#5919: even if most of my ideas are meaningless to the people who know the specific way of implementing them, one might change the entire paradigm. and it's not just me who's a creative generalist with a basic understanding of AI zzz#9120: @bmk if you are trying to get something close to pytorch I do recommend checking out the autograd code if you haven't already (https://github.com/HIPS/autograd/tree/master/autograd), it's what pytorch was directly inspired by and the code-base is small enough to not be overwhelming to navigate One#5919: at this point AI architecture is alchemy One#5919: no one knows what might bring 1,000x improvement over state-of-the-art One#5919: so we should try EVERYTHING Sahl#0630: ironically, attention is important One#5919: i hear it's all you need One#5919: we should try everything that might conceivably work* One#5919: and as many combinations of things that might work as practicable kurumuz#5695: this makes no sense Louis#0144: We have a good idea Louis#0144: :morelayers: One#5919: more Sahl#0630: or unfair comparisons One#5919: exclude non-technical people all you want, code snobs One#5919: cognition is cognition One#5919: even if i don't know the details of pytorch i can get the idea One#5919: "predict the next word"
One#5919: i can sit with you One#5919: https://tenor.com/view/cant-see-hiding-lucille-ball-i-love-lucy-hide-behind-hand-gif-15752400 bmk#1476: @One consider this an official mod warning: if you want somewhere for nontechnical people to discuss AI, please go somewhere else, this is not that place One#5919: can there maybe be a room for it? bmk#1476: it's ok not to know something ,it's not ok to refuse to learn about it before providing opinions on it bmk#1476: no bmk#1476: go make your own server if you want bmk#1476: or find another one bmk#1476: this is not that place One#5919: are you one of the founders? forgive my ignorance Sahl#0630: #communities has some One#5919: a room as in # One#5919: ban me for this One#5919: i'm ready to die on this cross cfoster0#4356: Correct. We don't have such a room because we try to maintain a decently high expectations for conversations here One#5919: generalists are trash huh cfoster0#4356: Barring goose memes One#5919: no good ideas can come from a non-AI expert One#5919: that's idiotic EricHallahan#1051: > We uphold high norms of polite discourse. Administration reserves the right to enforce these norms as necessary.
- #rules ari#9020: The people who already have credentials or have established themselves here can always close the server to outsiders, the ones who you are hurting here are the lurkers who just want to learn, and contribute when they actually have something useful to say triggerhappygandi#0001: funny that you excluded goose memes from high expectations bmk#1476: goose memes are the highest form of comedy wtf are you talking about One#5919: my point is that a good idea can come from anywhere, and we live in unprecedented times triggerhappygandi#0001: ikr bmk#1476: @One final warning: this is our server, we make the rules. if you don't like the rules go make your own server AI_WAIFU#2844: Ideas are cheap, we already have far more than we need. Our problem is sorting though them. Your not helping that. EricHallahan#1051: Well, you know the old formula: Comedy equals tragedy plus time. triggerhappygandi#0001: they can but we prefer them coming from people who have implemented some bmk#1476: tragigeese StE_gUy#5856: @bmk any news on your python-tuned gpt model? (sorry, way too tired to remember the actual name of it right now) bmk#1476: ghpy? bmk#1476: too busy to get it up rn StE_gUy#5856: That's the one bmk#1476: if I ever decide to work on it again, step one is a ground up rewrite of pyfra StE_gUy#5856: Kinda overshadowed anyway by this copilot thing, that's gonna be the new hotness at least for a while bmk#1476: meh ghpy was never meant to be a big deal bmk#1476: it was just a thing I did for fun James#6892: Still waiting on people to try it and see how well it actually works. I heard its only 35% good generations.
StE_gUy#5856: GPT-J is damn good at writing code. Just sayin'. James#6892: Its damn good at writing code, but can it write the code you want in your specific context? lol StE_gUy#5856: Like scarily good (at least sometimes). I would be curious to see it implemented by a text editor in a way similar to copilot to see how it stacks up. StE_gUy#5856: True, true. Context is important. James#6892: I have no trouble believing it can write a good piece of code, but my 35% number is whether it generates the **right contextual** code i need in that moment lol StE_gUy#5856: At least part of that problem is figuring out the right input to feed into the LM and then presenting it in a way that makes sense to the user. Doesn't matter if you have a 175B model if you're using it wrong. James#6892: yeah, thats true, and also has to match how the data is trained too for optimal results jmerizia#4039: If I may chime in here, I've been working on a project to tackle this "35% good" for the context issue. The approach I'm running with is to create a programming language that is a loosely defined variant of natural language. Then it's parsed using a web of completion queries that are constructed with the context. This is in contrast to generating the next few lines of code in a single shot. chilli#5665: can you elaborate? Particularly this line > Then it's parsed using a web of completion queries that are constructed with the context. jmerizia#4039: Yea, for example, if you're generating an SQL statement, you can use the schema as context. jmerizia#4039: then once you've generated SQL, you can then use a completion query again to determine the types of the parameters that need to be bound, and then fish through the existing context of variables. That way if you're not happy with the variables that were bound, you can re-write that line of natural language and bind the variables you intend. jmerizia#4039: You can go much further and even extract types for static type-checking :p kinoc#5731: I was thinking of something similar. Given the finite context window, what is the most informative information available up to the point of generating the next line of code. It may not be the last N lines, but the most important and informative N lines. kinoc#5731: And both a generic NLP or programming language extractor could be used to stuff that context window. jmerizia#4039: Of course strategies that try to get around the small window constraint run the risk of becoming obsolete when retrieval mechanisms start working well kinoc#5731: yeah. I was just thinking that the equivalent of text summarizing applied to code might have a fighting chance since you would have variables to latch on to. kinoc#5731: That and any libraries called out. Libraries would be an example to "boiler plate context' you might always give high informative score to. kinoc#5731: If there was enough content on a particular library you could train a sub module on just that content. Like Pytorch or SQL, etc, versus specific languages. Hmmm ... do I detect the possibility of MoE ?
kinoc#5731: Some libraries become (or imply) their own subdomain and generate something like their own DSL. (see the common libraries in use around here) jmerizia#4039: Do you mean for completing code, or modifying code from a natural language prompt? I think the latter is quite compelling :p kinoc#5731: I was thinking for completing code, but I can see both. kinoc#5731: You have a window/chunk of code, now find alternate ways of modifying it. What bugs exist? What is a more efficient rewrite? What is more "native" to the normal / expert library user? That may be where some MLM applied to code might come in handy. jmerizia#4039: I'm sure we'll start to see more papers around this Mike#1327: yo guys any of you saw https://copilot.github.com ? EricHallahan#1051: ^ Mike#1327: 👍🏼 Mike#1327: Would making an OSS version of this be feasible on the long-term? kinoc#5731: a search of the channel for "copilot" should return a day's worth of hits Mike#1327: Sorry about this, didn't want to be the n-one, should have checked first bmk#1476: g h p y bmk#1476: https://huggingface.co/lg/ghpy_20k kinoc#5731: Just phrase your coding tasks as interview questions 😋 ... https://discordapp.com/channels/729741769192767510/730510538060071043/859668465886560258 kurumuz#5695: hmm, I wonder if they used gpt2 tokenizer for this model. EricHallahan#1051: This model? kurumuz#5695: no, openai and github's EricHallahan#1051: No idea. EricHallahan#1051: I have a suspicion they did though. kurumuz#5695: Well, you have a lot of code available to finetune. Can't you just change the tokenizer slightly to make things like arithmetic better?
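One concrete version of "changing the tokenizer slightly" for arithmetic, anticipating the digit-by-digit suggestion in the next messages: a sketch assuming the stock GPT-2 BPE tokenizer from HF transformers (purely illustrative, not how OpenAI's model is tokenized).

```python
import re
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")

def split_digits(text: str) -> str:
    # insert a space between consecutive digits so BPE cannot merge them;
    # a real implementation would edit the tokenizer's merges rather than the text
    return re.sub(r"(?<=\d)(?=\d)", " ", text)

print(tok.tokenize("x = 12345 + 678"))                # digit runs merged into multi-digit tokens
print(tok.tokenize(split_digits("x = 12345 + 678")))  # one token per digit
```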
kurumuz#5695: You can make it so numbers are tokenized by digits and that would most likely improve arithmetic EricHallahan#1051: You can make it so numbers are tokenized by digits and that would ~~most likely~~ improve arithmetic AI_WAIFU#2844: Sid has been bashing his head against a similar issue for a while now. AI_WAIFU#2844: Every tokenizer has it's own flavour of stupid. kinoc#5731: How do you find the "least stupid" to rule them all ? EricHallahan#1051: You don't. kinoc#5731: That's why each model has its own task specific tokenizer. kinoc#5731: Where "task specific" can include "Talk pretty to the most peeps" versus "Argue intelligently with mathematicians and code geeks" gdawg16#0493: Good rhyme kinoc#5731: you might find this interesting https://github.com/neulab/external-knowledge-codegen kinoc#5731: https://raw.githubusercontent.com/neulab/external-knowledge-codegen/master/doc/approach.png nz#9710: https://twitter.com/andy_l_jones/status/1410155911457521677 nz#9710: 🤔 sowa705#2498: imo, the only way for something like this to work as an open source thing is local inference and that is infeasible with current hardware sowa705#2498: unless specialized ai acceleration gets cheap then no real ai will be done locally triggerhappygandi#0001: I know I am probably late to the party but openai released a docstring -> code LM for github triggerhappygandi#0001: So is the access for that handed out like their API too Kia#2550: Ow yeah Kia#2550: Is it a paid software:thonk: triggerhappygandi#0001: not rn
triggerhappygandi#0001: but this can't possibly be free Kia#2550: True CRG#8707: Yeah, but GPT-3 uses the GPT-2 tokenizer. StellaAthena#3530: GPT-3 uses the GPT-2 tokenizer StellaAthena#3530: GPT-3 is mostly the same architecturally as GPT-2 triggerhappygandi#0001: is it just fine tuned 13B/175B on github triggerhappygandi#0001: probably StellaAthena#3530: That’s pretty unlikely. You can’t just mix and match models and tokenizers triggerhappygandi#0001: imagine if this fundamentally changes the paradigm lol. At this rate programmers will soon become prompt engineers Drakkaa#3367: that will probably be a thing in the near future Drakkaa#3367: I'm fixing a large dataset with horrible ocr and linebreak removal Drakkaa#3367: going to build something with this: Drakkaa#3367: https://github.com/wolfgarbe/SymSpell Drakkaa#3367: anyone know if its good ? it looks just what i want 🙂 Drakkaa#3367: it's not bad, but it's not great 🙂 Drakkaa#3367: it removes all capitalization and interprets sentences with low word correlation, so it fixes some mistakes but also generates new ones Drakkaa#3367: ah well Kharr#7888: You can train a very small Transformer to fix these mistakes. garbage --> encoder --> decoder --> final sentence Drakkaa#3367: I tried training gpt-neo small on -> input: garbage -> ouput: fixed garbage Drakkaa#3367: but it was too creative, so input wasnt always related to output
Drakkaa#3367: i'm not that great with creating a new tranformer, so i'm not sure how to do that. Sid#2121: you would want to use a seq2seq model rather than an autoregressive one Drakkaa#3367: i'll look into that, thanks Kharr#7888: Try reading through https://huggingface.co/blog/warm-starting-encoder-decoder and maybe warm-start with distill BERT Drakkaa#3367: i will! thanks Louis#0144: whats the biggest JAX MLM? Louis#0144: does anyone know? Louis#0144: after googling I couldnt find much Louis#0144: :/ StellaAthena#3530: @Louis If you train a moderate one, it probably will be the biggest. The Jax ecosystem is not very filled out Louis#0144: ughhh Louis#0144: ok nz#9710: I don't know of any Louis#0144: @sweg do we wanna train 1b roberta Louis#0144: lmao Louis#0144: just for the memes Louis#0144: theres a FLAX roberta training script Louis#0144: kinda surprised no one has trained it sweg#8920: why not if TPUs are easy to access Louis#0144: ye
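Back to the OCR-repair thread: a minimal sketch in the spirit of the warm-starting blog Kharr links above (the model choice, toy example, and config settings are assumptions): a BERT2BERT encoder-decoder fine-tuned to map garbled OCR text to clean text.

```python
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"   # warm-start both encoder and decoder from BERT
)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id

# one toy training pair: noisy OCR in, corrected sentence out
enc = tokenizer("Th1s sentcnce has OCR n0ise", return_tensors="pt")
labels = tokenizer("This sentence has OCR noise", return_tensors="pt").input_ids

# recent transformers versions build decoder inputs from `labels` automatically;
# older versions may need decoder_input_ids passed explicitly
loss = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels).loss
loss.backward()
```

Because the decoder is always conditioned on the encoder's reading of the garbled input rather than free-running, it is much harder for it to drift into unrelated "creative" completions than with the autoregressive fine-tuning tried above.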
StellaAthena#3530: Yeah MLMs are two OOMs behind autoregressive Louis#0144: how good is rotary for MLMs StellaAthena#3530: Worse than for autoregressive StellaAthena#3530: The original authors were focused on MLMs and their results were far less exciting than ours Louis#0144: ok if we set up Roberta to train on the pile can we use one of the eleuther pods StellaAthena#3530: DM me, let’s talk alstroemeria313#1694: hey, if i have multiple outputs from a classifier for the same input (i.e. random augmentations of it) alstroemeria313#1694: can i average the logits together and then softmax or should i softmax then average the probabilities? alstroemeria313#1694: (also if i average then softmax i should logsoftmax first, right?) alstroemeria313#1694: (oh, it doesn't matter if you logsoftmax, you get the same result, i tried it just now) Spy#9778: If you average the logits that's something like taking the geometric mean of the distributions Spy#9778: I think averaging the post softmax distributions is more common but I may be wrong alstroemeria313#1694: ah, ty :blobcutehappy: alstroemeria313#1694: > Remark. For discrete input distributions, the analogous definition of conflation is the normalized product of the probability mass functions alstroemeria313#1694: i.e. sum the logits instead of mean? alstroemeria313#1694: this is gonna result in a much sharper distribution alstroemeria313#1694: i don't really think the observations are independent though alstroemeria313#1694: it seems wrong to do in this case due to lack of independence Hatter the mad#7424: Folks random question Hatter the mad#7424: How much operative (not GPU) memory does one need in order to train 3B parameter model on GPU?
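A small torch illustration of the two ensembling options alstroemeria313 and Spy discuss a few messages up (the shapes and random tensor are made up):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 10)  # classifier outputs for 8 augmented views of one input, 10 classes

# Option 1: softmax first, then average (arithmetic mean of the 8 distributions)
probs_mean = F.softmax(logits, dim=-1).mean(dim=0)

# Option 2: average the logits, then softmax. softmax(mean of logits) is proportional
# to the geometric mean of the individual distributions, so this behaves like a
# product-of-experts ensemble and comes out sharper.
probs_geo = F.softmax(logits.mean(dim=0), dim=-1)

# As noted above, applying log_softmax before averaging changes nothing: it only
# shifts each row by a constant, which the final softmax cancels out.
assert torch.allclose(
    probs_geo,
    F.softmax(F.log_softmax(logits, dim=-1).mean(dim=0), dim=-1),
    atol=1e-6,
)
```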
StellaAthena#3530: > Training a 1.5 billion parameter model is estimated at $1.6m [92]. StellaAthena#3530: 🤔 Hatter the mad#7424: Sorry lol fine tuning Hatter the mad#7424: Yeh fine-tuning marmiteCloud#5923: this works well for our OCR (with sym spell, edit distance is important): https://cdn.discordapp.com/attachments/729741769738158194/859831699281018880/unknown.png Drakkaa#3367: also using symm i see, i'll try that example StellaAthena#3530: What? StellaAthena#3530: I wasn't talking to you. I was laughing at a paper I read Hatter the mad#7424: Ah ok StellaAthena#3530: Training a 1.5B model does not cost 1.6M chilli#5665: which paper? chilli#5665: lol Daj#7482: Training can cost anything if you're bad enough at it :bigbrain: StellaAthena#3530: "Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding" says that when making excuses for why they trained a tiny model Hatter the mad#7424: Well you can if you really want to train a model to cost anything rly:yann: chilli#5665: wait, @StellaAthena , you left out the best part chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/859833003961090078/unknown.png chilli#5665: somehow, 1.5 Billion => 1.6m dollars chilli#5665: but scale that up by 100x chilli#5665: 3x increase in dollars required
chilli#5665: lol Hatter the mad#7424: Wow that is god tier math indeed guac#4716: economies of scale in large lm training hmmm StellaAthena#3530: @guac @LDJ @Hatter the mad These numbers are sufficiently wrong that I can promise a 20B model to anyone who is willing to give 500k in funding by the end of July with no outside help. I won't even ask for a salary, just the leftovers from whatever of the 500k doesn't get spent. I'll wager my 600k apartment as collateral. bmk#1476: the 1.6M number is so hilarious bmk#1476: where the heck did they get this StellaAthena#3530: Anyone who thinks the numbers in this paper are anywhere close to right should 1. Take me up on the offer 2. Hire me immediately and pay me a million dollars a year. Because I'm saving you more than that on the very first model I deliver. Daj#7482: I'll do it for 1$ less than Stella guac#4716: didn't ben say the total cost of gpt-j 6B would be around $80k lmao (sorry if i'm misremembering what he exactly stated) Louis#0144: what paper are they citing Louis#0144: LMAO Louis#0144: what is [92] James#6892: Last time I trained a 1.5B model from (not GPT) from huggingface it costed $10 🙂 bmk#1476: 4.6M is also kinda sus but I haven't ever spent time figuring out where exactly the math is wrong StellaAthena#3530: > O. Sharir, B. Peleg, and Y. Shoham, “The cost of training nlp models: A concise overview,” arXiv preprint arXiv:2004.08900, 2020. bmk#1476: oh my god bmk#1476: I think I know this paper chilli#5665: The cost of training nlp models: A concise (and wrong) overview
chilli#5665: apparently bmk#1476: it's the one where they factor in the hyperparam search they assume the authors did bmk#1476: lol chilli#5665: lol StellaAthena#3530: @bmk I have some very rough estimates for 175B that are about half of what they're quoting. A little less even. chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/859844773213110322/unknown.png bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/859844782187872319/Screenshot_from_2021-06-30_11-15-48.png chilli#5665: lmao bmk#1476: damn beat me to it chilli#5665: clown town bmk#1476: lmao imagine spending $80k on a 1.5B model bmk#1476: even the low estimate is insanely high chilli#5665: well, that's not hilariously off lol chilli#5665: are they factoring in hardware costs? guac#4716: what's with those upper bounds lol bmk#1476: you shouldnt count hardware costs though bmk#1476: because those are amortized over the lifetime of the hardware bmk#1476: at most count how much it would cost to rent the hardware for that time chilli#5665: the fuck bmk#1476: wait it gets better
chilli#5665: where do they even show how they came up with these figures bmk#1476: look at the future work section bmk#1476: > That said, we see several factors that may help tame this explosion and prevent things fromgetting out of hand. In increasing order of importance: https://cdn.discordapp.com/attachments/729741769738158194/859845245855465542/Screenshot_from_2021-06-30_11-17-02.png chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/859845314790162473/unknown.png chilli#5665: wtffff chilli#5665: 🤡 Daj#7482: AI labs posting their Ls chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/859845506921529394/unknown.png Daj#7482: Good thing they're humble at least bmk#1476: the points they give in future work are all somewhere from slightly to very bad bmk#1476: we will not run out of data lmao bmk#1476: also lol https://cdn.discordapp.com/attachments/729741769738158194/859846005134852096/Screenshot_from_2021-06-30_11-20-50.png bmk#1476: more attention heads = worse if embed dim is held constant bmk#1476: this isnt evidence of models getting bigger at all bmk#1476: clearly they just took all the hparams, assumed larger values = bigger, and went with it bmk#1476: :withered: https://cdn.discordapp.com/attachments/729741769738158194/859846378673668128/Screenshot_from_2021-06-30_11-22-10.png CRG#8707: T5 didn't keep emb dim constant in the attn, there's an expansion. bmk#1476: oh they didnt? bmk#1476: huh bmk#1476: but then putitngi t in the same plot doesnt make much sense anymore does it
Louis#0144: yoooo https://cdn.discordapp.com/attachments/729741769738158194/859847899714158662/Screen_Shot_2021-06-30_at_1.28.26_PM.png EricHallahan#1051: https://discord.com/channels/729741769192767510/747850033994662000/859846112735133726 EricHallahan#1051: :lucid: Louis#0144: ice cream strikes again Louis#0144: i want some ice cream tbh Louis#0144: damn Louis#0144: chocolate EricHallahan#1051: ^ mgostIH#0245: What's a practical estimate for the cost of training 1.5B? James#6892: I'd imagine the gpt-neo team would know this best Louis#0144: https://arxiv.org/abs/2004.08900 Louis#0144: im actually impressed how good their diagrams are Louis#0144: but how wrong the information is Louis#0144: LMAO Louis#0144: like Louis#0144: they put so much effort into neatly presenting wrong information mgostIH#0245: Ye that's the thing above, I doubt you need 80k dollaridoos for training that bmk#1476: $0 Louis#0144: ye mgostIH#0245: how
bmk#1476: step 1: meme yourself to global AGI player status mgostIH#0245: Well, how long does it need to train for mgostIH#0245: I guess we could at least have electricity estimates StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/859849719694098463/image0.png StellaAthena#3530: Okay how bad of an idea is this mgostIH#0245: Although I don't really think it's a price that matters in the large scale of things, was just a bit curious over the cost of raw compute StellaAthena#3530: @mgostIH Less than 10k. It’s hard for me to estimate down because you lose economies of scale. But it should be easily under 10k. Louis#0144: LMAO bmk#1476: the subject line sounds like a nigerian scammer Louis#0144: go for it Louis#0144: that would be so funny bmk#1476: so it'll probably go straight in the scam bin Louis#0144: honestly the paper is almost embarassing Louis#0144: its like bmk#1476: ikr Louis#0144: a 2 page vent bmk#1476: it's so bad bmk#1476: it's totally factually wack bmk#1476: and it has more cites than i have :withered: AI_WAIFU#2844: Consider the following:
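For anyone who wants to sanity-check the figures being mocked here: a back-of-envelope using the standard 6·N·D FLOPs approximation for training cost. Every constant below (token count, utilization, hourly price) is an assumption picked for illustration, yet even pessimistic choices land orders of magnitude under $1.6M.

```python
params, tokens = 1.5e9, 30e9                  # assumed ~30B training tokens
train_flops = 6 * params * tokens             # 6 * N * D approximation
effective_flops = 100e12 * 0.30               # assumed 100 TFLOP/s accelerator at 30% utilization
gpu_hours = train_flops / effective_flops / 3600
print(f"{gpu_hours:,.0f} accelerator-hours, ~${2 * gpu_hours:,.0f} at $2/hour")
# roughly 2,500 hours, i.e. a few thousand dollars of rented compute
```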
They wrote that paper that way so that the grant agencies would give them more money. bmk#1476: :withered: :withered: :withered: bmk#1476: all the equilibria are fucked mgostIH#0245: Hello, I am a cyborg prince that got stuck in this timeline because of a fault in my time machine. I need your help in building another AGI, with your donation I can travel back to my reality and will promise you an aligned AI. chirp#4545: MLPerf results are out! https://mlcommons.org/en/training-normal-10/ chirp#4545: > tpu-v4-6912 StellaAthena#3530: I sent the email bmk#1476: lmao bmk#1476: i bet they wont even see it bmk#1476: it'll go straight into the spam folder chilli#5665: talking a bit about it in #research Louis#0144: Where’s tenstorrent Louis#0144: @kurumuz Louis#0144: I’m rooting for them Louis#0144: lmao StellaAthena#3530: Put this on the starboard damnit chirp#4545: I think they’re still a ways away from shipping their large-scale training system bmk#1476: what if they actually say yes tho bmk#1476: what's your game plan
bmk#1476: tfrc+mtj and then pocket the 500k? cognomen#6297: cue klezmer and a victory dance as the first step zphang#7252: at that budget Stella can probably pay for the TPUs zphang#7252: idk if using TRC to win monetary bets falls within the TOS :p bmk#1476: yeah but then she can't pocket the entire 500k zphang#7252: wait, she probably has to pay for the TPUs, since she's making a point about how much it costs StellaAthena#3530: I was planning on using AWS and “only” pocketing 300k Louis#0144: I’m rly excited for tenstorrent tbh Louis#0144: Wormhole looks so cool mega b#6696: bouta email openai to transfer the ownership rights to me, wish me luck 👍 Louis#0144: Lul TurnTrout#5101: Can we get :stacy: react on this server? We only have :chad: right now UnsupervisedLearner#4148: has anyone done a giant MoE model on vision? With public results on metrics? CRG#8707: Other than https://arxiv.org/abs/2106.05974 ? UnsupervisedLearner#4148: Nope I missed it, thank you. Very recent, cool stuff UnsupervisedLearner#4148: :bigbrain: idea here, not sure if this exists Anyone do self-distillation on each expert of the MoE? As in, after pre-training, split the experts up into individual dense models, distill (in whatever scheme) the whole thing into each expert, and then train again? inox#5400: not really the same but https://arxiv.org/abs/1706.00384 inox#5400: big caveat being that model disillation on convnets and transformers is quite different
UnsupervisedLearner#4148: really? I guess because they take CE on logits for the former and some other metric for the latter? Something else? inox#5400: DeiT's soft and hard attention are both different from traditional model distillation, and traditional model distillation doesn't actually work very well on most ResNet-like architectures, you've got to use attention transfer (which has nothing to do with attention in transformers) inox#5400: adding a new token to the transformer to do the distillation affects a lot more than adding an extra term to the loss function of the ConvNet UnsupervisedLearner#4148: It's always weird to me that you add a token for downstream tasks instead of using the final represented token sequence CRG#8707: The scaling ViT paper removed it for "Multihead Attention Pooling" <https://arxiv.org/abs/2106.04560> https://cdn.discordapp.com/attachments/729741769738158194/859871766729064448/Screenshot_20210630-210227.png nz#9710: yea most ViT-derivatives have moved away from the class token as it was pretty much a leftover from NLP alstroemeria313#1694: Doesn't the CLIP ViT take the embedding from the position of the last input token? alstroemeria313#1694: Also if the exact token count is a problem they can use 16x16 patches instead CRG#8707: CLIP doesn't have a CLS token? :thonk: alstroemeria313#1694: my bad, it takes it from the first alstroemeria313#1694: I just reread the source to make sure. alstroemeria313#1694: hm alstroemeria313#1694: nah, looks like it does alstroemeria313#1694: it goes first alstroemeria313#1694: this would be the relevant line https://github.com/openai/CLIP/blob/main/clip/model.py#L223 CRG#8707: Yeah, makes sense alstroemeria313#1694: I was confusing it with the text encoder which uses the hidden state at the EOS token as the output. alstroemeria313#1694: So you have to pick it out of a different place for each sequence. zphang#7252: More of a difference between MLM and LMs zphang#7252: LMs have to take the last token
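A toy illustration of the two read-out conventions being contrasted here (the tensors are made up, and this only paraphrases the idea rather than reproducing CLIP's actual code):

```python
import torch

batch, seq, dim = 2, 77, 512
hidden = torch.randn(batch, seq, dim)              # final-layer transformer outputs
token_ids = torch.randint(1, 49407, (batch, seq))  # dummy token ids

# MLM/ViT-style: read out a dedicated token at a fixed (first) position
cls_features = hidden[:, 0, :]

# LM-style: read out at the per-sequence EOS position, which differs per example
# (CLIP's text tower can use argmax here because its EOS token has the highest id)
eos_pos = token_ids.argmax(dim=-1)
eos_features = hidden[torch.arange(batch), eos_pos]
print(cls_features.shape, eos_features.shape)      # both (2, 512)
```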
alstroemeria313#1694: https://cdn.discordapp.com/attachments/821173872111517696/859884083353747507/out_0170000.png alstroemeria313#1694: This is FID 12.07? alstroemeria313#1694: FID on this run is still going down zphang#7252: MLMs it costs nothing to have an separate token, and it lets you keep a token which doesn’t need to encode any tokenwise info alstroemeria313#1694: Whereas this is FID 33.99 https://cdn.discordapp.com/attachments/821173872111517696/859885065856614400/out_0140000-2.png alstroemeria313#1694: I am not sure I could tell which of the two is supposed to be better! alstroemeria313#1694: I guess the FID 12 is better but it's not super better alstroemeria313#1694: The 33.99 is from an RGB space WGAN-GP and the 12.07 is from a VGG-16 relu3_3 space WGAN-GP. alstroemeria313#1694: With the same G arch. UnsupervisedLearner#4148: I have a mental model of the tokens that pop out at the last layer of a transformer as a recomposed representation of the input tokens based on their context Having a separate token takes away some capacity of the model to go into building this representation. Like how you use a projection head when doing vision SSL One#5919: my mental model of your mental model of the transformer: a network where each node is connected to each other one whether it's by 1 to almost 0 zphang#7252: using the last token means the last token needs to fulfill two goals: contain information about the whole sequence, and be easily projectable to the embedding space to predict that token One#5919: but it's all relational, each token is defined by all the other tokens. like indra's net One#5919: a token can't do much on its own EricHallahan#1051: Also known as a GNN lol One#5919: aha One#5919: how about this for optimization: define the difference between almost zero and zero as trivial One#5919: so whether a node is very distant from another node or not connected at all it's considered the same thing
One#5919: so whenever you have a node not connected to another node you say they're faintly connected One#5919: or for every faintly connected node you say it's connected to every non-connected one One#5919: so it doesn't have to be an exhaustive graph UnsupervisedLearner#4148: The whole sequence has information about the whole sequence The second can be overcome without sacrificing capacity, IMO One#5919: more like a jpg compressed one EricHallahan#1051: Also known as quantization. EricHallahan#1051: Also known as sparsification. One#5919: aha One#5919: so we make all the 0s into 0.000000000000000000000000000001s? One#5919: i.e. everything that doesn't connect to anything else, connect it to EVERYTHING else but very slightly EricHallahan#1051: No, the other way around. the goal is to *not* have to calculate it. One#5919: then everything that connects to a little else, connect it to almost everything One#5919: we don't calculate it Tinytitan#5596: why One#5919: we ascribe it Tinytitan#5596: would you do that One#5919: because you're emulating a brain Tinytitan#5596: brains arnt fully conected
One#5919: think about how it goes about categorization One#5919: of a completely novel object One#5919: it's a full list search Tinytitan#5596: like at all One#5919: yup Tinytitan#5596: they are verry sparce One#5919: i'm talking about the internal representation of the world One#5919: the world hits you with something completely new One#5919: so you compare it to everything UnsupervisedLearner#4148: You probably can trace a path through every node though One#5919: because it compares to nothig One#5919: yup EricHallahan#1051: That doesn't mean much. I am certain that the brain is one graph if you abstract it, doesn't mean that it isn't very sparse. One#5919: we're trying to distill graphs One#5919: like factor analysis Tinytitan#5596: tecniques for that type of thing exist Tinytitan#5596: though I do not see how this would help One#5919: right now hardware is king One#5919: how often has that stayed true? Tinytitan#5596: all the time
One#5919: ok ok bitter lesson One#5919: so the bitter lesson is like moore's law One#5919: pretty much an axiom One#5919: http://www.incompleteideas.net/IncIdeas/BitterLesson.html Tinytitan#5596: we are all aware of the bitter lesson One#5919: i doubt all 10,000 of us are Tinytitan#5596: all active memebers Tinytitan#5596: or, semi active One#5919: leveraging optimization and compression vs heavy hardware to emulate the same functionality doesn't contradict the bitter lesson Tinytitan#5596: we are quite aware that model compression is a thing One#5919: but what's the state of the art for graph compression One#5919: https://www.youtube.com/watch?v=RzWB5jL5RX0 One#5919: is it greedy algorithm shit One#5919: i'd just flip all the zeros to almost-zeros One#5919: brains are fuzzy One#5919: not quantized Tinytitan#5596: right, but this would not help preformance in any way One#5919: training One#5919: it's like free training Tinytitan#5596: How?
One#5919: it assumes everything is connected and goes from there Tinytitan#5596: thats, like the oposite of model compression One#5919: exactly One#5919: which lets you do the best compression Tinytitan#5596: No???? One#5919: claude shannon shit - information is all about the amount of surprise at each junction Tinytitan#5596: being delibertly inefficient is not good One#5919: if it's a surprise between 0 and 0.0000000000000001 one is infinitely less surprise Tinytitan#5596: i dont think you understand how ANNs work One#5919: sometimes the ultimate inefficiency is the ultimate efficiency. like brute force and what richard did Tinytitan#5596: they dont use suprise internaly One#5919: one sec One#5919: https://arxiv.org/ftp/arxiv/papers/1901/1901.02478.pdf One#5919: i'd seen this or something like this before One#5919: they COULD if we WANTED them to Tinytitan#5596: it would be massivly ineficent and slow One#5919: hey could the comments generating the code for copilot be several paragraphs? Tinytitan#5596: probably One#5919: can't wait for that access One#5919: i signed up for the program, answered "Rarely" when asked how often i use VSC 😄 once counts as rarely
One#5919: i was deploying this https://paintiterations.web.app/ One#5919: which is a partial demo of http://zeroprecedent.com/lore/flipmark.html One#5919: which i wrote Tinytitan#5596: #off-topic One#5919: the multimodal version of copilot where you can show it the functionality you with an animated gif will be heavy One#5919: sorry was just rambling cfoster0#4356: Lord, people sure do have some *opinions* on Copilot cfoster0#4356: It just doesn't feel so significant to warrant this reaction imo James#6892: I think its just something that has strong marketing appeal and people want to talk about, it remains to be seen whether the utility is actually there though. Deleted User#0000: developers tend to think too highly of their skills Deleted User#0000: we've seen this reaction every time deep learning makes headway into a new domain anyways. this is nothing new Deleted User#0000: whether it's computer vision or linguistics Deleted User#0000: even protein folding Deleted User#0000: I also predicted this long ago https://news.ycombinator.com/item?id=19751880 Deleted User#0000: Except OpenAI didn't do it first. It was Jacob Jacobson with Tabnine AI_WAIFU#2844: It looks like it's actually good. Which is causing mountains of salt to flow. See my previous comment on the subject. bmk#1476: not 100x bmk#1476: the numbers are just garbage bmk#1476: the paper is terrible in a number of ways StellaAthena#3530: @LDJ The author of the paper knows absolutely nothing. This isn't a question of tech advancing or their assumptions being a little off. It's closer to 1000x off than 50x off
StellaAthena#3530: 3 actually, with several more on the way StellaAthena#3530: The best so far being: https://discord.com/channels/729741769192767510/851918317039255592/852009851219214366 StellaAthena#3530: Yeah it's not on HF quite yet StellaAthena#3530: but if you check out the pins in #gpt-j you can find instructions on how to run it through HF's interface bmk#1476: wait, we have a roadmap? bmk#1476: uhh EricHallahan#1051: No bmk#1476: I don't think we have a roadmap lol StellaAthena#3530: We don't have a road map bmk#1476: well they're wrong bmk#1476: if that person happens to have been me, then it's probably because past-me was dumb bmk#1476: (i still am, but I also was, too) StellaAthena#3530: Our official position is now that we don't have estimates for anything StellaAthena#3530: Makes life a lot easier lol UnsupervisedLearner#4148: I hope so I hate writing tests. Now I only will have to read them bmk#1476: all the TDD people rn: :wojackcry: bmk#1476: :noo: noooo you can't write the code first and then autogen tests bmk#1476: :brr: haha codex go brrr StellaAthena#3530: Browse around the channels StellaAthena#3530: We do all of our research out in the open. There's no reason you can't lurk in project channels
bmk#1476: yeah our number one goal is to do stuff, which doesn't leave much time for PR or stirring hype or whatever bmk#1476: or communication in general, which sucks, but we only have so much time TruGerman#6672: There certainly is a lot of communication, just not the kind that's easily comprehended by mere mortals such as myself AI_WAIFU#2844: Let's put it this way, many of us hang around in this discord all day and none of us fully understand what's going on here, who's working on what, and what the status of things are, let alone when we can expect them to be done. chilli#5665: not even the people working on a project even know when they expect the project to be done TruGerman#6672: So I'm not the only one. Cool. 𓅬 gabriel_syme 𓅬#3220: organized? wdym :guilty: bmk#1476: :harold: TruGerman#6672: Something about genius and chaos being two sides of the same coin Kia#2550: I giant Organized Chaos 😄 𓅬 gabriel_syme 𓅬#3220: so had a question, is there a way to stop generating output with gpt models when you hit a specific token? EricHallahan#1051: I believe there is a feature for that in HF? 𓅬 gabriel_syme 𓅬#3220: oh really, welp. My bad, I'll go find it bmk#1476: the HF generate function is a huge Swiss Army knife 𓅬 gabriel_syme 𓅬#3220: that should be easy to do then, thanks 𓅬 gabriel_syme 𓅬#3220: want to make my layout generation more efficient, this will help bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/859947428848074752/Screenshot_20210630-180348_Chrome.jpg cfoster0#4356: I believe the core of the strategy for this is just a for loop with a check 𓅬 gabriel_syme 𓅬#3220: ohhh, so you for loop with a length 1? cfoster0#4356: Like unless there's some weird pipelining optimization I don't know about
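A minimal sketch of the "loop with a check" approach for stopping on a specific token (the model, stop token, prompt, and greedy decoding below are arbitrary illustration choices); HF's generate() can also stop on a chosen token directly if it is passed as eos_token_id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

stop_id = tokenizer.encode("\n")[0]                     # assumed stop token: a newline
ids = tokenizer("ROOM: kitchen |", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(128):                                # hard cap on generated tokens
        logits = model(ids).logits[:, -1, :]            # next-token logits (no KV cache, kept simple)
        next_id = logits.argmax(dim=-1, keepdim=True)   # greedy; could sample instead
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == stop_id:                   # the "check" in the loop
            break

print(tokenizer.decode(ids[0]))
```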
cfoster0#4356: Nah like you loop and at every loop iteration you check the output against your condition 𓅬 gabriel_syme 𓅬#3220: cool I'll try it Louis#0144: and its brutally difficult to modify Louis#0144: because it is not modular Louis#0144: and not abstracted Louis#0144: like at all Louis#0144: 😄 Louis#0144: wanna add a logits processor? lmao good luck kinoc#5731: Hack it once and get a metal 🦸 . Hack it twice and get a padded room ... 🤪 nostalgebraist#3542: HF generate is pain mkualquiera#3484: I've seen that same interface in many ML libraries and it's horrible mkualquiera#3484: a ``generate()`` function that is supposed to do everything but that you can't modify at all 𓅬 gabriel_syme 𓅬#3220: thankfully I only need it for simple things, a stopping criterion would be enough 𓅬 gabriel_syme 𓅬#3220: I was considering to retrieve probs from beam search and then try to mutate those trees (is that even possible?) but I quickly gave up lol kinoc#5731: maybe check "expand_node" and "grow_branches" of https://github.com/summerstay/sentence-completions-gpt-2/blob/master/completions_tree.py for clues or inspiration ... kinoc#5731: Their "true_poetry" project may also be relevant but the same functions are more "evolved", since poetry is about placing constraints on generation Manny96#3437: Hey Guys and Gals! The GPT set of algorithms are intractable for a class of problem statements disproportionately larger — an MDP; a rigorous common way to solve that problem is through "Deep Reinforcement Learning". But, there is a component we have to solve an MDP, partially, GPT can do state estimation -- but can't do estimation of the complete bellman value function tuple <r, d, s, a> (reward, discount, state, action, transition of tuple). This is important for language manipulation tasks (dialogue modelling). Manny96#3437: What we could do is solve the bellman value function through a recursive function of GPT - solve subsections of the bellman value function tuple UnsupervisedLearner#4148: You know, this inspired me so I wrote up some pseudocode just now https://cdn.discordapp.com/attachments/729741769738158194/860005927243350016/IMG_20210630_225528.jpg Manny96#3437: Holy shit! Your fast!
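Back to the stop-at-a-token question above: recent versions of `transformers` expose the per-step check cfoster0 describes as a `StoppingCriteria` hook, so you don't have to hand-roll the loop. A minimal sketch — the model name, the newline stop token and the exact API availability are assumptions to check against your installed version:
```python
# Hedged sketch: stop generation once a chosen token id is produced.
# Assumes a transformers version that exposes StoppingCriteria; adjust as needed.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          StoppingCriteria, StoppingCriteriaList)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

class StopOnToken(StoppingCriteria):
    def __init__(self, stop_token_id):
        self.stop_token_id = stop_token_id

    def __call__(self, input_ids, scores, **kwargs):
        # Halt as soon as the most recently generated token matches the stop token.
        return input_ids[0, -1].item() == self.stop_token_id

stop_id = tokenizer.encode("\n")[0]  # e.g. stop at the first newline
prompt_ids = tokenizer("Room layout:", return_tensors="pt").input_ids
out = model.generate(
    prompt_ids,
    max_length=64,
    stopping_criteria=StoppingCriteriaList([StopOnToken(stop_id)]),
)
print(tokenizer.decode(out[0]))
```
If the stop token is a single id, passing `eos_token_id=stop_id` to `generate` is an even shorter route to the same behaviour.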
cfoster0#4356: Why not, specifically? That's not obvious to me UnsupervisedLearner#4148: It might or might not meet or exceed state of the art on common RL baselines Manny96#3437: Ah no actor critic model - does explain it? cfoster0#4356: Not by itself. You might need to use more words to explain your reasoning :berk: Manny96#3437: I'll provide some learning resources, aye Manny96#3437: I'll right up some pseudocode, maths — and hopefully get something compiling start of next week. Manny96#3437: Although, open source is community driven — let's have discussions and unanimous decisions? Manny96#3437: Sorry, I'll right up a blog and you'll be my first invite... enquiry welcomed. bmk#1476: s/right/write/g bmk#1476: pls Manny96#3437: s?g? guac#4716: (It’s vim-speak for replace right with write and do it globally) Manny96#3437: Legendary, thank you bmk#1476: once is a typo, twice is bad spelling Manny96#3437: I learned pigeon English in Africa, so I'm fundamentally a bad speller lol (or maybe could be significantly better - below average in the educated class hahaha) Manny96#3437: And also, an engineer by trade Manny96#3437: Just as a child Manny96#3437: Post I learned in Australia Manny96#3437: Very welcoming — appreciate it. bmk#1476: oh pidgin languages are interesting stuff
bmk#1476: what's your first language? Manny96#3437: When I was a child it was Amharic (lanuage in Ethiopia, Africa). Currently, English - but had to train hard to learn Australian, English - still get a little confused sometimes. Manny96#3437: A lot of expats in Ethiopia from the wars – so we spoke pidgin English. bmk#1476: ah interesting Manny96#3437: We were ourselfs expats. Manny96#3437: Funny stories — I'll divulge at some point. bmk#1476: pidgin languages are really cool because of how they mix together different languages bmk#1476: and it's different from context switching too Manny96#3437: Refugees Manny96#3437: Yeah, true! Manny96#3437: You might be laughing at me and thinking to yourself — definitely has actions and states; well, it doesn't parametrically do that online with an actor-critic model - just a marginal probability of state-action sequence — no reward signal or discount factor from a critic. Manny96#3437: Online or Offline Manny96#3437: Check out the Bellman Equation Manny96#3437: And deep Q-learning Manny96#3437: But, have seen Transformer Deep Reinforcement Learning AI_WAIFU#2844: Have you read the decision transformer paper. Not quite what your describing but in the same ballpark Manny96#3437: No, cool, thank you! 𓅬 gabriel_syme 𓅬#3220: highly recommended! Also this one which does seem a bit more systematic when describing what is going on wrt RL (given who wrote it): https://trajectory-transformer.github.io/ Manny96#3437: Thank you very much! Manny96#3437: Both!
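For anyone following the RL exchange above, the Bellman optimality equation being referenced is, in the usual notation,

$$Q^*(s,a) = \mathbb{E}\left[\, r + \gamma \max_{a'} Q^*(s',a') \;\middle|\; s,\, a \,\right]$$

where $r$, $\gamma$, $s$, $a$, $s'$ are the reward, discount, current state, action and next state — essentially the $\langle r, d, s, a \rangle$ tuple mentioned earlier. Deep Q-learning fits a network to this fixed point, and that bootstrapped value estimate is the "critic" signal a sequence model trained purely on state–action likelihood doesn't get; the decision transformer paper linked above, for instance, sidesteps it by conditioning on returns-to-go instead.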
Manny96#3437: Pretty, cool! There are so many more things you can pile on that; to name a few, HDRL (hierarchical deep reinforcement learning) Deep-Q, model based learning-to-learn, hindsight experience replay (utility states are an identity function). Manny96#3437: Temporal Difference Learning (Bellman function tuple are caried over time) Manny96#3437: I'm intermediate at best — use to be a little better, but recently decided to do more business management - just to learn, and so essentially need to improve. Manny96#3437: And as I'm strictly open source — it can be hard to find projects in Open AI — definitely growing. I have a non conformist educational background (no courses from tuition and closed institutions); it's hard to find jobs in my field. Manny96#3437: I have to be totally honest - currently going through a clinical process - changing medication — not feeling the best — recovering, although - in a facility at the moment; should be back on track in a few weeks (no terminal illness). Manny96#3437: I'll be much more regular, soon Manny96#3437: And efficient! Manny96#3437: Just some historical stuff — doing much better now, although. 𓅬 gabriel_syme 𓅬#3220: Apparently FlaxGPTNeo will be ready early next week by HF 🙂 In time for my flax-jax project Sounds cool though right? Manny96#3437: Sounds great! I'll look into it. Manny96#3437: "Yeah, this is an AI project, community. But, can't help with my business management background (MIT OCW and some education from a Stanford GSB student - 6 months — it was pretty cool). So, look into growth hacking Eleuther.ai organisation (to be totally honest, personal gains — looking to run a business with some of Eleuther.ai tech). And essentially building DAO - DeFi, dApp would be exponential in growing business; it can be a fork from Eleuther.ai." - Message I sent to Sid. What do you guys and gals think? My personal favourite Blockchain Tech at the moment include, Etherium Smart Contract (growing faster than Central Bank interest - due to Bitcoin) and my most favourite Chia-Network (very energy efficient and backed by BitTorrent founder - Bram Cohen). Have heard people are liking dogecoin. To my personal failure I'm not very well versed in Blockchain as I could be --- into AI. Manny96#3437: Chia-Network is more energy efficient at mining than Bitcoin nz#9710: can't wait for the ssd/hdd shortage thanks to chia Manny96#3437: Lol Manny96#3437: I promise I'll build a dApp with Eleuther.ai kurumuz#5695: :ThinkingRot:
Deleted User#0000: Is there any website for custom image generation with gpt3 dalle? Deleted User#0000: Paid or free EricHallahan#1051: They only released the VQVAE, not the Transformer behind it. EricHallahan#1051: So no. EricHallahan#1051: CogView exists though. UnsupervisedLearner#4148: Lmao chia is such a huge scam. We don't need more Po(expensive thing) chains. If you need proof of storage I can use IPFS cluster and actually have the storage be useful. Ded on arrival no wonder the luddite media is pushing it to make nerds mad at another potential shortage bmk#1476: yeah I have no idea who people are so obsessed with various proof of (not work or stake) stuff bmk#1476: let's just get pos working and then never think about the base chain again UnsupervisedLearner#4148: PoS *is* working. At least, it seems to be. Check out Kusama, it's live and running now. Polkadot coming soon bmk#1476: I'm waiting for eth pos bmk#1476: it's easy to try something out on your $1M cap chain that nobody cares about UnsupervisedLearner#4148: Eth is slow as fug and honestly sucks (even though I own a bunch of it 😃 ) bmk#1476: eth pos should fix that bmk#1476: also lol just wait for low gas times UnsupervisedLearner#4148: Polkadot is a huge mcap and backed by one of the original Eth developers bmk#1476: literally right now gas prices are 24 gwei bmk#1476: I don't think I've ever paid more than 50 gwei for a transaction, I always just wait until gas fees are low again bmk#1476: I'm never in a rush
UnsupervisedLearner#4148: What about a 1.8B mcap nominated pos chain with active sharding and WASM based (as opposed to EVM) runtime UnsupervisedLearner#4148: https://www.coingecko.com/en/coins/kusama bmk#1476: I don't see how wasm is better than evm for contracts but ok UnsupervisedLearner#4148: It's better for runtimes because it allows better soft forking and heterogeneous shard chains alstroemeria313#1694: we have CLIP methods to generate images from text but they're kind of... more artistic and considerably less coherent than DALL-E outputs bmk#1476: I don't see how that has anything to do with the runtime alstroemeria313#1694: see pins in #art cfoster0#4356: ~~transfers were never meant to be cheap~~ Deleted User#0000: Yeah, it’s ok, what website can do it? alstroemeria313#1694: there are colab notebooks pinned in #art alstroemeria313#1694: (also we have a bot for some of them) cfoster0#4356: Continue crypto talk in #off-topic ? bmk#1476: sure Louis#0144: Should probably post this here too https://cdn.discordapp.com/attachments/729741769738158194/860189628744859688/flyer_eleuther_ai_invited_talk_jul_7_2021.png Louis#0144: my coworker is doing a talk next week bmk#1476: happy canada day my geese :goose7: kinoc#5731: Ya'll have suffered enough. I'm off to the backwoods for a few days (with late night check-ins). Have fun! cfoster0#4356: Looks like he's streaming rn, interesting to watch it live https://twitter.com/marksaroufim/status/1410715994926968834?s=19 StellaAthena#3530: This is a barrel of laughs: https://twitter.com/KathleenACreel/status/1409910756389404672?s=20 cfoster0#4356: I'm so glad we included those
cfoster0#4356: Also, Sean Carroll retweeted it 👀 StellaAthena#3530: As did the head editor of the most prestigious philosophy journal in the world StellaAthena#3530: Also the official account of a philosophy journal lol mega b#6696: Just got access to github copilot! mega b#6696: seems like they are accepting applications ATM mega b#6696: check your emails! Kia#2550: Congrats! mega b#6696: give me something to try out 👍 Kia#2550: Ask the weights of Dalle:bigbrain: mega b#6696: lul Louis#0144: if you guys were switching bart to rotary Louis#0144: would you switch both the encoder and decoder? Louis#0144: the encoder is bidirectional Louis#0144: so i am almost inclined to think maybe it isnt necessary? cfoster0#4356: Both, I think? Though it would depend on the objective you're training it with. You'd probably need to make sure they're lined up the right way, since I think the decoder would start with a SOS token whereas the encoder wouldn't Louis#0144: would the SOS token make a difference though Louis#0144: I would imagine the model would learn to resolve that Louis#0144: since its seq2seq cfoster0#4356: If you didn't account for it, the attenuation would be slightly wrong but maybe not the end of the world Louis#0144: so shift the positional embeddings in the decode back by one
cfoster0#4356: Uhh you should roll them forward by one, right? Louis#0144: oh sorry yeah Louis#0144: lol &.#0001: Language is not a prerequisite for cognition. Allow me to demonstrate: Quack! Quaaaack! Quackkk!! Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/860303463094681635/image0.png One#5919: connection is One#5919: en masse One#5919: language just happens to be the optimal connection medium we can capture and reproduce One#5919: with computing scaling as it is and seemingly being the sole requirement for large language models i'd say we're way on our way to transformative AI or general AI or something we have no words for yet One#5919: https://openai.com/blog/multimodal-neurons/ One#5919: imagine dealing with trillions not billions of those One#5919: https://cdn.discordapp.com/attachments/729741769738158194/860320112153853993/absent_concepts.png One#5919: where are they @mkualquiera ym#0104: that twitter link is also getting shared on the daily nous heap of links StellaAthena#3530: Shit, it’s possible I saw someone who runs the daily nous retweet it and thought it just said *Nouû* AI_WAIFU#2844: https://www.tomshardware.com/news/amd-aldebaran-memory-subsystem-detailed Manny96#3437: Good idea! Although, @Sid was saying it's not focus (Crypto). Instead, let's start a separate Discord and build DeFi for Eleuther.ai (or similar to Discord - looked at Rocket.chat, easy to spin up a personal instance)? Manny96#3437: Might, help with funding. Sid#2121: nah, we're good Manny96#3437: Yep
AI_WAIFU#2844: Apparently Tens-torrent Wormhole looks like it's going to use GlobalFoundries 12nm process for their chip. Manny96#3437: That's a fundamental limit to Transistor size cfoster0#4356: nah, we're good Manny96#3437: You don't want to do my proposal? cfoster0#4356: I was talking about limits to transistor sizes UnsupervisedLearner#4148: In general, is it looking like them or anyone else is going to build something more than 1-2% better than Nvidia gpus? cfoster0#4356: No, not particularly Manny96#3437: Right... The limit I heard from Michio Kaku is 12nm AI_WAIFU#2844: I think google already has. Manny96#3437: Okay, sure, won't insist. AI_WAIFU#2844: I'm bullish on tenstorrent since they seem to have recognized the importance of interconnect. AI_WAIFU#2844: https://www.youtube.com/watch?v=Id3enIOAY2Q 6771#5026: How popular is Facebook’s AI for developers? I heard it’s 3b parameters, is it one of the top out right now? Louis#0144: Now if you can only use it for DL… alexyz#3459: *what* AI? alexyz#3459: Blenderbot? Manny96#3437: Looked at it bit - they do a lot — spatial, NLP alexyz#3459: It's a chatbot model. There's a 9.7B model of it also, pretty sure it's SOTA in the field. alexyz#3459: The only one that's 3B params that I can think of is Blenderbot, but correct me if I'm wrong Manny96#3437: What's 9B params?
alexyz#3459: Blenderbot has multiple sizes, a 90M model, 2.7B model (sometimes called 3B) and a 9.7B model Manny96#3437: Okay, cool, thank you very much! alexyz#3459: The only one that I can think of that would be better would be Google's new chatbot AI alexyz#3459: LaMDA Manny96#3437: Open source - trained parameters included? alexyz#3459: Nope. alexyz#3459: They never released it. Manny96#3437: Thank you for bringing up Blenderbot! Blenderbot vs. GPT-J, if you so please? Manny96#3437: Maybe not the right comparison. Manny96#3437: Eleuther.ai doesn't have risk of perverse incentives. Manny96#3437: No, public company pressure - quite a large one too and still growth. One#5919: https://rifters.com/real/2009/01/iterating-towards-bethlehem.html One#5919: https://cdn.discordapp.com/attachments/729741769738158194/860366910251991080/portia.png James#6892: So, did you get a feeling for how well it worked? cfoster0#4356: Certainly better than I imagined. I might have to revise my earlier comment on it cfoster0#4356: Like I could easily see becoming a more productive coder as a result of it, for myself and for a decent subset of people 𓅬 gabriel_syme 𓅬#3220: that's great news, it'd be amazing if it could take someone like me and empower to do things out of reach till then 𓅬 gabriel_syme 𓅬#3220: even if things failed at first, that's actually the cool part of learning 𓅬 gabriel_syme 𓅬#3220: although I'd imagine they would prefer it more for production than learning? not sure kindiana#1016: :thonk:
kindiana#1016: why in parallel? James#6892: Do you think its good for learning as well, or mainly to produce faster without knowing what's going on? Kinda like copy-pasting from stack overflow guac#4716: it feels like copy-and-paste. i can't image copilot being good for introductory programming lol cfoster0#4356: I think you need to know what you're looking for. Like it seems like it's good for filling in the details when you know what you're doing, which should speed things up, but I still wouldn't trust it to teach the correct/idiomatic method James#6892: Very interesting. So seems to benefit intermediate people James#6892: The most James#6892: If you’re beginner you’re essentially copy-pasting without learning much lol 𓅬 gabriel_syme 𓅬#3220: which is great, I really hope we see this as a learning tool kindiana#1016: I'll be giving a talk on some of the engineering behind mesh transformer jax tomorrow, come watch/say hi if you are interested haha https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#friday-july-2nd kindiana#1016: (5.30pm-6.00 CEST / 8.30am-9.00 PST Friday, July 2nd) 𓅬 gabriel_syme 𓅬#3220: i hope I can manage, if not in the recording spirit-from-germany#1488: I have been trying to connect to scaleway C14 with SSH or rclone ... but their tutorial seems out of date ... https://blog.scaleway.com/c14-with-rclone-sync-files-and-directories-to-c14-from-anywhere/ spirit-from-germany#1488: Could anyone help me? 😄 spirit-from-germany#1488: Need it to archive datasets mr_seeker#1337: @spirit-from-germany C14 is said to work with SFTP spirit-from-germany#1488: yes, but how to access it? spirit-from-germany#1488: with rclone or ssh mr_seeker#1337: with rclone? https://rclone.org/sftp/#c14 mr_seeker#1337: Else use an SFTP client such as winSCP Drakkaa#3367: i'm tokenizing a pretty large dataset with 15k files, is the --files_per 1000 setting something i change ?
nostalgebraist#3542: you don't have to, but i recommend you do this if you're fine-tuning gpt-j because of the way it iterates over tfrecords https://discord.com/channels/729741769192767510/851918317039255592/857414721833271297 Drakkaa#3367: i'll look into it, thanks Drakkaa#3367: ah it skips the end of small files Gurkenglas#7362: uhhh this torch.matmul perfplot run just now showed it to take n³. do i have to say something to make it do magic n^omega stuff? nz#9710: https://twitter.com/mitsuhiko/status/1410886329924194309 nz#9710: ^ copilot spitting out fast inverse square root from quake 3 arena (https://en.wikipedia.org/wiki/Fast_inverse_square_root) aze#1010: is there a smaller version of the pile so i can take a look how its formatted and stuff? kurumuz#5695: dat inference speed kurumuz#5695: one day... StellaAthena#3530: I don’t think torch.matmul supports n^log 7 multiplication alstroemeria313#1694: does anyone actually use faster-asymptotically-than-n^3 matmuls irl? alstroemeria313#1694: i thought they all had horrible constant factors StellaAthena#3530: @alstroemeria313 Strassen is the best real world algorithm alstroemeria313#1694: ah kindiana#1016: https://www.cise.ufl.edu/~sahni/papers/strassen.pdf kindiana#1016: its getting close to good enough lol StellaAthena#3530: It can do a 2x2 matrix with 7 multiplications (naive requires 8) and then scales that up via block decomp kindiana#1016: (on more recent hardware https://dl.acm.org/doi/fullHtml/10.1145/3372419) StellaAthena#3530: That’s where the random 7 in the log 7 I mentioned comes from
StellaAthena#3530: More specifically it’s $$\mathcal{O}\left((7+o(1))^n\right)$$ for a matrix that has side length $2^n$ TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/860554075958804490/193204646687408129.png kindiana#1016: I'm kinda sus of efficient matmuls tbh StellaAthena#3530: Which means that for any $c > log_2(7)$ the algorithm is $\mathcal{O}(N^c)$ where $N = 2^n$ is the side length kindiana#1016: they add so much additional complexity and numeric stability issues for a small gain for the matrix sizes we usually use TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/860554467970777100/193204646687408129.png StellaAthena#3530: @kindiana Do you have any references for the stability issues? StellaAthena#3530: I’ve heard that that’s an issue but haven’t done the computations myself kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/860554833081925642/unknown.png kindiana#1016: from the second paper CRISPR IQ300#6848: https://www.youtube.com/watch?v=4duqI8WyfqE CRISPR IQ300#6848: Thoughts? A GPT3 version of prompt engineering for generating code, made by Microsoft, better than Kite? Orz#3023: wait what exactly is kite btw? sheggle#6841: Another AI code autocomplete extension sheggle#6841: Just Google it my dude Orz#3023: My bad I googled the wrong stuff before got it now thanks
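To make the Strassen exchange above concrete: a minimal sketch of the recursion for square matrices whose side is a power of two. This is illustrative NumPy, not a tuned implementation — the cutoff of 64 and the 256×256 test are arbitrary, and as the papers kindiana linked note, the extra additions cost some numerical accuracy and a lot of implementation complexity for a modest asymptotic win.
```python
# Hedged sketch: Strassen's recursion (7 multiplications per 2x2 block split
# instead of 8), giving O(N^log2(7)) ~ O(N^2.807) for side length N = 2^n.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:                     # below the crossover, plain matmul wins
        return A @ B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

A = np.random.randn(256, 256)
B = np.random.randn(256, 256)
assert np.allclose(strassen(A, B), A @ B)   # same result, fewer multiplications
```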
CobraPL#6494: Guys, I have a question about the `pile`. Will there be any effort to clean it up from meta text and urls? This is examples what leaked in NAI: ```Enjoy and vote! Thanks for reading!``` ```Wanna read more stories like this? Try:``` ```Note: This is a work in progress. If you have any questions, please leave a comment. I'll reply as soon as possible. Thank you for reading.``` ```End of the story. Commentary I really enjoyed writing this story. I wrote it in a short time, so I had to be very efficient. This is my first long story.``` ```The End. (Yes, I wrote a sequel.) Do you think I should continue to write stories about these characters? -Thank you for reading. Your support means a lot to me.``` ```* * * The End END OF STORY. If you liked this story, please consider voting and leaving comments. I appreciate all the feedback.``` ```This story was inspired by "Necessary Evil", a story in the blog called "Fantasy Fiction - Fantasy & Science Fiction"
If you have any suggestions or comments, please write. Thanks!``` ```(This story is in the fantasy setting. Characters are fictional and do not represent any real person. Names and places are fictitious.) Fantasy stories should always have a happy ending. But sometimes, things don't go the way we want.``` ```All characters are over eighteen. FURTHER READING: I don't have much experience with stories about interracial relationships. But I can tell you a few things. If you are interested in writing a story about such thing, I suggest to read the books below.``` CobraPL#6494: All above is generated by AI based on leaked text. Daj#7482: The Pile is considered "complete", and no one is currently interested in working on a successor. The problem is that the Pile is _big_, there is no way to manually filter out text like that at scale Daj#7482: It's an unsolved problem CobraPL#6494: Hmm, there are some commercial projects like NAI. I hope someone will simply pay for such work. The pile really could be finetuned. Or at least cleaned. bmk#1476: "leaked text"? what the heck are you talking about CobraPL#6494: This is metainformation for the story. I play NAI, NAI generates stories and sometimes it causes things like above or links to appear. Daj#7482: You really can't imagine how big 800GB of raw text is lol Daj#7482: The better solution will probably be finding a way to finetune/control the models to know such text is off topic bmk#1476: we are not NAI, we are not affiliated with NAI, we do not do things specifically for NAI CobraPL#6494: I know. bmk#1476: if NAI wants to clean stuff they can do that but it has nothing to do with us bmk#1476: so don't ask us to CobraPL#6494: I asked for technical aspects if it's doable, not YOU personally to do so. Daj#7482: chill bmk lol
Daj#7482: He's just asking questions, it's ok bmk#1476: k cognomen#6297: for some idea of the magnitude of the problem, instead of asking "how hard would it be to curate content in the pile to remove garbage?", consider the question of "how hard would it be to curate content *on the internet* to remove garbage?" Daj#7482: 2-month, 10-man sounds about right AI_WAIFU#2844: We've had discussions about how to go about doing something like that, but there's very little will to make it happen. cognomen#6297: *we'll be back home for christmas!* CobraPL#6494: This is not much for paid models in the future. 1000-man 1y would be scary. I think someone with money will be interested one day. I hope at least. If someone pay 10-man team for 2-3 months for work, when he sells for 25$/mo to 50000 users or more, then I see no problem. Daj#7482: This was a joke referring to https://en.wikipedia.org/wiki/Dartmouth_workshop Daj#7482: A famous workshop that thought AI was so easy they could make "significant progress" on it in two months with 10 grad students Daj#7482: in 1956 Daj#7482: So no, we're talking like "millions of man-hours" CobraPL#6494: Or someone will train AI to clean the pile lol Daj#7482: More likely yea Daj#7482: but that might also introduce weird biases Daj#7482: ¯\_(ツ)_/¯ AI_WAIFU#2844: Yeah, that's more sensible AI_WAIFU#2844: but then you gotta run it over 800GBs of text
CobraPL#6494: Well, it could just search for questionable text areas and just erase them. Not change whole pile. Daj#7482: Yea but what qualifies as "questionable" is a really hard question lol Daj#7482: The Pile already went through _significant_ filtering and cleaning Daj#7482: Which is why not 90% of the documents are literally gibberish Daj#7482: But yeah there are ways to approach this, I wanted to do some experiments in this direction but haven't gotten around to it StellaAthena#3530: @Daj Anything in particular you have in mind to do? I'm generically interested but don't have specific experiments. CRG#8707: Gwern's compression idea should help with the "whole batch consistent of bee emoji" problem. Daj#7482: Reward model from human preferences and the like UnsupervisedLearner#4148: As in, your window to the internet, or the internet itself? I feel like the former is a fairly surmountable problem Daj#7482: Have humans label suspicious sequences or something Daj#7482: Also imaginable that with advanced interpretability/hidden states dynamics understanding you could detect "drift" in the model as it switches from "story mode" to "footnote mode" peterwiggin#8566: anybody know why my loss is much worse when restarting training from a checkpoint in pytorch? I'm saving and reloading the optimizer state dict and learning rate scheduler state dict in addition to the model one peterwiggin#8566: also using mixed precision training if that's relevant peterwiggin#8566: validation works fine so it's gotta be something about the optimizer or scheduler... StellaAthena#3530: This is a really interesting article https://www.theatlantic.com/science/archive/2021/07/gamers-are-better-scientists-catching-fraud/619324/ peterwiggin#8566: ok I've figured it out peterwiggin#8566: if you're training using apex.amp you have to save and load amp's state_dict too StellaAthena#3530: That makes sense lol StellaAthena#3530: @peterwiggin feel free to open a PR on GitHub that documents this fact.
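The fix peterwiggin lands on below is the pattern from NVIDIA's apex checkpointing notes; a minimal sketch, with the scheduler included since it was mentioned above and the checkpoint keys chosen arbitrarily for illustration:
```python
# Hedged sketch: when training with apex.amp, the amp loss-scaler state must be
# checkpointed alongside the model/optimizer, or the resumed loss will jump.
import torch
from apex import amp

# model, optimizer are assumed to already be wrapped, e.g.:
# model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

def save_checkpoint(path, model, optimizer, scheduler):
    torch.save({
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
        "scheduler": scheduler.state_dict(),
        "amp": amp.state_dict(),            # the easily-forgotten part
    }, path)

def load_checkpoint(path, model, optimizer, scheduler):
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    scheduler.load_state_dict(ckpt["scheduler"])
    amp.load_state_dict(ckpt["amp"])        # restore the loss-scaler state too
```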
peterwiggin#8566: oh it's in amp's documentation peterwiggin#8566: buried... Drakkaa#3367: one hour of debugging and finally i check what python ver 'python' runs on my vm.. not version 3 apparently, doh Drakkaa#3367: python3 fixed it.. alstroemeria313#1694: Oh no bmk#1476: use pyenv to forever avoid python environment issues Drakkaa#3367: the error was a weird syntax error, didn't scream wrong pyhton ver at me haha Drakkaa#3367: python ver will be on my radar again for a while Drakkaa#3367: good idea thanks, i'll global my v3 cognomen#6297: boils down to determinism and transparency, I would think cognomen#6297: gets a lot harder when you don't have perfect knowledge of the system involved in producing the results UncleDavy#9536: Greetings, new lurker here, experimenting with poetry generation using the 2.7B-parameter model. StellaAthena#3530: Welcome! Poetry is a very cool space to be working in. I'm curious how you've done with getting it to rhyme... our general impression is that the tokenizer is a major limitation in this reguard UncleDavy#9536: I'm learning to like free verse StellaAthena#3530: Lol. I guess that's an answer, in its own way bmk#1476: sad gwern noises wyrdc#1871: I have a TRC invite and I'm preparing to use it to experiment with GPT-J finetunes. The preparation process has down a rabbit hole of collecting and cleaning text data so that it's formatted correctly. While doing preliminary testing on GPT-2 using a single concatenated text file, I realized GPT-J (and NEO) would prefer the data to be chunked and concatenated by create_tfrecords.py. No big deal, wrote a bit of code to change my data. All this work got me thinking though. Does anyone know of a place that collects small (relative to The Pile) text datasets designed for finetuning language models downstream tasks? If such a place *doesn't* exist, would there be interest in it? I know many of the well-known dataset sources, but very few of them have much of what I have in mind. I'm thinking maybe genre-separated fiction collections for those training story generators, chat transcripts for chat models, code from assorted languages, etc.; all cleaned and available in at least two formats. What do you all think? Useful? Waste of time? Someone else has already done it? Better ideas? StellaAthena#3530: @wyrdc I think that this would be *extremely* useful if you genuinely curate and organize submissions
bmk#1476: HF datasets? bmk#1476: the curation or lack thereof would be worth far more than the platform itself bmk#1476: (i specify "lack thereof" because maybe, idk, existing platforms are *too* curated and you want to make a free-for-all platform? i havent actually done the research but this is one possibility) wyrdc#1871: I would! It's kinda hard work but I don't mind it much. Realistically the datasets will have a decent amount of time between releases and there might not be many, so I should probably focus on the most in-demand ideas wyrdc#1871: It would definitely be focused on quality curation over quantity wyrdc#1871: Especially given the evidence that larger models don't need as much finetune data wyrdc#1871: Anyway since you two like the idea and I've used both of your works before I think it's only fair for me to give back, I'm gonna do it and keep everyone here posted. Anyone who sees this feel free to DM me suggestions, I'd like to gauge where the demand is Louis#0144: There’s a Berkeley professor who did this Louis#0144: Their name is slipping my mind Louis#0144: David something Louis#0144: I just saw a talk on this at WNU2021 Louis#0144: Curating this is difficult mostly because OCR is awful on public domain stuff Louis#0144: And because the interesting stuff you can’t redistribute Louis#0144: lol Louis#0144: That’s kinda the issue here Louis#0144: It would probably be illegal for you or HF to host the dataset Louis#0144: This is specifically wrt stories Louis#0144: You can use abridged versions or summaries. Or uh apparently if you scan the book and do ocr that counts as transformative? Louis#0144: Idk Louis#0144: David gave an entire talk on that at WNU
wyrdc#1871: That is a good point, most of what I have so far is public domain but not stories, copyright tends to be more complicated there. I'll figure a way around it, I mean not everything in The Pile is properly licensed either, right? Louis#0144: I mean the only thing that could have been an issue is books wyrdc#1871: There appear to be several Davids in data science at Berkeley so hopefully someone can narrow that down lol, I am interested in that Louis#0144: Which is no longer in the default pile (correct me if I’m wrong) kinoc#5731: Would a MLM of a story/book be a copyright problem after transformation? Louis#0144: https://people.ischool.berkeley.edu/~dbamman/ Louis#0144: This guy Louis#0144: He’s very very nice Louis#0144: If you send him a nicely worded email he will probably respond Louis#0144: Assuming he has time ofc wyrdc#1871: AFAIK it's a gray area that we're hoping counts as fair use Louis#0144: Finetuning on copyrighted information so far is only considered transformative in the EU economic zone Louis#0144: Not in the US Louis#0144: The US is working on it though wyrdc#1871: Thanks! I'll check out his work first, so hopefully I don't have to waste his time lol Louis#0144: There’s also a weird catch here that the data you finetune on has to also be from the EU (hosted) Louis#0144: Which I don’t really understand Louis#0144: I spent like a week reading legal documents Louis#0144: I’m still very confused Louis#0144: I assume they mean that the company you scrape from has to be based out of the EU or the data itself must literally be hosted in the EU
Louis#0144: But 🤷‍♂️
kinoc#5731: Maybe multiple linked symbolic parsing of books might be an interesting dataset ...
Louis#0144: It eludes me
Louis#0144: Welcome to knowledge graphs
Louis#0144: And how storytelling researchers got around copyright before deep learning
Louis#0144: LMAO
Louis#0144: That’s a trick that you see in lots of old papers in my field
Louis#0144: It’s rly funny tbh
kinoc#5731: Ya 😋
UnsupervisedLearner#4148: IP is dumb and illogical and I cannot think of a single instance where it is obviously helpful and not a predatory move that dampens innovation
bmk#1476: sad internet protocol noises
UnsupervisedLearner#4148: If someone wants to monetize their model and is afraid of IP laws let me know and I'll help you integrate crypto wallets directly in your webpage, I can even make it so you only accept USD stablecoins for api use
bmk#1476: im not afraid of international paper laws at all
bmk#1476: and my instruction pointer is perfectly logical
uwu1#4864: Hi all 🙂 I'm new to the world of open source contributing but not to ML, and I would love to open source some of my work and contribute it. My background is in ML, physics and generative art. I also do ML research and engineering for work... Which brings me to my question: How do you balance that? Also the broad language in employment contracts regarding IP related to a field you work on?

Recently, I've been exploring applications of tensor networks for NAS and compressing huge neural networks. I hope this was the right channel to say hi and such. I think this tensor network based stuff could accelerate transformers a lot.
uwu1#4864: At work I only work on tiny models (they have to run in real time)... But I did snag an RTX3090 way back so I started playing around on it, mostly with quantum circuit simulation and such (hence the tensor network inspiration). I also tried fine tuning but it's hard with just 1 GPU...
EricHallahan#1051: ~~Wait, RTX 3090s exist?~~ uwu1#4864: Hopefully you won't have to 🙂 I'm hoping there are and will be more of orgs that support that uwu1#4864: Only if you're in Canada Teemochu#8740: A rough estimate of 800GB of text is that it's a million entire books (if not more, it's 1m books if they're about the size of the 3-5th or so HP novels) bmk#1476: (or, in terms that this crowd is more familiar with, about twenty entire copies of HPMOR) Teemochu#8740: "a significant fraction of what you will see in a large library building" cfoster0#4356: Say more about this tensor network business. Kharr has teased some interesting directions for tensor trains but didn't go into much detail cfoster0#4356: Also hi! 👋 uwu1#4864: So tensor networks are a way of representing tensors with other (usually smaller) tensors. These tensors are connected up into a network which can be sampled efficiently without needing to materialize the whole tensor uwu1#4864: The name of the game then is what kind of structure are you going to give to the network? This will broadly reflect the way correlations will decay across the network. Tensor trains have exponentially decaying correlation for instance CobraPL#6494: Well, reading the all is sth undoable. Reviewing search results for specific phrases is rather doable. OFC algorithm or even AI could search and show for review. uwu1#4864: A better way is a MERA, which is like a binary tree of tensors, which at 2x the cost of a train can represent a much more complex --quantum state-- tensor. Theres a few papers on compressing DNN linear layers with MERA but they were rejected because they didn't also compress the conv layers (the authors tired to do it for CIFAR) (in addition to not great results because of that). uwu1#4864: Also, while you can treat a tensornetwork as just a tensor and sample from it as you would (like, network[0, 1, 20, 50, 100] for a 5D tensor), you can do much better if you interact networks against each other. Also, there's a lot of fun tools developed for disproving Google's quantum supremacy that can be used to optimize the tensor network contractions, like https://github.com/jcmgray/cotengra uwu1#4864: Going a little beyond into hypotheticals.. self attention is "just" a multi-body particle system, and more complex kinds are like various spin glasses. cfoster0#4356: Yeah I've seen this connection, esp. in Matthias Bal's blog posts. Still trying to figure out what practically to do with that information uwu1#4864: Me neither tbh. After all a turing complete CPU running a program is also just that uwu1#4864: The BKT stuff is quite cool as an example of emergence though. Also the way the emergent vorticies are made in + and - pairs, eerily like a semiconductor. Maybe a less out-there application is instead of deep learning models taking in flat vectors and tensors they would input and output tensor networks. cfoster0#4356: BKT? uwu1#4864: Oh, I thought he had made a blog post about it, but actually it was someone else, oops.
uwu1#4864: He was just cited in it
uwu1#4864: I can't even find it now but this SO answer covers it well: https://physics.stackexchange.com/questions/255909/what-is-the-kosterlitz-thouless-transition

Basically, 2D systems with local couplings where the values of the field are cyclical (like a grid of coupled clocks), shouldn't have any complex long range order, due to a specific kind of pattern/quasiparticle (called a Goldstone Boson) that appears in them. However, there's a little trick, which is the BKT transition
uwu1#4864: We should also make up some Bosons of our own... if the physicists can do it so can we.
𓅬 gabriel_syme 𓅬#3220: The tensor network sounds really cool. Are there any simple practical, non QMs examples out there for smallbrains like me?
uwu1#4864: There's this beautiful blog post that covers the notation and some uses, only using linear algebra: https://www.math3ma.com/blog/matrices-as-tensor-network-diagrams
uwu1#4864: I suppose one could argue that this is a QM example as QM is just linear algebra (even more than deep learning is just linear algebra)
uwu1#4864: One interesting thing from that post that maybe isn't so obvious is that those operations of splitting/SVD, diagonalization and such can be applied to whole networks just as to individual tensors. This is the mechanism that distills and simplifies tensors and tensor networks, and it also highlights the QM-ness in that you can "rewrite" a state into a different basis
Manny96#3437: Hey Gals and Guys, how are we doing? I'm cooking up something --- hope you're interested. GPT-Neo reinforcement learning (:
cfoster0#4356: So I guess my question has always been whether it's possible to train networks in this compressed format and save on compute there / get bigger effective model capacity. As opposed to after the fact
chirp#4545: Super broad question, but I’m curious how y’all think the world will look in 5 or 10 years, and what role AI will/won’t play in that
Orz#3023: Well
The point is
the AI's are trained based on output, which is something humans have already found out
So as long as there's no improvement in the way they are trained
idk if stuff will change any sooner
uwu1#4864: > So I guess my question has always been whether it's possible to train networks in this compressed format and save on compute there / get bigger effective model capacity. As opposed to after the fact
@cfoster0 Yes you can only do that actually. The frameworks support autodiff backends to optimize the tensors within the networks
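Following up the "train it directly in the factored form" point above: a minimal sketch of a tensor-train linear layer whose cores are ordinary trainable parameters, so the dense matrix is never materialized. The 4096 = 8·8·8·8 mode split, the rank of 16 and the class name are arbitrary illustrative choices; a library such as quimb or t3f would handle initialization and contraction order more carefully.
```python
# Hedged sketch: a tensor-train (TT) parameterization of a 4096x4096 linear map,
# trained through its cores rather than as a dense weight matrix.
import torch
import torch.nn as nn

class TTLinear(nn.Module):
    def __init__(self, modes=(8, 8, 8, 8), rank=16):
        super().__init__()
        r = [1, rank, rank, rank, 1]          # TT bond dimensions
        self.modes = modes
        # core k has shape (r_k, in_mode_k, out_mode_k, r_{k+1})
        self.cores = nn.ParameterList([
            nn.Parameter(0.1 * torch.randn(r[k], modes[k], modes[k], r[k + 1]))
            for k in range(4)
        ])

    def forward(self, x):                     # x: (batch, 4096)
        b = x.shape[0]
        x4 = x.reshape(b, *self.modes)        # (batch, 8, 8, 8, 8)
        g1, g2, g3, g4 = self.cores
        # contract input modes against the cores; bond indices a/f have size 1
        y = torch.einsum('bwxyz,awpc,cxqd,dyre,ezsf->bpqrs', x4, g1, g2, g3, g4)
        return y.reshape(b, -1)               # (batch, 4096)

layer = TTLinear()
print(sum(p.numel() for p in layer.parameters()), "parameters vs",
      4096 * 4096, "for a dense layer")
print(layer(torch.randn(2, 4096)).shape)      # torch.Size([2, 4096])
```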
cfoster0#4356: Interesting. A tensor train / tensor ring reformulation would just be some funky einsum with multiple inputs, right? cfoster0#4356: Might have to re-read some stuff here and play with it myself uwu1#4864: Basically, yeah! I like using quimb, has nice vis tools cfoster0#4356: Super appreciative for the references :hap: uwu1#4864: np! :) im excited to learn from you and y'all too 𓅬 gabriel_syme 𓅬#3220: you should definitely stick around and keep talking about this stuff 🙂 𓅬 gabriel_syme 𓅬#3220: that blogpost is beauitful btw, I haven't read a word but looking at all the nice images 🙂 GrimSqueaker#8837: Hi everyone! I'm Dan, a researcher (Background: Data scientist, autoML & feature engineering, Bioinformatics, proteins, neuropeptides, Neurobiology, Data Czar, dual MSC @ HUJI). I'm one of the authors of ProteinBert. In deeplearning, I don't know any pytorch, and only know Keras (and proud of it :P) . I work these days as a researcher at Medtronic (medical multimodal data and surgical planning + AI), and am starting a PhD soon. https://scholar.google.co.il/citations?user=uDx2ItYAAAAJ&hl=en https://www.biorxiv.org/content/10.1101/2021.05.24.445464v1 I also won the WIDS 2020 ICU survival prediction contest 🙂 https://ieeexplore.ieee.org/abstract/document/9462159 https://www.kaggle.com/c/widsdatathon2020/discussion/133189 Houman#8141: Hello everyone, hope this is a general enough of a channel to ask a question about a blog post 🙂 I was reading the 'Rotary Embeddings' and then decided to read the self-attention paper, and there is one line in its brief description of the sine/cosine positional encoding that bothers me: > We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PEpos+k can be represented as a linear function of PEpos.
It's rather straightforward to prove the second half of the sentence i.e. to prove that for any fixed offset k, PEpos+k can be represented as a linear function of PEpos. However, I do not understand how that leads to their hypothesize? The closest I can justify it to myself, is that if each offset is just a linear transformation of the first positional embedding, then the series of the self-attention paper's positional embeddings is not an absolute embedding to begin with, but is an embedding of latent relative positional transformations. Is that all that it means? This is a bit perplexing, also because it makes me wonder why we need relative positional embedding if the original sine/cosine is already relative? Thanks! CRG#8707: See section #5 in: <https://towardsdatascience.com/master-positional-encoding-part-i-63c05d90a0c3> MrRee#0946: Hi I'm relatively new to machine learning and the progression of AI. But want to do a project for my college degree discussing the theoretical question of can AI show creativity and can they "think" like we do? Haven't come up with my conclusions yet as I'm doing more research on the study and reading alan turings paper on the subject. But would be interested to know what the general consensus is on the matter from people with more experience especially with the recent progression into AI over the recent years. From what I've seen so far it appears to me that AI are finding patterns for predictions, text generation, etc and rather than actually thinking. Daj#7482: Let me save you a bunch of time: If you want a good grade and praise by middlebrow mainstream popsci, just write some long winded babble about how these systems are just statistics/matmuls/interpolation, whatever, and call it a day. If you _actually_ want the truth, demand actual _rigorous_ definitions for terms like "thinking", "intelligence" and "creativity". You will quickly notice that people either a) avoid the question, b) give you an answer that is impossible to fulfill, c) give you an answer AIs easily fulfill, or (most likely) d) lack rigorous definitions all together. And then you notice that the whole debate is actually pointless and you should focus on more important matters lol MrRee#0946: Lol I do see where you're coming from, I'm completely aware the answer is purely subjective but we have to do a theoretical piece where there's no right or wrong answer and come up with my personal conclusions so decided it on AI. But will take your comment to hand though 🙂 Daj#7482: Well if there's "no right or wrong answer", why even bother lol, seems like a waste of time, but I guess that's modern education for ya Daj#7482: Why not let GPT3 write the essay? lol MrRee#0946: I agree but I guess it's the journey from research and coming up with your personal conclusion that they are more interested in looking at. But have always been interested in AI and especially how it's evolved over the recent years so that's why I picked that as the subject 🙂 Daj#7482: fair enough I guess Daj#7482: But you should totally try to make https://6b.eleuther.ai/ write your essay, it's a good example lol aze#1010: i wonder how many academic papers were written this way Daj#7482: Pretty sure you could get a paper written by ~~GPT3~~ ~~GPT-J~~ GPT-Neo into a humanities journal no prob Kia#2550: Ow god...
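On Houman's sinusoidal-encoding question above: the "linear function" in the Transformer paper is a fixed 2×2 rotation acting on each (sin, cos) frequency pair — the same observation rotary embeddings push further. For a single frequency $\omega_i$,

$$\begin{pmatrix} \sin(\omega_i (p+k)) \\ \cos(\omega_i (p+k)) \end{pmatrix} = \begin{pmatrix} \cos(\omega_i k) & \sin(\omega_i k) \\ -\sin(\omega_i k) & \cos(\omega_i k) \end{pmatrix} \begin{pmatrix} \sin(\omega_i p) \\ \cos(\omega_i p) \end{pmatrix}$$

Because the matrix depends only on the offset $k$ and not on the absolute position $p$, an attention head could in principle learn one fixed linear map that matches "position $p$" queries against "position $p+k$" keys. That is the (unproven) hypothesis the quoted sentence is gesturing at — a hope that relative attention is easy to learn, not a claim that the encoding itself is relative.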
Kia#2550: Really Fun Way to Think How Many papers are created this way😄 MrRee#0946: Lol that could actually be my project instead, the other project will be something to do with advanced C++, looking forward to that more thenightocean#6100: Someone should legit try to do it. It might be like a more badass version of the Sokal affair. Kia#2550: Someone should try to recreate Gpt-Neo paper with Neo😄 joaogui1#8461: Yo folks, this Friday we'll be having the closing panel of our seminar series: "The Many Paths to Understanding Deep Learning", with Oriol Vinyals, Yasaman Bahri, Karolina Dziugaite and Brendan Fong, please come watch and ask your questions! For more info check this twitter thread or just ping me: https://twitter.com/_joaogui1/status/1409209399114076167?s=20 quinn#9100: Is anyone in london? Daj#7482: I like to imagine quinn saying this as he arrives in London and notices there is literally no other living human there quinn#9100: Thatd be a more fun quest than this quinn#9100: I think once I eat I'll be able to have fun with what happened to me today. If someone's in london I'll buy them dinner for a favor tho just message me Kia#2550: Stay safe and Find some geese's to😄 quinn#9100: I saw swans at the last english town i was in quinn#9100: I'm gonna move to off topic Houman#8141: Thanks CRG. The link provides the proof of the second section, which I was already aware of, but that doesn't explain how that leads to the hypotheses that " it would allow the model to easily learn to attend by relative positions" (whatever this really means) aze#1010: alstroemeria313#1694: ahah :blobcutehappy: nev#4905: ah yes, stickers James#6892: Why am I seeing crypto in this channel lol UnsupervisedLearner#4148: I really need to write out some hackable blueprints on my idea for a cryptoeconomic distributed embeddings table
workable hash-addressed embeddings table -> distributed low level minimal opinion knowledge graph on IPFS based data (already has axriv and libgen, soon scihub, soon *everything* including live sensor data) -> federated ML inference models that can compose datasets from these knowledge graphs UnsupervisedLearner#4148: Like gwern said, program your data not your models! This will be invaluable tool and get rid of so many headaches on dataset creation, curation, and sharing cfoster0#4356: Crypto stuff belongs in #off-topic HypnoPump17#9322: hey guys, since you all working in NLP, i think you'll find this useful (dont know if this is the appropiate channel or somewhere else is better): https://gist.github.com/hypnopump/73bd8b3968b00cc6342401bb2dffdc19 chilli#5665: What were your results lol chilli#5665: I did something similar at some time Louis#0144: Anyone know a good way to compute a span for extractive qa Louis#0144: Huggingface used to have helper code Louis#0144: But I can’t find it now Imperishable_NEET#1969: Impressed with the "5G" service on the MBTA underground, this used to be a mobile dead zone https://cdn.discordapp.com/attachments/729741769738158194/860942396141338644/PXL_20210703_175732580.MP.jpg thenightocean#6100: went to the general store nearby and its closed . Went to other one bit further and that one is closed too. I am like wtf… Its not holiday or smth… and they usually work even then.. thenightocean#6100: this turned out to be the reason https://cdn.discordapp.com/attachments/729741769738158194/860944621576257536/image0.png James#6892: Real 5G? thenightocean#6100: welcome to 21st century :guilty: James#6892: Last I checked not many places had real 5G James#6892: but some marketed fake 5G that was actually 4G LTE EricHallahan#1051: \*cough\* \*cough\* 5G E \*cough\* \*cough\* EricHallahan#1051: Also why is this in #general? AI_WAIFU#2844: Because productive topical discussion is being had in #off-topic James#6892: meanwhile, off-topic is exploding with productive discussion
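On the extractive-QA span question above: a minimal sketch of the usual start/end-logit selection, with a stock SQuAD model as a stand-in. The plain argmax is the simplest version; the old HF helper utilities additionally searched n-best valid (start ≤ end) pairs and capped the answer length, which this omits.
```python
# Hedged sketch: pick an answer span from the start/end logits of a QA model.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "distilbert-base-cased-distilled-squad"   # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "Who maintains the library?"
context = "The transformers library is maintained by Hugging Face."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
if end < start:   # naive guard; the real helpers search valid (start, end) pairs
    start, end = end, start
answer_ids = inputs.input_ids[0, start : end + 1]
print(tokenizer.decode(answer_ids))
```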
Imperishable_NEET#1969: I forgot... HypnoPump17#9322: 10x slower. Dont know about the mem bc i did all on the laptop and idk a good way to measure peak allocation in the cpu joaogui1#8461: which one was 10x slower? ethan caballero#6044: @bmk 🤣 https://twitter.com/GaryMarcus/status/1411401525210062849 bmk#1476: I'm not going to spend my time testing Gary Marcus' conjectures lol alstroemeria313#1694: …Why not encode the input numbers with Fourier features actually, that’s such a weird input encoding for an NN Daj#7482: My man really trying to claim he discovered the concept of out of distribution generalization Daj#7482: What a clown Daj#7482: lol bmk#1476: :mesh: https://cdn.discordapp.com/attachments/729741769738158194/860966988577898526/Screenshot_20210703-133506_Twitter.jpg Daj#7482: So tempted to start a snarky quote tweet feud, but I'm better than that lol Daj#7482: It would just be helping his grift mgostIH#0245: Holy shit they found how to download groceries bmk#1476: all your milk and broccoli are belong to us StellaAthena#3530: This person uses the exact words "out of distribution generalization" in 2001, and statisticians have been using the same concept since the beginning of statistics, but go off I guess https://www.public.asu.edu/~rguo12/research.pdf thenightocean#6100: Is this first time Gary mentioned us? Are we now on his enemy list? bmk#1476: nah he didn't mention *us* he mentioned @eleuther which is some random person that I don't know
bmk#1476: also he already went off once about how wrong I am thenightocean#6100: we should invite him here, to have fun flame wars thenightocean#6100: #gofai project channel soon! kurumuz#5695: gary? UnsupervisedLearner#4148: I did not know Gary was such a foundational pillar of AI Why are we not listening more closely to this man's ideas? He's like an even better schmidhuber cfoster0#4356: Courtesy of Sid https://cdn.discordapp.com/attachments/729741769738158194/860984282352844800/BETTER_MEME.jpg UnsupervisedLearner#4148: But, on-topic, what do people think of neural approaches to symbolic reasoning? Fusing GOFAI with data driven models kurumuz#5695: compression is intelligence compression is intelligence compression is intelligence compression is intelligence compression is intelligence compression is intelligence Daj#7482: Such an underrated meme kurumuz#5695: :pain: Daj#7482: Feature engineering :nooo: bmk#1476: that's the good marcus UnsupervisedLearner#4148: Hmm, I guess the best symbolic reasoning models would be emergent not explicit cfoster0#4356: I think the way you get useful symbols is by abstracting over perceptual inputs, so starting off with symbols leads you down the wrong path. Also :brr: bmk#1476: tbh Marcus Hutter was really ahead of the game with the whole "compression is intelligence" thing kurumuz#5695: yea bmk#1476: its looking like the most likely candidate for prosaic AI is going to be basically a really good compression algorithm Daj#7482: Hutter's lab spawned, like, a third of the AI I read lol
Daj#7482: Indirectly mgostIH#0245: Bruh just compress twice thenightocean#6100: AIXI guy? mgostIH#0245: Also Hutter prize bmk#1476: AIXI in particular isnt super useful, even theoretically, because solomonoff induction isn t even super well defined bmk#1476: at least imo bmk#1476: i also think the hutter prize has the problem that the amount of data is too small and the resource constraints are too low, which doesnt mesh with scaling bmk#1476: but the idea of compression being useful for intelligence is really important imo AI_WAIFU#2844: That's an odd way to spell Ray Solomonoff AI_WAIFU#2844: Also markus got his ideas from :schmid: when he was his student. AI_WAIFU#2844: So really it's not that he's ahead of the game. AI_WAIFU#2844: It's more like the party started 6 decades ago and 99% of people still haven't figured that out yet. HypnoPump17#9322: The keops bmk#1476: damn why is everything schmidhoobah cfoster0#4356: I'm actually tempted to try this for the lulz alstroemeria313#1694: ehehe~ EricHallahan#1051: pls try it cfoster0#4356: Though if I do I'll choose to ignore his absurd restriction to numbers between 2 and 1024 cfoster0#4356: By construction it should be trivial for a residual MLP setup, and I suspect even a non residual setup might work One#5919: i'd restrict it 0 to 10. the human brain learned to think despite being able to count (at most) up to that much
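For the "trivial for a residual MLP" claim above, a hedged sketch of what actually running Marcus's f(x) = x probe might look like: binary-encoded integers in the 2–1024 range, trained on evens and evaluated on held-out odds. Everything here (encoding, split, sizes, steps) is an arbitrary illustrative choice — swapping `encode` for Fourier features would test alstroemeria's suggestion — and no result is being claimed.
```python
# Hedged sketch: can a tiny residual MLP learn f(x) = x on one set of integers
# and hold up on integers it never saw? Illustration only.
import torch
import torch.nn as nn

def encode(n, bits=12):
    # binary encoding of an integer as a float vector
    return torch.tensor([(n >> i) & 1 for i in range(bits)], dtype=torch.float)

class ResidualMLP(nn.Module):
    def __init__(self, dim=12, hidden=64, depth=3):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(depth)
        ])

    def forward(self, x):
        for block in self.blocks:
            x = x + block(x)   # residual connection: identity is the "easy" solution
        return x

train_x = torch.stack([encode(n) for n in range(2, 1024, 2)])   # even numbers
test_x = torch.stack([encode(n) for n in range(3, 1024, 2)])    # held-out odds

model = ResidualMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(train_x), train_x)
    loss.backward()
    opt.step()

with torch.no_grad():
    test_loss = nn.functional.mse_loss(model(test_x), test_x)
print(float(test_loss))   # whether this generalizes is the empirical question
```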
One#5919: whatever happened to learned embeddings? EricHallahan#1051: We use base ten because we have ten digits, not because it is optimal for the human brain. UnsupervisedLearner#4148: We have ten digits because we count in base ten :bigbrain: nostalgebraist#3542: to be fair to marcus, in the algebraic mind he was pointing out the obvious for the sake of connectionist researchers who actually did not understand the obvious nostalgebraist#3542: as discussed in my post https://www.lesswrong.com/posts/ZFtesgbY9XwtqqyZ5/human-psycholinguists-a-critical-appraisal nostalgebraist#3542: a lot of connectionism was like, small MLPs with naive input/output encodings, presented as a theory of how humans learn uwu1#4864: on the topic of :schmid: - did everyone forget about OOPS and Levin Search?? One#5919: the more you can do with less, the more you can do with more One#5919: the brain is one giant cludge One#5919: only a tiny area of the retina captures full color and sharpness, but saccades and masking give us a seamless sharp wide view Louis#0144: Im so tired EricHallahan#1051: Get sleep and drink. Louis#0144: Ur right time for booze bmk#1476: instructions unclear, took entire bottle of sleeping pills and had an entire bottle of port Louis#0144: LMAO Teemochu#8740: instructions even more unclear, had an entire bottle of #starboard One#5919: i've been told to try Quinta das Carvalhas bmk#1476: that's the retrieval one? bmk#1476: i can never remember the arxiv codes bmk#1476: why cant you get one from your advisor?
bmk#1476: huh bmk#1476: apologies for the bluntness but your track record in this server doesnt inspire confidence in your research quality bmk#1476: so could you at least post the paper first? bmk#1476: otherwise the prior is not in your favor One#5919: https://www.gwern.net/Unseeing#curse-of-expertise gwern#1782: don't try to pin this one on me man One#5919: just a sentiment i agree with One#5919: https://en.wikipedia.org/wiki/Kludge#Computer_science Drexler#4006: Prediction is compression, insight is compression. Louis#0144: 2 late Zac-HD#7996: > Have people thought about integrating evaluation of the interpeter for the programming language with the sampling procedure from the model? @uwu1 yes, and I've worked on similar projects too! It... doesn't actually work very well, you're better off with something like a markov chain + online learning of the transition weights + diversity sampling. Throughput is really important, and often you explicitly want inputs _unlike_ any human-written code. Zac-HD#7996: See https://github.com/Zac-HD/hypothesmith and https://hypofuzz.com/docs/literature.html Zac-HD#7996: I'm keen to investigate something like Copilot (if only there was a nice api...) as a complement to my Ghostwriter though; I have some really nice introspection to generate test data and function preludes, but it often has trouble with the _body_ of the function. See https://hypothesis.readthedocs.io/en/latest/ghostwriter.html and https://zhd.dev/phd/ghostwriter.html uwu1#4864: interesting! i wonder if a gpu based wasm engine is needed to really get the throughput up to that level - if one looks at genetic algorithms for fitting functions on gpu literature they report a usable speed for switch dispatch based interpereters on gpu uwu1#4864: but doing it for python seems quite hard :') uwu1#4864: what do you think of the DreamCoder approach of going with a very simple language? uwu1#4864: i didn't consider that a test/fuzzer would want inhuman code but that makes a lot of sense :')
Zac-HD#7996: Some people want to generate "the production distribution" (good for load-testing); others "the uniform distribution" (good for publishing theory papers); fuzzing people mostly want "the bug-finding distribution" _ala_ Thompson sampling Zac-HD#7996: And re speed; ideally your fuzzer + target can generate and execute an input in a few dozen microseconds. My impression is that ML models are, uh, not in that latency budget... there's a steady stream of papers which find that adding ML is great on a *per input* basis, but we actually care about a *per wall-clock second* (or *per dollar*) basis. Zac-HD#7996: Lovely, if for some reason you want to test a language that hardly anyone uses! But to test C compilers you need to generate C code; for Python tools you need Python code, etc. Zac-HD#7996: https://www.fuzzingbook.org/ is a **fantastic** resource for all this stuff if you're interested. alembic#5293: Hi all! 👋 Thought I’d quickly introduce myself. I’m an ECE PhD student at McGill/Mila currenty studying transcriptomics through the lens of representation learning (my background is in biochem/cancer). (@jszym elsewhere) I’m plenty interested in NLP, however, and look forward to meeting and discussing with you all! (Also open invitation to buy anybody in/visiting Montreal a pint now that bars are opening up!) derivmug#3558: Hey everyone 👋 I am a physics undergrad from Germany and at the moment mostly interested in generative models and the application of deep learning for physical simulation. Looking forward to meeting and discussing with you all! (Also invitation to buy anybody in/visiting the Nuremberg area a drink!) mgostIH#0245: The more I get into bayesian stuff the more I am convinced it's **THE TRUE** thing we should pursue but the lack of good, practical methods to sample from the posterior is infuriating mgostIH#0245: There's also something quite sisyphean in HMC, we are rolling boulders in order to discover the truth
inox#5400: you read this https://www.inference.vc/the-secular-bayesian-using-belief-distributions-without-really-believing/ ? triggerhappygandi#0001: Why do I think you read every single message here triggerhappygandi#0001: How closely do you know Bengio lol mgostIH#0245: Did so, but I don't quite get what the takeaway should be, so might read the paper it refers to later mgostIH#0245: Why would it be a problem to assume the model exists in some sense just like the data does? mgostIH#0245: I also didn't understand (but I didn't spend too much time into it either) how they fix the issue if they can recover the exact same formulation using the log loss alembic#5293: Other than using his compute and catching the rare glimpse, not at all 😛 aze#1010: i made a sublime plugin that uses GPT J with copilot-like functionality im impressed with the results https://cdn.discordapp.com/attachments/729741769738158194/861253024311541760/unknown.png aze#1010: ( only hand written the comment here ) aze#1010: sublime3 plugin if anyone wants to try it themselves ( very basic ) https://cdn.discordapp.com/attachments/729741769738158194/861253274451312670/copilot.py Daj#7482: Cool! aze#1010: u have to open sublime console and run the "complete" command like this ```view.run_command('complete')``` aze#1010: the completion gets put wherever your cursor is StellaAthena#3530: That’s quite cool StellaAthena#3530: How hard would it be to make it a script that’s running in the background and triggers off of key sequence (say, ctrl+j) aze#1010: after giving it a longer max_seq to work with it also checks wikipedia and github aze#1010: https://cdn.discordapp.com/attachments/729741769738158194/861253842611732490/unknown.png aze#1010: you can just edit the sublime keybinds and bind it to the "complete" command aze#1010: but it kinda freezes for ~2-3 seconds during completion StellaAthena#3530: Lol it has the Wikipedia code backwards
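(Editor's note: on the key-binding question and the ~2-3 second freeze mentioned above, here is a minimal sketch of running the request off Sublime's UI thread so the editor stays responsive. This is not aze's actual plugin; `request_completion` is a hypothetical stand-in for whatever HTTP call the plugin makes.)
```python
import sublime
import sublime_plugin


def request_completion(prompt):
    """Hypothetical helper: call whatever GPT-J endpoint the plugin uses."""
    raise NotImplementedError


class CompleteAsyncCommand(sublime_plugin.TextCommand):
    """Like the `complete` command above, but non-blocking: the HTTP call runs
    on Sublime's async thread and the result is inserted when it comes back."""

    def run(self, edit):
        # Use everything up to the cursor as the prompt
        prompt = self.view.substr(sublime.Region(0, self.view.sel()[0].begin()))
        sublime.set_timeout_async(lambda: self.fetch_and_insert(prompt), 0)

    def fetch_and_insert(self, prompt):
        completion = request_completion(prompt)
        # Built-in `insert` command types the text at the current cursor
        self.view.run_command("insert", {"characters": completion})
```
(Once saved as a plugin, the command name is `complete_async`, so it can be bound to e.g. ctrl+j in the user keymap instead of being run from the console.)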
aze#1010: ye it also returns "success" when theres a HTTP error lol StellaAthena#3530: I saw someone tweet “copilot: the world’s most advanced tool for introducing old bugs into new code” which I found pretty amusing aze#1010: this is funny https://cdn.discordapp.com/attachments/729741769738158194/861254523456061490/unknown.png aze#1010: (only written the top comment) StellaAthena#3530: It will likely do better if you give it more to work with aze#1010: i let it generate a few times more aze#1010: ```python # python function that calculates the meaning of life, given the ugliness of a person. import random import numpy as np import os import pymc3 as pm import pprint from pymc3 import Model as m import scipy as sp import matplotlib.pyplot as plt import matplotlib.image as mpimg #initialization from collections import defaultdict
import itertools import unicodedata #possible life choices GENDER = ["male", "female"] #male, female, other, unknown (like 1) ISMASTY = ["smudgy", "brokee", "brukie", "native", "stoosy", "neutral"] PERSONS = ["person1", "person2", "person3", "person4", "person5"] BEDROOMS = ["ground", "1stfloor", "2ndfloor", "3rdfloor", "4thfloor", "ceil", "basement", "attic"] SEX = ["straight", "gay", "bisexual", "lesbian", "asexually", "opression", "omg"] INDEPENDENT = ["no", "yes"] FACTIONS = ["bromance", "hate", "hate2", "hate3", "love", "fun"] AGE = ["18-30", "30-40", "40-50", "50-60", "60-70", "70-80", "80-90"] DRESS = ["black", "blue", "brown", "red", "striped", "gold", "white", "white", "red", "green", "striped", "silver", "jazz", "crazy"] FAMILY = ["single", "married", "couple", "single", "none", "child", "no child", "child", "none", "no child", "parent", "parent", "none", "none"] #default values DEFAULT_LIFE_OUTCOME = 'nono' DEFAULT_LIFE_STYLE = 'old' DEFAULT_LIFE_SEX ='straight' DEFAULT_LIFE_PERSONS = [1, 1]
DEFAULT_LIFE_BEDROOMS = ['ground', '1stfloor'] DEFAULT_LIFE_SEX_INDEPENDENT = "no" DEFAULT_LIFE_SEX_FACTIONS = [1, 1, 1, 1] DEFAULT_LIFE_AGE = [18, 30, 40, 50, 60, 70, 80, 90] DEFAULT_LIFE_DRESS = ['black', 'blue', 'brown','red','striped', 'gold', 'white', 'white','red', 'green','striped','silver', 'jazz', 'crazy'] DEFAULT_LIFE_FAMILY = ["single", "married", "couple", "single", "none", "child", "no child", "child", "none", "no child", "parent", "parent", "none", "none"] def meaningOfLife(person): #define a random number to be used for a purpose #any random number will do #but there must be a randomness to it. random_number = random.random() #define the purpose of the random number random_number_for = 1 random_number_for_is = 1 #if the person is a male if person.gender: if person.gender =='male':
#person is heterosexual random_number_for = 1 random_number_for_is = 1 #if the person is a female if person.gender == 'female': #person is heterosexual random_number_for = 1 random_number_for_is = 1 ``` alstroemeria313#1694: that code is weird aze#1010: wrote top 2 comments https://cdn.discordapp.com/attachments/729741769738158194/861255809035141131/unknown.png CKtalon#7792: seems like copilot is causing a lot of chatter about licensing and fair use. I think this was a case for GPT-X as well? Copyright material goes into training the AI, what it spits out isn't copyrighted CKtalon#7792: if I'm not wrong, The Pile also has copyrighted material in it, right? alstroemeria313#1694: i think so CKtalon#7792: wonder if this Copilot hooha will cause fair-use laws to be accelerated CKtalon#7792: because as of now, it's all a gray area StellaAthena#3530: That’s extremely unlikely, as copilot isn’t fair use (at least, not in the US) CKtalon#7792: well, using it to train an AI is fairuse? what it spits out should be considered 'original'? CKtalon#7792: no matter if it looks like something it copied off some github
CKtalon#7792: because it's literally not copying from github, but from its weights in the model alstroemeria313#1694: if it worked like this you could just "train an AI" i.e. memorize the training set, and copyright launder anything CKtalon#7792: that's where it's a gray area? StellaAthena#3530: This is very unsettled law, but what you’re interested in is not fair use. Fair use is a specific type of exemption with a well defined scope and purpose CKtalon#7792: because the point at which it violates something isn't set in stone yet because legislation hasn't caught up CKtalon#7792: just musing about it alstroemeria313#1694: yeah you prob shouldn't use it without actually checking to see if the code was copied, idk what github is thinking here StellaAthena#3530: In the US, the guideline is >>> In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include: 1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; 2. the nature of the copyrighted work; 3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and 4. the effect of the use upon the potential market for or value of the copyrighted work. alstroemeria313#1694: since they could just hash the entire training set anyway and check vs the hashes alstroemeria313#1694: at least, i think CKtalon#7792: I think a lot of the time 3) is what matters CKtalon#7792: since one isn't expecting to just use copilot's output wholesale? alstroemeria313#1694: however this is very unsettled and the things i am saying are to urge caution, not actual law StellaAthena#3530: For co-pilot the answers are
commercial a commercial product the whole thing unclear, but copilot has the ability to create a direct competitor CKtalon#7792: and then it becomes the owner who will have to do the suing, etc CKtalon#7792: which becomes untenable CKtalon#7792: like what happens if a snippet of code is used so often (which is why the AI memorizes it), that there are different licensing of it across Github CKtalon#7792: and one of them might be the most liberal kind of licensing? aze#1010: dang this is impressive aze#1010: https://cdn.discordapp.com/attachments/729741769738158194/861258292940242944/unknown.png aze#1010: i've only written the function names for all of these aze#1010: im impressed that printRandomHexColor() referenced the createHexColor() it generated earlier CKtalon#7792: it will be impressive if it was a few thousand lines down 😛 aze#1010: sure but still CKtalon#7792: attention is all you need aze#1010: i came up with all of the function names and it comprehended them pretty well CKtalon#7792: i think gpt-3 could already do something similar though CKtalon#7792: so it's not surprising i guess StellaAthena#3530: Do you have a reference for this claim? CKtalon#7792: nope 😛
StellaAthena#3530: It seems pretty unlikely given that GPT-3 wasn’t trained on much code CKtalon#7792: like simple snippets of code, i think it was possible CKtalon#7792: remember seeing people producing demos of giving descriptions and letting GPT-3 come out with simple code CKtalon#7792: it was likely from the paracrawl CKtalon#7792: wasn't that the reason why github was included in The Pile to better produce such phenomenon? spirit-from-germany#1488: Is here someone with experience with the kaggel API and how to start Notebooks remotely? EricHallahan#1051: Never have used Kaggle. nev#4905: :yes: triggerhappygandi#0001: @spirit-from-germany clashluke apparently has done some tpu debugging triggerhappygandi#0001: On kaggle dkpb#3480: Been trying to understand a bit more about what's going in #the-faraday-cage-archive with @BoneAmputee's bot. I looked through the VQGAN paper, but don't quite understand what the advantage is of using discrete codewords rather than using the continuous vectors from the encoder network, and then regressing to embeddings for the decoder (rather than a discrete classification of codewords). Anyone get it? Is it just a regularization thing? Daj#7482: You should ask in #art , I'm sure @alstroemeria313 or someone else has some answers Daj#7482: or at least hunches EricHallahan#1051: The problem is if you push the vectors to far away from the distribution, it performs poorly. EricHallahan#1051: It is *Vector Quantised* for a reason. EricHallahan#1051: If you are talking about the VQGAN paper, then I suggest reading more about VQVAEs. EricHallahan#1051: The purpose of VQ is to act as a compression method. alstroemeria313#1694: the decoder can produce weird artifacty outputs if you feed in codes that aren't in the codebook alstroemeria313#1694: so i don't do this. dkpb#3480: Hm, but it seems like just a traditional autoencoder is already acting like a compression method because the latent space is smaller than the information in the image. So it seem like if the latent dim size is small enough, then can get the same effect without needing to discretize into the codewords. Then you are also able to backprop all the way through easily
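(Editor's note: for concreteness in the codebook discussion above, here is a minimal generic PyTorch sketch of the quantization step being debated, not the actual VQGAN/VQVAE code: encoder outputs are snapped to their nearest codebook entries, and the straight-through trick lets gradients reach the encoder despite the discrete lookup.)
```python
import torch


def vector_quantize(z, codebook):
    """Snap continuous encoder outputs to their nearest codebook entries.

    z:        (batch, n, dim) continuous vectors from the encoder
    codebook: (k, dim) learned embedding table
    Returns the quantized vectors and the discrete code indices.
    """
    flat = z.reshape(-1, z.shape[-1])                  # (batch*n, dim)
    dists = torch.cdist(flat, codebook)                # distance to every code
    idx = dists.argmin(dim=-1).reshape(z.shape[:-1])   # (batch, n) discrete codes
    z_q = codebook[idx]                                # (batch, n, dim) quantized
    # Straight-through estimator: use z_q on the forward pass,
    # but route gradients back to z as if the lookup were the identity.
    # (A real VQ-VAE also adds codebook and commitment losses, omitted here.)
    z_q = z + (z_q - z).detach()
    return z_q, idx
```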
dkpb#3480: thanks, yeah should probably go back and read that paper
dkpb#3480: right, that makes sense. But my question was around why do you have to have the codebook in the first place, rather than just learning the model in continuous space
alstroemeria313#1694: oh, because the model was made to train an autoregressive transformer on the output of
EricHallahan#1051: VQ is simply a different tool for achieving compression.
EricHallahan#1051: It isn't a special concept to VQVAE and VQGAN.
alstroemeria313#1694: the transformer model needs sequences of discrete inputs and the output needs to be a sequence of logits for discrete outputs
dkpb#3480: does it though? What happens if rather than finding logits (so taking the last token embedding, multiplying by dot product of each class embedding, then doing softmax to predict next token), one just takes the last token embedding and then does an MSE regression loss for what you want the next "token" to be
dkpb#3480: the next input is going into a decoder that takes in real values anyways
dkpb#3480: so seems like can regress to a continuous value next token. You don't need actual discrete tokens like you do in language
alstroemeria313#1694: idk, that would be a totally different sequence model type?
dkpb#3480: Yeah, it's still a Transformer (in the sense that its "Attention is All You Need" self attention only). It's just a different loss function for your "next token prediction"
EricHallahan#1051: You absolutely can, but it would likely be more difficult to train.
dkpb#3480: I get though that you probably need some regularization term, and quantization is one form of regularization I guess
dkpb#3480: because the model "gets confused" with the larger space, or because it is somehow more computationally expensive?
EricHallahan#1051: More the larger space.
EricHallahan#1051: The difference in computational complexity is effectively negligible.
dkpb#3480: Hm, interesting. I guess I can sort of see this. But seems like would be a really slow process to separate out the codeword clusters into meaningful clusters since can only "push/pull" on one vector at a time
Sphinx#2092: https://arxiv.org/abs/1812.04616
Sphinx#2092: That is what you want.
Sphinx#2092: There was some initial hype, since in theory it saves you from the softmax computation, but I don't think the line of research went anywhere.
alstroemeria313#1694: Is softmax that expensive for large vocab sizes dkpb#3480: oo, yeah. Similar (at least from reading the abstract). I think this is a little different since there they still want their final output to be fundamentally discrete natural language tokens. However, in this image generation case we don't have that discrete output restriction. It seems fine to have continuous vector outputs since they are going into a decoder that takes continous vectors anyways EricHallahan#1051: That is only true if the the decoder was trained with continuous input vectors. alstroemeria313#1694: yeah, but i assume for this you would train it that way, with means and logvars outputs from the encoder or smth alstroemeria313#1694: And a sampling step before the decoder, and KL loss on the means and logvars dkpb#3480: I don't think it is the softmax that is expensive, but getting all the logits as vocab size gets 50k+ is moderately expensive (?) alstroemeria313#1694: oh, it's just a matmul though? dkpb#3480: yeah, but a moderately big one alstroemeria313#1694: ah dkpb#3480: I mean there are large matmul's all the over the place alstroemeria313#1694: i have not actually profiled the individual ops in my models dkpb#3480: but if you have 2k+ embedding size, and then if you have a very large vocab if like 50k it's a large number of parameters and compute dkpb#3480: yeah, something like that. But I don't know if you need a sampling step since the vector is coming directly from the transformer (but I guess you have to sample the start token) alstroemeria313#1694: well sampling is for information bottleneck alstroemeria313#1694: i... you could just learn the actual autoencoder latent space though dkpb#3480: (idk, I've never really done much practical things with GANs or VAE's so probably not speaking super intelligently here) alstroemeria313#1694: without constraining it as in a VAE alstroemeria313#1694: But you would have to make it smaller. alstroemeria313#1694: VQGAN's is prob too big. alstroemeria313#1694: Guessing here though, would love to see empirical transformer+continuous autoencoder results.
dkpb#3480: make the whole network smaller, or the encoder/decoder latent vector size? alstroemeria313#1694: the latter dkpb#3480: Hm, yeah, not sure. If it is really just a regularlization/make-the-space-smaller thing, I still don't quite get why the discrete-quantization as a regularizer is a better regularizer than the traditional VAE KL thing. But I think I need to read the original VQVAE paper to get the motivation they presented for discrete states a bit better. Also that paper @Sphinx mentioned looks like a interesting read. Thanks everyone! (and sorry for the spam on general. I get now that #art probably would have been a better place) rom1504#5008: I think VQGAN/VQVAE and how their tokens are used in transformers are interesting for this reason: makes it clear that it is possible (and apparently useful) to make discrete an initially continuous space in a good way. I don't have answers as to whether it is required, but the same question can be asked about natural languages words. Humans really want to convey information about the world and about imaginary concepts. Those information are not really discrete initially. So maybe making them discrete as words has some good properties. (Easier to express both in audio and in various written forms maybe ?) alstroemeria313#1694: Well for DALL-E the transformer had both text and image tokens alstroemeria313#1694: It was not image only cfoster0#4356: Yeah I've been wondering a lot of the same things as you have @dkpb cfoster0#4356: I don't think I've seen much done training transformers autoregressively on continuous sequences instead of quantized/discrete ones alstroemeria313#1694: Can you actually sample without output logits somehow cfoster0#4356: You don't *have* to sample cfoster0#4356: Like you could just feed the vector directly back in cfoster0#4356: Or you could do something like a VAE and predict means and SDs alstroemeria313#1694: Ah dkpb#3480: Yes, the "possible" makes sense, but the "useful" part it would be nice to see more evidence of. (which is why asked here and wanted to see if if there was prior work on demonstrating the superiority of the VQ part) Natural languages are operating on a very different set of constraints (audio is channel with limited information density, and physical limits on how fast we can speak otherwise we would fracture our mandible/vocal tissue). So yes, abstraction seems essential to intelligence, but not sure if systems which aren't as constrained by channels need the equivalent of discrete words. But idk... dkpb#3480: Hm, this a good point though which I didn't appreciate. If the only sources of randomness have is sampling of an initial "token vector" or the conditioning, you might end up the transformer settling into repetitive sequences similar to greedy decoding of NL. So would make sense might need extra tricks... uwu1#4864: re VQ* - it also can help avoid mode collapse somewhat. Also, a similar system is sometimes used for tensor dictionary learning, which is used in reconstruction of MRI and CAT scans (although often it's computed just by nonnegative factorization of the data) uwu1#4864: I wonder if that closed form factorization could be useful for "local abstraction"
uwu1#4864: Like a dynamic encoding of a context cfoster0#4356: This work hasn't gotten any follow up that I know of, but may be relevant to the above https://github.com/Gsunshine/Enjoy-Hamburger uwu1#4864: im enjoying Manny96#3437: Hey Gals and Guys! How are we doing? So, new path - don't develop GPT-Neo RL just yet; select low hanging fruit and favour the build of simply training on a dataset, finetune on "Intent Classifications Datasets" — common issue seen in clients. Manny96#3437: Could I please add it to the ideas board? Manny96#3437: Although, I see RL everywhere, aye. Adding RL wouldn't increase parameter size — yet, still, provide exponential returns. Yeah, final, GPT-Neo RL; and train on "Intent Classification Datasets - Dialogue modeling". On the board? Manny96#3437: Don't have space for error. Manny96#3437: Correction, adding RL would increase parameter size of the network; increase accuracy of state representations of features. nshepperd#2316: i sort of figured the main reason for the quantization in vqgan was just that we know how to learn discrete sequences autoregressively EricHallahan#1051: I would expect that to be the reason. nshepperd#2316: you can make an autoregressive continuous model with normal marginals by outputting mean and std from a transformer. but that requires that the marginals are normal. so it's not fully general, unlike a discrete language model Manny96#3437: Poisson Distributed, right Manny96#3437: @UnsupervisedLearner You seemed interested? Manny96#3437: You can have a discrete language model - and compute Poisson Distributed gates (continues), gradients of the loss function. nshepperd#2316: :thonk: UnsupervisedLearner#4148: I'm not much for RL. I was memeing about the decision transformer when you mentioned you could not use GPT-like architecture for markov process nshepperd#2316: one thing you could do is use the output of a transformer to control a normalising flow model. that gets you a pretty much fully general continuous autoregressive model nshepperd#2316: normalising flows are awfully slow though. and it might just suck
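(Editor's note: a rough sketch of the "output mean and std from a transformer" idea nshepperd describes above, hypothetical code rather than anything from an existing repo: the sequence model predicts a diagonal Gaussian over the next continuous latent and is trained with its negative log-likelihood instead of cross-entropy over a codebook.)
```python
import torch
import torch.nn as nn


class GaussianHead(nn.Module):
    """Replaces the usual logits-over-a-codebook head with a continuous one."""

    def __init__(self, d_model, d_latent):
        super().__init__()
        self.proj = nn.Linear(d_model, 2 * d_latent)

    def forward(self, hidden):
        mean, logvar = self.proj(hidden).chunk(2, dim=-1)
        return mean, logvar


def gaussian_nll(mean, logvar, target):
    # Negative log-likelihood of `target` under a diagonal Gaussian
    # (constant terms dropped); this plays the role of cross-entropy here.
    return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).sum(dim=-1).mean()
```
(Sampling the next latent is just `mean + torch.randn_like(mean) * (0.5 * logvar).exp()`, which is exactly where the "marginals must be normal" caveat above comes from.)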
EricHallahan#1051: I've been kicking around the concept of running an LM over Codec2 frames for some time, but I haven't gotten around to doing that. Louis#0144: How expensive would it be to host a “this X does not exist” Louis#0144: For geese EricHallahan#1051: This isn't relevant to #general. Louis#0144: O true Manny96#3437: To be more precise, the gates are Poisson and then marginalisation makes it discreet. Manny96#3437: You can have continues marginalisation. Manny96#3437: @nshepperd Bruce23#6204: @kindiana I'd like to use your https://github.com/kingoflolz/mesh-transformer-jax/blob/master/device_train.py but I fail to download the model onto the TPU because they have very limited disk sizes by default. I tried to a) create the TPU with 500 gb using --disk-size (resulted in errors) b) connected a bucket to it (download aborted after few kb). how did you do it? kindiana#1016: #gpt-j please Bruce23#6204: alright, mea culpa UnsupervisedLearner#4148: What is the best way to find really good colab notebooks on any topic? UnsupervisedLearner#4148: What is everyone's go to? rom1504#5008: Links in Projects docs (for example keras or pytorch), popular GitHub repos, paper with code repos, sometimes twitter sometimes this discord EricHallahan#1051: a) it depends on the hardware/software involved b) this would probably be better suited to #off-topic. chirp#4545: > If AGI is possible soon, how might that happen? ... Perhaps someone develops an app or tool, using a model of GPT-3’s size or larger, that’s a huge productivity multiplier. ... If you code 2x faster, that’s probably 1.5x as much research output. — Alex Irpan from last year (https://www.alexirpan.com/2020/08/18/ai-timelines.html)
Just realized that this scenario has already come true, now that Copilot is out 🤯 chirp#4545: Could Copilot drive a big wave of investment in hardware? Quick estimate — how much hardware is required to run Copilot effectively? Let's say GitHub charges each developer $1 per day. There's maybe 30 million software developers in the world, and if everyone uses Copilot that's $30M/day or **$10B/year (!!)** which is a significant fraction of the total GPU compute market of ~$50B (https://www.nextplatform.com/2019/03/21/the-still-expanding-market-for-gpu-compute/) nev#4905: oh thenightocean#6100: I am still slightly skeptical of how much will Copilot be used in production-grade code. Cause usually you are always constrained by gazillion dependencies and domain code structure so it's a bit hard to add useful code if system doesn't know the larger context. kindiana#1016: if your compiler/interpreter can find the dependences, I'm sure copilot version n+1 can figure it out thenightocean#6100: fair. I haven't tested it so I still wonder about its utility in real-world tasks. chirp#4545: tbh I'm just in shock at my $10B/yr estimate chirp#4545: that is **crazy** for such a simple product chirp#4545: like, at an 100X valuation multiple (totally reasonable for cutting-edge AI like this), that's a $1T company thenightocean#6100: (can it also review other people PR's, and write snarky comments? 😛 Thats what the devs spend a lot of time on these days 😄 ) zoë#1337: > and write snarky comments? I think the quake inverse sqrt thing shows it can at least *memorize* snarky comments chirp#4545: copilot can deal with this to some extent - it is aware of the other code in the same file, so can know class names and such guac#4716: Who is going to pay $365 for a year of copilot usage lol chirp#4545: @guac i think it's pretty reasonable if it increases your productivity even 5% chirp#4545: developers are expensive!
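(Editor's note: a quick back-of-the-envelope check of the figure above; both inputs are the assumptions already stated in the thread, not real pricing or user counts.)
```python
developers = 30_000_000   # assumed worldwide developer count from the estimate above
price_per_day = 1         # assumed $1 per developer per day

revenue_per_day = developers * price_per_day   # $30M/day
revenue_per_year = revenue_per_day * 365       # ~$11B/year
print(f"${revenue_per_day / 1e6:.0f}M/day, ${revenue_per_year / 1e9:.1f}B/year")
```
(Which comes out to roughly $11B/year, consistent with the ~$10B figure quoted.)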
chirp#4545: like, assume developers spend 50% of their time coding, and copilot speeds that up by 10% zoë#1337: people pay $100-200 for jetbrains licenses each year uwu1#4864: is the majority of time coding really the typing though chirp#4545: @uwu1 copilot is more of a substitute for googling zoë#1337: ^ uwu1#4864: compared to debugging/thinking/logging/build system mockery zoë#1337: remembering the api, etc guac#4716: I’ve been using it lately and there’s a lot of subtle bugs it introduces. I don’t see the productivity boost yet. You have to massage it to the right direction of your code space. It might be a decrease in prod for me chirp#4545: also, remember this is literally the very first version of copilot chirp#4545: i'm sure it will get way better uwu1#4864: i wonder if one could make it output code with pre/post conditions at least guac#4716: True. It can only get better thenightocean#6100: yeah but why would you commit to paying it long-term when EAI will probably enable this stuff to be open sourced at some point lol. zoë#1337: especially with the fine-grained interaction data that they'll get from the people using it. like, "they asked for this, this is what they chose out of the 3 completions they gave us, and then they went and changed this word around afterwards" uwu1#4864: this is true .. i feel like this part could almost be refactored out of it though, like LSP autocompletuon/suggestions already do that to some extent zoë#1337: > EAI will probably enable this stuff to be open sourced at some point lol. I hope so! this seems like something where an early lead will give them access to a ton more private interaction data from using the tool and thus attract more users. very susceptible to the network effect uwu1#4864: the autocompletes used in gboard and such are fine-tuned on device with federated learning to at least try to preserve the privacy of what each person typed uwu1#4864: i hope they take similar consideration when fine tuning on what people actually write
uwu1#4864: but it seems unlikely with the size of the model to do that guac#4716: Vscode already comes baked with telemetry I don’t think MS cares much lol zoë#1337: it sounds like it's all service based and it'll be sending all the data back to the mothership for their big server. zoë#1337: and their TOS says they can use the data to improve the service zoë#1337: which is pretty open ended ﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽#8659: hi amigos suh#2879: anyone working on text to 3d model? nev#4905: nice nickname ﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽﷽#8659: yes cfc#2691: https://github.com/lvwerra/trl cfc#2691: Have you guys seen this? cfc#2691: Redirecting the objective function of language models with some arbitrary scoring algorithm. inox#5400: one benefit to copilot could be if it codes homogeneously then everyone converges to similar code styles and less barrier to understanding different projects suh#2879: anyone mind sharing the where they got the model used in the faraday cage? i got a few questions about it Daj#7482: Check the pinned notebooks in #art suh#2879: thank you so much 𓅬 gabriel_syme 𓅬#3220: hi yeah I am but not in the way (I guess) most people think about it 𓅬 gabriel_syme 𓅬#3220: oh wait you mean clip guided models nvm 🙂 yeah check art! suh#2879: no suh#2879: i meant any 3d models
Zac-HD#7996: There's an amazing level of resistance to simple autoformatters like `black`, let alone more comprehensive canonicalisation like my `shed` project. And unlike Copilot they already work, without adding potential legal problems or telemetry questions. inox#5400: true I just started using black with nbqa and pre-commit and why did it take so long to start Ravna#1831: Also if legacy programming languages like C and Javascript were likely to last for another 100 years before, it's even more likely to do so after tools like copilot are out. Even less incentive to improve the abstraction power and semantic rigor of programming languages when tools could alleviate some of the worst symptoms. Zac-HD#7996: You might like the extra tools in https://pypi.org/project/shed/ too then 😉 alstroemeria313#1694: `black` uses double quotes and i use single triggerhappygandi#0001: huh... Never thought about that triggerhappygandi#0001: But if it does have an $10/month fee, I can live with that. As I assume can most devs triggerhappygandi#0001: Doubt it'll reach 30M people though triggerhappygandi#0001: And if that happens, inference servers will take up a shitload of GPUs. triggerhappygandi#0001: anyone here got the access btw? I'm more curious about this than the API Deleted User#0000: jukebox-j :thonk: bmk#1476: it probably doesn't tho bmk#1476: I mean, going off the theory that it's basically gpt3 but tuned on github, it's going to do stuff all over the place Louis#0144: I have never heard anyone complain about black or shed? Louis#0144: @guac Louis#0144: #off-topic triggerhappygandi#0001: Yeah but how do you collect data when everything is copyrighted to hell Ravna#1831: Use only the music before the 20th century. Ravna#1831: It would still be more than enough for now, given that the major bottleneck is still the model size/compute, not the dataset size. triggerhappygandi#0001: Is it? Just slap a 50b transformer with a large vae and you're done. We also have extremely huge supercomputers now.
alembic#5293: mostly an impractical thought experiment, but it would be an interesting application for online learning considering radio/streaming/live performance licensing is less prohibitive
cfoster0#4356: I think the dataset size actually should be the bottleneck
Ravna#1831: I'm not convinced that 50b is nearly enough for passable music. Given that the 3b one on music was much less impressive than the 1.5b GPT2 on text.
triggerhappygandi#0001: :morelayers: just works™
triggerhappygandi#0001: But we won't know if we can't try
Ravna#1831: Is there a "wider layer" emoji too?
triggerhappygandi#0001: What would it look like
triggerhappygandi#0001: :morelayers: but w i d e?
EricHallahan#1051: Who cares about aspect ratio?
EricHallahan#1051: *More Layers are All You Need.*
Kharr#7888: This has to do with encoding more than model size. VQ-VAE + 1.5B GPT model should have no problem generating very high quality music.
AI_WAIFU#2844: I think it's just our models are shit. You shouldn't be trying to predict waveforms using AR. There's transforms you can do to make the problem significantly more tractable.
AI_WAIFU#2844: Music is far less complicated at a high level than language.
Ravna#1831: OpenAI's showcase of sparse transformers, MuseNet, before Jukebox, was done on comparatively abstract music notes, not waveforms. The result was also unimpressive.
Ravna#1831: Don't know how big that one's size was though.
guac#4716: there's way more audio than midi files though lol
Leo Sanders#1157: Hi all, I have a question on GPTNEO, do you have any example code using HF libraries for inference but not using the “pipeline()” of HF? I mean directly using GPTNeoForCausalLM() and GPTNeoConfig()? Whatever info you have will be helpful! 😊
cfoster0#4356: I would compare to Music Transformer, which was very impressive imo and I don't think very big. Trained on MIDI performances
Kharr#7888: Just check out the work by ByteDance AI, speaks for itself
EricHallahan#1051: If you cannot find anything else that helps (unfortunately I don't know of any repos off the top of my head), I suggest just searching GitHub and reading the documentation. `model.generate` shouldn't be too hard to figure out. cfoster0#4356: Got a link? Google search isn't pulling up any music generation stuff from them Kharr#7888: Check out their github (has video/papers/code) https://github.com/bytedance/GiantMIDI-Piano I used their encoder to make a training set for a model downstream which produces pretty stellar music with only 60M params. Leo Sanders#1157: Got it. I was hoping to save some time 😆 Emad#9608: Has anyone seen how/on what Wu Dao 2.0 was trained? 1.75 trillion parameters is quite a lot and not sure there are that many GPUs over in China ex the deluge about to come from crypto being banned there. There are some interesting uses in MoE etc on the official release but still would need a hefty chunk of compute. Emad#9608: Also anyone tried their API? Ravna#1831: None of the existing model uses that many GPUs. No one has seriously made a Manhattan project level of effort so far. No even within 2 orders of magnitude. mgostIH#0245: Is there any work at all being put towards distributed machine learning? mgostIH#0245: Where you have a lot of compute but across many smaller units and with more latency Ravna#1831: Wu Dao 2.0 is not necessarily more hefty than GPT-3, which is itself not a really big project if you compare it to a lot of other scientific endeavors. Ravna#1831: Astrophysics, meteorology, and even chemistry departments abuse their fundings to build huge supercomputers every year in the past two decades. Ravna#1831: I bet GPT-3 is tiny among these projects. inox#5400: maybe the most prominent thing would be something like the openAI evolution strategies project a while ago? There's definitely a whole field there Kharr#7888: One thing to remember is that we're still in the "proof of concept" stage for a lot of these models, including GPT-3. A model you would see in a production environment is a lot more polished and has much better guard rails and tuning. The real cool stuff hasn't happened yet. inox#5400: vision models never reached production that was more polished inox#5400: well maybe that's not a fair comparison Kharr#7888: I'm not sure I agree with that statement, there is technology like what's in Google Lens which is pretty mind blowing if you dig into it and use it. inox#5400: true rocketscienceguy#0662: Are there publicly accessible instances of neo or other content producers I can ping api calls against (hosted on herokuapp perhaps)? Installing github projects that are not my own tends to be a lengthy process for me, and it would be nice to try a codeset out before committing the effort. StellaAthena#3530: @rocketscienceguy if you go to 6b.eleuther.ai you can interact with our largest model on our servers
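(Editor's note: picking up Leo Sanders' question about skipping `pipeline()`, a minimal sketch of the `model.generate` route EricHallahan points at. The public 1.3B checkpoint is used as a stand-in; swap in whichever GPT-Neo checkpoint and generation settings you actually need.)
```python
from transformers import GPTNeoForCausalLM, GPT2Tokenizer

# GPTNeoConfig is only needed if you want to build an untrained model from
# scratch; from_pretrained loads the matching config for you.
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

prompt = "EleutherAI is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.9,
    max_length=100,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```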
mega b#6696: https://bellard.org/textsynth/ is great for fast responses cfoster0#4356: What kind of code/help? We typically don't do a lot of tech troubleshooting here, except for stuff regarding the GPT-J 6B model in #gpt-j cfoster0#4356: Hmm. I guess #off-topic cfoster0#4356: Just be warned someone might bite your head off lol Louis#0144: 👀 Sinon#7923: 😦 Sinon#7923: We should build a microwave that dosen't nuke your stomach when pressing the wrong button Sinon#7923: idk how Sinon#7923: maybe like what microsofts so called 'windows update learning system' Sinon#7923: like it adapts to your health and warms it up with a voice command Sinon#7923: cause buttons are tricky Sinon#7923: I am a coder not cook One#5919: https://arxiv.org/abs/2106.10207 rocketscienceguy#0662: thanks for the referrals, @StellaAthena and @mega b Fessus#9563: Do >1D dataset types (i.e. 2D Images) cause suboptimal behavior with rotary embeddings? Seems like the usual process of just converting the image to a sequence breaks some of the relative position assumptions cfoster0#4356: I dunno if folks have tried using it straight out the box with >1D dataset types, instead of splitting the embedding into D parts and applying rotary to each dimension separately Zac-HD#7996: It's usually more "I don't like autoformatting" in general; see eg https://discuss.python.org/t/pep-proposal-automatically-formatting-the-cpython-code/5603 for CPython core devs getting nowhere. -Archivist#7336: could someone give me the skinny on image classification models, where we're at with them etc. I have upwards of 4 billion random ass images/photos I'd like to do something with Sid#2121: what do you want to know exactly rom1504#5008: you could decide to compute clip embeddings of them. That'd allow you to do english search on them, image search and also zero shot classification (classify in any label in english)
it's cheap enough to do. unlike training some new models, for that you'll need a bunch of gpus/tpus for 4B samples alstroemeria313#1694: the easiest way to classify images these days is to use CLIP zero-shot classification probably alstroemeria313#1694: you literally just specify the classes as English text -Archivist#7336: label them, build a database I can search for 'cat' and it return the pictures including cats for example. at this point it's just 4 billion images I have no real use for and obviously going over them manually to find things is a none-starter of impossibility -Archivist#7336: reading over clip specs it would seem that's best option, hmmmmmmm CRG#8707: Yeah, CLIP should also let you search by image similarity (like this: <https://share.streamlit.io/thoppe/streamlit-clip-unsplash-explorer>) -Archivist#7336: what's the local setup for this, everything I'm seeing is running on google colab shit which may as well be a blackbox to me 🤷🏼‍♂️ 𓅬 gabriel_syme 𓅬#3220: Any idea if a larger model (e.g. gpt2-large vs gpt2-mini) would have higher loss when finetuning on a dataset? I would imagine it's more sample efficient? Does context length impact this (I had to use a 4x smaller to fit it)? That is the only difference I think I have in the training. Sid#2121: @-Archivist is that you in your pfp Sid#2121: I can try and build an example for you a little later - basically you want to precompute embeddings for each image from clip - then when you search with a word get the top k most similar embeddings (after embedding the tokens with some LM). Over billions of images I'm not sure what sort of times we'd be talking about here - but gpu would be faster than cpu -Archivist#7336: yes Sid#2121: you are a fuccboi, sir -Archivist#7336: thanks, I can run this locally on 4x rtx titans -Archivist#7336: my wife... okay, valid 🤷🏼‍♂️ Sid#2121: lmao i'm joking. I just honestly thought you'd look like a shadowy figure in a trenchcoat with no face -Archivist#7336: most people do -Archivist#7336: granted that image is about 18 months old and I'm now bald due to surgery and the beard is much less shitty Daj#7482: You look literally just like my brother and that freaks me out :berk: -Archivist#7336: hey bro Sid#2121: did you have surgery to transfer your head hair to your face
-Archivist#7336: yes, that and tumour removal from brain Sid#2121: I assumed it was something like that, and now i feel terrible. Hope you're ok now man. -Archivist#7336: all good, still testing neurologically abnormal but physically all good for now 🤞🏼 Louis#0144: step bro what are you doing with my ftp server rom1504#5008: clip inference is about 2k sample/s on a good gpu, so that's `4*10^9/(4*2000)/3600/24 = 6 days` on these 4 gpus for 4B pictures, pretty reasonable rom1504#5008: something as simple as https://github.com/rom1504/clip-retrieval/blob/main/clip_batch.py can do the trick (modulo you change the data loader to work with whatever format you have) rom1504#5008: and this https://github.com/criteo/autofaiss can be used to produced efficient indices from the embeddings. (for 4B elements you'd still need like at least 200GB of ram to hold the index even with a highly quantized index though) ; just an easier way to use faiss rom1504#5008: I can almost guarantee the resulting demo will be super cool though. knn indices have < 10ms of latence so it's real time like google image, and you can query very specific things with clip rom1504#5008: (and once you got the embeddings as said above you can not only build knn search from text and image, but also zero shot classification, and why not also easily produce subset of images using text queries) -Archivist#7336: Ahh that's much better than I was expecting, anything under 3-4 weeks I can live with given power cost of running these things. cc @Sid I'd still also like your input when you get time, no rush alstroemeria313#1694: you are going to need a bunch of worker processes to load and uncompress the images alstroemeria313#1694: and may actually end up limited by this rom1504#5008: that totally depends on what kind of storage he got, if he mounts a big distributed system locally on the 4 gpu machine, he can literally just run the script above rom1504#5008: if the images are in a tar you need to add tar streaming alstroemeria313#1694: and number of cpus -Archivist#7336: 4b images stored as is (no compression) on a local (in chassis) zfs array (data size is uncalculated, stopped counting at 2PB) 2TB ram, dual epyc 7702p cpu and the titans, though I've only ever ran 2xgpus in this machine I'm not actually sure I can get all 4 in there, will have to check that out if I want to run it all contained on this machine another thing I'm unsure about, given source the extensions are wrong on the images, everything is `.jpg` gifs,pngs are the other extensions so I'll need to do some presort and remove the gifs most likely but is there any consideration to be made jpg/png wise with clip?
alstroemeria313#1694: i think Pillow (the imaging library most PyTorch stuff uses) just autodetects the file type anyway alstroemeria313#1694: also the gifs should be ok alstroemeria313#1694: unless they're animated or smth, it might just look at the first frame alstroemeria313#1694: or transparent -Archivist#7336: I have no reason to label the gifs really, they're all animated gifs and random meme shit so those can go. the files are straight though so even unix `file` is fine at identifying them via headers rather than using pillow to quantify alstroemeria313#1694: ah Sid#2121: maybe best to do some preproc step just getting rid of corrupt files then Sid#2121: and/or gifs -Archivist#7336: agreed alstroemeria313#1694: sometimes files can be corrupt in such a way that Pillow fails to open them but other stuff succeeds, or vice versa alstroemeria313#1694: so you should filter for corrupt files with Pillow itself Sid#2121: yep exactly -Archivist#7336: I'm going to sub sample 500k images at random for testing before I go all in on the what will be left of the 4b with gifs removed alstroemeria313#1694: (or just catch the exception Pillow raises and ignore the file) -Archivist#7336: fair 👌🏼 rom1504#5008: I'd advise to consider resizing first to 224 -Archivist#7336: ? alstroemeria313#1694: CLIP's input size is 224x224 rom1504#5008: clip doesn't do anything with higher resolutions and it will make your life easier EricHallahan#1051: Why is 224 the standard anyway?
Sid#2121: since we might want to do other things with the data than run it through clip - best to just do the resizing on the fly alstroemeria313#1694: Its training set was prepared by resizing the images so their short edge was 224 and then center cropping them. EricHallahan#1051: Cursed because it is not a power of two. -Archivist#7336: ahh shit, that's going to be quite a task, as in time spent resizing everything then, and I'm going to want to keep the originals too soooo damn xD alstroemeria313#1694: you can just resize when you feed them to CLIP bmk#1476: pretty sure 224 came from the size of the VGG nets rom1504#5008: my point is if you resize on the fly, then this is likely going to be the bottleneck alstroemeria313#1694: early models were trained on imagenet resized to 256x256 and then random cropped to 224x224 bmk#1476: and then everyone just used it for backwards compat -Archivist#7336: ahh so it can do this on the fly without nuking originals? or something I'm going to have to add to the whole process and clip scripts I'll be running? EricHallahan#1051: So why maintain the trend? alstroemeria313#1694: yes you resize them in memory with pillow after loading -Archivist#7336: sound alstroemeria313#1694: i guess so results were comparable EricHallahan#1051: I guess that is valid, but it feels like we are holding a constraint just because rather than it being a true constraint. rom1504#5008: random big images will probably be like 1MB of size, whereas 224x224 is around 20KB so 20KB per image means "only" 80TB which will be much faster to ingest by the clip inference, and I guess is "small" to store for you (without deleting originals) alstroemeria313#1694: also for a ViT you use one token per patch and then one extra class token and a 256x256 would make the input token number a power of two plus one alstroemeria313#1694: idk if this was the exact reason alstroemeria313#1694: for CLIP
alstroemeria313#1694: ...i guess not since there were some 14x14 patch CLIPs alstroemeria313#1694: And this is 256 patches for a 224x224 input. bmk#1476: I propose a compromise solution, what if we go halfway in between - so, 240 -Archivist#7336: I'm glad I'm not just nuts thinking this was possible and you've all had input telling me that it's actually possible I just need to learn how to run some pre-exiting scripts 🙂 alstroemeria313#1694: i think the actual reason is just that the ViT people used 224x224 because they were comparing to ImageNet trained convnets and then OpenAI took the arch straight from the paper -Archivist#7336: so thanks guys ari#9020: Last I played with stylegan (which uses PIL but is probably pretty lazy about it), getting to `convert -resize 256x256\! -strip -background white -flatten -alpha off` in my preprocessing script was enough to get it to not constantly break... except on indexed-mode and greyscale images rom1504#5008: something like https://github.com/rom1504/kaggle-fashion-dalle/blob/main/resize.py can resize fairly fast to whatever resolution assuming you got some CPU cores rom1504#5008: (it's not doing anything particular special, you can ofc use anything else) alstroemeria313#1694: if you do `.convert('RGB')` on the Pillow image in memory you do not have to do a lot of this stuff alstroemeria313#1694: and it also handles indexed and grayscale alstroemeria313#1694: and CMYK alstroemeria313#1694: and other weird formats i have seen in actual datasets -Archivist#7336: that's another mostly unrelated question I've got, what's the deal greyscaling of images, I've often seen this when reading about AI/ML shit, what's the deal with that? alstroemeria313#1694: grayscaling? -Archivist#7336: yeah, often I see examples where all the images are black and white, or greyscale 🤷🏼‍♂️ alstroemeria313#1694: oh alstroemeria313#1694: it just requires less resources to train probably -Archivist#7336: ahh -Archivist#7336: okie dokie then
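(Editor's note: pulling the preprocessing advice above into one place, a hedged sketch of the per-image logic being described, assuming the openai/CLIP package and its `clip.load`; batching, multi-GPU work, and file iteration are omitted. It skips files Pillow can't open, does `.convert('RGB')`, lets CLIP's own preprocess handle the 224x224 resize in memory, and stores unit-norm embeddings so a text query reduces to a dot product.)
```python
import torch
import clip                      # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)


def embed_image(path):
    """Return a unit-norm CLIP embedding for one image, or None if unreadable."""
    try:
        img = Image.open(path).convert("RGB")   # handles indexed/greyscale/CMYK
    except Exception:
        return None                             # corrupt file: just skip it
    with torch.no_grad():
        x = preprocess(img).unsqueeze(0).to(device)   # resize + crop to 224x224
        feat = model.encode_image(x)
    return (feat / feat.norm(dim=-1, keepdim=True)).cpu()


def embed_text(query):
    with torch.no_grad():
        tok = clip.tokenize([query]).to(device)
        feat = model.encode_text(tok)
    return (feat / feat.norm(dim=-1, keepdim=True)).cpu()


# Search is cosine similarity between the query and the stored image embeddings.
# image_feats: (N, 512) tensor built by stacking embed_image() results.
# scores = (embed_text("a photo of a cat") @ image_feats.T).squeeze(0)
# top = scores.topk(20).indices
```
(For billions of images you would batch this, spread it across the GPUs, and hand the stacked embeddings to faiss/autofaiss for the actual index, as rom1504 suggests; the snippet is just the per-image logic.)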
alstroemeria313#1694: 1/3 the data rom1504#5008: black and white is 1 bit whereas RGB is 3 bytes (24 bits) alstroemeria313#1694: if you make your data 1 bit you can treat the continuous output of the net as p(white) alstroemeria313#1694: this was a common thing back in the early days of variational autoencoders i think alstroemeria313#1694: but now ppl just output 3 location channels (one for r, g, b) and 3 scale channels, or smth alstroemeria313#1694: or they just do 3 loc channels and assume fixed scale alstroemeria313#1694: you mean 24 bits? alstroemeria313#1694: though in practice it is three floats rom1504#5008: yeah I got one 8 factor too much rom1504#5008: but indeed with floats, it's going to be x4 bigger axiom#3599: we got plans to mirror openAI’s codex? i wanna run the universal dovetailer bmk#1476: ghpy axiom#3599: :Amethinking: Daj#7482: You keep namedropping ghpy and never explain what it is or provide links lol alstroemeria313#1694: @Daj https://huggingface.co/models?search=ghpy alstroemeria313#1694: isn't it this Daj#7482: It is, it's just a tad rude towards beginners alstroemeria313#1694: the ones trained on **g**it**h**ub **py**thon alstroemeria313#1694: ah alstroemeria313#1694: ...there are more there than there were when i looked last?
StellaAthena#3530: yeah Leo keeps making it larger
alstroemeria313#1694: ahh
bmk#1476: i sometimes provide the link when i can get it easily
PsiClone#8758: I am getting this error and i have no idea where to post this https://cdn.discordapp.com/attachments/729741769738158194/862399820551421992/unknown.png
Sid#2121: try using https://bellard.org/textsynth/
Sid#2121: our API has been absolutely hammered lately and Ben's busy
PsiClone#8758: ahh okay, i am trying to write a story, and last time really had good results
Kharr#7888: Maybe try using the Colab notebook?
PsiClone#8758: i am not a coding guy...so if u ask me to do something out of my scope...i am going to get a panic attack!!
mr_seeker#1337: NovelAI, KoboldAI, all good to make stories.
StellaAthena#3530: > Stella: I just saw confirmation from GitHub that MSFT’s Copilot was trained on all of GitHub with no respect for licensing or copyright
> @ethan caballero “Commercial use” is different than commercial AI training. Does there even exist any license that says "AI is not allowed to train on me"? For copyright issues, the output of model can just be filtered if n-gram overlap with non-commercial use repo(s) is too high.
StellaAthena#3530: In the US, this is completely up in the air. The closest applicable case law is Authors Guild v. Google, the case about Google Books. But the facts under consideration here are incredibly different, and I would not take that case as an answer to this one
StellaAthena#3530: Like @Daj said, this well may be the case that determines the issue in court in many countries
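(Editor's note: for the "filter if n-gram overlap is too high" idea in the quoted reply, a toy sketch of what such a filter could look like; the whitespace tokenization, n-gram size, and threshold are all made-up illustrations, not anyone's actual filter.)
```python
def ngrams(tokens, n=8):
    """Set of all length-n token windows in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def overlap_fraction(completion, corpus_ngrams, n=8):
    """Fraction of the completion's n-grams that also appear in the reference corpus."""
    comp = ngrams(completion.split(), n)
    if not comp:
        return 0.0
    return len(comp & corpus_ngrams) / len(comp)


# Usage sketch: corpus_ngrams is precomputed once over the restricted repos.
# if overlap_fraction(generated_code, corpus_ngrams) > 0.2:
#     discard or flag the completion instead of showing it
```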
Parker#3197: that's on the software not the data. nvidia also uses some kind of non-commercial license Parker#3197: but, using the research (as long as you aren't using their copyrighted software implementation) for other for-profit stuff is usually fine. (as long as it isn't covered under patent) cfoster0#4356: It's both the software and the data Daj#7482: MPI is a german government funded agency Daj#7482: boo Daj#7482: This is public money cfoster0#4356: >>> Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use the SMPL Software/Data, (the "Software"). Parker#3197: yeah, but I think the data may still fall to fair use law (I am not a lawyer obviously) it probably isn't a very good idea to go against what someone else wants without consulting a lawyer, but my point is to say that it **it could** fall to fair use depending on what you're trying to do with the data cfoster0#4356: I'd be fine if they didn't put restrictions on non-commercial uses and distribution Parker#3197: there's been issues with creating data licenses in a gpl form because of that problem (and other problems) Parker#3197: (there's some interesting discussions about it online) Parker#3197: and there's also been issues in the past with trying to patent algorithms. SVM (support vector **machines**) they had the wording of machine because they were trying to get around patent law that prevented patenting algorithms (but I think they did fail) alembic#5293: lol well that explains that awfully strange name bmk#1476: time to call LMs Language Machines zphang#7252: reminds me of the history of the term "dynamic programming" bmk#1476: or Entropy zitterbewegung#4846: This seems like a good dataset to add to the pile https://www.kaggle.com/andrewmvd/automatic-slide-generation-from-scientific-papers is there a channel for the pile? cfoster0#4356: There was but it's archived now StellaAthena#3530: We stopped adding things to the Pile about 9 months ago. But yes, it's an excellent dataset bmk#1476: the dataset is tiny